The Ruling

United States v. Heppner — A CEO used Anthropic's Claude to analyze his legal strategy during a federal investigation. The court held that none of the 31 AI-generated documents was privileged, on two independent grounds: (1) an AI tool is not an attorney, and (2) consumer AI terms of service destroy any reasonable expectation of confidentiality.

What Happened

Bradley Heppner, the former CEO of GWG Holdings, was indicted on fraud charges in SDNY. While working with his defense attorneys, he used the consumer version of Claude to synthesize defense strategy and draft responses to the government's evidence.

When the FBI executed a search warrant, they found the AI-generated documents on his devices. The defense team logged them as privileged. The government challenged that claim — and won.

Why Privilege Failed

Judge Rakoff's reasoning was straightforward, applying existing privilege principles to a new technology:

Two Independent Grounds
  • No attorney-client relationship. An AI tool is not a lawyer. Communications between a user and an AI platform cannot satisfy the fundamental requirement that privileged communications occur between a client and their attorney.
  • No expectation of confidentiality. Consumer AI terms of service explicitly allow the provider to review prompts, retain data on its servers, and disclose information to government authorities. The court compared it to discussing legal strategy in a crowded elevator.

The court also rejected the work-product argument. Because Heppner used the AI tool on his own initiative — not at the direction of his attorneys — the documents did not reflect counsel's strategy and did not qualify for protection.

What the Court Left Open

Judge Rakoff's written opinion explicitly noted that enterprise AI tools with contractual confidentiality guarantees may present a different analysis. The ruling targeted consumer platforms where the terms of service permit third-party review — not private AI systems where data stays within the organization's infrastructure.

The court also acknowledged that under the Kovel doctrine, AI used at counsel's direction with appropriate confidentiality expectations could potentially be treated differently. The path forward isn't to avoid AI — it's to use it under the right conditions.

Why This Matters Right Now

This ruling isn't theoretical. It has immediate, practical consequences for every law firm:

The Reality
  • Your attorneys are already using AI. Recent surveys find that roughly 31% of legal professionals use generative AI at work, and many are doing so through consumer tools without firm oversight.
  • "Shadow AI" is a privilege risk. Every unsupervised use of ChatGPT, Claude, or Gemini on client matters is now a potential privilege waiver under this ruling.
  • Opposing counsel will look for this. Litigators will now routinely request AI prompts and outputs in discovery. Privilege logs mentioning AI are a red flag.

What Firms Should Do

The answer is not to ban AI. Your competitors won't, and the productivity gains are too significant to ignore. The answer is to use AI in a way that preserves privilege:

The Path Forward
  • Block consumer AI on firm devices. If a tool is accessible, it will be used. Work with IT to restrict unapproved platforms.
  • Provision enterprise-grade tools. The court's reasoning points toward AI systems with contractual confidentiality guarantees, where data is not shared with third parties.
  • Establish a lawyer-in-the-loop. The court noted that counsel-directed AI use could qualify for work-product protection under Kovel. Require legal department supervision for any AI use on client matters.
  • Insist on hardware-attested isolation. AMD SEV-SNP memory encryption and the NVIDIA H100 trusted execution environment now make confidentiality verifiable at the hardware level: instead of trusting a vendor's terms of service, you rely on cryptographic attestation to prove that your data is sealed. A sketch of what that check can look like follows this list.
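
What does "verifiable at the hardware level" look like in practice? The sketch below, in Python, shows the kind of policy check a security team might run over a parsed SEV-SNP attestation report: the report must be bound to a fresh nonce, carry the expected launch measurement for the approved enclave image, and have debug access disabled. The dictionary layout, field names, and EXPECTED_MEASUREMENT value are illustrative assumptions rather than a real report format, and a production verifier must also validate the report's signature chain (VCEK to ASK to ARK) against AMD's published root certificates before trusting any field.

```python
import hashlib
import secrets

# Illustrative "golden" launch measurement for the approved enclave image,
# obtained out of band from the build pipeline (placeholder value).
EXPECTED_MEASUREMENT = "00" * 48

def verify_snp_report(report: dict, nonce: bytes, expected_measurement: str) -> bool:
    """Policy checks over an already-parsed SEV-SNP attestation report.

    A real verifier must first validate the report's signature chain
    against AMD's root certificates; these checks assume that has passed.
    """
    # Freshness: report_data should echo a hash of the nonce we supplied,
    # proving the report was generated for this specific request.
    if report.get("report_data") != hashlib.sha512(nonce).hexdigest():
        return False

    # Identity: the launch measurement must match the approved image.
    if report.get("measurement") != expected_measurement:
        return False

    # Policy: debug access must be disabled, or the host could inspect memory.
    if report.get("policy", {}).get("debug_allowed", True):
        return False

    return True

# Toy end-to-end check: the verifier picks a nonce, the enclave embeds its
# hash in report_data, and the verifier applies the checks above.
nonce = secrets.token_bytes(32)
toy_report = {
    "report_data": hashlib.sha512(nonce).hexdigest(),
    "measurement": EXPECTED_MEASUREMENT,
    "policy": {"debug_allowed": False},
}
assert verify_snp_report(toy_report, nonce, EXPECTED_MEASUREMENT)
```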

Where Rendex Fits

We built Rendex specifically for this problem — before this ruling made it front-page news.

Rendex is a private AI system that runs inside a dedicated Azure confidential virtual machine, sealed by AMD SEV-SNP memory encryption and the NVIDIA H100 trusted execution environment. When an attorney asks a question about your documents, the query, the AI processing, and the answer all stay inside the enclave. Not even Microsoft can read data in the enclave — the hardware itself enforces confidentiality, not terms of service.

Every answer cites the exact source document, section, and page number. Every query is logged to an append-only audit trail. Role-based access and matter-level permissions enforce ethical walls at the query layer. Your security team can verify the enclave seal via cryptographic attestation reports.
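
The "append-only" property is typically achieved by hash-chaining log entries, so that editing or deleting any past record breaks every hash that follows it. The sketch below is a minimal illustration of that technique, not Rendex's actual logging code; the field names are assumptions.

```python
import hashlib
import json
import time

def append_entry(log: list[dict], user: str, matter: str, query: str) -> dict:
    """Append a query record whose hash covers the previous entry's hash,
    so any later tampering with earlier records is detectable."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "user": user,
        "matter": matter,
        "query": query,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edited or removed entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if entry["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

# Usage: record a query, then confirm the chain is intact.
audit_log: list[dict] = []
append_entry(audit_log, "jdoe", "matter-0042", "summarize deposition exhibits")
assert verify_chain(audit_log)
```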

This is the architecture the court's reasoning points toward: AI with contractual and hardware-enforced guarantees that no third party can read the data.

United States v. Heppner, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 10, 2026) (oral ruling); written opinion issued Feb. 17, 2026.

This article is provided for informational purposes only and does not constitute legal advice. Consult qualified counsel for guidance specific to your firm's circumstances.

Private AI built for security and compliance review

Rendex runs inside a hardware-sealed Azure confidential enclave. Every answer is cited. No data egress required after setup. See it with your documents.

Matthew Giordano
Founder, Rendex Systems — info@rendex.inc