Alert
AI and Attorney-Client Privilege: Southern District of New York Holds Free Public AI Use Not Protected
February 20, 2026
Authors
Richard J. Rosensweig
Director, Boston
rrosensweig@goulstonstorrs.com | +1 617 574 3588
Richard M. Zielinski
Director, Boston
rzielinski@goulstonstorrs.com | +1 617 574 4029
As artificial intelligence (AI) adoption accelerates, so too does the number of headlines highlighting the new legal risks these tools present. A recent Southern District of New York decision brings the seriousness of those risks into sharp relief for law firm clients who use AI tools without considering the legal implications. In United States v. Heppner, No. 1:25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026), Judge Rakoff held that documents a criminal defendant created using a free public AI license are protected by neither the attorney-client privilege nor the work product doctrine.
In Heppner, the defendant, prior to being charged with securities fraud and other financial crimes, used Anthropic’s Claude AI to assess the facts of his case and to prepare reports outlining which charges he might expect to face and which arguments he might employ in his defense. He input information he had learned from his attorneys and later shared the resulting outlines with them. When prosecutors sought the inputs and the AI-generated materials, defense counsel asserted attorney-client privilege and work product protections.
Attorney-Client Privilege
The court concluded that the privilege did not apply because Claude is not an attorney, the communications were not confidential, and they were not made for the purpose of seeking legal advice.
First, the court emphasized that the attorney-client privilege requires a relationship between a human attorney and a client, and that interactions with AI software do not satisfy that requirement. Second, Judge Rakoff pointed out that asserting the privilege requires a reasonable expectation of confidentiality in the attorney-client communications, an element missing from most free public AI platforms. Under those platforms’ written privacy policies, users typically agree that their inputs and the resulting outputs may be used to train future AI models and may be shared with third parties. Finally, because AI tools expressly disclaim that they provide legal advice, and because the defendant in Heppner chose to use the tool independently of his counsel, the court rejected the argument that Claude provided him with legal advice.
Work Product
Judge Rakoff also held that the materials were not protected by the work product doctrine because they were not generated at the direction of counsel in anticipation of litigation.
The work product doctrine protects attorneys’ mental impressions, legal theories, and proposed litigation strategies. It also covers materials prepared by agents of the attorney. Here, because the defendant took it upon himself to use AI without consulting his attorneys, neither the inputs nor the outputs related to counsel’s preparation of the case. The court’s analysis therefore suggests that unilateral use of AI, even in anticipation of litigation, will not produce work-product-protected materials absent attorney direction.
Takeaways
As the law around AI use develops and courts apply traditional privilege rules in this new context, clients and their attorneys should keep the following suggestions in mind:
1. Enterprise AI models may be the most reliable way to preserve privilege. Heppner signals that not all AI platforms are created equal when it comes to privilege. Using free public AI models, which are often governed by standard click-through terms, risks waiving the attorney-client privilege because confidentiality is not maintained. In contrast, enterprise AI models (paid subscriptions with negotiated confidentiality terms, strict data controls, and no-training commitments) offer a stronger basis to argue that confidentiality, and therefore privilege, has been preserved. These enterprise plans are generally available to organizations and individuals that enter into direct agreements with AI developers. Although they can be expensive, clients working with sensitive or privileged information should consider using only enterprise AI models whose data is subject to contractual data-use restrictions.
2. Clients should use AI tools only at the direction of their attorneys. Independent, unilateral AI use by clients undermines both privilege and work product claims. When attorneys direct and structure the use of AI, particularly in anticipation of litigation, it is easier to argue that the resulting communications were prepared as part of a legal strategy and reflect attorney work. Clear oversight reduces the risk of inadvertent waiver and strengthens the assertion that AI-assisted materials constitute protected work product.
3. Clients should exercise caution and assume that inputting even privileged information into AI systems jeopardizes protection. Until courts further develop the law governing AI use, privilege, and work product protections, clients should treat AI prompts and outputs as potentially subject to disclosure. Limiting AI use to enterprise-licensed environments and keeping sensitive information out of prompts is the surest way to keep critical privilege and work product protections in place.
Please contact your usual Goulston & Storrs attorney or a member of our Law Firm Defense Group with any questions about the information in this alert.