Friday, April 10, 2026

Do Not Paste Counsel’s Advice Into AI

By Scott Coghlan and Dylan Brown*

HR professionals and managers use generative AI every day to rewrite emails, summarize employee complaints, organize investigation notes, compare discipline options, and draft talking points for difficult conversations. That convenience creates a serious problem when someone drops counsel’s analysis, a draft response, witness summaries, or internal facts about a workplace dispute into an AI tool.

Once HR or management does that, the company may disclose privileged information to a third party, create a new discoverable record, and store sensitive strategy on a platform the company does not control. Put differently, if HR or management takes legal advice from counsel, copies it into an AI tool, and asks the tool to summarize it, rewrite it, or analyze it, the company may have just undercut the very privilege that protected that advice in the first place. That is the point employers need to understand: AI is not a secure extension of counsel, and it should not become the place where the company recycles attorney communications for a “quicker” answer.

AI Can Turn Routine HR Activity Into Discovery Material

The law in this area is still developing, and the early cases show why employers cannot rely on any clear or uniform rule. Courts have started applying ordinary privilege and work-product principles to AI use, but they have not applied those principles the same way in every case. That uncertainty creates its own risk. Employers do not know how the next court will treat a prompt, an output, or an AI-generated summary that includes legal advice or sensitive employment information.

HR or management creates that risk when it takes communications or documents from counsel, or facts gathered for counsel, and feeds that material into an AI tool for a summary, rewrite, or analysis. At that point, the employer may have disclosed privileged legal advice to a third party, weakened any claim that the communication remained confidential, and created a new record of the company’s legal strategy that an adversary may later be able to obtain.

In United States v. Heppner, No. 25 Cr. 503 (JSR), 2026 U.S. Dist. LEXIS 32697, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), a federal court held that exchanges with an AI platform were not protected by attorney-client privilege or the work-product doctrine. The defendant used the platform on his own, without counsel’s direction, and the platform’s terms allowed the provider to collect, retain, and disclose user data. On those facts, the court found no protected attorney-client communication, no reasonable expectation of confidentiality, and no work product that counsel prepared or directed.

In Warner v. Gilbarco, Inc., No. 2:24-cv-12333, 2026 U.S. Dist. LEXIS 27355, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026), another federal court reached a different result. There, the court protected AI-assisted material under the work-product doctrine because the plaintiff was proceeding pro se and was effectively acting as her own counsel in preparing for litigation. The court also rejected the argument that using ChatGPT automatically waived work-product protection. But Warner did not create blanket protection for AI use. It turned on its own facts, and it addressed work product, not a broad safe harbor for privilege.

Taken together, those decisions send a clear message. Courts will not treat AI as a special zone with special protections. They will look closely at how the user employed the tool, whether counsel directed the use, what type of protection the party claimed, what the platform’s terms allowed, and whether the circumstances supported a real expectation of confidentiality. That fact-specific approach gives employers no straight path and no room for casual use of AI with sensitive workplace material.

That risk matters in employment cases because HR documents often decide the case. A prompt asking AI whether a complaint sounds like retaliation, whether a termination looks defensible, how to explain a pay disparity, or how to answer an employee’s lawyer (or the company’s own counsel) can become harmful evidence. The problem gets worse when employees use AI notetakers or transcription tools in investigations, discipline meetings, accommodation discussions, or calls with counsel. Those tools can create searchable transcripts, summaries, and action items that expand the evidentiary record and complicate privilege, privacy, and consent issues.

Employers Need Guardrails Now

Employers should draw a bright line now. HR personnel, supervisors, and executives should not paste legal advice, draft attorney communications, draft attorney documents, investigation notes, interview summaries, proposed discipline, termination rationales, severance terms, or other sensitive material into AI tools. Employers should also keep AI notetakers and transcription bots out of privileged or sensitive meetings unless the company has approved the tool, reviewed the vendor terms, addressed consent requirements, and set strict controls on retention, access, and use.

The problem here is not futuristic. It is already sitting in inboxes, meeting invites, and browser tabs. A manager who runs counsel’s advice through AI for a quicker summary may waive privilege. A note-taking bot in a sensitive HR meeting may create a transcript the company never wanted. A well-meaning employee who uploads internal compensation, investigation, or performance information may create a discovery fight the company cannot undo. Because courts have not offered a single clear answer, employers should follow best practices now: update AI-use policies, train HR and management, restrict AI use for sensitive employment matters, and route legally sensitive workplace issues to counsel before a routine prompt becomes Exhibit A.

*Scott Coghlan chairs Zashin & Rich’s Workers’ Compensation Practice. He has more than 20 years of experience representing employers in all areas of workers’ compensation law, including administrative proceedings, premium rating disputes, and appeals all the way up to the Ohio Supreme Court. Dylan C. Brown represents public and private employers in all facets of labor and employment law.