By Jeffrey T. Donner, Esq.
A recent ruling has drawn applause from a surprising corner of the bar: lawyers and commentators celebrating the proposition that prompts entered into an AI platform are not protected by the work product doctrine. Some courts have even required disclosure of whether lawyers used AI in their research.
That reaction is misplaced. It reflects a failure to understand both the purpose of the work product doctrine and the functional reality of modern legal research.
This is not about attorney-client privilege. No serious lawyer argues that an AI platform is “the client” (or “the attorney”) or that prompts to a third-party system are privileged communications. The question is narrower and more important: are the research inputs, queries, and strategic framing choices of counsel protected as work product?
Under established law, they should be.
I. The Work Product Doctrine Protects Mental Impressions and Litigation Strategy
The work product doctrine, first articulated in Hickman v. Taylor, 329 U.S. 495 (1947), and codified in Federal Rule of Civil Procedure 26(b)(3), protects “documents and tangible things” prepared in anticipation of litigation. More importantly, it affords heightened protection to an attorney’s “mental impressions, conclusions, opinions, or legal theories.”
That second category, often called opinion work product, receives near-absolute protection. Courts have repeatedly recognized that forcing disclosure of counsel’s thought processes undermines the adversarial system.
Research queries are not random keystrokes. They reflect how counsel frames issues, what legal theories are being explored, what weaknesses are anticipated, and what lines of attack are being considered. A well-crafted research query can reveal more about litigation strategy than a memorandum.
If an attorney searches:
“Florida undue influence presumption rebuttal burden recent 3d DCA”
that query discloses far more than a generic research task. It signals the theory, the procedural posture, the anticipated burden shift, and the likely appellate battleground.
The fact that the search was entered into an AI interface rather than Westlaw does not transform that mental process into discoverable material.
II. No One Demands Disclosure of Westlaw or Google Searches
Courts do not require attorneys to disclose their Westlaw history. They do not compel production of Google search logs. They do not order litigants to produce their Boolean connectors.
Why?
Because those searches are classic work product. They reveal what counsel was thinking and what theories were being tested.
The argument that AI searches are different because they are sent to a third-party server is a red herring. Nearly all modern legal research occurs on third-party servers. Westlaw and Lexis are third-party platforms. So is Bloomberg. So is Google. So is virtually every cloud-based case management system.
If mere transmission to a research vendor destroyed work product protection, modern litigation practice would collapse.
Courts have long recognized that disclosure to a vendor does not waive protection when the vendor is functioning as a tool assisting counsel. The same logic applies here. AI systems are tools—nothing more, nothing less.
III. The “It’s Not Privileged” Argument Misses the Point
Some commentators conflate attorney-client privilege and work product immunity. They are distinct doctrines serving distinct purposes.
The attorney-client privilege protects confidential communications between attorney and client for the purpose of obtaining legal advice.
Work product immunity, on the other hand, protects materials prepared in anticipation of litigation and, critically, counsel’s mental impressions.
A research prompt to an AI system may not be privileged. That is not the question. The question is whether it reflects counsel’s mental impressions and litigation strategy. If so, it is protected by work product immunity.
Under Hickman and Rule 26(b)(3), it plainly does.
IV. The Chilling Effect on Advocacy
If courts begin ordering disclosure of AI prompts and research inputs, several predictable consequences follow:
- Lawyers will avoid using efficient tools for fear of compelled disclosure.
- Opposing counsel will weaponize discovery to probe strategic thinking.
- Litigation will become more expensive and less efficient.
- The adversarial system will be distorted by forced transparency into one side’s mental processes.
The work product immunity doctrine was designed precisely to prevent that outcome.
The doctrine exists because the Supreme Court understood that lawyers must be able to prepare cases without fear that their files, and their thoughts, will be opened to their adversaries.
There is no principled basis for treating AI queries differently from handwritten notes, internal memoranda, or Westlaw searches.
V. The Proper Analysis
The correct framework is straightforward:
- Was the material prepared in anticipation of litigation?
- Does it reflect counsel’s mental impressions, conclusions, opinions, or legal theories?
If yes, it is protected work product.
Whether the research was conducted in a law library in 1975, on Westlaw in 2005, or through an AI interface in 2026 is legally irrelevant.
Technology changes. Doctrine does not.
VI. Disclosure of “Whether AI Was Used” Is a Separate Question
Some courts have required certification regarding the use of AI in filings. That is a different issue from compelling production of research prompts.
Courts may regulate the accuracy of submissions. They may sanction fabricated citations. They may require reasonable diligence.
But compelling disclosure of research methodology—what tool was used, what prompts were entered—invades core work product territory.
Judicial discomfort with new technology does not justify eroding a doctrine that has protected the adversarial process for nearly eighty years.
VII. The Bottom Line
AI is not magic. It is not a co-counsel. It is not a privileged confidant.
It is a research tool.
And the law has never required attorneys to disclose their research tool inputs to their adversaries.
Treating AI prompts as unprotected because they are “new” is not doctrinally defensible. It is technologically naive and legally unsound.
If courts are concerned about accuracy, they should enforce existing rules. If courts are concerned about misuse, they should sanction misconduct.
But dismantling work product protection because the research was conducted in a modern interface is a profound mistake.
The adversarial system depends on protected preparation. That principle does not disappear when the keyboard changes.