Generative AI: The Illinois Supreme Court’s Policy and Its Application For Attorneys

April 18, 2025

By Sam Schwab

The Illinois Supreme Court recently announced its official policy on Artificial Intelligence (AI) in the courts, effective January 1, 2025. The Court also released a Judicial Reference Sheet created to assist judges. The policy states that use of AI by litigants, attorneys, judges, judicial clerks, research attorneys, and court staff “can be expected, should not be discouraged, and is authorized provided it complies with legal and ethical standards,” and that “Disclosure of AI use should not be required in a pleading.” According to the AI Task Force co-chairs, this language authorizing the use of generative AI in legal practice was chosen so as not to “have a chilling effect on the beneficial use of AI.” The policy also mandates that users thoroughly review AI-generated content before submitting it, both to ensure accuracy and to guard against the factual and legal misrepresentations that the rules of professional responsibility treat as sanctionable conduct.

The policy further states that AI applications must not “compromise sensitive information, such as confidential communications, personal identifying information (PII), protected health information (PHI), justice and public safety data, security-regulated information, or information conflicting with judicial conduct standards or eroding public trust.” It is therefore incumbent upon lawyers not to feed such protected information into publicly available text-based generative AI tools such as ChatGPT, Copilot, Claude, Gemini, and others.

Generative AI Prompts Included In The Judicial Reference Sheet and Their Actual Usefulness

The Judicial Reference Sheet, along with the above-mentioned policy, stresses that “Judges remain ultimately responsible for their decisions, irrespective of technological advancements.” The handout also provides examples of generative AI prompts, which vary in their actual utility for lawyers. One prompt given is, “Summarize the following legal brief and identify key arguments.” This prompt, on its own, has limited utility for attorneys because it would likely produce a long-winded and perhaps convoluted response that most lawyers do not have the time or capacity to dissect and analyze. A more useful prompt may be something along the lines of, “Summarize the brief in 1-2 sentences, as in, give me the main takeaway.” Such a prompt would more efficiently streamline a generative AI response for busy (and often digitally overstimulated) attorneys. In general, instructing generative AI applications to limit their responses to one or two sentences, or to less than a paragraph, is a helpful way to make AI summaries more efficient for attorneys.

The second prompt given in the reference sheet is, “Rewrite this paragraph in a respectful and neutral tone using plain language so it can be understood by people without legal expertise.” This prompt is more useful than the first because redrafting work that has already been written is a much better use of AI than having the tool write its own response from scratch. Generative AI platforms are powered by Large Language Models (LLMs), a type of algorithm trained to predict the next word in a sequence based on patterns learned from vast datasets of publicly available text, such as Wikipedia, books, and web pages. Importantly, LLMs are not search engines: they generate rather than retrieve. They do not pull factual data from the internet in real time; they generate responses based on probabilities derived from their training data. As such, AI tools are quite useful for redrafting for simplicity, clarity, and brevity, and for translating technical jargon for lay audiences. This is especially useful for us as medical malpractice defense attorneys, who specialize in making complex medical terminology comprehensible for non-medically trained juries.
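For readers curious what “predicting the next word from patterns in training data” means in practice, the toy sketch below illustrates the idea in miniature. It is only an analogy, not how commercial LLMs are actually built: the sample corpus and the `predict_next` helper are hypothetical, and a real LLM uses a neural network over billions of parameters rather than simple word counts. Still, the core behavior is the same in kind: the program never looks anything up; it only outputs the statistically most likely next word.

```python
from collections import Counter, defaultdict

# A tiny, made-up "training corpus" (purely illustrative).
corpus = (
    "the court issued the opinion the court denied the motion "
    "the judge issued the ruling"
).split()

# Count how often each word is followed by each other word (a bigram model).
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'court' ("court" follows "the" most often here)
```

Note that the model answers from frequency alone: ask it about a word it has never seen, and it has nothing to retrieve and nothing to say, which is the kernel of why LLM output must always be verified.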

The third prompt, “Prepare a speech about the importance of procedural due process for an audience of judges,” is the weakest of the three in terms of practical usefulness. This type of prompt often leads the AI to hallucinate—that is, to generate content that sounds authoritative but is factually inaccurate or unsupported. Again, free, publicly available generative AI tools like ChatGPT and Copilot generate responses based on patterns in their training data. They do not have access to proprietary legal research databases like Westlaw or Lexis. As a result, they lack the depth and precision needed to produce substantive content grounded in the black letter law, especially for expansive doctrines like procedural due process. When I tested this prompt using the application Claude, the output was coherent, but filled with vague generalities and rhetorical filler.

The Judicial handout also includes examples of photo, audio, and video prompts, including “create image of an Illinois Courtroom” and “Create movie depicting President Abraham Lincoln conducting legal research on a computer.” What the handout does not mention is that generating multimedia content requires specialized generative AI tools. ChatGPT and similar free, publicly available tools are primarily text-based and are not capable of generating photo, audio, or video content. Specialized multimedia platforms include Runway ML, Pika Labs, and Luma Labs for video; Midjourney, OpenAI’s DALL-E, and Stable Diffusion for images; and ElevenLabs and Suno for audio. Multimedia generative AI tools have the potential to be incredibly useful to us as medical malpractice defense attorneys, providing illustrations, exhibits, and other visual aids to break down challenging medical topics for lay juries. So, while multimedia generative AI has massive potential for trial attorneys, it generally requires the use of paid tools, not the free versions of popular tools like ChatGPT.

If you’re interested in staying informed about the ongoing discussion of generative AI, please visit the National Center for State Courts’ AI Resource Center, which includes recent case law and decisions regarding technology and AI.

Sam Schwab is an associate in Hall Prangle’s Chicago office who focuses on healthcare litigation.

Disclaimer: This publication is not intended to provide legal advice but to provide general information on legal matters. Transmission is not intended to create and receipt does not establish an attorney-client relationship. Readers should seek specific legal and/or medical advice before taking any action with respect to matters mentioned in this publication. The attorney responsible for this publication is Sam Schwab. This post constitutes a form of attorney advertising as defined by some state bar associations.
