AI in Court Filings: The Ethics Line Between Assistance and Abdication
Posted on March 30, 2026 in Uncategorized
By Ryan Katz
It is no secret that artificial intelligence (AI) is rapidly changing the landscape of many fields. Few would want the practice of law taken over by AI in any major way, but AI remains a tool with real applications for lawyers. Generative AI can help lawyers brainstorm arguments, organize facts, draft outlines, and accelerate first-pass writing. It is essential, however, to remember exactly what AI is: a tool in your arsenal. The ethics question is not whether lawyers may use AI. They may. The real question is whether they can use it without surrendering the professional judgment, verification, and client protection that the job requires. The answer from modern ethics guidance is clear: AI does not change the lawyer’s core duties. It simply creates new ways to violate them. The American Bar Association has stated that lawyers using generative AI must consider their ethical obligations, “including their duties to provide competent legal representation, to protect client information, to communicate with clients, to supervise their employees and agents, to advance only meritorious claims and contentions, to ensure candor toward the tribunal, and to charge reasonable fees.” (American Bar Association Formal Opinion 512)
For court filings and litigation documents, the most immediate ethical risks fall into five categories. First, competence: a lawyer must understand enough about the tool’s limits to use it safely, and must independently verify what it produces. Second, confidentiality: inputting client facts into a public or poorly governed system may expose protected information. Third, candor: filing invented cases, false quotations, or misleading AI-produced assertions can violate the duties owed to the court. Fourth, supervision: partners and supervisory lawyers must train lawyers and staff, and set guardrails for approved uses. Fifth, fees and client communication: lawyers cannot charge for AI use in misleading ways, and may need to explain or obtain consent when AI meaningfully affects the representation or creates confidentiality risk. (American Bar Association Formal Opinion 512)
Cases have already emerged that show just how important it is to use AI responsibly. In one case before the United States Court of Appeals for the Second Circuit, an attorney’s reply brief cited a non-existent state court decision fabricated by ChatGPT. It does not appear that the attorney did so knowingly; rather, she conducted research using ChatGPT, asking it to identify precedent in support of her arguments, and then failed to read or otherwise confirm the validity of the case it produced. The Court found that her conduct “falls well below the basic obligations of counsel” and referred her for discipline. (Park v. Kim, No. 22-2057 (2d Cir. 2024))
Courts can also treat misuse of AI as a violation of the Federal Rules of Civil Procedure. In Gauthier v. Goodyear Tire & Rubber Co., the plaintiff’s attorney filed a response containing two nonexistent cases and multiple nonexistent quotations. The court found that the lawyer had used a generative AI tool, failed to verify the content, and did not correct the problem even after opposing counsel identified it. The sanctions included a $2,000 penalty, one hour of CLE on generative AI in legal practice, and an order to furnish the sanctions order to the client. That last requirement is important because it demonstrates how AI misuse can harm not just the court, but can also permanently damage the client relationship. (Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-281 (E.D. Tex. Nov. 25, 2024))
There is no doubt that using AI improperly could similarly run afoul of your own state’s rules regarding candor and the duty to raise only nonfrivolous claims and arguments. In fact, the issue has already been raised in some state courts. The Second Appellate District of California recently published an opinion for the sole purpose of addressing it. After determining that “nearly all of the legal quotations in plaintiff’s opening brief, and many of the quotations in plaintiff’s reply brief, are fabricated,” the court sanctioned the attorney $10,000. Cases like this are entirely avoidable. The opinion and the massive sanction would have been unnecessary had the attorney simply remembered that AI is merely a tool, and that everything it produces must be monitored and double-checked. (Noland v. Land of the Free, L.P., 114 Cal. App. 5th 426 (Ct. App. 2025))
Some courts have started to create AI-specific compliance rules. The Northern District of Texas now requires disclosure on the first page of a brief if generative AI was used to prepare it; absent that disclosure, the filing certifies that no part was AI-prepared. (txnd.uscourts.gov). Two judicial circuits in Florida have recently adopted similar rules, requiring attorneys and self-represented litigants to disclose the use of generative AI in any submissions. (floridabar.org). These developments matter beyond those jurisdictions. Even where no AI-specific rule exists, the trend is unmistakable: courts expect lawyers to verify AI-generated content before filing, and local practice may impose disclosure or certification requirements on top of the ordinary ethics rules.
Best practices for AI must keep these cases and trends in mind. AI is a drafting assistant, not an infallible associate or paralegal. Never file an AI-generated citation, quotation, or legal assertion unless a lawyer has checked the original source. It is not enough to confirm that the cited cases exist: you must verify that every case is real, that it actually contains any quotations the AI pulled from it, and that it actually stands for the proposition the AI relied on it for. Additionally, do not input confidential or identifying client information unless you understand the particular tool’s retention, training, and access practices and have determined that there is no risk of a confidentiality violation. Create office policies identifying approved AI tools, including training on prompting, source checking, and human sign-off. Train lawyers and staff. Bill only for actual lawyer time, which can include time spent prompting the AI and reviewing its output, but not time the AI spends drafting on its own.
Used properly, AI can improve efficiency and expand access to legal services. Used carelessly, it all but invites sanctions against the attorney who submits its work. Fabricated authorities, confidentiality mistakes, inflated billing, and avoidable discipline are all dangers you must mitigate if you choose to rely on AI. But your duty as an attorney has always been to avoid these plain ethical pitfalls. No lawyer has ever been permitted to cite cases without reading them, or to share client-identifying details with others. The ethics of AI in litigation are therefore nothing new or frightening. Competence, candor, confidentiality, supervision, and honesty remain the core requirements for every attorney. AI just makes it easier to forget that a machine can draft words, but only a lawyer can take responsibility for them.