AI, they contend, can serve as an assistive tool, but not as a substitute for human professional judgment and mandatory verification.
The Supreme Court of India has held that judicial orders citing hallucinated case law constitute judicial misconduct, and several High Courts, including those of Andhra Pradesh and Kerala, have issued warnings or binding rules restricting AI-assisted drafting. Similar issues have arisen globally.
“Using artificial intelligence to deliver judgments may seem easy. However, it is not appropriate for judges to rely on AI while deciding cases. Judicial decisions must be based on human reasoning, wisdom, and careful consideration of facts and law,” Mangari Rajender, a retired district judge and former Director of Telangana State Judicial Academy, has said.
Sanjeev Kapoor, Senior Partner at the law firm Khaitan & Co, said that US courts have sanctioned lawyers for submitting AI‑generated citations without verification, and errors in judicial orders overseas have highlighted the same danger.
He cited examples of how the European Union and Singapore have built frameworks to flag and address the risks of using AI in judicial delivery.
“These global principles align with core themes that hallucinations remain a real risk and that lawyers remain fully responsible for accuracy, and most importantly, AI cannot replace professional judgment,” he said.
Terming the SC ruling timely, AI Industry Analyst Kashyap Kompella felt that judicial decisions derive their authority from reasoned analysis, verified precedent and institutional accountability. “A judgment is an exercise of State power grounded in law. Any tool that risks introducing inaccuracies into that process deserves careful scrutiny. The concern really is about the uncritical use of AI,” he said.
Kashyap, who is the author of the books ‘AI for Lawyers’ and ‘AI Governance and Regulation’ and a Visiting Faculty of AI & Law, argues that AI tools such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude are capable of producing fluent, persuasive text.
“But that fluency creates a false sense of reliability. In reality, these systems generate responses by predicting likely word sequences; they neither verify facts nor understand legal doctrine as trained professionals do. In legal contexts, even minor inaccuracies can have serious consequences,” he pointed out.
He called for the use of purpose-built legal AI platforms trained on curated legal databases and designed to link outputs to identifiable sources.
Where it can help
Artificial intelligence can be valuable in tasks that are preparatory rather than determinative. It can assist in identifying potentially relevant case law, summarising large volumes of documents for initial review, refining drafts for clarity, improving structure and managing routine administrative work.
Published on March 4, 2026