NextFin News - The Supreme Court of India has ordered the Bar Council of India (BCI) to establish a specialized committee of experts to address the escalating threat of "hallucinated" or fake judicial precedents generated by artificial intelligence. The directive, issued on Wednesday, May 6, 2026, follows a disturbing incident where a trial court judge reportedly relied on non-existent judgments cited by a litigant—a case of digital fabrication that has sent shockwaves through the country’s legal establishment.
A bench comprising Justices P.S. Narasimha and Alok Aradhe emphasized that while the judiciary does not intend to prohibit the use of AI, the reliance on fabricated citations constitutes professional misconduct with severe legal consequences. The court has appointed senior advocate Shyam Divan as amicus curiae to assist in the matter, signaling that the judiciary is moving toward a formal regulatory framework for generative AI in legal practice. Divan informed the bench that the Supreme Court’s Centre for Research and Planning has already drafted a white paper on the subject, which will likely serve as the blueprint for the new expert panel.
The crisis highlights a growing technical vulnerability in the legal system: the absence of sovereign large language models (LLMs) tailored for Indian law. Most practitioners currently rely on commercial AI tools that are prone to "hallucinations", a failure mode in which the model confidently asserts facts or citations that do not exist. According to Live Law, the Supreme Court expressed specific concern over the lack of localized, verified datasets, which leaves the door open for litigants to inadvertently or maliciously introduce "fake law" into the record.
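To illustrate the verification gap, here is a minimal sketch of the kind of citation screening a localized tool could perform. The hard-coded index, the regex patterns, and the function name are all illustrative assumptions for this sketch, not an existing product; a real system would query a curated database of reported judgments.

```python
import re

# Hypothetical verified index; a real system would query a curated
# database of reported judgments rather than a hard-coded set.
VERIFIED_CITATIONS = {
    "(2017) 10 SCC 1",   # K.S. Puttaswamy v. Union of India
    "AIR 1973 SC 1461",  # Kesavananda Bharati v. State of Kerala
}

# Rough patterns for two common Indian reporter formats:
# "(year) volume SCC page" and "AIR year SC page".
CITATION_PATTERN = re.compile(
    r"\(\d{4}\)\s+\d+\s+SCC\s+\d+|AIR\s+\d{4}\s+SC\s+\d+"
)

def flag_unverified(filing_text: str) -> list[str]:
    """Return citations found in the text but absent from the verified index."""
    return [c for c in CITATION_PATTERN.findall(filing_text)
            if c not in VERIFIED_CITATIONS]

draft = ("Relying on (2017) 10 SCC 1 and the purported ruling in "
         "(2019) 99 SCC 777, counsel submits ...")
print(flag_unverified(draft))  # ['(2019) 99 SCC 777'] -> flag for manual checking
```

A flagged citation is not proof of fabrication, only a prompt for the human verification the court is demanding; the absence of a comprehensive verified index is precisely the gap the bench identified.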
The BCI, which regulates the legal profession in India, now faces the daunting task of defining where "efficient research" ends and "misconduct" begins. The expert panel is expected to include both legal scholars and technology specialists to draft guidelines that could mandate the disclosure of AI use in filings. This move mirrors global trends; in the United States, several federal judges have already issued standing orders requiring lawyers to certify that any AI-generated research has been verified by a human being against traditional legal databases.
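By way of illustration only, a disclosure mandate of the kind described above might reduce to a structured record attached to each filing. The field names and values below are assumptions for this sketch; no BCI schema has been published.

```python
# A hypothetical structure for an AI-use disclosure attached to a filing.
# Field names are illustrative; no BCI schema has been published.
from dataclasses import dataclass, field


@dataclass
class AIUseDisclosure:
    tool_name: str                 # the AI assistant used
    purpose: str                   # drafting, research, summarization, etc.
    citations_generated: list[str] = field(default_factory=list)
    verified_by: str = ""          # advocate certifying human verification
    verification_source: str = ""  # traditional database checked against


disclosure = AIUseDisclosure(
    tool_name="generic-llm-assistant",
    purpose="preliminary case-law research",
    citations_generated=["(2017) 10 SCC 1"],
    verified_by="Adv. A. Sharma (hypothetical)",
    verification_source="official law reports",
)
assert disclosure.verified_by, "certification of human verification is required"
```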
However, some legal tech analysts remain skeptical of a purely punitive approach. While the Supreme Court’s stance is firm on misconduct, the rapid adoption of AI in lower courts—where backlogs are most severe—suggests that guidelines may struggle to keep pace with practice. The risk is that a "digital divide" could emerge, where well-funded firms use expensive, verified AI tools while independent practitioners rely on free, hallucination-prone models, further complicating the pursuit of equitable justice.
The immediate focus of the BCI panel will likely be the trial courts, where the lack of sophisticated verification infrastructure makes them the primary target for AI-generated errors. As the committee begins its work, the legal community is bracing for a shift in liability. If the panel’s recommendations are adopted, the burden of "algorithmic truth" will rest squarely on the shoulders of the advocate, making the failure to spot a fake citation not just an oversight, but a breach of professional ethics punishable by disbarment.
Explore more exclusive insights at nextfin.ai.