AI Hallucinations Highlight Risks in Legal Contexts
Full Transcript
In a notable legal case, retired Third Circuit Judge Thomas Vanaskie addressed the implications of AI hallucinations when he admitted an expert report that contained citations to non-existent sources. The ruling came on September 3 in In re: Valsartan, Losartan, and Irbesartan Products Liability Litigation.

The expert, Dr. Sawyer, used an AI tool to help locate scientific articles for his report, which resulted in ten incorrect citations, primarily in a two-page section summarizing background facts. Despite these inaccuracies, Judge Vanaskie ruled that the overall reliability of Dr. Sawyer's opinions was intact, because they were supported by 'good grounds' and adhered to scientific methodology.

The defendants argued that the citation errors undermined the validity of the report, while the plaintiffs contended that the mistakes were minor and did not affect the core analysis of Dr. Sawyer's work, as noted in The Volokh Conspiracy. The decision has sparked discussion about the reliability of AI in judicial processes, highlighting the potential dangers of relying on AI-generated content in sensitive contexts like law, especially since the case may influence future litigation in which Dr. Sawyer's testimony is presented again.