Australian lawyer apologises after AI generated fake quotes in murder case

The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court.

A barrister was forced to apologise after AI generated fake quotes and made-up judgments in submissions he filed in a murder case before an Australian court.

Defence lawyer Rishi Nathwani, who holds the prestigious legal title of King's Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder before the Supreme Court of the state of Victoria in Melbourne.

"We are deeply sorry and embarrassed for what occurred," Nathwani told Justice James Elliott on Wednesday, on behalf of the defence team.

The fake submissions included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court.

The errors were discovered by Elliott's associates, who could not find the cited cases and asked the defence lawyers to provide copies. The lawyers admitted the citations "do not exist" and that the submission contained "fictitious quotes," court documents say. They explained that they had checked the initial citations and assumed the others were also accurate.

The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. On Thursday, Elliott ruled that Nathwani's client, who cannot be identified because he is a minor, was not guilty of murder because of mental impairment.

"At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott told lawyers on Thursday. "The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice."

The submissions were also sent to prosecutor Daniel Porceddu, who did not check their accuracy.

The judge noted that the Supreme Court released guidelines last year on how lawyers may use AI. "It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott said.

Not the first case of AI hallucinations in court

In a comparable case in the United States in 2023, a federal judge imposed $5,000 (€4,270) fines on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim. Judge P Kevin Castel said they had acted in bad faith, but he accepted their apologies and remedial steps in lieu of harsher sanctions.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for US President Donald Trump. Cohen took the blame, saying he had not realised that the Google tool he was using for legal research was also capable of so-called AI hallucinations.

UK High Court Justice Victoria Sharp warned in June that presenting false material as if it were genuine could be considered contempt of court or, in the "most egregious cases," perverting the course of justice, which carries a maximum sentence of life in prison.

In a regulatory ruling that followed dozens of AI-generated fake citations put before UK courts across several cases, Sharp said the issue raised "serious implications for the ... public confidence in the justice system if artificial intelligence is misused."