CONFERENCE ISSUE 2024


AI warning from Queensland Chief Justice


By Resolve Editor Deb Eccleston


There may be many uses for generative AI, but legal decision making isn’t one of them.

Delivering the keynote address at AILA’s 2024 National Conference, The Honourable Chief Justice Helen Bowskill didn’t hold back in expressing her views on the use of Artificial Intelligence (AI) in the legal sector.

She said while technology should be used where it saves time and money for clients, it should never be used in legal decision making.

“We are part of a human system, which requires human decision making and all that that involves, including the ability to reason and draw inferences, think critically and act compassionately,” she said.

“I’m sceptical of, if not completely opposed to, the notion that it could be appropriately used in a legal context for substantive decision making.”


Under scrutiny

Chief Justice Bowskill had never used AI until preparing her keynote address – and the results it generated just reinforced her view.

First there was the Shakespearean narrative it produced when asked to create an introduction to a presentation about the use of generative AI in the legal profession.

Then there were the images provided when asked what a Chief Justice in Queensland in 2024 looks like – four pictures of white-haired, bearded men holding gavels.  

“Despite the name, generative AI chatbots are not actually intelligent in the ordinary human sense, nor is the way in which they provide answers analogous to the human reasoning process,” Chief Justice Bowskill said.

“These chatbots are built on large language models, or LLMs, which analyse a large amount of training text to predict the probability of the next best word in a sentence, given the context.”

While these LLMs are trained to mimic human dialogue, the responses provided are based on what the chatbot predicts to be the most likely combination of words, not necessarily the facts.
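To make that concrete: at its core, next-word prediction simply ranks possible continuations of a piece of text by probability and emits the likeliest one. The following toy Python sketch is purely illustrative, with invented probabilities and a hand-written lookup table standing in for a trained model; nothing like it appeared in the address.

    # Toy sketch only: the probabilities below are invented for illustration.
    next_word_probs = {
        "the court held": {"that": 0.62, "the": 0.21, "a": 0.09, "firmly": 0.08},
    }

    def predict_next_word(context: str) -> str:
        """Return the continuation the model rates most probable."""
        probs = next_word_probs[context]
        return max(probs, key=probs.get)

    print(predict_next_word("the court held"))  # prints "that"

A real LLM estimates these probabilities across billions of learned parameters rather than a hand-written table, but the output is still the statistically likeliest wording, whether or not it is true.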


Inaccurate information

The text used to train generative AI chatbots comes from a range of internet sources and doesn’t necessarily come from authoritative or up-to-date databases, Chief Justice Bowskill said.

“As well, the current public generative AI chatbots have limited access to training data on Australian law or the procedural requirements that apply in Australian courts or tribunals. Even when that is improved, there will be limitations based on the currency of the data.

“Generative AI chatbots cannot distinguish between facts, inferences and opinions contained in their source material. This means the text which they generate in response to a prompt may contain incorrect, opinionated, misleading or biased statements presented as fact.

“Generative AI chatbots can make up fake cases, citations and quotes, or refer to legislation, articles or legal texts that do not exist.”

 


Pictured from left to right: The Honourable Chief Justice Helen Bowskill with Kevin Holyoak (Callinan Chambers) and AILA Conference Chair Monique Moloney.


Real world cases

There are several examples in which the use of generative AI has had serious implications in legal cases. Chief Justice Bowskill referred to the 2023 case Mata v Avianca Inc., in which an airline passenger brought a claim against the carrier for damages for personal injury after a metal serving trolley struck his knee.

Lawyers for the passenger filed submissions in opposition to the carrier’s strikeout application; however, the carrier’s lawyers could not find the cases that had been cited.

The cases didn’t exist: the passenger’s lawyers had used ChatGPT to prepare the submission, and it had included fake cases.

“If that wasn’t bad enough, the lawyers doubled down and stood by the fake cases after being ordered to provide copies,” Chief Justice Bowskill said.

Chief Justice Bowskill noted that, had the lawyers come clean when the issue was first raised, the outcome would have been very different.

But as they did not, the court found that they had acted in bad faith, based on acts of conscious avoidance and false and misleading statements to the court.

The sanction imposed on the lawyers included not only a financial penalty, but also a requirement to write to the passenger advising him of the decision, and to all the judges who had been improperly identified in the fake cases cited.


AI in Australia

While Mata v Avianca Inc. is an extreme example, it isn’t an isolated one. Earlier this year, the case of DPP v Khan in the ACT Supreme Court highlighted the problems that can arise when generative AI is used to produce evidentiary material.

In this case, the material in question was a personal reference tendered in support of the offender. The way the reference was written raised suspicion, and it was found to have been generated by AI.

“There were two things that raised suspicion,” Chief Justice Bowskill said.

“First, the way the author’s relationship with the offender was introduced, which wasn’t what you would expect from a brother.

“Second was the paragraph praising the offender’s ‘commitment to cleanliness’ which contained non-specific, repetitive praise.”

In his ruling, Justice Mossop held that “any testimony, or character references, used in court proceedings that were likely to have been written with the assistance of AI chatbots must be given very little probative value to the extent that they cannot influence a trial.”

Justice Mossop added there was a positive duty on counsel appearing at the sentencing proceeding to make appropriate inquiries prior to tendering such a document.

Chief Justice Bowskill referred to other cases where the application of AI had serious ramifications, including when used to enhance video evidence or answer substantive legal questions.  

In wrapping up her address, Chief Justice Bowskill let Microsoft Copilot “have the last word”.

“In this Twilight Zone of Generative AI, we must wield our legal wands judiciously,” it said.

“Let us embrace the magic, but with eyes wide open. For the ethical compass that guides us transcends algorithms; it pulses within our hearts, reminding us that justice, even in the age of AI, remains a human endeavour.

“So, my fellow legal voyagers, let us navigate this brave new world one algorithm at a time.”

 

Photo credit: AILA 2024 Conference photos supplied by warringtonphotography.com

 

Resolve is the official publication of the Australian Insurance Law Association and
the New Zealand Insurance Law Association.