NZ AI guidelines in progress
By Resolve Editor Kate Tilley
A New Zealand working group drawn from the judiciary, the Ministry of Justice and academics will soon release guidelines for consultation on use of generative AI in the NZ courts.
Justice Patricia Courtney, who was appointed to the NZ Court of Appeal in 2019 after serving on the High Court since 2004, said her opening address to the NZILA’s 2023 conference was a good opportunity to discuss artificial intelligence’s challenges.
“Generative AI is talked about both in terms of being the greatest technology humans have ever developed and an existential threat to humanity because it will produce a form of digital intelligence that surpasses that of humans,” she said.
The tech industry’s term for generative AI inadvertently creating fictitious information was hallucinating. “If ChatGPT can make up a citation, could it produce an actual judgment? Of course, including in the style of a particular judge.”
Justice Courtney said AI would affect jobs, but “there’ll be far greater jobs on the other side of this and the jobs of today will get better”. There was also potential for significant harm. “The legal and insurance sectors are both … among the most likely to be affected by generative AI.”
While AI would not replace lawyers, “those who don’t use AI will be replaced by those who do”.
Language structures
Generative AI used mathematical models and data to augment, replace or improve on human cognitive tasks. Its large language models were trained on vast amounts of data drawn from the internet, learning the patterns and structures of human language to enable content creation.
“Generative AI splits text into bite-sized chunks called tokens. They might be words, parts of words, like suffixes, or punctuation.” Training on billions of tokens enabled the tool to recognise, in a statistical way, how human language is structured.
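As an illustration of that idea (not drawn from the address), the short sketch below splits a sentence into tokens using OpenAI’s open-source tiktoken library. The choice of library and of the cl100k_base encoding is an assumption made for the example; different models use different tokenisers.

import tiktoken

# Assumed encoding for illustration only; models differ in how they tokenise text.
enc = tiktoken.get_encoding("cl100k_base")

text = "The defendant's appeal is dismissed."
token_ids = enc.encode(text)                       # text -> integer token ids
tokens = [enc.decode([tid]) for tid in token_ids]  # ids -> readable token strings

print(token_ids)   # a short list of integers
print(tokens)      # whole words, word fragments and punctuation marks

Each token is handled as a number, which is what lets the model learn statistical patterns in language rather than meaning or truth.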
Generative AI chatbots could be prompted to try for better or different responses, but were not designed to recognise truth, correctness or bias.
Justice Courtney said AI tools might or might not give a correct response, which meant hallucinations remained a problem and accuracy was the major challenge.
“Generative AI will only be reliable in professional settings if there is a human on hand to confirm the correctness of the output. Obviously that human must have good knowledge to assess the accuracy of the generated product.”
Justice Courtney said there were limitations in the data chatbots had been trained on, but new data input by a user of a generative AI tool, and the tool’s responses, became part of the data set available to future users. That meant the tool became more experienced as it was used more, but with obvious implications for confidentiality.
She said many important sources of legal data in NZ were not accessible on the internet, so generative AI tools freely available to the public were probably not hugely useful to lawyers.
Huge resources
Justice Courtney said huge resources were being poured into generative AI tools created explicitly for the legal profession. They aimed to significantly reduce the time required for searching databases, analysing and summarising cases, producing research manuals, writing memos and completing discovery.
In August, Thomson Reuters said it was investing $100 million a year in generative AI. “Given the size of the NZ market and some of its idiosyncratic features, it will take time to produce a tool that is fully capable in the NZ-specific context. Nevertheless, the generic tasks of analysing cases … and even predicting case outcomes based on the analysis can be expected in quite a short time,” Justice Courtney said.
Law firms faced a broad range of issues in considering how to engage with the technology, how much to invest in it, and how to plan for the time when a substantial amount of work now done by junior lawyers would be done by AI. They had to consider how to train and retain the junior lawyers who would be needed in the future, and how to charge for work that would increasingly be performed by machines.
Justice Courtney said lawyers should ask what generative AI meant for access to justice and the client community. “The most exciting possibilities lie not in swapping machines for lawyers, but in using AI to deliver client outcomes in entirely new ways. For example, through online dispute resolution rather than physical courts and more fundamentally through dispute avoidance rather than dispute resolution.”
AI could be used for low-value, high-volume disputes that take up a lot of time in the lower courts.
Human input
Justice Courtney said even if accuracy and confidentiality could be assured and a lawyer was satisfied their professional obligations would be met if they used AI, most clients would still want the input of a human to give the final advice.
“Conversely, as the level of reliability improves, clients may actually seek out AI assistance as a means of reducing costs. AI is likely to reduce the input of lawyers in simple cases or even the early stages of complex cases. This is work often done by junior to intermediate solicitors who also need to eat, sleep and go on holiday, which an AI system does not.”
Justice Courtney said if those solicitors were freed from routine work, they could engage in more meaningful work and more quickly acquire the higher-level skills that a machine could not.
Justice Courtney said little work had been done on providing specific guidance for AI use by lawyers and judges.
The European Parliament had approved rules for regulating AI under a proposed AI Act. The new rules would set different regulatory requirements depending on the level of perceived risk.
Transparency, explainability
The UK’s approach was to require existing regulators to guide use of AI in accordance with stated principles, including transparency and explainability, so parties could understand an AI system’s decision-making process and contest an AI decision.
The UK Law Society supported that approach, emphasising the core value of confidentiality for information protected by legal professional privilege.
Justice Courtney said the NZ guidelines on which the profession would soon be consulted were the first attempt anywhere to provide guidance to lawyers on what was expected of them.
Regardless of how effective and reliable generative AI became, lawyers and judges were bound by ethical considerations and obligations.
However, it was likely the use of generative AI would be accepted as inevitable, and as a worthwhile means of improving the quality of professional work.