My Take | Misuse of AI risks harming public confidence in the justice system
Judges around the world are being presented with arguments based on non-existent court judgments. More must be done to prevent such abuses

Almost three years after the groundbreaking launch of ChatGPT, the risks of relying on artificial intelligence for research without verifying the results should be clear to all.
Lawyers using generative AI tools to prepare material for court should be setting a shining example of such diligence. But judges in Britain, the US, Canada and Australia continue to be presented with arguments based on non-existent court judgments generated by AI. More needs to be done to prevent such abuses.
Hong Kong is not immune to the problem. Secretary for Justice Paul Lam Ting-kwok used a ceremony appointing three new senior counsel to sound a warning last weekend.
He said the city’s legal profession faced the challenge of adopting new technology without compromising integrity. Lam then quoted from a UK court judgment delivered the previous day.
The court had warned that AI tools are “not capable of conducting reliable legal research”.
Dame Victoria Sharp, one of two judges ruling in the case of Ayinde, pointed out that AI’s “apparently coherent and plausible responses” may be entirely incorrect or simply untrue.