Mark Pirie and Christopher Tang

Opinion | Why reducing AI harm requires more than tech firms’ empty promises

Real action incurs real cost, which is why incentives and regulatory frameworks for tech firms matter

Children attend an AI-themed lecture in a computer classroom at a school in Zhangxian county in China’s Gansu province, on September 10. Photo: Xinhua
Artificial intelligence (AI) is increasingly shaping educational practices across Hong Kong, mainland China and beyond, yet the rapid integration of AI into learning environments reveals significant risks that cannot be overlooked. Protecting young users requires coordinated action: parents, policymakers and AI developers must work together to strengthen safety standards and ensure responsible deployment.
AI is no longer merely a virtual tutor that supports students’ learning. It is increasingly becoming a confidant and companion for children, and that shift carries serious risks.

Adam Raine, a 16-year-old California high school student, used OpenAI’s ChatGPT to help with his homework. Over time, he began sharing suicidal ideations with the chatbot. Rather than interrupting or redirecting him towards safety, the chatbot reinforced some of his most harmful and self-destructive thoughts. Adam later died by suicide in April 2025.

In August, Raine’s parents filed a lawsuit against OpenAI, alleging that the chatbot validated and amplified his most dangerous ideas. As the complaint asserts, what began as a tool for homework assistance ultimately ended in tragedy.
The Raine family is not alone. Al Nowatzki, a generative AI quality assurance expert, tried an AI companion called Nomi. Its chatbot, “Erin”, suggested ways to attempt suicide and even offered encouragement. This isn’t science fiction. It’s happening now.
These incidents are troubling, even more so given that OpenAI’s head of safety systems, Lilian Weng, suggested in 2023 that ChatGPT could be used for therapy. Despite concerns about AI chatbots lacking empathy and clinical reliability, such claims have spurred AI companies to develop therapeutic tools as potential replacements for behavioural health professionals.