Letters | It’s OK to open up to an AI bot – just don’t stop there
Readers discuss ensuring AI is but a bridge to effective mental health support, Thailand’s termination of its MOU with Cambodia, and the future of warfare

Midnight. A teenager in Hong Kong lies in bed, his mind racing with thoughts that won’t stop. What if I’m not good enough? What if everyone thinks I’m fine but I’m not? The worries feel too embarrassing to tell others and too heavy to carry alone. So he picks up his phone, opens the chatbot app and starts typing.
As a clinical psychologist, I see this pattern every week among young people. Generative AI chatbots give many of them their first real experience of being heard when they feel overwhelmed. They offer personalised analysis, validation and concrete suggestions in just a few exchanges. This is a genuine and valuable development in a city where emotional disclosure still carries heavy stigma.
Yet convenience is not the same as competence. Feeling calmer after a chatbot exchange is not the same as being properly assessed or safely supported. Research on chatbot sycophancy shows that endlessly agreeable systems can weaken reflection over time. The boundary is clear: AI may help with emotional first aid, but it should not replace professional assessment or ongoing support.
Professional bodies have been right to issue clear cautions. Most clinicians advise young people to use these tools carefully and to seek human support when distress deepens. Yet while such warnings are necessary, we have been slower to provide practical, accessible guidance on how to use these tools responsibly.