Letters | How universities can rein in the abuse of AI
Readers discuss ways to ensure ethical artificial intelligence use in academia, the importance of activating Hong Kong’s emergency alerts, and the city’s healthcare fee hikes

I was enrolled in a university programme but decided to call it quits because some of my classmates used AI irresponsibly. Instead of using AI to brainstorm ideas or clarify concepts, they relied on it so heavily that they had essentially outsourced their critical thinking to it. For instance, when I asked what they thought of an article on a case study, they would sheepishly take out their electronic devices and read me responses generated by AI.
University coursework usually involves writing reflective essays or literature reviews of existing research before formulating research questions or charting the way forward in a research area, and my programme was no exception. Yet some of my classmates did not do the required reading, nor did they form a framework in their minds before putting pen to paper.
As the programme involved a great deal of group work, their lack of academic integrity had repercussions not only for themselves but also for their group mates. During the course of my studies, I found myself constantly thinking, "Do I have to check every source my group mates cite to ensure there is no AI hallucination?" This perennial doubt about some of my classmates' honesty caused me much distress, ultimately leading to my departure from the programme.
A former classmate later told me that the class representatives had raised the issue of academic integrity with the professors, which suggests the irresponsible use of AI was quite rampant. To the professors' credit, they tried to introduce immediate remedial measures such as peer evaluation forms, strict guidelines on the ethical use of AI and adjusted assessment weightings, to ensure hardworking individuals would be rewarded and "free-riders" penalised.