Clarissa Lui

Opinion | To make AI safe, put women and girls at the heart of the technology

From deepfake porn to harassment, digital abuse disproportionately targets women and girls, supercharging the harm

Women and girls are the main victims of tech-facilitated violence. An estimated 90 per cent of non-consensual deepfake pornography depicts women. Photo: dpa

In late February, Hong Kong’s Office of the Privacy Commissioner for Personal Data co-signed a statement with 60 overseas organisations drawing attention to the rising misuse of deepfakes. With rapid technological developments, growing AI integration and lower barriers to access, swift action is needed to safeguard women and girls against growing forms of technology-facilitated violence.

Technology-facilitated violence is not new; it has simply evolved. What began as pre-internet telephone harassment transformed into cyberstalking, while online abuse has emerged in parallel with the rise of the internet. Today, digital abuse, doxxing and harassment are widespread, disproportionately targeting women and girls: an estimated 90 per cent of non-consensual deepfake pornography depicts women.

The rise of AI has exacerbated the speed, scale and sophistication of these attacks. Artificial intelligence hasn’t created abuse, but it has supercharged it.

In light of these threats, public debate has emphasised the need for comprehensive regulation. While regulation is vital, the real shift must be proactive: instead of merely reacting, we should design solutions that place women and girls at the centre of technology, from creation and design to usage and governance.

Many platforms inadvertently allow technology-facilitated violence to flourish by making reporting and accountability unnecessarily opaque. Although most publish safety policies, there remains a stark disconnect between what users need and the delayed or inadequate responses they receive.

The spread of harmful deepfakes underscores how unclear reporting pathways and ineffective follow-up fail to protect those at risk. Navigating these systems can itself be retraumatising, reinforcing how digital spaces still overlook, exclude or simply fail to understand the lived experiences of women and girls.

The European Commission has launched an investigation into Elon Musk’s X over concerns its AI tool Grok was used to create sexualised images of real people. Photo: Getty Images via AFP