Opinion | To make AI safe, put women and girls at the heart of the technology
From deepfake porn to harassment, digital abuse disproportionately targets women and girls, supercharging the harm

In late February, Hong Kong’s Office of the Privacy Commissioner for Personal Data co-signed, alongside 60 overseas organisations, a statement to bring attention to the rising misuse of deepfakes. With rapid technological developments, growing AI integration and lower barriers to access, swift action is needed to safeguard women and girls against growing forms of technology-facilitated violence.
The rise of AI has increased the speed, scale and sophistication of such abuse. Artificial intelligence did not create this violence, but it has supercharged it.
Many platforms inadvertently allow technology-facilitated violence to flourish by making reporting and accountability unnecessarily opaque. Although most publish safety policies, there remains a stark disconnect between what users need and the delayed or inadequate responses they receive.
The spread of harmful deepfakes underscores how unclear reporting pathways and ineffective follow-up fail to protect those at risk. And navigating these systems can be retraumatising – reinforcing how digital spaces still overlook, exclude or simply fail to understand the lived experiences of women and girls.
