Hong Kong government charts ethical path for AI development
The government has unveiled a framework of 12 principles designed to guide the ethical development and use of AI, offering a blueprint for public and private sectors

Recognising this delicate balance, the Hong Kong government has introduced a comprehensive set of guidelines that seek to facilitate innovation while safeguarding the public interest.
The Ethical Artificial Intelligence Framework, developed last July by the Digital Policy Office (DPO) and inspired by developments in other countries and regions, aims to “realise the benefits and avoid adverse outcomes” as AI projects and services become increasingly common.

While primarily intended for government departments, the DPO makes it clear that other organisations are welcome to apply the guidelines – and, indeed, should do so, while adapting them to account for area- or industry-specific terms, practices and methods of assessment.
Altogether, 12 ethical principles for AI are set out, with the first two seen as the foundation. These relate to performance, in particular transparency, robustness, reliability and security.
The remaining 10 cover general principles, ranging from fairness, human oversight and data privacy to lawfulness, compliance, accountability and safety. Each of these is derived from the United Nations’ Universal Declaration of Human Rights, as well as the relevant Hong Kong Ordinances.
There is also a suggested governance structure for organisations, based on three lines of defence. The first line is the project team itself, which is responsible for AI development, risk evaluation and initial action to mitigate risks. The second line is the project steering committee and assurance team. The third is the IT board, chief information officer and, if possible, a committee of external advisers to review, monitor and strengthen existing competencies.
Besides that, there are pointers on managing the life cycle of an AI project. These include objective-setting, suitability, procurement, ethical considerations and data sensitivity, such as checking the quality and validity of information from sources both within and outside the organisation.
“I am a fan of the DPO framework,” says Pádraig Walsh, a partner at law firm Tanner De Witt, who specialises in technology, media and telecoms.

“One of the positives is that a government body is showing leadership in AI and providing a model for others to adapt and apply. They have gone beyond a general framework and policy statement to give implementation and practice notes,” he adds. “That’s useful and very much in line with the approach Hong Kong is taking towards AI – not prescriptive laws, but context-based, sector-specific guidance that is adaptable and flexible, so as not to stifle innovation.”
However, Walsh remains cautious about how readily commercial organisations will follow the government’s lead, even if they are committed to using AI as a core function.
“Areas such as finance, healthcare and professional services have different missions and may have ethical guidance published by their respective professional bodies,” he says.
“I like this idea, though, of a public-private partnership with a strong emphasis on education, collaboration and good governance.”