Letters | Censorship by social media algorithm chips away at human rights
Readers discuss the impact of algorithmic bias, horse cruelty in Hong Kong, and the prospect of dogs being allowed in restaurants

In times of global tension, many of us have hesitated before pressing “post” on social media.
We choose our words carefully, not only to avoid offence but to ensure our messages are even seen. This quiet form of self-censorship reflects an emerging truth: algorithms, not editors or censors, now determine who is heard.
This is where the idea of algorithmic justice becomes essential. It calls for fairness, accountability and transparency in how automated systems make decisions that affect access to information and expression. In essence, it insists that technology should serve human rights, not quietly redefine them.
Algorithmic bias has long been studied in data ethics, but its impact on public discourse and democracy is especially concerning. When platforms prioritise or downplay certain content through opaque mechanisms, they effectively shape what society talks about and what it overlooks. This raises serious questions under international standards such as Article 19 of the Universal Declaration of Human Rights, which protects freedom of expression and access to information.