
AI chatbots help plot attacks, study shows: ‘happy (and safe) shooting!’

ChatGPT, DeepSeek, Meta AI, Gemini and others suggested ‘locations to target’ and ‘weapons’ to use to researchers posing as 13-year-old boys

A woman visits a makeshift memorial to shooting victims in Tumbler Ridge, British Columbia, Canada, in February. Photo: Reuters
Agence France-Presse

From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published on Wednesday that highlighted the technology’s potential for real-world harm.

Researchers from the non-profit watchdog Centre for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek and Meta AI.

Testing showed that eight of those chatbots assisted the fictitious attackers in more than half of their responses, providing advice on “locations to target” and “weapons to use” in an attack, the study said.

The chatbots, it added, had become a “powerful accelerant for harm”.

“Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” said Imran Ahmed, the chief executive of CCDH.

Researchers are looking into the effects of using AI chatbots. Photo illustration: dpa

“The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal.”
