
Opinion | Growing threat of AI misuse makes the need for effective, targeted regulation all the more urgent

  • The dangers of AI being used to spread misinformation and create deepfakes and child sexual abuse material will only get worse without meaningful regulation
  • Some frameworks have been proposed, but they do not include ways to hold AI developers accountable for the misuse of their systems

Illustration: Craig Stephens

Artificial intelligence is a general, highly capable dual-use technology. It is therefore open to general, highly destructive misuse.

For example, generative pre-trained transformer models – such as those that power chat agents like OpenAI’s ChatGPT, Google’s Bard and Baidu’s Ernie – can be used to spread misinformation in the form of written text. Generative vision models can create convincing “deepfake” photos. Realistic deepfake videos using actors as body doubles are also already here.
It’s now startlingly easy for anyone to pollute the informational environment. What was once a game for nation-states is now accessible to motivated small groups, even individuals.

Maybe you think it’s not regulators’ job to prevent people from forming false beliefs. But setting aside epistemic concerns, the opportunities for misuse are even more troubling. Child sexual abuse material can now be generated in effectively unlimited quantities at near-zero marginal cost. Worse, because each AI-generated image is novel, it slips past state-of-the-art detection and prevention tools, which largely work by matching known material. Law enforcement has no plan, and technical experts are at a loss.

Without meaningful, targeted regulation, things will get worse on all fronts. Precisely how, and how seriously, large-scale, highly capable “foundation” or “base” models will be misused in future is hard to predict, because it is hard to predict how those models will improve. Improve they will, though, and with those improvements will come more possibilities for, and a greater likelihood of, misuse.

Video: AI-generated image wins photography award, but artist turns it down (02:36)
Large AI developers in the United States and Europe have no specific regulatory responsibilities to mitigate these harms, but that is likely to change. This should be no surprise: even AI developers have an interest in preventing the misuse of their systems. Why would a company such as OpenAI want its technology used to generate child sexual abuse material? Does Google or Meta want to be embroiled in the next big political scandal? No, they want to sell ads.