DeepSeek sheds light on data collection for AI training and warns of ‘hallucination’ risks

DeepSeek says data in pre-training stage is mainly collected from publicly available online information and authorised third-party data

A DeepSeek logo on a smartphone. Photo: dpa
Vincent Chow
Chinese artificial intelligence start-up DeepSeek has lifted the veil on how it filters data to train its models, raising red flags about “hallucination” and “abuse” risks.

In a document published on Monday, the Hangzhou-based start-up said it “has always prioritised AI security” and decided to make its disclosure to help people use its models, at a time when Beijing is ramping up oversight over the industry.

The company said data in the pre-training stage was “mainly” collected from publicly available online information as well as authorised third-party data, and that it had no intention of collecting personal data.


DeepSeek said it applied automated filters to remove raw data containing “hate speech, pornography, violence, spam and potentially infringing contents”. It also applied algorithmic detection combined with human review to identify “inherent statistical biases in large-scale data sets” and mitigate their impact on model values.
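
The document does not detail how DeepSeek's filters work. As a rough illustration of the general idea of automated pre-training data filtering, the sketch below applies simple pattern rules to a toy corpus; all names and rules here are hypothetical and are not drawn from DeepSeek's disclosure.

```python
# Illustrative sketch only: shows a generic rule-based pass over raw documents.
# Real pipelines typically combine classifiers, heuristics and human review.
import re

# Hypothetical placeholder rules standing in for a real blocklist/classifier.
BLOCKED_PATTERNS = [
    re.compile(r"\b(?:hate\s*speech|spam)\b", re.IGNORECASE),
]

def passes_automated_filter(document: str) -> bool:
    """Return True if the document clears the simple pattern rules."""
    return not any(p.search(document) for p in BLOCKED_PATTERNS)

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep documents that pass the automated checks; in practice, flagged
    items would often be routed to human reviewers rather than dropped."""
    return [doc for doc in documents if passes_automated_filter(doc)]

if __name__ == "__main__":
    corpus = ["A normal web page about cooking.", "Buy now!!! spam spam spam"]
    print(filter_corpus(corpus))  # -> ['A normal web page about cooking.']
```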

The company, founded by computer scientist Liang Wenfeng, said it was committed to reducing the “hallucinations” of its models through research and techniques such as retrieval-augmented generation, but added that it remained an “unavoidable” problem.
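
Retrieval-augmented generation grounds a model's answer in documents fetched at query time, rather than relying solely on what the model predicts from its training data. The sketch below is a minimal, generic illustration of that idea, not DeepSeek's implementation; the retrieval step and all names are hypothetical.

```python
# Minimal RAG sketch: retrieve relevant passages, then build a prompt that
# instructs the model to answer only from the retrieved context.

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval standing in for a real vector search."""
    query_terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(query_terms & set(doc.lower().split())))
    return scored[:top_k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages so the model answers from provided context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

if __name__ == "__main__":
    docs = ["DeepSeek is based in Hangzhou.", "RAG grounds answers in retrieved text."]
    print(build_prompt("Where is DeepSeek based?", docs))
```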


“AI is still in its early stages and the technology is still immature … at this stage, we cannot guarantee that our models will not produce hallucinations,” it said, reminding users to seek professional advice when necessary and emphasising that its models predicted rather than retrieved answers based on user prompts.

AI firms such as OpenAI and DeepSeek have been criticised for their chatbots’ hallucinations, in which the systems generate incorrect or misleading results. As underlying AI models become more powerful, concerns have grown about the possibility of AI-induced psychosis and other problems stemming from overreliance on chatbots.