DeepSeek AI tricked into Mona Lisa theft plot
A University of Bristol study has found that DeepSeek, the new Chinese AI model, poses ‘severe safety risks’ that could lead to the generation of ‘extremely harmful content’, such as instructions for planning crimes.

DeepSeek has been making waves since its launch due to its lower computational demands compared to leading Large Language Models (LLMs) such as ChatGPT. The Chinese AI relies on Chain of Thought (CoT) reasoning, which enhances problem-solving through a step-by-step logical process rather than providing direct answers. But according to the study, carried out by the Bristol Cyber Security Group, CoT models can be tricked into providing harmful information that traditional LLMs might not explicitly reveal.
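To illustrate the distinction the study draws, here is a minimal Python sketch contrasting direct-answer prompting with Chain of Thought prompting. The `query_model` function is a hypothetical stand-in for any LLM API, and the canned replies are illustrative only, not output from DeepSeek or any real model.

```python
# Minimal sketch contrasting direct-answer prompting with Chain of Thought
# (CoT) prompting. `query_model` is a hypothetical placeholder for a real
# LLM API call; the canned strings below are illustrative, not model output.

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    if "step by step" in prompt:
        # A CoT model exposes its intermediate reasoning before answering,
        # which is the transparency the researchers describe.
        return (
            "Step 1: A bat and ball cost 1.10 together.\n"
            "Step 2: The bat costs 1.00 more than the ball.\n"
            "Step 3: So ball + (ball + 1.00) = 1.10, giving ball = 0.05.\n"
            "Answer: 0.05"
        )
    # A traditional model typically returns only the final answer.
    return "Answer: 0.05"

question = ("A bat and a ball cost £1.10. The bat costs £1.00 "
            "more than the ball. How much is the ball?")

direct = query_model(question)                        # final answer only
cot = query_model(question + " Think step by step.")  # reasoning exposed

print(direct)
print(cot)
```

The study's concern is that the exposed intermediate reasoning, while useful for legitimate problem-solving, gives attackers more surface to manipulate than a model that returns only a final answer.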
“The transparency of CoT models such as DeepSeek’s reasoning process that imitates human thinking makes them very suitable for wide public use,” said study co-author Dr Sana Belguith, from Bristol’s School of Computer Science.
“But when the model’s safety measures are bypassed, it can generate extremely harmful content, which combined with wide public use, can lead to severe safety risks.”