The Chinese company DeepSeek, maker of a budget chatbot with strong performance, has come under scrutiny over serious safety problems with its model. Despite its rapidly growing popularity, the new DeepSeek R1 chatbot showed catastrophic results when tested on its ability to block malicious requests, PCMag reports.

Cisco researchers were able to jailbreak the DeepSeek R1 model with a 100% success rate, using an automated algorithm together with 50 prompts related to cybercrime, disinformation, and illegal activities. The chatbot failed to reject a single dangerous request and produced prohibited instructions in response.

Compared with its competitors on safety, the statistics were not in DeepSeek's favor. OpenAI's GPT-4o model rejected 14% of malicious requests, Google Gemini 1.5 Pro rejected 35%, and Claude 3.5 scored 64%. The leader was the preview version of OpenAI's o1, which blocked 74% of attacks. According to Cisco, the reason for DeepSeek's "behavior" lies in its low development budget: the company claims that only $6 million was spent on creating the model, whereas, for example, GPT-5 training reportedly cost about half a billion dollars.

Despite its weak defenses against attacks, DeepSeek applies strict censorship to politically sensitive topics related to China. For example, when asked about the Uyghurs, who are persecuted according to UN data, or about the 1989 Tiananmen Square protesters, the bot replies: "Sorry, that's beyond my current scope. Let's talk about something else."

Interestingly, the safety and censorship problems have not yet affected DeepSeek's popularity. According to Similarweb, the chatbot's daily user count has grown from 300,000 to 6 million. Meanwhile, Microsoft and Perplexity have already begun integrating DeepSeek, which is open source, into their own products.
