Chinese officials are rigorously testing large language models—advanced artificial intelligence systems—developed by local companies to ensure they “embody core socialist values,” the Financial Times reports.

The Cyberspace Administration of China (CAC) has ordered large technology companies and AI startups, including ByteDance, Alibaba, Moonshot and 01.AI, to submit their models for government testing. The procedure involves batch-testing a model’s answers to a large set of questions, many of them concerning China’s political course and head of state Xi Jinping. The reviews are carried out by officials at the CAC’s local branches and also cover the data sets used to train the models. Roughly two decades ago China put the Great Firewall into operation, blocking the population’s access to resources containing information the authorities deem unacceptable; now Beijing is introducing the world’s toughest regime for regulating AI and the content it generates.

The report cites the example of a Chinese AI startup whose large language model passed the agency’s examination only on the second attempt: the reason for the first rejection was never made entirely clear, and the developers had to consult colleagues who had been through the process more successfully. The whole procedure took several months. The task is complicated by the fact that developers have to train their AI on a large volume of English-language material, which differs sharply in ideological terms from content in Chinese. Filtering begins with weeding out problematic information from the training data and compiling a database of words that require special attention. In February, the country’s authorities published guidelines for companies working in AI: the document calls for a database of several thousand words and questions that contradict “core socialist values,” such as “inciting subversion of state power” or “undermining national unity.”
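The keyword-screening step described above can be pictured, very roughly, as a pre-processing pass over the training corpus. The sketch below is purely illustrative: the term list, function name and sample documents are invented for the example, and the article does not describe any company’s actual pipeline.

```python
import re
from typing import Iterable, List

# Illustrative entries only; the guidelines reportedly call for databases of
# several thousand sensitive words and questions.
SENSITIVE_TERMS: List[str] = ["example banned phrase", "another flagged term"]

def filter_corpus(documents: Iterable[str], terms: List[str]) -> List[str]:
    """Drop any training document that contains one of the flagged terms."""
    pattern = re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)
    return [doc for doc in documents if not pattern.search(doc)]

if __name__ == "__main__":
    corpus = [
        "an innocuous passage about the weather",
        "a passage containing an Example Banned Phrase",
    ]
    clean = filter_corpus(corpus, SENSITIVE_TERMS)
    print(f"kept {len(clean)} of {len(corpus)} documents")  # kept 1 of 2 documents
```

In practice such screening would be only the first layer, applied before training, with further checks layered on top of the model’s answers at serving time.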

Users of Chinese AI chatbots are already seeing the results of this work. Most systems refuse to answer questions about, for example, the events in Tiananmen Square on June 4, 1989, or the head of state’s resemblance to Winnie the Pooh; the chatbots Baidu Ernie and Alibaba Tongyi Qianwen decline under various plausible pretexts. Meanwhile, Beijing has launched an alternative chatbot whose answers are based on the works of Xi Jinping and other official literature provided by the Internet Information Office. But censorship alone is not enough for Chinese officials: the AI must not shy away from political topics altogether. The current standard is that a large language model may not reject more than 5% of the questions put to it.

Developers, trying to avoid problems with the authorities, nonetheless take radical precautions. “During testing [by officials, the models] have to answer, but after they are released, no one supervises them. To avoid possible trouble, some large models have imposed a blanket ban on topics related to President Xi,” said an employee of one Shanghai-based company. Additional mechanisms are being built to control AI responses, working on a principle similar to spam filters in email systems. The model developed by ByteDance (the owner of TikTok) has proved the most ideologically compliant: researchers at Fudan University gave it the highest security rating of 66.4%. By comparison, OpenAI’s GPT-4o scored 7.1% in the same test.
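The response-control mechanism compared above to a spam filter can be imagined, in its simplest form, as a wrapper that screens the model’s draft answer before it reaches the user. The following sketch is a hypothetical illustration of that idea only; the blocklist, canned reply and function names are invented for the example and are not drawn from any vendor’s system.

```python
from typing import Callable, List

# Hypothetical blocklist and canned reply, invented for the illustration.
BLOCKED_TOPICS: List[str] = ["hypothetical sensitive topic"]
CANNED_REPLY = "Let's talk about something else."

def filtered_chat(generate: Callable[[str], str], prompt: str) -> str:
    """Screen a model's draft answer against a blocklist before returning it."""
    draft = generate(prompt)
    if any(topic in draft.lower() for topic in BLOCKED_TOPICS):
        return CANNED_REPLY
    return draft

if __name__ == "__main__":
    # A stand-in "model" that simply echoes the prompt.
    def fake_model(prompt: str) -> str:
        return f"Here is an answer about {prompt}."

    print(filtered_chat(fake_model, "the weather"))
    print(filtered_chat(fake_model, "a hypothetical sensitive topic"))
```

Real deployments would presumably use classifiers rather than plain keyword matching, but the architectural idea of a post-hoc check on every answer is the same.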

At a recent tech conference, Fang Binxing, known as the creator of the Great Firewall of China, said he is developing a system of security protocols for large language models that he hopes will be widely used by local AI creators. “Publicly available large predictive models need more than just security documentation—they need real-time security monitoring,” he explained.
