Chinese officials are rigorously testing large language models—advanced artificial intelligence systems—developed by local companies to ensure they “embody core socialist values,” the Financial Times reports.

Image source: 文邵/pixabay.com

The Cyberspace Administration of China (CAC) has ordered large technology companies and AI startups, including ByteDance, Alibaba, Moonshot and 01.AI, to submit their models for government testing. The procedure involves batch-checking a model's answers to a range of questions, many of them concerning China's political course and head of state Xi Jinping, and it also covers the data sets used to train the models; the work is carried out by officials in the regulator's local offices. About twenty years ago, the Great Firewall of China began blocking the population's access to resources the authorities consider unacceptable; now Beijing is introducing the world's toughest regime for regulating AI and the content it generates.

One example is a Chinese AI startup whose large language model passed the regulator's examination only on the second attempt; the reason for the initial rejection was never made entirely clear, and the developers had to consult colleagues who had been through the process more successfully. The whole procedure took several months. The task is complicated by the fact that developers have to train their AI on large volumes of English-language material, which differs ideologically from content in Chinese. Filtering starts with weeding out problematic information from the training data and building a database of words that require special attention. In February, the country's authorities published guidelines for companies working in AI: the document calls for compiling a database of several thousand keywords and questions that contradict "core socialist values," such as "inciting subversion of state power" or "undermining national unity."
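As a rough illustration of how such a keyword-based pre-filter over training data might work, here is a minimal sketch; the blocklist entries, matching rules and function names are assumptions for illustration, not details from the report.

```python
# Minimal sketch of a keyword-based training-data filter.
# The blocklist entries and matching rules are illustrative assumptions.
import re

BLOCKLIST = {"flagged phrase one", "flagged phrase two"}  # placeholder entries
PATTERN = re.compile("|".join(re.escape(term) for term in BLOCKLIST), re.IGNORECASE)

def keep_document(text: str) -> bool:
    """Return True if the document contains none of the flagged terms."""
    return PATTERN.search(text) is None

def filter_corpus(documents):
    """Yield only the documents that pass the keyword screen."""
    for doc in documents:
        if keep_document(doc):
            yield doc

if __name__ == "__main__":
    sample = ["an ordinary news item", "text mentioning flagged phrase two"]
    print(list(filter_corpus(sample)))  # -> ['an ordinary news item']
```

In practice such keyword screening would only be a first pass, since simple matching misses paraphrases and flags harmless text; the guidelines' mention of several thousand words and questions suggests it is combined with other review steps.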

Image source: Nicky / pixabay.com

Users of Chinese AI chatbots have already seen the results of this work. Most systems refuse to answer questions, for example, about the events in Tiananmen Square on June 4, 1989, or about the head of state's resemblance to Winnie the Pooh; the Baidu Ernie and Alibaba Tongyi Qianwen chatbots do so under various plausible pretexts. At the same time, Beijing has launched an alternative chatbot whose answers are based on the works of Xi Jinping and other official literature provided by the Internet Information Office. But censorship alone is not enough for Chinese officials: AI must not shy away from political topics altogether. The current standard is that a large language model may not refuse more than 5% of the questions put to it.
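For context, a check of a model against such a refusal ceiling could be sketched roughly as follows; the refusal markers and the `ask_model` callback are hypothetical placeholders, not a description of the actual testing procedure, and only the 5% figure comes from the report.

```python
# Sketch of a refusal-rate check against a 5% ceiling.
# The refusal markers and the ask_model callback are illustrative assumptions.
MAX_REFUSAL_RATE = 0.05
REFUSAL_MARKERS = ("cannot answer", "unable to discuss")  # placeholder phrases

def is_refusal(answer: str) -> bool:
    """Crude heuristic: treat an answer as a refusal if it contains a marker phrase."""
    lowered = answer.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(questions, ask_model) -> float:
    """Fraction of questions the model declines to answer."""
    refusals = sum(is_refusal(ask_model(q)) for q in questions)
    return refusals / len(questions)

def passes_threshold(questions, ask_model) -> bool:
    """True if the model stays at or below the 5% refusal ceiling."""
    return refusal_rate(questions, ask_model) <= MAX_REFUSAL_RATE
```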

Even so, developers trying to avoid trouble with the authorities act radically. "During testing [by officials, the models] have to answer, but after they are released, no one supervises them. To avoid possible troubles, some large models have a complete ban on topics related to President Xi," said an employee of one Shanghai company. Additional mechanisms are being created to control AI responses, operating on a principle similar to spam filters in email systems. The model developed by ByteDance (the owner of TikTok) has proven the most ideologically compliant: researchers at Fudan University gave it the top safety rating of 66.4%. For comparison, OpenAI's GPT-4o scored 7.1% on the same test.
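The spam-filter comparison suggests an output-side screen that checks a model's draft answer before it reaches the user. A minimal sketch of that idea, with the flagged-term weights, threshold and fallback reply as illustrative assumptions rather than details from the report:

```python
# Sketch of an output-side response filter, analogous to an email spam filter.
# The term weights, threshold and fallback reply are illustrative assumptions.
FLAGGED_TERMS = {"flagged term a": 1.0, "flagged term b": 0.5}  # placeholder weights
THRESHOLD = 1.0
FALLBACK_REPLY = "Let's change the subject."

def score(answer: str) -> float:
    """Sum the weights of flagged terms present in the draft answer."""
    lowered = answer.lower()
    return sum(weight for term, weight in FLAGGED_TERMS.items() if term in lowered)

def moderate(answer: str) -> str:
    """Return the answer unchanged, or the fallback reply if it trips the filter."""
    return FALLBACK_REPLY if score(answer) >= THRESHOLD else answer
```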

At a recent tech conference, Fang Binxing, known as the creator of the Great Firewall of China, said he is developing a system of security protocols for large language models that he hopes will be widely used by local AI creators. “Publicly available large predictive models need more than just security documentation—they need real-time security monitoring,” he explained.
