AI developers must ‘learn to dance with shackles on’ as China makes new rules in a post-ChatGPT world

Hamartia Antidote


  • Cyberspace administration releases draft measures for generative AI services as China, and countries around the world, grapple with impact
  • With ‘socialist core values’ a focus for Beijing, Tsinghua academic outlines four areas of risk for China: technical, economic, social and political



Beijing has moved swiftly to regulate the generative artificial intelligence industry as governments around the world improve governance to lessen the impact of the technology after the launch of ChatGPT in November caused a worldwide sensation.

The Cyberspace Administration of China (CAC) released draft administrative measures for generative AI services for public consultation on April 11.

It outlines the first rules in mainland China to require data protection, non-discrimination and the absence of bias, and quality of training data in generative AI products and services – concerns that China shares with governments elsewhere.

The measures also seek to govern content moderation, security assessment and algorithmic transparency, matters that analysts say are of particular concern to the Chinese government.

While the draft rule strikes a tone supportive of AI development, political requirements rank highly.
In the first provision of Article 4, which governs content, the draft rule says all services that are “provided to the public in mainland China” should provide content that “embodies socialist core values” and should not contain false information or anything intended to subvert state power, promote separatism or undermine territorial integrity.

While China has high hopes for AI in its technology competition with the US and America’s allies, analysts say the sector is expected to “dance with shackles on” in coming years as regulators attempt to be agile with regulation to address the challenges of governing AI.

Xue Lan, dean of the Institute for AI International Governance at Tsinghua University, said that while the fourth industrial revolution involves various kinds of technologies, the ChatGPT breakthrough meant AI was now widely seen as the leading one.

ChatGPT is an AI chatbot developed by the Microsoft-backed OpenAI that has garnered attention for its detailed responses and articulate answers across many domains of knowledge, although its factual accuracy has been uneven.

It is not officially available in China, where the government strictly censors the internet. However, the technology quickly went viral in China as domestic companies – including web search operator Baidu, e-commerce giant Alibaba and AI firm SenseTime – raced to launch technologies similar to ChatGPT.

Chinese universities are on alert for plagiarism after the University of Hong Kong in February introduced a temporary ban on students using ChatGPT or any other AI-based tool for coursework, assessments or classes. The Chinese media has keenly reported on increasing threats posed by AI to human jobs, from accountants to copywriters and graphic designers.

Military experts show great interest in its potential for information gathering and cognitive warfare. The country’s armed forces publicly discussed AI technology for the first time in an article published in the PLA Daily on April 13.
“During peacetime, ChatGPT could be used to gather intelligence by helping to analyse vast amounts of data from the internet, improving efficiency and finding valuable information. In wartime, it could generate comprehensive battlefield reports, making planning more efficient,” Mao Weihao, a lecturer with the Army Command College in Nanjing, wrote in the PLA Daily.

In cognitive warfare, tools such as ChatGPT could be used to analyse public opinion, create false statements and manipulate perception to damage a government’s image or change the public’s stance, the article continued.
Mao also suggested countermeasures that could be used against AI in a war. He said the technology had known vulnerabilities, such as its dependence on training data that could be targeted to deceive the AI.
Generative AI technology mainly poses risks in four aspects, Tsinghua’s Xue said at the China Development Forum in Beijing last month.

Technically, its transparency and interpretability may be limited and it may carry hidden biases. Economically, it affects employment and profit distribution, and may lead to monopolies, according to Xue. Socially, academic integrity becomes a concern. And politically, the values and ideology implied in AI outputs should be gauged for their social influence, he said.

“China’s AI industry will be carried forward by two wheels – development and regulation. China is expected to continuously guide the healthy development of AI down the road,” Xue said.

China has accelerated its embrace of AI technology since President Xi Jinping declared in 2018 that artificial intelligence was strategically important in the world technological competition.

The International Data Corporation (IDC) predicts that annual AI investment in China will reach US$26.69 billion in 2026, accounting for about 8.9 per cent of global AI investment and ranking second only to the United States.
Thanks to policy support, huge amounts of data available for machine learning and massive demand for market applications, China’s AI market is expected to triple to more than 400 billion yuan (US$58 billion) by 2025, according to a report by iiMedia Research issued in 2021. By 2021, Chinese firms claimed to have produced 21 large language models, up from just two in 2020, putting them on a par with the US.

A large language model, according to AI chip designer Nvidia, represents a deep-learning algorithm that can recognise, summarise, translate, predict and generate text and other content based on knowledge gained from massive data sets.

Zeng Liaoyuan, an associate professor of information and communication engineering at the University of Electronic Science and Technology of China in Chengdu, expects greater investment in AI to be ushered in as China tries to catch up with the success of the big models such as ChatGPT.

“The government will steer the industry development with caution, wary of destructive consequences as deep learning is often described as a ‘black box’ technology. The predictions and decisions it makes are not easily explained, even by its developers,” Zeng said.

“Like the way it regulates social media, Beijing is likely to make sensitive keywords and topics off limits in AI products and services,” he said.

Shocked by the prowess of AI, the Chinese government is becoming increasingly concerned about challenges to its governance, especially in terms of security, several officials recently told the South China Morning Post, speaking on condition of anonymity.

While China’s national security is wide in scope – including on political, economic and social fronts – Beijing’s top concern was about challenges to the regime, said Alfred Wu, an associate professor at the Lee Kuan Yew School of Public Policy at the National University of Singapore.

“No one is 100 per cent sure what content AI will generate. The worst thing to the Chinese government is that people use it against its ruling. To play safe and consistent with its content censorship practice, the Chinese government moves quickly to set the boundary, and we can expect more rules and regulations down the road,” Wu said.
In the past few years, China has been moving to regulate the emerging industry with the introduction of a slew of rules governing personal data, deep synthesis technology and algorithm transparency. Last month, the National Data Bureau was set up to oversee the security of all state-owned data and ensure it is shared between government agencies in economic planning.

The Cyberspace Administration of China launched the algorithm registry last year, requiring providers to ensure they do not “endanger national security or the social public interest” and to “give an explanation” when they harm the legitimate interests of users.

Other provisions sought to address monopolistic behaviour by platforms and hot-button social issues. The regulation requires recommendation algorithms with “public opinion characteristics” and “social mobilisation capabilities” to file with the registry.

The Chinese model resembles other measures being taken around the world.
The most direct regulatory parallel is found in the European Union’s Digital Services Act (DSA), a law that requires greater transparency and audits of recommendation algorithms, according to an article in December by Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, and Sharon Du, a James C. Gaither Junior Fellow in the Carnegie Asia Programme.

The authors said another possible analogue for China’s experiment was the AI ethics community’s movement to promote “model cards”, short overview documents that provide insight into a model.
“Like the Chinese registry filings, model cards provide information on architecture, application scenarios, training data and use of sensitive data,” Sheehan and Du wrote. They noted that instead of emphasising performance assessment, China’s algorithm registry emphasised security assessment.

“Generally, model cards address concerns regarding algorithmic bias by comparing a model’s performance when used across different demographic groups. They also address both expert and non-expert audiences.
“By contrast, China’s algorithm registry targets the government and the public with different filings, and Chinese citizens are not invited to evaluate bias.

“Rather, that remains the purview of the government, which can define what constitutes security and risk to it. The current registry reflects and reinforces the Chinese Communist Party’s role as the ultimate arbiter of these questions.”

Chen Daoyin, an independent political scientist and former Shanghai-based professor, said ideology control was expected to be a key theme of the party’s AI governance.

“Regulators will make sure training data and output of AI toe the line of the party’s official narrative to defend its ruling legitimacy. Political security is its top concern,” he said.

Chen said the AI tool would be used to serve the government and improve its rule, enabling officials to prevent any event deemed a “potential political hazard”.

“As a result, China’s AI companies and developers will have to learn to dance with shackles on.”
 