Is Xi Jinping an AI doomer?

IN JULY last year Henry Kissinger travelled to Beijing for the final time before his death. Among the messages he delivered to China’s ruler, Xi Jinping, was a warning about the catastrophic risks of artificial intelligence (AI). Since then American tech bosses and ex-government officials have quietly met their Chinese counterparts in a series of informal gatherings dubbed the Kissinger Dialogues. The conversations have focused in part on how to protect the world from the dangers of AI. American and Chinese officials are thought to have also discussed the subject (along with many others) when America’s national security adviser, Jake Sullivan, visited Beijing from August 27th to 29th.

Many in the tech world think that AI will come to match or surpass the cognitive abilities of humans. Some developers predict that artificial general intelligence (AGI) models will one day be able to learn unaided, which could make them uncontrollable. Those who believe that, left unchecked, AI poses an existential risk to humanity are called “doomers”. They tend to advocate stricter regulations. On the other side are “accelerationists”, who stress AI’s potential to benefit humanity.

Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

Until recently China’s regulators focused on the risk of rogue chatbots saying politically incorrect things about the Communist Party, rather than on cutting-edge models slipping out of human control. In 2023 the government required developers to register their large language models. Algorithms are regularly marked on how well they comply with socialist values and whether they might “subvert state power”. The rules are also meant to prevent discrimination and leaks of customer data. But, in general, AI-safety regulations are light. Some of China’s more onerous restrictions were rescinded last year.

China’s accelerationists want to keep things this way. Zhu Songchun, a party adviser and director of a state-backed programme to develop AGI, has argued that AI development is as important as the “Two Bombs, One Satellite” project, a Mao-era push to develop nuclear weapons, ballistic missiles and a satellite. Earlier this year Yin Hejun, the minister of science and technology, used an old party slogan to press for faster progress, writing that development, including in the field of AI, was China’s greatest source of security. Some economic policymakers warn that an over-zealous pursuit of safety will harm China’s competitiveness.

But the accelerationists are getting pushback from a clique of elite scientists with the party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI posed a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chairman of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. A short time later the risks posed by AI, and how to control them, became a subject of study sessions for party leaders. A state body that funds scientific research has begun offering grants to researchers who study how to align AI with human values. State labs are doing increasingly advanced work in this domain. Private firms have been less active, but more of them have at least begun paying lip service to the risks of AI.

Speed up or slow down?

The debate over how to approach the technology has led to a turf war between China’s regulators. The industry ministry has called attention to safety concerns, telling researchers to test models for threats to humans. But it seems that most of China’s securocrats see falling behind America as a bigger risk. The science ministry and state economic planners also favour faster development. A national AI law slated for this year fell off the government’s work agenda in recent months because of these disagreements. The impasse was made plain on July 11th, when the official responsible for writing the AI law cautioned against prioritising either safety or expediency.

The decision will ultimately come down to what Mr Xi thinks. In June he sent a letter to Mr Yao, praising his work on AI. In July, at a meeting of the party’s Central Committee called the “third plenum”, Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities.

More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety”, says the guide. Since AI will determine “the fate of all mankind”, it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive.

Safety gurus say that what matters is how these instructions are implemented. China will probably create an AI-safety institute to observe cutting-edge research, as America and Britain have done, says Matt Sheehan of the Carnegie Endowment for International Peace, a think-tank in Washington. Which department would oversee such an institute is an open question. For now Chinese officials are emphasising the need to share the responsibility of regulating AI and to improve co-ordination.

If China does move ahead with efforts to restrict the most advanced AI research and development, it will have gone further than any other big country. Mr Xi says he wants to “strengthen the governance of artificial-intelligence rules within the framework of the United Nations”. To do that China will have to work more closely with others. But America and its friends are still considering the issue. The debate between doomers and accelerationists, in China and elsewhere, is far from over.
