
Chinese President Xi Jinping speaks at the 2025 Asia-Pacific Economic Cooperation meeting in Gyeongju, South Korea. Source: Yonhap via AP/Alamy
Despite risks that range from entrenching inequality to, some argue, causing existential catastrophe, the world has yet to agree on rules for governing artificial intelligence (AI). A patchwork of national and regional regulations exists, but binding rules are still taking shape in many countries.
In October, at a meeting of the Asia-Pacific Economic Cooperation Forum, Chinese President Xi Jinping reiterated his country’s proposal to create a body known as the World Artificial Intelligence Cooperation Organization (WAICO), which would bring countries together as a step toward creating a global AI governance system.

The proposal is part of a broader push by China to lead global efforts to govern AI, in contrast to the US approach, which favours deregulation. When it comes to transparency and AI policy, “China is the good guy right now,” Wendy Hall, a computer scientist at the University of Southampton, UK, told reporters at an event in London in October.
Many hurdles stand in the way of a binding intergovernmental agreement on AI, but some advocates say one is possible, comparing the technology to other risky but beneficial endeavours for which such agreements exist, such as nuclear power and aviation. Nature looks at China’s approach, what a global AI governance body might look like and its chances of success.
How is China’s AI ecosystem different from that of other countries?
Encouraged by the government, Chinese companies tend to release models with open weights, meaning they can be downloaded and built on. Compared with Western countries, China is placing less emphasis on building machines that can outperform humans — often called artificial general intelligence — and more on the race to use AI to drive economic growth. That is evident in a policy introduced last August called AI+, says Kwan Yee Ng, who leads the international AI programme at Concordia AI, a Beijing-based consultancy focused on AI safety.
How is China handling regulation of artificial intelligence?
China was among the first countries to introduce AI regulations, starting in 2022, and has wide-ranging rules covering harmful content, privacy and data security, for example. Developers of public-facing AI services must submit their systems to Chinese regulators for testing before deployment, Ng says. The result is that models such as those developed by Hangzhou-based DeepSeek, which found global fame with its R1 model earlier this year, are among “the most regulated in the world,” says Joanna Bryson, a computer scientist and AI-ethics researcher at the Hertie School in Berlin. Even so, the authorities often take a lax approach to enforcing these rules, says Angela Zhang, a legal researcher who specializes in AI regulation at the University of Southern California in Los Angeles.
By contrast, the United States has no comprehensive federal AI legislation, and in January, President Donald Trump rescinded an executive order aimed at ensuring the safe development of AI. He has positioned his administration as pro-industry, suggesting in a social-media post last month that a provision be added to a federal bill to prevent states from regulating AI. The European Union’s approach has been to classify AI systems by level of risk, with different transparency and oversight rules at each level; obligations that came into force in August target the most powerful AI systems. Meanwhile, the UK government has postponed its plans for comprehensive AI legislation until next year at the earliest.
What international legislation exists that governs artificial intelligence?
Very little. The only legally binding international regulation comes from the Council of Europe — an international organization of European member states, separate from the EU — which established the Framework Convention on Artificial Intelligence in May 2024. The treaty requires each signatory country to implement its broad obligations, such as ensuring that AI activities are compatible with human rights, through its national laws. But there are no sanctions or supranational enforcement body, says Lucia Velasco, an economist and researcher at the Oxford Martin School’s AI Governance Initiative, who is based in New York.
Companies and countries have also signed several non-binding agreements, such as the UNESCO Recommendation on AI Ethics, the OECD AI Principles, and the Bletchley Declaration, an international agreement signed by 28 countries at the UK AI Safety Summit in 2023. Several expert groups have issued documents outlining the risks, most notably the International AI Safety Report. The United Nations runs a “dialogue” process and has established a scientific committee to guide countries’ regulatory efforts.
What did China suggest?
Chinese officials have said that WAICO would be a way for countries to harmonize AI governance rules while “fully respecting differences in national policies and practices” and supporting the Global South. China has proposed that the organization’s headquarters be in Shanghai, but other details remain uncertain.
WAICO seems unlikely to govern AI directly in any enforceable way (China has said it supports the UN approach to global AI governance), but it could be a path for countries to gradually rally around a framework.
Ng says Xi’s call to set up WAICO was at least the fourth such push from a Chinese official in four months, suggesting the idea is important to the government. One reason is commercial, Zhang says. “The fact that China can set standards helps spread Chinese products around the world,” she says.
The move also gives China diplomatic clout. “China is trying to be like [an older] brother of the Global South, saying, ‘We also need to have a voice in AI governance and not be dictated to,’” Zhang says.