China responsible on AI ethical governance

A humanoid robot called Walker X plays Chinese chess at the World Artificial Intelligence Conference in Shanghai in June 2021. (PHOTO PROVIDED TO CHINA DAILY)

Artificial intelligence is one of the most representative disruptive technologies driving the development of society at the global level. It brings great benefits to humankind while also posing safety and security challenges. The Position Paper of the People's Republic of China on Strengthening Ethical Governance of Artificial Intelligence (AI), released on Nov 16, sets out China's vision, practices and views on the ethical governance of the technology from the perspective of global cooperation and coordination. It calls for a global consensus built on mutual respect, and for actions taken for the good of humanity.

The position paper points out that China perceives AI as an empowering technology that can push forward global sustainable development and enhance the common well-being of all humankind. In this view, AI, like electricity, is a technology that empowers everyone and contributes to a positive and sustainable future for humankind, ecology and the global good.

A responsible AI approach is the fundamental framework of China's proposal, which calls for shared responsibilities and multi-stakeholder co-governance. This approach focuses not only on the complementary responsibilities of different stakeholders (governments, academia, industry, the general public, etc), but also on the whole life cycle of AI systems and services, from design and research and development to deployment, utilization and management, with the necessary monitoring and evaluation from the perspective of ethical governance. Many countries agree on adopting a responsible AI approach, as it is seen as one of the best ways to push forward an international AI governance framework.

The position paper highlights the concept of ethical governance of AI; its vision is to give priority to ethics and make it the fundamental basis for AI governance throughout the whole life cycle.

The position paper recommends that the AI governance framework make use of different governance methods and tools, such as ethical principles, norms and specifications, standards and laws, and place them under an agile governance methodology so that they complement one another and their respective effectiveness is maximized. Since countries differ in history, culture, political systems and stages of AI development, practices for maximizing the positive impact of these tools and minimizing uncertainties must be adapted to each country's own circumstances. Meanwhile, at the global level, we should recognize, understand and respect these differences, and learn from one another in a complementary way.

The values and principles in the position paper follow international consensus, including but not limited to human rights and fundamental freedoms, human dignity, equality, fairness and justice, privacy, transparency, explainability and reliability, safety and security, sustainability, avoiding misuse and abuse, and making AI verifiable, regulatable, traceable, predictable and trustworthy. We should always bear in mind, and act on in practice, the principle that AI should remain under meaningful human control. These guiding values and principles, which are consistent with UNESCO's Recommendation on the Ethics of Artificial Intelligence, serve as a common consensus for AI governance at the global level.

Besides perceiving and regulating AI as a whole, the position paper also briefly highlights the different dimensions that contribute to the overall picture, including data, algorithms and applications of AI. This is closely related to, and draws on, China's own experience.

In terms of high-level design for regulating AI, the Governance Principles for the New Generation Artificial Intelligence: Developing Responsible Artificial Intelligence published in 2019 and the Ethical Norms for the New Generation Artificial Intelligence published in 2021, both by the National Governance Committee of New Generation Artificial Intelligence, together with the Opinion on Strengthening the Ethical Governance of Science and Technology issued by the general offices of the Communist Party of China Central Committee and the State Council, serve as national guidelines on ethical AI governance in China.

From the data perspective, the Personal Information Protection Law of the People's Republic of China enacted in 2021, the Civil Code of China and the Data Security Law enacted in the same year serve as the national basis for data governance in China, while the Global Data Security Initiative proposed by China in 2020 expresses China's vision and outreach for the global data ecosystem.

From the algorithms perspective, the Cyberspace Administration of China's Guiding Opinions on Strengthening Overall Governance of Internet Information Service Algorithms published in 2021 and the Internet Information Service Algorithmic Recommendation Management Provisions enacted in 2022 represent China's major efforts.

From the applications perspective, the Provisions of the Supreme People's Court on Several Issues Concerning the Application of Law in the Trial of Civil Cases Involving the Processing of Personal Information Using Facial Recognition Technology, and the Notice on Entry and Road Tests of Mid- to High-level Autonomous Driving Intelligent Connected Vehicles (open for public comment) jointly announced by the Ministry of Industry and Information Technology and the Ministry of Public Security, are valuable practices that China can share with the world.

Calling for international cooperation on the ethical governance of AI is one of the main goals of the position paper, which especially encourages transnational and cross-cultural exchanges and cooperation. To implement this vision, the ethical governance requirements for AI in the countries where cooperating parties are located should be respected. AI risks arise every day in different parts of the world, and many of them are similar in nature. As a human community, we need to collaborate to minimize the risks and maximize the benefits.

AI should not be monopolized by any wealthy or ideological group or club. With the "leave no one behind" pledge of the United Nations Sustainable Development Goals and the Global Development Initiative in mind, the position paper opposes the building of exclusive groups. The world needs an inclusive network to coordinate the ethical governance of AI at the global level, one that shares development practices and experiences and jointly averts safety and security challenges and risks. Only in this way can we build, at an early date, a community with a shared future for mankind empowered by AI.

The author is a professor at the Institute of Automation, Chinese Academy of Sciences, and a member of the National Governance Committee of New Generation Artificial Intelligence.

The views don't necessarily represent those of China Daily.