Analysis of the current situation and countermeasures of ethical issues in artificial intelligence (abstract)

Posted: 2023-05-29 · Views: 1963

By Zhang Zhaoxiang and Tan Tieniu, Innovation Research; published in Beijing at 14:37 on March 30, 2022



1. Current ethical issues in artificial intelligence

Ethics comprises the principles and norms of order that govern relationships among people and between individuals and society.

After human society entered the information age in the mid-to-late 20th century, the ethics of information technology gradually attracted widespread attention and research, covering issues such as personal information leakage, the information divide, information cocoons, and inadequate regulation of new power structures.

James Moor, a founder of computer ethics, divided ethical agents into four categories:

1. Ethical-impact agents (agents whose operation has an ethical impact on society and the environment);

2. Implicit ethical agents (agents whose ethical behavior, such as safety, is built in implicitly through specific software and hardware design);

3. Explicit ethical agents (agents able to take reasonable actions based on changing circumstances and their understanding of ethical norms);

4. Full ethical agents (agents that, like humans, possess free will and can make ethical decisions in various situations).

Artificial intelligence is still at the weak (narrow) stage of development, but it has already had a certain ethical impact on society and the environment. Researchers are exploring how to integrate ethical rules into artificial intelligence and how to implement them in AI systems through ethical reasoning, which in turn requires machine understanding of ethical rules.

At present, the analysis of ethical issues in artificial intelligence and the construction of solutions should mainly revolve around the first three types of ethical agents; that is, artificial intelligence should be characterized as a tool rather than a subject.

For example:

1. Defects and value settings in artificial intelligence systems may threaten citizens' rights to life and health. The fatal 2018 accident involving an Uber autonomous vehicle in Arizona was caused not by sensor failure, but by Uber's design decision, made for the sake of passenger comfort, to have the system ignore obstacles such as leaves and plastic bags identified by its algorithms.

2. Bias in an algorithm's objective specification, algorithmic discrimination, and biased training data may create or amplify discrimination in society, infringing on citizens' right to equality.

3. The abuse of artificial intelligence may threaten citizens' privacy and personal information rights.

4. Complex artificial intelligence algorithms such as deep learning can give rise to the algorithmic black-box problem, making decisions opaque or difficult to explain and thereby undermining citizens' right to know, procedural legitimacy, and public supervision.

5. The abuse and misuse of artificial intelligence technologies such as precise information push, automated fake-news generation, intelligent targeted dissemination, and deepfakes may lead to information cocoons and the proliferation of false information, and may affect people's access to important news and their participation in public affairs. The precise dissemination of false news can also distort people's understanding of and views on the facts, incite public opinion, manipulate commercial markets, and influence politics and national policy. A typical example is Cambridge Analytica, which used Facebook data to analyze users' political preferences and, on that basis, delivered targeted information to influence the US election.

6. Artificial intelligence algorithms may be used for algorithmic discrimination or collusion to form horizontal monopoly agreements or hub-and-spoke agreements in circumstances where such conduct is difficult to detect and prove, thereby disrupting market competition.

7. The application of algorithmic decision-making across society may change the power structure. With their technological advantage in processing massive data and their embedding in ubiquitous information systems, algorithms significantly affect people's rights and freedoms. For example, algorithmic credit evaluation in bank lending affects whether citizens can obtain loans, and algorithmic risk assessment in criminal justice affects whether pre-trial detention is imposed; both are prominent manifestations.

8. The abuse of artificial intelligence in the workplace may harm workers' rights and interests, and the replacement of workers by AI may trigger large-scale structural unemployment, posing risks to labor rights and employment opportunities.

9. As artificial intelligence is applied ever more widely across social production and daily life, security risks such as vulnerabilities and design defects in AI systems may lead to leaks of personal information and other data, shutdowns of industrial production lines, traffic paralysis, and other social problems, threatening financial, social, and national security.

10. The misuse of artificial intelligence weapons may exacerbate inequality worldwide and threaten human life and world peace.

The governance of ethical risks in artificial intelligence is complex, and no comprehensive theoretical framework or governance system has yet taken shape.

2. Ethical guidelines, governance principles, and governance approaches for artificial intelligence

At present, global artificial intelligence governance is still in the early exploration stage, starting from the formation of a basic consensus on ethical standards for artificial intelligence, gradually deepening into practical implementation such as credible evaluation, operational guidelines, industry standards, policies and regulations, and accelerating the construction of an international governance framework system for artificial intelligence.

Ethical guidelines

In recent years, relevant institutions and industry organizations in China have also been active. In January 2018, the China Electronics Standardization Institute released the White Paper on AI Standardization (2018), which proposed the principle of human interests and the principle of responsibility as the two basic principles of AI ethics. In May 2019, the Beijing AI Principles were released, proposing 15 principles, beneficial to building a community with a shared future for mankind and to social development, to be followed by all parties involved in the research and development, use, and governance of artificial intelligence. In June 2019, the National New Generation Artificial Intelligence Governance Professional Committee released the "New Generation Artificial Intelligence Governance Principles - Developing Responsible Artificial Intelligence", proposing eight principles for AI development and outlining a framework and action guidelines for AI governance. In July 2019, the Shanghai Artificial Intelligence Industry Security Expert Advisory Committee released the "Shanghai Initiative for the Safe Development of Artificial Intelligence". In September 2021, the "Ethical Norms for New Generation Artificial Intelligence", formulated by the National New Generation Artificial Intelligence Governance Professional Committee, was released at the Zhongguancun Forum. Judging from their published content, these guidelines have reached a high degree of consensus on values such as putting people first, promoting innovation, ensuring security, protecting privacy, and clarifying responsibility, although further theoretical research and argumentation are still needed to consolidate that consensus.



Governance principles

Article 55 of the Regulations on Optimizing the Business Environment, which came into effect on January 1, 2020, specifically stipulates the regulatory principle of "inclusiveness and prudence": the government and its relevant departments shall, in line with the principle of encouraging innovation, exercise inclusive and prudent supervision over new technologies, new industries, new business formats, and new models; formulate and implement corresponding regulatory rules and standards based on their nature and characteristics; leave sufficient room for development while ensuring quality and safety; and neither simply ban them nor leave them unregulated. This provides the basic principle and methodology for the current ethical governance of artificial intelligence.

On the one hand, careful observation is needed: new technologies and new things often have positive social significance and follow objective laws of development and improvement. They should be given a certain amount of space in which to develop and mature, with regulatory methods and measures introduced where necessary as they evolve.

On the other hand, we must hold the bottom line, including the protection of citizens' rights and of safety. Important rights and values that already enjoy a high degree of social consensus and are embedded in law must be protected in accordance with the law in enforcement and adjudication. This is not only a clear legal requirement for technology developers and users, but also the law's solemn commitment, in the intelligent era, to protecting citizens' rights and steering technology toward the good.

Governance approach

In the overall choice of path for AI governance, there are two main theories: the "opposition theory" and the "systems theory".

The "opposition theory" focuses mainly on conflicts between artificial intelligence technology and human rights and well-being, and on establishing corresponding review and regulatory systems. The Rome Call for AI Ethics in 2020 proposed six main principles: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy. The European Commission's 2019 Ethics Guidelines for Trustworthy AI proposed that AI systems should, throughout their entire life cycle, be lawful, ethical, and robust. Both reflect this approach.

The "systems theory" emphasizes coordination and interaction among AI technology, humans, other artificial agents, laws, non-intelligent infrastructure, and social norms. One of the eight general principles proposed in Ethically Aligned Design, issued by the IEEE (Institute of Electrical and Electronics Engineers), is "competence": system creators should specify the requirements placed on operators, and operators should possess the knowledge and skills required for safe and effective operation. This reflects the systems-theory perspective of compensating for the limitations of artificial intelligence through requirements on users, and it places new demands on education and training in the intelligent era.



The "New Generation Artificial Intelligence Governance Principles - Developing Responsible Artificial Intelligence", released by the National New Generation Artificial Intelligence Governance Professional Committee in 2019, takes a more systematic perspective, setting out eight principles to be followed by all parties involved in AI development. Beyond the principles of harmony and friendliness, respect for privacy, and security and controllability, which concern the development and application of AI itself, it also emphasizes "improving management methods"; "strengthening AI education and science popularization, enhancing the adaptability of vulnerable groups, and striving to bridge the digital divide"; and "promoting coordination and interaction among international organizations, government departments, research institutions, educational institutions, enterprises, social organizations, and the public in AI development and governance". These principles reflect "systems theory" thinking and the idea of multi-party governance spanning educational reform, ethical norms, technical support, legal regulation, international cooperation, and other dimensions.



Education reform

To better support the development and governance of artificial intelligence, improvements should be made from four aspects:

1. Popularize cutting-edge technological knowledge such as artificial intelligence and raise public awareness, so that the public views artificial intelligence rationally;

2. Strengthen artificial intelligence ethics education and professional ethics training among scientific and technological workers;

3. Provide a continuous lifelong education system for workers to address the potential unemployment issues caused by artificial intelligence;

4. Research the transformation of youth education, break the limitations of knowledge-based education inherited from the industrial era, and respond to the demand for talent in the era of artificial intelligence.

Ethical norms

China's New Generation Artificial Intelligence Development Plan calls for "conducting research on the behavioral science and ethics of artificial intelligence, and establishing a multi-level ethical and moral judgment structure and an ethical framework for human-machine collaboration". At the same time, ethical norms and codes of conduct should be established for the developers and designers of AI products as well as for their future users, constraining and guiding them from the source through to downstream use.

There are currently 5 key tasks that can be carried out:

1. Research refined ethical guidelines for key areas of artificial intelligence, and develop actionable norms and recommendations.

2. Provide appropriate guidance in publicity and education to further promote the formation of ethical consensus on artificial intelligence.

3. Promote the awareness and practice of ethical risks in artificial intelligence among research institutions and enterprises.

4. Fully leverage the role of the national-level ethics committee: formulate national-level ethical guidelines and implementation plans for artificial intelligence, regularly assess the ethical risks of new business formats and applications, regularly select best practices in the AI industry, and disseminate advanced experience in ethical risk assessment and control.

5. Promote the establishment of an ethics committee for artificial intelligence research institutes and enterprises, leading the assessment, monitoring, and real-time response of ethical risks in artificial intelligence, so that ethical considerations in artificial intelligence are integrated into the entire process of artificial intelligence design, development, and application.



Technical support



Reducing ethical risks through better technology is an important dimension of ethical governance in artificial intelligence. Driven by research, the market, and the law, many research institutions and enterprises have pursued technologies such as federated learning and privacy-preserving computation to better protect personal privacy. At the same time, research on AI algorithms with improved security, interpretability, and fairness, as well as on techniques such as dataset anomaly detection and training-sample evaluation, has produced many models of ethical agents in different fields. It is also necessary to improve the patent system and clarify the patentability of algorithm-related inventions, so as to further stimulate the technological innovation needed to design AI systems that meet ethical requirements.
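The fairness research mentioned above can be made concrete with a minimal audit sketch. Everything in it is illustrative: the function name, the two hypothetical applicant groups, and the choice of demographic parity as the fairness metric are assumptions for exposition, not details from the original article.

```python
# Minimal sketch of a fairness audit of the kind the fairness research
# above studies. All names and data here are hypothetical illustrations.

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns the largest difference in approval rate between any two groups."""
    totals = {}    # group -> number of decisions seen
    approved = {}  # group -> number of approvals
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical loan-approval log: (applicant group, approved?)
audit = [("A", True), ("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False), ("B", False)]

gap = demographic_parity_gap(audit)
print(f"approval-rate gap between groups: {gap:.2f}")  # group A: 0.75, group B: 0.25
```

An internal ethics committee or regulator could flag a model whose gap exceeds an agreed threshold; real audits would of course use richer metrics (equalized odds, calibration) and statistical significance tests rather than a single raw difference.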



In addition, the development of recommended standards in key areas must not be overlooked. AI standardization should strengthen the implementation and support of AI ethical norms, focusing on standards for privacy protection, security, availability, interpretability, traceability, accountability, evaluation, and regulatory support technologies. Enterprises should be encouraged to propose and publish their own enterprise standards, actively participate in setting relevant international standards, and promote the inclusion of relevant Chinese patented technologies in international standards, helping China strengthen its voice in the formulation of international AI ethical and related standards and laying a better competitive foundation for Chinese enterprises in international competition.



Legal regulation



At the level of legal regulation, it is necessary to gradually develop digital human rights, clarify the allocation of responsibility, establish a regulatory system, and organically combine the rule of law with technological governance. At the current stage, we should actively promote effective implementation of the Personal Information Protection Law and the Data Security Law, advance legislation in the field of autonomous driving, and strengthen research on algorithm-regulation systems in key areas: distinguishing different scenarios, exploring the necessity and preconditions for applying measures such as AI ethical risk assessment, algorithm audits, dataset defect detection, and algorithm certification, and preparing theoretical and institutional recommendations for the next step of legislation.



International cooperation



Currently, human society is entering the intelligent era, and the worldwide order of rules in the field of artificial intelligence is still taking shape. The European Union has conducted many studies focusing on the values of artificial intelligence, hoping to convert Europe's human rights traditions into new advantages in AI development through legislation and other means. The United States also attaches great importance to AI standards: in February 2019, President Trump signed the executive order establishing the "American AI Initiative", requiring government agencies such as the White House Office of Science and Technology Policy (OSTP) and the National Institute of Standards and Technology (NIST) to develop standards guiding the development of reliable, robust, trustworthy, and secure AI systems, and calling for US leadership in international AI standardization.



China is at the global forefront of artificial intelligence technology. It should be more proactive in meeting the challenges posed by AI ethics and assume the corresponding ethical responsibilities in AI development; actively pursue international exchanges and participate in formulating relevant international management policies and standards, securing a voice in scientific and technological development; and occupy the commanding heights among the most representative and breakthrough technological forces, making positive contributions to the global governance of artificial intelligence.



Authors: Zhang Zhaoxiang 1, Zhang Jiyu 2, Tan Tieniu 1,3*

1 Institute of Automation, Chinese Academy of Sciences

2 Law School, Renmin University of China

3 Academician of the Chinese Academy of Sciences

Funding: Science and Technology Ethics Research Project of the Academic Divisions of the Chinese Academy of Sciences (XBKJLL2018001)

This article is reproduced from the WeChat official account of the Bulletin of the Chinese Academy of Sciences; it was originally published in the Bulletin of the Chinese Academy of Sciences, Issue 11, 2021.