Analysis of the current situation and countermeasures of ethical issues in artificial intelligence (abstract)

Zhang Zhaoxiang and Tan Tieniu | Innovation Research (WeChat) | Published in Beijing at 14:37 on March 30, 2022



1. Current ethical issues in artificial intelligence

Ethics comprises the principles and norms of order that govern relationships between individuals, and between individuals and society.

After human society entered the information age in the mid-to-late 20th century, the ethics of information technology gradually attracted widespread attention and research, covering issues such as personal information leakage, the information divide, information cocoons, and the inadequate regulation of new power structures.

Moor, the founder of computer ethics, divided ethical agents into four categories:

1. Ethical-impact agents (agents whose operation has ethical impacts on society and the environment);

2. Implicit ethical agents (agents whose ethics are built in implicitly through specific software and hardware design, such as built-in safety);

3. Explicit ethical agents (agents able to take reasonable actions based on changing circumstances and their understanding of ethical norms);

4. Full ethical agents (agents that, like humans, possess free will and can make ethical decisions in a variety of situations).

Artificial intelligence is still at the weak-AI stage, yet it has already had ethical impacts on society and the environment. Researchers are exploring how to build ethical rules into AI and how, through ethical reasoning, to make the implementation of AI technology incorporate an understanding of those rules.

At the current stage, the analysis of AI ethical issues and the construction of solutions should revolve mainly around the first three types of ethical agents; that is, AI should be characterized as a tool rather than a subject.

For example:

1. Defects and value settings in AI systems may threaten citizens' rights to life and health. The fatal 2018 accident involving an Uber autonomous vehicle in Arizona was caused not by sensor failure but by Uber's design decision, made for the sake of passenger comfort, to ignore obstacles that its algorithms classified as leaves, plastic bags, and the like.

2. Bias in an AI algorithm's objective setting, algorithmic discrimination, and flaws in training data may create or amplify discrimination in society, infringing citizens' right to equality (a minimal fairness-metric sketch follows this list).

3. The abuse of artificial intelligence may threaten citizens' privacy and personal information rights.

4. Complex AI algorithms such as deep learning can create the algorithmic "black box" problem, making decisions opaque or hard to explain and thereby undermining citizens' right to know, procedural legitimacy, and citizen oversight.

5. The abuse and misuse of AI technologies such as precise information push, automated fake-news generation, intelligent targeted dissemination, and deepfakes may lead to information cocoons and the proliferation of false information, and may affect people's access to important news and their democratic participation in public issues. The precise targeting of fake news can further distort people's understanding of facts and their opinions, and may be used to incite public opinion, manipulate commercial markets, and influence politics and national policy. A typical example is Cambridge Analytica's use of Facebook data to analyze users' political preferences and then push targeted information to influence the US election.

6. AI algorithms may, in circumstances where collusion is hard to detect and prove, exploit algorithmic discrimination or algorithmic collusion to form horizontal monopoly agreements or hub-and-spoke agreements, damaging the competitive market environment.

7. The spread of algorithmic decision-making across society may change power structures. Thanks to their technical advantage in processing massive data and their embedding in ubiquitous information systems, algorithms significantly affect people's rights and freedoms. Prominent examples: algorithmic credit scoring in bank lending affects whether a citizen can obtain a loan, and algorithmic assessment of social dangerousness in criminal justice affects whether pre-trial detention is imposed.

8. The abuse of AI in the workplace may harm workers' rights and interests, and AI's substitution for workers may trigger a crisis of large-scale structural unemployment, posing risks to labor rights and employment opportunities.

9. As AI is applied ever more widely across social production and daily life, security risks such as vulnerabilities and design defects in AI systems may cause leaks of personal information and other data, shutdowns of industrial production lines, traffic paralysis, and other social problems, threatening financial, social, and national security.

10. The abuse of AI weapons may exacerbate inequality worldwide and threaten human life and world peace...
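One way to make the discrimination risk described in item 2 concrete is a quantitative fairness check. The sketch below, which is illustrative and not part of the original article, computes the demographic-parity gap: the difference in positive-decision rates between two groups, as might be audited in the credit-scoring scenario of item 7. All data and names are hypothetical.

import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Hypothetical loan decisions (1 = approved) and a binary group attribute.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Group 0 approval rate is 0.60, group 1 is 0.40, so the gap is 0.20.
print(f"demographic parity gap: {demographic_parity_gap(decisions, group):.2f}")

A regulator or internal ethics committee could track such a metric over time; a large gap does not prove unlawful discrimination, but it flags decisions that warrant closer review.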

Governing the ethical risks of AI is complex, and a complete theoretical framework and governance system have yet to take shape.

2. Ethical guidelines, governance principles, and governance approaches for artificial intelligence

Global AI governance is still at an early, exploratory stage: starting from a basic consensus on AI ethical guidelines, it is gradually deepening into practical implementation such as trustworthiness evaluation, operational guides, industry standards, and policies and regulations, while the construction of an international AI governance framework accelerates.

Ethical guidelines

In recent years, relevant institutions and industry organizations in China have also been active participants. In January 2018, the China Electronics Standardization Institute released the "Artificial Intelligence Standardization White Paper (2018 edition)", proposing the principle of human interests and the principle of responsibility as the two basic principles of AI ethics. In May 2019, the "Beijing AI Principles" were released, proposing 15 principles, conducive to building a community with a shared future for mankind and to social development, that all parties involved in AI research and development, use, and governance should follow. In June 2019, the National New Generation Artificial Intelligence Governance Professional Committee released the "New Generation Artificial Intelligence Governance Principles: Developing Responsible Artificial Intelligence", proposing eight principles for AI development and outlining a framework and action guide for AI governance. In July 2019, the Shanghai Artificial Intelligence Industry Security Expert Advisory Committee released the "Shanghai Initiative on the Safe Development of Artificial Intelligence". In September 2021, the "New Generation Artificial Intelligence Ethics Norms" and other guidelines formulated by the National New Generation Artificial Intelligence Governance Professional Committee were released at the Zhongguancun Forum. Judging from their content, all of these guidelines show a high degree of consensus on values such as putting people first, promoting innovation, ensuring security, protecting privacy, and clarifying responsibility; however, further theoretical research and argumentation are still needed to consolidate that consensus.



Governance principles

Article 55 of the Regulations on Optimizing the Business Environment, which came into effect on January 1, 2020, specifically stipulates the regulatory principle of "inclusiveness and prudence": the government and its relevant departments shall, following the principle of encouraging innovation, exercise inclusive and prudent regulation over new technologies, industries, business formats, and models; formulate and implement corresponding regulatory rules and standards according to their nature and characteristics; leave sufficient room for development while ensuring quality and safety; and neither simply prohibit them nor leave them unregulated. This provides the basic principle and methodology for current AI ethical governance.

On the one hand, careful observation is needed. New technologies and new things often carry positive social significance and follow objective laws of development and improvement; they should be given a certain space in which to mature, with regulatory methods and measures formed where necessary along the way.

On the other hand, bottom lines must be held, including the protection of citizens' rights and the assurance of safety. Important rights and values on which society has reached a high degree of consensus and which are embedded in law must be protected in accordance with the law in enforcement and adjudication. This is not only the law's clear requirement of technology developers and users, but also its solemn commitment to protecting citizens' rights and steering technology toward good in the intelligent era.

Governance approach

In choosing an overall path for AI governance, there are two main theories: the "opposition theory" and the "systems theory".

The "opposition theory" focuses on the conflict between AI technology and human rights and well-being, and accordingly establishes review and regulatory systems. The Rome Call for AI Ethics (2020) proposed seven main principles: transparency, inclusion, responsibility, impartiality, reliability, security, and privacy; the Ethics Guidelines for Trustworthy AI released under the European Commission in 2019 proposed that AI systems should be lawful, ethical, and robust throughout their entire life cycle. Both reflect this approach.

The "systems theory" emphasizes coordination and interaction between AI technology and humans, other artificial agents, laws, non-intelligent infrastructure, and social norms. One of the eight principles in Ethically Aligned Design, issued by the IEEE (Institute of Electrical and Electronics Engineers), is "competence": system creators should specify the requirements for operators, and operators should abide by the knowledge and skills required for safe and effective operation. This reflects the systems-theory idea of compensating for AI's shortcomings through requirements on users, and it places new demands on education and training in the intelligent era.



The "New Generation Artificial Intelligence Governance Principles: Developing Responsible Artificial Intelligence", released by the National New Generation Artificial Intelligence Governance Professional Committee in 2019, proposes "governance principles" from a more systemic perspective: eight principles that all parties involved in AI development should follow. Beyond principles focused on the opening-up and application of AI, such as harmony and friendliness, respect for privacy, and security and controllability, it specifically emphasizes "improving management methods"; "strengthening AI education and science popularization, enhancing the adaptability of vulnerable groups, and striving to bridge the digital divide"; and "promoting coordination and interaction among international organizations, government departments, research institutions, educational institutions, enterprises, social organizations, and the public in AI development and governance". These principles embody "systems theory" thinking and the idea of plural co-governance, spanning educational reform, ethical norms, technical support, legal regulation, international cooperation, and other dimensions.



Education reform

To better support AI development and governance, improvements should be made in four areas:

1. Popularize knowledge of cutting-edge technologies such as AI and raise public awareness, so that the public can view AI rationally;

2. Strengthen artificial intelligence ethics education and professional ethics training among scientific and technological workers;

3. Provide a continuous lifelong education system for workers to address the potential unemployment issues caused by artificial intelligence;

4. Research the transformation of youth education, break the limitations of knowledge-based education inherited from the industrial era, and respond to the demand for talent in the era of artificial intelligence.

Ethical norms

China's New Generation Artificial Intelligence Development Plan calls for "conducting research on AI behavioral science and ethics, and establishing a multi-level ethical judgment structure and an ethical framework for human-machine collaboration". At the same time, ethical norms and codes of conduct should be established for those who develop and design AI products as well as for their future users, constraining and guiding them from the source to the downstream.

Five key tasks can currently be carried out:

1. Research refined ethical guidelines for key areas of artificial intelligence, and develop actionable norms and recommendations.

2. Provide appropriate guidance in publicity and education to further promote the formation of ethical consensus on artificial intelligence.

3. Promote awareness of AI ethical risks, and practices for managing them, among research institutions and enterprises.

4. Give full play to the role of the national-level ethics committee: spread advanced experience in ethical risk assessment and control by formulating national AI ethical guidelines and implementation plans, regularly evaluating the ethical risks of new business formats and applications, and regularly selecting best practices in the AI industry.

5. Promote the establishment of ethics committees at AI research institutes and enterprises to lead the assessment, monitoring, and real-time handling of AI ethical risks, so that ethical considerations run through the entire process of AI design, development, and application.



Technical support



Reducing ethical risks through better technology is an important dimension of AI ethical governance. Driven by research, the market, and law, many research institutions and enterprises have taken up technologies such as federated learning and privacy-preserving computation to better protect personal privacy (a minimal sketch of federated averaging follows). At the same time, research on AI algorithms with stronger security, interpretability, and fairness, and on techniques such as dataset anomaly detection and training-sample evaluation, has produced many models of ethical agents across different fields. The patent system should also be improved: clarifying the patentability of algorithm-related inventions would further stimulate the technological innovation needed to design AI systems that meet ethical requirements.
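As a concrete illustration of the federated learning mentioned above, here is a minimal sketch of federated averaging (FedAvg) for a linear-regression model on synthetic data. It is a toy illustration under assumed settings (client count, learning rate, rounds), not a production design; real deployments add client sampling, secure aggregation, and often differential privacy.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each client holds its own local data; raw data never leaves the client.
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

w = np.zeros(2)                          # global model weights
for _round in range(20):                 # communication rounds
    local_ws = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):               # local full-batch gradient steps
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_ws.append(w_local)
    w = np.mean(local_ws, axis=0)        # server averages weights, not data

print("estimated weights:", w.round(3))  # converges toward [2, -1]

The privacy gain is structural: only model weights travel to the server, so personal records stay on the client device, which is why the technique features in privacy-protection research.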



In addition, the development of recommended standards in key areas must not be overlooked. AI standardization should strengthen the implementation of, and support for, AI ethical guidelines, focusing on standards for privacy protection, security, usability, interpretability, traceability, accountability, evaluation, and regulation-supporting technologies. Enterprises should be encouraged to propose and publish their own standards, to participate actively in establishing relevant international standards, and to promote the inclusion of China's relevant patented technologies in international standards, thereby strengthening China's voice in formulating international AI ethical guidelines and related standards and giving Chinese enterprises a better position in international competition.



Legal regulation



At the level of legal regulation, it is necessary to gradually develop digital human rights, clarify the allocation of responsibility, establish a regulatory system, and organically combine the rule of law with technological governance. At the current stage, China should actively promote the effective implementation of the Personal Information Protection Law and the Data Security Law, and carry out legislative work on autonomous driving. Research on algorithm-regulation systems in key areas should also be strengthened, distinguishing different scenarios and exploring the necessity of, and preconditions for, measures such as AI ethical risk assessment, algorithm audits, dataset defect detection, and algorithm certification (a minimal dataset-check sketch follows), so as to prepare theoretical and institutional recommendations for the next step in legislation.
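As a small illustration of what the dataset defect detection mentioned above might involve in practice, the sketch below runs three automated pre-training checks. The specific checks and the 10% imbalance threshold are assumptions for illustration, not measures prescribed by the authors or any regulator.

import numpy as np

def audit_dataset(X: np.ndarray, y: np.ndarray) -> list[str]:
    """Return a list of human-readable defect findings for a training set."""
    findings = []
    if np.isnan(X).any():
        findings.append(f"rows with missing values: {int(np.isnan(X).any(axis=1).sum())}")
    if len(np.unique(X, axis=0)) < len(X):
        findings.append("duplicate feature rows present")
    _, counts = np.unique(y, return_counts=True)
    if counts.min() / counts.max() < 0.1:    # assumed imbalance threshold
        findings.append("severe class imbalance (minority < 10% of majority)")
    return findings

# Tiny hypothetical training set with one missing value and one duplicate row.
X = np.array([[1.0, 2.0], [1.0, 2.0], [np.nan, 0.5], [3.0, 4.0]])
y = np.array([0, 0, 0, 1])
print(audit_dataset(X, y))  # flags the missing value and the duplicate rows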



International cooperation



Human society is entering the intelligent era, and the rules and order governing AI worldwide are still taking shape. The European Union has conducted extensive research centered on AI values, hoping through legislation and other means to turn Europe's human rights tradition into a new advantage in AI development. The United States also attaches great importance to AI standards: in February 2019, President Trump signed the executive order launching the American AI Initiative, requiring government agencies such as the White House Office of Science and Technology Policy (OSTP) and the National Institute of Standards and Technology (NIST) to develop standards guiding the development of reliable, robust, trustworthy, secure, concise, and collaborative AI systems, and calling for US leadership in setting international AI standards.



China is at the world's forefront of AI technology. It needs to respond more proactively to the challenges posed by AI ethical issues and assume the corresponding ethical responsibilities in AI development; to engage actively in international exchange and participate in formulating relevant international management policies and standards, securing a voice in technological development; and to occupy the commanding heights among the most representative and breakthrough technological forces, contributing positively to the global governance of AI.



Authors: Zhang Zhaoxiang (1), Zhang Jiyu (2), Tan Tieniu (1, 3)*

1 Institute of Automation, Chinese Academy of Sciences

2 Law School, Renmin University of China

3 Academician of the Chinese Academy of Sciences

Funding: Science and Technology Ethics Research Project of the Academic Divisions of the Chinese Academy of Sciences (XBKJLL2018001)

This article is reproduced from the WeChat official account of the Bulletin of the Chinese Academy of Sciences; it was originally published in the Bulletin of the Chinese Academy of Sciences, 2021, Issue 11.

