Situation, Challenges and Prospects of Generative Artificial Intelligence Governance

    [Abstract] As a key technology leading a new round of scientific and technological revolution and industrial transformation, generative artificial intelligence, represented by ChatGPT, has continually given rise to new scenarios, new formats, new models and new markets, changed how information and knowledge are produced, and reshaped the way human beings interact with technology, with far-reaching effects on education, finance, media and gaming. Against this backdrop, countries around the world have introduced artificial intelligence development strategies and specific policies in a bid to seize the strategic high ground. At the same time, the security risks exposed by generative artificial intelligence, such as data leakage, generation of false content and improper use, have drawn wide attention in all countries. The development, application and governance of generative artificial intelligence is thus a challenge faced not by any single country but by the whole international community. To respond effectively to the new challenges that generative artificial intelligence poses to information content governance, we should balance the relationship between security and development, between technological innovation and technological governance, and between corporate compliance obligations and what enterprises can reasonably bear.

    Keywords: generative artificial intelligence; governance model; legislation; internal and external challenges; multidimensional balance [Chinese Library Classification number] D92 [Document identification code] A

    Generative artificial intelligence (generative AI) is a new stage in the development of artificial intelligence: using statistical methods, it generates new content, such as video, audio, text and even software code, according to probability. Built on the Transformer, a neural network architecture based on the self-attention mechanism, generative AI analyzes existing data sets in depth, identifies the positional relationships and correlations within them, and is then continuously optimized through reinforcement learning from human feedback to form a large language model that ultimately makes decisions or predictions when generating new content. ① Beyond text generation and content creation, generative AI has a wide range of application scenarios, such as customer service, investment management, artistic creation, academic research, code programming and virtual assistance. In short, self-generation, self-learning and rapid iteration are the basic characteristics that distinguish generative artificial intelligence from traditional artificial intelligence.
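
    To make the two mechanisms named above concrete, the sketch below shows, in illustrative Python with toy dimensions (no real model or library API is assumed), scaled dot-product self-attention, through which a Transformer weighs the relationships among positions in a sequence, and temperature-controlled sampling, which is what generating new content "according to probability" amounts to in practice.

```python
# A minimal sketch, assuming toy sizes and random weights; all names are illustrative.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # pairwise relevance between positions
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)           # softmax: attention weights per position
    return w @ V                                 # each output mixes information from all positions

def sample_next_token(logits, temperature=1.0):
    """Draw the next token according to the probability the model assigns it."""
    p = np.exp(logits / temperature)
    p /= p.sum()
    return np.random.choice(len(logits), p=p)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                     # 5 tokens, 16-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
context = self_attention(X, Wq, Wk, Wv)          # contextualized token representations
next_id = sample_next_token(rng.normal(size=100))  # choose 1 of 100 vocabulary items
```

    In a deployed system these weights come from pretraining on massive corpora and are then refined with human feedback; lowering the temperature sharpens the distribution and makes outputs more predictable, which bears directly on the controllability issues discussed later.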

    The Interim Measures for the Management of Generative Artificial Intelligence Services (hereinafter the Interim Measures), jointly issued by seven departments including the Cyberspace Administration of China, came into effect on August 15, 2023, with the aim of promoting the healthy development and standardized application of generative artificial intelligence and safeguarding national security and the public interest. As a key technology leading a new round of scientific and technological revolution and industrial transformation, generative artificial intelligence represented by ChatGPT has continually given rise to new scenarios, formats, models and markets, changed how information and knowledge are produced, and reshaped the way human beings interact with technology, with far-reaching effects on education, finance, media and gaming. Against this backdrop, countries around the world have introduced artificial intelligence development strategies and specific policies to seize the strategic high ground. At the same time, the security risks exposed by generative artificial intelligence, such as data leakage, generation of false content and improper use, have drawn wide attention. It can be said with certainty that the development, application and governance of generative artificial intelligence is no longer a challenge for any single country but one faced by the whole international community.

    A Review of the Main Foreign Regulatory Models for Artificial Intelligence

    Artificial intelligence is a double-edged sword: it promotes social progress but also brings risks and challenges. The international community remains committed to promoting its sound development. For example, UNESCO adopted the first global instrument on artificial intelligence ethics, the Recommendation on the Ethics of Artificial Intelligence, which sets out ten principles including proportionality and do no harm, safety and security, and fairness and non-discrimination. Since 2018, the EU has continued to promote the design, development and deployment of artificial intelligence while working to standardize the use and management of artificial intelligence and robots. The European Union's Artificial Intelligence Act, adopted in 2024, brought this work to a climax and stands as a milestone in the history of artificial intelligence governance. The United States pays more attention to development, regulating artificial intelligence chiefly through the Blueprint for an AI Bill of Rights (hereinafter the Blueprint). Because the EU and US governance measures are relatively mature and representative, the advantages and disadvantages of their different regulatory models are discussed below, in the hope of providing a reference for the healthy development and effective governance of artificial intelligence in China.

    EU artificial intelligence legislation: safety first, with fairness taken into account. As for legislative history, in April 2021 the European Commission issued the legislative proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (hereinafter the Artificial Intelligence Act), opening the "hard law" path to artificial intelligence governance. In December 2022, the Council reached a common position on a compromise text of the Artificial Intelligence Act. In June 2023, the European Parliament adopted its negotiating mandate on the Act and amended the original proposal. On December 8, 2023, the European Parliament, the Council and the European Commission reached political agreement on the Artificial Intelligence Act, providing for comprehensive supervision of the field. Overall, the Artificial Intelligence Act establishes an ethical and legal framework for the development and use of artificial intelligence in the European Union, supplemented by the proposed AI Liability Directive to ensure its implementation. The successive rounds of negotiation on the Artificial Intelligence Act focused mainly on the following issues.

    The first is the definition of artificial intelligence and the scope of application of the Act. Article 3 of the original proposal defined artificial intelligence as software developed with one or more specified techniques and approaches that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with. This definition was so broad that it could cover a great deal of software not traditionally regarded as artificial intelligence, to the detriment of both the development and the governance of artificial intelligence. The later version therefore narrows the definition to a system based on machine learning or on logic- and knowledge-based approaches, designed to operate at varying levels of autonomy and able, for explicit or implicit objectives, to influence physical or virtual environments through outputs such as predictions, recommendations or decisions; at the same time, Annex I and the European Commission's mandate to amend the definition were deleted. Although the original proposal did not address generative artificial intelligence, the emergence of ChatGPT led legislators to add definitions of general-purpose artificial intelligence and foundation models in the amendments and to subject generative artificial intelligence to additional transparency requirements, such as disclosing the provenance of content and designing models so as to prevent illegal generation. The Artificial Intelligence Act has extraterritorial effect: it applies to all providers and deployers of artificial intelligence systems (whether established in the EU or in a third country), to distributors and importers, to authorized representatives of providers, to manufacturers of certain products established or located in the EU, and to EU data subjects whose health, safety or fundamental rights are materially affected by the use of artificial intelligence systems.

    The second is the mode of supervision. The Artificial Intelligence Act adopts a risk-based approach, classifying systems and assigning obligations according to their potential risks to health, safety and the fundamental rights of natural persons. First, systems posing unacceptable risk are prohibited from deployment by any enterprise or individual. Second, high-risk systems may be placed on the market or used only after the relevant actors fulfil obligations such as prior assessment, with continuous monitoring during and after deployment. Third, limited-risk systems need no special licence or certification and carry no reporting or record-keeping obligations, but must observe the principle of transparency and allow appropriate traceability and explainability. Fourth, low- or minimal-risk systems may be deployed and used freely. Generative artificial intelligence, having no single specific purpose and being applicable to many different scenarios, cannot be classified by its general mode of operation; classification should instead rest on the intended purpose and the specific field in which the system is developed or used. ②
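
    As a reading aid only, the following sketch encodes the four tiers and the obligations just described as a Python data structure; the tier assignments and example domains are assumptions for illustration, not the Act's legal tests.

```python
# Illustrative, non-official encoding of the Act's risk tiers as described above.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited from deployment by any enterprise or individual"
    HIGH = "market entry only after prior assessment; continuous monitoring afterwards"
    LIMITED = "no special licence, but transparency, traceability and explainability"
    MINIMAL = "free deployment and use"

def tier_for(intended_purpose: str) -> RiskTier:
    """Toy classifier: a real assignment follows the Act's annexes, not keywords."""
    high_risk_domains = {"employment", "credit scoring", "law enforcement"}  # assumed examples
    if intended_purpose in high_risk_domains:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(tier_for("employment").value)   # a generative model screening CVs would sit high
print(tier_for("poetry").value)       # the same base model used for verse would not
```

    The sketch makes the text's point in miniature: for a general-purpose generative system, the tier is a property of the deployment, not of the model itself.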

    The third is the set of general principles for artificial intelligence. Specifically: human agency and oversight, meaning that artificial intelligence systems must be developed and used as tools that serve people, respect human dignity and personal autonomy, and remain subject to appropriate human control and supervision; technical robustness and safety, meaning that development and deployment should minimize accidents and unintended harm, ensure robustness in the face of unexpected problems, and be resilient when malicious third parties attempt to alter the system's performance or use it unlawfully; privacy and data protection, meaning that systems must be developed and used in accordance with existing privacy and data protection rules and must process data that meets high standards of quality and integrity; transparency, meaning that development and use must allow appropriate traceability and explainability, make people aware that they are communicating or interacting with an artificial intelligence system, and duly inform users of the system's capabilities and limitations and affected persons of their rights; non-discrimination and fairness, meaning that development and use must include diverse participants and promote equal access, gender equality and cultural diversity while avoiding discriminatory effects and unfair biases prohibited by EU or national law; and social and environmental well-being, meaning that systems should be developed and used in a sustainable and environmentally friendly manner for the benefit of all, with their long-term effects on individuals, society and democracy monitored and assessed.

    Through the Artificial Intelligence Act, the EU intends to set a global standard for artificial intelligence regulation and thereby secure a leading position in international competition over the technology. The Act lays down comparatively reasonable rules for handling artificial intelligence systems, which can to some extent prevent discrimination, surveillance and other potential harms, especially in areas touching on fundamental rights. For example, the Act lists prohibited uses of artificial intelligence, facial recognition in public places among them. It also embeds risk-mitigation controls in the business units where risks may arise, which can help organizations understand the cost-effectiveness of artificial intelligence systems, conduct compliance self-assessments that clarify responsibilities and obligations, and ultimately adopt artificial intelligence with confidence. At the same time, however, the Act has shortcomings in risk classification, supervision, protection of rights and liability mechanisms. For instance, it takes a horizontal legislative approach, attempting to bring all artificial intelligence within its regulatory scope without considering in depth the differing characteristics of particular systems, which may leave the corresponding risk-prevention measures impossible to implement. ③

    American artificial intelligence legislation: emphasizing self-regulation and supporting technological innovation. Against the global backdrop of artificial intelligence law- and policy-making, the United States has gradually formed a regulatory framework grounded in the principle of voluntariness. Its comprehensive regulatory measure is the Blueprint for an AI Bill of Rights issued by the White House Office of Science and Technology Policy (OSTP) in October 2022, which aims to protect citizens' rights during the design, deployment and governance of automated systems. Under the Blueprint's guidance, federal departments began to act within their respective remits and to formulate specific policies; for example, the US Department of Labor produced guidance on artificial intelligence fairness intended to prevent artificial intelligence from disadvantaging job seekers and employees on the basis of race, age, gender and other characteristics. The core of the Blueprint is five principles. Safe and effective systems: the public should be protected from unsafe or ineffective systems. Algorithmic discrimination protections: the public should not face discrimination by algorithms, and automated systems should be designed and used equitably. Data privacy: automated systems should have built-in safeguards ensuring that the public's data is not abused and that the public retains agency over how it is used. Notice and explanation: the public has the right to know when an automated system is being used and to understand what it is and how it produces outcomes that affect them. Human alternatives, consideration and fallback: where appropriate, the public should be able to opt out of an automated system in favour of a human or other alternative. ④ Because these principles have not been codified, they are not binding; the Blueprint is therefore not an enforceable bill of rights with legislative backing but a forward-looking governance blueprint built on assumptions about the future.

    For now, the US Congress has taken a relatively hands-off approach to artificial intelligence regulation, although the Democratic leadership has signalled its intention to introduce a federal law. Chuck Schumer, the Senate majority leader, has proposed a new framework to guide future legislation and regulation, organized around "who", "where", "how" and "protection", and requiring technology companies to review and test artificial intelligence systems before release and to make the results available to users. Given the checks and balances between the two parties, however, the likelihood of such a law passing Congress is low, and even if it had a chance of passing it would need several rounds of revision. By contrast, facing the strategic competitive pressure of the EU's Artificial Intelligence Act and the multi-domain security risks of generative artificial intelligence typified by ChatGPT, US federal agencies have intervened actively within their own jurisdictions. The Federal Trade Commission (FTC), for example, polices deceptive and unfair practices involving artificial intelligence by enforcing the Fair Credit Reporting Act and the Federal Trade Commission Act; the first investigation OpenAI faced was opened by the FTC. In January 2023, the National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework, which classifies the risks of artificial intelligence in detail. In April 2023, the US Department of Commerce publicly solicited comments on accountability measures for artificial intelligence, including whether models should undergo certification before release.

    At the state and local level, California passed the California Consumer Privacy Act of 2018 (CCPA) as a direct response to the EU General Data Protection Regulation. In 2023, at the initiative of Assemblymember Rebecca Bauer-Kahan, California introduced AB 331 (Automated Decision Tools), which would require deployers and developers of automated decision tools to perform an impact assessment of any such tool they use on or before January 1, 2025, and annually thereafter, covering the tool's purpose and a statement of its expected benefits, uses and deployment context. New York City has enacted its Automated Employment Decision Tools law (AEDT Law), which requires employers and employment platforms that use automated employment decision tools in hiring and promotion decisions to subject the artificial intelligence system to a fresh, thorough review and more comprehensive risk testing to assess whether it falls within the law's scope. Legislatures in Texas, Vermont and Washington State have likewise introduced bills requiring state agencies to review the artificial intelligence systems they develop and use and to disclose such use effectively. Local governments in the United States thus remain positively disposed toward artificial intelligence governance; their main challenge is how to regulate Silicon Valley's artificial intelligence innovation with minimal friction. California's proposed bill, for example, addresses this challenge with two provisions. First, it centres governance on citizens' rights, opportunities and risks in access to key services rather than on the details of particular technologies, leaving room for innovative development. Second, it establishes transparency requirements: developers and users must submit an annual impact assessment to the California Civil Rights Department detailing, for public access, the types of automated tools involved, and developers must also establish a governance framework describing how the technology is used and what effects it may have. In addition, the California bill provides for a private right of action, allowing individuals to sue when their rights are violated. ⑤

    At present the focus of US artificial intelligence oversight remains the application of existing law rather than the enactment of dedicated artificial intelligence statutes. The FTC, for example, has repeatedly stated that Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive acts or practices, applies fully to artificial intelligence and machine learning systems. How to reconcile existing law with artificial intelligence is indeed an urgent task not only for any one country but for the international community as a whole. Congress has yet to reach consensus on federal legislation for artificial intelligence regulation, including its regulatory framework, risk classification and other specifics. Senate majority leader Chuck Schumer advocates comprehensive regulation of artificial intelligence along EU lines and seeks to accelerate the congressional process through frameworks and dedicated forums, which sits uneasily with the American principle of voluntariness. Federal legislation on artificial intelligence regulation is therefore likely to be a long time coming. ⑥

    New Challenges Posed by Generative Artificial Intelligence to Information Content Governance

    Generally speaking, the risks surrounding artificial intelligence arise from three sources: technology, the market and norms. ⑦ Generative artificial intelligence represented by ChatGPT comes close to the goal of general artificial intelligence, but it also poses new challenges to information content governance that urgently require forward-looking, targeted theoretical research. Academic analyses of its risks take three main perspectives. The first starts from the general risks of artificial intelligence and, in light of the distinctive features of generative systems, examines risks in artificial intelligence ethics, intellectual property protection, and privacy and personal data protection. ⑧ The second studies risk and governance in specific fields, such as the risk of sentencing deviation when generative artificial intelligence is used in judicial decision-making. The third follows the operational structure of generative artificial intelligence, tracing risks from preparation through operation to generation and matching each with countermeasures, for example using technical and managerial means to correct algorithmic discrimination at the operation stage. ⑨ This paper holds that the risks can also be analysed from internal and external perspectives: internal challenges reside in the model itself, namely input quality problems, processing problems and output quality problems, while external challenges come from outside the model, namely the risk of improper use and the risk to legal supervision.

    Input quality problems: Artificial intelligence is built from algorithms, computing power and data, and data, as its foundation, determines to a considerable extent the accuracy and reliability of its output. Generative artificial intelligence is no exception: what it generates reflects the quantity and quality of the data it was trained on. It must therefore be trained on high-quality data; once a data set is polluted or tampered with, the resulting model may infringe users' basic rights, intellectual property, personal privacy and information and data rights, and may even reproduce social prejudice.
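
    A deliberately tiny sketch (assumed data, no real system) makes the point: the same training recipe applied to clean and to tampered data yields different models, so input quality bounds output quality.

```python
# Illustrative only: a trivially simple "model" that predicts the most common
# label seen in training, showing how polluted data changes what is learned.
from collections import Counter

def train_majority_label(examples):
    """Return the label that appears most often in the training examples."""
    return Counter(label for _, label in examples).most_common(1)[0][0]

clean = [("applicant A", "approve"), ("applicant B", "approve"),
         ("applicant C", "deny")]
poisoned = clean + [("injected record", "deny")] * 5   # tampering skews the data

print(train_majority_label(clean))     # -> "approve"
print(train_majority_label(poisoned))  # -> "deny": the pollution, not reality, decides
```

    Real training pipelines are vastly more complex, but the dependency is the same, which is why the training data requirements of the Interim Measures discussed below target precisely this stage.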

    Processing problems: Besides data, the algorithmic model used in training also shapes the output of artificial intelligence. If training an artificial intelligence is likened to cooking, the training data determines the quality of the final dish as the ingredients, while the algorithmic model plays the part of the recipe; neither is dispensable. If the chosen model is flawed or mismatched to the intended purpose, no quantity of better data will yield a well-behaved system. Discrimination and prejudice arising from machine learning algorithms and training data are collectively called pre-existing algorithmic bias, in contrast to emergent algorithmic bias triggered by new knowledge, new formats and new scenarios. Technological change has not eliminated the problem of false generation; it has merely repackaged and concealed it. Generative artificial intelligence is therefore prone to emergent algorithmic bias, which raises both the risk of use and the difficulty of governance and calls for more precisely targeted safeguards.

    Output quality problems: Risk, in essence, stems from people's limited capacity to recognize and control things, so that problems cannot be resolved before they germinate or erupt. Seen this way, a technology's controllability is inversely related to its risk: the harder it is to control, the higher the risk. Large language models and chain-of-thought techniques endow generative artificial intelligence with a capacity for logical deduction, but they also make its output ever harder to predict. In other words, generative artificial intelligence is low in controllability and high in potential risk. For example, owing to social and cultural differences, output that is appropriate in one cultural context may be offensive in another. Humans can distinguish such differences, but a generative system lacking cultural pre-design may fail to pick up subtle cultural distinctions and inadvertently produce inappropriate content.

    Risk of improper use: Generative artificial intelligence is highly capable while remaining cheap and easy to use, which leaves room for some to harness its power for illegal activities; hence the risk of improper use. The draft Academic Degrees Law of the People's Republic of China prepared by China's Ministry of Education expressly addresses the use of artificial intelligence to ghostwrite dissertations and how such conduct is to be handled. OpenAI has said that it trains ChatGPT on sensitive terms so that it refuses to answer questions that plainly violate its embedded ethics and the law. Even so, some users can still bypass ChatGPT's pre-set "firewall" and instruct it to generate illegal content or perform illegal operations, so the risk of improper use has not been effectively curbed. In the long run, generative artificial intelligence may precipitate a crisis of social trust, leaving people unable to tell true from false, ultimately bringing about the "end of truth" and ushering human society into a "post-truth era". ⑩

    Risk to legal supervision: The internal problems of input quality, processing and output quality, together with the external risk of improper use, jointly push the difficulty of supervising generative artificial intelligence to a new peak and create the legal risk of regulatory failure. This legal risk is not confined to a single field; it cuts across many and requires coordinated governance by multiple departments. The terms of use of generative artificial intelligence often lack adequate authorization for handling user interaction data, which can raise problems of personal privacy and even national security: some data held by large enterprises already has a public character, and its leakage would harm not only the enterprises' interests but national security as well. Moreover, the opacity of generative systems' decision-making also hampers legal supervision, and may even become the decisive ground on which regulators prohibit deployment in particular fields.

    The Solution: Seeking a Multidimensional Balance

    Faced with the great application value of generative artificial intelligence and its internal and external risk challenges, we need to make reasonable choices. Although countries around the world harbour doubts about the safety of generative artificial intelligence, they uniformly recognize its potential in international competition, economic development and digital government, and through policies and regulations they try to manage the relationship between safety and development and between technological innovation and the public interest, laying the groundwork for its use. China is no exception. Article 1 of the Interim Measures states their purpose: "to promote the healthy development and standardized application of generative artificial intelligence, safeguard national security and social public interests, and protect the legitimate rights and interests of citizens, legal persons and other organizations". Article 3 makes clear the legislators' general attitude toward governance: "adhere to the principle of attaching equal importance to development and security, combining the promotion of innovation with governance according to law". Following this principle, a solution suited to China's national conditions can be found: to seek a multidimensional balance in the governance of generative artificial intelligence.

    First, the relationship between security and development should be balanced. Under the holistic approach to national security, security is the precondition of development, development is the guarantee of security, and failure to develop is the greatest insecurity. China must actively develop modern technologies such as generative artificial intelligence to drive economic and social progress and to keep strengthening its international competitiveness. Article 3 of the Interim Measures provides that generative artificial intelligence services shall be subject to "inclusive and prudent regulation and tiered, category-based supervision". As for concrete measures, the Interim Measures likewise offer a plan. Articles 5 and 6 specify the directions and content of encouraged development, such as supporting industry organizations, enterprises, educational and scientific research institutions, public cultural bodies and relevant professional institutions in collaborating on technological innovation, data resource construction, application and transformation, and risk prevention. Articles 7 and 8 impose safety requirements on training data processing, including pre-training and optimization training, and on data labelling; for example, providers of generative artificial intelligence services must use data and foundation models from lawful sources and must not infringe intellectual property rights enjoyed by others in accordance with law.

    Second, the relationship between technological innovation and technological governance should be balanced. While taking measures to accelerate the innovative development of generative artificial intelligence, we must also face the internal and external risk challenges it brings and respond accordingly. In short, we should govern amid innovation and innovate amid governance, ensuring that technological innovation runs on the track of the rule of law. To this end, on the one hand, policy should lead economic and social development: enterprises should be actively encouraged to innovate through simplified administrative licensing and tax reductions, guided to use their own growth to advance the economy and society, and have their legitimate rights and interests confirmed by legislation. On the other hand, the bottom line of law-based governance must be held, with timely regulation and handling of possible violations by enterprises. Specifically, under Article 19 of the Interim Measures, the competent authorities shall supervise and inspect generative artificial intelligence services according to their duties, and providers shall cooperate in accordance with law. Providers who violate the Cybersecurity Law of the People's Republic of China, the Data Security Law of the People's Republic of China or the Personal Information Protection Law of the People's Republic of China shall be fined, ordered to suspend services, and investigated for criminal liability where applicable. Where existing laws and regulations cannot effectively regulate a new form of technological development, the competent department may first issue a warning or public criticism and order correction within a time limit, and then legislate in due course (weighing the dialectical relationships between special and general legislation, and between national and local legislation) to ensure that there are laws to follow and administration proceeds according to law.

    Finally, the relationship between corporate compliance obligations and what enterprises can bear should be balanced. Effective governance of generative artificial intelligence requires the enterprises that train models and provide generative services to assume corresponding compliance obligations, but those obligations must be proportionate and must not exceed what enterprises can bear. Normative support can be found in the Interim Measures. First, Article 3 calls for tiered, category-based supervision of generative artificial intelligence services, implying that governance should follow specific risks and that different compliance obligations should be imposed on enterprises providing different generative artificial intelligence services in different fields, an approach quite similar to the risk-based tiered supervision of the Artificial Intelligence Act. Second, compared with the earlier draft for comment, Articles 7 and 8 markedly lighten the compliance requirements for developing and training generative artificial intelligence; for example, Item (4) of Article 7 now reads "take effective measures to improve the quality of training data and enhance the authenticity, accuracy, objectivity and diversity of training data", and Article 8 now requires sampling and verification of labelled content. Third, Articles 9 and 14 redistribute the rights and obligations of service providers and users, giving providers more flexible compliance space and strengthening their willingness to comply voluntarily; for example, "content producer responsibility" was changed to "network information content producer responsibility", so that enterprises providing services need not bear responsibility for illegal content that users maliciously induce generative artificial intelligence to produce.

    Generative artificial intelligence technology has sent great waves through many industries, from commerce and trade to news and communication. How to regulate it effectively and make it serve human society is a practical question that every country will have to consider and answer for a long time to come. The regulatory model a country chooses reflects particular social values and national priorities, and these differing demands may conflict, for example between protecting user privacy and promoting technological innovation, producing an ever more complicated regulatory environment. Although the EU's "safety first, with fairness taken into account" and the United States' "self-regulation and support for innovation" differ in many respects, they also share common ground that can, to a greater or lesser degree, advance the technological innovation, safe use and lawful governance of generative artificial intelligence. Meanwhile, transparency and explainability will be key both to complying with emerging regulation and to cultivating trust in generative artificial intelligence technology. The legislative trends in Europe and America also remind China to pursue national-level artificial intelligence legislation as soon as possible, settling the basic principles of governance along with the risk management system, the allocation of primary responsibilities and legal liability; and to coordinate the national governance layout while giving full play to local initiative, allowing local legislation to pilot first so as to avoid a legal vacuum and to accumulate experience for national artificial intelligence legislation.

    (The author is a professor and doctoral supervisor at Guanghua Law School of Zhejiang University)

    [Note: This article is a phased result of the National Social Science Fund major project "Research on Establishing and Perfecting China's Comprehensive Network Governance System" (Project No. 20ZDA062)]

    [Notes]

    ① Ouyang L., Wu J., Jiang X., et al., "Training language models to follow instructions with human feedback", Advances in Neural Information Processing Systems, 2022(35), pp. 27730-27744.

    ② Natali Helberger, Nicholas Diakopoulos, "ChatGPT and the AI Act", Internet Policy Review, 2023, 12(1).

    ③ Zeng Xiong, Liang Zheng, Zhang Hui, "The EU's Regulatory Path for Artificial Intelligence and Its Implications for China: Taking the Artificial Intelligence Act as the Object of Analysis", E-Government, No. 9, 2022.

    ④ The White House, "Blueprint for an AI Bill of Rights", October 2022.

    ⑤ Sorelle Friedler, Suresh Venkatasubramanian, Alex Engler, "How California and other states are tackling AI legislation", Brookings, March 2023.

    ⑥ Müge Fazlioglu, "US federal AI governance: Laws, policies and strategies", International Association of Privacy Professionals, June 2023.

    ⑦ Cheng Le, "Research on the Development Trend of Artificial Intelligence and Guiding Ideas for Standardization", National Governance, No. 6, 2023.

    ⑧ Cheng Le, "The Legal Regulation of Generative Artificial Intelligence: From the Perspective of ChatGPT", Politics and Law, No. 4, 2023.

    ⑨ Liu Yanhong, "Three Security Risks of Generative Artificial Intelligence and Their Legal Regulation: Taking ChatGPT as an Example", Oriental Law, No. 4, 2023.

    ⑩ Zhang Guangsheng, "National Security Risks of Generative Artificial Intelligence and Countermeasures", People's Forum · Academic Frontier, No. 7, 2023.

