HOW CAN WE IMAGINE A REGULATORY SCHEME FOR GENERATIVE AI? – PERSPECTIVES FROM ASIA

East Asian governments have started to embrace the potential of generative AI, albeit with some caution. Their ambitions can be seen in their plans for regulation (or, in some cases, for no regulation at all). While appropriate regulation can ideally guide development, promote applications, and protect human rights, generative artificial intelligence (GAI) poses significant new challenges for regulators.

This blog post offers a concise summary of how the governments of China, Japan, South Korea, and Taiwan currently plan to regulate the development of GAI and, drawing on this ongoing East Asian experience, proposes a 3W1H framework for thinking through the regulatory challenges. In this era of accelerated global adoption of GAI, more dialogue between regions on regulation is both envisioned and needed.

East Asian governments in action and the need for transnational dialogue

In East Asia, governments and GAI are interacting very actively. People are discussing the Chinese government’s ‘Chinese’ approach to GAI regulation; the CEO of OpenAI has visited the Prime Minister of Japan; the South Korean government is developing a Korean version of ChatGPT and is looking to GAI to create a paperless digital platform government; and the Taiwanese government is starting to develop a Taiwanese version of ChatGPT to counter biased discourse from China.[1] These interactions inevitably raise the question of how to regulate GAI. As GAI makes great strides into people’s lives and can be expected to be relied upon ever more heavily, the global race to regulate AI, including GAI, is underway.

In general, these governments appear to have embraced the existence of GAI and have sought to respond to it in terms of development, application and regulation. Yet even as the technology blossoms, the positive and negative aspects of their different regulatory schemes are already apparent. This post draws on the experience of East Asia, a region that makes extensive use of technology and where people’s privacy concerns differ from those in Europe and the US. Moreover, because so many people there use the technology, it is applied across a wide range of social scenarios. Discussed together, these differences in experience and perception can help develop a more inclusive framework for policy responses and make it easier to envision and address a more diverse set of GAI issues.

My position, therefore, is that dialogue between regions is necessary, notably about regulation, as the impact of digital technology is often cross-border, and the regulatory ‘temperature differential’ between jurisdictions, as seen at the G7 digital ministers’ meeting, can affect the direction of technological development. I would not argue that the temperature should be the same everywhere, but it is important to be aware of the possible effects of these differences. For example, in a globalised world, large-scale application of GAI in one region’s industry affects not only the local workforce but also the international industrial chain. Beyond the possible development of a common international ethical code for GAI, a synchronised mechanism for dialogue between countries and regions is thus necessary, as GAI policy needs to be sensitive to industry and civil society across cultures and borders.

Of course, the first step is to understand each other’s situation, so I will briefly describe the East Asian governments’ responses to GAI.

Mapping the ongoing regulatory approaches in East Asia

China: I want it in my shape

ChatGPT is not available in China, but there are many alternative local products, and China’s rapid introduction of a GAI regulatory scheme has attracted global attention. On 11 April, Chinese regulators presented the draft Measures for the Management of Generative Artificial Intelligence Services. On 14 July, the draft became an interim regulation scheduled to take effect on 15 August. Aside from attempting to protect personal privacy and intellectual property rights, the following points with “Chinese features” are worth noting. First, value constraints: generated content must be in line with the core values of socialism, must not undermine national unity, and must not promote terrorism, extremism, ethnic hatred, violence, pornography or false information, or otherwise disrupt the economic and social order. Second, prior licensing: a licence must be obtained before providing GAI services, and any subsequent breach of the law must be reported to the government. Third, real-name users: the regulation emphasises that the use of GAI must align with the established cybersecurity law, which means users must also provide real-name information when using any GAI service. In short, the regulation shows that the government’s goal is clear: China wants to stay ahead of the curve technologically while keeping control of data.

Korea: a favorable regulatory environment for development

Korea has achieved a notably fast legislative response in recent years. In 2020, the Korean government proposed a roadmap for revamping laws, systems and regulations and has accordingly amended key privacy-related laws to facilitate data use. For GAI, the government plans to amend the Personal Data Protection Act to support the safe use of data for the hyper-scale AI industry while reducing privacy violations, and it has noted the need for regulation and social impact assessment to ensure the technology is ethical and reliable. The government also plans to amend copyright law to facilitate the development of GAI products in Korea. It is fair to say that the Korean government has been very active in promoting AI/GAI development through regulation, though the state of rights protection needs continued observation. In response to the security threats posed by GAI, the Ministry of Science and ICT held a forum in June and said it would study appropriate regulatory measures to build a safe and trustworthy Internet environment.

Japan and Taiwan: Welcome the technology and study the regulations, but when?

In Japan and Taiwan, on the other hand, the governments have been more cautious in their regulatory actions. It is worth noting that they seem to have a “let the bullets fly a little longer” attitude towards regulation. While both governments have pointed out the need for risk management, their regulatory plans are still at the consideration stage as they pursue proactive technological development.

Japan is considered to lag behind other countries in digitization and artificial intelligence technology. In this context, the LDP, the ruling party, wants to make thorough use of GAI while conducting a human rights and security risk review in parallel, though the content of that review has yet to be clarified. The head of the LDP’s digital strategy pointed out that although the risk of leaving GAI unregulated would be high, the development of technology applications should continue regardless. The government wants to keep regulation to a minimum in order to catch up with the technological development of more advanced countries. In June, the Personal Information Protection Commission issued a reminder on GAI, pointing out that operators and administrative bodies should keep data collection and use within the scope necessary to fulfil their stated purposes and within the requirements of the Personal Data Protection Act, while general users should pay attention to the accuracy of output content and the terms of use set by operators. Using reminders rather than more compulsory means to respond to new technologies is common practice for the Japanese government; whether mandatory regulation should be adopted to cope with the risk is still under study.

The Taiwanese government aims to propose a draft basic law on artificial intelligence in September. The minister of the Executive Yuan, who said he had used ChatGPT to generate one of his speeches, expressed that the government is wary of the freedom of expression and intellectual property issues raised by GAI and mentioned that 2023 would be the first year of AI legislation in Taiwan. This also means that there is currently a vacuum in the regulation of AI and GAI. In the meantime, however, the government has also invested heavily in the development of GAI.

3W1H for dealing with GAI regulatory challenges

Although the governments in all four examples mention the risks associated with GAI, it is doubtful whether their responses can adequately address those risks. In what follows, I propose a 3W1H framework (why, what, when and how) for examining these advances and challenges in GAI regulation and use it to review the examples above. I also believe a common analytical framework will make international dialogue easier.

A prerequisite question is whether regulation is necessary at all. In the experience of East Asian countries, even Japan, which argues that regulation should be minimal, still recognises the need for some regulation. This post takes the view that, from the standpoint of safety and rights protection, an appropriate degree of risk control through regulation is necessary.

a) Why?

This is the fundamental question: what is the purpose of regulation? Ideally, the goal should be to ensure that GAI technology is responsible and ethical and provides value to society, with safety and rights protection as important elements of those goals. In practice, however, China and Korea present regulatory schemes that pull in quite different directions: ensuring that the state controls the development of the GAI industry, or modifying rights protection rules to facilitate GAI development. Despite these questionable means, protecting rights remains a common language. Japan wants to achieve a minimum level of rights protection. Taiwan, on the other hand, sounds very interested in protecting rights, but we have yet to see a specific bill.

b) What?

But what exactly should be regulated? Technology’s continual progress often defies precise definition, and people still expect GAI’s potential to go further. From the perspective of rights protection, however, the rights likely to be affected should at least be preliminarily identifiable. One can imagine protecting rights by building a protective shell against infringements by new and innovative technological means. Yet the governments have not yet publicly articulated a detailed understanding of the issues of freedom of expression, intellectual property and privacy. I believe East Asian authorities urgently need to explore in-depth impact assessments of these rights.

c) When?

All four governments want to make technological progress, but the timing of regulation differs, which illustrates how the design of regulations can shape their primary function. Speedy introduction of bills matters when the aim is state control or the removal of existing regulatory barriers; when it comes to creating new safety and rights protections from scratch, governments are far less eager to move quickly. However, as long as there is a regulatory vacuum, infringements of rights may continue to occur without redress. Risk management should be considered in tandem with technological advances.

d) How?

By this I mean how governments arrive at these regulations. Given the broad impact of GAI technology on society, the rule-making process should be participatory and inclusive. The Chinese draft, for example, was first published as a ‘call for comments’ version. The format itself is commendable, but its value depends on how much weight the comments carry in practice. The Korean government also organised a multi-stakeholder forum in June. The multi-stakeholder model may lengthen the legislative process, but the openness and transparency of the process can bring the community together to consider possible risks and responses. This dynamic approach to normative design can enhance the ultimate social acceptance of the regulations and keep them from deviating from professional, industrial or social needs.

Considering the East Asian experience through this 3W1H framework, I suggest that regulators attempt to strike a balance between multiple objectives, conduct an in-depth analysis of the rights that may be affected, and begin a dynamic process of gathering opinions from industry and the public as soon as possible in order to achieve better GAI regulation.

Conclusion: Facing the regulatory challenges of GAI together

This post has offered insights from the region and an analytical framework for forming a regulatory scheme for GAI. It suggests that promptly formulating inclusive principles for GAI and establishing a dynamic mechanism for cooperation and dialogue are necessary to protect human rights and promote safe industrial development.

Looking at East Asia through the 3W1H framework, I would conclude as follows. Regulation can serve many functions: protecting rights, promoting industrial development, maintaining national dominance, and even controlling society. This highlights the challenge democracies face in reconciling technological advances with human rights while authoritarian governments develop their own regulatory frameworks. It poses an additional challenge because the impact of the technology is likely to be transnational, making collaboration and awareness of regional differences essential. Korea has chosen to remove barriers to development through law, but it should also continue to analyse the impact on rights as the technology evolves. And while a deeper understanding of the issues is important, in the case of Japan and Taiwan this cannot become an excuse for procrastination; it is important to set up a multi-stakeholder response system as soon as possible.

Every country wants to be technologically advanced, but rights protection in East Asia still needs further development. If the EU were to introduce stricter regulations for GAI, this would also affect the promotion of local GAI products in East Asia. As East Asian countries continue to push the boundaries of the technology, serious data violations without adequate rights protection could undermine trust in the technology globally. Consequently, technology and risk control must go hand in hand, and facing the regulatory challenges seriously can no longer be postponed.

So, hold on tight. The technological and regulatory surge of GAI will continue.
