Building a Judiciary Trusted by the People, from the Bottom Up

The Judicial Reform Foundation’s opinions on the National Science and Technology Council’s “Artificial Intelligence Basic Act”


The Judicial Reform Foundation (“JRF”) has long promoted democracy and the rule of law. In recent years, we have paid particular attention to shifts in power structures, such as the scope of people's rights and the boundaries of public power, amidst the rapid digitization of both the public and private sectors. As a civil society organization with legal expertise, the JRF is committed to promoting the legalization of digital rights. We believe that only when people's rights are recognized by law, under a system that is practically operable and accountable, can we comprehensively protect individual freedom and our democratic society in the digital era. In response to the draft "Artificial Intelligence Basic Act" (hereinafter the “AI Basic Act” or the “Draft”) published by the National Science and Technology Council on July 15, 2024, the JRF submits the following:

1. The construction of the digital rule of law should be expanded to the entire digital environment; people’s rights should be recognized; and a legally effective and operable accountability mechanism should be established.

(1) General support for adopting the Rule of Law approach

The cornerstone of a democratic society is the rule of law. In the face of major shifts of power structures as the society digitizes, instead of solely relying on patchwork administrative policies, articulating the relationship between rights and responsibilities through legislation that is based on legitimate public opinion is the correct direction forward. Operating on the core values of being "people-oriented" and "rights-based", we support the government's choice of a legislative response to regulating the digital sphere. We also hope that the government can cooperate with civil society in this legalization endeavor, and collectively work towards the achievement of the digital rule of law.

(2) The construction of the digital rule of law should be expanded to cover the entire digital environment, not just AI.

However, the development of technology, especially information technology, is rapid, while the attention of industry, government, and even the stock market shifts easily. Therefore, even when the government proposes relevant policies or reforms, they may lack a systematic, values-based approach to regulating the digitization of our entire society.

What we need is not only the AI Basic Act. Our government’s vision should not be limited to the currently popular AI; it should also expand its scope of attention to the entire digital environment. In designing a democratic and free digital society, the formation of social consensus and values should be informed by the understanding that our society as a whole is undergoing a process of digitization.

(3) Reject biased legislation that only encourages technological development but does not enact people’s rights; legal effects and schedules should be specific and timely.

Although the current draft of the AI Basic Act is styled a “Basic Act”, its contents are mostly policy declarations, declaring that the government should actively promote the development of artificial intelligence in both the public and private sectors.[1] However, it downplays, and fails to propose specific mechanisms for reducing, the negative impacts and risks caused by the large-scale application of artificial intelligence within a short period of time. There are also no clear stipulations on accountability or competent authorities.

This single-purpose Draft only encourages development and favors applications without formulating any legally binding control and accountability mechanism; it acknowledges neither individuals’ rights claims nor the government’s obligation to protect the people. Put simply, it is like a train that only accelerates, with no braking system in place. In our opinion, such legislation cannot achieve the purpose set out in Article 1 of the Draft to “protect the lives, bodies, health, safety and rights of citizens.”

Since this Draft is called a “Basic Act”, it should establish the “rights” of the people and the corresponding “obligations” of the state, as other basic laws of our country do. However, the current Draft does not mention the people’s rights at all; it unfortunately has only the shell of a basic law and is effectively a hollow policy statement.

If this draft of the AI Basic Act is passed, it will to a certain extent define the future development of the rule of law, as well as civil rights and government responsibilities, in a free digital society. While we should be cautious, we should also avoid the risks of “saving the discussion for later.”

2. The Draft seriously lacks the functions and elements of a basic law

If “basic law” is chosen as the legislative form of this legalization, then the legislation should naturally fulfill the functions that a basic law is meant to serve. The JRF identifies below the necessary elements that are seriously lacking in the current draft:

(1) The basic law should recognize the rights that people can assert legally.

First, one of the important functions of a basic law is to declare the list of rights that should be protected in its field. For example, the Educational Fundamental Act recognizes that “the people have the right to request academic evaluation” (Article 14), “the people have the freedom to pursue education for educational purposes” (Article 7), and “students’ rights to learn, receive education, and have physical autonomy and the right to personal development" (Article 8). The Indigenous Peoples Basic Law explicitly protects “Indigenous peoples’ rights to communication and media access” (Article 12) and “indigenous peoples’ rights to land and natural resources” (Article 20). These are all appropriate examples. The Educational Fundamental Act further expressly stipulates that when the relevant rights of teachers and students are improperly or illegally infringed upon, the government has the responsibility to provide effective and fair channels of relief (Article 15).

However, the current draft of the AI Basic Act contains no rights protection clauses at all. Even though Article 3 attempts to list some basic principles, it falls flat for lack of concrete, operational definitions. Additionally, because the Draft does not specify the legal effects of, or any adverse consequences for, violating these principles, the basic principles that would benefit the people are reduced to superficial embellishments with no genuine protective effect, sugarcoating a Draft that solely promotes technological development.

For emerging fields of regulatory oversight such as artificial intelligence, the rights protection provisions in its basic law are particularly important. For example, legislation should be adopted to recognize the people’s “right to privacy” (including the right to information self-determination, the right to confidentiality, the right to anonymity, etc.), the right to autonomy (including the right to withdraw, the right to request minimum data, and right to genuine alternatives, etc.), digital mental health rights and more.

(2) The basic law should provide a framework plan for competent authorities, future legislative methods and regulatory review deadlines.

Secondly, another function of the basic law is to determine the future systems and policy directions of various government agencies. To fulfill this function, the basic law should at least provide a framework plan for competent authorities, future legislative methods and regulatory review periods.

However, the current Draft fails to clearly specify even the most basic elements: the competent authorities and the regulatory review deadlines.[2] Readers can only piece together, from fragments of clues, a vague image of the regulatory governance of artificial intelligence, with no ability to foresee and prepare for possible future regulatory developments.

3. The content of the Draft raises many concerns, namely authorization concerns, lack of deadlines and definitions, and easy exemptions.

There are many uncertain and ambiguous provisions within the Draft. The JRF will therefore, in principle, only make suggestions on fundamental aspects such as the general direction and framework discussed above. Regarding specific draft provisions, we raise only a few significant points of contention as follows:

(1) On what basis does the basic law have the function of formulating regulatory orders?

Article 10[3] of the Draft authorizes competent authorities of each industry to formulate the AI risk classification standards related to their portfolio.[4] However, basic law is different from functional law and does not serve the function of authorizing the formulation of regulatory orders. The current design of the Draft is confusing in nature.

Additionally, even if this Draft adopts the normative model of risk classification, it does not pinpoint the corresponding effects of risk classification. It does not even mention the most general direction or scope and misses the framework guidance function that a basic law should have.

(2) Reject “saving the discussion for later” without a deadline.

Even if this Draft partially possesses the properties of functional law, procedural progress does not appear to be guaranteed by the Draft. For the current draft, which has no specific schedule, “saving the discussion for later” not only means “the contents will be revealed later,” but also that “how much later is later” is unknown, and that no authority is responsible for monitoring the progress.

This is as if the government decided to encourage all cars and motorcycles to take to the road on a large scale before establishing traffic rules (Articles 4 to 8), even using taxpayers’ money to subsidize, invest, and reward[5] or to provide tax, financial and other preferential measures[6] to encourage everyone to drive. What will the traffic rules be? Saved for later. What happens if rules are broken? Unknown. And when will the rules be announced? Still unknown.

Such a rough legislative style may not only fail to protect the rights and interests of the people, but is also contrary to the government’s intention of industrial promotion, as the private sector hesitates in the face of regulatory uncertainty. The current Draft therefore makes it difficult to achieve any of the goals listed in its Article 1.

(3) Prohibited uses or situations should be clearly stated.

As framework legislation, the AI Basic Act should clearly set out prohibitions on specific situations or uses, in addition to stipulating people’s rights and the corresponding state obligations. The discussion of “AI systems and models with unacceptable risks” during the legislative process of the EU Artificial Intelligence Act can serve as a reference, with appropriate adjustments made for our country’s local context.

Using the EU’s AI Act as an example, because the application of AI will pose a potential threat to fundamental rights and democracy, the following applications are completely prohibited: subliminal manipulation, social scoring, use of biometric categorization systems that use sensitive characteristics, individual predictive policing and facial recognition, just to name a few.

(4) The definition of important terms such as “artificial intelligence” should be improved.

“Artificial intelligence” is the only term defined in the entire Draft, and the definitions of all other terms remain ambiguous. This will greatly reduce the actual effectiveness of the legislation, and may even provide opportunities for wrongdoers to evade responsibility.

The technologies and related social issues involved in artificial intelligence are novel, and they should be defined in legislation before being implemented in the future. Take the EU AI Act as an example: its definitions article (Article 3) alone covers 68 terms, which is enough to highlight the serious shortcomings of the Draft’s lack of clear definitions. Since this Draft intends to adopt the EU’s risk classification model, we must point out that in the EU AI Act, even the word “risk” has a clear legal definition to ensure that subsequent relevant regulations rest on a solid basis.

Conversely, the only article in the Draft that attempts to define “artificial intelligence” is built out of other terms for which no legal definition is provided: “Artificial intelligence as defined in this act refers to a machine-based system that has autonomous operation capabilities and that, through input or sensing and by means of machine learning and algorithms, produces predictions, content, suggestions or decisions as outputs that affect physical or virtual environments, in pursuit of explicit or implicit goals.” What are the definitions of “autonomous operation capability”, “machine learning” and “algorithm”? Without clear definitions, these terms will inevitably become a source of legal disputes in the future.

(5) Exemption clauses should strengthen definitions, thresholds and related supporting mechanisms.

The language of Article 12 of this Draft reads: “Apart from complying with the basic principles of Article 3, artificial intelligence research and development activities are not bound by the regulations related to application responsibilities.” First, without clear definitions of “development,” “research” and “application,” it is conceivable that “development” and “research” will become a safe harbor for companies and the government to avoid responsibility. In other words, as long as a business claims that its AI is still in the testing, trial operation, calibration or training stage, that is, making its best effort to argue that its AI has not yet entered the scope of “application”, it may obtain complete immunity. Under such an exemption structure, it is therefore very important to clearly delineate the boundaries among development, research and application. The lack of legal definitions could become the biggest loophole in our country’s AI governance. Although a basic law is framework and guiding legislation, it should at least outline and define the life cycle of artificial intelligence, including but not limited to research, development, deployment and product launch, and the corresponding responsibility standards at each stage.

Furthermore, whether all responsibilities should be exempted outright even at the purely early stages of research and development should also be carefully considered. In fact, reconciling innovation and regulation is not a novel problem for artificial intelligence; the so-called regulatory sandbox is one feasible method. Once we further discuss the institutional design and responsibility mechanisms of such a sandbox, we can be more thorough in designing the conditions for liability reduction and the supporting mechanisms, including regulatory adjustments that avoid patchwork remediation after product launch, and in finding synergies in the early stages of research and development.

4. The AI Basic Act lacks foresight – the people need a “Digital Bill of Rights”

As a democratic and free country, Taiwan is riding the wave of rapid changes and widespread application of science and technology. Our society needs a clearly formulated list of rights and legal effects (people's rights, state obligations, and accountability mechanisms), as well as stipulations on procedural schedules. The JRF hopes that the Draft will fully gravitate towards the above-mentioned directions after public consultation.

We also hope that all sectors of society will not only see the current artificial intelligence issues, but also expand their attention to the overall digital environment and the far-reaching impact of digitalization on individual freedom and collective democracy. The JRF also invites legal and information technology experts to draft a “Digital Bill of Rights,” with the goal that it will serve as the core value of promoting “digital basic law” in the future. All are welcome to review and exchange information.


[1] Policy goals such as “actively promote the research and development, application and infrastructure of artificial intelligence” (Article 4), “establish or improve the innovative experimental environment for artificial intelligence research and development and application services” (Article 6), “promote the innovative application of artificial intelligence through public-private collaboration and international cooperation” (Article 7) and “improve the availability of artificial intelligence data” (Article 15) are highly encouraging, but there are no corresponding firewall designs to guard against AI’s potential for technical abuse and rights infringement.

[2] Article 17 of the Draft stipulates: “After the implementation of this act, the government shall review and adjust its responsibilities, businesses and regulations in accordance with its provisions to fulfill its purpose. Before the laws and regulations under the preceding paragraph are formulated or amended, matters not provided for in existing laws and regulations shall be interpreted and applied by the central industry competent authorities in conjunction with the central science and technology competent authority in accordance with the provisions of this act.” By contrast, the Fundamental Communications Act (Article 16, two years), the Indigenous Peoples Basic Law (Article 34, three years) and the Ocean Basic Act (Article 16, two years) all set clear deadlines for reviewing laws and regulations.

[3] Article 10 of the Draft stipulates “the Digital Development Department should refer to international standards or standard development of artificial intelligence information security protection, risk classification and management, and promote an artificial intelligence risk classification framework that interfaces with the international community. The central industry competent authorities may follow the previous section’s risk grading framework and set out the risk grading standards for the businesses it manages."

[4] Please note that the second paragraph of Article 10 of the Draft reads “...the competent authority ‘may’ follow ...” Since the language is “may” and not “shall,” the competent authority is not obligated to establish regulations, and even if it does establish regulations, it is not obligated to follow the digital sector’s risk classification framework. If so, what are the practical benefits of this rule?

[5] Article 4 of this Draft.

[6] Article 4 of this Draft.