The rapid development of artificial intelligence (AI) poses unprecedented challenges to legal systems worldwide. Copyright disputes over generative AI, liability attribution for autonomous vehicles, the fairness and transparency of algorithmic decision-making: these issues strike at law's most fundamental conceptual frameworks. What constitutes "creation"? What constitutes "negligence"? What constitutes "discrimination"? As a researcher with a doctoral background in law who has engaged deeply with applied AI in recent years, I have repeatedly confronted the tensions between law and technology, through my legal training at Nagoya University, my technology policy research at Cambridge University, and my current AI development practice leading Meta Intelligence. This article analyzes the legal challenges of the AI era across three core dimensions (copyright, liability, and regulatory frameworks) and offers forward-looking institutional recommendations grounded in a comparative analysis of the EU, the United States, and China.
I. The Legal Vacuum of AI: Why Current Legal Frameworks Fall Short
The foundational assumption of modern legal systems is that the subject of legal acts is a "person" (natural person or legal entity). All rights, obligations, and liabilities are attributed to "persons." But the advent of AI has shaken this assumption — when an algorithm autonomously generates a painting, makes a loan rejection decision, or causes a traffic accident, how should the law assign responsibility?
This question is far from purely theoretical. Over the course of my legal research career, I have witnessed three successive waves of AI's impact on the law.
The first wave (2015–2019) came from "narrow AI" — particularly the application of algorithmic decision-making in finance, insurance, criminal justice, and other fields. When banks use AI models to evaluate loan applications, does the rejected applicant have the right to know the reason for rejection? When courts use risk assessment algorithms (such as the COMPAS system in the United States) to assist sentencing decisions, does the defendant have the right to challenge the algorithm's biases? These questions go to the heart of procedural justice — in a society governed by the rule of law, every decision affecting individual rights should be explainable and challengeable. But the "black box" nature of AI models (especially deep learning models) makes "explanation" extraordinarily difficult.[1]
The second wave (2020–2022) came from autonomous driving. When an autonomous vehicle causes an accident, who should bear responsibility — the vehicle owner? The manufacturer? The software developer? Or the AI system itself? Traditional tort law is built upon the concept of "negligence" — a party who fails to exercise a duty of care and thereby causes harm should bear liability for compensation. But in autonomous driving, the "actor" is an algorithm that has no legal "duty of care"; and in fully autonomous driving scenarios, the human driver may have had no opportunity to intervene in the accident at all. This "liability gap" poses fundamental difficulties for the application of existing tort law.[2]
The third wave (2022–present) comes from generative AI. The explosion of generative AI tools such as ChatGPT, Midjourney, and Stable Diffusion has thrust AI legal challenges into the center of public attention. These tools can generate articles, images, music, and code in seconds — and the legal status of these outputs is deeply contested: Do they constitute "works" protected by copyright law? Does the use of copyrighted data to train AI models constitute infringement? How should the "similarity" between generated outputs and training data be assessed?
These three waves reveal a structural problem: Current legal systems were constructed in an era when AI did not exist; their core concepts — creator, negligence, intent, personhood — all presuppose humans as the default subject. As AI begins to assume more and more functions originally performed by humans, what the legal system needs is not mere "patching" but "reconstruction" — redefining fundamental concepts, redesigning liability frameworks, and rebalancing the allocation of rights and obligations.[3]
II. The Copyright Dilemma: Who Owns AI's Output?
Copyright law is perhaps the legal domain most dramatically impacted by AI. In my research and practice, I categorize AI copyright issues into three layers: the copyrightability of AI output, the legality of AI training, and the ambiguity of rights attribution.
The first layer: Does AI output constitute a "work"? Copyright law protects "an author's original expression," and in most jurisdictions "author" is implicitly or explicitly defined as a natural person. The U.S. Copyright Office stated clearly in its 2023 guidance that content generated entirely by AI, without human creative involvement, does not qualify for copyright protection. That same year, in Thaler v. Perlmutter, a U.S. federal court upheld the Copyright Office's position, ruling that an image generated autonomously by Stephen Thaler's AI system (the "Creativity Machine") could not be registered because copyright law requires human authorship.[4]
In practice, however, there is a vast gray area between "entirely AI-generated" and "entirely human-created." When an artist carefully crafts detailed prompts, iterates dozens of times, and ultimately selects and modifies the final piece from among the AI's outputs, how should the human creative contribution in that process be assessed? The U.S. Copyright Office offered a preliminary answer in its 2023 Zarya of the Dawn decision: the text and the author's selection and arrangement of the work's elements are protected, but the individual images generated by Midjourney are not. While this "case-by-case review" approach is flexible, it also introduces significant uncertainty.
The second layer: Does AI training constitute "fair use"? Training generative AI models requires massive amounts of data — text, images, music, code — much of which is protected by copyright. During training, these protected works are copied, analyzed, and statistically processed, and the patterns "learned" are encoded into the model's parameters. Does this process constitute "fair use" (under U.S. law) or a "text and data mining exception" (under EU law)?
Several major lawsuits are currently being litigated in the United States, including The New York Times v. OpenAI, Getty Images v. Stability AI, and suits brought by groups of authors against Meta. The central dispute in these cases is whether AI training satisfies the four-factor fair use test, particularly the fourth factor: the effect of the use upon the potential market for or value of the copyrighted work. AI developers argue that training is a "transformative use" because AI learns statistical patterns rather than copying originals; rights holders counter that AI outputs directly compete with original works in the market, severely undermining their economic interests.[5]
The EU and Japan have taken different legislative approaches to this issue. The EU's Directive on Copyright in the Digital Single Market (2019) provides two text and data mining exceptions: mining by research organizations for scientific research purposes cannot be overridden by rights holders (Article 3), while mining for other purposes, including commercial ones, is permitted only insofar as rights holders have not expressly reserved their rights, that is, "opted out" (Article 4). Japan's Copyright Act Article 30-4 provides a broader exception, permitting the use of protected works when the purpose is not to "enjoy the expression of the work," which has been widely interpreted as a general permission for AI training.
The third layer: The problem of rights attribution. Even when AI output is deemed a copyrightable work, to whom should the rights belong — the prompt author? The AI model developer? The training data provider? No jurisdiction currently has a clear legislative answer to this question. This uncertainty poses serious obstacles for commercial applications — when a company uses AI to generate marketing copy or product designs, does it own the copyright to those outputs? If not, can competitors freely use the same outputs? In the AI development projects I lead at Meta Intelligence, these questions are among the most frequently raised legal concerns by our clients.[6]
III. The Liability Framework: When AI Causes Harm, Who Is Responsible?
If copyright issues concern "the right to create," then liability issues concern "the attribution of harm." When decisions or actions by AI systems cause damage — discrimination in loan assessments, misdiagnosis in medical contexts, accidents involving autonomous vehicles — how does existing law allocate responsibility?
Traditional liability law offers three primary paths of attribution: negligence liability (proving the defendant failed to exercise reasonable care), product liability (manufacturers bear strict liability for defective products), and vicarious liability (employers are liable for employees' tortious acts). The characteristics of AI systems create difficulties for all three paths.
The difficulty with negligence liability lies in "foreseeability." Establishing negligence requires that the actor "could have reasonably foreseen" the harm their actions might cause. But the behavior of deep learning models is inherently unpredictable — even the model's developers cannot fully anticipate the model's behavior under all possible input conditions. When a medical AI gives an incorrect diagnosis in a rare case, could the developer have "reasonably foreseen" this error? If not, negligence liability cannot be established.
The difficulty with product liability lies in defining "defect." Traditional product liability law distinguishes three types of defects: design defects, manufacturing defects, and warning defects. But the "defects" of AI systems often fall into none of these categories — a model may be reasonable in design, free of errors in "manufacturing" (training), and accompanied by appropriate usage warnings, yet still produce harmful outputs in specific situations. Furthermore, AI systems are continuously updated — a trained model may constantly adjust its behavior through online learning, meaning that the product's characteristics at the "time of shipment" and at the "time of use" may be entirely different. The EU's proposed AI Liability Directive (2022) attempts to address this issue by introducing a "rebuttable presumption of causation" — when an AI system is non-compliant and a causal link between this non-compliance and the harm is reasonably probable, the causal relationship is presumed, and the burden of proof shifts to the defendant.[7]
The difficulty with vicarious liability is that AI is not an "employee." Vicarious liability presupposes the existence of an employment relationship — employers are liable for tortious acts committed by employees within the scope of their work. Some scholars advocate treating AI systems as analogous to "digital employees," making the organizations that deploy AI bear employer-like liability. This analogy has intuitive plausibility in certain scenarios (for example, a bank using AI for credit assessments is akin to using a credit analyst). But it also faces theoretical challenges — the foundation of vicarious liability is the employer's "control" over employee behavior, and the "autonomous learning" nature of AI systems means that the deployer's control over their behavior may be far less than their control over human employees.
In practice, I believe the most viable direction is to establish a "risk-based tiered liability system" — applying different attribution standards according to the risk level of the AI application. High-risk applications (such as medical diagnosis, criminal justice, autonomous driving) should be subject to strict liability — deployers should bear compensatory liability without the need to prove negligence, because their choice to deploy high-risk AI itself constitutes an assumption of risk. Medium-risk applications (such as credit assessment, insurance pricing) should be subject to a presumption of negligence — deployers must prove they exercised reasonable care. Low-risk applications (such as recommendation systems, content generation) should be subject to ordinary negligence liability. The spirit of this tiered system is highly consistent with the "risk-based tiered regulation" philosophy of the EU AI Act.[8]
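For readers approaching this from the engineering side, the sketch below (in Python, purely illustrative) shows how such a tier-to-standard mapping might be encoded in an internal governance or compliance checklist tool. The domain assignments and the default tier are hypothetical examples drawn from the discussion above, not a statement of any existing law; in a real regime the tier would be fixed by regulation, not chosen by the deployer.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # e.g., medical diagnosis, criminal justice, autonomous driving
    MEDIUM = "medium"  # e.g., credit assessment, insurance pricing
    LOW = "low"        # e.g., recommendation systems, content generation

# Liability standard proposed in the text for each tier.
LIABILITY_STANDARD = {
    RiskTier.HIGH: "strict liability (no proof of negligence required)",
    RiskTier.MEDIUM: "presumed negligence (deployer must prove reasonable care)",
    RiskTier.LOW: "ordinary negligence (claimant must prove lack of care)",
}

# Hypothetical mapping from application domain to risk tier.
DOMAIN_TIER = {
    "medical_diagnosis": RiskTier.HIGH,
    "autonomous_driving": RiskTier.HIGH,
    "credit_assessment": RiskTier.MEDIUM,
    "content_recommendation": RiskTier.LOW,
}

def liability_standard(domain: str) -> str:
    """Return the liability standard applicable to an application domain."""
    # Defaulting unknown domains to the medium tier is itself an assumption.
    tier = DOMAIN_TIER.get(domain, RiskTier.MEDIUM)
    return LIABILITY_STANDARD[tier]

if __name__ == "__main__":
    print(liability_standard("autonomous_driving"))
    # -> strict liability (no proof of negligence required)
```

The point of such a table is not to automate legal judgment but to force deployers to record, before deployment, which tier they believe an application falls into and which standard of liability will apply if something goes wrong.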
IV. Global Regulatory Comparison: Three Paths of the EU, the United States, and China
In the AI regulatory race, the EU, the United States, and China have embarked on fundamentally different paths. The differences among these three approaches profoundly reflect the influence of different legal traditions, political systems, and strategic priorities.
The EU path: "Rules first + Rights-oriented." The EU AI Act is the world's first comprehensive AI legislation, officially adopted in 2024. It employs a "risk-based tiered regulation" architecture, classifying AI systems into four risk categories: unacceptable risk (such as social scoring systems and real-time remote biometric identification — prohibited in principle), high risk (such as AI in medical devices and critical infrastructure management — subject to strict compliance requirements), limited risk (such as chatbots — subject to transparency obligations), and minimal risk (such as spam filters — not subject to additional regulation).[9]
The core philosophy of the EU path is "fundamental rights protection": the use of AI should not infringe upon the fundamental rights of EU citizens, including the rights to privacy, non-discrimination, and an effective remedy. This "rights-oriented" regulatory philosophy is a direct extension of the EU's GDPR, reflecting the high value placed on individual rights in the European legal tradition. Its strength lies in legal certainty and predictability: businesses can clearly understand what is permitted and what is prohibited. Its weakness lies in the potential to stifle innovation, as strict compliance requirements may put European companies at a disadvantage in the AI development race against competitors from the United States and China.
The U.S. path: "Industry self-regulation + Enforcement guidance." In stark contrast to the EU's comprehensive legislation, the United States has yet to pass federal-level comprehensive AI legislation. The Biden administration's 2023 Executive Order on AI set policy directions for safety, privacy, and fairness, but executive orders lack the binding force of congressional legislation. In practice, U.S. AI regulation operates primarily through two mechanisms: first, the "expanded interpretation" of existing regulatory agencies — for example, the FTC (Federal Trade Commission) extending its existing authority over consumer protection and unfair competition to the AI domain; and second, industry self-regulation — voluntary safety commitments and best practice guidelines from major AI companies.[10]
The strength of the U.S. path lies in its flexibility — rather than pre-establishing strict rules, it responds incrementally as actual problems emerge, avoiding the innovation-dampening effects of "premature regulation." But its weakness lies in fragmentation — different federal agencies and individual states may adopt different or even contradictory positions on the same AI issues, increasing compliance uncertainty for businesses. California, Colorado, Illinois, and other states have already enacted or are in the process of enacting their own AI regulations, creating a "patchwork" regulatory landscape.
The China path: "Distributed legislation + State governance." China's legislative pace in AI regulation has been remarkable. From the Personal Information Protection Law (PIPL) in 2021, to the Provisions on the Management of Deep Synthesis Internet Information Services (commonly known as the "deepfake law") in 2022, to the Interim Administrative Measures for Generative Artificial Intelligence Services in 2023, China has enacted multiple regulations targeting specific AI applications within just three years. Unlike the EU's "single comprehensive legislation" approach, China has adopted a "distributed legislation" strategy — formulating specialized administrative measures for different AI application scenarios.
The distinctive feature of China's path is the tight integration of regulation and industrial policy. On one hand, China imposes strict restrictions on certain AI applications — for example, requiring that generative AI outputs align with "core socialist values" and mandating labeling obligations for deepfake technology. On the other hand, China views AI as a national strategic priority — the "New Generation Artificial Intelligence Development Plan" (2017) set the goal of becoming a global AI innovation center by 2030. This strategy of "regulation within promotion, promotion within regulation" reflects the regulatory logic of "political capitalism" that I discussed in my dialogue with Professor Milanovic — the state flexibly switches between promoting and restricting to serve broader national strategic objectives.[11]
A comparison of the three paths reveals an important insight: There is no "best model" for AI regulation — each path is a product of its respective legal system, political regime, and stage of economic development. But regardless of which path is taken, all countries face the same core challenge: how to achieve a dynamic balance between "promoting AI innovation" and "mitigating AI risks."
V. Forward-Looking Recommendations: Toward an "Adaptive AI Legal Framework"
Drawing on my dual background in legal research and technology policy practice, I offer the following five recommendations for the future development of AI legal frameworks.
First, establish "technology-neutral, risk-oriented" regulatory principles. The pace of AI technology evolution means that any legislation targeting specific technologies is likely to become obsolete in the short term. A more effective approach is to formulate "technology-neutral" regulatory principles — rather than creating rules specifically for "generative AI" or "deep learning," create rules for "high-risk automated decision-making" or "algorithmic applications that may affect fundamental rights." This way, when new AI technologies emerge (such as quantum machine learning), the existing legal framework can apply without fundamental revision. The EU AI Act has made an important attempt in this direction — its regulatory subject is "AI systems" rather than specific technical methods.[12]
Second, establish a "co-creation" copyright framework for AI output. I recommend that jurisdictions consider establishing a new copyright category — "AI-assisted works" — that recognizes the "co-creation" relationship between human users and AI systems. Under this framework, the granting of copyright depends on the degree of human creative contribution — the greater the contribution, the stronger the copyright protection; the lesser the contribution (for example, inputting only a simple prompt), the weaker the copyright protection or none at all. At the same time, a mandatory "AI-generated labeling" system should be established, requiring all works containing AI-generated content to be clearly marked, ensuring the public's right to be informed.
Third, establish a dual-track liability mechanism of "ex-ante audit + ex-post accountability." For high-risk AI applications, I advocate introducing a mandatory "Algorithmic Impact Assessment" (AIA) system — before an AI system is deployed, a systematic risk assessment must be conducted, including bias testing, fairness analysis, and safety verification. Assessment results should be filed with regulatory authorities and, where necessary, disclosed to the affected public. Simultaneously, an "ex-post accountability" mechanism should be established — whether the deployer can demonstrate that an ex-ante audit was completed and its recommendations were followed will become an important factor in liability determination.
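To give a concrete, deliberately simplified sense of what the "bias testing" component of such an assessment might involve, here is a minimal Python sketch that computes group-level selection rates and a disparate impact ratio over hypothetical audit data. The 0.8 threshold mentioned in the comments is a rule of thumb borrowed from U.S. employment selection practice, not a legal standard for AI systems; a genuine assessment would use multiple fairness metrics, significance testing, and domain-specific benchmarks.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Favourable-outcome rate per demographic group.

    decisions: list of 0/1 model outcomes (1 = favourable, e.g. loan approved)
    groups:    list of group labels for the same individuals
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.

    A common rule of thumb treats a ratio below 0.8 as a signal of potential
    disparate impact that warrants closer review.
    """
    rates = selection_rates(decisions, groups)
    return rates[protected] / rates[reference]

if __name__ == "__main__":
    # Hypothetical audit data: 1 = approved, 0 = rejected
    decisions = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print(selection_rates(decisions, groups))  # per-group approval rates
    print(disparate_impact_ratio(decisions, groups, protected="B", reference="A"))
```

Filing results like these with a regulator, as recommended above, presupposes that the deployer has defined in advance which groups are compared, what counts as a favourable outcome, and what thresholds trigger further review.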
Fourth, promote a minimum consensus on international AI governance. The cross-border nature of AI means that purely domestic regulation is inherently insufficient. An AI system developed in the EU, trained in the United States, and deployed globally — which country's law should apply? When requirements from different jurisdictions contradict each other (for example, the EU demands high transparency while China requires content review), how should enterprises respond? I believe the international community needs to establish a "minimum consensus" on AI governance — not unified global law (which is infeasible in the short term), but a set of minimum standards acceptable to all nations, covering core principles such as safety, transparency, non-discrimination, and human oversight. The OECD's AI Principles published in 2019 and the Hiroshima AI Process in 2023 are important steps in this direction.
Fifth, integrate "legal literacy" into AI education and development practice. In leading the AI development team at Meta Intelligence, I consistently emphasize that every technical decision carries legal and ethical dimensions. Choosing what training data to use is making a copyright and privacy decision; choosing what loss function to use is making a fairness and discrimination decision; choosing what deployment method to use is making a liability and safety decision. AI developers do not need to become lawyers, but they need basic legal awareness — an understanding of the legal consequences that their technical choices may produce. Equally, legal professionals need to understand the basic principles of AI — otherwise they cannot effectively regulate a technology they do not understand.
In retrospect, the legal challenges of the AI era are not merely the "legalization of technology issues" but the "modernization of law itself." When I was pursuing my doctorate in law at Nagoya University, I studied the legal architecture of financial regulation — a relatively stable field. Today, AI is reshaping the objects, instruments, and objectives of law at an unprecedented pace. Facing this transformation, what the legal system needs is not defensive "patching" but proactive "reconstruction" — rethinking the fundamental concepts of creation, liability, fairness, and governance to adapt to a new era of human-machine coexistence. This is the core mission of our generation of legal researchers and policymakers.
References
- Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
- Scherer, M. U. (2016). Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology, 29(2), 353–400.
- Calo, R. (2017). Artificial Intelligence Policy: A Primer and Roadmap. UC Davis Law Review, 51(2), 399–435.
- U.S. Copyright Office. (2023). Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence. Federal Register, 88 FR 16190.
- Samuelson, P. (2023). Generative AI Meets Copyright. Science, 381(6654), 158–161.
- Grimmelmann, J. (2016). There's No Such Thing as a Computer-Authored Work — And It's a Good Thing, Too. Columbia Journal of Law & the Arts, 39(3), 403–416.
- European Commission. (2022). Proposal for a Directive on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive). COM(2022) 496 final.
- European Parliament. (2024). Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act). Official Journal of the European Union.
- Veale, M. & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97–112.
- The White House. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
- Roberts, H. et al. (2021). The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation. AI & Society, 36, 59–77.
- OECD. (2019). Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449.