On August 1, 2024, the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) officially entered into force, becoming the world's first comprehensive legislative framework for artificial intelligence.[1] The impact of this legislation extends far beyond the EU's 27 member states — it is reshaping the global AI governance landscape through a mechanism scholars call the "Brussels Effect." According to Stanford HAI's 2025 AI Index Report, by the end of 2025, 69 countries had passed AI-related regulations, a 23-fold increase from 2016.[2] However, the EU AI Act is unique in its extraterritorial applicability — any enterprise that places an AI system on the EU market, whether registered in Brussels, Silicon Valley, or Hsinchu, must comply with this law. Maximum penalties can reach 7% of global annual revenue or EUR 35 million, whichever is higher.[1] For Taiwan — an export-oriented economy that occupies a pivotal position in the global semiconductor and electronics supply chain — the EU AI Act is not a "distant European problem" but an imminent compliance challenge and strategic opportunity. Drawing from my experience conducting technology governance research at the University of Cambridge and now leading Meta Intelligence Ltd in providing AI strategy services to enterprises, I feel strongly that understanding the logic of the EU AI Act has become essential knowledge for every Taiwanese business leader.

I. The Architectural Logic of the EU AI Act: A Risk-Based Governance Philosophy

The core design principle of the EU AI Act is a "risk-based approach" — not all AI applications require the same degree of regulation; rather, they are classified into four risk tiers based on their potential threat to fundamental rights and safety, with obligations of varying intensity applied accordingly.[1]

Tier One: Unacceptable Risk. These AI practices are completely prohibited, effective from February 2, 2025. They specifically include: AI systems that use subliminal techniques to manipulate human behavior, government social credit scoring, real-time remote biometric identification in public spaces (with strictly limited law enforcement exceptions), and AI systems that exploit vulnerable groups.[3] These prohibitions reflect the EU's bottom-line commitment to human dignity and autonomy — regardless of how technology advances, certain application scenarios are absolutely unacceptable.

Tier Two: High-Risk AI. This is the most central and complex category of the Act, becoming fully applicable on August 2, 2026. High-risk AI systems span eight domains: biometric identification, critical infrastructure management, education and vocational training, employment and human resource management, access to essential public services (such as social welfare and credit assessment), law enforcement, immigration and asylum management, and judicial and democratic processes.[1] Providers of these systems must establish risk management systems, ensure training data quality, maintain technical documentation, implement human oversight mechanisms, and conduct conformity assessments before market placement. Notably, the "high-risk" determination depends not only on the technology itself but also on the application context — the same AI model used for recommending movies carries minimal risk, but when used for screening job applicants, it becomes high-risk.

Tier Three: Limited Risk. This primarily involves transparency obligations. AI systems that interact with humans (such as chatbots) must inform users that they are interacting with AI; systems that generate deepfake content must label it as AI-generated; and emotion recognition systems must inform the individuals being analyzed.[4] The regulatory logic for this category is the "right to know" — users have the right to know when they are interacting with AI.

Tier Four: Minimal Risk. Applications such as spam filters and AI-driven video games are not subject to special regulations, reflecting the Act's principle of proportionality and avoidance of over-regulation.
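The four tiers above can be sketched as a small classification function. This is a deliberately simplified, illustrative mapping (the category names and `classify` helper are my own, not terminology from the Act); its point is the one made above: the Act classifies by deployment context, not by the underlying model.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified, illustrative context labels; the Act's actual scoping
# rules (Annex III and Article 5) are far more detailed.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation", "social_scoring",
    "realtime_remote_biometric_public", "exploiting_vulnerable_groups",
}
HIGH_RISK_CONTEXTS = {
    "biometric_identification", "critical_infrastructure",
    "education", "employment", "essential_services",
    "law_enforcement", "migration_asylum", "justice_democracy",
}
TRANSPARENCY_CONTEXTS = {"chatbot", "deepfake_generation", "emotion_recognition"}

def classify(use_case: str) -> RiskTier:
    """Map a deployment context to its (simplified) AI Act risk tier."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_CONTEXTS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_CONTEXTS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The same recommendation model lands in different tiers
# depending on where it is deployed:
print(classify("movie_recommendation"))  # → RiskTier.MINIMAL
print(classify("employment"))            # → RiskTier.HIGH
```

Note that the input is a use case, not a model: the technology itself never appears in the classification logic, which is exactly the Act's design.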

The rules for "General-Purpose AI Models" (GPAI), effective from August 2, 2025, represent another chapter with far-reaching implications.[5] All GPAI model providers — including OpenAI, Google, Meta, Anthropic, and others — must maintain technical documentation, comply with EU copyright law, and publish detailed summaries of training content. Models identified as posing "systemic risk" (currently defined by a threshold of training computation exceeding 10^25 FLOPs) must additionally undergo adversarial testing, assess and mitigate systemic risks, report serious incidents, and ensure adequate cybersecurity measures. These rules have structural implications for the global AI industry chain — any foundation model developer wishing to deploy their model in the EU market will need to reassess their development processes.
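The 10^25 FLOPs threshold can be made concrete with a back-of-the-envelope calculation. The sketch below uses the common ~6 × parameters × tokens heuristic for training compute from the scaling-law literature; that heuristic, and the example model sizes, are assumptions of mine, not anything the Act prescribes.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption threshold, in FLOPs

def training_flops(params: float, tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token.
    A common heuristic, not a figure defined by the Act."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """Does the estimated training compute cross the Article 51 threshold?"""
    return training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical models, both trained on 15T tokens:
print(presumed_systemic_risk(70e9, 15e12))   # 6.3e24 FLOPs  → False
print(presumed_systemic_risk(405e9, 15e12))  # ~3.6e25 FLOPs → True
```

Under this heuristic, a 70B-parameter model sits below the presumption threshold while a 405B-parameter model crosses it, which is why the systemic-risk obligations bite mainly at the frontier.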

II. The Brussels Effect: Why EU Standards Will Become Global Standards

Columbia Law School Professor Anu Bradford, in her groundbreaking work The Brussels Effect: How the European Union Rules the World, articulated a powerful mechanism: how the EU, through the sheer scale of its single market, unilaterally externalizes its regulatory standards as de facto global standards without requiring the consent of other nations or international treaties.[6] The GDPR (General Data Protection Regulation) is the most classic example of this effect — since its implementation in 2018, over 160 countries have enacted or revised data protection laws that closely resemble GDPR.[7]

The Brussels Effect operates through five conditions: market size, regulatory capacity, stringent standards, inelastic targets, and non-divisibility. In the context of the AI Act, all five conditions hold. The EU possesses a unified market of 450 million consumers; the European Commission has a mature enforcement mechanism; the Act's standards are the world's strictest; AI enterprises cannot easily abandon the EU market (inelastic targets); and — this is the crucial point — many multinational corporations find that maintaining two different AI governance systems (one for EU compliance, another for other markets) costs more than simply adopting EU standards as a global unified standard.[6]

This is precisely why Microsoft, Google, Meta, and other tech giants chose to globalize EU standards after GDPR took effect — not because they endorsed the EU's regulatory philosophy, but because economic rationality dictated it. The AI Act will replicate this trajectory. McKinsey's analysis indicates that by 2027, over 60% of the Global Fortune 500 will adopt a globally unified AI governance framework that complies with the EU AI Act.[8]

From a game theory perspective, the Brussels Effect creates a "first-mover advantage" regulatory game. In this game, the EU, as the first economy to enact comprehensive AI regulation, sets the global baseline — all subsequent national legislation must position itself relative to this baseline. Canada's Artificial Intelligence and Data Act (AIDA), Brazil's AI Act draft, and even the series of AI regulations that China rolled out between 2023 and 2025 all reflect, to varying degrees, the influence of the EU's risk classification framework.[9] This is no coincidence but rather the inevitable result of institutional isomorphism — when an economy with market power establishes a regulatory paradigm first, other countries face a choice not of "whether to regulate" but of "how to engage with this paradigm."

III. Compliance Economics: Costs, Benefits, and Game Equilibria

For enterprises, compliance with the EU AI Act is not a yes-or-no question — it is a complex economics problem. According to the European Commission's own impact assessment, the initial cost for a medium-sized enterprise to establish a compliance system for a single high-risk AI system is approximately EUR 6,500 to 8,500, with ongoing compliance monitoring costs of approximately EUR 3,000 to 7,000 per year.[10] However, industry estimates are generally higher than official figures. A survey of European AI companies shows that compliance costs account for 5% to 15% of their annual budgets, with SMEs bearing a particularly heavy burden.[11]

These figures must be understood within the framework of penalties. The AI Act's penalty structure is progressive: violation of the unacceptable-risk prohibitions — 7% of global annual revenue or EUR 35 million; non-compliance with high-risk AI system obligations — 3% or EUR 15 million; providing incorrect information to competent authorities — 1.5% or EUR 7.5 million.[1] For Taiwan's large tech companies, the absolute amounts of these penalties could be astronomical. Taking TSMC (with global revenues of approximately USD 95 billion in 2025) as an example, the 7% maximum penalty could theoretically mean up to USD 6.65 billion in risk — although TSMC's core business (semiconductor manufacturing) largely does not directly fall within the AI Act's high-risk categories, its advanced processes and packaging services for AI chips may involve compliance obligations at the supply chain level.
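The "whichever is higher" rule in the penalty structure is worth working through numerically. The sketch below encodes the three tier ceilings from Article 99; the example turnover figure is illustrative only (roughly the USD 95 billion mentioned above, converted to euros at an assumed rate).

```python
def max_fine(annual_turnover_eur: float, pct: float, floor_eur: float) -> float:
    """AI Act fine ceiling: a percentage of global annual turnover or a
    fixed amount, whichever is HIGHER (Article 99)."""
    return max(pct * annual_turnover_eur, floor_eur)

# Tier ceilings from the Act: (percentage, fixed floor in EUR)
PROHIBITED = (0.07, 35e6)    # prohibited (unacceptable-risk) practices
HIGH_RISK  = (0.03, 15e6)    # high-risk obligations
MISINFO    = (0.015, 7.5e6)  # incorrect information to authorities

# Illustrative: a firm with ~EUR 88 billion global turnover
print(max_fine(88e9, *PROHIBITED))  # ~EUR 6.2 billion — the percentage binds

# For a small firm, the fixed floor binds instead:
print(max_fine(100e6, *PROHIBITED))  # EUR 35 million, not 0.07 × 100M = 7M
```

The asymmetry is deliberate: the percentage cap scales the deterrent for giants, while the fixed floor prevents small violators from treating fines as a rounding error.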

From a game-theoretic perspective, compliance decisions can be modeled as a sequential game with incomplete information. Each enterprise faces three strategies: (A) Full compliance — investing the highest cost but eliminating all penalty risk; (B) Selective compliance — prioritizing compliance for the highest-risk systems while deferring others; (C) Delayed compliance — waiting to observe enforcement attitudes before deciding. Under the precedent of GDPR, the risk of strategy (C) has been thoroughly validated — between 2023 and 2025, total GDPR fines exceeded EUR 4 billion, with Meta fined EUR 1.2 billion for a single violation.[12] Rational enterprises should adopt strategy (A) or (B), depending on their degree of exposure to the EU market.
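The three strategies can be compared as expected costs. Every number below is a made-up assumption for the sketch (compliance budgets, detection probabilities, fine sizes); the point is the structure of the calculation, not the figures.

```python
def expected_cost(compliance_cost: float,
                  p_violation_found: float,
                  fine_if_found: float) -> float:
    """Expected total cost = certain compliance spend
    + probability-weighted penalty exposure."""
    return compliance_cost + p_violation_found * fine_if_found

# Assumed inputs, in EUR — illustrative only:
strategies = {
    "A_full_compliance":      expected_cost(2_000_000, 0.01,  5_000_000),
    "B_selective_compliance": expected_cost(  800_000, 0.10, 15_000_000),
    "C_delayed_compliance":   expected_cost(  100_000, 0.30, 35_000_000),
}

best = min(strategies, key=strategies.get)
print(best)  # → A_full_compliance under these assumptions
```

Under these (assumed) parameters, strategy C's low upfront spend is swamped by its penalty exposure; the GDPR enforcement record cited above is essentially the market learning that `p_violation_found` is not small.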

It is worth noting that compliance is not only a cost — it can also become a competitive advantage. In a market characterized by information asymmetry, enterprises that achieve AI Act compliance certification send a credible "quality signal" to customers and partners.[13] This is analogous to the role of ISO certification in manufacturing — it serves both as a market access threshold and a differentiation tool. Article 40 of the EU AI Act explicitly establishes a "presumption of conformity" mechanism — AI systems that conform to EU or international standards can be presumed to comply with the Act's requirements.[1] This provides Taiwanese enterprises with a strategic window: investing early in establishing an AI governance framework that meets EU standards can yield a first-mover advantage in global markets.

IV. Taiwan's Strategic Positioning: From Compliance Pressure to Institutional Advantage

Taiwan occupies a unique position in the global AI ecosystem — it is not a major developer of AI foundation models (unlike the US or China), but it is an indispensable supplier of AI hardware infrastructure. Over 90% of the world's advanced AI chips are manufactured by TSMC; Taiwan's electronics contract manufacturing supply chain (Foxconn, Quanta, Wiwynn, etc.) is a major production base for global AI servers.[14] This means that while Taiwanese companies may not directly serve as AI system "providers" as defined by the EU AI Act, as "importers," "distributors," or critical nodes in the supply chain, they will still be indirectly affected by the Act.

Taiwan's passage of the Artificial Intelligence Basic Act in July 2025 marked its formal entry into the institutional construction phase of AI governance.[15] However, the Act remains largely a declaration of principles, lacking the concrete obligations and penalty mechanisms found in the EU AI Act. This creates an "institutional gap": Taiwanese enterprises may feel little compliance pressure in the domestic market, yet find themselves disadvantaged in the EU market by insufficient institutional preparation.

I believe the strategy Taiwan should adopt is the opposite of "regulatory arbitrage" — rather than exploiting institutional gaps to reduce costs, Taiwan should proactively "upgrade" domestic standards to EU levels, using this as leverage to enhance industrial competitiveness. Specific recommendations include:

First, establish a national-level AI risk classification framework. Taiwan's Artificial Intelligence Basic Act should use the EU AI Act's risk classification as a blueprint, combined with Taiwan's industrial characteristics (such as semiconductors, electronics manufacturing, and medical devices), to develop a high-risk AI inventory tailored to the Taiwanese context. This is not "copying" EU law but rather "localized translation" after understanding its logic.

Second, foster an AI compliance certification ecosystem. Article 43 of the EU AI Act establishes a third-party conformity assessment mechanism.[1] Taiwan can cultivate domestic AI compliance certification bodies — this would not only serve Taiwanese enterprises' EU market access needs but could also position Taiwan as a regional hub serving Southeast Asian markets. Taiwan's extensive experience with ISO/IEC quality management systems provides a solid foundation for this endeavor.

Third, integrate AI governance into semiconductor geopolitical strategic considerations. Taiwan's semiconductor industry is its greatest geopolitical asset. Within the framework of the AI Act, ensuring the positioning of Taiwanese chips in the "responsible AI" supply chain can further consolidate Taiwan's irreplaceability in the global AI ecosystem.

V. General-Purpose AI Models: A New Era of Foundation Model Compliance

The provisions on "General-Purpose AI Models" (GPAI) in the EU AI Act may be the most far-reaching clauses for the global AI industry. These provisions, for the first time, create legal obligations for foundation model developers — not merely for downstream application deployers.[5]

For all GPAI model providers, basic obligations include: maintaining and, when required, providing up-to-date technical documentation to competent authorities and downstream deployers; providing sufficient information and documentation to downstream AI system providers to enable them to understand the model's capabilities and limitations; establishing policies for compliance with EU copyright law — particularly the rights reservation mechanism for text and data mining under the Digital Single Market Copyright Directive (2019/790); and publishing a "sufficiently detailed summary" of the content used for training.[1]

For models identified as posing "systemic risk," additional obligations are significantly elevated: they must undergo advanced model evaluation including adversarial testing; assess and mitigate systemic risks; report situations that could constitute serious incidents; and ensure adequate cybersecurity protection.[16] Currently, the EU AI Office has begun developing codes of practice, which are expected to further refine the operational standards for these obligations. By the end of 2025, major GPAI model providers including OpenAI, Google, and Anthropic had all begun adjusting their development processes to accommodate these new requirements.

The impact on Taiwan follows an indirect but profound pathway. Taiwan's AI industry is primarily application-layer focused — most enterprises use international foundation models (such as GPT, Claude, Gemini) for secondary development and deployment. Under the EU AI Act framework, these enterprises, as AI system "deployers," have the right to demand that upstream model providers furnish the technical documentation needed for compliance. This means Taiwanese enterprises need to develop the capability to assess and manage the compliance status of their GPAI suppliers.

VI. The Global AI Regulation Game Landscape: Formation of a Tripartite System

The EU AI Act's enactment has accelerated the formation of a "tripartite system" in global AI governance: the EU's norm-driven model, the US's innovation-driven model, and China's national security-driven model.[17]

The EU model centers on protecting fundamental rights and promoting trustworthy AI, establishing a mandatory compliance framework through hard law. The US, during the Biden administration, adopted a more moderate approach through Executive Order 14110 — relying primarily on voluntary commitments and industry self-regulation as its main instruments.[18] However, the Trump administration revoked Biden's AI executive order on January 20, 2025, pivoting toward a more relaxed regulatory stance that emphasizes "removing barriers to AI innovation."[19] China has taken a distinctive path — building a gradual regulatory system through a series of regulations targeting specific AI applications (algorithmic recommendation management provisions, deep synthesis management provisions, interim measures for generative AI services), with social stability and national security as core concerns.[9]

In this tripartite landscape, Taiwan's strategic space is limited but clear. Taiwan neither possesses the EU's market scale to set global standards nor the US's or China's foundation model development capabilities. However, Taiwan has two unique advantages: first, its hub position in the global AI hardware supply chain; and second, as a mature democracy, it shares a natural value affinity with the EU on "trustworthy AI" institutional construction.[15] This means Taiwan's optimal strategy is not "equidistant observation among the three poles" but rather proactive alignment with the EU model — not based on political choice but on economic rationality. For Taiwan's export-oriented industries, adopting the world's strictest standards (i.e., EU standards) as the domestic benchmark can maximize the market access scope for their products globally.

VII. Conclusion: From Compliance Cost to Institutional Capital

The EU AI Act is not merely a legal compliance issue — it is a global experiment in institutional construction for the AI era. Just as the GDPR evolved from a European law into the de facto global standard for privacy protection, the AI Act is following a similar but far more consequential trajectory. It not only reshapes the compliance obligations of AI enterprises but also redefines the global norms for "responsible AI development and deployment."

For Taiwan, this presents both challenges and opportunities. The challenge lies in the fact that Taiwan's AI-related legal infrastructure is still in its early stages, and enterprise awareness and capability for compliance remain underdeveloped. The opportunity lies in Taiwan's institutional accumulation in semiconductors, precision manufacturing, and quality management, which provides a solid foundation for building AI compliance capabilities. More importantly, at this very moment when the global AI governance landscape is taking shape, Taiwan has the opportunity to transform from a "standard taker" into a "standard co-builder" — contributing Taiwan's experience and perspective to global AI governance through institutional alignment with the EU at the tech diplomacy level.

The cost of compliance is definite and calculable; the risk of non-compliance is uncertain but potentially fatal. In this calculation, the rational choice is clear — treat compliance as an investment rather than a cost, and view institutional construction as a source of competitiveness rather than a burden. The global ripple effect of the EU AI Act has only just begun, and Taiwan needs to prepare before — not after — the ripples arrive.

References

  1. European Parliament and Council. (2024). Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union
  2. Maslej, N. et al. (2025). The AI Index 2025 Annual Report. Stanford Institute for Human-Centered Artificial Intelligence. aiindex.stanford.edu
  3. European Commission. (2025). AI Act: Prohibited AI Practices — Questions and Answers. digital-strategy.ec.europa.eu
  4. Future of Life Institute. (2024). The EU Artificial Intelligence Act: Summary and Analysis. artificialintelligenceact.eu
  5. European AI Office. (2025). General-Purpose AI Models in the AI Act. digital-strategy.ec.europa.eu
  6. Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
  7. DLA Piper. (2025). Data Protection Laws of the World. dlapiperdataprotection.com
  8. McKinsey & Company. (2025). The State of AI in 2025. mckinsey.com
  9. Carnegie Endowment for International Peace. (2025). Global AI Governance Tracker. carnegieendowment.org
  10. European Commission. (2021). Impact Assessment Accompanying the Proposal for the AI Act. SWD(2021) 84 final. digital-strategy.ec.europa.eu
  11. CEPS (Centre for European Policy Studies). (2025). Economic Impact of the AI Act on European SMEs. ceps.eu
  12. GDPR Enforcement Tracker. (2025). GDPR Fines Statistics. enforcementtracker.com
  13. Spence, M. (1973). Job Market Signaling. The Quarterly Journal of Economics, 87(3), 355–374.
  14. Semiconductor Industry Association. (2025). 2025 State of the U.S. Semiconductor Industry. semiconductors.org
  15. Executive Yuan. (2025). Artificial Intelligence Basic Act.
  16. Veale, M. & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97–112.
  17. Smuha, N. A. (2021). From a 'Race to AI' to a 'Race to AI Regulation': Regulatory Competition for Artificial Intelligence. Law, Innovation and Technology, 13(1), 57–84.
  18. The White House. (2023). Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. whitehouse.gov
  19. The White House. (2025). Executive Order: Removing Barriers to American Leadership in Artificial Intelligence. whitehouse.gov