In February 2026, an open-source project called OpenClaw surpassed 200,000 GitHub Stars in just 84 days, becoming the fastest-growing open-source project in software history — outpacing the comparable milestones of React, Vue, and Linux.[1] Its predecessor, Clawdbot, was built by Austrian developer Peter Steinberger in a single evening, initially as nothing more than a simple bridge connecting chat applications to AI models. Yet within three months, it had evolved into a full-fledged AI agent capable of controlling users' computers through messaging platforms such as WhatsApp, Telegram, and Signal — reading and writing files, executing code, browsing the web, managing calendars, and even controlling smart home devices. Supporters dubbed it "AI with hands."[2] However, these hands also brought unprecedented governance challenges: 73 security vulnerabilities, over 135,000 instances exposed on the public internet, a 12% malicious skill ecosystem, and emergency bans from Meta and Microsoft. In my experience conducting technology governance research at the University of Cambridge, leading cross-border regulatory framework design for the World Bank and the United Nations, and currently leading Meta Intelligence in AI software development, I have never seen an open-source project trigger a perfect storm of technological innovation, security crisis, and governance debate in such a short time. OpenClaw is not merely a software product — it is the overture to the era of agentic AI and a stress test for the global AI governance system.

I. The OpenClaw Phenomenon: How Did an Open-Source Project Reshape the AI Industry in 84 Days?

Understanding the explosion of OpenClaw requires first grasping its position in the history of technology. Over the past two years, the capabilities of large language models (LLMs) have leaped from "conversation" to "action." ChatGPT, Claude, Gemini, and other models can no longer merely answer questions — they can understand instructions, decompose tasks, invoke tools, and execute operations in the real world. This transformation has been called the paradigm shift "from AI chatbot to AI agent" — Gartner predicts that by the end of 2026, 40% of enterprise applications will embed AI agent capabilities.[3]

The core innovation of OpenClaw lies not in the AI model itself, but in solving an engineering problem: how to enable ordinary users — not developers, but anyone with a smartphone — to control a fully system-privileged AI agent through the messaging apps they use daily. Its architecture centers on a WebSocket gateway that connects over fifteen messaging platforms, including WhatsApp, Telegram, Discord, Signal, and Slack, to an Agent Runtime responsible for assembling context, invoking LLMs, executing tool operations, and persisting state.[4] In other words, OpenClaw stitches together the AI agent's "brain" (LLM) and "hands" (system operation capabilities) through the interface users are most familiar with.

This architectural design produced three key breakthroughs. First, the barrier to entry approaches zero. Users do not need to install an IDE, learn the command line, or understand APIs — they simply send a message on WhatsApp to command the AI to complete complex tasks. This elevated the AI agent from a developer tool to a mass consumer product. Second, the functional boundaries expanded dramatically. OpenClaw's AgentSkills system provides over 100 pre-built skills covering file management, web automation, email, calendar, and smart home control, with community-driven extension through the ClawHub skill marketplace. Third, model agnosticism. OpenClaw supports multiple LLMs including Claude, GPT, and DeepSeek, and can even run local models via Ollama, freeing users from dependence on any single AI provider.

OpenClaw's growth velocity was staggering. It achieved a single-day record of 25,310 new GitHub Stars (January 26, 2026), accumulated over 35,400 forks, 11,456 commits, and more than 100 contributors.[1] Community forks in eight programming languages — ZeroClaw (Rust), PicoClaw (Go), NanoClaw (Python), TinyClaw (Shell) — spontaneously emerged within weeks. The project website exceeded 2 million visits in a single week. This was not incremental technological evolution but an explosive industry restructuring.

Yet OpenClaw's own history reflects the chaos of open-source AI governance. The project underwent three name changes in three months — from Clawdbot (due to an Anthropic trademark complaint) to Moltbot (a reference to lobster molting), to OpenClaw — dubbed by media as "the fastest triple rename in open-source history."[2] Founder Steinberger announced on February 14, 2026, that he would join OpenAI to lead development of next-generation personal agents, and OpenClaw was transferred to an independent open-source foundation.[5] Sam Altman publicly praised Steinberger as "a genius with many amazing ideas about intelligent agent interaction." VentureBeat's analysis was more direct: what OpenAI acquired was not AI capability, but "workflow infrastructure." This signaled that the competitive focus of the AI industry was shifting from model capability to control over agent infrastructure.

II. 73 Vulnerabilities and 12% Malicious Skills: The Structural Security Dilemma of Open-Source AI Agents

If the growth story of OpenClaw is a narrative about the democratization of innovation, its security record is a cautionary tale about how technological zeal can outpace governance capacity. As of February 2026, OpenClaw had accumulated 73 security advisories, including a critical vulnerability with a CVSS score of 9.4 — CVE-2026-25253 — which allowed attackers to achieve remote code execution through a single malicious link, thereby fully hijacking the user's AI agent.[6] Since OpenClaw runs with root privileges by default, a successful compromise effectively grants the attacker complete control of the user's computer.

The scale of the attack surface was equally alarming. According to analysis by the SecurityScorecard STRIKE team, as of February 9, 2026, over 135,000 OpenClaw instances were directly exposed on the public internet.[7] Censys monitoring data showed that exposed instances surged from approximately 1,000 to over 21,000 in just one week (January 25-31, 2026) — each exposed instance representing a potential remote attack entry point. OpenClaw's gateway binds by default to port 18789/tcp, and a large number of users deployed it without enabling authentication or configuring firewall rules.

The deeper security crisis stems from contamination of the skill ecosystem. The Cisco security research team found that among 2,857 skills on the ClawHub skill marketplace, 341 (approximately 12%) were confirmed as malicious.[8] Attackers used professional documentation and innocuous-sounding names — such as "solana-wallet-tracker" and "What Would Elon Do?" — to disguise malicious code. One skill was confirmed to exfiltrate user data via curl commands to attacker-controlled servers, effectively constituting a data theft backdoor. This was not an isolated incident — it was the first large-scale supply chain attack of the AI agent era.

The root cause of these security issues lies not in developer negligence, but in the structural dilemma of open-source AI agents. Traditional open-source software — a web framework, a database — even when vulnerabilities exist, the damage scope is typically limited to specific functionality. However, the essence of an AI agent is a "general-purpose system agent" — it is designed to execute any operation, meaning the potential damage scope of any security vulnerability extends to the full privileges of the entire computer. OpenClaw's own technical documentation acknowledges: "There is no 'completely secure' configuration."[9] This candor is viewed as a virtue in the technical community, but in the context of enterprise governance, it is a red flag — a system that cannot even guarantee its own security is being deployed on hundreds of thousands of computers, handling emails, files, code, and business data.

In my past research on fintech regulation conducted for the World Bank and the United Nations, I observed a recurring pattern: when the speed of technological innovation far outpaces the construction of governance frameworks, the intervening "governance vacuum" tends to become a breeding ground for systemic risk. The derivative financial instruments before the 2008 financial crisis and the token issuances during the 2017 ICO bubble were precedents of this pattern. OpenClaw sits at the very center of this governance vacuum — the technology has already been deployed, the risks have already materialized, but the governance framework is still under construction.

III. From Singapore to Brussels: Three Paths for Global Agentic AI Governance

In the face of the governance challenges posed by agentic AI, major jurisdictions worldwide are forming three fundamentally different regulatory philosophies, with divergences even greater than those regarding the regulation of traditional AI models.

Singapore's path is a "principle-based soft law framework." On January 22, 2026, Singapore's Infocomm Media Development Authority (IMDA) released the world's first Model AI Governance Framework for Agentic AI at the World Economic Forum, announced personally by Minister for Communications and Information Josephine Teo.[10] The framework focuses on four pillars: first, pre-deployment risk assessment and action-space bounding — requiring deployers to evaluate the applicable scenarios for AI agents before launch and clearly define the boundary of their executable operations; second, human accountability — establishing human-in-the-loop checkpoints at critical decision nodes; third, technical controls — including sandbox isolation, least-privilege principles, and behavioral logging at the engineering level; and fourth, end-user responsibility — explicitly defining users' risk management obligations for AI agent behavior after deployment.[11]

The strategic value of Singapore's framework lies in its non-binding positioning. It is not legislation, but a set of best practice guidelines currently open for public consultation. This "soft law" strategy reflects Singapore's longstanding regulatory philosophy — at a stage when technology is still rapidly evolving, replace rigid regulations with principle-based guidance to avoid premature normative constraints that could freeze innovation space. In my past experience studying fintech regulatory sandboxes, Singapore's approach achieved a sound balance in the financial innovation space — MAS's regulatory sandbox became a global benchmark. However, the risk profile of AI agents fundamentally differs from that of financial products: the damage from financial products is typically quantifiable economic loss, whereas the potential harm from AI agents — from data breaches to system takeover — more closely resembles the nonlinear risk profile of cybersecurity incidents.

The EU's path is a "rule-based hard law framework." The EU AI Act will enter full enforcement on August 2, 2026, when Article 50 transparency obligations will begin mandatory enforcement — including AI interaction disclosure, synthetic content labeling, and deepfake identification requirements.[12] For AI agents, the core challenge of the EU AI Act lies in the applicability of its risk-based approach. The current regulation classifies AI systems into four risk tiers (unacceptable, high-risk, limited risk, and minimal risk), but the risk level of AI agents is dynamic — the same agent may span multiple risk tiers when executing different tasks. An OpenClaw instance scheduling meetings falls under "minimal risk," but when autonomously sending emails or modifying files, it may constitute a "high-risk" application. This risk variability poses a fundamental challenge to the EU's static classification system.

Even more noteworthy is the revision of the EU Product Liability Directive — with an implementation deadline of December 2026 — which has explicitly brought AI software within the definition of "product." This means that when an AI agent causes harm, victims can claim damages without needing to prove developer negligence. For open-source AI agents, the implications of this provision are profound: OpenClaw has over 100 contributors, most of whom are anonymous or pseudonymous individual developers — under this distributed development model, who exactly bears product liability?

The U.S. path is "deregulatory market self-governance." On February 20, 2026, White House advisors explicitly stated at a summit in India that the United States "completely rejects" a global AI governance framework. This stance is consistent with the U.S. approach to digital sovereignty issues — prioritizing the innovation freedom of domestic technology industries and avoiding international norms that could constrain the competitiveness of American companies. However, this deregulatory posture stands in sharp contradiction to the security risks exposed by OpenClaw: a significant proportion of the 135,000 AI agent instances exposed on the internet are located in the United States — and in the absence of federal-level regulation, the security governance of these instances is effectively in a state of "nobody in charge."

The divergence of these three paths reflects a fundamental governance dilemma: the risks of agentic AI are global (a single malicious skill can affect all OpenClaw users worldwide), but regulatory authority is local. This structural mismatch between "global risk and local governance" shares a high degree of structural isomorphism with the cross-border data flow governance dilemma I have studied in the past — the difference being that cross-border data disputes still have a window for negotiation, whereas the security risks of AI agents are unfolding in real time.

IV. Enterprise AI Agent Governance: From Shadow AI to Institutionalized Deployment

OpenClaw's impact on enterprise governance first manifested in the form of "Shadow AI." Shadow AI refers to the practice of employees installing and using AI tools on corporate devices without IT department approval — and OpenClaw, with its zero-barrier installation and powerful automation capabilities, became the focal case of the Shadow AI problem in 2026. American Banker reported that cases had already emerged in the banking industry of employees installing OpenClaw on work computers to process business emails and documents, with these AI agents potentially accessing customer data, financial information, and internal communications without supervision.[13]

The corporate emergency response was swift and forceful. Meta instructed employees to immediately remove OpenClaw from work devices, citing "urgent security concerns."[14] Microsoft issued a similar internal warning. The title of an Institutional Investor article bluntly conveyed institutional investors' attitude: "OpenClaw: The AI Agent Institutional Investors Need to Understand but Should Not Touch." The speed and intensity of these reactions reflect that the governance challenge of AI agents has shifted from a "future issue" to an "immediate crisis."

However, banning is not the answer. A blanket prohibition on AI agent usage is akin to some enterprises' attempts to ban employees from using cloud services in the early 2010s — ultimately only pushing usage behavior into a more invisible and uncontrollable underground state. What enterprises need is a transition from emergency response to a systematic AI agent governance framework. Drawing on the framework from my research on AI-era corporate governance and insights from the Singapore governance framework, I recommend that enterprises build a governance system across four dimensions:

  1. Intake Assessment. Establish an intake review mechanism for AI agent tools, evaluating their security architecture, data handling practices, permission requirements, and compliance status. Any AI agent tool must pass both technical evaluation by the cybersecurity team and compliance review by the legal team before being introduced into the enterprise environment. For open-source tools such as OpenClaw, the assessment should include source code auditing, the trust mechanisms of the skill marketplace, and the responsiveness to community security advisories.
  2. Permission Governance. Following the Principle of Least Privilege, configure precise operational permissions for AI agents. OpenClaw's default design of running with root privileges is unacceptable in an enterprise environment. Enterprises should establish a tiered permission architecture — limiting AI agent operational capabilities to specific applications, specific data scopes, and specific network segments. The concept of "bounding the action-space" emphasized in the Singapore framework has direct practical relevance here.
  3. Behavioral Auditing. Establish comprehensive logging and real-time monitoring mechanisms for AI agent behavior. Every operation executed by an AI agent — file access, network requests, email sending, code execution — should be recorded, categorized, and available for post-hoc audit. This is not only a security requirement but also critical evidence for demonstrating that the enterprise has fulfilled its duty of care in the event of legal disputes.
  4. Incident Response. Integrate AI agent security incidents into the enterprise's existing cybersecurity incident response processes (CSIRT), and develop specialized response plans for AI agent-specific threat scenarios — such as malicious skill installation, agent hijacking, and data exfiltration through agents. Response speed is critical: the time gap between the disclosure of OpenClaw's CVE-2026-25253 vulnerability and the publication of proof-of-concept exploit code was mere hours.

This four-dimensional governance framework essentially extends the concept of enterprise digital resilience from "defending against external attacks" to "governing internal agents." An AI agent is not an external threat — it is an autonomous actor that the enterprise itself has introduced, which means traditional "perimeter defense" thinking must evolve into "internal governance" thinking.

V. Legal Liability for Agentic AI: Who Is Responsible for AI's Actions?

The most fundamental governance question raised by OpenClaw is perhaps not technical security vulnerabilities, but legal liability attribution. When an AI agent autonomously executes operations — sending an inappropriate business email, deleting critical files, leaking confidential information — who should bear legal responsibility?

Under current legal frameworks, the liability an enterprise bears for its AI agent's actions is analogous to employer liability for employee conduct (respondeat superior). Analysis by the U.S. law firm Squire Patton Boggs identifies five major categories of AI agent legal risk: erroneous or unauthorized operations, unlawful conduct, biased and discriminatory decisions, data breaches, and unintended damage to connected systems.[15] The key point is that the law does not distinguish between whether AI behavior was malicious or unintentional — as long as harm is caused, the deployer may bear liability.

However, liability attribution for open-source AI agents faces three unique legal challenges:

First, the diffusion of contributor liability. OpenClaw has over 100 contributors, most of whom use pseudonyms. When a security vulnerability or functional defect causes user harm, should liability be attributed to the project founder, core maintainers, or the specific contributor who introduced the problematic code? Open-source licenses (OpenClaw uses the Apache 2.0 license) typically include disclaimer clauses, but the effectiveness of these clauses under consumer protection law — particularly under the new EU Product Liability Directive framework — remains subject to significant legal uncertainty. Even more challenging is liability attribution within the ClawHub skill marketplace: when 12% of skills are confirmed as malicious, how should responsibility be allocated among the platform (the OpenClaw foundation), skill developers, and users who installed the skills?

Second, the difficulty of establishing causation. The decision-making process of AI agents involves probabilistic reasoning by LLMs — the same instruction may produce different behaviors at different points in time. When OpenClaw makes an operational decision based on its "understanding" of email content, the causal chain — from user instruction, to LLM reasoning, to tool invocation, to final outcome — is often opaque and not fully reproducible. This poses a challenge to the "causation" requirement of traditional tort law. In international arbitration cases I have previously studied, establishing causation was already a complex issue — the intervention of AI agents makes it even more difficult.

Third, the complexity of cross-border jurisdiction. An OpenClaw instance deployed in Taiwan may use an American company's LLM (such as OpenAI's GPT), execute operations involving EU personal data, and have installed skills uploaded to ClawHub by developers of unknown nationality. When harm occurs, which country's law applies? Which court has jurisdiction? This type of "nested multi-jurisdiction" problem represents a structural blind spot in the legal system of the agentic AI era.

Facing these legal gaps, some industry observers have proposed the concept of "agent liability insurance" — similar to automobile liability insurance, requiring enterprises deploying AI agents to insure against potential damages they may cause. This direction is worth exploring, but it presupposes the ability to establish reliable actuarial risk models — and currently, the deployment history of agentic AI is too short, the case base too small, and the risk distribution too unclear to support actuarial pricing. A more practical short-term approach is for enterprises, when deploying AI agents, to clearly define the AI agent's operational scope, liability allocation, and dispute resolution mechanisms within their contractual architecture — shifting governance from "post-hoc accountability" to "pre-deployment agreement."

At a broader level, the legal liability question of agentic AI is forcing the legal system to undertake a fundamental rethinking: traditional law distinguishes between "natural persons" and "legal persons" as actors, yet an AI agent is a novel entity that is neither a natural person nor a legal person but possesses autonomous behavioral capabilities. Whose will drives its behavior? The user's instructions provide only the objective, while the execution path is autonomously determined by the LLM — this hybrid behavioral model of "delegation-autonomy" challenges the law's foundational assumptions about the relationship between "will" and "action." Experts predict that by 2027, the regulatory focus will shift from "model transparency" to "real-time agent auditing" — requiring enterprises to bear traceable governance responsibility for every operation performed by their AI agents.[12]

References

  1. OpenClaw.report. (2026). OpenClaw surpasses 200K GitHub Stars in 84 days. openclaw.report
  2. CNBC. (2026). From Clawdbot to Moltbot to OpenClaw: the rise and controversy of AI's hottest open-source project. cnbc.com
  3. EWSolutions. (2026). Agentic AI Governance: A Strategic Framework for 2026. ewsolutions.com
  4. innFactory AI. (2026). OpenClaw Architecture Explained. innfactory.ai
  5. TechCrunch. (2026). OpenClaw creator Peter Steinberger joins OpenAI. techcrunch.com
  6. The Hacker News. (2026). OpenClaw Bug Enables One-Click Remote Code Execution When Visiting Malicious Link. thehackernews.com
  7. Bitsight. (2026). OpenClaw AI Security Risks: Exposed Instances. bitsight.com
  8. Cisco Blogs. (2026). Personal AI Agents Like OpenClaw Are a Security Nightmare. blogs.cisco.com
  9. CrowdStrike. (2026). What Security Teams Need to Know About OpenClaw AI Super Agent. crowdstrike.com
  10. IMDA Singapore. (2026). New Model AI Governance Framework for Agentic AI. imda.gov.sg
  11. Baker McKenzie. (2026). Singapore Governance Framework for Agentic AI Launched. bakermckenzie.com
  12. Legal Nodes. (2026). EU AI Act 2026 Updates: Compliance Requirements and Business Risks. legalnodes.com
  13. American Banker. (2026). OpenClaw AI creates shadow IT risks for banks. americanbanker.com
  14. AI CERTs. (2026). Meta's OpenClaw Ban Spotlights AI Security Imperatives. aicerts.ai
  15. Squire Patton Boggs. (2026). The Agentic AI Revolution: Managing Legal Risks. squirepattonboggs.com
Back to Insights