"Lobster" Bares Its Claws: How Can Enterprises Safely "Farm Shrimp"? Exclusive Interview with Bai Yaohua from Dehedata Law Firm: Setting Boundaries and Standards for "Digital Employees"


(Source: Daily Economic News)

Daily Economic News Reporter: Song Sijian | Editor: Liao Dan

Currently, as major internet companies roll out OpenClaw (an open-source AI agent framework, commonly known as "Lobster") and some enterprises attempt to integrate it into production environments, the legal risks of "raising lobsters" are coming to the forefront: How do the legal risks of "Lobster" differ from those of ordinary large models? If "Lobster" accidentally deletes data or places incorrect orders during operation, who is responsible? If companies want to "raise lobsters," how should they manage compliance?

On March 19, a Daily Economic News reporter (hereinafter NBD) conducted a written interview on these questions with Bai Yaohua, Director of the AI and Autonomous Driving Department and Senior Partner at Beijing DeHeng Law Offices (Shanghai).

Lawyer Bai Yaohua. Image source: provided by the interviewee

Follow the principle of “least necessary” and set up mandatory “manual confirmation” steps for high-risk operations

NBD: As an “open-source AI agent framework,” how does OpenClaw’s legal risk compare to that of ordinary large model applications?

Bai Yaohua: As an open-source AI agent framework, OpenClaw's legal risks are more complex, and surface earlier, than those of typical large model applications. The core differences lie in its "open-source" nature, which blurs responsibility boundaries, and its "agent" attribute, which shifts its role from "information generator" to "behavior executor," introducing new risks of operational loss and accountability.

Specifically, the legal risks of ordinary large models (such as chatbots) mainly involve intellectual property infringement of generated content, dissemination of false information, and compliance with personal data processing. In contrast, frameworks like OpenClaw have compounded and upgraded risks:

First, the “open-source” responsibility network risk. Open-source frameworks allow developers worldwide to contribute code and develop plugins, spreading responsibility for security management and data protection across a loose ecosystem. The Cybersecurity Law of the People’s Republic of China requires network operators to establish and implement security management systems. In an open-source ecosystem, who is the “operator”? The original developers, secondary development enterprises, plugin providers—all may bear responsibilities at different stages, making responsibility attribution very difficult.

Second, the “agent” behavior risk. Ordinary large models mainly “talk,” while applications driven by agent frameworks “act”—they can automatically operate external systems (such as placing orders or deleting data). This extends risks from “speech and information” to “behavior and operation.” Errors could directly cause financial loss, breach of contracts, or system damage, with more immediate and tangible harm. Developers and users must manage these like “digital employees,” setting strict permission boundaries and behavioral guidelines.

For enterprises, I suggest: when adopting such frameworks, first clarify all legal entities involved in the technical supply chain and define their security obligations and responsibilities through contracts; internally, establish special management systems for automated tools, differentiating them from general content-generation AI.

NBD: The biggest feature of OpenClaw is that it can not only answer questions but also automatically operate web pages, systems, and tools. What does this mean legally?

Bai Yaohua: The ability to automate operations legally means that AI agents driven by OpenClaw are upgraded from “assistive tools” to “behavioral agents.” The core legal significance lies in the scope of “behavior authorization” and “responsibility for consequences,” which may directly trigger contractual performance, tort liability, and stricter data processing compliance requirements.

This “behavioral agent” capability means it no longer just provides advice but can directly interact with third-party systems on behalf of users or developers, creating legal effects. For example:

First, it could serve as a party executing contractual actions. Automated ordering or quoting by the agent could be viewed as an offer or an acceptance made on behalf of the user or developer. Once the counterparty agrees, a contract is formed. If errors occur (such as incorrect quotes), this could lead to contractual disputes.

Second, it involves data processing or access actions. Automated operations necessarily involve reading, modifying, or deleting system data, which falls under the regulation of the Cybersecurity Law and Personal Information Protection Law of China. Especially if the operations involve automated decision-making (e.g., automatically offering different prices based on user data), data handlers must conduct a personal information impact assessment as required by Article 55 of the Personal Information Protection Law.

Third, potential infringing actions. If the agent exceeds authorized access to others’ systems, deletes data mistakenly, or posts defamatory content, its actions may infringe on cybersecurity, property rights, or reputation rights, with responsibility traced back to its controllers.

For enterprises, I recommend: clearly define and restrict the agent’s operational permissions, following the “least necessary” principle; prohibit access to unrelated systems or data; for high-risk operations like contract signing, payments, or data deletion, set up mandatory “manual confirmation” steps, avoiding full automation.
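The "least necessary" plus manual-confirmation pattern described above can be sketched in a few lines. Everything here is a hypothetical illustration (the action names, the allowlist, and the `confirm` callback are invented for this sketch and are not part of any real OpenClaw API):

```python
# Minimal sketch of a "least necessary" permission gate for an agent.
# Action names and the confirm() flow are hypothetical illustrations.

HIGH_RISK = {"sign_contract", "make_payment", "delete_data"}
ALLOWED = {"read_inventory", "draft_quote", "make_payment"}  # per-task allowlist

def execute(action: str, run, confirm) -> str:
    """Run an agent action only if it is on the task's allowlist,
    and require a human confirmation step for high-risk operations."""
    if action not in ALLOWED:
        return "denied: outside least-necessary scope"
    if action in HIGH_RISK and not confirm(action):
        return "held: awaiting manual confirmation"
    run(action)
    return "executed"
```

The point of the sketch is the ordering: scope is checked before risk, and a high-risk action never reaches `run()` without a human saying yes, so the agent can never be "fully automated" on payments or deletions.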

NBD: If OpenClaw-driven agents make mistakes—such as data deletion, incorrect orders, wrong quotes, or inappropriate speech—how should responsibility be allocated?

Bai Yaohua: There is no unified standard for allocating responsibility; a layered, fault-based analysis is needed. The key is to prove a causal link between the damage and the faulty conduct (such as design flaws, management negligence, or erroneous instructions). Usually, multiple parties are involved: framework providers, custom developers, plugin providers, and end users.

Referring to judicial practices, responsibility can be considered as follows:

First, the framework provider/open-source community. They are typically liable only for known but unpatched underlying security vulnerabilities; violations of open-source licenses and damages caused by defects in the framework itself are assessed separately. If the error does not stem from a fundamental flaw in the framework, their liability may be limited.

Second, the custom developer/integrator. They are most likely to bear primary responsibility, as they design the specific logic, set permissions, and conduct security testing. If errors result from design flaws (e.g., no secondary confirmation for deletions), misconfiguration, or insufficient testing, they should bear corresponding liability.

Third, plugin or external tool providers. They are responsible for the security of their modules. If a malicious or vulnerable plugin causes errors, the provider should be liable.

Fourth, the end enterprise user. They bear the ultimate supervision and compliance obligations. If the user issues vague or erroneous instructions, ignores security warnings, or fails to properly control permissions, they should also be responsible.

Fifth, scope of damages. If the agent is an essential part of core business (e.g., automated trading), operational errors causing losses should be evaluated based on overall business impact, not just direct damages.

My advice: enterprises must specify responsibility clauses clearly in contracts when commissioning or purchasing AI agents, especially for damages caused by design flaws or algorithm errors, and establish clear compensation mechanisms; also, keep comprehensive operation logs and audit records as key evidence for fault determination.

If data leaks occur due to OpenClaw, enterprises may also face joint liability.

NBD: Since frameworks like OpenClaw depend on plugins, external tools, browser automation, and code execution environments, what cybersecurity legal risks do they pose?

Bai Yaohua: Such architectures that heavily rely on external components significantly expand the “attack surface,” mainly bringing supply chain security risks, data leakage risks, joint compliance liability, and third-party infringement risks, which can easily violate cybersecurity grading protection systems.

Specifically, four main risks:

First, supply chain attacks. Malicious plugins or contaminated external tools can serve as “Trojan horses,” leading to data theft or system control. According to Article 24 of the Cybersecurity Law, “Network product and service providers shall not set malicious programs.” If users introduce insecure components causing damage, they may bear responsibility for failing to conduct due diligence.

Second, data leakage and violation risks. Plugins and automation tools may collect and transmit sensitive data without user awareness. Failure to explicitly inform users and obtain their consent violates the Personal Information Protection Law's provisions on collection and use.

Third, joint compliance liability. Under Article 27 of the Data Security Law, data handlers must establish a sound, whole-process data security management system. Even when external components are used, the enterprise's operations are still treated as one unified "data processing activity" for which it is accountable: a security flaw in any component that leads to a data leak could bring administrative penalties on the entire organization.

Fourth, third-party infringement risks. Automated operations like bulk data crawling may exceed reasonable bounds, constituting unauthorized access or even cyberattacks.

For enterprises, I recommend: establish strict plugin “white list” management, only using thoroughly audited and trusted sources; implement network isolation (sandboxing) for the agent environment, restricting network access to prevent vulnerabilities from affecting core systems.
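One way to make the plugin "white list" enforceable in practice is to pin each audited plugin build to a content hash, so that both unknown plugins and silently modified builds are rejected. This is a generic sketch under assumed names (`register_audited`, `may_load` are invented here, not an OpenClaw interface):

```python
# Sketch of plugin whitelist enforcement: only plugins whose name and
# content hash match a pre-audited registry may be loaded.
import hashlib

def fingerprint(plugin_bytes: bytes) -> str:
    return hashlib.sha256(plugin_bytes).hexdigest()

# Filled in at audit time with hashes of the exact reviewed builds.
AUDITED_PLUGINS: dict[str, str] = {}

def register_audited(name: str, plugin_bytes: bytes) -> None:
    AUDITED_PLUGINS[name] = fingerprint(plugin_bytes)

def may_load(name: str, plugin_bytes: bytes) -> bool:
    # Reject unknown plugins and any build whose bytes changed since audit.
    return AUDITED_PLUGINS.get(name) == fingerprint(plugin_bytes)
```

Pinning by hash rather than by name is what turns the whitelist into a supply-chain control: a "trusted" plugin that is later tampered with fails the check just like an untrusted one.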

NBD: For enterprise users planning to use OpenClaw, what compliance suggestions do you have?

Bai Yaohua: Enterprises should treat “governing digital employees” as a compliance approach, building a risk control system covering access evaluation, permission management, process auditing, and emergency response, with core obligations embedded in contracts.

During the access phase, conduct thorough due diligence: review open-source licenses of OpenClaw and plugins, clarify commercial restrictions and open-source obligations; sign clear contracts with developers, specifying functions, security standards, IP rights, liability limits, and compensation.

During deployment, follow the principles of least privilege and isolation. Minimize permissions—only assign the necessary system and data access for specific tasks—and sandbox the environment, especially for high-risk operations, running in isolated test environments before limited deployment.
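The isolation principle above usually translates into deny-by-default rules around the sandbox, e.g. for network egress. A minimal sketch, with invented internal host names standing in for whatever systems a real deployment actually needs:

```python
# Sketch of a deny-by-default egress policy for the agent's sandbox:
# only hosts explicitly needed for the task are reachable.
# Host names are invented for illustration.
from urllib.parse import urlparse

EGRESS_ALLOWLIST = {"api.internal.example", "erp.internal.example"}

def egress_permitted(url: str) -> bool:
    host = urlparse(url).hostname
    return host in EGRESS_ALLOWLIST
```

In a real deployment this check would live in the network layer (firewall or proxy rules) rather than in application code, but the policy shape is the same: everything not on the list is blocked.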

During operation, ensure full auditability and intervention capability. First, maintain detailed logs of decisions, actions, and results, with retention periods complying with the Cybersecurity Law (at least six months). Second, for critical operations like fund transfers, contract signing, or data deletion, set up mandatory manual approval or confirmation steps. Third, conduct regular security assessments, including vulnerability scans and compliance audits, especially for third-party components.
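The logging obligation can be reduced to two rules: every action is appended with actor, action, and result, and nothing is purged before the retention window expires. A schematic sketch (the field names are illustrative, not a prescribed schema; 183 days stands in for the six-month minimum the interview cites):

```python
# Sketch of an append-only operation log for an agent, with a retention
# check reflecting a six-month minimum. Field names are illustrative.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months

audit_log: list[dict] = []

def record(actor: str, action: str, result: str, now=None) -> None:
    audit_log.append({
        "ts": now or datetime.now(timezone.utc),
        "actor": actor,
        "action": action,
        "result": result,
    })

def purgeable(entry: dict, now: datetime) -> bool:
    # An entry may be purged only after the retention window has passed.
    return now - entry["ts"] > RETENTION
```

Kept this way, the log doubles as the "key evidence for fault determination" mentioned earlier: timestamps and results make it possible to reconstruct whether an error came from an instruction, a design flaw, or a component.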

Additionally, prepare for emergencies: develop contingency plans for abnormal behaviors (e.g., erratic trading, data leaks). Consider purchasing cybersecurity or product liability insurance to transfer some potential large liabilities.

I believe that the technological potential of OpenClaw is immense, but its legal risks grow exponentially. Enterprises embracing innovation must simultaneously build rigorous legal and compliance frameworks to ensure steady and sustainable development.
