Openclaw AI Goes Viral In China, Raising Cybersecurity Fears
The momentum mirrors a broader shift first seen in the United States at the start of the year, in which developers moved beyond conversational models toward agents capable of performing real-world actions. That wave has now reached China, triggering debate within industry and government over governance, safeguards and the risks of delegating sensitive tasks to software that may operate with limited transparency.
The Chinese government has warned that OpenClaw, with access across email and bank accounts, could expose sensitive personal and financial data. In China, deploying OpenClaw is nicknamed“raising lobsters,” a nod to the project's lobster mascot.
“The OpenClaw technology is spreading rapidly across society, from enterprises to individual users, bringing efficiency gains alongside rising security risks,” the Ministry of State Security said in providing“guidelines on raising lobsters” on social media on Tuesday.“Agent systems operate with broad permissions and can interact across multiple platforms, creating new vulnerabilities if not properly controlled.”
“'Lobsters' lack professional maintenance and patching mechanisms, and attackers may use malicious plugins to bypass their controls and actively exfiltrate users' core sensitive data, often with stealth exceeding traditional trojans,” it said.“Users should remain vigilant and avoid exposing critical resources to uncontrolled agent access.”
The ministry suggested that users:
-
check public exposure, permissions, credentials and plugin trust;
apply least privilege, limit scope, encrypt data, keep audit logs, run in a sandbox virtual machine and restrict core access;
treat it as a digital employee, enforce governance and keep use compliant, secure and controlled.
Prior to this, the National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC) had warned on March 10 that OpenClaw can control computers via natural language, but weak default security leaves users exposed to“prompt injection,” caused by hidden instructions that trick the AI agent into harmful actions.
Legal Disclaimer:
MENAFN provides the
information “as is” without warranty of any kind. We do not accept
any responsibility or liability for the accuracy, content, images,
videos, licenses, completeness, legality, or reliability of the information
contained in this article. If you have any complaints or copyright
issues related to this article, kindly contact the provider above.

Comments
No comment