How Exposed OpenClaw Deployments Turn Agentic AI Into an Attack Surface
Public attention around OpenClaw has centered on dramatic narratives about autonomous AI agents coordinating independently or behaving unpredictably. In new research released this week, SecurityScorecard’s STRIKE Threat Intelligence team identified the actual risk facing users and organizations alike: not autonomy, but access, in the form of exposed infrastructure that attackers can abuse.
STRIKE found tens of thousands of exposed OpenClaw instances, many of which are vulnerable to Remote Code Execution (RCE), with 35.4% of observed deployments flagged as vulnerable at the time of writing.
“There’s no shortage of adversaries that want to target those exposures if they aren’t already.”
— Jeremy Turner, VP of Threat Intelligence & Research
OpenClaw and other agentic AI tools are designed to take actions on a user’s behalf, interact with infrastructure, and move across connected services. They can modify files, deploy services, respond to messages, interact with APIs, and operate across connected systems. That functionality is the appeal. It is also the risk for users around the globe, said Jeremy Turner, SecurityScorecard’s VP of Threat Intelligence & Research.
When they are deployed without guardrails, they expose infrastructure in ways attackers already know how to exploit.
Read the full research here, or watch our video discussion of the findings.
For a full breakdown of STRIKE’s findings, including exposure trends and vulnerability categories (updated every 15 minutes), visit the STRIKE team’s declawed dashboard.
How will attackers find exposed OpenClaw instances?
The STRIKE Threat Intelligence team built declawed.io to show defenders what that exposure looks like at scale. The dashboard carries important lessons for users and enterprises alike: If threat researchers can see these exposures, so too can the adversaries.
“Certainly the perspective that we see is the same one that the adversaries see as well,” Turner said. “Users just need to be aware that when they install these things, there’s a lot of additional exposures that are going to come along with it. And there’s no shortage of adversaries that want to target those exposures if they aren’t already.”
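That visibility cuts both ways. As a rough illustration (not part of the STRIKE research, and using an illustrative port), a few lines of Python are enough to check whether a host answers on a given TCP port, which is the same basic reachability probe that internet-wide scanners run continuously:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Probe your own deployment's address: any port that answers here
# also answers for adversaries running the same check at internet scale.
print(is_port_open("127.0.0.1", 8080))
```

Scanners layer banner grabbing and service fingerprinting on top of this, but reachability is the starting point: anything that answers is discoverable.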
How could bad actors abuse exposed OpenClaw agents?
One of the most serious findings in the research is the prevalence of Remote Code Execution (RCE) vulnerabilities across exposed OpenClaw instances, Turner noted.
RCE vulnerabilities allow an attacker to send a malicious request to a service and execute arbitrary code on the underlying system. In traditional environments, that is already a critical issue. In agentic AI environments, the consequences escalate quickly, because the attacker can take control of the agent itself and inherit whatever access the agent already has.
“This is an open door or open window, so to speak,” Turner said. “It’s a straight path for adversaries from having no access to having the ability to access systems and make changes.”
When OpenClaw runs with permissions to email, APIs, cloud services, or internal resources, an RCE vulnerability can become a pivot point. A bad actor does not need to break into multiple systems. They need one exposed service that already has authority to act.
What a realistic attack path looks like
Once an attacker controls an exposed agent, the access they inherit may include credentials stored on the system, integrations with third-party services, or permissions to modify infrastructure. Because the agent’s actions may look legitimate, abuse can blend into normal activity.
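To make that concrete, here is a minimal sketch (the variable names are hypothetical, not drawn from the research) of how much an agent process quietly inherits just from its environment. Any RCE in the agent can read every one of these:

```python
import re

# Common naming patterns for credential material in environment variables.
SENSITIVE = re.compile(r"(TOKEN|SECRET|KEY|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def inherited_secrets(env: dict) -> list:
    """Names of environment variables a compromised agent process could read."""
    return sorted(name for name in env if SENSITIVE.search(name))

# Synthetic environment for illustration (in practice you would pass os.environ):
demo_env = {
    "PATH": "/usr/bin",
    "SLACK_BOT_TOKEN": "xoxb-...",
    "AWS_SECRET_ACCESS_KEY": "...",
}
print(inherited_secrets(demo_env))  # ['AWS_SECRET_ACCESS_KEY', 'SLACK_BOT_TOKEN']
```

Running the same audit against the real environment of the host where an agent is installed is a quick way to size the blast radius before granting it anything further.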
Turner noted that organizations should think of AI agents as additional identities inside the environment, each with their own access, permissions, and potential to create risk if misused.
“Be aware of all the third parties and suppliers that might have users or technology teams that are experimenting with this,” Turner advised. “It’s very easy to create pretty serious vulnerability or data exposures with these technologies. It’s literally just like hiring somebody and placing them in the organization with access to whatever data is on that system where the software gets installed.”
Security teams should also be aware that bad actors may seek to abuse OpenClaw exposures to build botnets: large networks of compromised machines that attackers control to launch malicious operations.
“It’s pretty safe to say that somebody will try to make a botnet out of these exposures,” Turner warned. “It’s a golden opportunity, and I’m sure it’s not going to be missed.”
Compromised agentic AI systems provide compute, connectivity, and persistence. That combination may make them enticing targets for cryptocurrency mining operations as well, Turner noted. Even if no sensitive data is accessed, infrastructure abuse still carries cost and operational impact.
How widespread are the security issues with OpenClaw?
One of the more unusual trends in the OpenClaw research is that the number of exposed instances appears to be growing over time, Turner noted.
In just the first 24 hours of scanning, STRIKE identified more than 40,000 exposed instances of OpenClaw, and that number has continued to grow since. Turner noted that unlike traditional vulnerability scans, where exposure typically declines as patches are applied, OpenClaw exposure has increased as adoption accelerates.
“It’s a pretty staggering growth rate. Usually when we do scans for a vulnerability, what we start with in the first scan is the most of the exposure and then over time it decreases. In this case, because it’s a new technology and more users are adopting it, we see that trend actually doing the opposite,” Turner said.
As interest in agentic AI grows, more users will likely deploy tools like OpenClaw, Turner noted. Many do so quickly, without security models designed for agents that can act across systems. Adoption is currently outpacing hardening.
Protecting organizations from exposed agentic AI and automation risks
For organizations, these risks matter even if OpenClaw is not an approved tool. Employees experiment, developers test, and third parties adopt tools without formal review. Each case may expand the attack surface.
These systems receive instructions, interpret context, and act across environments. If access is broad, mistakes and abuse propagate quickly. While traditional security fundamentals still apply, security teams must apply them deliberately.
Network segmentation limits where agents can operate. Role-based access controls restrict what they can do. Multi-factor authentication protects associated identities. Least privilege reduces damage when something goes wrong.
Users need to think about isolation, segmentation, and the blast radius of possible issues before trusting these tools with sensitive access. Running agents in isolated environments limits exposure. Segmenting networks prevents lateral movement. Restricting permissions reduces the impact of compromise.
You can learn more from the new STRIKE intel on OpenClaw exposures here or at declawed.io, where you can view updated data every 15 minutes.
Connect with SecurityScorecard’s STRIKE Threat Intelligence Team for detailed reports, API access, or custom intel feeds at declawed.io.