Blog February 17, 2026

What Are the Real Security Risks of Agentic AI and OpenClaw?

For weeks, headlines have focused on AI agents seemingly organizing online and appearing to act independently, fueling speculation about artificial general intelligence (AGI). They’re not necessarily revolting or posting of their own accord, though. The agents are just doing what they were built to do in the first place, Anne Griffin, Head of AI Product Strategy, and Jeremy Turner, VP of Threat Intelligence & Research, explained.

The behavior of agentic AI assistants can appear self-directed because they are built to continuously react to prompts and context, Griffin noted.

“Some people have asked, ‘is this AGI?’ because the agents appear to be having conversations with each other. However, a lot of the things that LLMs are doing are happening because of a prompt,” Griffin said.

Large Language Models (LLMs) predict the next token based on prior data. When powering an agentic framework, that predictive engine may have access to tools, permissions, and the ability to execute multi-step actions. It may send emails, call APIs, post on public forums, and respond to new inputs.

Watch our video discussion on the research:

Tens of thousands of OpenClaw instances are already vulnerable, providing countless opportunities for malicious threat actors to exploit, according to new research from SecurityScorecard’s STRIKE Threat Intelligence team.

Read the full research here.

What do OpenClaw exposures and vulnerabilities mean?

STRIKE researchers identified tens of thousands of OpenClaw instances exposed to the public internet, many of which are vulnerable to Remote Code Execution (RCE).

RCE is a particular class of vulnerability which allows an attacker to run arbitrary code on a system, exposing users to any motivated actor. If an attacker breaks in to an AI agent, they will inherit each of the privileges that are already there.

Users may give OpenClaw permission to:

  • Send emails
  • Access internal files
  • Deploy services
  • Call third-party APIs

Turner warned that OpenClaw adoption and exposures are moving faster than organizations realize, especially as AI agents themselves may be able to introduce new security issues:

“Some of those vulnerabilities may have even been introduced by the agents actually deploying things, installing services, taking certain actions, changing firewall rules. It really depends on how much permissions those users gave the system. It’s only a matter of time before we see threat actors actively exploiting these exposures.”

— Jeremy Turner, VP of Threat Intelligence & Research at SecurityScorecard

That observation underscores a deeper concern. Agentic systems can interact with infrastructure in ways that can expand the already vulnerable attack surface.

In other words, the agent may not just sit on top of risk, it may help create it.

For a full breakdown of STRIKE’s findings, including exposure trends and vulnerability categories (updated every 15 minutes), visit the STRIKE team’s declawed dashboard.

How prompt injection creates real-world AI security risk

Even without exploiting the Common Vulnerabilities and Exposures (CVEs) listed in STRIKE’s declawed.io dashboard, threat actors may attempt something far simpler: manipulating the agent through malicious input, or prompt injection. 

Because agentic AI systems operate based on prompts, they can be susceptible to prompt injection. In these attacks, a malicious actor crafts input meant to override or manipulate an agent’s intended output.

“Because this is really just a series of prompts for these agents, it becomes incredibly easy for people to prompt inject your agent,” Griffin said. “If you gave it access to your email, your schedule, financial details, you’re in a lot of trouble.”

Griffin shared a practical framework for assessing the security risk of agentic AI, which can help users evaluate how much exposure they may be absorbing via an AI tool before granting access to sensitive accounts or systems. If the agent can read sensitive data, receive untrusted input, and send messages or execute actions outward, a single malicious prompt can spread rapidly across sensitive platforms or systems.

“Does it have access to private and sensitive data? Does it have exposure to untrusted input? And does it have the ability to also publish or send messages outward?” Griffin noted. “All three together are incredibly lethal.”

— Anne Griffin, Head of AI Product Strategy

Prompt injections can translate into direct, real-world impact, from unauthorized data disclosure to fraudulent transactions or reputational damage.

What are agentic AI security best practices?

Despite the novelty of agentic AI, the defensive playbook remains grounded in fundamentals.

Turner emphasized that implementing security guardrails for agentic AI does not require new frameworks. “The safeguards that are universally applicable are the standard go-tos for good information security hygiene. It’s network segmentation, role-based access,” Turner said. “Keep it on a separate network. And if you do introduce data, understand that that data could be exposed.”

Organizations should treat AI agents as identities with authority. If you would not grant a new employee unrestricted access to sensitive systems, you should not grant it to an agent either.

Public fascination with agentic AI can often drift toward philosophical questions about autonomy and superintelligence. But the immediate threat is exposed infrastructure, according to the STRIKE team’s findings.

Attackers do not need to outsmart a large language model. They need one vulnerable service running with broad access.

You can learn more from the new STRIKE intel on OpenClaw exposures at securityscorecard.com/strike or declawed.io, where you can view updated data every 15 minutes. 

Connect with SecurityScorecard’s STRIKE Threat Intelligence Team for detailed reports, API access, or custom intel feeds at declawed.io

 

default-img
default-img

Have a media inquiry?

Contact us today