The identity problem at the heart of agentic AI security
With over 43,000 attendees last week, San Francisco cybersecurity conference RSA exceeded its pre‑pandemic high‑water mark. Can you guess the topic du jour? “The entire conference felt like an agentic AI show,” Bob Kalka, Global Lead for Security Sales at IBM who attended RSA, told IBM Think in an interview. “Almost every one of hundreds of vendors on the expo floor were talking about agentic AI security.”
Agents are inherently more vulnerable than static code. AI agents, for example, change behavior dynamically at runtime when they call tools or shift contexts. This means many new attack opportunities emerge as they execute tasks and collaborate with other agents.

Yet while the excitement for agentic AI security was palpable, end-to-end orchestration for securing these agentic systems “was not a big theme” at the conference, said Dave McGinnis, Vice President for Global Cyber Threat Management, on a forthcoming episode of the Security Intelligence podcast. Suja Viswesan, Vice President for Security Products at IBM, observed the same. “Very few vendors spoke of end-to-end solutions,” she said. “There were a lot of point solutions, [but] we need to get to a place where we’re bringing things together.”

This holistic approach to security becomes ever more important as AI agents emerge with more dynamic functionality: a new kind of identity to protect, one that traditional identity and access management (IAM) frameworks weren’t built to handle. This is where tools like IBM Verify and HashiCorp Vault step in to deliver on that end-to-end identity security, with the former handling human identities and integrating with the latter to help oversee agentic ones.
HashiCorp Field CTO Jake Lundberg also attended RSA, where he came across some confusion about how to coordinate agents within a broader platform. He said a challenge he sees when meeting with clients is “not everyone has a great handle on the scope of their identities in the first place; and second, how do I attest that those identities are doing what they should be?”

For this reason, Lundberg works with companies to “ring-fence their identities and their workflows,” he said on the podcast. This ring-fencing, he added, is particularly important in regulated industries such as finance and healthcare, where the data handled is highly sensitive and one small compromise can snowball quickly, amplified by the speed and autonomy of an AI agent.

On the flip side, other companies, particularly those with fewer regulations, may be moving quickly to deploy new AI agent workstreams, feeling pressure from the board or stakeholders to innovate as fast as possible. “You have folks [who] are ‘YOLO-ing’ it,” Lundberg said. He likened this moment to the emergence of cloud, when many businesses rushed to adopt the new technology, in part for fear of missing out, without having a good handle on its security or workflows.

Across the board, the experts agreed that it’s high time to address agent security. In the latest Cost of a Data Breach report, 97% of organizations that reported an AI-related security incident lacked proper AI-dedicated access controls.

Ultimately, the competitive cost of subpar AI security is too high to ignore, especially when organizations with a coordinated, multi-agent strategy expect a 42% higher ROI than organizations with no AI security strategy, according to a new IBM Institute for Business Value study.

For Lundberg, at the most basic level, it comes down to the identities and the isolation of their workflows.
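The pattern Lundberg describes, identities that can be stood up fast and cut off just as fast, can be sketched in a few lines of code. The example below is a hypothetical illustration of short-lived agent credentials with a time-to-live and instant revocation; the `AgentIdentityBroker` class is invented for this sketch and is not HashiCorp Vault's or IBM Verify's actual API, which would handle this via their own token and lease mechanisms.

```python
import secrets
import time


class AgentIdentityBroker:
    """Hypothetical broker that mints short-lived agent identities.

    Models the ring-fencing pattern: every agent credential expires on
    its own (TTL) and can be revoked immediately if something goes wrong.
    A real deployment would delegate this to a secrets manager.
    """

    def __init__(self, ttl_seconds: float = 900.0):
        self.ttl = ttl_seconds
        self._active: dict[str, float] = {}  # token -> expiry timestamp

    def issue(self) -> str:
        """Mint a fresh, random credential that expires after the TTL."""
        token = secrets.token_hex(16)
        self._active[token] = time.monotonic() + self.ttl
        return token

    def is_valid(self, token: str) -> bool:
        """A credential is valid only if it exists and has not expired."""
        expiry = self._active.get(token)
        return expiry is not None and time.monotonic() < expiry

    def revoke(self, token: str) -> None:
        """Incident response: kill the identity the moment it misbehaves."""
        self._active.pop(token, None)


broker = AgentIdentityBroker(ttl_seconds=900)
tok = broker.issue()
assert broker.is_valid(tok)      # the agent can act while its lease lives
broker.revoke(tok)               # something went wrong: cut it off now
assert not broker.is_valid(tok)  # the identity is gone
```

The design choice worth noting is that validity is checked on every use rather than granted once: even without an explicit revocation, a compromised credential dies on its own when the TTL lapses, which limits how far a fast-moving agent compromise can snowball.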
“The fundamental pieces that are going to help you protect your environment,” Lundberg said, “is this ability to very quickly both stand up and change those identities in the event that something goes wrong.”

Join Bob Kalka and Jake Lundberg as they explore what modern identity governance looks like against the backdrop of the 2026 X-Force Threat Intelligence Report in this April 9 webinar.