Okta Tightens Agent Identities For Machine-To-Machine Connections

By Adrian Bridgwater Senior Contributor



When will full autonomy happen? It’s the question tabled at every technology vendor meeting these days. The IT industry is frantically building agentic AI services and everybody wants a seat at the table. With CEOs at Salesforce, Microsoft and elsewhere claiming to now hand over somewhere between 30 and 50 percent of work to AI services, the degree to which agents now start talking to agents is of great importance.

Human, Out Of The Loop?

Until recently, technology advocates and evangelists were fond of mentioning the human-in-the-loop (and human handoff) element when talking about emerging AI services. It was a sort of appropriate lip service that needed mentioning, just to calm the people who worry about the robots taking over. A lot of that has changed and of course Google underlined the trend this April with the introduction of the A2A agentic communications standard.

“[We have launched] a new, open protocol called Agent2Agent, with support and contributions from more than 50 technology partners. The A2A protocol will allow AI agents to communicate with each other, securely exchange information and coordinate actions on top of various enterprise platforms or applications. We believe the A2A framework will add significant value for customers, whose AI agents will now be able to work across their entire enterprise application estates,” noted the Google for Developers blog.

But where are humans in the loop now?

Speaking at a press gathering in London this week, Nutanix CEO Rajiv Ramaswami acknowledged the forthcoming inevitability of agentic intercommunication and said that his firm is working to provide as broad a scope of cloud infrastructure as possible to enable the new (and next) age of AI with simpler (if not pleasingly invisible) cloud services. The infrastructure comes first, he acknowledged, and agentic identity management comes as a subsequent tier, one for which Nutanix will look to collaborate with its significantly expanded partner ecosystem, which has swelled in the wake of VMware’s move to Broadcom. Ramaswami called for an understanding of how, when, why and where we weave this new fabric of intelligence.


Identity Steps Up

If it is time for hardcore identity players to come forward, then identity platform company Okta would arguably rank in the “usual suspect” lineup in this space. This summer, the company introduced Cross App Access, a new protocol to help secure AI agents. As an extension of the open standard OAuth (technology that provides authorization controls to grant third-party applications access to other resources), Okta says its new services bring control to both agent-driven and app-to-app interactions.

In short, it allows developers and data scientists to decide what apps are connecting to what… and what information AI agents can actually access.
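To make the idea concrete, here is a minimal sketch of the kind of request an agent might make under an OAuth-based cross-app scheme, in the style of OAuth 2.0 Token Exchange (RFC 8693). This is an illustration of the underlying standard, not Okta’s actual API; the endpoint audience, token values and scope names are hypothetical.

```python
# Sketch: an AI agent exchanging its own token for one scoped to a
# downstream app, following the OAuth 2.0 Token Exchange pattern (RFC 8693).
# The audience URL, token string and scope names are illustrative.

def build_token_exchange_request(agent_token: str, target_app: str, scopes: list[str]) -> dict:
    """Build the form body an agent would POST to an authorization server
    to obtain an access token limited to one target application."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": agent_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": target_app,      # the app the agent wants to reach
        "scope": " ".join(scopes),   # only the permissions it actually needs
    }

request_body = build_token_exchange_request(
    agent_token="eyJ...agent",                  # placeholder token
    target_app="https://slack.example.com",     # hypothetical target app
    scopes=["channels:read"],
)
```

The point of the pattern is that the resulting token is bound to a single audience and scope set, so IT can see, and revoke, exactly which app-to-app connections exist rather than relying on broad, manually granted consents.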

According to Arnab Bose, chief product officer for Okta platform, more AI tools are using technologies like Model Context Protocol and A2A to connect their AI models to relevant data and applications within the enterprise. However, for connections to be established between agents and apps themselves (think about Google Drive or Slack as good examples of applications that an agent might want access to) users need to manually log in and consent to grant the agent access to each integration.

Amplified Agentic Explosion

Despite this, Bose says, app-to-app connections occur without oversight, with IT and security teams having to rely on manual and inconsistent processes to gain visibility. This creates a big blind spot in enterprise security and expands an increasingly unmanaged perimeter.

This challenge, he says, will be amplified with the explosion of AI agents, which are introducing new, non-deterministic access patterns, crossing system boundaries, triggering actions on their own and interacting with sensitive data. The position at Okta is that “today’s security controls aren’t equipped to handle their autonomy, scale and unpredictability” and that existing identity standards are not designed for securing an interconnected web of services and applications in the enterprise. The company says that while MCP improves transparency and communication between agents, it could still benefit from additional identity access management features.

“We’re actively working with the MCP and A2A communities to improve AI agents’ functionality, but their increased access to data and the explosion of app-to-app connections will create new identity security challenges,” said Bose. “With Cross App Access, Okta brings oversight and control to how agents interact across the enterprise. Since protocols are only as powerful as the ecosystem that supports them, we’re also committed to collaborating across the software industry to help provide agents with secure, standardized access to all apps.”

Where Agents Need Tightening

The question now, presumably, is where exactly should we tighten up identity controls for agentic AI services first? The password login box has been a bull’s-eye for attackers for a long time. Why? Because it’s the primary path to sensitive data. Although most people now realize that “password123” is a bad idea, organizations will now need to gain a new and fundamental understanding of their sprawling human and machine identities.

“Now, take that existing chaos and multiply it by a million. Picture a world where millions of AI agents, autonomous pieces of code acting on behalf of both users and other machines, are interacting with your systems. Suddenly, that messy frontline looks like a wide-open battlefield. We could be in for a world of trouble,” said Shiv Ramji, president of Auth0 at Okta.

According to PwC’s AI Agent Survey, nearly 80% of senior executives said their companies are already adopting AI agents. However, by moving quickly from prototypes to production without adequate governance and access controls, there is a real potential for agentic AI “shadow IT” and the introduction of systemic risk. The bottom line for developers is all about keeping the IT stack secure, enabling new agent-to-agent communication to happen… and still keeping the existing operational lights on. But this time, it’s not just identity. It extends beyond access to who has permissions to specific resources, such as databases, documents, internal sites, wiki pages, other tools/systems and other agents.

Agentic Weakness Factors

Ramji asks us to consider the following risk factors:

Token Sprawl: Every AI agent needs credentials, often in the form of tokens, to access resources and perform actions. A single token leak isn’t just one compromised account; it can unlock what he calls “a systematic compromise” across an organization’s infrastructure, giving malicious parties a “digital skeleton key” into a business.

No Audit Trail: When an AI agent acts, IT needs to know on whose behalf it acted and which human initiated the agent. Without a robust audit and control mechanism, accountability evaporates. This breaks compliance mandates, obscuring who did what and when.

Lateral Movement: If an attacker compromises one agent and that agent has overly broad permissions or trusts other agents implicitly, a company just handed them a roadmap to navigate its entire network. They can move freely, escalating privileges and exfiltrating data, all while blending in with legitimate agent activity.

Overly Broad Access: Just like humans, AI agents are often granted more access than they strictly need to do their job, violating the principle of least privilege. If an agent designed to update a database can also delete entire datasets, a firm is setting itself up for failure.
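Two of the risk factors above, the missing audit trail and overly broad access, lend themselves to a small sketch. The token shape, scope names and log fields below are illustrative assumptions, not any vendor’s actual API; the point is that every agent action is checked against an explicit scope set and recorded with the human it acted for.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Sketch: enforcing least privilege and keeping an audit trail for agent
# actions. AgentToken's shape and the scope names are illustrative only.

@dataclass
class AgentToken:
    agent_id: str
    on_behalf_of: str          # the human who initiated the agent
    scopes: frozenset          # the only actions this token permits

audit_log: list = []

def perform_action(token: AgentToken, action: str) -> bool:
    """Allow the action only if the token's scopes cover it, and record
    who acted, for whom, what they tried, and when."""
    allowed = action in token.scopes
    audit_log.append({
        "agent": token.agent_id,
        "on_behalf_of": token.on_behalf_of,
        "action": action,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed

token = AgentToken("report-agent-7", "alice@example.com", frozenset({"db:update"}))
perform_action(token, "db:update")   # within scope: permitted
perform_action(token, "db:delete")   # outside scope: denied and logged
```

A denied `db:delete` here is the least-privilege principle working as intended, and the log entry it leaves behind is exactly the accountability record that evaporates when agents share broad, unaudited credentials.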

“So, how do we tackle these systemic risks at scale? This isn’t just about individual application hardening; it’s about establishing a standardized, secure way for agents to function in an interconnected world. Open protocols, such as MCP and Google’s A2A, will be key to this, enabling interoperability and preventing vendor lock-in. While MCP focuses on an agent’s interaction with tools, Google’s A2A protocol addresses the equally crucial problem of how AI agents communicate and collaborate with each other. In a complex enterprise environment, you won’t have just one agent; you’ll have an ecosystem of specialized agents,” said Ramji. “This is also why you need to build identity security into your AI agents from the ground up.”

The Way Forward

The safest way forward in this space appears to include several factors, such as the need to architect bespoke login flows for AI agents. This means dedicated authentication mechanisms designed for machine-to-machine interaction.

Okta’s Ramji concludes his commentary in this space by saying that organizations need to use OAuth 2.0 for secure tool integrations: when AI agents integrate with external services like Gmail or Slack, we don’t need to reinvent the wheel; we can lean on established, secure authorization frameworks like OAuth 2.0 today. Organizations should also still design for human-in-the-loop approvals, baking in a mechanism for human oversight, especially for critical or sensitive actions.
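The human-in-the-loop recommendation can be sketched as a simple gate: routine actions run autonomously, while anything on a sensitive list is routed to a human approver first. Which actions count as “sensitive” and how approval is actually collected are assumptions made here for illustration.

```python
# Sketch: a human-in-the-loop approval gate for sensitive agent actions.
# The action names and the approval callback are hypothetical.

SENSITIVE_ACTIONS = {"delete_dataset", "send_external_email", "change_permissions"}

def execute(action: str, approve) -> str:
    """Run routine actions autonomously; route sensitive ones through
    a human approval callback before anything happens."""
    if action in SENSITIVE_ACTIONS and not approve(action):
        return "blocked: awaiting human approval"
    return f"executed: {action}"

# A stand-in approver that denies everything until a human signs off.
deny_all = lambda action: False

routine = execute("update_row", deny_all)        # routine: runs autonomously
sensitive = execute("delete_dataset", deny_all)  # sensitive: held for a human
```

In practice the `approve` callback would open a ticket, a chat prompt or a dashboard review rather than return a hard-coded answer; the design point is that the agent cannot reach the sensitive path without a human decision in between.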

While Okta’s key competitor list includes Microsoft Entra ID, Cisco (for Duo Security), ForgeRock, OneLogin, CyberArk, IBM (for its Security Verify layer) and all three major cloud hyperscalers from AWS to Google Cloud to Microsoft Azure… most of the vendors in this space would largely concur with the general subject matter discussed here. It’s all about human management in the first instance and that’s why documentation is fundamental in any scenario like this, where code annotations have to exist to prove what connects to what.

Humans will still be in the loop, even when that loop is humans building an agent-to-agent loop… and that’s a large part of how we keep this tier of software application development working properly.
