The rapid integration of AI agents into business workflows is exposing a critical flaw: no clear system exists for managing their identities and access rights. As these agents gain the ability to log into systems, retrieve data, and execute actions on behalf of companies, the question of who is responsible – and how to control that access – remains largely unanswered.
This isn’t a theoretical concern. Experts like Alex Stamos (Corridor) and Nancy Wang (1Password) warn that developers are already making dangerous mistakes, such as pasting credentials directly into AI prompts. This bypasses security protocols and creates a massive vulnerability.
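One practical mitigation is to treat anything headed into a prompt as untrusted output: scan it for credential-shaped strings before it leaves the process. The helper and patterns below are a minimal illustrative sketch, not an exhaustive or production-grade scanner.

```python
import re

# Illustrative patterns for common credential formats. A real deployment
# would use a maintained secret-scanning library with far broader coverage.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                  # GitHub personal access tokens
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key headers
]

def redact_prompt(text: str) -> str:
    """Replace likely credentials with a placeholder before prompting an LLM."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

prompt = "Deploy with key AKIA1234567890ABCDEF and report status."
print(redact_prompt(prompt))  # the key is replaced with [REDACTED]
```

Redaction at the boundary doesn't fix the underlying identity problem, but it blocks the most direct failure mode the experts describe: a secret leaving the organization inside a prompt.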
The Problem: Agents Have Secrets Too
The issue isn’t just about preventing unauthorized access; it’s about accountability. Unlike human users, AI agents don’t inherently belong to an organization or individual. They operate under an authority that dictates what they can do, but tracking this authority is proving difficult. As Wang explains, companies are seeing a familiar pattern: employees adopt tools like AI coding assistants (Claude Code, Cursor) and then bring them into the enterprise, replicating the early adoption of password managers like 1Password.
The problem isn’t just that agents have credentials; it’s that existing security infrastructure isn’t designed for them.
Why Existing Solutions Fail
Traditional security models focus on authentication (verifying identity) but struggle with authorization (granting appropriate access). Giving an AI agent full access to a system is equivalent to handing a human a key to the entire building – far more than necessary for any single task.
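The building-key analogy maps directly onto code: instead of one all-powerful credential, an agent gets a grant scoped to named actions with a short lifetime. The class and scope names below are hypothetical, a sketch of the principle rather than any particular product's API.

```python
from datetime import datetime, timedelta, timezone

class ScopedGrant:
    """A hypothetical agent credential: named scopes plus a short expiry."""

    def __init__(self, agent_id: str, scopes: set[str], ttl_minutes: int = 15):
        self.agent_id = agent_id
        self.scopes = scopes
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)

    def allows(self, scope: str) -> bool:
        """Permit an action only if it is in scope and the grant is unexpired."""
        return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at

# The agent can read the repo and file tickets -- and nothing else.
grant = ScopedGrant("ci-agent-7", {"repo:read", "tickets:write"})
print(grant.allows("repo:read"))    # True: within scope and unexpired
print(grant.allows("prod:deploy"))  # False: never granted
```

The denial of `prod:deploy` is the whole point: authorization failures become explicit, checkable events rather than an agent quietly holding keys it never needed.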
This mismatch is compounded by an accuracy problem: LLM-driven security tooling is prone to false positives, and a scanner that flags legitimate code as malicious can derail an entire development session. Precision is crucial, and traditional static analysis tools aren't optimized for that level of accuracy.
The Path Forward: Workload Identity Standards
The industry is exploring solutions like SPIFFE and SPIRE, standards originally designed for containerized environments, but adapting them is imperfect. The core principle is granting scoped, auditable, time-limited identities. Just as a human should only have access to specific rooms in a building, an AI agent should only be granted credentials for the task at hand, expiring after completion.
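SPIFFE's contribution here is a uniform, verifiable name for a workload: an identity of the form `spiffe://<trust-domain>/<workload-path>`, issued as a short-lived document rather than a long-lived secret. The parsing sketch below uses an invented trust domain and path purely for illustration; real SPIFFE verification involves validating a signed SVID, not just the string.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> tuple[str, str]:
    """Split a SPIFFE-style ID into (trust domain, workload path)."""
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id}")
    return parts.netloc, parts.path

trust_domain, workload = parse_spiffe_id("spiffe://example.org/agents/report-writer")
print(trust_domain)  # example.org
print(workload)      # /agents/report-writer
```

Naming the agent this way is what makes the rest possible: a scoped, expiring credential has to be bound to some identity, and that identity has to be one the rest of the infrastructure can verify.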
Companies will need to track which agent acted, under what authority, and with which credentials. This requires building new infrastructure from the ground up, rather than retrofitting human-centric security models.
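The three facts named above — which agent, under whose authority, with which credential — are exactly what an audit record needs to capture. A minimal sketch, with invented field and agent names, might look like:

```python
import json
from datetime import datetime, timezone

def audit_event(agent_id: str, authorized_by: str,
                credential_id: str, action: str) -> str:
    """Emit one structured audit record as a JSON line."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,            # which agent acted
        "authorized_by": authorized_by,  # the human or service that delegated
        "credential_id": credential_id,  # a reference, never the secret itself
        "action": action,
    })

event = audit_event("ci-agent-7", "alice@example.com", "cred-123", "repo:read")
print(event)
```

Logging a credential *reference* rather than the credential itself is deliberate: the audit trail must reconstruct authority without becoming a second store of secrets.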
The Scale Problem: Billions of Users Change Everything
At massive scale, even “edge cases” become real threats. Stamos, drawing on his experience as Facebook’s CISO, notes that dealing with 700,000 account takeovers per day reframes the concept of risk. Identity management for both humans and AI agents will be a “humongous problem,” requiring consolidation around trusted providers.
Ultimately, the current rush to deploy AI agents is outpacing the development of proper governance frameworks. The solution isn’t proprietary, patented tools (Stamos dismisses them outright), but rather open standards like OIDC extensions that prioritize security without sacrificing usability. The future of enterprise AI hinges on solving this identity crisis before it leads to widespread breaches and irreversible damage.