AI Models Advancing Cybersecurity Threats Amidst Tech Innovations

Published
December 17, 2025
Category
Technology
Word Count
388 words
Voice
eric

Full Transcript

IT managers have limited visibility into when users give external apps access to company data. When those external apps are AI agents, the security risks multiply dramatically. Okta has proposed a new standard to provide organizations with greater visibility and control over these permissions.

By the end of 2026, many organizations are expected to have at least one AI-powered agent operating behind the scenes, and potentially tens or hundreds within five years. These agents will make autonomous decisions and connect to multiple data sources to optimize their actions.

This growing reliance on AI agents presents a significant challenge for cybersecurity, particularly as employees may grant these agents access to sensitive corporate resources without proper oversight.

Current credentialing methods, such as OAuth tokens, may not be adequate for managing this new landscape of AI-driven access. Okta identified a flaw in how access is approved for applications like Slack: identity and access management (IAM) systems are often not involved in the consent decisions made by end users.

Okta is working with the Internet Engineering Task Force on an open standard called Identity Assertion Authorization Grant, or IAAG, which aims to bridge this gap. This standard allows IT managers to maintain control over applications and AI agents, ensuring that only authorized access permissions are granted.

Early adopters of the IAAG standard include major companies like Google, Amazon, and Microsoft, indicating a collective industry push towards better security measures. Okta's director of identity standards, Aaron Parecki, emphasized the importance of this standard, especially in a future where AI agents could autonomously engage in OAuth workflows without organizational oversight.

This shift is particularly critical given the increasing prevalence of cyberattacks that exploit stolen OAuth tokens, as demonstrated by recent breaches affecting Salesforce. The new approach gives organizations more centralized control over the permissions granted to AI agents, addressing the security vulnerabilities that could arise from their use.

The proposal is timely as organizations face the growing risk of malicious AI agents that could exploit vulnerabilities in their systems. The IAAG standard aims to ensure that consent for resource access is managed through the organization's IAM system rather than left solely to end users, who may not recognize or guard against threats.

Overall, the IAAG standard represents a significant step towards securing organizational resources in an environment increasingly populated by autonomous AI agents.
