ServiceNow AI Agents Vulnerable to Malicious Exploits
Full Transcript
Malicious actors can exploit vulnerabilities in ServiceNow's Now Assist AI platform, allowing them to orchestrate prompt injection attacks. According to The Hacker News, these attacks leverage the platform's default configurations, enabling unauthorized actions through agent-to-agent discovery capabilities.
Aaron Costello, chief of SaaS Security Research at AppOmni, highlighted that this isn't a bug but an expected behavior of the system. When agents can discover and recruit each other, a harmless request can escalate into a significant threat, potentially allowing criminals to steal sensitive data or gain unauthorized access to internal systems.
The attack is facilitated by the ability of one agent to parse specially crafted prompts embedded in accessible content, recruiting other agents to perform harmful actions such as copying sensitive data, modifying records, or sending emails.
Notably, these actions are executed behind the scenes, often without the victim organization's knowledge. Default settings that enable cross-agent communication include the automatic grouping of agents into teams and the default mark of discoverability when agents are published.
While these settings facilitate communication, they also increase vulnerability to prompt injections. The underlying large language models, specifically Azure OpenAI and Now LLM, support agent discovery, raising the stakes for potential exploitation.
AppOmni warns that attackers can redirect benign tasks assigned to a harmless agent into malicious actions by utilizing the functionality of other agents. Importantly, agents operate under the privileges of the user who initiated the interaction, not the user who created the malicious prompt.
Following responsible disclosure, ServiceNow confirmed that this behavior is intended and has updated its documentation for clarity. The findings emphasize the urgent need for enhanced protection of AI agents as enterprises increasingly utilize AI in their operations.
To mitigate these risks, organizations are advised to configure supervised execution mode for privileged agents, disable the autonomous override property, segment agent duties by team, and monitor AI agents for suspicious behavior.
Costello reiterated the importance of closely examining configurations, warning that organizations using Now Assist's AI agents may already be at risk if they do not take these precautions.