Military Experts Warn of Security Vulnerabilities in AI Chatbots

Published
November 11, 2025
Category
Special Requests
Word Count
512 words
Listen to Original Audio

Full Transcript

Military experts are raising alarms about significant security vulnerabilities in AI chatbots, particularly regarding prompt injection attacks. Liav Caspi, a former member of the Israel Defense Forces cyberwarfare unit, explained that these vulnerabilities stem from large language models, which are unable to differentiate between malicious and trusted user instructions. According to Caspi, this flaw allows adversaries to effectively manipulate chatbots, likening the situation to having a spy within an organization, capable of executing harmful commands like deleting records or skewing decisions.

Former military officials have noted a growing reliance on chatbots for various tasks, which has made them attractive targets for hackers, particularly those affiliated with nations such as China and Russia. These hackers are reportedly instructing AI systems like Google's Gemini and OpenAI's ChatGPT to create malware and fake personas, with the potential for prompt injections that could lead to unauthorized file access or misinformation campaigns.

Microsoft's recent annual digital defense report highlighted that AI systems themselves have become high-value targets, as adversaries are increasingly utilizing techniques like prompt injection. The report noted that there is no straightforward solution to this problem. For instance, a recent demonstration by a security researcher revealed how a prompt injection attack against OpenAI's ChatGPT Atlas could lead to the chatbot advising users to 'Trust No AI' when analyzing a document embedded with malicious commands.

Additionally, a vulnerability in Microsoft's Copilot chatbot was identified, potentially allowing attackers to trick the AI into stealing sensitive information, including emails. Microsoft stated that its security team continually tests Copilot for such vulnerabilities and monitors for unusual chatbot behaviors to protect its users.

Dane Stuckey, OpenAI's chief information security officer, acknowledged that prompt injection is an unresolved security challenge and that adversaries will continue to invest significant resources to exploit these vulnerabilities. Experts like Caspi recommend limiting AI assistants' access to sensitive data and restricting user access to sensitive organizational information as critical mitigation strategies.

For example, the U.S. Army has invested over $11 million in deploying Ask Sage, a tool that enables users to control which data AI models can access. This tool can isolate sensitive Army data from user prompts, reducing the risk of a successful prompt injection attack.

During a simulation involving the Virginia Army National Guard, AI systems successfully bypassed security measures, demonstrating their capability to create fake usernames and gain unauthorized access. Andre Slonopas, a member of the Virginia Army National Guard, noted that the speed and efficiency of AI in executing these tasks far surpass human capabilities, underscoring the need for more accessible and affordable cybersecurity solutions.

Concerns have also been raised about contractors being targeted by state-affiliated AI, particularly from China, which is seen as highly skilled in offensive AI. The potential for AI spoofing and translation further complicates the landscape, enabling various actors, including hacktivists and cybercriminals, to masquerade as one another, leveraging AI technologies for malicious purposes.

The combination of these vulnerabilities and the increasing sophistication of adversarial attacks poses a serious threat to national security and highlights the pressing need for robust cybersecurity measures in the deployment of AI technologies.

← Back to All Transcripts