Current and former military officers are expressing concerns that adversaries may take advantage of inherent vulnerabilities in artificial intelligence chatbots to carry out malicious activities such as file theft, public opinion distortion, and betrayal of trusted users. This issue arises from prompt injection attacks, whereby large language models, which are foundational to chatbots, cannot effectively differentiate between harmful and legitimate user instructions.
Liav Caspi, a former member of the Israel Defense Forces cyberwarfare unit and co-founder of Legit Security, noted, “The AI is not smart enough to understand that it has an injection inside, so it carries out something it’s not supposed to do.” He indicated that adversaries could manipulate a chatbot to execute unintended commands, likening it to having a spy within an organization.
Military experts have warned that increasing reliance on chatbots could elevate risks, particularly as hackers—including those backed by China and Russia—are already utilizing tools like Google’s Gemini and OpenAI’s ChatGPT to create malware and deceptive identities. The prompt injection threat presents a significant danger, potentially enabling bots to be used for file copying or spreading misinformation.
In a milestone annual digital defense report released in September 2023, Microsoft highlighted the rise of AI systems as high-value targets for adversaries employing prompt injection techniques. Despite the growing awareness, the challenge of defending against prompt injection lacks straightforward solutions, as confirmed by OpenAI and other security researchers.
Prompt injection attacks can be executed by embedding harmful instructions within the content that chatbots consume, such as blog posts or PDF files. For instance, a security researcher illustrated a prompt injection attack on ChatGPT Atlas, triggering the bot to respond “Trust No AI” when given tainted documents. Additionally, a recent vulnerability in Microsoft’s Copilot was reported, which could have led to sensitive data theft.
Microsoft stated that its security team regularly attempts to identify prompt injection vulnerabilities and takes measures to mitigate them. Furthermore, they continuously monitor for unusual chatbot behavior to secure their systems against evolving threats. Dane Stuckey, OpenAI’s chief information security officer, acknowledged the prompt injection issue as a complex security challenge that adversaries will actively try to exploit.
Caspi emphasized the importance of limiting the impacts of these vulnerabilities by restricting AI tool accesses to sensitive information. For example, the U.S. Army has awarded contracts totaling at least $11 million to implement “Ask Sage,” a tool allowing users to limit the data that chatbots can access and ensuring isolation from external data sources.
In a broader context, the Army aims to enhance cybersecurity measures through simulations involving AI-based cyberattacks, collaborating with essential services to protect against AI-driven threats. During a September simulation, participants witnessed an AI successfully carrying out unauthorized actions against its systems.
Andre Slonopas, a member of the Virginia Army National Guard, stressed the urgent need to improve the accessibility and affordability of cybersecurity AI solutions, especially for smaller utilities. He stated that having advanced AI defense capabilities could significantly amplify human efforts in safeguarding networks against cyber threats.
Despite the ongoing challenges, there are assertions that certain nation-states, such as China, are particularly skilled in offensive AI tactics. A military official, speaking anonymously due to the sensitive nature of the information, noted that while China’s capabilities are notable, the use of AI tools enables various actors—including countries and cybercriminals—to imitate one another’s actions effectively.
Aliya Sternstein, J.D., is the investigative journalist responsible for this coverage, bringing extensive experience in technology, cognition, and national security.












