Google Detects Rise of AI-Powered Malware Threats in 2025
In 2025, Google reported a significant rise in AI-powered malware. According to the Google Threat Intelligence Group, adversaries have begun leveraging large language models directly within their malware, leading to the emergence of new and more sophisticated malware families.
One such family, named PROMPTFLUX, queries Google's Gemini model to rewrite its own code dynamically, applying fresh obfuscation to evade detection by antivirus software. Google terms this capability 'just-in-time' self-modification, a degree of operational flexibility that traditional, statically written malware lacks.
PROMPTFLUX operates by prompting the Gemini API for new obfuscation techniques and can save the regenerated code to the Windows Startup folder to persist on infected systems. Google researchers noted that PROMPTFLUX still appears to be in development and has not yet demonstrated the ability to inflict significant damage, but its existence highlights where such threats are headed.
Another notable family is FruitShell, a PowerShell reverse shell that embeds hard-coded prompts intended to bypass LLM-powered security analysis. QuietVault, a JavaScript credential stealer, hunts for tokens from platforms such as GitHub and NPM, part of a broader trend of malware using AI to extend its credential-theft capabilities.
Google also reported on PromptLock, a cross-platform ransomware that uses an LLM to generate Lua scripts for encrypting data across operating systems. The report highlights a concerning pattern in which threat actors, including state-sponsored groups from China and Iran, have abused AI tools like Gemini for a range of malicious purposes, from crafting phishing lures to developing malware and improving their operational security.
For instance, a China-nexus actor bypassed Gemini's safety filters by posing as a participant in a capture-the-flag exercise, coaxing the model into providing exploit details. Similarly, Iranian hackers used Gemini for malware development, inadvertently exposing their own attack infrastructure in the process.
As the market for AI-powered cybercrime matures, Google has observed a growing number of advertisements for malicious AI tools on underground forums. These offerings lower the technical barrier to entry, allowing less skilled actors to carry out more sophisticated attacks.
Google emphasizes that AI development must be handled responsibly, with strong safety measures to prevent misuse. The report marks a pivotal moment for cybersecurity: the integration of AI into malware represents a new frontier of cyber threats that both organizations and individuals must prepare for.