Google says AI hacking surged in three months, putting 2FA defenses on alert
Google says criminal groups have used AI to push 2FA abuse from a nascent problem into an industrial-scale threat in just three months. The company’s threat intelligence group says the shift is already changing how attackers refine, test, and scale operations across software systems.
John Hultquist on the race
John Hultquist, chief analyst at Google’s threat intelligence group, said, “There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun.”
He also said, “Threat actors are using AI to boost the speed, scale, and sophistication of their attacks.”
For a security team, that means the pressure is no longer limited to finding a single flaw. It now extends to defending against attackers who can iterate faster and build better malware with the same commercial tools available to ordinary users.
Commercial models in use
Google’s report said criminal groups and state-linked actors from China, North Korea, and Russia appear to be widely using commercial models. The models named in the report included Gemini, Claude, and tools from OpenAI.
That puts mainstream AI products inside the attack chain. It also means the concern is not a niche tool built for cybercrime, but everyday software that can be repurposed for testing operations, maintaining persistence against targets, and making other improvements.
Zero-days and Mythos
The report said a criminal group was recently on the verge of using a zero-day vulnerability for a mass exploitation campaign. Google said that group appeared to be using a large language model other than Mythos.
The timing matters because Anthropic declined to release Mythos last month, saying the model had extremely powerful capabilities. Anthropic said Mythos had found zero-day vulnerabilities in every major operating system and every major web browser.
Steven Murdoch, professor of security engineering at University College London, said, “That’s why I’m not panicking. In general we have reached a stage where the old way of discovering bugs is gone, and it will now all be LLM-assisted. It will take a little while before the consequences of this get shaken out.”
The unresolved question is which problem defenders can realistically tackle first: the open availability of mainstream AI tools, or the faster, more focused abuse of those tools that Google says is already under way.