A new report from Google's Threat Intelligence Group (GTIG) reveals that state-backed hackers from North Korea, Iran, and China are actively experimenting with and optimizing cyberattacks using artificial intelligence (AI) tools—specifically, Google's Gemini.
According to Google, multiple state-affiliated groups have been leveraging its large language models for malicious activity across all stages of their operations, from reconnaissance, social engineering, and phishing lure creation to malware development, command and control (C2) tooling, and data exfiltration.
### AI-Enabled Sophistication in Cyberattacks
The report uncovers evidence of novel and sophisticated AI-enabled attacks, warning that generative AI is lowering the technical barriers for malicious operations. AI helps attackers work faster and with greater precision, making cyber threats more dangerous.
This new insight builds on similar warnings from Microsoft and OpenAI, which disclosed comparable experimentation by the same trio of nation-backed threat actors. Additionally, Anthropic—the company behind Claude AI—released a report detailing how it has been detecting and countering AI-assisted attacks, with North Korea-linked groups prominently identified as key perpetrators.
### State Actors Turn to AI
Google’s latest threat intelligence update provides detailed examples of how these groups misuse AI:
– **Iranian Group TEMP.Zagros (aka MuddyWater)**: Used Gemini to generate and debug malicious code in an effort to develop custom malware, framing its requests as academic research. The operation inadvertently exposed critical operational details, allowing Google to disrupt parts of the group's infrastructure.
– **China-Linked Actors**: Employed Gemini to improve phishing lures, conduct reconnaissance on target networks, and research lateral movement techniques within compromised systems. In some cases, they used Gemini to explore unfamiliar environments such as cloud infrastructure, Kubernetes, and vSphere, suggesting efforts to expand technical capabilities.
– **North Korean Operators**: Observed probing AI tools to enhance reconnaissance and phishing campaigns. A North Korean threat group known for cryptocurrency theft attempted to use Gemini to write code designed to steal cryptocurrency. Google successfully mitigated these attacks and shut down the involved accounts.
### Anthropic’s Findings on AI Misuse
Anthropic’s report, released in August 2025, supports Google’s findings on AI misuse by state-linked actors. The company found North Korean operatives using its Claude model to pose as remote software developers seeking jobs. They generated resumes, code samples, and technical interview answers with Claude to secure freelance contracts abroad.
While this fraudulent job-seeking scheme was primarily a means of generating revenue and gaining access, it also potentially paved the way for larger hacking operations against the hiring organizations. Anthropic's findings reinforce Google's conclusion that bad actors are systematically testing AI tools for operational advantage.
### The Growing Challenge for Cybersecurity
These discoveries present a new challenge for the global cybersecurity community. The very features that make AI models powerful productivity tools also make them potent instruments for attackers. As AI technology advances, attackers will continue to adapt, making their tactics increasingly sophisticated.
Governments and technology companies have started responding to these threats. Moving forward, ongoing collaboration among all stakeholders will be essential to mitigate AI-enabled cyberattacks effectively.
