NextFin News - In a comprehensive report released on February 12, 2026, the Google Threat Intelligence Group (GTIG) disclosed that state-sponsored cyber actors from China, North Korea, and Iran have integrated the Gemini artificial intelligence platform into their offensive operations. The findings, published in the Q4 2025 Threat Intelligence Report, detail how these adversaries are moving from experimental use of large language models (LLMs) to active deployment, using the technology to increase the speed and sophistication of cyber espionage and financial theft. According to GTIG, the groups are leveraging Gemini for a range of malicious activities, including target profiling, the generation of hyper-personalized phishing lures, and the troubleshooting of malware code.
The report identifies specific threat clusters involved in these activities. A Chinese actor tracked as APT31 has reportedly adopted a structured approach, prompting Gemini with expert cybersecurity personas to automate vulnerability analysis. Meanwhile, the Iranian-backed group APT42 has used the AI to conduct deep reconnaissance on potential business partners and to craft convincing social engineering personas. North Korean actors, specifically UNC2970, have been observed using Gemini to synthesize open-source intelligence and profile high-value targets in the defense and cybersecurity sectors. Together, these operations show how AI is replacing manual effort across the attack lifecycle, allowing state actors to scale their operations with unprecedented efficiency.
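The persona technique itself is mechanically simple, which is part of why it scales. The Python sketch below shows the general shape of a persona-framed prompt; the `build_persona_prompt` and `query_llm` helpers are hypothetical illustrations, not APT31's actual tooling or Gemini's API, and the task shown is deliberately generic.

```python
# Minimal sketch of persona-framed prompting of the kind described in the
# GTIG report. query_llm is a hypothetical placeholder for any hosted LLM
# API call; the actual prompts used by the threat actors are not public.

def build_persona_prompt(persona: str, task: str) -> str:
    """Wrap a task in an expert persona to steer the model's framing."""
    return (
        f"You are {persona}.\n"
        f"Task: {task}\n"
        "Respond with a structured, step-by-step analysis."
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted LLM; returns the model's reply."""
    raise NotImplementedError("Stand-in for a real API client.")

prompt = build_persona_prompt(
    persona="a senior vulnerability researcher",
    task="Summarize the publicly documented weaknesses of <software version>.",
)
# response = query_llm(prompt)
```

The value of the persona wrapper, per the report's description, is that it pushes the model to answer in the register of a domain expert, yielding the kind of structured analysis that would otherwise require a skilled human operator.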
Beyond the use of AI as a tool for external attacks, Google also reported a surge in "distillation attacks" aimed directly at Gemini's own intellectual property. In these attacks, adversaries send massive volumes of prompts, in one instance exceeding 100,000 queries, to probe the model's internal logic and reasoning processes. The goal is to "clone" the model or extract proprietary algorithms, a significant shift toward AI-centric industrial espionage. John Hultquist, Chief Analyst at GTIG, noted that the scale of these attacks suggests they are no longer isolated incidents but a burgeoning trend, one that could soon threaten the smaller, custom AI tools used by private enterprises.
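The collection phase of a distillation attack can be remarkably unsophisticated, which is exactly why raw query volume is the telltale signal. The following Python sketch is a hypothetical illustration under that assumption; `send_prompt` stands in for any API client, and no real service is queried.

```python
# Hypothetical sketch of the data-collection phase of a distillation attack:
# harvest prompt/response pairs from a target ("teacher") model at scale,
# then use the corpus to fine-tune a "student" clone. send_prompt is a
# stand-in for a real API client.
import json

def send_prompt(prompt: str) -> str:
    """Placeholder for a call to the target (teacher) model."""
    raise NotImplementedError("Stand-in for a real API client.")

def harvest(prompts: list[str], out_path: str) -> None:
    """Record (prompt, response) pairs for later student-model training."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = send_prompt(prompt)
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
```

At the roughly 100,000-query scale GTIG describes, such a corpus begins to approximate the teacher model's input-output behavior closely enough to train an imitation, without the attacker ever touching the model's weights.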
The economic and security implications of this trend are profound. As U.S. President Trump’s administration continues to emphasize the protection of American technological leadership, the weaponization of commercial AI by foreign adversaries presents a dual-use dilemma. While LLMs like Gemini are designed to boost global productivity, they simultaneously provide state-sponsored hackers with a low-cost, high-efficiency "force multiplier." The ability of AI to generate nuanced, culturally accurate phishing messages in multiple languages removes one of the most common red flags for cyber defense: poor grammar and awkward syntax. This evolution necessitates a shift in defensive strategies, moving away from pattern recognition toward more robust, AI-driven behavioral analysis.
Furthermore, the rise of distillation attacks highlights the vulnerability of the AI industry’s core assets. Companies have invested billions of dollars into developing the "reasoning" capabilities of models like Gemini. If state-sponsored actors or rival commercial entities can successfully extract these mechanisms through automated querying, the competitive advantage of leading AI firms could be rapidly eroded. This form of intellectual property theft is particularly difficult to combat because the models must remain accessible to the public to be commercially viable. While Google has implemented classifiers to detect and block such probing, the inherent openness of LLM interfaces remains a structural risk.
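Google has not published how its detection classifiers work, but the behavioral signals involved are easy to picture. The Python sketch below is a purely illustrative heuristic: it flags accounts whose query volume, topical breadth, and pacing look machine-driven rather than organic. All features and thresholds here are invented for the example.

```python
# Illustrative heuristic for distillation-style probing. Google's production
# classifiers are not public; the features and thresholds below are
# hypothetical, chosen only to show the shape of the approach.
from dataclasses import dataclass

@dataclass
class AccountWindow:
    queries: int            # queries sent in the observation window
    distinct_topics: int    # rough topical spread of those queries
    mean_interval_s: float  # average seconds between queries

def looks_like_probing(w: AccountWindow) -> bool:
    """Flag high-volume, broad, machine-paced query patterns."""
    high_volume = w.queries > 5_000
    broad_coverage = w.distinct_topics > 200
    machine_paced = w.mean_interval_s < 2.0
    return high_volume and broad_coverage and machine_paced

print(looks_like_probing(AccountWindow(12_000, 450, 0.8)))  # True
print(looks_like_probing(AccountWindow(40, 5, 90.0)))       # False
```

The structural tension the report identifies remains regardless of classifier quality: any threshold loose enough to tolerate legitimate heavy users leaves room for a patient adversary to probe below it.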
Looking forward, the integration of "agentic AI" (models capable of acting with a degree of autonomy) is expected to be the next frontier for threat actors. GTIG warned that hackers are already experimenting with automating malware development and deployment using autonomous agents. As these technologies mature, the window for human intervention in cyber defense will continue to shrink. The industry must anticipate a future where cyber warfare is characterized by "AI vs. AI" engagements, in which the speed of automated exploitation can only be countered by equally rapid, autonomous defensive systems. For now, the report serves as a "canary in the coal mine," signaling that the era of AI-augmented conflict has moved from theory into a persistent, global reality.
Explore more exclusive insights at nextfin.ai.
