Google warns attackers are wiring AI directly into live cyberattacks
Friday, February 13, 2026
For now, though, Google characterizes most observed use as augmentation rather than replacement of human operators.
At least one cyber expert, Dr. Ilia Kolochenko, chief executive at ImmuniWeb SA, wasn’t impressed with the report. He told SiliconANGLE via email that “this seems to be a poorly orchestrated PR of Google’s AI technology amid the fading interest and growing disappointment of investors in generative AI.”
First, he said, “even if advanced persistent threats utilize generative AI in their cyberattacks, it does not mean that generative AI has finally become good enough to create sophisticated malware or execute the full cyber kill chain of an attack. Generative AI can indeed accelerate and automate some simple processes — even for APT groups — but it has nothing to do with the sensationalized conclusions about the alleged omnipotence of generative AI in hacking.”
Second, he said, “Google may be actually setting a legal trap for itself. Being fully aware that nation-state groups and cyber-terrorists actively exploit Google’s AI technology for malicious purposes, it may be liable for the damage caused by these cyber-threat actors. Building guardrails and implementing enhanced customer due diligence does not cost much and could have prevented the reported abuse. Now the big question is who will be liable, while Google will unlikely have a convincing answer to it.”
The Register: UK's 'world-first' deepfake detection framework unlikely to stop the fakes, says expert
SecurityWeek: Cyber Insights 2026: Cyberwar and Rising Nation State Threats