Google warns that attackers are integrating AI directly into active cyberattacks.
Friday, February 13, 2026
For now, however, Google describes most of the observed use as a complement to human operators rather than a replacement.
Dr. Ilia Kolochenko, chief executive of ImmuniWeb SA and a cybersecurity expert, was unimpressed by the report. He told SiliconANGLE by email that it “looks like a poor PR campaign for Google’s AI technology, amid waning interest and growing investor disenchantment with generative AI.”
First, he said, “even if advanced persistent threats utilize generative AI in their cyberattacks, it does not mean that generative AI has finally become good enough to create sophisticated malware or execute the full cyber kill chain of an attack. Generative AI can indeed accelerate and automate some simple processes — even for APT groups — but it has nothing to do with the sensationalized conclusions about the alleged omnipotence of generative AI in hacking.”
Second, he said, “Google may be actually setting a legal trap for itself. Being fully aware that nation-state groups and cyber-terrorists actively exploit Google’s AI technology for malicious purposes, it may be liable for the damage caused by these cyber-threat actors. Building guardrails and implementing enhanced customer due diligence does not cost much and could have prevented the reported abuse. Now the big question is who will be liable, while Google will unlikely have a convincing answer to it.”
The Register: UK's 'world-first' deepfake detection framework unlikely to stop the fakes, says expert
SecurityWeek: Cyber Insights 2026: Cyberwar and Rising Nation State Threats