Microsoft Copilot Flaw Exposed Confidential Emails

Tuesday, February 24, 2026
Bad actors could instruct Microsoft Copilot to summarise files a user had accessed that day and surface personal details, such as where the user lived or had travelled. Microsoft has since said the vulnerability has been patched.
These Incidents Will Likely Surge in 2026
Dr Ilia Kolochenko, CEO at ImmuniWeb and a Fellow at the British Computer Society (BCS), said: “With the rapid proliferation of agentic AI and AI-powered plugins for traditional software, incidents like this one will likely surge in 2026, possibly becoming the most frequent type of security incident at both large and small companies around the globe.”
According to him, most corporations are not ready to properly secure and manage AI in the workplace, while both employers and employees are rapidly adopting mushrooming AI solutions in the hope of productivity gains. “Traditional security controls, such as DLP systems, are currently unable to reliably detect unauthorized or excessive use of AI by unwitting employees or malicious insiders. Worse, cybercriminals are already actively creating malicious AI agents and applications to steal sensitive data from users.”
AI Will Be a Disaster for Privacy
Misuse of AI will also be a disaster for privacy in 2026, Kolochenko adds. “Every day, tons of sensitive personal data are shared with LLMs around the globe without any precautions. Even governmental agencies of developed countries are exposed to this risk because of inadequate or simply missing governance of AI in the workplace. Shadow AI, when employees bring their own devices with AI apps to scan or otherwise ingest confidential data, will be among the key challenges to tackle.”
In 2026 and beyond, he says, we will probably see many class-action and individual lawsuits against both tech giants and AI boutiques for unlawful collection of user data. “Some unscrupulous actors who purposely use agentic AI to obtain valuable or confidential data will likely claim that they have been collecting the data without authorization by mistake. Whether such a defence will stand in courts depends on many factors, but the AI industry will likely suffer a lot, with some AI vendors going out of business due to litigation and reputational losses.”
Lastly, he says, once a few security incidents of sufficient scale and damage occur, such as the crash of a Critical National Infrastructure (CNI) provider or a massive leak of classified documents, governments on both sides of the Atlantic will probably rush to regulate the use of AI severely, possibly creating a new AI winter.