
How AI is becoming a ticking time bomb within organisations

By Eve Goode for International Security Journal
Thursday, March 12, 2026

ISJ hears exclusively from Dr Ilia Kolochenko, CEO of ImmuniWeb, about the increasing security risks organisations face as AI becomes integrated into their operations.

Can you introduce yourself and tell me about your role at ImmuniWeb?

My name is Dr Ilia Kolochenko and I am the CEO and Chief Architect of ImmuniWeb, leading the technical teams, products and innovation.

I am also an attorney-at-law in Washington, DC and a Vice-Chair of the Information Security Committee at the American Bar Association (ABA).

Why do you believe AI is becoming a “ticking time bomb” in 2026? What’s changed?

AI has huge potential for humanity, but its reckless and thoughtless implementation may indeed be a ticking time bomb, capable of provoking global havoc and decades of economic and social recession.

The problem is that AI cannot just “fix” things, like legacy infrastructure or obsolete applications, without almost inevitably breaking something.

Moreover, most AI models are trained on all kinds of human-created data – including disinformation and malicious or harmful content – that will impact the output of LLMs even if we build multilayered guardrails to prevent this.

Meanwhile, synthetic data (i.e. data generated by AI) is increasingly used for training purposes, creating even bigger risks that models will hallucinate and go off the rails in a completely unpredictable and unmanageable way.

Right now, poorly built AI technology and its derivatives are progressively penetrating all layers of our society and daily lives, like toxins or radiation invisible to the human eye, and sooner or later we will arrive at a point of no return.

The now-fashionable agentic AI may catalyse the looming collapse: AI agents make virtually uncontrolled decisions in unsupervised environments, often operating on business-critical systems and data.

We need to understand that AI is just a tool, one that has nothing in common with human intelligence, let alone human decision-making processes.

Although AI may falsely appear to be omniscient and omnipotent, it cannot do much without thorough supervision by human experts.

Trying to impress their investors, AI giants have no shortage of sensational press releases and grandstanding announcements, mostly designed to denigrate their competitors or promulgate technically inaccurate statements.

For example, modern-day AI may indeed be very helpful for basic legal or medical questions, but unless you are a lawyer or a doctor yourself, using AI in litigation or medical treatment is a reliable recipe for losing your case or getting seriously ill.

How are tools like Copilot and Gemini increasing security risks inside companies, especially with the rise of “shadow AI”?

First, because of insufficient or simply missing security awareness about the risks of AI, many enterprise users thoughtlessly share sensitive and confidential data with a mushrooming number of AI tools, bots and services.

Nobody really questions or cares what will eventually happen to this data. In reality, unscrupulous corporations will exploit it for commercial purposes, while cyber-threat actors are now starting to build malicious AI-powered services that grab your data for extortion and corporate espionage.

Second, even when an organisation has an AI governance program, shadow AI will penetrate its premises through various loopholes.

For instance, corporate users may still use their personal smartphones to take pictures of confidential documents for translation or summarisation with AI.

More importantly, virtually all vendors – ranging from Zoom to Adobe to Microsoft – now offer a plethora of “free” AI features that can be enabled in one click.

The price users and their organisations will pay is their privacy and confidentiality.

Finally, most organisations still struggle to manage even basic third-party risks stemming from their supply chains and external vendors.

Needless to say, it would be an arduous task to find a single corporation around the globe that could confidently say it knows whether and how its data is being used by the AI-powered systems of its suppliers.

Are large language models fundamentally incompatible with privacy laws like the UK GDPR?

I do not think that any LLM (i.e. a large language model designed for general-purpose use) that currently exists is technically or architecturally compatible with the UK GDPR.

For instance, once a model ingests personal data, it is almost impossible to delete it without incurring significant costs.

Another example is purpose limitation: the use of personal data must be restricted to clearly defined lawful purposes, which is a “mission impossible” when an LLM is used by millions of unidentifiable users.

Implementing guardrails will not really help: it is akin to taking a painkiller instead of curing the disease, or closing your metal shutters instead of addressing unbridled criminality on your street.

Having said this, when AI models – possibly even LLMs – are properly built from scratch, they might be compatible with key provisions of the UK GDPR.

But we are unlikely to see such models in the near future, as they will be too expensive to train and maintain, while AI vendors tend to prioritise profits over everything else.

Could tech companies face legal or financial consequences if their AI tools are used for hacking or data abuse?

Sure, many people advocate for AI-specific laws to regulate novel risks and threats created by the rapid proliferation of AI.

However, in most jurisdictions on both sides of the Atlantic, existing laws can already regulate most AI-triggered or AI-propelled incidents.

For instance, if your AI bot insults a customer, ships the wrong item to a client, cancels a guest’s hotel reservation or gives harmful advice, “AI did it” will not withstand judicial scrutiny in court.

Having said this, sector-specific regulation may be desirable for the AI industry, to prevent creative and smart lawyers from exploiting numerous loopholes or shifting the blame when AI goes rogue.

Do efforts like AI-powered deepfake detection actually work or are they just temporary fixes?

For the time being, no.

Moreover, it is not really about detecting deepfakes; it is rather about how human beings perceive them.

Even if most law-abiding platforms dutifully label AI-created content as such, people who want to believe it will still believe.

This is about human psychology, not AI.

What kind of major AI incident could trigger tougher regulation and how should businesses prepare now?

Once there is an incident that kills hundreds of people, wipes out many billions of dollars of stock market value or interferes with lawful elections, politicians and lawmakers will likely rush to regulate AI harshly.

Sadly, by then it will probably already be too late, and over-regulation may trigger the collapse of the AI bubble: frightened investors will start a massive selloff, ultimately creating a domino effect.

What is one piece of advice you would give to companies going forward?

  • AI has huge potential, but it is just a tool
  • Use critical thinking when implementing, building or assessing AI tools and technologies
  • Ask questions
  • Take everything that AI vendors or their allies tell you about the bright future of AI with a grain of salt
  • Use common sense, reason and logic to make well-informed decisions
  • Calm and intelligent humans are here to command AI