AI hallucinations and their risk to cybersecurity operations

By Mirko Zorz for Help Net Security
Monday, May 19, 2025

One emerging concern is the phenomenon of package hallucinations, where AI models suggest non-existent software packages. This issue has been identified as a potential vector for supply chain attacks, termed “slopsquatting.” Attackers can exploit these hallucinations by creating malicious packages with the suggested names, leading developers to inadvertently incorporate harmful code into their systems.
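One practical mitigation the paragraph above implies is refusing to install AI-suggested packages until they have been checked against a vetted list. The sketch below is a minimal, hypothetical illustration of that idea: the allowlist contents, package names, and function name are invented for the example and do not come from the article.

```python
# Minimal sketch: guard against "slopsquatting" by separating AI-suggested
# package names into vetted and unverified groups before any install step.
# The allowlist and all names below are hypothetical examples.

VETTED_PACKAGES = {"requests", "numpy", "flask"}  # e.g. from an internal registry

def filter_suggestions(suggested):
    """Split suggested package names into (vetted, unverified) lists."""
    vetted = [p for p in suggested if p.lower() in VETTED_PACKAGES]
    unverified = [p for p in suggested if p.lower() not in VETTED_PACKAGES]
    return vetted, unverified

# "reqeusts-pro" mimics a plausible hallucinated/typosquatted name.
vetted, unverified = filter_suggestions(["requests", "reqeusts-pro", "numpy"])
print(vetted)      # names found on the allowlist
print(unverified)  # candidates requiring manual review before any install
```

In practice, a team might populate such an allowlist from an internal package mirror or a lockfile with pinned hashes, so that a hallucinated name fails the check instead of silently reaching a public registry.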

“If used without thorough verification and manual validation, AI-generated code can introduce substantial risks and complexities. Junior developers are particularly susceptible to the risks of erroneous code or config files because they lack sufficient skills to properly audit the code. Senior developers will likely spot an error in a timely manner; however, an increasing number of them over-rely on GenAI, blindly trusting its output,” said Ilia Kolochenko, CEO of ImmuniWeb.

Another concern is the potential for AI to produce fake threat intelligence. These reports, if taken at face value, can divert attention from actual threats, allowing real vulnerabilities to go unaddressed. The risk is compounded when AI outputs are not cross-verified with reliable sources.
