Is Malicious AI a Threat to Cybersecurity?
Malicious AI could pose a serious threat to cybersecurity in the near future, according to a new report, which also calls for effective strategies to tackle the growing problem.
According to the report, the growing capability of AI products and services, against a background of increasing attacker sophistication, spells trouble in both the digital and physical worlds if the issue is not taken seriously. First and foremost, attackers' ability to use AI to automate the tasks involved in carrying out cyberattacks will ease the existing trade-off between the scale and efficacy of attacks, which could drive a rise in threats associated with labour-intensive attacks such as spear phishing. The report also pointed to the possibility of exploiting AI systems themselves, through techniques such as adversarial examples and data poisoning.
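To make the adversarial-example idea concrete, here is a minimal, hypothetical sketch: a hand-built logistic-regression "classifier" with fixed weights, attacked with a fast-gradient-sign-style perturbation. All numbers are synthetic and chosen for illustration; real attacks target trained neural networks with far smaller, imperceptible perturbations.

```python
import numpy as np

# A fixed "trained" logistic-regression model: weights w and bias b.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_prob(x):
    """Probability of class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A clean input the model confidently assigns to class 1.
x = np.array([2.0, -1.0, 1.0])
p_clean = predict_prob(x)

# FGSM-style step: for this linear model the loss gradient w.r.t. x is
# proportional to w, so stepping against sign(w) pushes the logit down
# and the prediction toward the wrong class.
eps = 1.5
x_adv = x - eps * np.sign(w)
p_adv = predict_prob(x_adv)

print(p_clean, p_adv)  # confidence collapses below 0.5 after the attack
```

Data poisoning, by contrast, corrupts the training data itself (e.g. mislabelled samples inserted before training) rather than perturbing inputs at inference time.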
Other avenues for attackers might include exploiting human vulnerabilities, for example by using speech synthesis for impersonation, and exploiting existing software vulnerabilities through faster, more potent automated hacking, said the report, which drew on the expertise of 26 authors from 14 institutions spanning academia, civil society, and industry.
Ilia Kolochenko, CEO of High-Tech Bridge, urged caution in using the term AI too widely: “First of all, we should clearly distinguish Strong AI [capable of replacing the human brain] from the generally misused term ‘AI’, which has become amorphous and ambiguous.”
“So far, virtually all ML/AI algorithms are only as good as the humans who design, train and improve them. For some time now, cybercriminals have been progressively using simple ML algorithms to increase the efficiency of their attacks, for example to better profile and target victims and to increase the speed of breaches. However, modern cyberattacks are so tremendously successful mainly because of fundamental cybersecurity problems and omissions in organizations; ML is just an auxiliary accelerator.”
“One should also bear in mind that AI/ML technologies are being used by the good guys to fight cybercrime more efficiently. Moreover, developing AI technologies usually requires expensive, long-term investments that Black Hats typically cannot afford. Therefore, I don’t see substantial risks or revolutions happening in the digital space because of AI for at least the next five years.”
The report makes four high-level recommendations:
Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
The range of stakeholders and domain experts involved in discussions of these challenges should be actively expanded.
High-Tech Bridge’s AST platform ImmuniWeb leverages Machine Learning and Artificial Intelligence for intelligent automation and acceleration of application security testing. Complemented by highly qualified manual testing, it not only detects the most sophisticated application vulnerabilities but also comes with a zero false-positives SLA. The platform was named a Key Innovator in the global market of cybersecurity companies that leverage AI and Machine Learning technologies in 2018 by MarketsandMarkets.