Will AI really be ‘weaponised’ next year?
Survey finds that a majority of cybersecurity experts believe AI will be used by attackers within 12 months
Artificial intelligence (AI) and machine learning may well be the toast of the security industry, but not everyone is 100 per cent positive about the burgeoning technology. In a survey of attendees at BlackHat 2017, researchers found that 62 per cent of infosec experts believe AI will be used for malicious cyberattacks in the coming year.
Although AI and machine learning may hold the key to longer-term security, it seems the technology will also boost hackers and augment their tactics, in the short term at least. However, as the researchers noted, this should prove more of a speed bump than a barrier: “As cybercriminals and nation-states begin using AI to increase the rate of attacks, the need for smarter solutions that can help human security teams keep up will only become more apparent.”
Ilia Kolochenko, security expert and CEO of High-Tech Bridge, commented: “From what we have seen so far, cybercriminals mainly use machine learning for various classification tasks aimed to improve victim selection and targeting. They also use some elements of machine learning to identify valuable data in large amounts of stolen documents. Another aspect is deception of security systems based on machine learning, for example attackers can make a security system believe that an attack is a legitimate activity and ignore it. Many cybercrime groups already use Big Data and machine learning to better select their victims, bypass security solutions and thus get the highest ROI from their criminal business. Many cyber gangs are way ahead in machine learning than some cybersecurity companies stuck with buzzy marketing around AI.”
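Kolochenko’s point about deceiving machine-learning-based security systems can be illustrated with a toy sketch (entirely hypothetical, not modelled on any real product): a naive detector that scores a request by the proportion of suspicious tokens it contains can be evaded simply by padding an attack with benign content until it falls below the alert threshold, so the system treats the attack as legitimate activity.

```python
# Toy illustration only: a naive "anomaly" detector and a padding evasion.
# The token list and threshold are invented for this sketch.

SUSPICIOUS = {"union", "select", "../", "<script>"}

def anomaly_score(tokens: list) -> float:
    """Fraction of tokens that look suspicious (a crude ML stand-in)."""
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.lower() in SUSPICIOUS)
    return hits / len(tokens)

def is_blocked(tokens: list, threshold: float = 0.2) -> bool:
    """Block the request only when the score exceeds the threshold."""
    return anomaly_score(tokens) > threshold

# The raw attack is blocked (2 of 3 tokens are suspicious)...
attack = ["union", "select", "password"]
# ...but padding it with benign tokens drops the score below the
# threshold, and the same payload sails through.
padded = attack + ["page"] * 12
```

The real-world versions of this trick are more sophisticated (mimicking legitimate traffic distributions rather than simple padding), but the principle is the same: any classifier with a fixed decision boundary invites inputs crafted to sit just on the benign side of it.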
The researchers, from Cylance, surveyed 100 attendees at BlackHat 2017, and also uncovered that OS patching and updating was the top concern among them, with 39 per cent agreeing it remains a major challenge, followed by compliance issues (24 per cent), then ransomware (18 per cent), triaging alerts (10 per cent), and identity and DoS attacks (8 per cent).
In terms of what is keeping them up at night, more than 1 in 3 (36 per cent) said that phishing was a primary concern, followed by 33 per cent worrying about reported attacks on critical infrastructure. Additional top concerns included Internet of Things (IoT) attacks (15 per cent), ransomware attacks (14 per cent) and botnet attacks (1 per cent).
Although far from a panacea, AI certainly offers potential respite against some of the major challenges security professionals face day-to-day. One major problem is the gradual decrease in the effectiveness of fully automated tools, which is steadily increasing the pressure on scarce human resources. One example uncovered recently by High-Tech Bridge researchers is that 53 per cent of simple flaws from the OWASP Top Ten, such as XSS, are no longer detectable by vulnerability scanners and other fully automated solutions.
This is due to the increasing sophistication of applications, where exploiting a vulnerability more and more frequently requires a complicated chain of exploitation, and thus human intervention. For example, many seemingly simple XSS flaws require a valid client ID or a solved Google reCAPTCHA, or are only reproducible with a long set of other valid HTTP parameters. Moreover, complicated authentication systems (e.g. using 2FA and session expiration in case of abnormal behaviour) preclude vulnerability scanners from testing the authenticated parts of applications.
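A minimal sketch of why such precondition chains defeat naive automation (the handler, parameter names and scanner below are invented for illustration): a reflected XSS that only fires once a valid client ID and a full set of other parameters are present will be missed by a scanner that injects payloads one parameter at a time, while a human-guided test that assembles the complete valid request finds it.

```python
# Hypothetical vulnerable handler and two testing strategies.
# Names (client_id, step, token, locale, comment) are invented.

PAYLOAD = "<script>alert(1)</script>"
VALID_CLIENT_IDS = {"acme-1234"}

def render_page(params: dict) -> str:
    """Server-side handler whose XSS flaw sits behind several checks."""
    if params.get("client_id") not in VALID_CLIENT_IDS:
        return "<p>Unknown client</p>"
    if not all(k in params for k in ("step", "token", "locale")):
        return "<p>Missing parameters</p>"
    # Flaw: 'comment' is reflected into the page without escaping.
    return f"<p>Your comment: {params.get('comment', '')}</p>"

def naive_scan(handler) -> bool:
    """Automated fuzzing: inject the payload into each parameter alone."""
    for name in ("client_id", "comment", "step"):
        if PAYLOAD in handler({name: PAYLOAD}):
            return True
    return False  # never reaches the vulnerable code path

def informed_test(handler) -> bool:
    """Human-guided test: supply the full valid parameter set first."""
    params = {"client_id": "acme-1234", "step": "2",
              "token": "abc", "locale": "en", "comment": PAYLOAD}
    return PAYLOAD in handler(params)
```

The naive scan fails every check before the reflection ever happens, which is exactly the pattern the High-Tech Bridge figures describe: the flaw is real, but only a tester who understands the application’s workflow can reach it.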
As a result, Gartner revealed that 75 per cent of mobile applications failed basic security tests through 2015, and that over 70 per cent of security vulnerabilities exist at the application layer, figures that have not improved over the intervening months. In fact, High-Tech Bridge researchers found in early 2017 that 83 per cent of mobile apps in the banking, financial and retail sectors have a mobile backend (web services and APIs) that is vulnerable to at least one high-risk security vulnerability.
AI could well hold at least part of the solution, as a recent Frost & Sullivan report pointed out: “The application vulnerability scanning could be optimized by using machine learning algorithms. Intelligent automation of testing based on the support of machine learning strongly decreases the necessary time to process tests compared to pure human intervention. Frost & Sullivan strongly thinks that the machine learning technology is an innovative and relevant way to optimize web application security testing”.
The bad guys certainly seem to think it holds promise - maybe it’s time you took a closer look...