Security breach detection times cut - but are we safer?
Organisations with the ability to proactively detect and investigate incidents are beating their peers - but is it enough?
The speed at which organisations are detecting intrusions has considerably improved in the last 12 months, according to researchers.
The average number of days from an intrusion to the detection of a compromise decreased to 49 days in 2016 from 80.5 days in 2015, with individual values ranging from zero days to almost 2,000 days (more than five years). For internally detected incidents the median was 16 days, while 65 was the median number of days for externally detected incidents.
Containment is relatively quick too: the median number of days from detection to containment was 2.5 in 2016, with values ranging from −360 days, meaning the intrusion ended 360 days before detection, to 289 days. In cases where containment occurred after detection, the median duration was 13 days from detection to containment. The improvement follows Gartner’s prediction that worldwide spending on information security will reach $90 billion in 2017, an increase of 7.6 per cent over 2016, with enhancing detection and response capabilities a key area of spend.
This good news comes in spite of increasingly sophisticated malware deliberately working against defenders: 83 per cent of malware samples Trustwave examined in 2016 used obfuscation, while 36 per cent used encryption. Indeed, the volume of malicious activity delivered over SSL/TLS has rocketed 400 per cent since 2016, according to separate research, with attackers using stealthier encrypted channels to conceal device compromises, hide data exfiltration and mask botnet command-and-control traffic.
The Trustwave report found that web applications were the major weak link, however: 99.7 per cent of web applications tested in 2016 included at least one vulnerability, with a mean of 11 vulnerabilities detected per application.
Ilia Kolochenko, CEO of High-Tech Bridge, said: “Web applications are often developed in-house and accumulate dozens of vulnerabilities and weaknesses because of flawed, or simply missing, SDLC and insufficient security testing. Popular network services (e.g. email, VPN or web servers), on the other hand, usually come from large vendors, which have already managed to patch the vast majority of security vulnerabilities over the years of their software’s existence.
“Last but not least, the number of web applications usually exceeds the number of non-web services. However, often the biggest problem is with the basics - inappropriate risk assessment, management and mitigation.”
It is indeed often the basics, such as a lack of internal coordination, human negligence or a business decision, that introduce security issues to otherwise secure environments, as High-Tech Bridge researchers found recently.
Two out of three companies that take a DevSecOps approach to application development had at least one high- or critical-risk vulnerability in their external web applications as a result of one of these reasons. For example, a highly secure web application can sit on a domain alongside a file upload form, or a recent database backup, in a predictable location. The researchers concluded that the bigger the organisation, the harder it is to prevent such incidents, as numerous data and process owners change their decisions and requirements far faster than IT can properly adopt them through internal processes.
In short, although security spend has increased, and detection and containment times have dropped in parallel, it is often internal processes that let real-world businesses down.