OWASP Top 10 in 2021: Security Logging and Monitoring Failures Practical Overview
Security Logging and Monitoring Failures is #9 in the current OWASP Top Ten Most Critical Web Application Security Risks.
Security Logging and Monitoring Failures
Logging and monitoring go hand in hand. There is little point in having adequate logs if they are not adequately monitored.
The problem of insufficient logging and monitoring covers the entire IT infrastructure and not just the internet-facing web application – as does the solution. For that reason, we will not limit this discussion to just logging and monitoring web apps.
One of the primary problems is that there are so many logs – almost all contemporary systems generate their own logs. Log management thus becomes a major problem. By the time that all the different logs are gathered together and preferably collated, the sheer size of the data set becomes too large to effectively monitor manually.
The solution lies in increased automation of the process. For example, some access control systems can be given their own monitoring rules. Log-on rules can be set to allow a predefined number of log-on attempts per session. The system logs the attempts, and then blocks access from that IP, either for a predefined period or indefinitely. Such systems will also typically alert the security team that something is amiss.
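The log-then-block pattern described above can be sketched in a few lines. This is a minimal illustration, not a particular product's behavior; the threshold, window, and the print-based audit/alert hooks are all illustrative assumptions.

```python
# Minimal sketch of automated log-on monitoring: log each failed
# attempt, block the source IP after a threshold, and raise an alert.
# MAX_ATTEMPTS, WINDOW_SECONDS, and BLOCK_SECONDS are example values.
import time
from collections import defaultdict

MAX_ATTEMPTS = 5          # failures allowed inside the window
WINDOW_SECONDS = 300      # sliding window for counting failures
BLOCK_SECONDS = 3600      # temporary block duration

failures = defaultdict(list)   # ip -> timestamps of recent failures
blocked_until = {}             # ip -> time at which the block expires

def record_failed_login(ip, now=None):
    """Log a failed attempt and return the action taken."""
    now = time.time() if now is None else now
    if blocked_until.get(ip, 0) > now:
        return "blocked"                      # IP is already blocked
    # keep only attempts inside the sliding window
    failures[ip] = [t for t in failures[ip] if now - t < WINDOW_SECONDS]
    failures[ip].append(now)
    print(f"AUDIT failed_login ip={ip} count={len(failures[ip])}")  # the log entry
    if len(failures[ip]) >= MAX_ATTEMPTS:
        blocked_until[ip] = now + BLOCK_SECONDS
        print(f"ALERT brute_force_suspected ip={ip}")  # notify the security team
        return "block"
    return "allow"
```

In a real deployment the `print` calls would feed a log pipeline and an alerting channel; the point is that the same event both produces a log record and drives an automated response.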
But it still requires the security team to monitor the alerts – and failure to see the anomalous event can be as dangerous as not logging it in the first place.
Other security controls will generate their own logs and can similarly alert the security team if something seems amiss – but again it requires the security team to interpret the alerts and triage the company response.
This is the basic problem. Systems need to generate adequate logs (not all do), and security personnel need to fully monitor and adequately interpret the messages coming from those logs (very few can).
The whole problem is worsening with the rise of very sophisticated, sometimes state-sponsored attacks that are specifically designed to be stealthy and not trigger alerts from installed logging and monitoring software. Fileless attacks, for example, will not drop any malicious files onto hard drives – meaning there is no file to be detected by always-on anti-virus monitoring software. They may also abuse legitimate operating system tools, such as PowerShell, in ways that do not trigger monitoring software watching for unusual behavior.
A November 2017 analysis from the Ponemon Institute reports that 35% of attacks will be fileless in 2018. Barkly, who commissioned the report, claims that fileless attacks are ten times more likely to succeed than file-based attacks – and this is largely because file-based attacks can be detected by traditional logging and monitoring systems while fileless attacks cannot.
The whole problem has led to the rise of AI-enhanced anomaly monitoring systems better able to detect subtle issues in the logs that might indicate an intruder. The problem remains that the logs produced by such systems are massive, and problems are easily missed even though the AI is designed to separate the wheat from the chaff – or find the needle in the haystack.
This in turn has generated a new category of staff for the security team – the threat hunter. But good threat hunters remain rare and very expensive. These problems are so difficult that the #9 OWASP web application security risk is getting harder rather than easier to solve.
This risk transcends just web applications – but it is the internet-facing web application that so often provides the entry point for full network compromise.
The extent of the problem
While insufficient logging and monitoring is too abstract to be a direct attack vector, it affects the detection and response to every single breach. If web application and server incidents are improperly monitored, suspicious activity can easily be missed. If security risks are not correctly logged – or the logs are badly stored or hard to access – then these flaws will go unaddressed.
The Online Trust Alliance's 2018 report analyzing the previous year's breaches estimated that 93% of breaches would have been preventable with basic security measures. Keeping software patched, better authentication and anti-phishing training were included in these suggested measures. However, improved logging and monitoring would also play a big part in prevention. The OTA provided a breakdown of 2017's reported breaches:
- 52% due to direct ‘hacks’
- 15% due to lack of security software
- 11% due to physical skimming of credit cards
- 11% due to insufficient internal controls against negligent or malicious employee actions
- 8% due to phishing attacks.
In most cases, adequate logging and monitoring would detect some form of anomaly that could trigger the correct company response before the damage is done. Well-implemented logging will create alerts whenever anomalies or security issues arise in a web application, and diligent monitoring allows for action to be taken against the exploitation of vulnerabilities. This would apply, at least as a mitigating factor, to the direct hacks, lack of security software and insufficient internal controls if not the other categories. Therefore, 78% of breaches could conceivably be either prevented or mitigated with improved logging and monitoring practices.
Figures from Mandiant suggest some improvement. Its 2018 M-Trends report finds that 62% of breaches are now being discovered internally, up from 53% in 2016 – although the dwell time (the time it takes to make that discovery) actually rose slightly, from 99 days to 101 days in 2017.
The Ponemon Institute's 2017 Cost of Data Breach Study claims it is worse: the average time taken to identify a data breach is 191 days. Improved logging and monitoring procedures would identify security issues much sooner, thereby reducing subsequent and consequent damage.
A good way to test for the inadequate logging risk is to use a pentester, who will probe and seek to breach your web applications. If you cannot subsequently detect what is done during the testing, then your logging is inadequate.
Note, however, that while inadequate logging and monitoring is a risk, adequate logging is not a solution. You still need to be able to differentiate between benign and malicious anomalies within those logs. Many products will help to do this, from long established technologies like IDS and IPS, to the newer AI-enhanced log management and network anomaly detection technologies.
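To make the idea of separating benign from malicious anomalies concrete, here is a deliberately crude stand-in for the statistical baselining that such tools perform: flag time buckets whose event count deviates strongly from the historical mean. The function name and threshold are illustrative assumptions, not any product's API.

```python
# Toy anomaly scoring over hourly event counts: an hour is flagged
# when its count is more than `threshold` standard deviations from
# the mean. Real detection systems use far richer baselines.
from statistics import mean, stdev

def anomalous_hours(hourly_counts, threshold=3.0):
    """Return indices of hours whose count deviates > threshold std devs."""
    mu = mean(hourly_counts)
    sigma = stdev(hourly_counts)
    if sigma == 0:
        return []   # no variation, nothing to flag
    return [i for i, c in enumerate(hourly_counts)
            if abs(c - mu) / sigma > threshold]
```

Even this toy shows the core difficulty: the threshold trades missed intrusions against false alarms, and tuning it still requires a human analyst.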
The problem here is that you become reliant on the monitoring capabilities of those technologies. Just because they do not detect a problem does not mean that there is no problem. But just as penetration testing can be used to confirm the logging side of the risk, red and blue teaming can be used to confirm the monitoring side of the risk.
In this scenario, the blue team can be built around an in-house threat hunter, while the red team should be outside whitehat hackers employed to break into the system without being detected by the monitoring controls. If the red team succeeds, the threat hunter can learn from their efforts and better understand the signals that can be found in the logging and monitoring technologies.
Logs should be kept safe, out of reach of user accounts that do not need them and might edit, delete, or damage them. Encrypting centrally stored logs is best practice, though it can be expensive in terms of performance and staff time. One of the most valuable uses of logs is post-incident review: discovering which areas were compromised and which devices were infected, so that they can be cleaned and the underlying vulnerabilities fixed.
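One common way to make stored logs tamper-evident (cheaper than full encryption) is hash chaining: each entry's hash covers the previous entry's hash, so editing or deleting an earlier record breaks every hash after it. The sketch below is an illustrative assumption, not a named standard.

```python
# Tamper-evidence sketch: chain each log entry to its predecessor
# with SHA-256. Any edit or deletion of an earlier entry invalidates
# every hash that follows it.
import hashlib

def append_entry(chain, message):
    """Append (message, hash) where the hash covers prev_hash + message."""
    prev_hash = chain[-1][1] if chain else "0" * 64
    digest = hashlib.sha256((prev_hash + message).encode()).hexdigest()
    chain.append((message, digest))

def verify_chain(chain):
    """Recompute every hash; return True only if no entry was altered."""
    prev_hash = "0" * 64
    for message, digest in chain:
        if hashlib.sha256((prev_hash + message).encode()).hexdigest() != digest:
            return False
        prev_hash = digest
    return True
```

In practice the chain head would be periodically signed or shipped to a separate, write-restricted host, so that an attacker who gains access to the log store cannot silently rewrite history.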
While logging and monitoring are one of application security's weakest areas right now, they could become one of the best weapons against breaches. Gartner predicts that analytics will play a greater and greater part in security. According to them, 40% of large organizations will establish a security data warehouse by 2020, which will store and manage security logs and aid with adaptive security.
It is important, then, that your web application logs can be easily consumed by your infrastructure's central anomaly detection system – if the intruder is not detected at the web application, they may still be detected during lateral movement across your internal network.
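Making logs "easily consumed" usually means emitting one structured record per event, typically JSON, which central log-management and SIEM pipelines can parse and correlate. The field names below are illustrative assumptions, not a specific product's schema.

```python
# Hedged sketch: emit each security event as a single JSON line with
# a timestamp, an event type, and correlation fields such as the
# source IP, so a central system can aggregate across applications.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("webapp.security")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_security_event(event_type, **fields):
    """Serialize one security event as a single JSON line and log it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        **fields,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line  # returned here for illustration; real apps just emit the log

# Example: an authentication failure the central system can correlate by IP
log_security_event("auth_failure", user="alice", source_ip="203.0.113.9")
```

Because every application emits the same machine-readable shape, the anomaly detection layer can join web application events with network and host logs when tracing an intrusion.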
Organizations have a lot of resources to draw on for guidance when it comes to logging, monitoring and responding to security incidents. As well as OWASP's cheat sheet for security logging, there are guidelines and standards from organizations like NIST and NCSC.
But before anything else, good logging and monitoring requires a comprehensive inventory of all the components being used. No matter how good the logging policy, the monitoring capability, or the subsequent incident response, if a single web-app or API slips through the cracks, an attacker can find a way to blindside the organization and cause a breach. High Tech Bridge founder and CEO Ilia Kolochenko explains:
“It is enough to forget about one tiny web application to get attackers on board...to help companies tackle this problem, at High-Tech Bridge we launched a free discovery service that enumerates your external mobile and web apps, as well as their APIs. Once you have inventory of your digital assets, you can continue with patch management, security hardening, threat hunting and anomaly monitoring – without a risk to ruin all your efforts by one forgotten app.”