Google reports site attacks rocket due to ageing code
A patch in time saves… a 32 per cent rise in site compromises, according to Google figures. So why is patching still an issue?
It has been revealed that, in spite of 12 months of hard work by the security industry, more websites were hacked in 2016 than in 2015. The figures were reported by Google, no less, as part of its #NoHacked campaign, and represented a 32 per cent rise in successful compromises by hackers.
The web giant isn’t optimistic about those figures falling either: “We don’t expect this trend to slow down. As hackers get more aggressive and more sites become outdated, hackers will continue to capitalize by infecting more sites”, it said in a blogpost.
The company’s key message is around verifying your site in Google's Search Console: 61 per cent of compromised webmasters were never notified by Google, because their sites were not verified. Of those that were notified, 84 per cent were successful in cleaning their sites.
Unfortunately, the 32 per cent rise in website compromises contrasts with a worldwide spend on security products of $81.6bn in 2016, an increase of 7.9 per cent over 2015, according to Gartner.
Ilia Kolochenko, CEO of High-Tech Bridge said: “Cybercriminals are starting to use more and more efficient technologies, including machine learning, for intelligent automation of website hacking. They crawl the web 24/7, searching for vulnerable web applications and websites running outdated CMSs, exploit the vulnerability, backdoor the website and even patch the flaw to prevent competition from getting in.”
“After an attack, such websites are sold on the Dark Web, usually via large packages consisting of several thousand breached websites. Other cybercrime groups purchase them to steal and reuse personal data (e.g. passwords), place malware to conduct watering-hole attacks, or even use the websites in targeted spear-phishing campaigns against a particular group of users.”
“Cybercrime increasingly relies on insecure web applications as a facilitator of chained attacks against the financial industry and governments. Therefore, if you are negligent and careless about your website security, don’t be surprised to receive a lawsuit from a third party, hacked via your website, claiming damages.”
Google said that most attacks it tracked are not sophisticated, and involve compromised passwords, missing security updates, insecure themes and plugins, social engineering and failures of security policy.
A good example of a more sophisticated, albeit widely publicised, attack is the recently patched WordPress REST API endpoint vulnerability, responsible for more than one million website defacements. A related XSS bug has just been patched too, but for such sites the update process can easily be automated, dodging all but zero-day attacks, which are more expensive to find and exploit and thus less common.
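For a typical WordPress site, the automation referred to can be as simple as a scheduled job driving WP-CLI, the project's official command-line tool. The sketch below is illustrative only: the schedule and the document root `/var/www/example.com` are assumptions, not a recommendation from Google or WordPress.

```
# Illustrative crontab fragment: apply WordPress core, plugin and theme
# updates nightly at 03:00 via WP-CLI (wp-cli.org).
# /var/www/example.com is an assumed document root - adjust before use.
0 3 * * * wp core update --path=/var/www/example.com && wp plugin update --all --path=/var/www/example.com && wp theme update --all --path=/var/www/example.com
```

A setup like this keeps the site ahead of the mass scanners exploiting known flaws, though updates should still be tested on a staging copy first.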
However, while simpler sites running popular software such as WordPress can be automatically updated, it is not always so easy, as another recent issue makes clear. More than a week ago, a critical flaw in the Apache Struts Web application framework was patched, triggering a new wave of attacks against the vulnerability.
The five-year-old issue allows attackers to inject commands into the web server hosting it, and affects a variety of high-profile sites including banks, government agencies and internet companies. So why wasn’t it patched immediately? Unlike WordPress updates, fixing an app that was built using a buggy version of Apache Struts requires rebuilding the app against a patched version. This in turn means developers not only poring over old source code, but then testing and QA-ing the resulting ‘new’ app - a process that can require significant resources and, most importantly, time.
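In practice, the first step of that rebuild amounts to bumping the framework dependency and recompiling. A hedged sketch for a Maven-built app follows, assuming the flaw in question is the multipart-parser issue (S2-045) fixed in Struts 2.3.32; apps on the 2.5 line would move to 2.5.10.1 instead.

```xml
<!-- pom.xml fragment (sketch): move the app onto a patched Struts
     release. The version shown assumes the S2-045 fix line. -->
<dependency>
  <groupId>org.apache.struts</groupId>
  <artifactId>struts2-core</artifactId>
  <version>2.3.32</version>
</dependency>
```

The dependency bump itself is trivial; it is the rebuild, regression testing and redeployment that consume the time the paragraph above describes.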
As Google’s security team wrote in their blogpost: “A chain is only as strong as its weakest link”...