It’s been a busy week among software companies and OEMs, as both Microsoft and Adobe have released a flurry of patches. Microsoft’s current “Patch Tuesday” bundle features fixes for almost one hundred flaws in Windows and other Microsoft software. Adobe’s updates continue to patch its Flash and Shockwave technologies, both of which are unfortunate poster children for insecure software.
For both of these companies, progress toward ensuring that their products are secure has been a long road. In the span of a few short years in the early 2000s, Microsoft transformed itself from a company with a spotty, almost nonchalant attitude toward security into one of the industry’s best examples of security patch management, by incorporating frequent, scheduled patch updates coupled with native auto-update features in its products. For Adobe, progress has been more difficult, with Adobe platforms such as Flash and Acrobat still considered to have significant, fundamental security flaws – but regardless, Adobe as a company is much more responsive to security flaws in its products than it used to be.
Truth be told, patch management is often viewed as one of the least exciting, least sexy aspects of information security, and it’s not hard to see why. Patch management is a never-ending task that requires constant vigilance to determine when patches are released. The scope of patch management in most organizations keeps growing as new software applications, network devices and Internet of Things (IoT) devices come online. And the rewards of effective patch management are rarely visible: if your patch management process is working, then nothing happens (or at least, nothing bad).
And yet, patch management is one of the great bastions of effective information security. In a previous life, I conducted a lot of vulnerability assessments and penetration tests for clients. Invariably, two findings were the most common: (1) operating systems were not patched, often including entire operating systems that were no longer supported by the vendor; and (2) even when the OS was patched, supporting software like Adobe Acrobat, web server software like Apache HTTPd, and other tools and technologies were woefully out of date. Collectively, these issues point to a big problem: organizations are not effectively addressing the basic, block-and-tackle operations of security, including patch management.
Addressing patch management requires following three basic “golden rules” that will reduce risks from unpatched systems while minimizing the impact to your business:
- Application patches and firmware are just as important as OS patches. Far too often, organizations focus intently on operating system patches while ignoring patches to applications and device firmware; this is a huge mistake. The reality is, malware attacks application software vulnerabilities as frequently as – if not more frequently than – operating system holes. This is particularly true when it comes to common document handlers (such as Adobe Acrobat and Microsoft Office programs) and network-oriented services such as web servers and middleware, because these applications tend to be critical support components of revenue-generating business processes. For this reason, it’s absolutely critical not to forget these components during the patching process; focusing only on OS-level patches is a sure recipe for disaster.
- Testing is important, but so is timely deployment. New patches and firmware issued for operating systems, software and devices always carry the risk of “breaking something”. The possibility that critical, revenue-generating systems could be impacted by a new patch that does something unexpected – such as altering how a service or protocol is used – is (and should be) a very real concern. For many organizations, new patches are tied into a testing process that confirms that these patches and updates don’t adversely affect systems. In this way, critical functionality can be tested before the patches are deployed. However, there is a bit of a Catch-22: the longer an organization takes to deploy patches, the more likely it is that there will be an attempt to exploit the unpatched systems. For this reason, it’s important to set a maximum timeframe for deployment; a typical best practice we see in the field is a maximum of 30 days to deploy security patches with a “critical” rating, and 60 days for non-critical and non-security patches.
- Good patch management is a process, not just a procedure. While it is certainly possible to maintain a manual patching process in some organizations, that’s not likely to be efficient. Good patch management includes not only deployment of patches, but verification that those patches were deployed – and that almost invariably requires automation. While there are patch management solutions out there (EiQ’s SOCVue provides integrated, out-of-box patch management along with continuous monitoring and vulnerability management, for example), it’s important to make sure that whatever solution you deploy (be it automated, manual or a combination of the two) meets some basic criteria, including the ability to:
- Schedule patch deployment (you don’t want systems rebooting in the middle of the day after a critical patch is installed!)
- Report on the success or failure of patch deployment to individual systems, and audit your systems for the patches currently deployed
- Identify systems that are missing high-criticality patches
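To make the deadline and audit criteria above concrete, here is a minimal sketch in Python of how they might be checked. The patch IDs, release dates, and host names are hypothetical, and a real deployment would pull this data from a patch feed and asset inventory rather than hard-coded dictionaries; the 30- and 60-day limits are the timeframes suggested earlier.

```python
from datetime import date, timedelta

# Hypothetical patch catalog: release date and criticality for each patch.
# In practice this would come from a vendor feed, not a hard-coded dict.
PATCHES = {
    "KB5001": {"released": date(2016, 5, 10), "criticality": "critical"},
    "KB5002": {"released": date(2016, 4, 12), "criticality": "low"},
}

# Maximum deployment windows: 30 days for critical, 60 days otherwise.
MAX_AGE = {"critical": timedelta(days=30), "low": timedelta(days=60)}

def overdue_patches(installed, today):
    """Return patch IDs that a system is missing past their deadline."""
    missing = set(PATCHES) - set(installed)
    return sorted(
        p for p in missing
        if today - PATCHES[p]["released"] > MAX_AGE[PATCHES[p]["criticality"]]
    )

# Hypothetical audit data: patches reported installed on each host.
systems = {
    "web01": ["KB5001", "KB5002"],  # fully patched
    "db01":  ["KB5002"],            # missing a critical patch
}

today = date(2016, 6, 15)
for host, installed in systems.items():
    late = overdue_patches(installed, today)
    print(f"{host}: " + ("OK" if not late else "OVERDUE: " + ", ".join(late)))
```

The same loop covers two of the criteria at once: the per-host report shows deployment success or failure, and any non-empty result identifies a system missing a high-criticality patch past its window.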
Patch management may not be the sexiest aspect of information security, but the fact is, it is a crucial last line of defense. Even with the most effective controls in place for anti-malware, anti-spam, intrusion detection/prevention and other security measures, eventually malicious code or an attacker is going to get through. And when they do, it will be effective patching that prevents these bad actors from successfully exploiting your operating systems and applications.