Automation dramatically reduces costs, but it works best when the components being handled are all configured the same so they’ll fit together easily.

 

In software patching, though, custom configurations are more common than you might think, and they can cause more patching problems than you realize. Here’s what to look out for, along with some suggestions for avoiding trouble.

 

Why Custom Configurations?

Some custom configurations are created intentionally, such as when an administrator blocks ports or turns off default services to harden an operating system against attack. As long as you track the changes you make, or are one of the few who create “templates” of hardened systems on which you can test patches, this is a manageable issue.
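
If you don’t have formal tooling for this, even a simple append-only change log goes a long way toward keeping hardening changes reproducible. Here is a minimal sketch of that idea in Python; the log location and field names are assumptions for illustration, not part of any particular product.

```python
# Minimal sketch: record each intentional hardening change (disabled service,
# blocked port, etc.) in an append-only JSON Lines log, so the patch-testing
# team can reproduce the same configuration on a test machine.
# The log path and field names here are hypothetical.
import json
from datetime import datetime, timezone
from pathlib import Path

CHANGE_LOG = Path("hardening_changes.jsonl")  # assumed location

def record_change(host: str, change: str, reason: str, admin: str) -> None:
    """Append one hardening change as a single JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "host": host,
        "change": change,   # e.g. "disabled service: Telnet"
        "reason": reason,   # e.g. "OS hardening baseline"
        "admin": admin,
    }
    with CHANGE_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

def changes_for(host: str) -> list[dict]:
    """Return every recorded change for a host, oldest first."""
    if not CHANGE_LOG.exists():
        return []
    with CHANGE_LOG.open(encoding="utf-8") as log:
        return [e for e in map(json.loads, log) if e["host"] == host]

# Example: replay the log to build an identically hardened test template.
record_change("web01", "disabled service: Telnet", "OS hardening", "jsmith")
print(changes_for("web01"))
```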

 

What’s trickier is when applications, as part of their installation process, change an operating system in ways that aren’t immediately apparent. Examples include databases that dictate how the underlying server must be configured, or Web applications that require that certain features be enabled on a Web server.

 

It’s even more troublesome when one application changes something on the operating system that breaks another application or the patch process for it.  For example, both Symantec Endpoint Protection and WSUS rely on a virtual directory called “content,” but neither checks whether it already exists before trying to create it. An administrator might thus install WSUS on an alternate virtual directory to allow Symantec Endpoint Protection to run. But WSUS itself has no way of knowing about this change, so if a patch requires a fresh install of WSUS, it re-creates the “content” virtual directory and thus breaks Symantec Endpoint Protection.
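
A pre-installation check can catch this kind of collision before anything breaks. The sketch below is one way to do that check in Python, assuming IIS’s default applicationHost.config location and standard configuration layout; it only reports an existing “Content” virtual directory, it doesn’t resolve the conflict.

```python
# Sketch: before installing a product that wants to create an IIS virtual
# directory named "Content", report any site that already defines one.
# Assumes IIS's default applicationHost.config path and schema; adjust as needed.
import xml.etree.ElementTree as ET
from pathlib import Path

APPHOST = Path(r"C:\Windows\System32\inetsrv\config\applicationHost.config")

def existing_vdirs(name: str = "Content") -> list[str]:
    """Return 'site : path' strings for applications or virtual directories
    whose path is /<name> (case-insensitive)."""
    if not APPHOST.exists():
        return []  # IIS not installed, or a non-default config path
    wanted = "/" + name.lower()
    hits = []
    for site in ET.parse(APPHOST).getroot().iter("site"):
        site_name = site.get("name", "?")
        for app in site.iter("application"):
            if app.get("path", "").lower() == wanted:
                hits.append(f"{site_name} : {app.get('path')}")
            for vdir in app.iter("virtualDirectory"):
                if vdir.get("path", "").lower() == wanted:
                    hits.append(f"{site_name} : {app.get('path', '')}{vdir.get('path')}")
    return hits

if __name__ == "__main__":
    conflicts = existing_vdirs()
    if conflicts:
        print("'Content' is already in use; a second product claiming it may break the first:")
        for c in conflicts:
            print("  ", c)
    else:
        print("No existing 'Content' virtual directory found.")
```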

 

None of these potential conflicts are caught by vendors because they can’t possibly anticipate all the combinations of apps users might be running. When Microsoft tests its updates, it uses machines that have been updated with the current service pack and have no applications installed. The same is true for Adobe, Apple and other vendors.

 

Test Environments for Patching: Good Enough?

Ideally, you would have a test environment that mirrors every configuration in your production environment. Most companies can’t afford that, though, even if they do have a complete inventory of what’s in production.

 

Most organizations instead test patches first on low-risk machines, or on machines owned by users they trust to report problems, before rolling the patches out to a wider audience. In many cases, though, test servers are seen as lower-risk than production servers and thus aren’t hardened the same way, which makes them inaccurate test beds for patching.

Delaying the deployment of patches to higher-risk systems is also counterproductive, because those are often the systems with the most customized configurations. Patching them last means you only learn about configuration-specific problems once a patch has already crashed your most critical systems.

 

What To Do?

A common thread throughout this post has been that what you don’t know about your custom configurations can and will hurt you when it comes to patching. If you’re using automated tools to manage your virtual infrastructure and to create templates for common types of VMs, use those same tools to create and track “golden images” of your custom configurations, and test patches against those images before deploying them.
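
One way to make those golden images pay off at patch time is to give each image a stable fingerprint and map production hosts to the image they were built from, so a patch cycle tells you exactly which images need testing. The Python sketch below illustrates the idea; the image names, configuration facts, and host mappings are made up for illustration and would normally come from your provisioning tool or CMDB.

```python
# Sketch: fingerprint each "golden image" by hashing its recorded configuration
# facts, and map production hosts to the image they were built from. Before a
# patch cycle, the set of distinct fingerprints tells you which images to test.
# All names and data here are illustrative.
import hashlib
import json

def fingerprint(config: dict) -> str:
    """Stable hash of a configuration dictionary (sorted keys)."""
    blob = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]

golden_images = {
    "hardened-web": {"iis": True, "telnet_service": "disabled", "smb1": False},
    "db-server":    {"iis": False, "sql_port": 1433, "smb1": False},
}

# Which golden image each production host was built from (normally pulled
# from your provisioning tool or CMDB, not a hard-coded dict).
host_to_image = {"web01": "hardened-web", "web02": "hardened-web", "db01": "db-server"}

def images_to_test(hosts: list[str]) -> dict[str, str]:
    """Return {image_name: fingerprint} for every image the patch must be tested on."""
    needed = {host_to_image[h] for h in hosts if h in host_to_image}
    return {img: fingerprint(golden_images[img]) for img in needed}

print(images_to_test(["web01", "web02", "db01"]))
# e.g. {'hardened-web': '<hash>', 'db-server': '<hash>'}
```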

 

Keeping track of configuration drift is also important when preparing to deploy updates. If you have a CMDB, it probably already tracks configuration changes and golden master configurations. Whatever your change control process looks like (spreadsheets, logs, or specialized change management tooling), it should be integrated with your patch management process.
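
However you record baselines, the step that pays off is a drift check run just before deployment, flagging anything that has changed since the last approved configuration. Here is a minimal sketch, assuming the baseline and current state are both available as flat {setting: value} dictionaries exported from your CMDB or gathered by a collection script:

```python
# Sketch: compare the current configuration of a host against the baseline
# recorded at its last approved change, and report drift before patching.
# The snapshot format is an assumption: a flat {setting: value} dictionary.
def configuration_drift(baseline: dict, current: dict) -> dict:
    """Return {setting: (baseline_value, current_value)} for every difference."""
    drift = {}
    for key in baseline.keys() | current.keys():
        old, new = baseline.get(key, "<missing>"), current.get(key, "<missing>")
        if old != new:
            drift[key] = (old, new)
    return drift

# Illustrative data only.
baseline = {"telnet_service": "disabled", "tls_min_version": "1.2", "smb1": "off"}
current  = {"telnet_service": "disabled", "tls_min_version": "1.0", "smb1": "off"}

drift = configuration_drift(baseline, current)
if drift:
    print("Configuration drift detected; review before deploying patches:")
    for setting, (old, new) in drift.items():
        print(f"  {setting}: baseline={old!r}, current={new!r}")
else:
    print("No drift detected; host matches its approved baseline.")
```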

 

Whatever your environment and staffing levels, there are ways to learn more about which custom configurations you have hiding in your data center. Understanding them is the first step towards avoiding these patch management land mines. 

 

Please share your experience with how you approach the problem of patching custom-configured applications.