In this post, my intention is to provide some guidelines about how to effectively manage and schedule patching in your organization.

(Editor’s note: PatchZone welcomes Augusto Alvarez.   Augusto is no stranger to blogging - he has served as a thwack ambassador and has his own blog.  He is now celebrating the publication of his second book, "Microsoft Application Virtualization Advanced Guide.")

 

Having a detailed and effective systems patching strategy is something many organizations don’t pay much attention to. Why is that? Most people think it’s too expensive to have a plan and an entire platform for something as simple as clicking “Install updates” in Windows Update.   However, most people realize they should have had a plan when something goes wrong: a blue screen when rebooting a server; services that fail to start; unexpected application downtime; or, even worse, a security breach through an out-of-date system.

 

What do I Need to Know about Microsoft Schedules?

Microsoft has an unofficial but predictable strategy for when it releases updates. Security patches are released on the second Tuesday of each month; critical updates are the exception, because those are released out-of-band as soon as the patch is ready. Tuesday is the selected day because it enables the following approach:

• Tuesday: Updates are released (around 17:00 – 18:00)

• Wednesday: Apply updates in test environment.

• Thursday: Run use cases in test environment with new updates installed.

• Friday: Install updates in production.

• Saturday: Reboot your production servers.
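If you want to schedule this cadence automatically, the “second Tuesday” is easy to compute. Here is a minimal Python sketch (the function name is my own; the weekday arithmetic is standard library only):

```python
import calendar
from datetime import date

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday of the month -- Microsoft's usual
    release day for security updates."""
    weeks = calendar.monthcalendar(year, month)
    # monthcalendar() pads days outside the month with 0, so skip those.
    tuesdays = [week[calendar.TUESDAY] for week in weeks
                if week[calendar.TUESDAY] != 0]
    return date(year, month, tuesdays[1])

print(patch_tuesday(2012, 8))  # 2012-08-14
```

You can feed the result into your calendaring or ticketing system to pre-create the Wednesday-to-Saturday tasks listed above.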

 

Do I Really Need to Have a Test Environment?

The short answer: yes, you do need a test environment. But let’s elaborate for those who always try to avoid the matter.  If you cannot afford to replicate your full production environment (servers and workstations), you can still use some reference machines currently in production, but only those where something going wrong would have low or no impact.

 

For example, to test workstations you can use the IT department’s reference machines, or the machine of a “friendly” user who won’t mind some troubleshooting if an update does something unexpected.  For mission-critical services running on servers, most organizations should have a high-availability setup (for example, a SQL Server cluster), and you can start patching with the stand-by node.

 

And of course we always have the virtualization alternative. With one server (or even a desktop computer) that has spare resources, we can virtualize at least our main services and power on those VMs whenever we need to test a new update; we can even use VMware Converter or System Center Virtual Machine Manager to convert physical machines to virtual machines and place them in an isolated environment.

 

What do I Need to Know about my Environment?

Understand your environment and the applications and services that need to be patched. There’s no point in having a good plan, a schedule and a test environment if you don’t know the right use cases for each platform you are updating.

 

If there’s a homemade application, ask a developer to script (or at least document) a simple test you can run to validate that the application is working properly. The same applies to other platforms - a database server, a messaging server like Exchange, SharePoint or any other: you must have a few tests to run whenever you update the platform, and if those are automated, even better.
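As a sketch of what such a validation script might look like, here is a minimal Python smoke test. The URLs and expected strings are hypothetical placeholders; replace them with your application’s real health checks:

```python
import urllib.request

# Hypothetical checks -- replace the URLs and expected strings with
# your own application's endpoints and known-good responses.
CHECKS = [
    ("http://intranet-app.example.local/health", "OK"),
    ("http://intranet-app.example.local/login", "Sign in"),
]

def run_smoke_tests(checks):
    """Fetch each URL and confirm the expected text appears in the body.
    Returns a list of failure descriptions (empty list = all passed)."""
    failures = []
    for url, expected in checks:
        try:
            body = urllib.request.urlopen(url, timeout=10).read().decode()
            if expected not in body:
                failures.append(f"{url}: expected text {expected!r} not found")
        except OSError as exc:  # DNS errors, refused connections, timeouts
            failures.append(f"{url}: {exc}")
    return failures
```

Run it immediately after applying updates in the test environment; a non-empty result means the patch needs investigation before it goes anywhere near production.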

 

Should I Change my Backup Plan if I have a Good Test Environment?

If you are thinking of adopting a more relaxed backup plan, the answer is no. There are some obvious reasons why - hardware failures and user errors can still occur in production - but even setting those aside, a test environment is no “silver bullet”.  There will be scenarios where behavior in testing differs slightly from production, and that “slight difference” can have a huge impact if we don’t have a way to recover.

 

What’s more, review your backup plan and ensure those backups are being tested periodically.

 

Do I Need to Review my Current Service Level Agreement (SLA)?

Yes, of course. This is an important matter: a defined SLA tells us the downtime windows available in our environment, and therefore the schedule our organization can use for applying updates.

 

The SLA also defines the priority of each service, which tells us whether that service warrants its own test environment. For example, an SLA that requires high availability for the messaging platform means you will need replicated servers to properly test new updates.

 

Do I Really Need to Document the Patch Management Processes?

Please do. This is not just another boring IT process.  For example, properly documented testing steps give us a way to guarantee repeatable and predictable steps in production.

 

Final Thoughts

As I always say, “there’s no golden rule” in the IT world, but you can find general guidance and best practices. The solution best suited to your organization won’t apply at the next company; we must always assess and understand our environment, taking into account key factors such as budget and costs, internal policies, legal compliance requirements, defined SLAs and so on.


Here are two free tools to help you with WSUS server migrations: WSUS Change Upstream Server Tool & WSUS Computer Migrator.

 

WSUS Change Upstream Server

The WSUSChangeUSS utility is designed to be used in conjunction with any WSUS upstream server migration involving a URL change to the location of the upstream server. The utility connects to each downstream server and updates the upstream server URL definition.

 

There are two functions available in this utility:

1) Export a collection of downstream servers from a WSUS server to an XML file. The syntax for the export function is:

  WSUSChangeUSS
  /command export
  /dssxmlfile <fully qualified file path to results xml file>
  /sourcewsusname <FQDN or NetBIOS name of the wsus server>
  [/sourcewsusport <portnumber> /sourcewsususessl <yes | no>]

 

2) Configure a collection of downstream computers specified in an XML file with the parameters of an upstream WSUS server specified in the command parameters. The syntax for the import function is:

 

  WSUSChangeUSS
  /command changeconfig
  /dssxmlfile <fully qualified file path to source xml file>
  /sourcewsusname <FQDN or NetBIOS name of the wsus server>
  [/sourcewsusport <portnumber> /sourcewsususessl <yes | no>]

 

A list of the valid commands and arguments is shown below:

 

/command <export | changeconfig>

/sourcewsusname <FQDN or NetBIOS name of the wsus server>

/sourcewsusport <portnumber> (optional parameter; defaults to 80 if not specified)

/sourcewsususessl <yes | no> (optional parameter; defaults to no if not specified)

/dssxmlfile <path to xml file>

 

 

NOTE: This utility requires the .NET 2.0 Framework and the WSUS 3.0 SP1 or later console or server (for the WSUS API).

 

WSUS Computer Migrator

 

The WSUSComputerMigrator utility is designed to be used in conjunction with a WSUS server migration using replication when server-side targeting is used. This tool will populate the existing computers into the correct groups on a replicated server. A reference to this technique can be found here: http://technet.microsoft.com/en-us/library/cc463370(WS.10).aspx

 

There are two functions available in this utility:

 

1) Export a collection of computers and group assignments from a WSUS server to an XML file. The syntax for the export function is:

 

  WSUSComputerMigrator
  /command export
  /xmlfile <fully qualified file path to results xml file>
  /wsusname <FQDN or NetBIOS name of the wsus server>
  [/wsusport <portnumber> /wsususessl <yes | no>]

 

2) Import a collection of computers and group assignments from an XML file to a WSUS server. The syntax for the import function is:

 

  WSUSComputerMigrator
  /command import
  /xmlfile <fully qualified file path to source xml file>
  /wsusname <FQDN or NetBIOS name of the wsus server>
  /logfile <xmlfilename>
  [/wsusport <portnumber> /wsususessl <yes | no>]

 

Here is a complete list of all valid commands and arguments:

 

/command <import | export>

/wsusname <FQDN or NetBIOS name of the wsus server>

/wsusport <portnumber> (optional parameter; defaults to 80 if not specified)

/wsususessl <yes | no> (optional parameter; defaults to no if not specified)

/xmlfile <fully qualified file path to xml file listing computers and groups>

/logfile <fully qualified file path to xml file of log results> (only valid for import command)

 

NOTE: This utility requires the .NET 2.0 Framework and the WSUS 3.0 SP1 or later console or server (for the WSUS API).


Want a 3rd free tool?  Check out the Free Diagnostic Tool for the WSUS Agent to help troubleshoot WSUS connection issues.

In a previous post we described options for patching VMs in your environment - the virtual machines within your firewall and within your control.

 

You’re facing different challenges when it comes to the public cloud -- a multi-tenant infrastructure as a service environment such as Amazon Web Services. The technical issues aren’t any different but the legal and contractual issues are. They boil down to: Who is responsible for keeping the server and applications patched, and how and when are those patches applied?

 

Patch Management Questions to Ask Your Cloud Provider

The first and most basic question to address is whether you (the customer) or the IaaS provider is responsible for patching. Can you, for example, use your corporate WSUS server to patch your VMs’ applications in the public cloud, even if you don’t own the license for the operating system on those VMs?

 

Assuming the IaaS provider is responsible, you’ll need to know how they will execute the patch process. Will they let you, the customer, control when the deployments are done, so you have some warning in case the patches go wrong? Can you decide when the patches are applied, or at least get a warning, if servers need to be taken off-line or rebooted as part of the patch process?

 

If failed patches cause service failures, does your provider have enough redundancy in place so your users never see the impact of patching? And if a patch affects application performance or availability, what legal or financial recourse do you have?

 

Patch Management is Your Responsibility
Regardless of who actually does the patching, you’re in the best position to know which systems are most critical and when they should be patched.

 

Your first, and maybe most difficult, choice is deciding (just as in your private cloud) how often to update your master images.  More frequent patching of the master image means fewer machines to patch down the road (if you are frequently spinning up new VMs).  Many organizations only update their images when a new service pack is released (oh my).  That, however, makes for a lot more work, because once you’ve decided to deploy a patch you have to find and update every VM made from the master image.  Updating infrequently also opens the door to hackers attacking your systems.

 

Once you’ve decided how frequently patches should be applied, how will you communicate your patching policy (including your planned patching schedule) to your IaaS provider to prevent conflicts, such as the provider performing maintenance at the same time?  And how will you find out whether a patch succeeded, failed, or caused a performance issue?

 

However you divide the work with your provider, the bottom line is that just as with security, you can’t outsource the ultimate responsibility for patch management. These are your servers and your applications serving your users and customers, and it’s up to you to work with your provider to make sure they’re patched appropriately.

 

Do you patch applications on servers residing in a public cloud?  Tell us, how do you patch them?

Are you one of those administrators who take pride in rolling out every software patch as soon as it comes down the pike? Or do you go to the opposite extreme, not trusting any vendor not to blow up your systems with a patch that does more harm than good?

 

Without some sort of ROI analysis to prioritize your patches, you’re needlessly exposing yourself to the risks of costly security breaches, not to mention day to day expenses like debugging crashed systems and cumbersome validation checks and compliance reports.

 

Let me explain.

 

ROI of Patching

Deploying a patch is like conducting any other business function. There is a cost, a benefit, and a risk involved.

 

• The cost is the effort involved in researching, testing, deploying, and validating the patch.
• The benefit is either increased functionality or reduced security risk (or both).
• If you deploy unnecessary patches, the risks include overspending on deployment and troubleshooting, and reduced productivity if a bad patch causes system outages.
• If you skip needed patches, you risk security breaches and possible system outages.

 

Every admin intuitively understands these tradeoffs. But too few do the same kind of “return on investment” analysis on patching as a business does with any other activity. Some always update on Patch Tuesday, trusting anything that comes from Redmond. (Maybe not such a good idea.) Or they take pride in ignoring any but the most critical patches, so as not to rock the boat when their users are happy.

 

Both groups are skating on thin ice and will inevitably fall through, either when an unneeded patch interrupts application stability or a hacker breaks in through an unpatched system.

 

Prioritize Patching Activities to Maximize ROI

A better approach – using the discipline businesses use to prioritize all their other activities – is to decide which patches should be applied immediately, which can be deferred pending further investigation, and which are not worth even considering.

 

This prioritization must take into account how critical each system is to the business; the damage a successful attack on it would cause; the quality of the patches available for it and the effort involved in researching, testing, deploying and validating the patch.
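One lightweight way to make this concrete is a scoring sketch. The weights and 1–5 scales below are illustrative assumptions, not a standard; the point is simply to force the four factors into comparable numbers:

```python
def patch_priority(criticality, damage, patch_quality, effort):
    """Rough patch-priority score from the four factors discussed above.
    Each input is on a 1-5 scale; a higher result means patch sooner.
    The weighting is an illustrative assumption, not a standard."""
    benefit = criticality * damage * patch_quality
    return round(benefit / effort, 1)

# Hypothetical scores for a few patch candidates.
patches = {
    "IE cumulative update":  patch_priority(5, 5, 4, 2),
    "Visio RCE fix":         patch_priority(2, 3, 4, 2),
    "New service pack":      patch_priority(4, 4, 3, 5),
}
# Bucket into "apply now", "defer" and "skip" using thresholds you choose.
for name, score in sorted(patches.items(), key=lambda kv: -kv[1]):
    print(f"{score:5}  {name}")
```

Even a crude ranking like this forces the “apply now / defer / skip” conversation to happen on evidence rather than habit.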

 

You should have a good feel for each of those four pieces of information. When it comes to assessing patch quality, outside organizations that test patches (such as SolarWinds) and the companies that share their experiences on PatchManagement.org can help. You don’t, of course, want to fall into “analysis paralysis” with this ROI, and you don’t have to. The important point is to start applying some sort of ROI analysis to your patching, and to improve on it over time.

 

Invest the time now to reap the benefits later

I can already hear you complaining that you’re already spending too much time on patching, and the last thing you have time for is this ROI analysis stuff. This reminds me of the old joke about the car owner who doesn’t want to spend money on an oil change, to which his mechanic says “Well, you can pay me now or you can pay me later.”

 

You can invest a little bit of time up-front to prioritize your patches and minimize your long-term cost and risk. Or, you can take an “all or nothing” approach that guarantees more time and anguish later in wasted deployment and validation, or (even worse) figuring out which mis-patched systems blew up or let in a hacker. It’s your choice.

Microsoft was not alone in delivering updates this Patch Tuesday.  Below is a synopsis of the updates provided by Microsoft, Google and Adobe.

 

Microsoft Patches
Microsoft’s Patch Tuesday for August 2012 delivers nine security updates, five of which are rated critical, with updates preventing remote code execution.  The remaining updates are rated important.


Critical Microsoft Updates
• Cumulative Security Update for Internet Explorer – This update patches exploits in Internet Explorer that allow an attacker to gain a user’s privileges via remote code execution through specially crafted webpages.
• Vulnerability in Remote Desktop Could Allow Remote Code Execution – This patch prevents remote code from being executed when an attacker sends a modified RDP packet.
• Vulnerabilities in Windows Networking Components Could Allow Remote Code Execution – This update patches a vulnerability that allows remote code execution when a specially crafted response is sent to the Windows print spooler.
• Vulnerability in Windows Common Controls Could Allow Remote Code Execution – The vulnerability in Windows Common Controls allows remote code execution when the user visits a website crafted to exploit it.
• Vulnerabilities in Microsoft Exchange Server WebReady Document Viewing Could Allow Remote Code Execution – This patch fixes a vulnerability that allows remote code execution in the security context of the transcoding service on the Exchange server.

 

Microsoft Non-Critical Important Patches
• Vulnerability in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege
• Vulnerability in JScript and VBScript Engines Could Allow Remote Code Execution
• Vulnerability in Microsoft Office Could Allow Remote Code Execution
• Vulnerability in Microsoft Visio Could Allow Remote Code Execution


3rd Party Application Patches
• Google has released Chrome 21.0.1180.77, a release that fixes many security and stability issues.
• Adobe released Flash 11.3.300.271, which fixes a vulnerability that allows attackers to crash the application and take control of the user’s system by executing malicious code distributed through a Word document. This update also fixes a vulnerability in the ActiveX version of Adobe Flash Player used by Internet Explorer.
• Adobe also released Adobe Reader 10.1.4, Acrobat 9.5.2 and Shockwave 11.6.6.636, which fix memory corruption vulnerabilities that could allow remote code execution.

• Oracle also released Java JRE 7u6 and JRE 6u34.

 

For a comprehensive list of recent 3rd party updates, check out this table.

Complete the Patch Management Survey for a chance to win an Apple gift card, valued at $500.

Automation dramatically reduces costs, but it works best when the components being handled are all configured the same so they’ll fit together easily.

 

In software patching, though, custom configurations are more common and can cause more patching problems than you may know. Here’s what to look out for and some suggestions for avoiding problems.

 

Why Custom Configurations?

Some custom configurations are created intentionally, such as when an administrator blocks ports or turns off default services to harden an operating system against attack. As long as you track the changes you make, or are one of the few who create “templates” of hardened systems on which you can test patches, this is a manageable issue.

 

What’s trickier is when applications, as part of their installation process, change an operating system in ways that aren’t immediately apparent. Examples include databases that specify how a server is configured, or Web applications that require certain features be enabled on a Web server.

 

It’s even more troublesome when one application changes something on the operating system that breaks another application or the patch process for it.  For example, both Symantec Endpoint Protection and WSUS rely on a virtual directory called “content,” but neither checks whether it already exists before trying to create it. An administrator might thus install WSUS on an alternate virtual directory to allow Symantec Endpoint Protection to run. But WSUS itself has no way of knowing about this change, so if a patch requires a fresh install of WSUS, it re-creates the “content” virtual directory and thus breaks Symantec Endpoint Protection.

 

None of these potential conflicts are caught by vendors because they can’t possibly anticipate all the combinations of apps users might be running. When Microsoft tests its updates, it uses machines that have been updated with the current service pack and have no applications installed. The same is true for Adobe, Apple and other vendors.

 

Test Environments for Patching: Good Enough?

Ideally you would have a test environment that mirrored all the configurations in your production environment.  But most companies can’t afford that, even if they have a complete inventory of their production environment.

 

Most organizations instead use low-risk machines, or those owned by users they trust to report problems, before rolling patches out to a wider audience. In many cases, test servers are seen as lower-risk than production servers, and thus aren’t hardened the same way.  That makes them inaccurate test beds for patching. 

Delaying the deployment of patches to higher-risk systems is also counterproductive because those are often the systems with the most customized configurations. Patching those last means you only learn of problems with patches when they’ve crashed your more critical systems.

 

What To Do?

A common thread throughout this post has been that what you don’t know about your custom configurations can and will hurt you when it comes to patching.  If you’re using automated tools to manage your virtual infrastructure, and to create templates for common types of VMs, use those tools to create and track “golden images” of custom configurations to test patches on them. 

 

Keeping track of configuration drift is also important when preparing to deploy updates.  If you have a CMDB, it often tracks configuration changes and golden master configurations.  Whatever your change control process is (spreadsheets, logs, specialized change management tooling), it should be incorporated into your patch management process.
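Even without a CMDB, a crude diff of a recorded “golden” configuration snapshot against what a server currently reports will catch drift before you patch. A minimal Python sketch; the setting names below are hypothetical examples:

```python
def find_drift(golden, actual):
    """Compare a golden-image configuration snapshot against a live
    server's snapshot. Returns {setting: (expected, found)} for every
    setting that differs or is missing on either side."""
    drift = {}
    for key in set(golden) | set(actual):
        expected = golden.get(key, "<absent>")
        found = actual.get(key, "<absent>")
        if expected != found:
            drift[key] = (expected, found)
    return drift

# Hypothetical settings recorded for a hardened web server template.
golden = {"wsus_virtual_dir": "content", "port_445": "blocked",
          "print_spooler": "disabled"}
actual = {"wsus_virtual_dir": "wsus_content", "port_445": "blocked"}

for setting, (expected, found) in find_drift(golden, actual).items():
    print(f"{setting}: expected {expected!r}, found {found!r}")
```

How you collect the snapshots (scripted registry/IIS queries, agent inventory, etc.) is up to your tooling; the comparison itself is this simple.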

 

Whatever your environment and staffing levels, there are ways to learn more about which custom configurations you have hiding in your data center. Understanding them is the first step towards avoiding these patch management land mines. 

 

Please share your experience in how you approach this problem of patching custom-configured applications.

Editor's Note: PatchZone welcomes our newest guest blogger - Brien Posey.  Brien, a six-time Microsoft MVP, is a published author of over 4,000 technical articles and papers and three dozen book contributions. He has served as a CIO in the healthcare industry and a network administrator for the United States Department of Defense. Check out his blog here.

 

 

Today almost every software vendor routinely releases patches for their wares. Because of the overabundance of available patches, it is critical for organizations to adopt a comprehensive patch management policy. Doing so is the only way to stay organized and to keep track of which patches have been deployed and why. This blog post outlines some important elements that should be included in an organization’s patch management plan.

 

Patch Testing Period

Probably the most hotly debated aspect of patch management is the testing period. Some organizations go way overboard with patch testing, subjecting each patch to a battery of tests for six months or more. On the other end of the spectrum, there are organizations that do not do patch testing at all.

 

The reason patch testing is such a hotly debated topic has to do with the double-edged sword that is patch management. It is a well-documented fact that some patches have bugs. If an organization fails to test its patches, it could experience problems related to a buggy patch. On the other hand, most patches are designed to address security vulnerabilities. An organization that spends an excessive amount of time testing patches leaves itself vulnerable to the exploit the patch is designed to correct until the day the patch is eventually applied.

 

Your patch management policy should define the amount of time that is allowed for patch testing once a patch is released. Some patch management policies define multiple testing schedules based on the patch type. For instance, a patch that addresses a critical vulnerability might be tested over a shorter duration than a more comprehensive patch such as a service pack or a feature pack.
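Encoded as data, such a policy might look like the sketch below. The patch-type categories and day counts are illustrative assumptions that each organization should set for itself:

```python
from datetime import date, timedelta

# Illustrative testing windows per patch type -- substitute the values
# your own policy defines.
TESTING_DAYS = {
    "critical-security": 2,
    "important-security": 7,
    "feature-update": 30,
    "service-pack": 60,
}

def production_deploy_date(patch_type: str, released: date) -> date:
    """Earliest production deployment date under the policy above."""
    return released + timedelta(days=TESTING_DAYS[patch_type])

print(production_deploy_date("critical-security", date(2012, 8, 14)))  # 2012-08-16
```

Having the windows in machine-readable form also makes it trivial to flag patches that are overdue for a deployment decision.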

 

Of course determining an acceptable testing schedule is only one aspect of the patch management policy. Typically the policy will also stipulate how the testing is to be done. For example, some organizations use lab testing, while others prefer to perform pilot patch deployments in the production environment.

 

Patch Deployment Method

Another aspect of the patch management process that should be outlined in your policy is the patch deployment method. Typically this is a simple statement outlining the technical aspects of installing the patches and verifying the success of the installation process.

 

Post Deployment Testing

Even with a high degree of patch testing, there is always a chance that a bug could be discovered after a patch has been approved for use in the production environment. As such, some organizations like to do post deployment testing as a way of verifying that the patch is not causing any problems.

 

If your organization does post deployment testing then your patch management policy should outline the testing methods and the testing schedule.

 

Patch Rollback
If a patch is determined to be problematic after it has been deployed to the production network then the patch may need to be removed. Your patch management policy should discuss the technique for removing the patch, but also the situations that warrant removing the patch.

 

Most patch management utilities include a mechanism for removing buggy patches, but it is important to understand that patches are sometimes hierarchical. It may be impossible to remove a patch if subsequent patches have been applied, unless the buggy patch and all subsequent patches are removed. Needless to say this is a somewhat drastic step. That’s why it is so important to outline the circumstances under which it is acceptable to remove a buggy patch.

 

Patch Reconciliation

Finally, a good patch management policy should address the subject of patch reconciliation. If your organization is performing comprehensive patch testing and only deploying approved patches then it stands to reason that you should have a list of the patches that have been approved for deployment.  Patch reconciliation involves comparing this list against an inventory of each computer’s contents to verify that all approved patches have been applied and that no unapproved patches have slipped through the cracks.
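The comparison itself is just two set differences per machine. A minimal sketch, assuming you can export an approved-patch list and a per-machine inventory of installed KB IDs (the KB numbers below are placeholders):

```python
def reconcile(approved, installed_by_host):
    """For each host, report approved patches that are missing and
    installed patches that were never approved."""
    approved = set(approved)
    report = {}
    for host, installed in installed_by_host.items():
        installed = set(installed)
        report[host] = {
            "missing": sorted(approved - installed),
            "unapproved": sorted(installed - approved),
        }
    return report

# Hypothetical approved list and inventory for illustration.
approved = ["KB2722913", "KB2731847", "KB2733918"]
inventory = {
    "web01": ["KB2722913", "KB2731847", "KB2733918"],
    "sql02": ["KB2722913", "KB9999999"],
}
for host, result in reconcile(approved, inventory).items():
    print(host, result)
```

The hard part in practice is gathering an accurate inventory, which is where your patch management tooling earns its keep; the reconciliation logic stays this simple.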

 

Your patch management policy should state how often a reconciliation report should be created, as well as how to deal with any discrepancies that may be found within the report.

 

Conclusion

The items that I have outlined in this blog post represent some of the elements that should go into a good patch management policy. However, every organization’s needs are different, so it is important to tailor your patch management policy to your own unique requirements.

 

 

Sign up for alerts on PatchZone news & tips here.
