
Whiteboard

10 Posts authored by: Lawrence Garvin

What does the Windows XP rundown mean?

  1. Microsoft will no longer provide technical support for issues involving Windows XP, unless the customer has purchased a Premier Support Services contract.
  2. Microsoft will no longer provide security updates for the product, or anything associated with the product (for example, Internet Explorer 6).

 

Why is this important?

According to NetMarketShare, about 29 percent of Internet-connected systems today are still running Windows XP, and there’s no consensus that this number will change much in the next 90 days. Here’s a recap of the things IT pros need to consider:

  • Cost. It’s a no-brainer that switching to a new OS produces significant costs in time, money, and personnel, from up-front infrastructure spending to time spent training and educating end users. IT needs to weigh the cost/benefit of replacing every Windows XP system with a newer OS such as Windows 7 against the cost of maintaining Premier Support Services. If neither of those options seems acceptable, there is also the cost of remediating continual malware infections, potential loss of data to socially engineered attacks, or complete loss of use due to corruption or destruction caused by malware. More on that to come. This is a scary thought, because it is most likely what will actually happen in many organizations: Windows XP systems will keep on going, but without any new updates.
  • Security. The end of support for Windows XP also means the end of XP patches. Any security flaws discovered after April 8 will go unpatched and will almost certainly be exposed and exploited. Another security challenge facing XP goes beyond the OS itself: there may not be a secure browser available for the machine. Microsoft did not build IE9 for Windows XP, and what remains to be seen is what Mozilla and Google do with versions of Firefox and Chrome for XP beyond April.
  • Application availability. Some software vendors haven’t been prudent about updating their applications to run on newer operating systems. In many cases, these are 32-bit applications originally written for Windows 95 or 98 that still “play nice” on a Windows XP system but require local administrator privileges to run. Newer operating systems, however, are not compatible with these applications, yet businesses continue to depend on them and the operating system they run on. (We should note that with Windows 7, Microsoft introduced “Windows XP Mode,” a virtual environment running on Windows 7 Enterprise and Windows 7 Ultimate. However, many of the current issues with Windows XP support will also affect the “Windows XP Mode” environment, though possibly to a lesser degree, because “Windows XP Mode” is not typically where Internet activity is executed.) Nonetheless, organizations are caught between software vendors that aren’t updating their applications to run on newer operating systems, and Microsoft enabling this practice by providing Windows XP support within newer operating systems (until April 8, that is).
  • Organizational and end-user buy-in. For any IT organization that’s already experienced the pains of transitioning to VoIP, VDI or even installing Microsoft Office upgrades, the thought of introducing, selling and training leadership and end users on an entirely updated OS sounds like the opposite of fun. It’s crucial to get organizational and end-user buy-in, even when the reasons for making the change are entirely valid and will inevitably leave the organization better off.  

 


Be sure to weigh in on this issue over on the General Systems & Application forum here: Coming April 8: Windows XP Rundown. We’ve also got an XP rundown-related poll running that we invite you to participate in here: XP Support Coming to an End.

When I first started working in I.T., my toolbox was pretty thin. Actually, so were the MS-DOS PCs that I was working on, so it didn't take a lot of tools to manage an 8086 PC. My entire toolset fit on a single 360 KB floppy disk. Life in I.T. has changed dramatically since that long-ago time, and today the toolbox of the IT pro is an indispensable thing. Also, to a great extent, that toolbox is no longer something we carry around with us, but rather a set of capabilities and services that make doing what we do so much easier.

 

Recently I was asked to identify the Top 10 innovations that changed my life in I.T. I solicited a bit of help from some friends, and this is the list we came up with:

[Infographic: 10 IT Innovations That Changed Your Life]

In a recent MSPMentor analysis of a Perimeter ESecurity survey, Dan Berthiaume offers the idea that the lack of mobile adoption among SMBs is due to those SMBs’ security concerns. In the opening paragraph of his article, Small Business Thwarted by Mobile Security Concerns, he notes that “…the first sign of SMB concerns about the security of mobile and remote devices is the fact that about 48 percent of all SMBs have less than 10 percent of their employees using laptops…”, which paints a pretty bleak picture of the adoption of mobile computing in SMBs. The original blog post at Perimeter ESecurity discussing the survey results does give us a bit of hope, though. It states, “Our research reveals that laptop adoption is very prevalent among small businesses today, with a quarter of these respondents reporting 80 percent or more of their workforce regularly use laptops.”


The raw numbers

The actual truth lies somewhat in between, and the key to understanding the ‘in-between’ lies in looking at the more detailed numbers:

  • 25% of organizations with fewer than 50 employees report that more than 80% of their employees use laptops
  • 6% of organizations with 50-500 employees report that 80% of their employees use laptops

 

Macs and Public Wi-Fi

However, the original Perimeter ESecurity survey results focused mostly on conclusions about the growing presence of Macs and their use in locations with public Wi-Fi access, not on mobile computing as a whole. (The full study is published in a white paper titled Rising Mac and Public WiFi Use Poses New Risks to Businesses.) The concerns described in the white paper relate to the expected increase in acquisition of Macs in the workplace (perhaps a manifestation of the current debacles with Windows 8?). Specifically, almost a third of all organizations surveyed (31%) plan to increase the number of Mac laptops, but a number of the IT managers surveyed are unsure whether their existing security policies for Windows PCs will meet the needs of the Mac, given that threats in the Mac landscape are increasing. A majority of respondents (61%) are also concerned about the security of public networks.


An alternate theory

Ironically, it is, in fact, small businesses that have a higher rate of mobile computing adoption than their big brothers. It’s possible that these numbers merely reflect the ages of those two groups, and nothing more significant. Smaller businesses are likely to be newer to the arena, so they may have invested in laptops right out of the gate, particularly where the office infrastructure is purely wireless; larger organizations have likely been around for a while, and their fleets probably reflect the standard practice of investing in desktop systems during most of the first decade of this century.

 

It’s important, in any case, not to confuse correlation with causation. While it might be true that security concerns drive some organizational decisions to shy away from mobile computing, the survey results don’t necessarily indicate that. Consider some alternative reasons for the technology decisions made by SMBs. It is just as likely that the SMBs that are not investing in mobile computing are holding back because

  • many SMBs can’t afford, and don’t need, notebooks, and thus do not purchase the more expensive computing option, and
  • the benefits of BYOD are much more significant in SMBs because of capital concerns, so there’s no need to invest in smartphones and tablets either.

 

Either way, whether mobile computing makes up more or less of the SMB landscape, it’s all old news for mobile Windows users, as the Windows landscape has been struggling with the inherent lack of security in public WiFi space for years now, pretty much since Starbucks introduced WiFi over ten years ago. It’s still a jungle out there, and it’s not going to get any easier for Mac users; it has certainly not gotten any easier for Windows users.

A recent survey of 150 IT Managers in large organizations conducted by Harris Interactive, and annotated by Mike Vizard in an IT Business Edge article, calls attention to some of the issues facing organizations that have “too many” applications.

 

Over 82% of respondents noted that they encountered monetary losses due to “unused, unresponsive, or crashed applications”, but only 32% of respondents said they plan to introduce an application monitoring tool in the next 18 months. I think those two numbers ought to be reversed: fewer than 20% should be encountering losses, and 80% should be implementing application monitoring.

 

What To Do

As noted in the annotations of the article, you can’t manage what you can’t see, and somebody needs to make some hard decisions about what gets installed and maintained in the enterprise. Also consider this practical reality: if the applications are not being used, or are not reliable in the form in which they’re installed, then it’s quite likely that nobody is producing useful work with them anyway. Get rid of them!

 

Here’s a list of things to get the ball rolling:

  1. Perform a complete inventory of the software that is currently installed (see the sketch after this list).
  2. Identify which applications are not being used and uninstall them – permanently.
  3. Identify which applications can be replaced.
  4. Identify which applications need to be updated and get them updated. If you cannot update an application to address security issues or reliability issues, it’s time to get rid of it – permanently.
  5. Where multiple versions of the same application are in use, standardize on a specific version – it doesn’t have to be the newest – but pick one!
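
As a starting point for step 1, here is a minimal inventory sketch in Python, using the standard winreg module on a Windows machine. The registry paths below are the standard 64-bit and 32-bit uninstall hives; software installed without a registry entry won't appear, so treat the output as a baseline to reconcile against a dedicated inventory tool, not a complete picture.

    import winreg

    # Standard uninstall hives: native and 32-bit-on-64-bit (WOW6432Node).
    UNINSTALL_PATHS = [
        r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
        r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
    ]

    def installed_software():
        """Yield (name, version, publisher) for every uninstall entry that has a display name."""
        for path in UNINSTALL_PATHS:
            try:
                root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
            except OSError:
                continue  # hive not present (e.g. 32-bit Windows)
            for i in range(winreg.QueryInfoKey(root)[0]):
                try:
                    sub = winreg.OpenKey(root, winreg.EnumKey(root, i))
                    name = winreg.QueryValueEx(sub, "DisplayName")[0]
                except OSError:
                    continue  # entry without a display name; skip it
                version = publisher = ""
                try:
                    version = winreg.QueryValueEx(sub, "DisplayVersion")[0]
                except OSError:
                    pass
                try:
                    publisher = winreg.QueryValueEx(sub, "Publisher")[0]
                except OSError:
                    pass
                yield name, version, publisher

    if __name__ == "__main__":
        for name, version, publisher in sorted(installed_software()):
            print(f"{name}\t{version}\t{publisher}")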

 

Here’s another collection of things to consider:

  1. Identify which applications are dependent on other outdated or insecure software. For example, dependencies on old versions of Java or MDAC are not good. If there aren’t updated versions of the software that can be used securely and reliably, find a replacement as soon as possible.
  2. Identify which applications are dependent upon elevated privileges (i.e. they require Administrator rights); a rough detection heuristic follows this list.
  3. Identify which applications are dependent upon specific (read: ancient) operating systems. If the vendor isn’t going to update the application to run on Windows 7, get rid of the application and the vendor!
  4. Identify which applications can be shared or virtualized. Maintaining one copy of an application is much more efficient than maintaining several; in addition, reducing the number of instances in use may save licensing and/or maintenance costs.
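
For item 2 above, one crude heuristic is to look for UAC execution-level markers in each executable's embedded application manifest. The sketch below (plain Python, no third-party libraries) simply does a byte search for those markers; it will miss legacy applications that need administrator rights without declaring them, and it can occasionally false-positive, so treat it as a starting point for manual review rather than a definitive audit.

    import os
    import sys

    # Execution-level strings that appear in embedded application manifests.
    MARKERS = (b"requireAdministrator", b"highestAvailable")

    def flag_elevated_apps(root_dir):
        """Return .exe files under root_dir whose bytes contain a UAC elevation marker."""
        flagged = []
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for fname in filenames:
                if not fname.lower().endswith(".exe"):
                    continue
                full = os.path.join(dirpath, fname)
                try:
                    with open(full, "rb") as fh:
                        data = fh.read()  # fine for a sketch; large trees may warrant chunked reads
                except OSError:
                    continue
                if any(marker in data for marker in MARKERS):
                    flagged.append(full)
        return flagged

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else r"C:\Program Files"
        for exe in flag_elevated_apps(target):
            print(exe)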

 

In many cases, these situations are going to require a bit more effort to determine whether the vendor has a solution or has committed to delivering a solution, or whether the application will need to be replaced. But now is the time to start developing a response plan to issues that can be identified today.

 

Application Monitoring

The applications that are selected to be kept need to be monitored. Monitoring covers a number of aspects, including:

  • actual usage
  • required updates
  • reliability
  • performance
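
As a small illustration of the “actual usage” and “performance” aspects, here is a polling sketch built on the third-party psutil library. The process names in WATCHED are hypothetical placeholders; a production monitoring tool would also track update status and reliability (crash and hang events), which this sketch does not attempt.

    import psutil

    # Applications selected to be kept and monitored (hypothetical process names).
    WATCHED = {"outlook.exe", "acrobat.exe", "javaw.exe"}

    def snapshot():
        """One polling pass: is each watched process running, and what is it consuming?"""
        seen = {}
        for proc in psutil.process_iter(["name", "cpu_percent", "memory_info"]):
            name = (proc.info["name"] or "").lower()
            if name not in WATCHED:
                continue
            mem = proc.info["memory_info"]
            rss_mb = mem.rss // (1024 * 1024) if mem else 0
            # Note: the first cpu_percent sample per process is a 0.0 baseline.
            seen[name] = (proc.info["cpu_percent"], rss_mb)
        for name in sorted(WATCHED):
            if name in seen:
                cpu, rss_mb = seen[name]
                print(f"{name}: running, cpu={cpu}%, rss={rss_mb} MB")
            else:
                print(f"{name}: NOT running")

    if __name__ == "__main__":
        snapshot()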

 

How to Get There From Here

Getting from where those 150 surveyed organizations are today to a lean, streamlined, proactively monitored, healthy application environment is not going to happen overnight, so don’t try to do it all at once. The solutions can be phased in over time, as appropriate for the needs of the organization, but monitoring applications will require a number of tools.

Ever since the introduction of the first personal computers into the workplace, there has been an ongoing gap between the end user's technology skill set and the need for dedicated professionals to assist with the use of that technology. Whether it was how to define tab stops in a Word v2 document or how to connect a Windows Phone to Exchange, there has always been, and will always be, a need for knowledgeable people to solve the problems end users present.

 

In a recent TechRepublic CIO Insights article, Nick Heath challenges those who think the I.T. Help Desk is a thing of the past. I'm in absolute agreement with Nick.

 

Many of the requests made by end users are quite mundane, and sometimes inane, such as the classic plant-on-the-monitor problem. Some of them can perhaps be made available via self-help (e.g. password resets), but that raises the question of whether they should be available via self-help. Convenience is but one aspect of the question; security must also be kept in mind. However, none of those scenarios is the real reason for having a Help Desk. The Help Desk provides dedicated, trained resources for solving the occasional problem that actually keeps people, sometimes many people, from doing the real work that generates revenue for the business.

 

Anybody who has studied business management has learned about the idea of "core competency". Businesses do what they are good at, and they find others to help with what they are not. But this principle doesn't just apply to the external face of the business; it also applies internally. The "core competency" of the I.T. Help Desk is to quickly and efficiently respond to the needs of an end user with a technology-related issue, so that those end users can continue doing what they do best, rather than losing a half-day of work trying to get email working on their new smartphone.

 

The I.T. Help Desk does need to evolve, however, and ensure its skill sets and operational practices embrace the extended range of new devices. Implementing help desk software that supports access to work requests from mobile interfaces, and being able to administer the network via mobile devices when possible, can go a long way toward making BYOD a net benefit for the organization.

When I started my IT career as the lone technical support person in a governmental agency of 200+ people, my toolset consisted of the telephone, my feet, and my eyes. The phone would ring, my feet would walk me to the caller's desk, and I would observe over their shoulder the particular issue they were having. Not optimal, but practical given that only a few dozen of those 200+ people actually had PCs (the rest had Unix terminals), and the building was only a few thousand square feet.

 

Today the challenges with user support are radically different.

Users don't just work from the office on a desktop computer. Not only are the workers remote and/or mobile, but so are the support technicians.  Now they can be anywhere - at home, in a hotel, in the local coffee shop, or even at the beach.  (We'll leave the implications of sand, salt water, and notebook computers for another day.)

 

The remote workforce is a rapidly growing share of the total workforce.

A 2011 report from Forrester found that almost two-thirds of information workers in North America and Europe work remotely. Working remotely is advantageous to organizations for many reasons, including positive impacts on the environment and significant reductions in real estate and office equipment costs. In short, the remote/mobile workforce is here to stay; supporting these remote/mobile workers to the same level they’ve been accustomed to in the office is the new challenge.

 

Walking down the hallway and looking over a user’s shoulder isn’t practical if they’re across the country in a hotel, or at home at sunrise working on a project due at 8am. In addition, the ratio of end users to support technicians has increased significantly, from the few dozen that I supported way back when to several dozen per technician today. The only effective way to provide the necessary level of support at these ratios is through remote connectivity. Remote desktop sharing software that connects the support technician's environment to the end user's environment is required to make on-demand connections and resolve issues as they are encountered.

 

A necessary solution

Desktop sharing software is a critical component of any strategy for managing remote and mobile users. Some key features to consider when evaluating a desktop sharing solution are:

  • initiating connections to attended or unattended machines
  • initiating connections to attended machines only with active consent of logged-on user
  • initiating connections to powered-down machines
  • sharing of keyboard/mouse so both end-user and support technician can see/control activity
  • support multi-platform connectivity (Windows, Mac, Linux)
  • support multi-factor authentication
  • screen capture utilities
  • real-time chat tools

 

Sometimes, of course, it’s only necessary to view or change configurations, manage services, get status information, or manipulate files; this is where remote administration tools provide a more effective solution. They don’t interrupt the user currently working on the system, and they eliminate the overhead of replicating the desktop environment across the network, particularly if the connection is not on a high-speed LAN.

In a recent blog post on IT Business Edge, Mike Vizard makes some interesting points regarding security and its impact on the adoption of cloud services by IT organizations. Of particular note is the idea that cloud services are inherently more secure because the providers have more money to throw at better technologies and at people with the skills to manage them.

 

Yes, cloud service providers should be able to afford to invest in state-of-the-art security technologies (but that doesn’t mean they actually have), and yes, the staff of cloud service providers should have the skillset and time to properly use such security technologies (but that doesn’t mean that they actually do).


Why are on-premise environments less secure?

The Alert Logic report that was cited notes that there is no difference in the number of attacks between cloud providers and on-premise environments, but concludes that the difference in security vulnerabilities between cloud and on-premise environments was that “on-premise systems had to contend a lot more with malware simply because the number of systems they were supporting created a much broader attack surface”. I submit it has less to do with the number of systems, since most cloud service providers probably have more systems than the typical on-premise customer; the real issue is that the on-premise environment has humans using its computers, and that fact is never going to change no matter where the data center resides.


Will on-premise environments always be less secure?

As more and more organizations shift their systems to cloud vendors, the result is a smaller number of target groups with more targets and richer rewards. I believe security will therefore continue to be a major issue for cloud service providers, and unless those providers have established a high-level commitment to security technologies and the skilled staff to manage them, the risk of cloud-based services for many organizations may actually be higher than that of an on-premises solution, despite the risk of social engineering attacks on human users.

 

Just like the Department of Defense and certain very public-facing companies, cloud service providers will attract more activity because they will be rich targets to compromise just for the sake of having done it. In addition, consider the impact of having a consolidated collection of key corporate data from multiple corporations behind a single point of attack. A hacker no longer needs to launch multiple attacks at multiple disparate corporate entities; that data is conveniently consolidated in a small number of very public attack vectors.


Why is any environment less secure?

I do, however, fully agree with Urvish Vashi’s point: “…the vast majority of security incidents are a result of systems being misconfigured in a way that makes them vulnerable to an attack.” Whether cloud service providers are any more immune to these risks than an internal IT organization is the key question, and it should be part of the decision-making process as to whether any given organization chooses to host in the cloud.


What should every organization do to be more secure?

As I noted in a recent PatchZone blog post regarding responsibilities for cloud-based patch management operations, customers are ultimately responsible for the security of their systems and data, regardless of where they are or who actually does the work. Customers need to perform due diligence in ensuring that their cloud service providers DO have these security technologies, DO have the staff capable of properly utilizing them, and CAN and WILL provide better security than the customer can provide, rather than blindly assuming that a cloud service provider is a better alternative simply because it should have these advantages. If the customer cannot get those guarantees from the cloud service provider in the form of a Service Level Agreement, then the cloud service provider is not a “more secure” alternative.

 

And yes, either way, a solid patch management policy is one component of an organization's overall security strategy, regardless of whether a cloud employee or a local IT administrator is deploying the patches. Aside from the conversation about where data is stored, human users will always be attack vectors for social engineering, so organizations need an appropriate patch management strategy, and a user education strategy, for mitigating the risks caused by social engineering – risks that aren’t going to move to the cloud at all.

In WSUS (Windows Server Update Services) and Windows Update, Microsoft has one of the most reliable patch management engines in existence. According to Microsoft, Windows Update currently updates more than 800 million PCs worldwide. But over the past two years, Microsoft patches themselves have been causing more and more problems, such as the trouble users had installing Windows 7 Service Pack 1 last April and the recent Skype (now owned by Microsoft) patch - a package that was supposed to be update-only, but actually initiated full deployments to all targeted systems.

 

In other cases, the patch packages fail to find machines that need updates, or attempt to install patches where they’re not needed. Over the last year, I’ve heard of two or three cases where a Microsoft patch reported a failed installation when it had actually installed. Sometimes, a report of a failed update is actually triggered by an older update, not the most recent one.

 

This leads to a lot of confusion and wasted effort for the patch administrator. Usually, something happens unexpectedly and you don’t know why – for example, a report of an unsuccessful patch when you can’t identify where it failed. The other side of that, which is even more dangerous, is when a patch needs to be applied, but the detection logic can’t identify all the systems that need it, so it never gets deployed. What you don’t know can hurt you.

 

In only a very few cases is the problem a question of interaction between the patch and WSUS. Most often, it’s flagrant errors in the way the patch logic itself has been written. After close to 14 years of solid performance, one has to ask: why has the quality of Microsoft patches fallen so quickly?

 

Can Product Groups Do Patches, Too?

 

I don’t have a window into Microsoft, but my sense is that there has been a reorganization in the WSUS and Windows Update hierarchy recently, and perhaps resources once devoted to WSUS, a mature offering, have been diverted to other projects such as System Center Configuration Manager. It’s almost as if WSUS has gone into maintenance mode, with Microsoft treating it as a product that doesn’t need continued enhancement.

 

Another factor, I suspect, is that the updates for each product are developed by the individual product groups, each of which is supposed to use Microsoft’s standard methodology for developing update packages. But expecting each product group to have the same skill sets, level of experience, and, frankly, level of commitment to patching is asking for trouble. For a process as complex as patching, and a product line as complex as Microsoft’s, a more centralized management structure is the only way to deliver the quality customers expect.

 

 

What can Microsoft Do Better?

 

What steps would I like to see Microsoft take? First, publish (to both Microsoft product groups and externally) more details about how the WSUS infrastructure works, and the best practices for working with it. This should include information about how to use Microsoft’s tools for building packages and injecting them into the WSUS database. Internally, this documentation will help keep the product groups on track with best practices for creating patches. For those of us on the outside, this detailed documentation will help us understand what has gone wrong when a Microsoft patch blows up.

 

Second, the folks in Redmond need to do whatever it takes – whether that means organizational or personnel moves, or both – to follow those best practices themselves. Millions of customers worldwide rely on Microsoft to lead the way in patch management, at least with their own systems. If we can’t trust Microsoft to patch their own products correctly, what else can we trust them with?

 

And finally, it would be helpful if Microsoft would expose in the WSUS console the actual logic rules defined in a Microsoft patch package. The SolarWinds Patch Manager console exposes the logic rules that we use for third-party packages – and that readily accessible information has helped immensely with customer troubleshooting of unexpected behaviors.

 

Being able to see, for example, that IE9 on Vista Service Pack 2 had an unexpected dependency on a certain video driver would have helped hundreds of patch administrators last year when nobody could figure out why IE9 wouldn’t install on Vista SP2 systems.

To recap the current state of things for those who might still be getting up to speed: during the last week of May 2012, in two separate investigations, F-Secure and the Iranian CERTCC discovered a new, highly sophisticated worm, which we now know colloquially as Flame. What Flame does and generally how it does it is an entirely different discussion, and we'll not dig into that here, except to discuss one particular issue which was discovered and reported by F-Secure on Monday, June 4, and to analyze the lessons we can all apply to our own security mechanisms. (A detailed explanation of Flame is available from a number of sources, including Securelist and TWIT's Security Now May 30th podcast.)

 

In short, Flame exploits a defect in the Microsoft Terminal Server Licensing Services application that generates licensing certificates for Terminal Services clients. The certificate template, it appears, was configured to allow a code-signing certificate to be created, and these certificates all derive from the Microsoft Root Authority. As F-Secure correctly points out, a code-signing certificate from a Microsoft CA provides the keys to the kingdom for compromising the Windows Update Agent.

 

The worm's replication then leverages this defect along with a couple of other weaknesses in Windows components. First, a man-in-the-middle attack using a spoofed Web Proxy Auto-Discovery (WPAD) Protocol service to intercept client requests to the Windows Update service was used to insert a downloader file onto an unsuspecting client system; second, insufficient verification of the authenticity of the downloaded file on the part of the Windows Update Agent allowed that client to be infected. In this manner, a single Flame-infected system can replicate throughout an entire organization in about 22 hours (since every online Windows system queries Windows Update at least once every 22 hours).

 

Now, to date, the industry has published information on how the replication occurs, but one key question still remains unanswered: how does the original client in an organization get infected? There must be other, non-WU-oriented replication mechanisms, so we'll be looking for more information on what else this malware exploits as the investigations proceed. Passing by that missing information for the moment, let's look at a number of mitigation steps an organization can take, in general, but also with respect to this specific threat.

 

First, a couple of lessons we learn from Microsoft:

  • Make sure your certificates are only created for exactly the purpose they are intended to serve. For example, a licensing certificate should never have had code-signing capabilities! Truly, the root cause of much of this fiasco is the improperly configured certificate template in Microsoft's Terminal Services Licensing Server. (It should also be noted that this flaw has been repaired.) A small verification sketch follows this list.
  • Second, isolate your certificate chains of authority so they do not have cross-over privileges between different security domains. Two flaws contributed here. First, Microsoft did not isolate the certificate chain between certificates that must be created by a trusted Microsoft entity (i.e. a Microsoft employee) and those that can be created by an end user (i.e. a user of Terminal Services who needs to license a TS client). Second, the Windows Update Agent was configured to trust ALL Microsoft certificates. This is also being remediated, with a forthcoming update to the Windows Update Agent that will trust only a certificate chain expressly created for the Windows Update infrastructure.
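
To illustrate the first lesson, here is a small sketch, using the third-party cryptography library (version 3.1 or later), that reports the Extended Key Usage OIDs a PEM-encoded certificate carries and flags code signing. The file paths are whatever certificates you export for review; the point is simply to audit that a template grants only the purposes it is supposed to.

    import sys
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def check_eku(pem_path):
        """Report the Extended Key Usage purposes a certificate carries; warn on code signing."""
        with open(pem_path, "rb") as fh:
            cert = x509.load_pem_x509_certificate(fh.read())
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            print(f"{pem_path}: no Extended Key Usage extension (unconstrained use)")
            return
        # Print the raw OIDs; look up friendly names separately if needed.
        print(f"{pem_path}: " + ", ".join(oid.dotted_string for oid in eku))
        if ExtendedKeyUsageOID.CODE_SIGNING in eku:
            print("  WARNING: certificate is valid for code signing")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            check_eku(path)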

 

Another contributing factor here is that WPAD is enabled by default on Windows systems. Use Group Policy to explicitly configure your organization's proxy settings (if you require a proxy server for Internet access), or DISABLE this functionality completely if a proxy server is not required. Grant Bugher, CISSP, CSSLP, wrote a great blog post over four years ago about locking down WPAD, specifically calling out this very type of man-in-the-middle attack as a risk where WPAD was improperly enabled.
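
A quick way to spot-check part of the WPAD exposure is to see whether a WPAD hostname resolves in your DNS at all. The sketch below uses a placeholder domain suffix (corp.example.com); note that WPAD answers can also arrive via DHCP option 252 and NetBIOS/LLMNR lookups, so an empty DNS answer alone does not make you safe. The Group Policy change described above is the real fix.

    import socket

    # Hostnames a WPAD-enabled client would try when auto-detecting a proxy.
    # Replace the domain suffix with your own; "corp.example.com" is a placeholder.
    CANDIDATES = ["wpad", "wpad.corp.example.com"]

    for host in CANDIDATES:
        try:
            addr = socket.gethostbyname(host)
            print(f"{host} resolves to {addr} -- verify this is a host you control")
        except socket.gaierror:
            print(f"{host} does not resolve (no WPAD answer from DNS)")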

 

Finally, using Windows Update as an organizational patching mechanism is a contributing factor. If clients were using a corporate-managed patching infrastructure, such as WSUS, they would not be going through a proxy server to get updates; they would be connecting locally to a WSUS server. Since the Flame malware expressly traps connection requests going to Windows Update, I believe this risk would be completely mitigated -- although I have not personally inspected the code.
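
A client's update source can be verified directly from the policy registry keys. This sketch (standard winreg module, run on the client) reads the documented WUServer and UseWUServer values; a machine that returns None here is talking to Windows Update directly and is exposed to the interception path described above.

    import winreg

    WU_POLICY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

    def wsus_target():
        """Return the WSUS server this client is pointed at, or None if it uses Windows Update."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WU_POLICY) as key:
                server = winreg.QueryValueEx(key, "WUServer")[0]
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WU_POLICY + r"\AU") as key:
                use_wsus = winreg.QueryValueEx(key, "UseWUServer")[0]
        except OSError:
            return None  # policy keys absent: no WSUS assignment
        return server if use_wsus == 1 else None

    if __name__ == "__main__":
        target = wsus_target()
        if target:
            print(f"Client is managed by WSUS at {target}")
        else:
            print("Client is using Windows Update directly")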

 

So, to summarize:

  1. Use and configure certificates only for the specific purpose for which they are needed.
  2. Maintain separate certificate authority chains within separate security domains.
  3. Disable automatic configuration services -- particularly those that configure functionality you're not using.
  4. Implement a centralized patch management system -- and use it to immediately apply KB2718704, which revokes the compromised Microsoft certificate chain. (A quick check sketch follows.)
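
For a quick spot check of item 4 on a single machine, the installed-hotfix list can be queried through WMI. The sketch below shells out to wmic (available on the Windows versions contemporary with this post); an organization-wide view should of course come from the patch management system's own reporting, not from per-machine checks.

    import subprocess

    KB = "KB2718704"

    # Query installed hotfixes via WMI's Win32_QuickFixEngineering class.
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout

    if KB in output:
        print(f"{KB} is installed -- the compromised certificate chain has been revoked")
    else:
        print(f"{KB} is NOT installed -- deploy it through your patch management system")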

In Matthew Jones' blog post last week, he presents a good overview of the challenges that today's organizations deal with in patching third-party applications when Microsoft Windows Server Update Services (WSUS) is the chosen patching mechanism. The most significant challenge is the simple fact that these applications aren't getting patched, or if they are, it is likely in a haphazard manner, at the whim of an end user sufficiently motivated by desktop popups from those products' auto-updaters.

 

While the article does a great job of calling attention to the problem, and even offers some suggestions for improving the environment, it doesn’t really provide a functional solution to the reader. For example, regarding educating users, the whole idea of a centralized patch management product is that users don’t have to be ‘educated’ -- an effective patch management system is completely transparent to the end user. Avoiding the expectation that users will install updates is exactly the reason the organization has implemented WSUS in the first place.

 

In the last paragraph, Jones offers the recommendation to "...implement a patch management solution that will deploy third-party patches", and provides two options, only one of which can actually be used in a WSUS environment. Other options do exist, but seem to have been overlooked in the article.

 

For the reader who is managing a WSUS environment, one product certainly worthy of mention is SolarWinds Patch Manager. Patch Manager sits on top of the WSUS environment and provides automatic synchronization with a catalog of ready-to-use third-party updates for all of the prevalent desktop applications: Adobe Reader, Adobe Flash, Firefox, Chrome, Java Runtime, and iTunes, to name a few. In addition, Patch Manager provides an enhanced toolset for monitoring and managing the entire WSUS environment, and a toolset to directly deploy on-demand, or explicitly scheduled, third-party and Microsoft updates. Patch Manager also provides tools for asset inventory and for reporting on the actual state of the products and updates in the organization, and it does all of this at a price point less than a third that of the other option noted.

 

Patching third-party content should be no different at all from patching Microsoft content. The only reason it would be different is that the methodologies are different (e.g. using GPO/Software Distribution, or trusting users to click on the auto-updater). With WSUS, the policies should be identical. What’s more, with WSUS you don’t have to “scan” systems to get information – it happens automatically, daily. Publish the third-party update to the WSUS server after it automatically arrives, and in the morning run a report (or schedule it and deliver it via email to your Inbox) and review the status of your third-party updates side by side with your Microsoft updates.
