
Geek Speak

29 Posts authored by: Lawrence Garvin

As Joel Dolisy described in A Lap around Appstack, the first installment of the AppStack series, there are many components in the application stack including networking, storage, and computing resources. The computing resources include hypervisors, virtual machines, and physical machines that provide applications, infrastructure support services, and in some cases storage. Collectively, we refer to these resources as systems.


Systems really are the root of the application space. In the earliest days of computing, an application ran on a single machine, the user went to that machine to use the application, and the machine had no connectivity to anything (save perhaps a printer). Today, systems offer a myriad of contributions to the application space, and each of those contributions has its own monitoring needs.


Server Monitoring

Historically, with the one-service/one-machine approach, a typical server ran at only ten to twenty percent of capacity. As long as the LAN connection between the desktop and the server was working, it was highly unlikely that the server was ever going to be part of a performance problem. Today, it is critical that servers behave well and share resource responsibility with others. (Other servers, that is!) As a result, server monitoring is now a critical component of an application-centric monitoring solution. 

User and Device Monitoring

One component that is often overlooked in the monitoring process is the set of systems used directly by the end user. The typical user may have two or three different devices accessing the network simultaneously, and sometimes multiple devices accessing the same application simultaneously. Tracking which devices are being used, who is using them, how they are impacting other applications, and ensuring that end users get the optimal application experience on whatever device they’re using is also part of this effort.

Consolidated Monitoring and Coexistence 

The benefit of monitoring the entire application stack as a consolidated effort is a comprehensive awareness of how end users experience their interactions with an application, and an understanding of how the various shared components of an application coexist with one another.

 

Being aware of where resources are shared (for example, LUNs in a storage array sharing disk IOPS, or virtual machines on a hypervisor sharing CPU cycles) allows performance issues affecting one or more applications to be diagnosed and remediated more rapidly. It’s not unusual at all for one application to negatively impact another without displaying any performance degradation itself.

Increased Complexity

The last thing to be aware of is that the complexity of the systems monitoring space continues to grow. Virtualization and shared storage were just the first steps. In the next blog in this series, Kong Yang will discuss how that growth impacts the AppStack.

 

Note: This post originally appeared in InformationWeek: Network Computing.

Why do we hear of new security breaches so frequently? Make sure your organization follows these best practices and considers these important questions to protect itself.


Three big things have been happening with great frequency of late: earthquakes, volcanoes, and data breaches, most of the latter involving point-of-sale (PoS) systems and credit card information. While I'm certainly curious about the increase in earthquakes and volcanic activity, I simply do not understand the plethora of PoS breaches.

The nature and extent of the breach at Target a year ago should have been a wake-up call to all retailers and online stores that accept credit card payments. I get the feeling that it was not, but I'm not here to point fingers in hindsight. I do, however, want to call your attention to what you are, or are not, learning from these incidents, and how those lessons are being applied and leveraged within your own organization.

Lessons from Target, et al.
Let's revisit the Target breach. In short, it happened because vendor credentials were compromised and subsequently used to inject malware onto Target's systems. At the time, a number of security professionals also suggested that the retailer was likely not the only target (no pun intended).

As a result, three actions should have occurred immediately in every organization around the globe:

  • An audit of every accounts repository throughout every organization to disable/eliminate unused accounts, ensure active accounts were properly secured, and determine if any existing accounts showed any evidence of compromise
  • A full malware scan on every system, including explicit checks for the specific malware identified on the Target systems
  • A reevaluation of network connectivity, with these questions in mind:
    • How could a service vendor's credentials be used to access our PoS network?
    • Which of our networks are connected to which networks?
    • How are they connected?
    • Do firewalls exist where they should?

And yet, in the weeks after the Target announcement, a litany of big-name retailers, including Neiman Marcus, Michaels, Sally Beauty Supply, P.F. Chang's, Goodwill Industries, and Home Depot, all reported breaches that occurred around the same time as, or after, the Target breach was disclosed.

If you haven't done the three things listed above in your organization, go do them right now!
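To make the first action item concrete, here is a minimal sketch of one piece of such an audit: finding enabled Active Directory accounts with no recent logon, using the third-party ldap3 Python package. The domain controller, base DN, and credentials are hypothetical placeholders, and a full audit would cover much more (service accounts, local accounts, evidence of compromise).

```python
from datetime import datetime, timedelta, timezone

from ldap3 import Server, Connection, SUBTREE

# lastLogonTimestamp is a FILETIME: 100-ns intervals since 1601-01-01.
# Note it replicates lazily (it can lag actual logons by up to ~14 days).
cutoff = datetime.now(timezone.utc) - timedelta(days=90)
cutoff_filetime = int(
    (cutoff - datetime(1601, 1, 1, tzinfo=timezone.utc)).total_seconds() * 10_000_000
)

# Enabled user accounts (ACCOUNTDISABLE bit clear) with no logon since the cutoff.
ldap_filter = (
    "(&(objectCategory=person)(objectClass=user)"
    "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"
    f"(lastLogonTimestamp<={cutoff_filetime}))"
)

conn = Connection(
    Server("dc01.example.com"),      # hypothetical domain controller
    user="EXAMPLE\\auditor",         # hypothetical audit account
    password="********",
    auto_bind=True,
)
conn.search(
    "dc=example,dc=com", ldap_filter,
    search_scope=SUBTREE, attributes=["sAMAccountName"],
)

for entry in conn.entries:
    print(entry.sAMAccountName)      # candidates to disable or investigate
```

Accounts this turns up are candidates for disabling, not automatic deletion; stale-but-legitimate accounts (staff on leave, scheduled tasks) still need a human decision.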

Patching is a no-brainer
Then there was Heartbleed, perhaps the most publicized vulnerability in the history of network computing. Who hasn't heard about Heartbleed? It was a threat with an immediately available and simple-to-deploy patch. Most organizations deployed the patch immediately (or at least took their OpenSSL devices off the Internet).

And yet, despite this, Community Health Systems managed to give up 4.5 million customer healthcare records to Chinese hackers in an attack that started a week after the Heartbleed announcement. Now, while we might forgive the April attack, this theft actually continued through June! To date, this is the only known major exploit of that vulnerability. (And yet, there are still a quarter-million unpatched devices on the Internet!)

What is your plan for ensuring highly critical security updates are deployed to your devices as soon as possible -- and if not, protecting those devices from known threats?
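For Heartbleed specifically, the vulnerable OpenSSL releases were 1.0.1 through 1.0.1f (fixed in 1.0.1g). Here is a quick local spot-check; note that some Linux distributions backport fixes without changing the version string, so treat a "vulnerable" result as a prompt to investigate, not proof.

```python
import re
import subprocess

# Ask the locally installed openssl binary for its version string.
# Typical output: "OpenSSL 1.0.1f 6 Jan 2014"
out = subprocess.run(
    ["openssl", "version"], capture_output=True, text=True, check=True
).stdout

match = re.search(r"OpenSSL (\d+\.\d+\.\d+)([a-z]?)", out)
if match:
    version, letter = match.groups()
    # 1.0.1 with no patch letter, and letters a through f, shipped the bug.
    vulnerable = version == "1.0.1" and letter < "g"
    print(out.strip(), "-- possibly VULNERABLE" if vulnerable else "-- OK")
```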

When is compliance not compliant?
The final aspect of all of this is the alleged value of our compliance regulations, which raises some interesting questions. For example, what good comes from the PCI-DSS regulations in the face of so many breaches? Is this a failure of the compliance standards to actually define things that should be compliant? Is this a case of businesses fudging the compliance audits? Finally, where's the meat in PCI-DSS for organizations failing to be compliant?

And how responsible is management? Perhaps the most infuriating thing about the Home Depot incident is the recent report that management had been warned for years that there were known vulnerabilities, and yet did nothing.

Is your management resistant to acting responsibly about data security? Do you have a plan for changing this resistance?

The bottom line is this: Don't be the next story in this long train of disasters. Go check your systems, networks, accounts, and employees. Most of all, learn from the tribulations of others.

We’ve just returned from this year’s VMworld conference, and it was a busy one! With a dozen staff members and four demo stations, we were well prepared to talk to customers and not-yet-customers (yes, there are still a few out there) non-stop, and that’s pretty much what happened.

The Expo Hall

Normally at a trade show there’s an ebb and flow of traffic during the day as most participants are in sessions, and then the breaks between sessions are like a breakfast rush at your local coffee shop. This year was noticeably different, however, as we experienced a non-stop line of visitors to the booth throughout the entire show. This is a Good Thing™. :-)

 

I’m not sure if there were just that many more attendees and the sessions were full, or attendees just weren’t going to sessions, but we certainly appreciated the interaction with everybody. The official report is that there were 22,000+ attendees, which I’m told is actually a bit lower than 2013.

You’d think that at VMworld the primary interest would be virtualization software, and yet we talked about every product in the catalog, some of them more than Virtualization Manager!


 

Experts & Espresso

We did something different this year. We hosted a series of breakfast sessions with free coffee. The sessions were livestreamed, live-tweeted, and live-attended too!

 


You can watch the video recordings of the presentations at http://go.solarwinds.com/vmworld14.


Tech Field Day Extra!

Joel Dolisy and Suaad Sait, EVP Products and Markets, SolarWinds, also presented at the Tech Field Day Extra! held in conjunction with VMworld. They talked about our perspective on the performance of hybrid IT and the importance of integration. You can view that presentation online as well.


Keynotes

As expected, VMware announced some new products, although the eagerly anticipated vSphere 6.0 was announced only as a forthcoming beta. The big announcement, I guess you could call it, was VMware EVO, a family of hyper-converged infrastructure offerings.

 

  • EVO:RAIL – a base level of services designed to ramp up a hundred VMs in 15 minutes. Real products are being delivered by several vendors.
  • EVO:RACK – builds on EVO:RAIL to produce an entire cloud environment in a couple of hours. This is still a technical preview, but look for those same vendors to expand into this realm as well.


Also announced was an OpenStack distribution that will include vSphere, VSAN, and NSX… but I’m not sure how “OpenStack” you can call that, since it’s mostly based on proprietary products.

VMware is also making a big play in the end-user space with Desktop-as-a-Service (DaaS) -- my jury is still out as to whether I want my *desktop* to be dependent on my Internet connection! -- as well as enterprise mobility management and content collaboration.

You can view all of the VMWorld sessions online.


Did you attend VMworld? What were your thoughts and experiences?


 

We know mobile devices are must-have tools for your end users, and they’re growing more accustomed to having options when it comes to picking their corporate-connected mobile devices. Two end-user mobile enablement strategies seem to be leading the pack: BYOD (bring your own device) and CYOD (choose your own device). BYOD, of course, involves end users providing their own devices from a virtually unlimited set of possibilities and connecting them to the corporate network and resources. CYOD, on the other hand, involves end users selecting their soon-to-be corporate-connected mobile devices from a defined list of devices that IT supports and can exercise more control over. The idea is that the burden on you is lessened because you don’t have to be prepared to manage every mobile device under the sun.

 

So we’re curious, has your organization settled on one of these strategies over the other? If so, we’d love to hear about your first-hand experience implementing either of these policies—or a hybrid approach—into your organization, and how your company arrived at the decision to go the route they did. If you have implemented a CYOD policy, what benefits have you seen? Was it successful or did your employees revolt and turn to smuggling in their own devices anyway? I'm looking forward to hearing your feedback.

 

And if you haven’t already taken our BYOD vs. CYOD poll, you can find it here.

One of the questions often encountered by new users of Patch Manager concerns the purpose and uses of the Managed Computers node in the console.


The Managed Computers node is the collection of all Patch Manager servers, registered WSUS servers, and any machines that have been targeted for an inventory task, regardless of whether the machine was successfully inventoried. As the inventory task obtains the list of machines from the target container, a record is created in the Managed Computers list for that machine. When the inventory is successfully completed, a number of attributes are displayed for the machine in the Managed Computers node.

 

The Managed Computers node is especially useful for accessing basic diagnostic information about the status of computers and the inventory process. In the Computer Details tab for each machine, five state results are provided that describe the results of the inventory connection attempt.

 

When an inventory task is initiated, the Patch Manager server queries the container object that the inventory task has been targeted to. This may be a domain, subdomain, organizational unit, workgroup, WSUS Target Group, or Patch Manager Computer Group. Regardless of the type of container, the Patch Manager server obtains a list of machine names from the identified container.

 

Failed Inventory Connections

An entry in the Managed Computers node displaying an icon with a red circle indicates a machine that failed the most recent inventory connection attempt.

 

DNS resolution attempt reports the status of the attempt to resolve the computer name obtained from the container. If the name was resolved, the IP Address is captured and stored in the computer record and the status is reported as “Success”. If the name was not resolvable, the status is reported as “Failed”.

 

ARP resolution attempt reports the status of the attempt to resolve the IP Address obtained from the DNS resolution attempt. If the ARP resolution attempt is successful, the MAC address is captured and stored in the computer record, and the status is reported as “Success”. If the ARP resolution attempt was not successful, the status is reported as “Failed”.

 

ARP is a broadcast-based network technology and generally does not cross broadcast boundaries, such as routers, gateways, and VLAN borders. As such, when performing ARP resolution for IP addresses on the other side of a gateway, it’s important to note that the gateway will respond with its own MAC address, because the purpose of ARP is to identify where network packets should be addressed to get them on the correct pathway to their destination. Patch Manager knows whether a returned MAC address belongs to a boundary device or to the actual targeted device. When a boundary device is identified as the owner of a resolved MAC address, Patch Manager will not record that MAC address and will report the ARP resolution as “Failed”. Thus, it is normal for machines on remote networks to have a status of “Failed” for the ARP resolution attempt, except where an Automation Role server is physically present on that remote network. (See Patch Manager Architecture - Deploying Automation Role Servers and How-To: Install and Configure a Patch Manager Automation Role Server for more information about the use of Automation Role servers.)

 

Endpoint Mapper connect attempt reports the status of the attempt to connect to the RPC Endpoint Mapper on port 135. When the status of this event is reported as “Failed”, and the status of DNS and ARP resolution events are reported as “Success”, this is generally the result of an intervening firewall blocking traffic on port 135.

 

File and Printer Sharing connect attempt reports the status of the attempt to establish a file sharing session on port 445 using SMB over IP. When the status of this event is reported as “Failed”, either an intervening firewall is blocking port 445, or the File and Printer Sharing service may not be enabled. Comparing the results of the Endpoint Mapper connect attempt can shed additional light on the situation. It’s also important to note that File and Printer Sharing is only needed to deploy or update the Patch Manager WMI Providers. If the WMI Providers are already deployed, a failure here will not negatively impact the completion of the inventory task.

 

WMI connect attempt reports the status of the attempt to establish the WMI session. When this event is reported as “Failed”, you should check firewall configurations as well as the credentials configured in the assigned credential ring. If using a local account to access the machine, the password stored for the credential may not match the password configured on the machine’s local account; also confirm that the chosen credential does have local Administrator privileges on the target machine.
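If you want to reproduce some of these checks outside the console while troubleshooting, here is a rough sketch (the hostname is a placeholder) that mirrors the DNS resolution attempt and the TCP connection attempts to the RPC Endpoint Mapper (port 135) and File and Printer Sharing (port 445). The ARP and WMI steps are omitted; this is an approximation of the checks described above, not Patch Manager's actual implementation.

```python
import socket

def check_machine(name: str, timeout: float = 3.0) -> None:
    # DNS resolution attempt
    try:
        ip = socket.gethostbyname(name)
        print(f"{name}: DNS resolution -> Success ({ip})")
    except socket.gaierror:
        print(f"{name}: DNS resolution -> Failed")
        return

    # Endpoint Mapper and File and Printer Sharing connect attempts
    for port, label in ((135, "RPC Endpoint Mapper"),
                        (445, "File and Printer Sharing")):
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                print(f"{name}: {label} (tcp/{port}) -> Success")
        except OSError:
            print(f"{name}: {label} (tcp/{port}) -> Failed")

check_machine("server01.example.com")   # hypothetical target machine
```

If port 135 succeeds but port 445 fails, suspect a firewall rule or a disabled File and Printer Sharing service, just as described above.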

 

Partially Successful Inventory Connections

A machine displaying a yellow triangle icon indicates a machine that was successfully inventoried, but one or more issues occurred while attempting to access a specific datasource or object. The specific objects that were impacted will be listed at the bottom of the Computer Details tab.

 

Tabs & Graphics

There are four tabs provided on the Managed Computers node that display the statistical results of various steps in the inventory collection process. Double-clicking on any graph segment will launch the corresponding report listing the machines affected.

 

The Computer Inventory Details tab shows the specific datasources collected, and the timestamp of the last successful collection.

 

The Connectivity Summary graph shows the number of systems targeted for inventory, the number of systems accessible via WMI, the number of systems presumed to be powered off (or otherwise not reachable), and the number of systems that are reachable, but could not be accessed with WMI.

 

The Connectivity Failure Summary graph shows the number of systems that failed at any of four of the five steps of the connection process: DNS resolution, ARP resolution, RPC Endpoint Mapper connectivity, and File and Printer Sharing connectivity. It also contains results for a NetBIOS connection attempt, which is also performed during the inventory connection process.

 

The WMI Failure Summary graph shows the number of systems that failed WMI connectivity, grouped into the three most commonly occurring reasons: [1] Access Denied, [2] Firewall blocked or WMI disabled, and [3] other WMI failures not attributable to any other known cause.

 

In addition, the Managed Computers node provides a Discovery Summary graph, which shows the number of machines that were accessible on selected ports tested during a Discovery event. Discovery is a process by which devices and machines are identified by IP address, and network accessibility is determined by TCP port availability.

 

For more information about the features of Patch Manager, or to download your own 30-day trial, please visit the Patch Manager product page at SolarWinds.

Last January, in another article, I discussed the scenario of using a new WSUS v6 server (which runs on Windows Server 2012) in combination with Patch Manager.

 

But I overlooked one scenario in that article, and since then another one has arisen.

 

The fundamental challenge with mixed scenarios involving different operating systems has to do with the WSUS API version. In order to support local publishing activities (basically anything involving putting a third-party update into the WSUS database), the version of the WSUS console on the Patch Manager server and the version of WSUS installed on the WSUS server must be identical. If they are not identical, the Patch Manager Publishing Wizard will return the error message:

     Message: Failed to publish packageName. Publishing operation failed because the console and remote server versions do not match.

 

You can get more information about this particular message, and other known causes, in SolarWinds KB4328.

 

Today, there are four supported production versions of WSUS that can contribute to this situation:

  • WSUS v3.2 - runs on Windows Server 2003, 2008, and 2008R2.
  • WSUS v6.2 - runs on Windows Server 2012 (RTM)
  • WSUS v6.3 - runs on Windows Server 2012 R2
  • WSUS v10  - runs on Windows Server 2016
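If you're not sure which WSUS version a given server is running, one quick check is to read the version string from the WSUS setup registry key on the WSUS server itself. A minimal sketch; the registry path below is the standard WSUS setup location to the best of my knowledge, so verify it against your own build.

```python
import winreg

# Standard WSUS setup key (assumed; confirm on your own server).
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SOFTWARE\Microsoft\Update Services\Server\Setup",
)
version, _ = winreg.QueryValueEx(key, "VersionString")
winreg.CloseKey(key)

# Expect something like 3.2.x, 6.2.x, 6.3.x, or 10.0.x
print(f"WSUS version: {version}")
```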

 

So the original article dealt with the scenario where WSUS v6.2 was being deployed on Windows Server 2012, but presumed that Patch Manager already existed, or was being deployed, on a non-WS2012 system. It explained how to get the WSUS v3 console on the Patch Manager server to talk to the WSUS v6.2 server. Essentially, we did that by forcing the connection through a WSUS v6.2 console installed on a second Automation Role server.

 

Here's a breakdown of the various combinations of Patch Manager and WSUS: which ones connect natively, and which require an additional Automation Role server (typically installed on the WSUS server, but it can be a third system).

For the sake of completeness (and future capabilities), I've included the scenario involving Patch Manager on Windows Server 2012 R2 -- but please note that Patch Manager v1.85 is not officially supported on Windows Server 2012 R2 at this time. Implementing a Patch Manager Automation Role server on Windows Server 2012 R2 or Windows 8.1 will require Patch Manager v2.0 (coming soon!).

 

WSUS v3 (on WS2003/2008/2008R2)

  • Patch Manager on WS2003/2008/2008R2 (WSUS v3 console): Direct connection from the PAS works.
  • Patch Manager on WS2012 (WSUS v6.2 console), WS2012 R2 (WSUS v6.3 console), or WS2016 (WSUS v10.0 console): Requires an Automation Role server on the WSUS v3 server, or on another system running WS2003/2008/2008R2 or Windows Vista/Windows 7.

WSUS v6.2 (on WS2012)

  • Patch Manager on WS2012 (WSUS v6.2 console): Direct connection from the PAS works.
  • Patch Manager on WS2003/2008/2008R2, WS2012 R2, or WS2016: Requires an Automation Role server on the WSUS v6.2 server, or on another system running WS2012 or Windows 8.

WSUS v6.3 (on WS2012 R2)

  • Patch Manager on WS2012 R2 (WSUS v6.3 console): Direct connection from the PAS works.
  • Patch Manager on WS2003/2008/2008R2, WS2012, or WS2016: Requires an Automation Role server on the WSUS v6.3 server, or on another system running WS2012 R2 or Windows 8.1.

WSUS v10.0 (on WS2016)

  • Patch Manager on WS2016 (WSUS v10.0 console): Direct connection from the PAS works.
  • Patch Manager on WS2003/2008/2008R2, WS2012, or WS2012 R2: Requires an Automation Role server on the WSUS v10.0 server, or on another system running WS2016.

 

Note particularly that it is the version of the WSUS SERVER that determines where the additional Automation Role server must be installed, not the operating system that the Patch Manager server is installed on.

 

UPDATE [1/6/2014]: In the original version of this article, I failed to mention the requirement to create an Automation Server Routing Rule (ASRR). It is the ASRR that tells the Patch Manager server to route all requests for the WSUS server through the appropriate Automation Role server. To create an ASRR:

  1. From the Managed Enterprise node, select the Automation Server Routing Rules tab.
  2. In the Actions Pane on the right, under "Routing Rules" select "Add WSUS Server Rule".
  3. Select the WSUS Server from the dropdown menu and click on "Save".
  4. Check the correct Automation Server from the list.
  5. IMPORTANT: Check the "Absolute Rule" checkbox at the bottom of the dialog.
  6. Click on OK.

 

For more information about the architecture and implementation of Patch Manager Automation Role servers, please see the Administrator Guide and these blog articles:

Patch Manager Architecture - Deploying Automation Role Servers

How-To: Install and Configure a Patch Manager Automation Role Server

Last week I had the opportunity to spend a couple of days in the hallway at SpiceWorld in the Austin Convention Center. Well, really I was in the SolarWinds booth, but the vendor booths were in the hallway, so there I was nonetheless.


SpiceWorld is the annual conference of SpiceWorks which, functionally speaking, is a very large online community of ITPros who hang out and help each other with their day-to-day challenges. This was SpiceWorld’s first year in the Austin Convention Center, having outgrown the University of Texas’s conference center. Over a thousand people attended this year, and next year the goal is 2,500! For two days, ITPros, generally associated with SMBs, MSPs, and non-profits, shared each other’s physical presence, chatted with vendors, attended educational sessions on a myriad of topics ranging from Intro to PowerShell to Advanced Storage, and enjoyed the Austin nightlife. For a look back at the intensity that is SpiceWorld, you can review the SpiceWorld forum on SpiceWorks.com.

 

SpiceWorks also makes software. IT management software, to be specific. A recurring theme in the questions we were asked at SpiceWorld was “How does SolarWinds compare to SpiceWorks?”, “When/Why would I convert/migrate/upgrade from SpiceWorks to SolarWinds?”, or, in a more general context, “When should I consider paying for software, rather than using free software?”

 

I’m going to take a moment here and share my answers to those questions, hopefully for the greater good. I’ll also point out that these considerations do not just apply to the SpiceWorks vs SolarWinds question, but apply to any scenario in which you find yourself comparing a “free” solution to a “not free” solution.

 

I think there are two significant points to keep in mind in the free vs not-free scenario: feature development and technical support.

  1. Generally speaking, “free” software does not have as rich a development cycle as “not free” software. In some cases, free software is built once, for a single purpose, and then goes unsupported and unmaintained for the rest of its life. In other cases, such as open-source software, the development cycle may be at the mercy of the availability of volunteer resources who are interested in particular feature sets. You need to consider the impact of the development lifecycle on the product and how that will affect your usage. Is it likely that at some future time you’ll outgrow the free product?
  2. Because it’s free, many of these products also do not have a rich support ecosystem. One of the advantages you’ll often find with paid products is that the vendors provide 24x7 technical support services for those products. If you’d rather invest your work time doing something productive, rather than trying to track down an unexpected behavior in a free product, that might be the time to move to a paid product.

 

In all fairness to SpiceWorks, though, it’s interesting to note that they don’t actually fit the mold of your typical “free product” vendor, because they have a full-time development team and a very active support system and community. SolarWinds does also, I should mention, so if you’ve not yet joined Thwack, you should definitely check it out. By the way, the other thing SolarWinds has a lot of is Free Tools! So this discussion even applies when comparing our free tools to our own paid products.

 

In the end, the question of when to migrate from SpiceWorks to SolarWinds is a bit more complex and probably needs to be evaluated on more of a case-by-case basis. Or, it can also be viewed as a very simple question: Is your “free product” still meeting your business needs? If so, it’s very hard to justify spending money you’re not already spending. But if it’s not, there are options! :-)

Last week I had the opportunity to spend another few days in New York City. I’m a big fan of the city, especially at night, and NYC is definitely the place to absorb that experience. But it wasn’t all touristy; in fact, I spent most of my time at the Interop New York trade show, which was held in the Javits Convention Center in Midtown West.


The show covered five days and was co-located with a number of smaller shows, including the Mac & iOS IT Conference, the InformationWeek CIO Summit, and LightReading’s Ethernet & SDN Expo. As a result of the convergence of all four of these events, there was quite a cross-section of ITPros present. I talked with CIOs, VPs, VCs, and IT directors/managers, as well as network and system administrators working in the trenches every day (and working even on those four days, compliments of our 21st-century remote access technologies).


As perhaps expected, a significant theme of the show was the growing interest around Software Defined Networking (SDN) and Software Defined Data Centers (SDDC). SDN and SDDC are all about “the cloud” and methodologies for managing the cloud, but thoughts about the cloud still have notable polarization: Why does IT fear the cloud?


In the expo hall, the two most notable groups of vendors I saw were those who build products for network monitoring (including SolarWinds, of course), and vendors selling refurbished Cisco hardware.

[Photo: the nuPSYS big-screen display, with the first 12 panels removed]
The coolest thing I saw at the show was a “big screen” touch screen by nuPSYS. They build technologies for NOCs, including a multi-touch plate that sits on top of existing screens, effectively converting the display into a multi-touch display. This big screen consists of 40 individual rear-projection display units, synchronized together, with a touch screen mounted on the front of each projector box. In the photo you can see the first 12 panels removed.


For those who are attracted to keynote presentations at these events, you can find the entire Interop archive online, but Interop New York offered a couple of very interesting presentations: John Chambers, Cisco’s Chairman and CEO, talked about networking and “the Internet of everything”, and William Murphy, CTO of The Blackstone Group, revisited the ongoing question “Is IT Irrelevant?”. Recently on Thwack, we had a similar conversation about The Future of IT Jobs. Where is your IT job headed?


For me, half of my duties included setting up and tearing down the tech equipment for the SolarWinds booth, which included a humbling lesson about remembering to plug in the network cables before freaking out about why the network doesn’t work right. The other half, the half I enjoy the most, was hanging out in the booth and talking to customers and not-customers about SolarWinds products, and technology in general. One thing unique about Interop, compared with other shows such as TechEd, Cisco Live!, or VMworld, is the higher number of executives and affiliated professionals (such as venture capitalists) in attendance, and while they’re not the primary people we generally talk to, it’s always interesting to have conversations about our products with them.


All in all it was a great week in New York, and I’m looking forward to the next opportunity to come out and mingle with the people who are doing the real work in I.T.

WSUS Inventory collects server information and status information from the WSUS server and populates that data into the Patch Manager database. This data is collected via the WSUS API. An inventory task must be executed before you can use the reporting features.

 

Create the WSUS Inventory Task

There are several methodologies that can be used to create a WSUS Inventory task, but the simplest is to right-click on the WSUS node in the console and select "WSUS Inventory" from the menu.


The first screen presented is the WSUS Inventory Options screen. These options provide the ability to handle certain advanced or complex inventory needs, but in 99% of instances they can be left at the defaults. In a subsequent post I'll discuss these four options in greater detail. Click on Save.

 

On the next screen you have the standard Patch Manager dialog for scheduling the task. Schedule the inventory to occur as needed. Typically the WSUS Inventory task is performed daily, but there are scenarios in which you may wish to perform the inventory more or less frequently. Be careful not to schedule the WSUS Inventory concurrently with other major operations, such as backups or WSUS synchronization events.

 

WSUS Extended Inventory (Asset Inventory)

In addition to the WSUS server, update, and computer status data, it is also possible to collect asset inventory data via the WSUS Inventory task. If the WSUS server is configured to collect this asset inventory data, it will be automatically collected by the WSUS Inventory task. To enable the WSUS server to collect asset inventory data from clients, right-click on the WSUS node in the console, select "Configuration Options" from the menu, and enable the first option "Collect Extended Inventory Information".

 

Using System Reports

With the inventory task completed, you now have access to several dozen pre-defined report templates in the two report categories named "Windows Server Update Services" and "Windows Server Update Services Analytics". The data obtained from the WSUS server is re-schematized within the Patch Manager database, optimized for reporting, and presented as a collection of datasources that are used to build reports. To run a report, right-click the desired report and select "Run Report" from the context menu.

 

Category: Windows Server Update Services

In the "Windows Server Update Services" report category there are 24 inter-dependent datasources available. Ten of them provide 327 fields of basic update and computer information, along with WSUS server data. The fourteen datasources named "Update Services Computer..." provide access to 111 fields of asset inventory data collected by the WSUS Extended Inventory.

 

Category: Windows Server Update Services Analytics

In the "Windows Server Update Services Analytics" report category there are nine self-contained, independent, datasources. The "Computer Update Status" datasource is the basic collection, and the other eight are based on modifications of this datasource, either by adding additional fields, or filtering the data.

 

In subsequent articles we'll look in more detail at how to customize existing reports and how to build new reports, including a more in-depth look at datasources and the WSUS Inventory Options.

 

If you're not currently using Patch Manager in your WSUS environment, the rich reporting capabilities are a great reason to implement Patch Manager.

In this PatchZone article I discussed how to configure a custom update view to allow update approvals to be captured from a source group (e.g. a Test Group) and duplicated into one or more additional groups (e.g. production groups).


Patch Manager can also use this technique, but one complication of the update-view approach is that if you have multiple source groups, you need to create a separate view for each one. With Patch Manager we have a simpler, more direct methodology that can be used for any number of source groups.

WSUS Server -> Update Approvals tab


From the WSUS Server node of the console, select the Update Approvals tab. This view shows the entire approval event history of the WSUS server, including automatic approvals, and any changes in approval status, such as removing an approval. The list has one entry for each target group where an approval has been set and also provides the date of the approval and the identity of the console user who issued the approval.

Filter the Update List


In the Computer Group column, open the filter selection dialog and select your source group. In this example, we’re using the group “Test Computers”.


With the list filtered by the source group, you can then sort or filter by the Approved Date. One of the particularly useful features of date filtering in Patch Manager is the ability to filter on more than one specific date. The date filter provides a tree view of the actual date values in the column and you can include or exclude entries by a specific date.


 

Select the Updates


In this scenario we only have three updates approved for our “Test Computers” group, so no additional date filtering will be necessary. Select the updates and click on the “Approve” action in the “Update Approvals” section of the Action Pane. (Notice that I have collapsed the top half of the Action Pane which relates to server actions, in order to see the “Update Approvals” specific actions more clearly.)


Add Approvals


This will open the standard Approve Updates dialog where you can now select additional groups to add approvals. In this case I’ve selected “Win2008R2” (my production group).


And that’s all there is to it!

In a Product Blog article last August I talked about why you might want to deploy additional Automation Role servers when using SolarWinds Patch Manager. In this article I’m going to describe exactly how to do that.

 

Installation

The first step is to install the server. Installation of an additional Automation Role server is very similar to how you installed the original Patch Manager Primary Application Server (PAS), with only a couple of minor variations.

 

Launch the Patch Manager installer, and on the Express/Custom screen, select Custom.


Proceed through the installer screens as you did for the original server. When you arrive at the database selection screen, select Use a LOCAL instance of SQL Server Express Edition. Each Patch Manager server requires its own instance of SQL Server, and there’s no need to use anything except SQL Server Express Edition for an Automation Role server.


When you arrive at the role selection screen, select the “Automation Server” option.

Registration

When the installation reaches the point where it needs to register the new Automation Role server with the PAS, it will prompt you to provide the name of the PAS (again) as well as the credentials to authenticate with the PAS. The logon account must have membership in the Enterprise Administrators security role. Typically the local administrator account of the PAS, or a domain administrator account will serve this purpose.


When the installation is completed, the final screen will offer you the opportunity to launch a local console and connect to the PAS.


You can continue configuring the Automation Role server from this console connection on the Automation Role server, or you can use another console session that connects to the PAS.

 

Configuration

To begin configuration of the Automation Role server, connect a console session to the PAS.


Navigate to the Patch Manager System Configuration -> Patch Manager Servers node. You should see your new Automation Role server listed in the details pane of this node.


Note that the Automation Role server is not yet assigned to a Management Group, and it displays an icon with a red exclamation mark indicating that the configuration is not yet complete.

 

Launch the Patch Manager Server Wizard utility from the Action Pane.


Select the option “Edit an existing Patch Manager Server’s configuration settings”. Click on Next.


Select the new Automation Role server from the “Server Name:” dropdown menu, and click on Resolve if the remainder of the dialog does not automatically populate with the server’s attributes. Click on Next.


Assign this Automation Role server to a Management Group. In most instances, there will only be one management group, the default group “Managed Enterprise”; however, if you have multiple management groups defined, the Automation Role server must be assigned to one of them. It will manage tasks only for members of that management group. Select the correct management group from the “Management Group:” dropdown menu.


The “Server Role:” value should be automatically set to Automation. This option is used to add or remove roles from a Patch Manager server after deployment and registration. “TCP/IP Port:” defaults to 4092 and should not be changed.

 

Set the option “Include this Patch Manager server in the default round-robin pool” depending on whether the Automation Role server is being deployed for a specific purpose or just as an additional server for load sharing. If the option is disabled, only tasks matched by an Automation Server Routing Rule (ASRR) will be assigned to this Automation Role server. If the option is enabled, any task that does not match an existing ASRR may be assigned to this Automation Role server.

 

The last set of options is useful when the Automation Role server is being deployed across a bandwidth-constrained connection, such as a slow WAN link or a site-to-site VPN connection. It allows you to restrict the maximum amount of bandwidth used by the Automation Role server, and you can set the values differently for incoming/outgoing (i.e., download/upload) traffic. Click on Next. You'll then be presented with the configuration summary screen. Review the configuration options and click on Finish.

 

After clicking the Finish button from the wizard’s summary screen, you will be presented with an information dialog reminding you about the service restart requirement for the Patch Manager server.


After the changes are synchronized to the Automation Role server (e.g. management group assignment, round-robin option, and bandwidth restrictions), it is necessary to restart the Data Grid Service (or reboot the Automation Role server).

 

You should wait approximately five minutes after completing the wizard before initiating the restart. There are three ways you can restart the service:

  • Using the Services MMC.
  • From the command line using “net stop ewdgssvc” and “net start ewdgssvc”.
  • From the Services tab of the Computer Explorer in the Patch Manager console.

 

The last step is to create any needed Automation Server Routing Rules. ASRRs are not required unless you want to dedicate an Automation Role server to a specific machine or group of machines (by domain, workgroup, organizational unit, or IP subnet). You can also assign a WSUS server to an Automation Role server, but please note that this rule only assigns WSUS Administration tasks to the Automation Role server; it does not assign computer management tasks, nor does it assign clients of the WSUS server.

 

Navigate to the Management Groups -> Managed Enterprise node (or the node representing the management group that this Automation Role server was assigned to), and select the Automation Server Routing Rules tab.
From here you can create, edit, and delete ASRRs.


In a future article I'll talk in more detail about using ASRRs. In the meantime, for more information, and examples, of the use of ASRRs, please see the Product Blog article Patch Manager Architecture – Deploying Automation Role Servers.

One of the scenarios sometimes encountered in patch management environments is the disconnected network. Microsoft recognized this need and created functionality in WSUS to handle disconnected networks, and I wrote about this in the PatchZone article: Considerations with the WSUS Disconnected Network Environment.

 

Just to review, a disconnected network scenario with WSUS and Patch Manager has one of each server (WSUS and Patch Manager) in each network.

 

Patch Manager Enhancements for Disconnected Operations

Patch Manager provides a capability similar to that of WSUS, but with a couple of nice enhancements. First, where WSUS requires you to export all of the updates in the catalog, Patch Manager allows you to export one, some, or all of the updates. Second, WSUS requires you to export the metadata separately from the installation files; Patch Manager allows you to bundle them in the same CAB file for transport on removable media. Detailed procedures can be found in the Patch Manager Administrator Guide, in the section “Importing and Exporting Catalog” on page 52.

 

The Challenge for Patch Manager in a Disconnected Environment

However, the biggest challenge for Patch Manager in this scenario is not a technological problem, but rather a licensing one. There’s no argument that a 250-node license for a two-node installation of Patch Manager on the connected network is a pretty steep price to pay. If you were willing to forgo telephone support for that connected server, you could use a 50-node installation of DameWare Patch Manager, but even that’s pricey for a two-node network.

 

The good news, however, is that you do not have to purchase a separate license for your single-node connected network. With a bit of creative use of Patch Manager server roles, you can license that connected server as a node of the license applied to the disconnected server. Let’s look at how this is done.

 

Install Both Servers on the Disconnected Network

On the disconnected network, we’re going to install the Patch Manager Primary Application Server (PAS). This is the server that will be used to manage the WSUS server in the disconnected network, as well as the clients of the disconnected network.

 

Also, on the disconnected network, we’re going to install a Patch Manager Secondary Application Server (SAS) with the Management Server role. This server will be registered with the PAS, and as such, will be automatically licensed for use by the license applied to the PAS. Note that this can be either a physical system or a virtual machine. When we’re ready to put this SAS in service, it’s just a matter of transporting the physical system (or virtual machine files) across the network gap and plugging it into the connected network.

 

Create Scope Objects on PAS for SAS

There is one technological consideration to be aware of in this scenario. The PAS replicates all defined scope objects (Domains, Workgroups, WSUS Servers, and Computers) to the SAS. In order to get the connected WSUS server registered on the SAS, the WSUS server scope object must be created at the PAS and replicated before moving the SAS to the connected network.


From the Patch Manager System Configuration node, in the Details Pane, double-click on Scope Management. Click on Add Rule and select Update Services Server. Use the “Enter the object to add” button to manually create an entry for the connected WSUS server, and click on Save. Within a couple of minutes, that scope declaration will replicate to the SAS; you can access the Scope Management tool on the SAS to confirm. You may also wish to add the Domain or Workgroup for the connected network.

 

Deploy SAS to Connected Network

Once the replication is complete and the SAS has been moved to the connected network, the connected WSUS server can be registered on the SAS and added to the management group defined on the SAS.

 

Credentials, Credential Rings, Security Role memberships, and User Preferences are all entities defined at each individual application server, so you can create those directly on the SAS at any time, before or after actual deployment to the connected network.

 

If you’re not currently using Patch Manager and you have a disconnected network environment, check it out. Download your 30-day trial today. Even if you don’t have a disconnected network, try it anyway!

We had an awesome week in the SolarWinds booth at TechEd North America. It's great being able to meet and chat with our customers, and we were particularly surprised at the number of SolarWinds customers attending TechEd NA this year. We talked about all the products, but especially a lot about SAM and Patch Manager, and Virtualization Manager was a finalist for the Best of TechEd Awards. We even closed a couple of support tickets in the booth!

 

Perhaps the most notable announcement to come out of TechEd was the forthcoming release of Windows Server 2012 R2 (along with Windows 8.1) and System Center 2012 R2. Preview bits are expected to be available by the end of June, and you can register for notification from Microsoft when the bits are available.

 

But, no doubt, the high point of the week (well, at least for me, being the party animal that I am ... well, used to be!) was the closing party. Two things from this event to catch your attention. First, a memorable performance of Proud Mary by Tina Turner and the [TechEd] Tinettes ... here's a (not very good) photo I captured to commemorate the occasion. At the end, to a one, each of those dozen IT pros got/gave hugs from/to Tina. I'm not sure who was more excited ... the guys being in the presence of music royalty, or Tina having a dozen men half her age (or more) huggin' on her. One thing's for sure ... the lady can still dance, sing, and rock a house.

 

[Photo: the Proud Mary performance at the TechEd closing party]

Second, two hours of the hottest party music from a local New Orleans band, MoJeaux. Check them out, for sure. Here's a 40-second video clip from MoJeaux's Facebook. IT Pros Rock!

 

I'm already getting warmed up for TechEd North America 2014... which will be held in my favorite city ... Houston! May 12-15, 2014.

Greetings All!

 

I'm in the Big Apple, New York City, this week ... doing a presentation Wednesday morning at the Cybit Expo. Cybit is a "computer forensics" show, which focuses on Cyber Security and IT Security. My presentation is titled "Sharing Without Sacrifice: Managing Systems and Mitigating Risk in Today's Virtual Military" and I'll be talking about how to effectively monitor and manage virtual systems within the realm of military deployments, both mobile as well as stateside.

 

In addition to the presentation at Cybit, I'm scheduled to do some interviews with some media outlets, and we'll be talking about Cybit, my presentation (and whatever else those media outlets want to explore). ;-)

 

If you're in the New York City area, let me know at @LawrenceGarvin (http://twitter.lawrencegarvin.com) or LinkedIn (http://linkedin.lawrencegarvin.com) -- or just post a reply to this blog right here on Thwack.

I'll be monitoring all channels and I'd love to have a chance to meet you in person.

 

If you're not in NYC this week, I'll also be in the SolarWinds booth at TechEd North America (New Orleans) the first week of June or Cisco Live! (Orlando) the last week of June.

 

And don't forget your mom on Sunday!!!!! :-)


Managing and monitoring a single IT operation can be pretty complex, and we make lots of great tools to help simplify that process. But what do you do when you're a Managed Service Provider (MSP) with multiple independent IT operations to oversee?

 

There are a couple of different deployment scenarios for MSPs to take advantage of our Server and Application Monitor (SAM) product, as well as its sibling products, including the following:

  • IP Address Manager (IPAM)
  • Network Configuration Manager (NCM)
  • Network Performance Monitor (NPM)
  • NetFlow Traffic Analyzer (NTA)
  • User Device Tracker (UDT)
  • VoIP and Network Quality Manager (VNQM)
  • Web Performance Monitor (WPM)

Centralized Deployment

The Centralized Deployment model is based on a single, centrally located SolarWinds server running SAM (or one or more of the other products listed above) and configured to monitor/manage nodes remotely.

[Diagram: SolarWinds centralized deployment]

 

There are two ways in which remote nodes can be monitored from a central server.

 

The central server can connect directly to the remote nodes and a number of protocols and ports may be needed to support this. Two commonly used protocols are Windows Management Instrumentation (WMI) and Simple Network Management Protocol (SNMP). An SNMP connection works well in this fashion for network devices, but WMI connections for servers are a bit more complicated because of the dependencies on Remote Procedure Calls (RPC). If the remote nodes are within the same enterprise, then RPC connectivity may not be an issue, but RPC is rarely capable of traversing a firewall connection. If you are managing a remote network via an always-on Site-to-Site VPN connection, then RPC/WMI may be possible across the VPN. Alternatively, enabling SNMP on the servers can provide a methodology for monitoring.
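To make the SNMP option concrete, here is a minimal sketch of the kind of poll a monitoring server performs, using the third-party pysnmp package to read a device's sysDescr over SNMPv2c. The hostname and community string are placeholders, and SolarWinds products do this internally; this is illustrative, not anything you would need to write yourself.

```python
from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, getCmd)

# One SNMPv2c GET of sysDescr.0 against a hypothetical device.
error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),          # placeholder v2c community
    UdpTransportTarget(("router1.example.com", 161), timeout=2, retries=1),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication:
    print(f"Poll failed: {error_indication}")    # e.g., timeout, no route
elif error_status:
    print(f"SNMP error: {error_status.prettyPrint()}")
else:
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()}")
```

A single UDP exchange on port 161 like this is far easier to pass through a firewall or site-to-site VPN than WMI's RPC dependencies, which is exactly the trade-off described above.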

 

A second option is to deploy pollers to the remote sites. Poller remotability is useful when there is a large number of nodes to monitor on a remote network. This offloads the WMI or SNMP traffic from the site-to-site connection by performing those tasks locally and then relaying the information back to the central SolarWinds server. The information is sent directly to the instance of SQL Server supporting the main SolarWinds server, so there are some firewall and port considerations with this method as well. As a result, this methodology is not well suited to the MSP scenario, but it may work well within a single multi-site enterprise. There are also latency issues with the database communication to be aware of.

Decentralized Deployment

The Decentralized Deployment model provides some advantages over the Centralized Deployment model by eliminating the challenges involved in supporting RPC/WMI or SQL traffic over a site-to-site network.

[Diagram: SolarWinds decentralized deployment]

 

In this model, independent SolarWinds servers running SAM (or other products) are installed at the remote sites, and the Enterprise Operations Console (EOC) is used to provide a centralized, aggregated view of the individual remote sites. In addition, the EOC can be customized on a per-operator basis, so operators who are responsible for only some sites can have their view restricted to just those sites. The EOC can also be filtered by the particular SolarWinds products in use. For example, network administrators may wish to focus on the content provided by NPM, NCM, IPAM, and NTA, while systems administrators can focus on the information provided by SAM.

Hybrid Approach

Finally, if you’ve implemented multiple SolarWinds products in this family, you can also choose a hybrid approach. You can implement one or more products with the centralized model and others with the decentralized model, and regardless of the model used, the EOC can connect to all of them.

 

The diversity of deployment options makes monitoring and managing independent customer sites a breeze for MSPs who implement SolarWinds products. For more information about the ways SolarWinds can help MSPs manage customer operations, please visit the website for SolarWinds Managed Service Provider Software.
