It’s quite common to find our virtual machines (VMs) running slow, or the apps running on them becoming sluggish. There can be numerous reasons for VM issues, and some of them are caused by the dependencies between VMs and other elements in the virtual environment, such as the host or datastore. Let’s look at some common cases of VM performance problems.


  • A VM could be slow because the host doesn’t have enough resources to allocate to it. This is a resource contention issue at the host level: there is VM resource demand, but the host cannot provision the requisite resources.
  • A VM could be slow because too many VMs are accessing a datastore and contending for storage. The datastore ends up serving more VM load than it can actually support.


It can be difficult to identify the source of a VM bottleneck even if you have alerts set up to notify you of an anomalous VM metric. What you need here is a relational understanding of which VM is run by which host as part of which cluster.


SolarWinds Virtualization Manager provides an informative interface to view the entire virtual environment in the context of a chosen element (such as a host, a VM, a cluster, or a datastore). Once you understand the relational dependencies between the various elements of the virtual infrastructure, you will be in a better position to infer the cause of resource contention, if that’s what is causing a VM’s performance issues.



For each selected virtual component, you can see the other elements of the virtual infrastructure associated with it. For example, if you have selected a cluster, you can relationally see

  • all the hosts that form the cluster
  • all the VMs that run on all the hosts in that cluster, and
  • all the datastores that are attached to all the hosts in that cluster


This works in other combinations too. A selected datastore will tell you which VMs are accessing storage from it, which hosts are attached to it, and so on. The same applies to VMs and clusters.


The best part is that as you view all of the virtual infrastructure from VM to host to storage, you will also see the alerts on each of these elements right on the environment map. A simple mouse-over will tell you what issues a VM, host, or datastore is having. You can drill down from there to explore further.


So, if you find a VM that is running slow, you can identify the associated datastore and diagnose how many other VMs are consuming resources from that datastore and whether this is causing any resource hogging. With this level of information, troubleshooting becomes much simpler. You can decide whether to provision more resources to the host, or load-balance the VMs based on business requirements.


Time Travel is another feature of this dependency mapping which allows you to “go back in time” to see the dependencies that existed at a point in time in the past. For example, you can see all of the VMs that were on a particular datastore 3 days ago and which ones were generating the most I/O.


Watch this short video by Scott Lowe (vExpert), where he explains how you can use the virtual environment mapping and time travel features of SolarWinds Virtualization Manager to contextually understand performance issues in your virtual infrastructure.


For those of you who were waiting for an easy and reliable solution to help you streamline patching of Microsoft and 3rd party applications – Patch Manager 2.0 is all yours for the taking. And, as always, if you are an existing Patch Manager customer under active maintenance, you can enjoy the new features at no additional cost.


Alright, what’s new with Patch Manager 2.0?

Traditionally, deploying patch management tools has been a tedious process even for the most experienced administrators. With Patch Manager 2.0, the focus has been on making the installation and configuration process easier. Here’s the simplified installation flow with the new version.



New Features Include:

Pre-Installation Wizard

  • Guides you through the prerequisites for setting up and deploying Patch Manager
  • Detects installed components and their limitations, and lets you select the appropriate upgrades.

Overall, it makes your life easy because all you need to do is select the components that are not already installed.



Orion integrated installer

Patch Manager plugs into your Orion installer to provide options for Express (automatic) installation and Advanced (manual) configuration to upgrade essential components. The Express option needs minimal user input: it automatically installs and uses SQL Server Express for both Patch Manager and Orion.





Configuration and first time use wizard

The first-time usage wizard walks you through configuring your SolarWinds Orion installation and the Patch Manager installation. It helps you set up the database, website, server role, WSUS, credentials, etc.


In the configuration phase, you can select the clients on which you would like to test patch updates. The configuration process uses WMI providers to configure clients to receive 3rd party updates from a (typically local) WSUS server.


Once the configuration phase is complete, you can select the patches for installation. The wizard then detects and downloads the applicable patches from the set of pre-tested patch packages on the SolarWinds server and publishes them to the WSUS servers.



Agent-based Patch Deployment

Lastly, Patch Manager 2.0 introduces an optional agent that runs as a stand-alone service on a Patch Manager client device. As an active running service, the agent can query WMI providers locally.


At SolarWinds, we are open to your ideas and encourage you to share with us on the thwack community forum. Your input is invaluable as we continue to build upon Patch Manager’s powerful patch management capabilities.


Share your thoughts!


Here’s how you can get the new “Patch Manager 2.0”

If you’re monitoring a large virtual infrastructure, you have probably come across memory ballooning and swapping issues at some point. Memory ballooning can lead to swapping issues in a virtual environment, so let’s look at what both are and how they affect individual VM performance.


When you set up a VM, you allocate resources for that VM based on usage, user access, and so on. Once you assign resources to a VM, the VM is not aware of how many resources are consumed by other VMs on the host. For example, when you have three VMs on a host and two of them are consuming their assigned memory, the third VM can have performance issues. This usually leads to various bottlenecks, and you’ll often find yourself scrambling to troubleshoot so that VM performance or availability issues don’t affect end users.


Memory ballooning & swapping: How it works and where it fits in your virtual environment


Memory ballooning is a memory reclamation technique, and it is generally not a good sign because it causes performance issues for your VMs. The hypervisor assigns memory resources to all the VMs in your environment, and a VM is usually unaware when other VMs have memory contention issues that cause performance problems in the environment. To address this, the hypervisor uses a balloon driver installed in the guest operating system. The balloon driver identifies whether a VM has excess memory. Once it has identified excess memory, the balloon driver inflates and pins down that memory so the VM does not consume it, then communicates back to the hypervisor, asking it to reclaim the pinned memory from the VM.


It is then up to the hypervisor to allocate more memory to other VMs that have memory contention issues. One thing to keep in mind: when there is heavy memory ballooning, the guest OS is forced to read from disk, which causes high disk I/O and can bring down the performance of the VM. When you think the host is under memory pressure, it’s critical to monitor the memory swap metric for performance issues.

Memory ballooning leads to memory swapping. Memory swapping is also not a good thing because the system swaps to disk, and disk is much slower than RAM. All this causes a performance bottleneck. Usually, excessive memory swapping on a VM indicates that there are memory contention issues on the host.

To proactively find and address memory issues, you can set up alert criteria. Set thresholds and get notified when there are ballooning or swapping issues. Drill down to see which VMs have performance issues, and identify and fix the bottleneck. Watch vExpert Scott Lowe in action as he shows you how to monitor and troubleshoot memory ballooning and swapping issues using virtualization management software.
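If you script your own checks, the threshold-alert logic described above can be sketched in a few lines of Python. The metric names and threshold values here are purely illustrative assumptions, not from any particular product:

```python
# Hypothetical sketch: evaluate a VM's memory metrics against alert
# thresholds for ballooning and swapping. Threshold values are illustrative.

def check_memory_alerts(balloon_kb, swap_rate_kbps,
                        balloon_warn_kb=500_000, swap_warn_kbps=100):
    """Return a list of alert strings for a single VM."""
    alerts = []
    if balloon_kb > balloon_warn_kb:
        alerts.append(f"ballooning: {balloon_kb} KB reclaimed by balloon driver")
    if swap_rate_kbps > swap_warn_kbps:
        alerts.append(f"swapping: {swap_rate_kbps} KB/s swapped to disk")
    return alerts

print(check_memory_alerts(balloon_kb=750_000, swap_rate_kbps=20))
```

The same pattern extends naturally: feed it per-VM metrics from whatever collector you use, and route non-empty results to your notification channel.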



You can read this whitepaper from VMware® to learn more about memory resource management and memory ballooning.

I recently got involved with a little DevOps integration project, using SolarWinds NPM and Plexxi Control's Data Services Engine. The goal was to turn SolarWinds monitoring data into something Plexxi Control can use to modify a Plexxi network in true SDN fashion.


This integration took on a life of its own and evolved throughout the year, and we ended up with a module that can use Twitter(!) to send out Events and Alarms from SolarWinds.

Before you declare that we are crazy for Tweeting our Events: this is more a story of a modern DevOps integration, and the use of Twitter is just an example of a mechanism to pub/sub data between data sources.


Earlier this year, Plexxi started working with the SolarWinds NPM REST APIs, as we wanted to harvest network node data, but we soon realized that the most complex part of the integration was managing adds, moves, and changes. Polling the entire list of nodes every few seconds to check if something changed does not scale. So we created an integration with the SolarWinds Event/Alarm feed.


Plexxi had previously developed an integration tool named the Data Services Engine (DSE) that helps us modularize these integrations; you can find out about it here.

The DSE is an in-memory message bus tool that makes data available for products to consume using their standard API interfaces.


Using the DSE, we poll the SolarWinds Event/Alarm feed every few seconds to see if there are new Events. We then process any Event that is interesting and publish the data in a well-known format on a well-known channel using the DSE.


Here is what we published. It's just a bit of JSON, using the SWIS DB field names, and you can process it easily in Python.


{'SolarWindsEventID':{5629: {'date': '2013-10-30T21:12:22.973', 'eventType': 56, 'message': 'Group xyz was created.', 'netObjectID': 96}}}



I don't even really need to explain these fields, but essentially this means that according to SolarWinds Event 5629, on 30th Oct '13 at 21:12 a new Group 'xyz' was created, and its net object ID is 96.
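As an illustration of how easily this payload can be consumed, here is a minimal Python sketch that unpacks the example event above. The structure follows the sample exactly; nothing here is specific to the real DSE module:

```python
# Sketch: unpack the published event payload from the example above.

event = {'SolarWindsEventID': {5629: {'date': '2013-10-30T21:12:22.973',
                                      'eventType': 56,
                                      'message': 'Group xyz was created.',
                                      'netObjectID': 96}}}

for event_id, fields in event['SolarWindsEventID'].items():
    print(f"Event {event_id} (type {fields['eventType']}) "
          f"at {fields['date']}: {fields['message']}")
```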


At this point in the integration you can probably see that this is fairly generic SolarWinds data, but a similar format could be applied to any Event monitoring product; it's just publishing Event data. So you could easily combine all your Event sources and publish them in a similar way.


Now, publishing data like this is only useful if you have something that will subscribe to that data, so we created a DSE channel that Plexxi could read, and our integration was done (you can see this in action in our SDN Central demo with SolarWinds on December 6th).


Later on, prompted by an article by @mbushong talking about events with Twitter (I'm not really sure if he was joking), I decided, as a thought experiment, to see how easy it would be to use the DSE to get this data published to Twitter!


The way of the world for API development these days is that for most modern systems you can find a nice Python library, and Twitter has a few such modules (all open source). I grabbed one at random and created a little DSE module that talks to Twitter.


The base module is literally 35 lines of code. Here is a snippet, where we process new events and post an update to the Twitter channel:


               # get a list of new events
               changes = [x for x in newEvents if x not in self.config['events']]
               for change in changes:
                   newEntry = {}
                   newEntry[change] = newEvents[change]
                   # store the current event so it is not re-published
                   self.config['events'][change] = newEvents[change]
                   # publish on Twitter
                   status = self.api.PostUpdate(str(newEntry))



Now, each time SolarWinds generates Events, the SolarWinds DSE channel we created reads them, filters them, and publishes the service-affecting ones (what is defined as service affecting is housed in the SolarWinds channel we created, and is totally customizable).


Running in parallel, the Twitter channel reads those Events and publishes any new ones to my temp Twitter account (please don't follow me, it's a little verbose).



There is a lot of extensibility here too. We started off using plain Tweets to our followers, but that means everyone gets everything, so we changed it to use Twitter Direct Messages so that our subscribers can request certain events. For example, a DM with 'Start:56' would result in our feed sending a DM back to that subscriber each time there was a Type 56 Event.
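To give a flavor of the DM handling, here is a hypothetical sketch of the subscription logic. The 'Start:NN' command format comes from the example above; the function name, 'Stop' command, and data structure are my own illustration, not the actual module:

```python
# Hypothetical sketch: parse "Start:NN" / "Stop:NN" Direct Messages into a
# per-subscriber set of event types to forward.

def handle_dm(subscriptions, sender, text):
    """Update a subscriber's event-type subscriptions from a DM."""
    command, _, value = text.partition(':')
    if not value.strip().isdigit():
        return  # ignore malformed commands
    event_type = int(value)
    subs = subscriptions.setdefault(sender, set())
    if command.lower() == 'start':
        subs.add(event_type)
    elif command.lower() == 'stop':
        subs.discard(event_type)

subscriptions = {}
handle_dm(subscriptions, '@alice', 'Start:56')
print(subscriptions)  # {'@alice': {56}}
```

On each new Event, the feed would then DM only those subscribers whose set contains that Event's type.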


Twitter is fairly secure, and you can hide all this in Direct Messages; it's available everywhere, it scales, and it's free. I mean, this is crazy, but we have a highly scalable Event exchange engine that anyone can use.


Now, in your environment, all you need is to get your application to process the Twitter events from your account (don't use mine), and you have a perfectly workable DevOps operational Event model for your organization. If your application has a good Python API module (which it should), then you may find the integration is as easy as I did.

Holiday seasons are an ideal time for stores because it’s when they get the most footfalls, and it’s a great time to cash in on sales. It is also an ideal time for shoppers to cash in on great deals and discounts. After all, who doesn’t love to shop when there’s a steep sale? Shoppers who avoid the holiday rush of going to a store may just end up visiting the online store instead. To give those shoppers that convenience, it’s essential to have a website that offers an awesome user experience – from the time the buyer enters the site until the sale is complete.


If the user experience is bad, it’s going to cost you the sale, and the customer may never come back. That’s a huge risk, especially during the holidays. We recently spoke about application downtime; similarly, if your website is unavailable, it’s going to cost your business plenty. It’s vital to monitor website availability, performance, and responsiveness. When customers are accessing your site from multiple locations, there’s all the more reason to ensure continuous uptime.


A website performance monitoring tool will help you find the root cause of a slow Web transaction, or determine whether the website itself is down.

  1. Monitor page load speeds. Monitor critical Web page elements such as images, JavaScript, CSS, and third-party content, and check whether these elements are impacting page load speeds. If one of the page elements fails to load, it spoils the whole user experience and affects the page load time.
  2. Set appropriate baseline metrics. You can set baseline values once you record the Web transaction. Based on the performance of the transaction, you can keep adjusting your baseline values. For example, once you’ve set a baseline of 80% as warning and 90% as critical, you should keep revisiting those values from time to time and modify them based on the website’s performance.
  3. Monitor page element behavior. Once you’ve found what the issue is, drill deeper to see the root cause. For example, if your site is not loading the way you want because of a style sheet, go deeper to look at the different components, their load times, whether another issue is preventing the style sheet from loading, etc.
  4. Monitor the user experience. Just because you’ve identified and fixed a website issue doesn’t mean that all is well. You have to test your fixes continuously from an end user’s perspective. Since your user base is going to access your website from multiple locations, you need to measure site performance accordingly and compare it with your baseline values.
  5. Get notified when there’s a problem. You can proactively receive alerts in real time if your website is not loading, a Web transaction has failed, or website performance is trending poorly.
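To make the baseline idea from step 2 concrete, here is a minimal Python sketch that classifies a measured transaction time against warning/critical fractions of a maximum acceptable load time. The 80%/90% figures follow the example above; the function name and sample values are illustrative assumptions:

```python
# Illustrative sketch: classify a measured Web transaction time against
# warning (80%) and critical (90%) fractions of a maximum acceptable time.

def classify_load_time(measured_s, max_acceptable_s,
                       warn_pct=0.80, crit_pct=0.90):
    if measured_s >= max_acceptable_s * crit_pct:
        return 'critical'
    if measured_s >= max_acceptable_s * warn_pct:
        return 'warning'
    return 'ok'

# With a 5-second maximum acceptable load time:
print(classify_load_time(3.5, 5.0))   # ok (70% of the maximum)
print(classify_load_time(4.2, 5.0))   # warning (84%)
print(classify_load_time(4.6, 5.0))   # critical (92%)
```

As step 2 suggests, you would revisit `max_acceptable_s` and the percentages over time as real transaction data accumulates.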


Be Proactive this Holiday Season


Rely on Web performance monitoring tools that will keep an eye on your websites along with other front-end applications. Proactively monitor your webpages, receive alerts if your website is having periodic issues, and fix problems before your business performance is affected.


Happy holidays everybody!

We are happy to announce that SolarWinds Enterprise Operations Console (EOC) version 1.5 is now available for download.


SolarWinds EOC provides the capability to monitor your enterprise network and gives unified visibility into your geographically distributed networks. The new version, SolarWinds EOC 1.5, provides the following key customer-driven features.

  • Support for Web Performance Monitor (WPM) Summaries – With this new feature, you can view the global top 10 web applications on your enterprise network, in addition to bandwidth utilization, response time, CPU, memory, and disk space utilization, and more.
  • Support for FIPS Compliance
  • Improved configurations options for using Syslog and Orion® Traps
  • Improved stability and usability in filters and reporting functionality


You can learn more about EOC version 1.5 and download a free fully functional trial to gain visibility into the health of your distributed network in minutes!


About SolarWinds Enterprise Operations Console

Enterprise Operations Console (EOC) provides a consolidated command center to monitor your entire enterprise network. You need to proactively maintain network stability and instantly respond to any network issues, wherever in the world they occur. And you need to ensure network resources are correctly utilized to optimally deliver business services across your networks - remotely and without impacting WAN performance.

Security Information and Event Management (SIEM) software benefits companies by providing a complete view of the security of their IT environment. Without this type of software, it is difficult to manage the individual event and incident log data that indicate risks to your security. Log data is generated by operating systems, firewalls, devices, and antivirus software, so analyzing and reviewing the large amount of data in logs is a daunting task.


SolarWinds Log & Event Manager (LEM) makes it easy to manage and analyze log files, mitigate threats, and automate compliance processes. LEM collects and catalogs log and event data in real time, from anywhere data is generated within your IT infrastructure, and delivers true real-time log and event correlation by processing log data before it is written to the database, enabling you to immediately respond to security threats and vital network issues.


LEM offers the following log management and compliance options:

  • Log Analysis Servers
  • Log Analysis Antivirus and Malware Protection
  • Log Analysis IDS and Firewall
  • Log Analysis Identity Authentication and Endpoint Protection
  • Vulnerability Assessment
  • Log Analysis Websites FTP and Content Management
  • Log Analysis Other Applications
  • Log Analysis Network Devices
  • IT Security
  • IT Compliance & Audit
  • IT Operations


SolarWinds also offers other log and security information management products that assist in collecting, correlating, and analyzing log data, and in managing enterprise security and compliance.

Growing up, I read the comic strip Dick Tracy every Sunday. My favorite part was the two-way telephone/television (and later, computer) watch. I was trying to learn how to tell time and the idea of having a watch that could literally tell you the time was really appealing. Plus, you could make or get a phone call and see the face of the person on the other line. "Will this television watch thing ever happen?" I asked my father while pointing to the watch. "I want one." He looked to where I was pointing. "A television watch..." he pondered and shrugged. "Probably won't happen."


Let's move ahead to today. It's late 2013, and one of the hottest gifts for this holiday season is the smart watch, which not only tells time but also enables you to make and receive phone calls. Some of the watches even offer cameras, although none seems to have video yet for face-to-face calls. For the most part, smart watches look a lot like other watches, only with slightly bigger faces, and of course, electronic faces. Some of those faces, however, can be programmed to look just about any way you want - from traditional to futuristic.


This holiday season, there's a slew of smart watches to choose from. There are smart watches from high-tech and watch manufacturers alike, like Samsung and Timex. The Samsung watch uses the Android operating system, syncs with your Android phone, and can make and answer phone calls. The Timex watch can track your workout and has a USB port for data transfer. Some smart watches come from companies you may not have heard of yet, like Pebble and I'm Watch. The Pebble can sync with an Android or Apple phone, and the I'm Watch with an Android phone. Both can run a number of apps, especially Android watch apps. Apple may be getting in on the smart watch bandwagon as well, according to the blog post Apple iWatch release date, news and rumours.


Hmmm. I think I know what I'm getting for Dad this year!


SolarWinds Server & Application Monitor (SAM) sweeps all six APM Software Brand Leader Awards!


In September 2013, IT Brand Pulse conducted a survey of IT professionals. The respondents were asked which vendors they perceived as the leader in the following 6 categories for APM Software: Market, Price, Performance, Reliability, Service & Support and Innovation.


SolarWinds SAM was selected by IT professionals as the 2013 Market Leader for APM Software – winning in all 6 categories and outranking the likes of CA Application Performance Management, IBM® SmartCloud, Dell® Foglight, HP® Application Performance Management, and CompuWare® APM, among others.



Over the years, mainstream Application Performance Monitoring (APM) has evolved into a bigger and broader service function for IT management. APM became Application Performance “Management” when it started detecting and diagnosing application performance problems to maintain an expected level of service, translating IT metrics into business meaning.


At SolarWinds, we’ve been working relentlessly to keep enhancing SAM based on customer feature requests and by analyzing requirements in the IT community. From being just an application performance monitoring solution a couple of years ago, SAM has continuously evolved to offer server hardware health monitoring, server process & service monitoring, Windows event log monitoring, asset inventory management, and server remediation capabilities. With the recent SAM 6.0 launch, we introduced AppInsight for SQL Server®, which provides deep visibility into SQL Server performance for SysAdmins, DBAs, and SQL developers.



This survey report by IT Brand Pulse makes clear that SolarWinds is far ahead of the competition in all six areas of the evaluation criteria that make an APM product a market leader.


#1 Market Leader – 27.2% of survey respondents voted for SolarWinds, ahead of IBM, Dell/Quest & HP. This is a strong indicator that SolarWinds SAM is fast becoming the market leader in APM.


#2 Price Leader – It’s not a surprise that SolarWinds came out ahead of all other solutions. For such a rich APM feature set, SolarWinds SAM is available at a fraction of the cost of traditional enterprise solutions. The survey respondents echoed the same, and SolarWinds SAM got a whopping 41.4% of the vote, far ahead of Dell/Quest (14.8%), HP (10.2%), and IBM (9.4%).


#3 Performance Leader – While performance is a subjective element of comparison, and each product will perform differently for different use cases and scenarios, SolarWinds SAM came out as the most popular choice (23.4%) amongst all the APM contenders. This reinforces that the performance and functionality SAM offers are meeting IT requirements, and that IT pros are acknowledging it.


#4 Reliability Leader – Again, the survey responses indicate that SolarWinds (23.4%) has proven its reliability in the market much better than the competition – IBM (18%), Dell/Quest (17.2%), HP (12.5%), and CA (9.4%).


#5 Service & Support Leader – The customer service and support offered by SolarWinds have always been excellent, and the survey stats bear this out. In comparison with IBM (21.9%), Dell/Quest (17.2%) & HP (13.3%), SolarWinds scored high (25.6%), placing it on top of the leaderboard.


#6 Innovation Leader – Striving to satisfy customer needs with every release, SolarWinds continues to deliver innovations and add features to its products. SAM has a consistent stream of product innovations and new features that make the product better and more compelling to use year over year. 26.6% of the respondents acknowledged this, while Dell/Quest, HP and IBM each received around 14% of the votes.


Visit this page to learn more about the survey results.

The VMkernel is a critical component of any virtual environment. The VMkernel is the reason your VMs are allocated memory, CPU, and other resources from the physical hardware. What makes the VMkernel special is that it runs directly on the ESX/ESXi host. However, this doesn’t mean that performance and latency issues are confined to your VMs. They can also be found in the host itself, which can cause sudden I/O spikes affecting VM performance.


If you think your VM performance is poor due to I/O spikes, you need to take the additional step and drill into that host to look at the kernel I/O latency metric. This metric will show whether or not the host has I/O spikes. Experts recommend that this value be around 0 to 1 millisecond. If you see values over 1 millisecond, your VMs may not perform the way you want them to.


In addition to monitoring kernel I/O latency, you should also monitor the queue latency counter. This measures the average amount of time data spends in the queue. Ideally there should be no data in the queue, and the value of this counter should be 0. If the value is greater than 0, the workload on the VM is very high and data cannot be processed efficiently, which leads to I/O spikes and performance issues.
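If you wanted to codify these two checks, a minimal sketch might look like this. The function and metric names are illustrative assumptions, not from any specific tool; the thresholds follow the guidance above (kernel I/O latency around 0-1 ms, queue latency 0):

```python
# Sketch of the two checks described above: kernel I/O latency should stay
# around 0-1 ms, and average queue latency should be 0.

def check_host_latency(kernel_latency_ms, queue_latency_ms):
    """Return a list of latency issues for a single ESX/ESXi host."""
    issues = []
    if kernel_latency_ms > 1.0:
        issues.append(f"kernel I/O latency high: {kernel_latency_ms} ms")
    if queue_latency_ms > 0:
        issues.append(f"data waiting in queue: {queue_latency_ms} ms avg")
    return issues

print(check_host_latency(kernel_latency_ms=2.3, queue_latency_ms=0))
```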


In a virtual environment, problems start small and work their way up, affecting other hosts in a group, VMs on a host, the OS, and resources. Other measures can be taken to keep host I/O latency to a minimum. For example:

  • One way to reduce I/O spikes is to increase virtual memory. For this to work, you should consider increasing the host memory. This way system memory is used to store data, thereby avoiding disk access.
  • Determine how your storage arrays are performing. When too many VMs try to access the storage system, a bottleneck occurs.
  • Look at ways to balance disk loads across your disk drives. This improves efficiency and keeps data from getting stuck in the queue.


To proactively troubleshoot virtual host I/O latency issues, you need virtualization management software that will monitor all your virtual hosts, giving you key insights into critical performance metrics. You can never go wrong with virtualization management software because it allows you to do the following in real time:

  • Identify and troubleshoot the root cause of kernel I/O latency on the VM host. See how each of your ESX/ESXi hosts is performing – drill down to see any abnormal behavior in each host.
  • Ensure the VM has enough resources to perform smoothly.
  • Get a wealth of critical information about your storage performance through an intuitive and customizable dashboard.


Virtualization bottlenecks can be tedious to troubleshoot if you don’t know where to start looking. Watch this video where vExpert Scott Lowe shows you how to monitor kernel I/O latency using virtualization management software.


Faster is Better.

Posted by Bronx Nov 20, 2013

For someone who doesn't like endorsing products, here I go again!


This past weekend I was up at some ungodly hour and noticed my laptop was in what appeared to be BIOS mode. After squinting for a few seconds, I read that Windows decided it was time to do a sector check on my hard drive. Well, we all know what this means. Yup, my hard drive was failing. I went back to bed and didn't panic because like every other responsible person in the world (chuckle) I HAVE MULTIPLE BACKUPS OF EVERYTHING!! <--- Good tip.


After the morning coffee, I drove down to ye old brick and mortar store to get a new hard drive. (I did not want to wait until Black Friday because...well, read that story and you'll know why.) Anyway, I poked around the store for a moment and noticed the prices of Solid State Drives (SSD) were fairly cheap. Eureka! (For those of you non-geek types, an SSD is basically a flash drive disguised as a hard drive.) Within the hour I had my laptop apart and the new SSD in place.


Fast is not the word.

Once I had the SSD installed, my laptop was blazing. A typical reboot with a traditional hard drive took between four and five minutes. With the SSD, a reboot took no longer than 25 seconds! All my applications were much more responsive too. Just double-click and BOOM, they're open! No more moving parts. Simple electricity does all the work at a speed ten times faster. I don't know how I ever lived without it. By the way, the drive does come with cloning software; however, you will need to buy a SATA to USB cable separately for about $40.


Other Benefits

Ha, as though you need more than a 25 second boot time and zippier response times for your applications. Well, there are a few other minor benefits:

  • Already mentioned: super-fast response times to everything
  • An SSD is physically lighter than a traditional hard drive
  • A lack of moving parts mitigates the risk of read/write errors
  • A lack of moving parts increases laptop battery life by 30 minutes, on average


I wonder how SAM would respond to living on an SSD? After all, SSDs were originally designed for servers.

In past blog posts we've touched on IT management system scalability and failover. Related to these topics, we should also explore how best to deploy scalability and failover tools that fit your network requirements and scale to support growing numbers of devices, servers & apps, users, network sites, and customer locations. Having effective strategies primed to handle your network infrastructure as it scales up, and to support critical processes, will positively impact your business as it grows.


There are three primary variables that can affect scalability: infrastructure size, polling frequency, and the number of simultaneous users accessing the monitoring system. With this in mind, let’s look at some widely used IT management system deployment scenarios.


Growing a Single IT Management System Deployment to Support Larger Networks

This type of installation gives you a centralized deployment of your IT management software and lets you deploy additional pollers to distribute the polling load away from the core polling server.
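As a back-of-the-envelope illustration, the decision of when to add a poller reduces to dividing your monitored element count by a per-poller capacity figure. The sketch below assumes a hypothetical capacity of 10,000 elements per poller; check your product's sizing guide for real limits.

```python
import math

def pollers_needed(total_elements, elements_per_poller=10000):
    """Estimate how many polling engines a deployment needs.

    elements_per_poller is an assumed capacity figure, not a vendor
    specification; consult your product's sizing guide for real limits.
    """
    return max(1, math.ceil(total_elements / elements_per_poller))

print(pollers_needed(25000))  # a 25,000-element network would need 3 pollers
```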


Scaling up the IT Management System to Support Web User Growth

If you have an increasing number of IT users accessing the IT management system’s Web console simultaneously, you may want to deploy additional Web servers to help load balance the number of concurrent users.


Monitoring Geographically Distributed Environments with a Single Instance

This option is well suited for environments where most of the monitored nodes or applications are located in a single primary region, and other remote offices are much smaller.


Monitoring Geographically Distributed Environments with Multiple Instances

This type of deployment option is well suited to organizations with multiple regions or sites where the number of nodes monitored in each region require both localized data collection and storage. Data in different IT management system instances can then be combined into a single centralized view for network-wide administration and management.


MSP-friendly Architecture

This type of MSP deployment allows you to provide your customers with full IT management capabilities based on their individual IT management software instances deployed for customer level management & reporting. You can then roll all these different instances up into a single MSP-level NOC view. This is a cost-effective method of deployment for MSPs managing large customer networks.


IT Management System Deployment over Secure DMZ Networks

When users connect to the IT management system from DMZ areas, they can use secure channels such as a VPN to gain secure access, or choose to connect via secure Web servers.


These are just a few insights into various deployment options. For detailed information on how each of these deployment options will benefit you, and to understand how SolarWinds provides various options and means to scale up your IT management system, please check out this whitepaper – IT Management System Scalability for the Enterprise.


Penetration testing, or pen testing, is a cool job. I’m telling you this even before we take a look at what it is and how it’s done. It’s a kind of white hat hacking practice. Another wacky piece of jargon? Trust me, that’s a cool job too. Now, really, how many of us can get paid legitimately for hacking? White hat hacking simply means hacking a corporate network and exposing its vulnerabilities and security flaws to the company’s IT authorities. Penetration testing is exactly this: you hack a third party’s computer or network after being officially invited to hack their IT system, and you expose their security vulnerabilities so they can enhance their network security and protect their IT infrastructure and corporate data from the real hackers – the ones who hack malignantly and illegally, the black hatters of course!


Penetration testing is actually a popular security practice: companies simulate real hacking scenarios by hiring third-party hackers and IT security experts. All you have to do is carry out an unexpected hack attack and test the IT infrastructure for known and unknown hardware or software flaws, and for operational weaknesses in processes or technical countermeasures.


Think of yourself as a legitimate professional hacking consultant and practitioner. Instead of inflicting harm via hacking, you actually do good to your clients and help them identify vulnerabilities and protect their IT systems.

Mad Hatter 4.png  


A pen test typically serves several goals:

  • To determine the feasibility of a particular set of attack vectors
  • To identify higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence
  • To identify vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software
  • To assess the magnitude of potential business and operational impacts of successful attacks
  • To test the ability of network defenders to successfully detect and respond to the attacks
  • To provide evidence to support increased investments in security personnel and technology



The new PCI DSS security standard 3.0 calls for penetration testing as one of its mandatory requirements for organizations that handle cardholder data.


Requirement #11: New requirement is to implement a methodology for penetration testing.


Take a look at the various requirements of the pen testing process for PCI:



  • External Pen Testing: This type of pen test targets a company's externally visible servers or devices including domain name servers (DNS), e-mail servers, Web servers or firewalls. The objective is to find out if an outside attacker can get in and how far they can get in once they've gained access.
  • Internal Pen Testing: This test mimics an inside attack behind the firewall by an authorized user with standard access privileges. This kind of test is useful for estimating how much damage a disgruntled employee could cause.
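To make the distinction concrete, here is a minimal sketch of the kind of TCP reachability check an external pen test typically starts with. The function name is my own, and this is deliberately the simplest possible probe; only run it against systems you are explicitly authorized to test.

```python
import socket

def check_port(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds.

    Only probe hosts you are authorized to test.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```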


To evaluate how the penetration testing has impacted your organization and to respond to threat vectors in real time, try using security information & event management (SIEM) systems that scan event logs from across your IT infrastructure and provide you a wealth of insight into the security events and non-compliant occurrences on your network and network computers.


If you are a street-smart hacker looking to settle into a legitimate career that rewards your super-genius hacking skills, penetration testing is your cup of tea. Proud to be a white hatter!

We’ve spoken in the past about how system administrators have no fixed working hours. You are usually at the mercy of your employees, continuously troubleshooting and fixing their issues. It does not matter whether you’re at lunch, out of the office, or even on vacation. You get those “desperate situation” calls to fix users’ issues immediately. Even if you have all the tools in the world to monitor and manage your IT infrastructure, sometimes you don’t have access to those tools when you need them the most. This is where a mobile solution can come to your rescue so you can resolve issues in your datacenter using your Android™ or Apple® devices.



A mobile IT management tool helps you troubleshoot and fix users’ issues, even when you’re away from your desk. If you have proactive alerts in place, you immediately get notified of pressing issues in your environment. You can then log on to your monitoring tool to assess the issue and determine if you can resolve it right away or refer it to another IT pro. A mobile IT admin tool eliminates the wait time necessary for you to get to your desk to look at issues. If you have a smartphone or a tablet, you can do the following anywhere, on the go:

  • Look at real-time issues from your smartphone or other mobile device and drill-down to the issue—whether it’s a server, network, or an application issue.
  • Look at key indicators in your monitoring tool that show you critical information such as active alerts, the top 10 databases with problems, applications running in different groups, and more.
  • Look at under-performing application components and drill-down further to see the type of alert, its current stage, and if it is affecting other applications.
  • Identify issues in your server and see if they are affecting other hardware components or application performance.


Your mobile IT admin tool should enable you to fix issues, not just show that an issue exists in your environment. After scanning a node, you can use a range of APIs that list the troubleshooting features for specific applications to help you resolve the issue.


Mobile IT Management Made Easy

Manage your datacenter from anywhere using SolarWinds® Mobile Admin™. Monitor and manage various Microsoft applications like Active Directory®, System Center Operations Manager, ActiveSync, and Exchange. You can also manage virtual environments like VMware® and Hyper-V®; open source applications like Nagios®, and many more—all using your mobile device.


TechRepublic recently published an article about how easy it is for IT admins to use Mobile Admin to remotely monitor and troubleshoot issues in your datacenter. Read the article to find out what Mobile Admin is, how you can use it to monitor and manage user issues, see what range of applications it supports, and obtain licensing information. Try Mobile Admin free for two weeks!

There may be other items in the news competing for your time, but we happen to think this one deserves special attention:


Firewall Security Manager version 6.6 (released November 12) includes a new feature: an Orion Module. The Orion Firewall Security Manager Module enables you to view all your firewall details from the Orion dashboard, alongside any other SolarWinds Orion products. This new FSM/Orion module provides visibility into your firewall inventory and security status, along with the ability to point and click for drill-down details. From the Orion FSM dashboard, you can:


  • Get a summary of all the devices that are in FSM, including the PCI summary and the Security Audit summary

  • View Rule/Object Cleanup reports
  • Review configs and recent changes
  • View firewall details
    • NAT Rules
    • Security Rules
    • Network/Address Objects
    • Service/Application Objects


SolarWinds Firewall Security Manager v6.6 is available for download in your customer portal for those customers under current SolarWinds Firewall Security Manager maintenance.

Patch update failure. We hear it all the time! Reports cite that 75% of attacks use publicly known vulnerabilities in commercial software. These attacks can be prevented if the software is patched regularly. If you are running an outdated version of software on your network, you are obviously vulnerable to security compromises.

Consider the recent breach at Adobe®. Part of the break-in involved known vulnerabilities in their Acrobat® Reader® and ColdFusion® Web application platform, which resulted in the theft of source code.


Missing the security approach

One of the main causes of security breaches is poor patch management. In most cases, we treat patching as an operational routine without considering the security aspects. Taking a security approach to patching gives you the perspective of which patches to apply and when.


Do you test the patches before deploying to your network?

For most vulnerabilities, fixes become available pretty quickly, but they need to undergo a risk assessment and compatibility check before they are deployed. It is advisable to employ patch management software that researches, scripts, packages, and tests patches for common 3rd-party applications, then delivers ready-to-deploy patches. You also need to create advanced before-and-after package deployment scenarios to ensure that complicated patches, such as Oracle® Java®, deploy successfully without any complex scripting.


Do you prioritize your patches?

You don’t necessarily need to update every application on every device in your network in the first batch. However, make sure that you patch your critical security vulnerabilities ahead of other patches. If you do not prioritize, test, and run risk assessments on your patches, you increase the chances that your patch management will fail.
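The prioritize-then-deploy rule above can be expressed as a simple sort: hold back untested patches, and push the highest-severity fixes out first. The patch records and CVSS scores below are made up purely for illustration.

```python
# Hypothetical patch queue; cvss scores follow the 0-10 CVSS severity scale.
patches = [
    {"name": "media-player", "cvss": 4.3, "tested": False},
    {"name": "pdf-reader",   "cvss": 7.5, "tested": True},
    {"name": "java-runtime", "cvss": 9.8, "tested": True},
    {"name": "web-platform", "cvss": 9.1, "tested": True},
]

def deployment_order(patches):
    """Deploy only tested patches, critical vulnerabilities first."""
    ready = [p for p in patches if p["tested"]]
    return sorted(ready, key=lambda p: p["cvss"], reverse=True)

print([p["name"] for p in deployment_order(patches)])
# the two critical patches go out ahead of the lower-severity fix
```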


Poor implementation

There are situations where organizations do not clearly understand the limitations of their existing solutions and need to extend their capabilities with the help of add-on solutions. For example, if you are using Microsoft System Center Configuration Manager (SCCM), you need to understand that it is not a complete solution for your patching needs, as it leaves a gap when it comes to non-Microsoft applications. This means that you are still vulnerable when it comes to 3rd-party applications, and the consequences of such vulnerabilities, if exploited, can have a devastating impact on your IT environment. Efficient patch management software can extend the power of SCCM and also manage 3rd-party patches.


Lastly, you need to ensure that your patch manager is capable of alerting you when patch updates are unsuccessful, i.e., sending you notifications about failed patch updates.


Stay secure folks!

Capacity planning is not an easy thing to do in a virtual environment, as it requires significant effort to understand the demands of the constantly evolving virtual infrastructure. Enterprises that start optimizing their virtual machine (VM) provisioning strategies can quickly see a strong ROI. If host resources are left unbalanced, it can lead to underutilized resources and, consequently, severe performance degradation. That said, it’s not easy to understand the balance between VMs and their underlying resources in real time. A real-time view gives only the status of VM resource utilization at a certain point in time; it does not provide visibility into how VMs are growing, which VMs will face resource contention, and so on. A well-planned study to gather historical VM performance data, trend how the environment has grown over time, and estimate future demand will provide a baseline for resource management.

   VM Capacity Planning.png        

This estimation is not as easy as it sounds. With workloads on existing VMs changing over time and resource consumption varying drastically, it becomes extremely difficult to chart out future behavior trends. Leveraging historical data and correlating it to plan for future capacity requirements becomes relatively difficult with constant workload variations in the virtual environment.
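A bare-bones version of that trending exercise is a least-squares line fitted to historical usage samples and extrapolated to the point where demand crosses capacity. The weekly-sample framing and the numbers below are illustrative assumptions, not product behavior.

```python
def linear_trend(samples):
    """Least-squares slope and intercept for evenly spaced samples."""
    n = len(samples)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_v = sum(samples) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in zip(ts, samples))
             / sum((t - mean_t) ** 2 for t in ts))
    return slope, mean_v - slope * mean_t

def weeks_until_full(samples, capacity):
    """Weeks until the fitted trend crosses capacity; None if flat or shrinking."""
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None
    t_cross = (capacity - intercept) / slope
    return max(0, round(t_cross - (len(samples) - 1)))

# A datastore growing from 500 to 560 GB over four weeks, 700 GB capacity:
print(weeks_until_full([500, 520, 540, 560], 700))  # about 7 weeks of headroom
```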


With so many new project needs building up, and resource consumption getting tougher to manage, we often end up making new investments to add more hosts and storage. This situation can be better managed, and the investment considerably reduced, if VM operations, availability, and requirements are studied constantly over time. There is also the difficulty of locating over-allocated VMs in large environments. In addition, we have to deal with VM sprawl resulting in dead, rogue, orphaned, or zombie VMs, and those unused and forgotten VMs that are still powered on.


All this analysis and research is practically impossible to carry out manually. This has created the need for an automated solution that can analyze historical data and report how the VM environment has evolved over time, how it stands today, and what is in store for the future.

SolarWinds Virtualization Manager is a perfect software solution that addresses this pressing demand for capacity planning with its three-pronged attack.


  • Operations Management: Allows you to monitor resource usage in real-time helping to detect and predict where bottlenecks are happening in your environment so you can quickly resolve them.
  • Optimization: Allows you to understand capacity usage from an application and workload perspective, and run reports on historical trends. This can help to project when you will run out of resources.
  • Planning: Allows you to identify stale, zombie, orphaned, or rogue VMs, and over- or under-allocated VMs so you can right-size your environment, allowing for accurate planning.


A well-planned and managed virtualization infrastructure will simplify resource usage and VM allocation. Estimating the VM requirement for the future and planning investment accordingly will then drive increased ROI and better productivity.


Check out this short video by vExpert & VMware® evangelist Eric Siebert to learn how to identify virtual machine capacity bottlenecks.


Leverage the integrated power of SolarWinds IP Address Manager (IPAM) and User Device Tracker (UDT). With this dynamic duo, you can track down a device and user instantly. Simply use the integrated view to see IP address information along with the corresponding switch port details and user information—all within the same window. You can also get both current and past connection details. You can even shut down a compromised port directly through the SolarWinds Web UI with the click of a button.

UDT Integration Tips in IPAM

SolarWinds User Device Tracker (UDT) and IP Address Manager (IPAM) utilize the same Orion platform, seamlessly extending their capabilities by adding the following into the same view:

  • End point details
  • Network connections history
  • Current network connections
  • Current users logged into the device
  • Port and User information on the same page as IP address Host or DNS assignment history

Automatic Integration

You do not need to take any integration steps. IPAM will automatically detect if UDT is installed and add the UDT Users and UDT Switch Port columns to your IP Address View, providing end-to-end IP address to user/device mapping.

How This Helps You Troubleshoot

The built-in integration provides a view of end-to-end mapping of an IP Address to any connected user/device, along with device port and connection details in the same window.

  • Find out which user or device is accessing a particular IP Address
  • Drill down to get network connection history for an IP Address
  • See port and user information related to an IP Address host or DNS assignment
  • View port usage and capacity on every switch
  • Detect endpoint devices having IP conflicts
  • Directly shut down a port through the web interface

IP Address Conflict Resolution

IPAM can detect IP address conflicts (in both static IP and DHCP environments) and help you troubleshoot the problem by simply drilling down to the actual switch port and shutting it down. Once you see an IP address conflict event/alert, simply click the IP or MAC address info in the alert message and it will take you to the IP Address Details view, where you can see the MAC address assignment history. If you determine you need to resolve the conflict on the spot, you can administratively shut down the port using UDT.
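Under the hood, conflict detection amounts to noticing that more than one MAC address claims the same IP. A toy version of that check, run over made-up assignment data:

```python
from collections import defaultdict

def find_ip_conflicts(assignments):
    """Return IPs claimed by more than one MAC, from (ip, mac) pairs."""
    macs_by_ip = defaultdict(set)
    for ip, mac in assignments:
        macs_by_ip[ip].add(mac)
    return {ip: sorted(m) for ip, m in macs_by_ip.items() if len(m) > 1}

# Hypothetical (ip, mac) observations from the network:
seen = [("10.0.0.5", "00:1a:2b:3c:4d:5e"),
        ("10.0.0.6", "00:aa:bb:cc:dd:ee"),
        ("10.0.0.5", "00:9f:8e:7d:6c:5b")]
print(find_ip_conflicts(seen))  # 10.0.0.5 is claimed by two MACs
```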

How to Shut Down a Port:

  1. An IP address conflict is triggered and an Event is displayed giving you the MAC address in conflict.
  2. IPAM displays the IP address assignment history along with the MAC address of the IP address in conflict.
  3. Click the node port in the Current Network Connections resource on Endpoint Details.
  4. Click the Shutdown button in the Port Details resource.

»For more information on IPAM

»For more information on UDT

When I hear "wetware," I think of futuristic, cybernetic implants that connect our brains to the Internet, version 10.0, but IBM is using the term to refer to a new form of liquid cooling and energy transportation.


The new technology emulates the brain's energy transportation (the quintessential wetware model). Capillaries in the brain cool and power our neurons. IBM researchers are attempting to copy that same architecture to reduce the estimated 60% of computer volume dedicated to electricity and heat exchange in modern computers. In the process, computers could become smaller, more powerful, and more energy efficient.



The cost of technology


An increasing concern in the tech sector - especially for those businesses running server farms, supercomputers, and data centers - is the cost of running the computers. The cost of purchasing the computers may begin to factor less into the purchasing decision as energy, cooling, and location costs increase. With this new technology, IBM will be able to build smaller, more energy-efficient computers because chip components can be stacked in a kind of electronic blood that is both battery and coolant. Because the chip components can be stacked, there is less distance for signals to travel, further reducing the heat that the electronic blood must transport away. Without the need for airflow between components, noisy fans can be removed and the computer case can be much smaller. Reducing the size of the cases and the amount of heat produced will significantly reduce the cost of running the computers.



The first steps


The first iteration from IBM uses the traditional approach of water to cool the computer chips. However, IBM's experimental Aquasar installation at the Swiss Federal Institute of Technology Zurich (ETH) will use the heat from the cooling system to help heat the ETH building.


The second step of this technology is to get energy to the components using a liquid medium. IBM researchers are looking at vanadium as a potential key component in this step.




Hopefully IBM succeeds in making the beginnings of a cybernetic brain. While I do enjoy hanging out in the server room to warm up, I'm sure the money that's going into cooling that room could be better spent. Regardless of the future of computing coolant systems, we will still have to monitor the internal temperatures of our servers using tools like SolarWinds Server and Application Monitor.

You have no control over when you may experience downtime. If you’re thinking, “what’s an hour of downtime going to cost?”, you may want to rethink that. What you thought was just an hour of downtime can cost your organization millions (if you’re a large organization). Organizations whose businesses are driven by online traffic and sales experience massive losses, especially during peak business hours. For such organizations, application downtime is almost never acceptable.


According to this report from Aberdeen Group, only 7% of organizations claimed to have less than 5 minutes of downtime in an entire year; these organizations have had almost 100% uptime. Another staggering stat from the report: downtime costs large organizations a little over $1.1 million per hour. Organizations of all sizes are affected by downtime, and Aberdeen’s research shows a 19% year-over-year increase in the cost of downtime per hour.
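To put those figures in perspective, the arithmetic is simple enough to sketch: five minutes of downtime out of the 525,600 minutes in a year works out to roughly 99.999% availability, and compounding the $1.1M/hour figure at 19% a year shows how quickly the exposure grows. (The per-hour cost and growth rate are the Aberdeen numbers cited above; the two-year projection is my own arithmetic.)

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability_pct(downtime_minutes_per_year):
    """Uptime percentage implied by a yearly downtime budget."""
    return 100 * (1 - downtime_minutes_per_year / MINUTES_PER_YEAR)

def projected_hourly_cost(cost_now, yoy_growth, years):
    """Compound a per-hour downtime cost forward at a yearly growth rate."""
    return cost_now * (1 + yoy_growth) ** years

print(round(availability_pct(5), 3))                     # 99.999 - "five nines"
print(round(projected_hourly_cost(1_100_000, 0.19, 2)))  # roughly $1.56M/hour in two years
```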


As sysadmins, you have a lot of responsibility in ensuring that critical applications do not fail and end users don’t experience any downtime. To avoid downtime, you need consistent application performance and continuous application availability. The only way to know that application performance and availability are not affected is to monitor key performance counters: set baseline values and get proactively alerted if a counter or metric goes over its threshold. All of this is possible when you have the right monitoring solution in place.
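Baseline-plus-threshold alerting is, at its core, just a comparison per sample. A stripped-down illustration (the 20% tolerance is an arbitrary example, not a recommendation):

```python
def threshold_breaches(samples, baseline, tolerance=0.20):
    """Return (index, value) pairs where a sample exceeds the baseline
    by more than the given tolerance."""
    limit = baseline * (1 + tolerance)
    return [(i, v) for i, v in enumerate(samples) if v > limit]

# Response-time samples (ms) against a 50 ms baseline:
print(threshold_breaches([50, 55, 70, 61], baseline=50))  # samples 2 and 3 breach
```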


Monitoring tools allow you to do the following proactively:

  • Check the status of the application
  • See application performance – group applications based on type, location, user base, etc.
  • Get notified whenever an issue comes up
  • Seamlessly integrate with other monitoring solutions in your IT infrastructure. That way, you can address other issues in your IT environment
  • Resolve server hardware issues by stopping services and killing processes


Simplify your IT infrastructure by using a monitoring solution. You don’t have to find reasons to convince your boss why you need an ideal tool that will monitor critical business applications. Instead, use the ROI calculator from SolarWinds and see how Server & Application Monitor (SAM) can proactively monitor applications.


How the ROI Calculator Adds Value to your Decision Making


Make informed decisions, calculate short term and long term ROI, costs and maintenance options from one vendor to another.

  • Look at how your current solution functions and determine the ROI that you get after you deploy SAM.
  • Determine the challenges you go through on your existing system and benchmark how SAM adds direct value to those challenges.
  • Look at the current spend on your tool. You can compare how much downtime you had in the past year and see if your current application monitoring tool is giving you the ROI you deserve.


You get more for less with SAM – whether it’s in-depth monitoring of your SQL Server, or managing all your IT assets, it’s better to rely on a tool that proactively shows you issues as they occur, giving you a chance to quickly diagnose and fix them before more users raise helpdesk tickets. You don’t have to take our word for it: have a look at a recent survey of IT pros conducted by IT Brand Pulse. SAM came out as a clear leader in all areas – market leadership, performance, reliability, service and support, and innovation.


If you fall outside that 7% – that is, if you have hours of downtime in a year – it’s time to take another look at your current ROI and make some hard decisions in order to avoid downtime.


ROI Calculator.png

Alright, here it is – Firewall Security Manager (FSM) 6.6 is now available!


For those of you who were waiting for a reliable solution that would ensure the integrity of your firewall rules and manage the complex configuration in a multi-vendor firewall environment—it’s yours for the taking! If you are an existing customer for FSM under active maintenance, you can enjoy the new features of FSM 6.6 without any additional cost.


What’s new with FSM 6.6?

With the previous version (FSM 6.5), we made some notable improvements:

  • Added Juniper SRX support
  • Increased support for managing, tracking, searching, and documenting business justification rules in IOS
  • Extended rule/object change analytics to include IOS, and enhanced change modeling capabilities for predicting the impact of rule/object changes on security and traffic flow


With FSM 6.6, firewall security management gets even easier! This new version offers an Orion® integration module that lets you view all of your firewall details from the easy-to-use, easy-to-view Orion Web-based console!

For example, if you are an existing user of other SolarWinds® products like Network Performance Monitor (NPM) or Server & Application Monitor (SAM), you can leverage the integrated FSM/Orion module to view firewall inventory and details in the same Web-based console.




Dashboard for Quick Insight on Security

The all new customizable and intuitive dashboard gives you fast and easy insight into your security posture and risk status, so you can quickly spot vulnerabilities and policy violations.

The firewall dashboard includes:

  • PCI and Security rating overviews
  • Rule object cleanup reports
  • Most recent configuration changes and many more


   FSM dashboard.png


The FSM Web-based dashboard also allows you to drill down and learn the root cause of issues. For example, the above screen shows that the firewalls were checked for PCI Compliance, they passed 28 rule checks and failed on 37. It’s now much easier to drill down on the failed checks and download the reports directly.



Similarly, you can drill down on the security audit summary to download reports on the security issues with respect to each of the firewalls.


Easily Search for Rules, Objects, and Configurations

You can also leverage the new Web-based dashboard to easily search for security rules, NAT rules, and network objects by using the various filter options like source, policy name, actions, and so on.




Similarly, if you have to edit a part of your raw configuration or if you would like to have a quick check to find out the exact configuration line number, you do not have to go through the entire code. Instead, you can just search using keywords in the configuration or routing script.



The new FSM 6.6 dashboard also offers improved readability and maintainability of firewall and Layer 3 network security device configurations.



The SolarWinds Difference

As you know, SolarWinds does something that most in our industry don’t – WE LISTEN! We are open to your ideas and encourage you to share with us on the thwack community forum. Your input is invaluable as we continue to build upon FSM’s powerful firewall management capabilities.


Share your thoughts!


Here’s how you can get the new “Firewall Security Manager 6.6”

Last January, I talked in another article about the scenario of using a new WSUS v6 server (which runs on Windows Server 2012) in combination with Patch Manager.


But I overlooked one scenario in that article, and since then another one has arisen.


The fundamental challenge with mixed scenarios involving different operating systems has to do with the WSUS API version. In order to support local publishing activities (basically anything involving putting a third-party update into the WSUS database), both the WSUS Console version of the Patch Manager server and the version of WSUS installed on the WSUS server must be identical. If they are not identical, the Patch Manager Publishing Wizard will return the error message

     Message: Failed to publish packageName. Publishing operation failed because the console and remote server versions do not match.


You can get more information about this particular message, and other known causes, in SolarWinds KB4328.


Today, there are four supported production versions of WSUS that can contribute to this situation.

  • WSUS v3.2 - runs on Windows Server 2003, 2008, and 2008R2.
  • WSUS v6.2 - runs on Windows Server 2012 (RTM)
  • WSUS v6.3 - runs on Windows Server 2012 R2
  • WSUS v10  - runs on Windows Server 2016


So the original article dealt with the scenario where WSUS v6.2 was being deployed on Windows Server 2012, but presumed that Patch Manager already existed, or was being deployed, on a non-WS2012 system. It talked about how to get the WSUS v3 console of the Patch Manager server to talk to the WSUS v6.2 server. Essentially we did that by forcing the connection through a WSUS v6.2 console installed underneath a second Automation Role server.


Here's a chart showing the various combinations of Patch Manager and WSUS, and which ones will connect natively and those that will require an additional Automation Role server (typically installed on the WSUS server, but can be a third system).

For the sake of the symmetry of the chart (and future capabilities), I've included the scenario involving Patch Manager on Windows Server 2012 R2 -- but please note that Patch Manager v1.85 is not officially supported on Windows Server 2012 R2 at this time. Implementing a Patch Manager Automation Role server on Windows Server 2012 R2 or Windows 8.1 will require Patch Manager v2.0 (coming soon!).


WSUS v3 (on WS2003/2008/2008R2)

  • Patch Manager on WS2003/2008/2008R2 (WSUS v3 console): direct connection from the PAS works
  • Patch Manager on WS2012 (WSUS v6.2 console), WS2012 R2 (WSUS v6.3 console), or WS2016 (WSUS v10.0 console): requires an Automation Role server on the WSUS v3 server, or on another system running WS2003/2008/2008R2 or Windows Vista/Windows 7

WSUS v6.2 (on WS2012)

  • Patch Manager on WS2012 (WSUS v6.2 console): direct connection from the PAS works
  • Patch Manager on any other OS: requires an Automation Role server on the WSUS v6.2 server, or on another system running WS2012 or Windows 8

WSUS v6.3 (on WS2012R2)

  • Patch Manager on WS2012 R2 (WSUS v6.3 console): direct connection from the PAS works
  • Patch Manager on any other OS: requires an Automation Role server on the WSUS v6.3 server, or on another system running WS2012R2 or Windows 8.1

WSUS v10.0 (on WS2016)

  • Patch Manager on WS2016 (WSUS v10.0 console): direct connection from the PAS works
  • Patch Manager on any other OS: requires an Automation Role server on the WSUS v10.0 server, or on another system running WS2016


Note particularly that it is the version of the WSUS SERVER that determines where the additional Automation Role server must be installed, not the operating system that the Patch Manager server is installed on.
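If it helps to see the chart's pattern compressed into logic, here is a small sketch (the version strings are just labels for illustration, not anything Patch Manager itself uses):

```python
# A hedged distillation of the chart's pattern: a direct connection works only
# when the Patch Manager console's WSUS version matches the WSUS server's
# version; any mismatch needs an Automation Role server matching the WSUS side.
def needs_automation_server(console_wsus: str, server_wsus: str) -> bool:
    return console_wsus != server_wsus

print(needs_automation_server("v3", "v3"))    # matching versions -> False
print(needs_automation_server("v6.2", "v3"))  # mismatch -> True
```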


UPDATE [1/6/2014]: In this version of this article, I failed to also mention the requirement to create an Automation Server Routing Rule (ASRR). It is the ASRR that tells the Patch Manager server to route all requests for the WSUS server through the appropriate Automation Role server. To create an ASRR:

  1. From the Managed Enterprise node, select the Automation Server Routing Rules tab.
  2. In the Actions Pane on the right, under "Routing Rules" select "Add WSUS Server Rule".
  3. Select the WSUS Server from the dropdown menu and click on "Save".
  4. Check the correct Automation Server from the list.
  5. IMPORTANT: Check the "Absolute Rule" checkbox at the bottom of the dialog.
  6. Click on OK.


For more information about the architecture and implementation of Patch Manager Automation Role servers, please see the Administrator Guide and these blog articles.

Patch Manager Architecture - Deploying Automation Role Servers

How-To: Install and Configure a Patch Manager Automation Role Server

Normally it is good to have the Storage Manager Agent or Proxy Agent running on a server as the only application. Unfortunately, this cannot always be the case. Some users run multiple applications on the same server as the Agent, and at times the server will have multiple NIC cards to serve different applications that may be routing to different subnets. In this type of scenario, the Agent will use the first available NIC card on the server, which can cause traps to be routed to the wrong subnet and the Storage Manager Server to lose visibility of the Agent.

If the Agent is installed on a server with two or more network cards, the core.xml file on the Agent must be manually updated to define the IP address of the network card you want associated with sending messages to the Storage Manager Server. For example, if eth0 and eth1 are installed on your server, and eth1 is used to send traps, you must associate the eth1 IP address with the Agent by following these instructions. Doing so allows the Agent to route traps to the Storage Manager Server.

For Windows servers complete the following steps:

  1. Navigate to \Storage Manager Agent\. Example: C:\Program Files\SolarWinds\Storage Manager Agent\
  2. Open core.xml in a text editor.
  3. Enter the IP address of the network card you want associated with sending traps.
  4. Save the file.
  5. Restart the Storage Manager Agent Service.


For Linux servers, complete the following steps:

  1. Open /etc/hosts in a text editor.
  2. Keep the default options, and add the IP address, domain, and hostnames for the network card you want associated with sending traps.
  3. Save the file.
  4. Make sure the core.xml file includes the IP address in the /etc/hosts file.
  5. Restart the Storage Manager Agent Services.
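As a hedged sketch of the Linux result (the IP address, domain, and hostnames below are made-up examples), after step 2 the relevant part of /etc/hosts might look like:

```
127.0.0.1       localhost
# NIC that should carry Agent traffic to the Storage Manager Server
10.1.2.34       agent01.example.com    agent01
```

The IP address added here should be the same one referenced by core.xml, per step 4.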

In a virtual environment, if a VM has performance issues, it can affect the performance of other components within the virtual infrastructure, and the converse is true as well. For example, you can collectively map all your VMs and, depending on the location of your bottleneck, find that the issue is related to storage performance. When you want to assess storage performance, you typically look at storage IOPS, which measures the number of operations per second a storage system can process; throughput, a companion metric, measures the amount of data transferred per second.
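The relationship between the two measures is simple to sketch: throughput is just IOPS multiplied by the average I/O size. The figures below are illustrative, not from any particular system.

```python
def throughput_mb_per_s(iops: float, io_size_kb: float) -> float:
    """Throughput (MB/s) implied by an IOPS figure at a given average I/O size."""
    return iops * io_size_kb / 1024

# 5,000 IOPS at an average 8 KB I/O size is roughly 39 MB/s
print(round(throughput_mb_per_s(5000, 8), 1))  # -> 39.1
```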


Whether you’re using VMware or Hyper-V storage, you should look at storage performance across the hosts and clusters, and periodically monitor for issues. One of the ways you can assess VM performance is by looking at critical performance metrics. To do this, you need to drill down further into a VM and check which VMs are using the most storage resources. For example, if you have three VMs with 30 GB allocated, and two of the VMs are hogging resources, then the third will have performance issues.


As virtual and storage admins, you should determine the number of IOPS your applications use on a daily basis. The best way to do this is to monitor your current application performance to see how your servers are performing and to ensure they are healthy. The next thing to look at is usage. Knowing how your applications perform at any given time is key; it tells you the average IOPS value an application uses. That value varies between applications: a widely used Exchange Server, for example, will have a very different I/O profile than a database server or a multimedia application.


When you’re looking at IOPS and storage performance, you should keep the following in mind:

  • Map all the VMs against how many IOPS each one is consuming and determine whether the values are higher than normal
  • Look at IOPS alongside latency, I/O size, read/write values, etc. to gain meaningful insights into why storage performance issues occur
  • Proactively monitor applications and VMs to determine whether they are consuming too many storage resources
  • Have proper baselines in place so you can tell when high IOPS values are affecting your storage performance
  • Treat a sustained increase in I/O as a sign of potential storage issues
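The baseline idea above can be sketched in a few lines: record normal IOPS readings, then flag a reading that sits well above them. The sample values and the three-sigma threshold are illustrative assumptions, not recommendations from any particular tool.

```python
import statistics

def is_anomalous(history, current, k=3.0):
    """Flag a reading more than k standard deviations above the baseline mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return current > mean + k * stdev

baseline_iops = [480, 510, 495, 505, 490, 500]
print(is_anomalous(baseline_iops, 1200))  # far above baseline -> True
print(is_anomalous(baseline_iops, 515))   # within normal variation -> False
```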


Watch this short video, in which virtualization vExpert Scott Lowe shows you the steps involved in identifying the drivers of storage I/O for a datastore. Learn how to drill down from a datastore to see which VMs are tied to that datastore, which VMs are driving storage I/O, spikes in I/O, and more using virtualization management software.


WAN performance is measured based on metrics like latency, jitter, packet loss, and average response time. IP SLA is the solution that assists in measuring these parameters to determine network performance, monitor IP services, and troubleshoot networks.

IP SLA simulates network traffic to collect performance information in real time by sending data across network paths. You can create multiple operations to cover multiple paths. It generates and analyzes traffic either between Cisco® devices or from a Cisco device to any other remote IP device. The values provided by the various IP SLA operations can be used for troubleshooting, problem analysis, and for designing network topologies. 

The primary uses of IP SLA are to measure end-to-end performance over WAN links and to collect details for WAN performance monitoring and troubleshooting for connectivity issues within the network or between locations.

WAN Performance Monitoring

Here’s an example of WAN performance monitoring. Let’s say you’re setting up a new link to connect your new branch to the data center. Before directing production traffic across this new connection, you would want to test the link to make sure it’s working properly. In this scenario you can create an IP SLA operation to check the performance of the link and determine whether round-trip time, packet loss, latency, etc. are under control and meet the expected values.

Troubleshooting Connectivity Issues

Poor application response for a user can be caused either by the network or by the application itself. IP SLA can help with the process of elimination by providing performance stats of the network. It offers reliable values that help immediately identify a problem and save troubleshooting time.

To take another example, say one day you find that the helpdesk has logged tickets for VoIP call drops between the head office and a branch office. To determine the cause, you would need to check the link quality with the IP SLA VoIP operation and ensure jitter, latency, packet loss, and MOS are all operating as expected. If they are not, fix the detected issue(s) and then recheck the link to make sure that VoIP performance meets expectations.

How Does IP SLA Work?

After IP SLA has been enabled on the edge devices, it measures the service quality of the path from one Cisco® device to another. Monitoring is configured using the following IP SLA operations:

  • ICMP echo (ping)
  • UDP echo
  • UDP Jitter
  • TCP connect
  • VoIP UDP Jitter
  • DHCP
  • HTTP
  • DNS
  • FTP

IP SLA Processing.png

Once you configure IP SLA, the Source Router sends a generated packet over the network to the Destination Router. The Destination Router, after receiving the packet, sends a response based on the IP SLA operation. The Source Router then calculates the values for the metrics related to the IP SLA operation and reports on link quality.
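As a sketch of what this looks like on the source router (standard IOS IP SLA syntax; the operation number, target address, source address, and frequency below are made-up values):

```
ip sla 10
 icmp-echo 192.0.2.1 source-ip 192.0.2.100
 frequency 60
ip sla schedule 10 life forever start-time now
```

This configures an ICMP echo operation that probes the destination every 60 seconds; other operations from the list above (udp-jitter, tcp-connect, etc.) follow the same configure-then-schedule pattern.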


An IP SLA monitor such as SolarWinds® VoIP and Network Quality Manager collects IP SLA data from devices, triggers alarms if certain thresholds are being crossed, visualizes the data in graphs and tables, and compares the readings to historic data.


Some highlights of VNQM in troubleshooting and monitoring WAN/application performance are:


  • Monitor WAN network performance using IP SLA technology that’s already built into your existing Cisco routers
  • Visualize site-to-site network performance on a clickable, drill-down map
  • Discover Cisco IP SLA-capable network devices and automatically set them up with specific IP SLA operations
  • Quickly review WAN performance to determine the impact on key applications
  • View at-a-glance WAN performance with the Top 10 IP SLA dashboard
  • Monitor VoIP performance statistics, including MOS, jitter, network latency, and packet loss

Starting today, network topology mapping becomes simpler and more comprehensive. SolarWinds Network Topology Mapper, which launched in March as version 1.0, has evolved so quickly that we are launching version 2.0 today.


This release is power-packed with a lot of cool new features that allow you to

  • Discover and add Microsoft® Hyper-V® hosts to network maps
  • Modify node details on network maps (system generated node name, node role & management IP address)
  • Manually connect devices on network maps
  • Schedule export of NTM Maps to Orion® Network Atlas
  • Store device credentials within NTM database for future reuse
  • Comply with and support FIPS 140-2


Let’s explore some of these new functionalities.



Now you can scan your Hyper-V virtual infrastructure and discover the host and guest (virtual machine) operating systems. You can add these nodes to your network maps, which already support VMware® mapping, giving you a comprehensive map and resource distribution of your virtual infrastructure on your network.

  • Virtualized links are presented as dotted links from the host to the guests, making them distinguishable from Layer 2 and Layer 3 link connections
  • The host and guest have different roles, making them easier to distinguish on the network map

NTM_2-0_ Virtualization-Mapping _Base_EN.jpg



NTM 2.0 allows you to set up scheduled node discovery for mapping based on the time of day, or the day of the week or month. You can now automate the export of updated NTM maps (based on scheduled discovery) to Orion Network Atlas.

While setting up the discovery schedule, enable the option that says “Keep Network Atlas updated with these discovery results”. Once this is enabled, the next time you log into Network Atlas, you will be notified of an updated map from NTM, which you can then choose to place on SolarWinds Network Performance Monitor (NPM).



After you have discovered and added a node to NTM, you can now edit three of its detail fields:

  • Node name
  • Primary node role
  • Polled management IP address



As you've no doubt heard by now, Twitter went public last week. #Twitter #IPO In honor of the occasion, I want to show you how you too can get in on all the laconic fun by configuring your SolarWinds Orion installation (NPM or SAM) to use Twitter for posting alert notifications. It's a straightforward process, as detailed in the following steps.


  • The following procedure adds a Twitter notification action to an NPM advanced alert. Though NPM itself is not a requirement, the Orion Platform (NPM or SAM) is a requirement, and SolarWinds recommends the use of advanced alerting for this application.
  • Configuring Orion NPM to use Twitter requires the installation of cURL, a free, open source, command line utility. SolarWinds disclaims all warranties, conditions or other terms, express or implied, statutory or otherwise, on software and documentation furnished hereunder including without limitation the warranties of design, merchantability or fitness for a particular purpose and noninfringement. In no event shall SolarWinds, its suppliers or its licensors be liable for any damages, whether arising in tort, contract or any other legal theory even if SolarWinds has been advised of the possibility of such damages.

To configure Orion NPM to use Twitter with an advanced alert:
1. Log on to your Orion NPM server using an account with software installation privileges.
2. Download and extract the version of the cURL utility that is appropriate for your Orion NPM server from the cURL website.
     Note: For the purposes of this procedure, the cURL package curl-7.19.5 is extracted to C:\cURL\.
3. Click Start > All Programs > SolarWinds Orion > Alerting, Reporting, and Mapping > Advanced Alert Manager.
4. Click Configure Alerts.
5. If you want to use Twitter notification with a new alert, click New, and then create your new alert. For more information, see Creating and Managing Alerts in the SolarWinds Orion Network Performance Monitor Administrator Guide.
6. If you want to add Twitter notification to an existing alert, click the alert with which you want to use Twitter, and then click Edit.
7. Click the Trigger Actions tab.
8. Click Add New Action.
9. Click Execute an external program, and then click OK.
10. On the Execute Program tab, click Browse (...) next to the Program to execute field.
11. Locate and then select C:\cURL\curl.exe.
12. Add the following parameters to the selected program path:
      -u username:password -d status="message"
      Note: The following is an example of a complete path with parameters and alert text specified:
          C:\cURL\curl.exe -u UserName:Password -d status="ALERT! ${Caption} is ${Status}."
13. Click OK on the Edit Execute Program Action... window, and then click OK on the Edit Alert window.
14. Click Done on the Manage Alerts window.

This article was adapted from the SolarWinds Knowledge Base article, "How do I configure Orion NPM to use Twitter for alert notifications?".

Four predominant IT use cases any network device configuration management tool must address are:


  • Configuration change management: scheduling device configuration backups, requiring change approval for configuration changes, scheduling execution of approved changes.
  • Compliance reporting: defining and enforcing configuration policies across all network devices through automated uploads and scheduled change reports.
  • Inventory reporting: tracking network device components, including serial numbers, interface names/specifications, port details, IP addresses, ARP tables, installed software manifests (with version levels).
  • Network device End of Sales and End of Life management: tracking sales/support status as integral part of strategic capacity and upgrade planning.


How well a configuration management system and IT best practices satisfy these cases on a daily basis impacts your company's ability to organize and promote the strategic collaborations among employees and partners that meet the business goals set at the executive level.


What business cases are the most important to you and why? What's missing from the system you currently use?

The next generation of wireless Internet, says the British newspaper The Independent, could use converted LED light bulbs to transmit data. Researchers in Germany and China have tested the technology, known as “li fi,” and have reached speeds of up to 3 Gbps and 150 Mbps, respectively. Researchers in Britain have achieved speeds of up to 10 Gbps.


What is Li Fi?


Li fi transmits data using the light wave spectrum, rather than the radio wave spectrum wifi uses. Instead of radio transmitters, li fi uses LED light bulbs. Because the light spectrum is much, much broader than the radio frequency spectrum, we can expect li fi to be faster and cheaper – up to 250 times faster than today’s superfast broadband, says the Belfast


How Does Li Fi Work?


Dr. Chi Nan of Fudan University in China created a li fi system on display at the China International Industry Fair in Shanghai. She built the system using only off-the-shelf parts. According to NetworkWorld, this li fi system uses “a small number” of single-watt LED bulbs to send and receive Internet data. The BBC notes that British researchers have used a micro-LED light bulb to transmit 3.5 Gbps using the red, green, and blue light that together make up white light. This means over 10 Gbps is possible.


Does Li Fi Have Any Drawbacks?


Li fi cannot connect without light…so you folks who like to work in the dark may be out of luck when it comes to li fi. And for now, li fi cannot go through walls. So don’t expect li fi connectivity at your local coffee shop just yet!

There are myriad reasons behind faulty network performance. Network problems can arise from faulty hardware such as routers, switches, and firewalls. They can also arise from unexpected usage patterns such as in the case of network bandwidth spikes that exceed their allocated bandwidth for users, or due to security breaches, changes in device configuration, etc. Let’s explore seven key network performance issues that commonly and persistently impact enterprise networks.


#1 High CPU Utilization – The most common cause of high CPU utilization is when your network is bogged down by enormous network traffic. CPU utilization increases when processes need more time to execute or when more network packets are sent and received. For instance, if a switch or a router fails to respond or performs processes very slowly, it’s usually due to high CPU utilization.


#2 Route Flapping – Any misconfiguration on the router, hardware failure, or a loop in the network can cause route flapping. It appears as instability in the routing table: the route repeatedly appears and disappears, which in turn causes alternate routes to be advertised frequently.


#3 High Network Errors and Discards – Errors indicate packets that were received but could not be processed because there was a problem with the packet; the cause can be a misconfiguration on one end, a bad cable on the other, etc. With discards, packets are received without errors but are dropped before being passed to a higher-layer protocol. Normally, the root cause of discards is the router reclaiming buffer space.
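To put rough numbers on this, discards are usually judged as a percentage of packets received over a sample interval; the counter deltas would come from something like IF-MIB's ifInDiscards and ifInUcastPkts, and the figures below are made up.

```python
def discard_pct(discards_delta: int, packets_delta: int) -> float:
    """Discards as a percentage of packets received over the sample interval."""
    if packets_delta == 0:
        return 0.0
    return 100.0 * discards_delta / packets_delta

# 150 discards out of 500,000 packets in the interval -> 0.03%
print(round(discard_pct(150, 500_000), 3))  # -> 0.03
```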


#4 Network Access Link Congestion – If your VoIP calls are dropping, network access link congestion is a likely cause. This is a bottleneck between a high-bandwidth LAN and a high-bandwidth IP network. An increase in traffic can fill the queue in the router, which increases jitter and causes a short-term increase in delay. High levels of jitter cause excessive numbers of packets to be discarded by the receiving VoIP system, which leads to degraded voice quality.


#5 Network Link Failure – A link failure typically appears as a period of consecutive packet loss lasting many seconds, followed by a change in delay after the link is re-established. Routers can usually find alternate routes when they detect a link failure, but regular packet loss or link failures can be a symptom of equipment or power supply reliability problems.


#6 Misconfigured Hardware or Software – The negative effects of misconfiguration may result from a LAN being oversubscribed or overloaded, but most often they result from overlooked configurations. For instance, a segment (VLAN) can easily be overloaded by multicast traffic if multicast traffic constraining techniques are not properly configured on that VLAN. Such multicast traffic may affect the data transfer rate of all users on the network.


#7 Packet Loss – In some cases, a network is considered slow when applications require extended time to complete an operation that usually runs faster. That slowness is caused by the loss of some packets on the network, which causes higher-level protocols like TCP to time out and initiate retransmission.


How to find and mitigate these network issues?

Using network fault and availability monitoring software will help you simplify detection, diagnosis, and resolution of network issues before outages occur, and will help you mitigate all the issues discussed above with ease. If you need more information on how to choose a network and bandwidth monitoring tool, you can check out this guide.

An unidentified device on your network could be any device that intentionally or unintentionally attempts to breach sensitive company information or resources.


The biggest risk posed by unidentified devices is that by the time they are detected and a breach has been discovered, it is possible that attackers have already gained access to the network.

Theft of corporate data can lead to huge financial losses. In the face of such events, network administrators are under pressure to quickly locate and curb rogue activity. The primary difficulty lies in manually locating a device in a network of tens or hundreds of multi-vendor devices. This is extremely time consuming and often when the offending device is located, the damage has already been done and the perpetrator has absconded.

When your network is under rogue attack, it’s all about finding the offending system quickly and terminating its network access. To survive such attacks, organizations perform threat analysis and forensics to study the source and path of rogue activity in the network. For this, you need to start by creating a baseline and then monitor how the network behaves to identify anomalies. Doing this successfully requires a solution that can monitor and correlate log event data throughout your environment and, very importantly, react in real time. This is where Security Information and Event Management (SIEM) solutions come into play. SIEM solutions centrally collect and correlate logs from network and security devices, application servers, databases, etc. to provide actionable intelligence and a holistic view of your IT infrastructure’s security. They can also help you understand what happened before, during, and after an event to isolate the fault and determine root cause.

5 Tips to Help Detect & Prevent Unknown Rogue Devices in Your Network

#1 Create an approved list of devices to isolate unknown devices and restrict network access


#2 Quickly and easily track endpoints by searching IP address, MAC address, Hostname, etc.


#3 Keep watch on suspicious devices in the network


#4 Immediately terminate network access on detection of unwarranted activity


#5 Analyze the IP usage history of devices in the network
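Tip #1 amounts to simple set arithmetic once you have an inventory: anything observed on the wire that is not on the approved list is a candidate rogue. The MAC addresses below are fabricated examples.

```python
# Approved device inventory vs. devices actually observed on the network
approved = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}
observed = {"00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"}

rogue_candidates = observed - approved
print(sorted(rogue_candidates))  # -> ['de:ad:be:ef:00:01']
```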

For more detailed information on preventing rogue devices in your network, download the Detecting and Preventing Rogue Devices whitepaper.


Learn how automated user and device tracking along with port tracker helps you stay in control of who and what are connecting to your network.

Every organization, every network and IT administration team is looking for a network configuration management solution. “Why?” you ask. An enterprise network is a composite of uncertainties that can impact network performance and availability at any time. Even a minor change in a network device’s settings can upset your network performance and cause it to shut down, or worse – open up a security chasm.



A network configuration and change management (NCCM) solution can help you monitor changes in device configuration and automate manual change management, configuration backup, and update tasks, contributing to improved operational efficiency and less network downtime from unauthorized config changes. Let’s take a look at five key considerations for selecting your NCCM solution.



A good NCCM solution should be able to:

  • Perform device discovery scans and logically organize devices
  • Build, test and bulk deploy standardized device configurations
  • Monitor network devices for config changes, and send real-time alerts for unauthorized and erroneous changes
  • Back up and restore device configurations
  • Manage configuration changes, approvals and deployment scheduling
  • Provide rich compliance and status reporting


Review the product features, and even evaluate the product before you make your purchase decision. Ensure the above features are all present in the NCCM tool.



Networks are becoming more heterogeneous and your NCCM deployment should be able to support managing different network devices and types from a wide range of manufacturers.



NCCM does more than just alert on config changes. It manages your device change process and should support routine tasks from firmware and OS upgrades all the way to device EOL management. You should be able to receive notifications about when a manufacturer will end support for a device, helping you manage support contracts and prepare budgets and plans for replacing EOL devices.



NCCM helps specialized network teams do more work concurrently with greater standardization and speed. Your solution should be able to

  • Logically organize devices by location, function, or other grouping
  • Create team roles and permissions
  • Support concurrent use
  • Proactively notify when problems exist and what work needs to be completed from the configuration and change workflow perspective



Look for an NCCM solution that supports a common view of the network and system naming, and can share events and inter-process details. The objective is to have all your IT management tools work together as a single framework, reducing management overhead and making it easy to view cross-functional data together on one screen – network performance monitoring data along with configuration details of network devices.


Watch this short video to understand more about network configuration management and what best practices your NCCM solution should be able to offer.

Last week I had the opportunity to spend a couple of days in the hallway at SpiceWorld in the Austin Convention Center. Well, really I was in the SolarWinds booth, but the vendor booths were in the hallway, so there I was nonetheless.


SpiceWorld is the annual conference of SpiceWorks, which, functionally speaking, is a very large online community of ITPros who hang out and help each other with their day to day challenges. This was SpiceWorld’s first year in the Austin Convention Center, having outgrown the University of Texas’s Conference Center. Over a thousand people attended this year, and next year the goal is 2500! For two days ITPros, generally associated with SMBs, MSPs, and non-profits, shared each other’s physical presence, chatted with vendors, attended educational sessions on a myriad of topics ranging from Intro to PowerShell to Advanced Storage, and enjoyed the Austin nightlife. For a look back at the intensity that is SpiceWorld, you can review the SpiceWorld forum on


SpiceWorks also makes software. IT management software, to be specific. A recurring theme in the questions we were asked at SpiceWorld was “How does SolarWinds compare to SpiceWorks?”, “When/Why would I convert/migrate/upgrade from SpiceWorks to SolarWinds?”, or, in a more general context, “When should I consider paying for software, rather than using free software?”


I’m going to take a moment here and share my answers to those questions, hopefully for the greater good. I’ll also point out that these considerations do not just apply to the SpiceWorks vs SolarWinds question, but apply to any scenario in which you find yourself comparing a “free” solution to a “not free” solution.


I think there are two significant points to keep in mind in the free vs not-free scenario: feature development and technical support.

  1. Generally speaking, “free” software does not have as rich a development cycle as “not free” software would have. In some cases, free software is built once, for a single purpose, and unsupported and not maintained for the rest of its life. In other cases, such as open-source software, its development cycle may be at the mercy of the availability of volunteer resources who are interested in particular feature sets. You need to consider the impact of the development lifecycle on the product and how that will affect your usage. Is it likely that at some future time you’ll outgrow the free product?
  2. Because it’s free, many of these products also do not have a rich support ecosystem. One of the advantages you’ll often find with paid products is that the vendors provide 24x7 technical support services for those products. If you’d rather invest your worktime doing something productive, rather than trying to track down an unexpected behavior in a free product, that might be the time to move to a paid product.


It’s interesting to note, though, in all fairness to SpiceWorks, they don’t actually fit the mold of your typical “free product” vendor because they do have a rich full-time development team and a very active support system and community. SolarWinds does also, I should mention, so if you’ve not yet joined Thwack, you should definitely check it out. By the way, the other thing that SolarWinds has a lot of is Free Tools! So, this discussion even applies when comparing our free tools to our own paid products.


In the end, the question of when to migrate from SpiceWorks to SolarWinds is a bit more complex and probably needs to be evaluated on more of a case-by-case basis. Or, it can also be viewed as a very simple question: Is your “free product” still meeting your business needs? If so, it’s very hard to justify spending money you’re not already spending. But if it’s not, there are options! :-)

Great Gadget

Posted by Bronx Nov 7, 2013

I normally don't endorse products (I don't get paid either way), but Google Chromecast is something I highly recommend. I bought it the moment I discovered it, and so did Pop. It can be set up in ten minutes (2 hours if you're my Pop).


What is it?

Good question! It is a Wi-Fi dongle that connects to your TV so you can broadcast videos and web material directly to the TV from your computer, tablet, or phone with the touch of a button. Imagine you're watching a video of GTA5 on your phone and BOOM, now it's on your TV. It's just that easy.


What does this mean?

For me it meant I could watch my NY Jets lose on the big screen, rather than from a much smaller laptop screen. I can also view any website or video within Google Chrome on my TV! Just think about all those websites aching to be thrown onto the big screen. Xanadu! If you're a SysAdmin, you can instantly take your SAM, single-pane-of-glass-view, from your tablet and toss it on the big screen at work for all to enjoy. Possibilities abound! And for 35 bucks, how could you go wrong?



The need for a well-rounded reporting tool is not often spoken about. If you’re wondering how much of a difference your report is going to make to your organization, well, think again. Having a great reporting system makes a huge difference to senior IT staff, since a lot of the critical decisions they make, whether related to performance monitoring or to the availability of servers and applications, are based on the information presented to them.


There are a few challenges you can run into when trying to put together a weekly or a quarterly performance report.

  • Incomplete Reports: Sometimes your reports are not 100% complete. This can happen if you’ve gone over the specified page limit or if your report doesn’t support graphics-heavy content.
  • Customization: You may not have rights to create or customize a chart or a table. For example, if you want to show top 10 expensive database queries or % memory used by database, the reporting tool may not allow you to show this information the way you want to.
  • Report Size: If your reports have a limitation on the size, then you’re forced to have a basic report with limited information.
  • Difficulty with Sorting Data: Tweaking the data can be challenging. For example, you may not be able to sort columns, group data, or limit results to the top 10, etc.
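The sorting, grouping, and top-N limiting described above boil down to a simple query pattern. As a hedged illustration (the `query_stats` table and its columns are invented for this example, not any particular product's schema), here is how a flexible reporting back end might surface the "top 10 most expensive database queries":

```python
import sqlite3

# Hypothetical query-statistics table; table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE query_stats (query_text TEXT, elapsed_ms REAL)")
rows = [("SELECT * FROM orders", 120.0),
        ("SELECT * FROM orders", 95.0),
        ("UPDATE inventory SET qty = qty - 1", 15.0),
        ("SELECT name FROM users", 4.0)]
conn.executemany("INSERT INTO query_stats VALUES (?, ?)", rows)

# Group by statement, sum elapsed time, sort descending, keep the top 10 --
# exactly the sort/group/limit operations a good reporting tool should expose.
top10 = conn.execute(
    "SELECT query_text, SUM(elapsed_ms) AS total_ms "
    "FROM query_stats GROUP BY query_text "
    "ORDER BY total_ms DESC LIMIT 10").fetchall()

for text, total in top10:
    print(f"{total:8.1f} ms  {text}")
```

A reporting tool that blocks this kind of grouping or limiting forces you to export raw data and massage it by hand, which is exactly the friction listed above.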


A built-in reporting system is extremely important because it gives you the flexibility to create reports in a dynamic environment. An ideal reporting tool should offer you the following capabilities, whether you’re looking at nodes, databases, memory, queries, disk usage, CPU load, etc.

  1. Adding Content: You should be able to add database tables, down applications, monitored processes, etc. that you see in your server monitoring software web console.
  2. Edit Resources: You should have the option to select and edit the resource, for example, select a column or a group and have the option to sort them by query type, memory load, etc.
  3. Customize Report Layout: Customizing your report is what matters. You should be able to show critical information that is happening in your servers and applications in real-time in the form of charts, tables, and gauges.
    1. Modify the number of columns in your report. Display charts and tables for a specific time period and view them side by side.
  4. Preview and Check the Report: Your reporting tool should let you double-check the report and ensure it only displays the data you require. If you need to make changes, it should let you return to the layout builder for any additional edits.
  5. Custom Properties: Custom properties should help you organize your reports and find them later. You can mark reports as favorites so they appear at the top of your report list.


In addition to the capabilities mentioned above, your reporting tool should allow you to generate out-of-the-box reports dynamically:

  • CPU
  • Memory & Virtual Memory
  • Response times
  • Statistic data
  • Resource usage
  • Hardware monitoring
    • Server warranties (Due to expire, set to expire, and expired)
  • Server and application health (Up, running, critical, down)
  • Group based reports
    • Servers, application specific groups, critical alerts, etc.


Out-of-the-box reports are beneficial because an IT environment keeps growing every day. Requirements keep changing as your users access various information in your servers and applications. As sysadmins, it’s imperative to know what’s going on in your environment. To enable proactive monitoring of your servers and applications, a built-in web-based reporting tool will help you analyze and drill into details and provide real-time insights in an organized manner.

The network management system (NMS) is a key component of the network infrastructure which monitors your network health and identifies issues causing performance bottlenecks. This much we know already. But what happens when the NMS is not available, perhaps due to one of the failures below?

  • Server running the NMS fails due to a hardware issue
  • Power supply to the NMS-installed server is lost
  • Scheduled maintenance downtime


In these conditions, to ensure consistency in monitoring network health, we need to make provisions to get the NMS up and running ASAP. This is where NMS failover plays a crucial role. Let’s understand this in the context of SolarWinds Network Performance Monitor (NPM). If something should happen to your primary NPM server, you should have a failover plan to automatically switch NMS operations over to a remote server. This passive failover server assumes the full identity of the primary NPM server and takes over all monitoring, alerting, reporting, and data collection, just as the primary server did.


SolarWinds Failover Engine (FoE) is an NMS failover option from SolarWinds which, when deployed on a secondary server, provides five levels of protection (server, network, application, performance, and data) and can be deployed for High Availability in a Local Area Network (LAN) or Disaster Recovery over a Wide Area Network (WAN).



#1 Server Protection

A failover occurs when the first passive server detects that the active server is no longer responding. This can be because the active server’s hardware has crashed or because its network connections are lost. In a failover, the first passive server is brought up immediately to take on the role of the active server.
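The core of server protection is a heartbeat: the passive node promotes itself when the active node goes silent. Here is a minimal, hedged sketch of that logic in Python (the class, timeout value, and method names are invented for illustration, not SolarWinds FoE's actual implementation):

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before declaring the active server failed

class PassiveServer:
    """Toy heartbeat monitor: the passive node assumes the active role
    when heartbeats from the active server stop arriving."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.role = "passive"

    def on_heartbeat(self):
        # Called whenever a heartbeat packet arrives from the active server.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Failover: take over if the heartbeat has gone silent too long.
        if self.role == "passive" and \
                time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.role = "active"
        return self.role

node = PassiveServer()
node.last_heartbeat -= 10   # simulate 10 seconds of silence from the active server
print(node.check())         # the passive node promotes itself to "active"
```

A production implementation would, of course, also guard against split-brain (both nodes believing they are active), which is why FoE additionally checks the network, as described next.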


#2 Network Protection

SolarWinds FoE proactively monitors the ability of the active server to communicate with the rest of the network by polling up to three defined nodes around the network at regular intervals. By default, these are:

  1. the default gateway
  2. the primary DNS server, and
  3. the Global Catalog server


If all three nodes fail to respond, for example, if a network card or local switch fails, SolarWinds FoE can gracefully switch the roles of the active and passive servers (referred to as a switchover) allowing the previously passive server to assume an identical network identity to that of the previously active server. After the switchover, the newly active server then continues to service the clients.
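The decision rule here is simple: a switchover is justified only when *all* witness nodes are unreachable, since losing every witness at once suggests the active server's own network path has failed. A hedged Python sketch (the node addresses are placeholders, and the ping-based probe assumes Linux `ping` flags; this is not FoE's actual polling code):

```python
import subprocess

# The three witness nodes polled by default; addresses are placeholders.
WITNESS_NODES = {
    "default gateway": "192.0.2.1",
    "primary DNS": "192.0.2.53",
    "Global Catalog": "192.0.2.100",
}

def node_responds(ip, timeout_s=1):
    """One ICMP echo; True if the node answered (Linux `ping` flags assumed)."""
    result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), ip],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def should_switch_over(probe=node_responds):
    # Switch over only if *all* witness nodes are unreachable -- a single
    # unreachable witness could just mean that one node is down.
    return not any(probe(ip) for ip in WITNESS_NODES.values())

# Simulated probes instead of real pings:
print(should_switch_over(probe=lambda ip: False))                # all down
print(should_switch_over(probe=lambda ip: ip.endswith(".53")))   # DNS answers
```

Requiring all three witnesses to fail is what makes the check robust: one dead witness is that node's problem, three dead witnesses are probably yours.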


#3 Application Protection

SolarWinds Failover Engine running on the active server locally monitors the applications and services it has been configured to protect through the use of plug-ins. If a protected application should fail, SolarWinds Failover Engine will first try to restart the application on the active server. If a restart of the application fails, then SolarWinds FoE can initiate a switchover which gracefully closes down any protected applications that are running on the active server and restarts them on the passive server along with the application or service that caused the failure.
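The escalation order described above (check, restart locally, then switch over) can be sketched as follows. All names here are hypothetical; this mimics the recovery order, not the product's plug-in API:

```python
def protect_application(app_running, restart, switch_over):
    """Recovery order: restart the failed app locally first; escalate
    to a full switchover only if the restart doesn't help."""
    if app_running():
        return "healthy"
    restart()
    if app_running():
        return "restarted"
    switch_over()   # gracefully move all protected apps to the passive node
    return "switched over"

# Simulate an app whose local restart fails, forcing a switchover.
state = {"running": False, "restarts": 0, "switchovers": 0}

def app_running():
    return state["running"]

def restart():
    state["restarts"] += 1      # restart attempt has no effect in this scenario

def switch_over():
    state["switchovers"] += 1

result = protect_application(app_running, restart, switch_over)
print(result)   # "switched over"
```

Trying the cheap local restart before the expensive switchover is the sensible design choice: most application failures are transient and don't warrant moving the whole workload.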


#4 Performance Protection

SolarWinds FoE proactively monitors system performance attributes to ensure that your protected applications are actually operational and providing service to your end-users, and that the performance of those applications is adequate for the needs of those users. Similar to application monitoring, various rules can be set to trigger specific corrective actions whenever these attributes fall outside of their respective ranges.
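Such rules amount to mapping each monitored attribute to an allowed range and a corrective action. A minimal sketch, with metric names, ranges, and actions all invented for the example:

```python
# Hypothetical performance rules: each maps an attribute to an allowed
# range and a corrective action to fire when a sample falls outside it.
RULES = {
    "cpu_pct": (0, 90, "restart busiest service"),
    "response_ms": (0, 500, "switch over to passive server"),
    "free_disk_pct": (10, 100, "purge temp files"),
}

def evaluate(sample):
    """Return the corrective actions triggered by this metrics sample."""
    actions = []
    for metric, (low, high, action) in RULES.items():
        value = sample.get(metric)
        if value is not None and not (low <= value <= high):
            actions.append(action)
    return actions

# CPU is pegged at 97%, the other metrics are within range:
print(evaluate({"cpu_pct": 97, "response_ms": 120, "free_disk_pct": 40}))
```

The point of range-based rules is that they catch the "up but useless" failure mode: an application can pass a simple liveness check while still responding too slowly to be of any use to end-users.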


#5 Data Protection

SolarWinds FoE ensures the data files that applications or users require in the application environment are made available should a failure occur. Once installed, FoE can be configured to protect files, folders, and even the registry settings of the active server by mirroring these protected items in real-time to the passive server. If a failover occurs, all files that were protected on the failed server will be available to users on the server that assumes the active role after the failover.
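Conceptually, mirroring means copying to the passive server anything whose content differs from the active copy. A toy checksum-based sketch (real mirroring streams changes continuously rather than comparing snapshots, and the in-memory dicts here stand in for actual file systems):

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def mirror(active_files, passive_files):
    """Copy any file that is absent on the passive node, or whose
    content differs from the active copy."""
    copied = []
    for path, data in active_files.items():
        if path not in passive_files or \
                checksum(passive_files[path]) != checksum(data):
            passive_files[path] = data
            copied.append(path)
    return copied

active = {"app.cfg": b"poll=60", "data.db": b"v2"}
passive = {"app.cfg": b"poll=60", "data.db": b"v1"}
print(mirror(active, passive))   # only the changed file is re-copied
```

Comparing checksums rather than blindly re-copying everything keeps replication traffic proportional to what actually changed.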


Register for this FREE webinar to understand more about the failover options for SolarWinds network management, application and server management software, and learn from our product experts how SolarWinds can provide high availability and fault tolerance for your NMS implementation.


What’s in a log file?

The very purpose of IT security is to be proactive and make it difficult for anyone attempting to compromise your network. You also need to be able to detect actual breaches as they are attempted. This is where log data really helps.


Collecting and analyzing logs helps you understand what transpires within your network. There is gold in log files: they provide invaluable information, especially if you know how to read and analyze them. With proper analysis of this actionable data you can identify intrusion attempts, misconfigured equipment, and much more.


Monitoring and Analyzing Event Logs

To make log analysis and log management more efficient, you need to collect and consolidate log data across your IT environment and correlate events from multiple devices in real time. You also need to analyze each event to understand its root cause. Monitoring activity on your web servers, firewalls, and other network devices is no longer enough; you need to monitor your workstation logs as well.
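To make cross-device correlation concrete, here is a hedged toy example in Python. The log format, field layout, and threshold are invented for illustration; a real SIEM normalizes many formats and correlates far richer event types:

```python
import re
from collections import defaultdict

# Sample consolidated log lines (simplified syslog-style, hypothetical).
LOGS = [
    "10:01:02 fw01 sshd: Failed password for admin from 203.0.113.9",
    "10:01:05 web01 sshd: Failed password for admin from 203.0.113.9",
    "10:01:07 db01 sshd: Failed password for admin from 203.0.113.9",
    "10:02:11 web01 sshd: Accepted password for alice from 198.51.100.4",
]

FAILED = re.compile(r"(\S+) (\S+) sshd: Failed password for \S+ from (\S+)")

def correlate(lines, threshold=3):
    """Flag source IPs with failed logins on `threshold`+ distinct devices --
    a crude cross-device brute-force signal no single device log reveals."""
    devices_by_ip = defaultdict(set)
    for line in lines:
        m = FAILED.match(line)
        if m:
            _, device, ip = m.groups()
            devices_by_ip[ip].add(device)
    return [ip for ip, devs in devices_by_ip.items() if len(devs) >= threshold]

print(correlate(LOGS))   # 203.0.113.9 hit three different devices
```

Each device on its own saw only one failed login, which looks harmless; only the consolidated view across the firewall, web server, and database server exposes the pattern. That is the core value proposition of centralized log correlation.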


Log file analysis is best done with SIEM software, and here’s your chance to secure your spot at the Free Live Demo of SolarWinds Log & Event Manager (LEM) hosted by Rob Johnson.


When: Friday, November 08 at 1PM CST


Alright, what is this webcast about?

SolarWinds LEM delivers powerful Security Information and Event Management (SIEM) capabilities in an affordable and easy-to-deploy virtual appliance. In this webinar, you can see the product in action and delve into security best practices. You will be able to get your questions answered by Rob throughout the demo.



Registration link:


In early October, Adobe® was hacked and 3 million customer account details (IDs, passwords, and credit card information) were stolen. Sounds like a lot? There’s more. Last week, it was revealed that the real number is actually 38 million: usernames and encrypted passwords of 38 million active Adobe users were stolen in this cyber-attack. This is a massive breach even by global hacking standards, and the hackers posted the stolen data on public sites on the Internet. Part of the Adobe breach involved the theft of source code for Adobe Acrobat and Reader, as well as its ColdFusion Web application platform.




While Adobe is still working to identify the actual source and means of this data breach, it is also making amends to the customers whose account credentials were stolen and working to redeem the company’s reputation.


Adobe is just one victim of large-scale cybercrime. What can you do when such malicious security threats are on the rise and you are left unable to detect these attacks in time, forcing you to compromise on security and compliance? If you do not want to see this happen to your organization, you must act now and equip yourself with the right security techniques and technology to defend against security threats.


Heighten Network & Data Security

Security information & event management (SIEM) is a cutting-edge security practice that gives you visibility into all suspicious happenings on your network and in your data center by correlating and monitoring event log data from across the IT infrastructure – systems, network devices, security appliances, etc. Start at the basics – “the logs” – and work your way to the fore with security analysis and actionable intelligence. Logs provide a wealth of information about just about everything on a particular device or operating system. When you have the means to analyze these logs in real time and isolate suspicious behavior patterns and policy violations, you are in a much better position to diagnose a security threat. Once a threat is detected, you can take counteractive measures to contain or eliminate it, thereby securing your network and your corporate data.


There’s more on the Adobe security front …


Adobe silently released two security fixes in October following the massive security breach.

  • The first update is for RoboHelp 10 on the Windows operating system, publishing software that enables users to collaboratively develop HTML 5-based video-enabled websites. This update addresses a vulnerability that could allow an attacker to run malicious code on the affected system by exploiting a memory corruption vulnerability (CVE-2013-5327).
  • The second update addresses issues in both Adobe Reader and Acrobat XI (11.0.04) for Windows. The fix addresses a regression that occurred in version 11.0.04, affecting JavaScript security controls. It permitted the launch of JavaScript scheme URIs when viewing a PDF in a browser (CVE-2013-5325).


Again, this is not uncommon in the software industry: application vendors keep discovering new security loopholes and vulnerabilities, and keep pushing new patches to their software. Though these Adobe security patches have no relation to the data heist, they remind us of the many security lapses and subsequent fixes that keep surfacing in the digital world.


Constantly Update Your System Software & Third-Party Apps

Only by keeping abreast of application patch updates and security fixes can we prevent software vulnerabilities from compromising system security. Implement an automated patch management system to ensure all the systems and servers in your enterprise are running the latest software versions – especially for high-risk platforms like the Java® application platform.
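At its simplest, patch currency checking means comparing your installed inventory against the vendors' latest releases. A hedged sketch, assuming dotted numeric version strings; the app names and version numbers are made up for the example, and a real patch management system would collect both lists automatically:

```python
# Hypothetical inventory vs. vendor-advertised latest versions.
INSTALLED = {"java": "7.0.45", "reader": "11.0.04", "coldfusion": "10.0.12"}
LATEST = {"java": "7.0.51", "reader": "11.0.05", "coldfusion": "10.0.12"}

def as_tuple(version):
    """Parse '11.0.04' into (11, 0, 4) so versions compare numerically,
    not lexically ('9' < '10' holds for tuples but not for strings)."""
    return tuple(int(part) for part in version.split("."))

def outdated(installed, latest):
    """Return the apps whose installed version lags the latest release."""
    return sorted(app for app, ver in installed.items()
                  if as_tuple(ver) < as_tuple(latest[app]))

print(outdated(INSTALLED, LATEST))   # the apps that need patching
```

Numeric comparison is the important detail: naive string comparison would rank version "9.0.0" above "10.0.0" and silently miss overdue patches.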


IT security need not drive you to the brink of paranoia. If you have the right tools, security policies, and processes in place, and the personnel to put them into action, you can rest assured your organization is safeguarded against the detriment of hacking and its forays.
