VMware’s big push at VMworld 2013 in San Francisco was all about bringing software-defined data center capabilities to market.  Since the compute side of the data center already has advanced virtualization capabilities (i.e., it is “software-defined”), the primary focus was on advancing the network and storage capabilities.  While VMware’s storage announcements help meet that goal, in general they were not as far along as the networking side and sound a lot like what Microsoft has been doing with its recent releases.  The networking capabilities announced under the NSX banner look more mature and potentially ready for deployment in the right situation.  


NSX is really the next logical evolution for VMware.  Given its dominance in compute virtualization, VMware would like to extend that position to the rest of the data center.  It announced its direction and intent last year with the Nicira acquisition and a similar set of directional statements at VMworld 2012. This year we got more details on how things will really work.  


At a high level, NSX is focused on taking over “east-west” network traffic, the term VMware used for traffic between VMs as it passes through the networking infrastructure.  VMware claimed that as much as 60% to 70% of network traffic flows between VMs, with the remainder being traffic between the VMs/data center and the external network (i.e., “north-south” traffic). NSX will include virtual switch, router, firewall and load balancer capabilities. VMware is using a fusion of its existing vSphere vSwitch and the Nicira technology in the solution.  It actually consists of two products: NSX for vSphere and NSX for multiple hypervisors.


Currently, the network is often the bottleneck when it comes to dynamic workload placement. Moving a workload from one hypervisor host to another, including its storage, can be done in a matter of minutes.  However, if network reconfiguration is required, this can often take days to complete.  NSX provides complete network stack encapsulation over the existing Layer 3 physical network.  This provides an opportunity to move the network to the same level of encapsulation as compute and storage, allowing snapshotting, rollback and cloning along with the potential to provision or reconfigure in a matter of minutes.


But VMware’s NSX announcement does raise a number of interesting questions.  Some of these key questions and initial thoughts are provided below.


* How does NSX impact physical network architecture?  Should customers rethink their basic network design?

* This could change the primary goal of physical network design to be focused on high-availability and performance, not necessarily on application traffic segregation anymore.

* How do you manage and monitor comprehensive network health and performance?

* Who is responsible for the network issues?

* Network engineers and admins will still be needed; all the protocol alphabet soup is still there when it comes to configuration and interop.

* How fast will the software be adopted versus other efforts such as OpenFlow?

* It is likely to have faster adoption for a number of reasons:

o NSX will have no dependence on physical switches

o No multi-vendor compatibility issues

o Complete control over the inner protocols and implementation

o Functionality will be built-in the hypervisor

* Where is the competition relative to VMware?  

* VMware has leaped over Microsoft once again. Microsoft introduced interesting networking capabilities with Hyper-V v3 in Windows Server 2012, but they look far less advanced than NSX.

* How will VMware expose virtualization monitoring and management capabilities for NSX?

* This was not clear from VMworld 2013 and remains an open question.

* Some diagnostic tools were demonstrated, but to be successful those capabilities need to be integrated with existing solutions.

* vCOps will be updated to provide visibility at both levels, but it's not clear how soon that will be available.


In summary, the virtual networking capability is an impressive innovation brought forward by VMware. As with any new disruptive technology brought to the marketplace, it comes with its set of questions and uncertainties. It now potentially puts VMware in control of the last technology pillar needed to make the SDDC a reality. Vendors like SolarWinds will monitor those changes and ensure that their existing and future customers maximize their investments in those new technologies while still relying on their monitoring and management solution to provide them the insight they need.

China, the world’s most populous country, woke up last Sunday (August 25, 2013) to the most colossal distributed denial of service (DDoS) attack ever to rattle the Chinese digital age. With over 8 million websites on .cn domains affected[1], the government has condemned the incident and dubbed it the biggest cyber breach in Chinese history. The government-run China Internet Network Information Center said the attack started at 2 a.m. on Sunday and jolted Internet services until around Monday morning. There was a 32% drop in traffic on .cn domains, as observed by CloudFlare, a website security company.


The exact source or motive for the attack has not yet been traced, but the damage has been done. This incident has shown the world that DDoS attacks of such high magnitude can be carried out successfully by hackers, and there’s nothing the victim (in this case the government) can do about it.


This takes us back to the basics of IT security, and makes us look for the answers to these critical questions:

  • When did the attack take place and how long did it last?
  • How did the attack take place? Which device or IT system was compromised?
  • Is there a way to protect the network from such attacks? If so, how quickly can we react to contain or minimize the repercussions?


Follow the Log Trail

There are tens of thousands of logs generated by all your network devices, computers and security equipment. Start by looking at the system and device logs and try to identify what went wrong and when. It can be a difficult task to do given the number of devices, the volume of logs they generate, and the false positives. But you can always employ an event log correlation mechanism to sift through the logs and track down unusual behavior patterns and suspicious network activity. Once you have the means to be alerted in real time, you’ll be able to take preventive or corrective action immediately.
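As a toy illustration of what such a correlation mechanism does, here is a minimal Python sketch of a sliding-window rule. The log format, the FAILED_LOGIN event name, and the 10-events-in-5-minutes threshold are all illustrative assumptions, not the behavior of any particular product:

```python
import re
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Correlate failed-login events per source IP inside a sliding window.
WINDOW = timedelta(minutes=5)
THRESHOLD = 10
PATTERN = re.compile(r"(?P<ts>\S+ \S+) FAILED_LOGIN src=(?P<ip>\S+)")

def correlate(lines):
    hits = defaultdict(deque)   # ip -> event timestamps inside the window
    alerts = []
    for line in lines:
        m = PATTERN.search(line)
        if not m:
            continue            # ignore events this rule doesn't correlate
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S")
        q = hits[m["ip"]]
        q.append(ts)
        while q and ts - q[0] > WINDOW:
            q.popleft()         # expire events that fell out of the window
        if len(q) >= THRESHOLD:
            alerts.append((m["ip"], ts))
    return alerts
```

A real correlation engine runs many such rules at once and in-memory, across heterogeneous log sources; the point here is only the pattern: parse, bucket, window, threshold, alert.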


Log data can be used for:

  • Real-time incident monitoring & threat detection
  • Performing event forensic analysis and root cause isolation
  • Compliance reporting and security audits


SolarWinds Log & Event Manager (LEM) collects logs from devices across your IT landscape and correlates them in-memory to provide real-time notifications and alerts should there be a breach or policy violation. SolarWinds LEM has built-in Active Responses that automate actions to respond to breaches and attacks.

This blog contains the answer to a question in this month's thwack mission (week 4).  Enter this week to win a Slingbox 500!



I had the pleasure to speak with Thomas Löfstrand of Nethouse, a SolarWinds business partner located in Sweden.  Thomas recently evaluated the new features of Server & Application Monitor.


JK: I understand you have tried the new AppInsight feature of Server & Application Monitor.  How do you like it?


TL: I was really excited about this feature when I first heard it was coming out last fall.  We are a SolarWinds customer and partner, and we’ve been using SolarWinds products for the past 9 years.  My main focus is on the Network Performance Monitor and Server & Application Monitor products.  About 50% of my time I spend as a SQL DBA, helping troubleshoot the performance of both Nethouse databases and our customers’ databases.


I really like the complete, specialized view of SQL performance you get with AppInsight for SQL.  This was lacking before in Server & Application Monitor (SAM) in that you were only able to view some performance metrics of SQL server itself.  Now, when you can see all the SQL metrics in one view, it will be much easier for our customers to understand what is going on with their SQL databases.



JK: Prior to using AppInsight, what were some of the activities you performed weekly or monthly for troubleshooting SQL server performance?

TL: I used to create scripts in SAM to monitor SQL agent jobs to see if they were running or not.  I also wrote scripts to monitor user connections.  Now, with AppInsight, I can see this kind of information immediately, it’s built into the product.  This spring I had an incident related to user connections which could have been avoided if we had the features now available in SAM 6.0. We had a user connection that locked some tables in SQL server and it caused the application to stall so other users could not access the application. It took 2 hours to find this problem.  If I had SAM 6.0 at this time, I could have seen right off in the SQL dashboard that this user shouldn't have been there.


One of the other things I look at all the time is disk space usage and through SQL commands I can go into each database to see database space usage.  Before, with Server & Application Monitor, I could see disk space usage but not for each database and not within the database files.  Now I can get an immediate view of available space for each database.


JK: How many databases do you manage?

TL: I manage a lot of databases for our customers.  I do not perform the day-to-day responsibilities for all customers but help with troubleshooting activities.  I have a very big customer I work with to help in troubleshooting SQL performance issues.  Before AppInsight, I had to run traces for 12 to 24 hours to see how the server was performing over time to understand the top CPU intensive queries.


Now it will take a matter of 5 minutes after they get the AppInsight feature installed.  SAM is also good to show historical data for problem analysis, which is very helpful in working with customers to troubleshoot application issues.


JK: What other features have you tried in SAM 6.0?

TL: I have tried the inventory dashboard to get a complete view of our hardware and software assets.  You can import and export inventory data to CMDBs.  Today, we use Microsoft Excel as our CMDB.  It is easy to start with the asset inventory dashboard if you don’t have any CMDB or asset management tool.


SolarWinds is really a complete platform to run all of your IT environment.  You get monitoring, asset management and now you have specialized SQL server monitoring.  You don’t need any specialized tools like Red-Gate or Idera, you get everything with SolarWinds.


If you are interested in seeing a deep demo of the new features of SAM 6.0, check out this webcast replay.



In our last post we talked about a couple of key points.  First, because many of us live in a “ready, fire, aim” world we often don’t have the time to plan what we do before we act.  This is a problem that can be corrected by improving some of our management processes.  Second, because of this we suggested reviewing a governance framework, like ITIL, to capture some ideas on where and how to improve IT management processes. And finally, we introduced our “overlooked” network configuration practices.  These practices complement our improved processes, by taking a more holistic approach to network configuration management.  




Today we will dive into the first of these best practices – inventorying and profiling network systems. This best practice is further divided into these activities:




How Network Inventory Can Help


Our objective for this first practice is to identify all devices under management.  With hundreds of network devices on your network, it's important to know about each one.  To make this a realistic task, you need tools that will perform an automatic discovery scan of your network and build a database of devices.  From here you will want to organize these devices (by vendor, location or some other way) and begin to collect and manage useful details about each device. 
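To make the idea concrete, here is a minimal Python sketch of such a discovery scan. The subnet, the probe ports, and the inventory fields are illustrative assumptions; real discovery tools typically use ICMP and SNMP rather than plain TCP connects:

```python
import socket
import ipaddress

def discover(subnet, ports=(22, 23, 80), timeout=0.2):
    """Probe each address in a subnet on common management ports and
    build a small inventory of responding devices."""
    inventory = []
    for host in ipaddress.ip_network(subnet).hosts():
        for port in ports:
            try:
                with socket.create_connection((str(host), port), timeout):
                    # Record the device with empty profile fields to be
                    # filled in later (vendor, location, serial, contact).
                    inventory.append({"ip": str(host), "port": port,
                                      "vendor": None, "location": None,
                                      "serial": None, "contact": None})
                    break  # one open port is enough to record the device
            except OSError:
                continue
    return inventory
```

The empty profile fields are the important part: discovery only finds the devices, and the follow-on work is enriching each record so the database becomes the single authoritative source.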



For example, what is the device serial number?  Where is it located?  Who is the primary point of contact?  When will it no longer be vendor supported?  Has budget been secured for a replacement?



It's very helpful to have this information saved as part of the device profile.  This makes it easier to maintain one authoritative source and share it with others throughout the organization.  If this information is incomplete or managed outside the device profile (perhaps in a spreadsheet), then extra work is required to keep it current and to protect the integrity of the data from drifting out of sync across multiple document versions.


Consider the following examples:


  • Firmware upgrade.  Being able to easily identify which devices need and are compatible with the upgrade and which devices were successfully upgraded.
  • Maintenance Audit.  Easily determine if there is agreement between devices installed and devices covered under the maintenance agreement.
  • End of Service.  Easily identify which devices are no longer vendor supported and should be replaced.  Has budget been requested and approved?  Have resources been scheduled to retire the device?




By following this best practice, you are taking an important first step toward holistically managing network configuration by creating a sound foundation for making informed decisions.   Of course, by doing this you will look like a genius and in the process begin to drive down the human error that leads to network downtime.  Both are great things to be recognized for and will certainly help in getting that next great job promotion.


In our next post we will explore the best practice of deploying standardized device configurations. In the meantime, review past posts on this topic (Part 1, 2) or download NCM and give it a try.


You can also find and read past posts in this 7-part series here


Post 1:

Post 2:

Post 4:

Post 5:

Post 6:

Post 7:

DOWNTIME IS NOT GOOD – not good for business, not good for IT and not good for employees. Every network administrator knows that much. Regardless of the size of the network and the type of business, downtime impacts productivity, disrupts business services, causes financial losses and certainly creates headaches for IT. Just so you are aware of the gravity of an outage situation, Gartner® pegs the average hourly cost of downtime for small to medium-sized business networks at $42,000[1]. The big question is: What causes network downtime (or outages)? In this blog, we’ll take a look at the various factors that play a key role in causing unplanned network downtime.


What Causes Network Downtime?

HARDWARE FAILURE IS THE NUMBER ONE REASON FOR NETWORK DOWNTIME[2]. There are so many interconnected hardware elements in the network that even if one critical component fails it could cause an outage. It could be a complete or partial failure of any number of devices, such as a router, gateway, network controller, etc.


Here is a list of all the major reasons why you could be facing network downtime:

  1. Faults, errors or discards in network devices
  2. Device configuration changes
  3. Operational human errors and mismanagement of devices
  4. Link failure caused due to fibre cable cuts or network congestion
  5. Power outages
  6. Server hardware failure
  7. Security attacks such as denial of service (DoS)
  8. Failed software and firmware upgrade or patches
  9. Incompatibility between firmware and hardware device
  10. Unprecedented natural disasters and ad hoc mishaps on the network, such as minor accidents, or even something as unrelated as a rodent chewing through a network line


One of the most critical among these is router failure due to a configuration change, which is very difficult to identify. According to research by the University of Michigan, 23 percent of total network downtime is attributed to router failure[3].


When a user complains, all you know (from help desk tickets, calls and escalations) is that they are not able to connect to the network. It’s up to you, the network administrator, to figure out if it’s a router failure, or if the internet link is down, or if it’s because of any of the above reasons. Then you have to fix the issue. It’s a two-step process: identifying the problem and then fixing it. Yes, network downtime is a hard nut to crack. But it doesn't have to be so hard if you have the right nutcracker.


Dilbert & Downtime



Continuous Network Monitoring

Continuous network monitoring is your downtime nutcracker. Network monitoring helps you gather data about the status of a network by polling network devices for availability and performance statistics. Once polled, you can use the data to infer what caused the downtime – which device, in which location, and when. Network monitoring software does this job for you and alerts you when there is downtime, device unavailability, performance issues or any deviation from an accepted network baseline. You can just focus on the fixing and not worry about figuring it out.
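The math behind this kind of polling is straightforward. Here is a minimal sketch, assuming a TCP connect stands in for the ICMP or SNMP probe a real monitor would use:

```python
import socket

def poll(device, port=161, timeout=0.5):
    """Return True if the device answers on the given TCP port (the
    assumption here is that a reachable management port implies 'up')."""
    try:
        with socket.create_connection((device, port), timeout):
            return True
    except OSError:
        return False

def availability(history):
    """Percent of poll cycles in which the device was up."""
    return 100.0 * sum(history) / len(history) if history else 0.0
```

Run `poll()` every cycle, append the result to each device's history, and `availability()` gives the statistic to compare against your baseline; anything below it is what the monitoring software would alert on.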


A network configuration management solution will help you monitor your network devices for any changes in configuration settings, and alert you in case of an unauthorized config change. And what’s more? It can allow you to roll back to an earlier state of known good configuration.
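A minimal sketch of the detection half of that idea, assuming device configs are available as plain text; Python's standard difflib stands in for a real change-detection engine:

```python
import difflib

def config_diff(baseline, running):
    """Unified diff between the last known-good config and the running
    config pulled from the device."""
    return list(difflib.unified_diff(
        baseline.splitlines(), running.splitlines(),
        fromfile="known-good", tofile="running", lineterm=""))

def changed(baseline, running):
    """True if the running config has drifted from the baseline."""
    return bool(config_diff(baseline, running))
```

When `changed()` fires for a config nobody authorized, the rollback step is simply pushing the stored known-good baseline back to the device.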


Network downtime – find it and fix it!


Learn More

Watch this video to learn further about finding the root cause of a network outage.




In our last post we talked about how muffing a device configuration could be one of those dreaded career limiting moves.  It’s easy to do. There are over 17,000 Cisco IOS commands.  How’s your average network admin to know the difference between Clock Rate vs. Bandwidth and Process-ID vs. ASN?   Well fortunately, there’s a better way if you’re willing to learn a few new skills.


How Best Practices Can Help

Many of us live in a “ready, fire, aim” world.  We are under tremendous pressure to get things done fast -- so fast that we don’t have time to aim first and then fire!  What does this have to do with network configuration? This – most technical problems aren’t solved using technology alone. Resilient solutions take a combination of technology and process.  While others are firing and missing, use this opportunity to step forward and propose some meaningful changes to your process.  A good place to look for process ideas is an IT governance framework like ITIL.  Most teams recognize the need for improvement and you’ll be well regarded for making needed recommendations.



Another helpful perspective is to adopt a holistic approach to solving the problem. Take a broader approach. Which leads us to our overlooked network configuration practices.





These practices support good process and will allow you to take an effective, comprehensive approach helping you to 1) identify devices and protect working configurations, 2) know what changed and when, 3) know when device configurations are out of compliance with standards and practices, and finally 4) recover from harmful changes or catastrophic failure quickly.  In summary, these practices will help you reduce human error, be more productive and improve network availability – the stuff great promotions are made of.


In our next post we will explore each of these practices in more detail and talk not only about specific objectives, but also specific actions to take to achieve these objectives.


In the meantime, if you missed our first post, you can catch up by reading it here or take an hour and install and evaluate the newest release of NCM 7.2 here.

You can also find and read other posts in this 7-part series here


Post 1:

Post 3:

Post 4:

Post 5:

Post 6:

Post 7:

In the VMworld 2013 keynote session in San Francisco Monday morning, VMware made a number of announcements, mostly related to software-defined data center (SDDC) capabilities.  While they focused a lot on networking, storage and hybrid cloud, the capability that will probably have the most impact in the short term is the increased horsepower of vSphere 5.5.  As CEO Pat Gelsinger described it, the latest version of vSphere has “2X” the horsepower of the previous version, with double the cores, double the number of VMs and more than double the memory.  VMware was clear about why they think this is important: they want to take away any barriers that prevent people from moving mission critical applications to the virtual environment.  While this will help accelerate the movement of these applications, the additional VM capacity is only one part of the story.  Storage and auxiliary capabilities like replication and high availability (HA) are also required to make many companies comfortable with virtualizing applications like their mission critical databases.


Fortunately, on the storage side there has been a lot of focus on addressing the storage I/O issues that have long been a limitation for virtualization.  Various solid state disk (SSD) technologies along with capabilities like what VMware, Microsoft and others are driving for software-defined storage are helping improve the storage side of things.

There are a number of sessions at VMworld addressing the other required capabilities like HA including specific sessions on SQL databases, virtualizing Exchange, Hadoop and SAP. This is being attacked from both the application vendors and VMware with at least the basic tools and a set of procedures that can be used to do things like perform a rolling software update with relatively low disruption to the application availability.


However, the impression across the board is that while it is possible to run and manage many of these mission critical apps from a compute, storage and HA point of view, it still isn’t easy or mainstream yet.  Many of the approaches discussed require a high degree of knowledge, fairly complicated procedures, higher-end and more expensive hardware and software, and a degree of compatibility across vendors that isn’t always there.  As a result, this is still somewhat of a leading-edge technology for many companies.  There is clearly enough interest and demand in the market to continue to drive rapid improvement and maturity across the board.  In the short term, many of these gaps can be patched over by skilled IT admins given the right tools. 


While more is needed from companies like VMware and the storage vendors to mature the mission critical application space, one of the simplest ways to start is by getting end-to-end visibility across the application stack.  The visibility provided by extending virtualization management to apps and storage arms skilled admins with the information they need to intelligently fill in the gaps that still exist between application, virtualization and storage management, ensuring that the end business service is not impacted.  That is exactly the feedback we got from the SolarWinds customer base when developing integration between the SolarWinds Virtualization Manager and Server & Application Monitor (SAM) products on top of the existing Storage Manager integration. The integration provides full visibility across the virtualized application stack, from the application and physical servers to the virtual environment all the way down to the storage LUNs and arrays.  In addition, SAM’s upcoming release will have some critical capabilities around database monitoring that will help complete the mission critical visibility story.


The net result: the broader ecosystem is moving closer to making virtualized mission critical applications mainstream and easier, but end-to-end visibility is here today.

Unpatched applications are perfect entry points for security attacks, making patch management one of the most critical processes in vulnerability management. You may be using Microsoft® System Center Configuration Manager (SCCM) for patch management, and it might work well for you – but only as long as they are Microsoft updates!


Microsoft applications aren’t the only ones that are vulnerable to threats. Third-party applications such as Adobe, Opera, Google or Mozilla, etc. can bring in more vulnerabilities than ever before. The consequences of exploitation of such vulnerabilities can have a devastating impact upon your IT environment. The recent Opera Security breach and Microsoft Yammer authentication vulnerabilities are great examples and should serve as cautionary tales.


As we all know, addressing threats from third-party applications can be time-consuming and disrupting in the ever-busy, day-to-day schedule of an IT administrator, especially if you’re relying on Microsoft-only management tools. For example, to manage non-Microsoft patches you need Windows Server Update Services (WSUS) and System Center Updates Publisher (SCUP) to import third-party software update catalogs. But the problem is that you get very few pre-built and tested packages for deployment with SCCM.



Time to leverage SolarWinds Patch Manager

SolarWinds Patch Manager extends the power of Microsoft® SCCM and WSUS to help you easily keep desktops, laptops, and servers patched and secure with the latest updates for both Microsoft and non-Microsoft applications. Very importantly, it provides you with pre-built, tested, and ready-to-deploy patches for common third-party applications so you can save hours of time while resting assured your environment stays patched and compliant.


During the installation of Patch Manager, it detects whether the SCCM console exists. If so, Patch Manager integrates with it directly. Once integrated, all of Patch Manager’s tools are available within the SCCM console (note: this integration is only visible on the server hosting the Patch Manager/SCCM console).

Here are some key advantages of using Patch Manager on top of SCCM:

  • Simplifies and streamlines patching of both Microsoft and third-party applications
  • Provides centralized and detailed visibility into application inventories and patch status
  • Enables easy automation of pre- and post-deployment actions without complicated scripting
  • Allows the use of the Wake-on LAN feature to perform offline patches
  • Delivers out-of-the-box patch compliance reporting—on-demand or scheduled


Also, make sure to check out - Top Reasons to maximize your SCCM investment with SolarWinds Patch Manager

Gear up, stay secure!!

We all know that whenever there is an application performance issue, the blame game between the network team and the application team goes into overdrive.  Network engineers say there's an application issue and sys admins blame it on the network.  So, how do you determine where the problem really is?  With integrated network, application and server monitoring.  Check out Extreme Visibility: Integrated Network, Application & Server Monitoring at networkmanagementsoftware.com.

There are many VoIP and video providers on the market today. One of them is Microsoft Lync. We would like to understand your plans for Lync in your environment. Do you plan to use it for VoIP, video, IM? Share your thoughts in our quick survey, and we'll enter you to win a $100 Amazon gift certificate. Thank you in advance for your time! Take the survey now.

With the latest release of VoIP & Network Quality Manager (VNQM), SolarWinds has taken VoIP monitoring to a totally new level, providing more advanced performance metrics that help simplify VoIP troubleshooting and capacity planning. Let’s explore what’s new in this release of VNQM 4.1, and how you’ll benefit from all the new functionality.


VoIP Monitoring in VNQM (before 4.1 release)

SolarWinds VNQM already provided a wide range of VoIP monitoring data on intuitive dashboards that gather and show:

  • VoIP Site details – shows all your call paths and VoIP infrastructure
  • VoIP CallManager details – shows VoIP phone connections and interface details
  • VoIP Phone details – from where you can search for all calls associated with a VoIP phone and see average/min/max availability and QoS metrics for a VoIP phone
  • VoIP Call details – shows call QoS metrics, path details and VoIP call signaling topology
  • VoIP Operation details – shows IP SLA operations test result metrics on VoIP UDP jitter
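The UDP jitter operation above measures variation in delay between packets. As a point of reference, here is a minimal Python sketch of the smoothed jitter estimate defined in RFC 3550, assuming a sequence of one-way delay samples in milliseconds (this illustrates the standard formula, not VNQM's internal implementation):

```python
def rfc3550_jitter(delays):
    """Smoothed interarrival jitter: for each consecutive pair of delay
    samples, move the estimate 1/16 of the way toward the new difference."""
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0   # exponential smoothing per RFC 3550
    return jitter
```

Perfectly steady delays yield zero jitter; a single 10 ms swing nudges the estimate up by 10/16 ms, which is why the metric reacts smoothly rather than spiking on every outlier.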


What’s New in VNQM 4.1: VoIP Gateway & PRI Trunk Monitoring


As discussed in an earlier blog announcing the Release Candidate, VNQM 4.1 now supports Cisco® MGCP PRI Gateway monitoring and collects PRI Trunk data for advanced VoIP utilization monitoring.

You can now benefit from:

  • Real-time VoIP utilization monitoring on Cisco MGCP  PRI Gateways
  • Detailed channel data distribution UI
  • Trunk data distribution and utilization charts for improved capacity usage information
  • Historical VoIP & data utilization statistics for gateways


In addition to the existing dashboard views (listed above), VNQM 4.1 now includes a new Gateway Details view for monitoring VoIP gateway details, which allows you to:

  • Monitor top call quality issues for each gateway
  • See a list of all failed calls related to a gateway
  • Track current percent utilization for each interface





Product Integration with SolarWinds Network Performance Monitor

Now VNQM integrates with SolarWinds Network Performance Monitor (NPM) allowing you to see real-time device network availability and throughput of your gateway routers and call managers that are monitored using VNQM.


Conversely, on the NPM Node Details Page, you now have an additional sub-view called “VoIP & Network Quality” that allows you to see VoIP and WAN performance from VNQM data along with the network performance metrics and charts in NPM.




Other Product Enhancements & Support:

  • Integration with Alert Central (FREE alert management and on-call scheduling software)
  • Support for Windows® 8 and Windows Server® 2012


Download VoIP & Network Quality Manager 4.1 today!



Learn More about VoIP

What is a VoIP Gateway?

A VoIP gateway is what converts telephony traffic from analog to digital (IP) and vice-versa. Traffic coming in from the public switched telephone network (PSTN) flows through a gateway and gets converted to digital packets (for transportation over a LAN or other IP-based network). The other way round is also possible when digital IP traffic is fed through a VoIP gateway for conversion back to analog so it can be transported out over the PSTN. Essentially, a VoIP gateway acts as a bridge between an IP network and the PSTN.[1]


What is PRI?

In the Integrated Services Digital Network (ISDN), there are two levels of service:

  • Basic Rate Interface (BRI) – intended for the home and small enterprise
  • Primary Rate Interface (PRI) - intended for larger users.

Both rates include a number of B-channels and a D-channel. Each B-channel carries data, voice, and other services. The D-channel carries control and signaling information.

The Primary Rate Interface consists of 23 B-channels and one 64 Kbps D-channel using a T-1 line, or 30 B-channels and 1 D-channel using an E1 line. Typically, a PRI user can have up to 1.544 Mbps of service on a T-1 line or up to 2.048 Mbps on an E1 line.[2]
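The line rates above follow directly from the channel counts; here is a back-of-the-envelope check in Python (every B- and D-channel is 64 Kbps; T-1 adds 8 Kbps of framing overhead, while E1 carries 32 timeslots including framing and signaling):

```python
def t1_pri_kbps():
    """T-1 PRI: 23 B-channels + 1 D-channel at 64 Kbps each, plus
    8 Kbps of framing overhead -> the 1.544 Mbps T-1 line rate."""
    b_channels, d_channels, framing = 23, 1, 8
    return (b_channels + d_channels) * 64 + framing

def e1_pri_kbps():
    """E1 PRI: 32 timeslots at 64 Kbps (30 B-channels plus the
    D-channel and framing slot) -> the 2.048 Mbps E1 line rate."""
    timeslots = 32
    return timeslots * 64
```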


What is PRI Trunking?

PRI trunking is a means of transmitting many different voice and data communications among multiple locations, allowing the addition of large numbers of voice lines (as many as 23 per trunk) at the same time.

When you decide to take time off of work to go on vacation or even plan to be out of office for a day, you must be thinking of a way to not get those dreaded phone calls and messages about server downtime and application issues.

False Alerts: Reasons Why You Get Them and How to Avoid Them

There are many reasons why your system may trigger alerts more frequently than normal. According to this recent post, many admins get "spam" alerts for a number of reasons. Here are a few examples:

  • Metrics that fluctuate frequently, such as CPU or memory utilization, can trigger alerts more often than most other system components.
  • You can get "spam" alerts from servers that are not in production or switches that have been decommissioned.
  • If your polling cycles aren't tuned to the right level of granularity, you might get a flood of alerts that will fill your inbox.
  • Not properly tuning threshold levels can lead to a sudden spike in alerts.


These are all valid reasons why you might receive a false or "spam" alert. What if a false alert is triggered while you're out of the office? You get the alert, you start making calls, and you get status updates from colleagues every few minutes to be sure the issue is resolved. When you come across such alerts, you tend to ask yourself a few questions: why do I get hundreds of alerts on a daily basis when things are running smoothly? Why am I getting an alert in the middle of the night? How do I optimize server functionality so I'm not bothered constantly?


Here are some ways to avoid these issues:

  • Set up alerts for components that you think are really going to impact your users or your business.
  • Establish well-defined threshold settings—this way you can optimize the kind of alerts you receive during the day and ensure that you’re not bothered after work hours.
  • Set the right dependencies to significantly lower the number of alerts you receive.
  • Define teams to look at specific alerts. This way you can forward issues to the right teams based on the severity of the alert.
  • Understand baseline trends to set more realistic thresholds.
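To make the dependency and after-hours tips concrete, here is a minimal, hypothetical sketch of an alert filter. The `WORK_HOURS` window and the alert dictionary shape are assumptions for illustration, not any product's API.

```python
# Hypothetical sketch: suppress alerts whose parent dependency is already down,
# and mute non-critical alerts outside working hours.
from datetime import time

WORK_HOURS = (time(8, 0), time(18, 0))  # assumption: your on-call window

def should_alert(alert, parent_down, now):
    """Return True only when an alert deserves a human's attention."""
    if parent_down:  # dependency rule: one root cause, one alert
        return False
    start, end = WORK_HOURS
    after_hours = not (start <= now <= end)
    if after_hours and alert["severity"] != "critical":
        return False  # only critical issues should page you at night
    return True

print(should_alert({"severity": "warning"}, parent_down=False, now=time(23, 0)))  # False
```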


Determine What to Monitor and Why

Most admins have to monitor hundreds of servers and applications. This means you’re probably dealing with plenty of alerts. Under these circumstances you’ll have to determine a few things.

  • Go over each metric and ask whether you really need to monitor it. If you have no defined response for reacting to the alert, you probably don't.
  • Talk to your business groups and understand what the impact will be. This will give you a sense of how monitoring metrics might affect the overall business.
    • You’ll know what they really care about and what they think are critical applications that need to be monitored.


Statistical Thresholds: A Better Way to Set Baseline Values

SolarWinds Server & Application Monitor (SAM) takes threshold-based alerting to a new level. One of the new features in version 6.0 is alerting based on statistical thresholds. Normally, you would have to monitor applications for several weeks to determine the optimum baseline for setting warning and critical thresholds. With the new Server & Application Monitor, threshold values can now be calculated and assigned automatically: SAM collects the data from the last 7 days (you have the option to change this setting) and determines the baseline values. You can then select your work hours, nights, and weekends. Based on the time of day, SAM calculates the baseline data for both day and night system performance (the option to set threshold values manually is still available).


In short, statistical thresholds allow you to:

  • Applying thresholds to templates, individual component monitors, and applications.
  • Understanding baseline statistics using standard deviation calculation for day and night system performance.
  • Gaining statistical insights into the performance metrics and how they vary over time. Look at how stats are collected for higher and lower threshold values of each metric.
  • Looking at baseline details before setting the right threshold values.
  • Setting the right threshold values using the built-in baseline calculator that calculates and applies the recommended threshold values for warning and critical stages for a specific metric.
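As a rough illustration of the idea (not SAM's actual algorithm), a baseline-derived threshold can be sketched as the mean plus a few standard deviations over the last week's samples:

```python
# Minimal sketch: derive warning/critical thresholds from a baseline of samples.
from statistics import mean, stdev

def baseline_thresholds(samples, warn_sigma=2, crit_sigma=3):
    """Warning/critical cutoffs at mean + 2 and 3 sample standard deviations."""
    mu, sigma = mean(samples), stdev(samples)
    return {"warning": mu + warn_sigma * sigma,
            "critical": mu + crit_sigma * sigma}

cpu_last_week = [38, 42, 40, 45, 41, 39, 44, 43, 40, 42]  # % utilization samples
print(baseline_thresholds(cpu_last_week))
```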


At some point, you will have to deal with "spam" alerts. The best way to go about it is to strike a balance between monitoring your application usage and setting the right threshold values. We believe that with the new Server & Application Monitor, you can adjust thresholds more dynamically and keep those alerts to a minimum.


Feel free to sign up and download the SAM 6.0 release candidate now to experience all the exciting new features.

The real impact of maintaining a good device lifecycle management process: saving time and mitigating vulnerability risks!

While network administrators are busy setting up networks, troubleshooting and resolving issues, and optimizing network performance, how can they squeeze in time to manage these tasks?

Every step in the device lifecycle process is just as critical as the next, whether it is adding and configuring a device, performing required software/hardware upgrades, ensuring security policy compliance, or, finally, gaining visibility into when equipment needs to be phased out.

So, what are the Benefits?

  • Maintain a standardized repository of network infrastructure elements
  • Enjoy clear visibility to understand the impact of each configuration element
  • Improve decision making and risk assessment
  • Increase operational efficiency and administer standard configurations for devices

The Actual Need

Comprehensive lifecycle management is a fundamental part of IT management and linked to this, is the pressing need for real-time network device configuration visibility. Equally important, is an inventory database that enables automated searches for end-of-life, end-of-service (EOL/EOS) devices and components.

Device inventory must be prioritized, remediated, and monitored from time to time. In a network with tens, hundreds, or thousands of devices, updating and maintaining device information is definitely not easy. Nevertheless, most organizations still manage to maintain some data, but it is mostly in bits and pieces and often outdated.
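The automated EOL/EOS search described above might look something like this sketch; the inventory entries, device names, and dates are invented for illustration.

```python
# Hypothetical sketch: scan an inventory for devices past, or approaching,
# their end-of-support (EOS) date.
from datetime import date, timedelta

inventory = [
    {"device": "core-sw-01", "eos": date(2014, 1, 31)},   # made-up entries
    {"device": "edge-rtr-07", "eos": date(2016, 6, 30)},
]

def eos_report(devices, today, horizon_days=180):
    """Split devices into already-expired and expiring-within-horizon lists."""
    expired = [d for d in devices if d["eos"] <= today]
    at_risk = [d for d in devices
               if today < d["eos"] <= today + timedelta(days=horizon_days)]
    return expired, at_risk

expired, at_risk = eos_report(inventory, today=date(2013, 12, 1))
print([d["device"] for d in at_risk])  # ['core-sw-01']
```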

The Solution

A tool that can automate and simplify device discovery, configuration, monitoring, and administration of all devices in the network. For more on device lifecycle management, take a look at our whitepaper on:


While reading a recent post on Slashdot, my mind was opened by some touching sysadmin poetry.  While not as lethal as Vogon verse, I promise it will alter your mood.



FTFY by Neo-Rio-101


I don't want my data in the cloud

I don't want my data in a crowd

I don't want my data on the net

I don't want my data on diskette

I don't want my data over there

I don't want my data everywhere

I know the spooks don't give a damn

I do not trust you Uncle Sam!







Farts In Their General Direction by MrMeval

I do not play with the cloud clowns.
I own my own hardware and software.
I do not walk in the valley of DRM.
I do not beg to receive the fruits of my labors from datachangers.
I shall not want.


(untitled) by TheGratefulNet


Do you like data in the cloud?

I do not want it in the cloud,

I would not like it since I'm proud.

Would you like it here or there?

I would not want it anywhere.

I do not like the loss of data,

Yes, you can call me a cloud-hater.




Enough with the Cloud Crap Already by (unknown)


I don't trust you with my data.
I don't trust your security.
I don't trust your longevity.
I don't trust that you at some point in the future won't hold my data hostage.
I don't trust you to keep my data away from big brother.

I also don't trust my ISP!

FINALLY, I don't want to wait all day for a file to load.


Open Mic Monday


Have you been inspired too?


If you're looking for software that allows you to deploy a controlled edition of popular cloud functionality like secure file sharing, you may want to check out applications like SolarWinds Serv-U that can be deployed on your own infrastructure.  Or, if you wrote your own ode to the cloud, lay it on us in the comments below.

Server & Application Monitor (SAM) has made major strides in the last two years with the introduction of hardware health monitoring, Java application and hypervisor monitoring, remediation, and a truckload of other capabilities.  With the SAM 6.0 release, this product has expanded its capabilities well beyond application and server monitoring and now includes functionality for specialized SQL monitoring and IT asset inventory management.


The response to this release we have seen thus far makes this product marketer want to cry……. with tears of joy.  Thank you aLTeReGo!


If you would like to view the webcast replay for a deep dive on some of the new features of SAM 6.0, check out the video below.  The video covers these features: AppInsight for SQL, the Threshold Baseline Calculator, Asset Inventory Dashboard and the Real-Time Event Log Viewer.  If you would rather download SAM 6.0 and try it out, you can sign up here if you are an existing SAM customer on maintenance.  If not, you can contact sales@solarwinds.com to obtain access to these new features.



If you only have a couple minutes, I encourage you to take a look at this very short video which highlights some of the problems you can solve with AppInsight for SQL.




Below are the questions asked during the Sneak Peek Webcast and the responses.


Q: Will SAM 6.0 get EoL data from the various server hardware vendors (i.e., HP, IBM, Dell, etc.)?
A: Correct. SAM 6.0 queries each vendor's internet web service for warranty status information. This requires the Orion server to have internet access.

Q: Also, is SAM available as standalone product or only as a module?  I am concerned that this app might cause my database to explode once we add our over 8,000 servers to it
A: SAM has been available as a standalone product since APM v4.2. You may want to consider leveraging AppInsight for SQL to analyze your Orion’s SQL performance issues. Performance could probably be improved by adding more spindles to the array where the Orion database or tempDB are located. As always, with MSSQL more memory always helps.

Q: Can a report be generated to show all expired hardware in a list?
A: Yes. SAM 6.0 even includes a new out-of-the-box report that utilizes the new Web Based Report Writer that contains this information.

Q: Will you need to add Asset Inventory to see iLOs and DRACs?
A: Yes. Reporting of out-of-band management cards such as Dell DRACs and HP iLOs is included as part of SAM 6.0's Asset Inventory.

Q: What versions of HP support tools or IBM Director are needed? The latest and greatest, or previous versions?
A:  Hardware information collected by Asset Inventory requires the following software provided by the hardware vendor.
• Dell PowerEdge server with OpenManage Server Administrator 7.2 or later
• HP Proliant servers with HP System Insight Manager v6.2 or later
• IBM xSeries servers IBM Director (Common Agent, v6.3 or later)

Q: No other software needs to be installed on the server, like Log Forwarder?
A: Asset Inventory requires vendor specific software be installed on the server for physical hardware components only. General server inventory information is available for all nodes, including virtual guests.

Q: If there are multiple disks, would it break out the disk statistics per disk or as a whole?
A: Yes. AppInsight for SQL will show all files that make up the database or transaction log, and the disk I/O for each drive those files are stored on.

Q: How do you handle Microsoft Server Clusters in AppInsight for SQL? Would you monitor the cluster virtual node or the real server?
A: It’s recommended that AppInsight for SQL be applied to the cluster VIP. You should also have each cluster member node managed/monitored in Orion.

Q: Is the custom asset information editable on the web interface? What permissions would someone need to edit it?
A: Custom Asset Information requires node management rights to create or modify.

Q: Do you get all the hardware information if you are only using SNMP to monitor Windows servers or does it require WMI?
A: For Linux/AIX hosts yes. For VMware ESX/ESXi hosts it’s recommended you poll those hosts directly using the “Poll for VMware” option for the highest level of detail. Windows hosts can be polled via SNMP, though some information is only available when the host is managed via WMI.

Q: Does this polling impact SQL?
A: AppInsight for SQL has very little impact on monitored SQL servers. Those components which are considered higher impact, such as index fragmentation, have fully configurable polling intervals. The default polling intervals for these components are also not the standard 5-minute interval; index fragmentation, for example, is configured to poll once per hour.

Q: What is the default interval for AppInsight?
A: AppInsight for SQL uses SAM’s standard 5 minute polling interval, though some information is polled as infrequently as once an hour to limit AppInsight’s impact on the monitored SQL server.

Q: I understand that AppInsight is the beginning of a new era for SAM. What's in your roadmap for other applications?
A: AppInsight for SQL is the first of many applications we’d like to support, though no specific roadmap currently has been defined. As with all features, the order in which we implement applications support will be dictated by user demand.

Q: Is there any new features regarding Exchange Server monitoring? and does SAM support Exchange 2013?
A:  SAM 6.0 does not include any new Exchange specific features, but we do have pre-release Exchange 2013 application monitoring templates available on Thwack. When they’re officially released they will be available for download through the Content Exchange.

Q: any changes to other AM like Exchange?
A: SAM 6.0 does not include any new Exchange specific features, but we do have pre-release Exchange 2013 application monitoring templates available on Thwack. When they’re officially released they will be available for download through the Content Exchange.

Q: Is there something new for Oracle for SAP Application?
A: No changes have been made in this release for Oracle. If you’re looking for SAP support I recommend you check out SAPOrion.

Q: Is there a minimum release of NPM you have to be running to upgrade from 5.0 to 6?
A: If you’re running NPM on the same server as SAM you’ll first need to upgrade to NPM 10.6 before upgrading to SAM 6.0. You can also upgrade directly from SAM 5.0 to 6.0

Q: Upgrade path, Can you go from 5.0 to 6 without having to go to 5.5?
A: Yes, you can upgrade directly from SAM 5.0 to 6.0

Q: Will there be any licensing or pricing changes with the release of 6.0?
A: No licensing or pricing changes are planned for the SAM 6.0 release.

Q: Is AppInsight only for SQL? or does it work with Oracle?
A: AppInsight for SQL supports only Microsoft SQL at this time.

Q: I would like to see this type of webinar for SAM itself, not just AppInsight. Is this available for release 6?
A: If you currently own SAM, you can sign-up to download the SAM 6.0 Release Candidate. If you don’t currently own SAM you can contact SolarWinds Sales and they can provide you links to download the SAM 6.0 pre-release.

Q: also would like to know the release GA date for SAM6
A: A release date has not yet been made official. Though you can sign-up to download the SAM 6.0 Release Candidate which is fully supported and can be upgraded directly to the GA release when it’s released.

Q: So is the Real-Time Event Log viewer going to be an additional cost or is it included in SAM 6?
A: There is no additional cost associated with the Real-Time Event Log Viewer. It’s included as part of SAM 6.0, similar to the Real-Time Process Explorer and Windows Service Control Manager.

Q: Is AppInsight part of SAM?
A: Yes. AppInsight for SQL is included as part of SAM 6.0

Q: Is it possible to update Warranty info that is auto populated?  we buy support for our HP servers through a third party
A: Typically, HP support purchased through 3rd parties is the same support purchased directly. If your support status is accurately reflected on HP’s warranty status website, then it will be shown correctly in SAM’s Asset Inventory.

Q: does the warranty information come from hp/dell?
A: Correct. Warranty information is polled directly from Dell, HP, and IBM’s internet web services.

Q: do you need sql credentials to get the sql data?
A: AppInsight does require valid credentials to connect to the SQL server via the SQL protocol to collect performance information related to the SQL server and databases. Both local SQL and Windows credentials are supported by AppInsight for SQL.

Q: How does AppInsight affect license usage? Does it count as a single component?
A: AppInsight for SQL consumes 50 component monitor licenses for each monitored SQL Server instance.

Q: Will the Asset and Inventory data points be available in Custom Reports?
A: Yes, Asset Inventory information is available for Custom Reports. In fact, several out-of-the-box reports are included in SAM 6.0 that are built on Orion’s new Web Based Report Writer.

Q: Is an AppInsight module for Oracle on the roadmap?
A: We are currently considering adding AppInsight support for several different applications but as with all things at SolarWinds, user demand will dictate the roadmap.

Q: do you have to have Patch Manager
A: No SAM 6.0 features are dependent upon Patch Manager.

Q: when is the 6.0 release date ?
A: A release date has not yet been made official. Though you can sign-up to download the SAM 6.0 Release Candidate which is fully supported and can be upgraded directly to the GA release when it’s released.

Q: so enterprise 2005 this will not be useful ?
A: AppInsight for SQL supports SQL 2008, 2008R2, and SQL 2012. While AppInsight has been reported to work with SQL 2005, it is not officially supported.

Q: We run SAM 5.0, can we upgrade right to 6.0?
A: Yes, you can upgrade directly from SAM 5.0 to 6.0.

Q: Can you add devices that are in stock and not on the network?
A: Asset Inventory in SAM 6.0 is tied to nodes managed in Orion. Theoretically it would be possible to temporarily manage a device that's in stock and then unmanage that node essentially indefinitely to store and report upon that asset. It's not, however, possible to manually key in node/asset information for a node that has never been managed by Orion.
Q: sorry - how does warranty status get populated again..?
A: SAM 6.0 queries each vendor's internet web service for warranty status information. This requires the Orion server to have internet access.

Q: Will disk queue or I/O require that the node be monitored via WMI specifically, or does the application of the WMI template allow access to those metrics?
A: AppInsight collects the majority of its information via SQL, though some information, such as disk I/O, requires that the node be managed via WMI.

Q: can advanced alerts be setup for asset inventory data..?
A: Alerts based on warranty information can be created. In fact, we include this alert pre-configured out-of-the-box.

Q: can baseline thresholds be used for CPU and Memory, or just SAM elements..?
A: The threshold baseline calculator is available for any application component monitor that returns a statistical value that can have a warning or critical threshold defined.

It’s always been a difficult task to maintain a standardized IT policy when it comes to deploying patches in your environment. IT admins try to avoid vulnerabilities and security issues, but the scene never changes no matter how loudly they plead with users to keep their systems updated. There are several reasons why it’s tough for IT admins to achieve patch remediation smoothly, and one of them, surprisingly, is the users themselves, who never show the slightest interest in keeping their systems and third-party applications patched and updated to the latest versions.


Watch the video below, where Larry the IT pro dishes out the truth about users in his environment.



Like Larry says, “Rules are rules. Why do we have them, if we are not going to follow them?” IT policies, patch updates, compliance – all these are fine. When users are not going to follow them, IT is supposed to do that for them. Bummer!



Tips to Make Your Systems Secure!

It’s crucial for any IT admin to ensure that all applications are up to date, that systems are secure without vulnerabilities, and that the organization’s IT compliance standards are met. Admins should be aware of the latest updates from their software vendors and deploy patches accordingly. An effective patch management process would involve:

  • Auditing systems for identifying missing patches and vulnerable systems
  • Deploying updates systematically in order to eliminate application vulnerabilities in your endpoints.
  • Automating the patch management process to ensure the operating systems and third-party applications are patched in a timely fashion.
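The audit step above can be sketched as a simple set comparison between the patches a system should have and the ones it reports; the patch IDs and host names below are invented for illustration.

```python
# Hypothetical sketch: find systems missing required patches.
required = {"KB2871997", "KB2868725", "FLASH-11.8"}  # made-up patch IDs

systems = {
    "web-01": {"KB2871997", "FLASH-11.8"},
    "db-01": {"KB2871997", "KB2868725", "FLASH-11.8"},
}

def audit(systems, required):
    """Map each vulnerable system to its sorted list of missing patches."""
    return {host: sorted(required - installed)
            for host, installed in systems.items()
            if required - installed}

print(audit(systems, required))  # {'web-01': ['KB2868725']}
```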



Benefits of Centralized & Automated Patch Management

  • Simplify patch management process with the help of automation, and control patch management from a central location without having to do it individually for each user and application.
  • Decrease security risks & service performance degradation by controlling when & where patches are applied
  • Pass audits and demonstrate compliance with out-of-the-box reports and dashboard views



SolarWinds Patch Manager allows you to patch both Microsoft® and non-Microsoft third-party applications with more visibility, control, and reporting from the simplicity of a single central intuitive interface. Patch Manager gives you key management capabilities that help you simplify the entire patch management process from patch notification, to import/synchronization, to publishing, approvals, deployment, scheduling, reboots, and more.

Even though it’s simple to reset a user account password, for an IT administrator like you, who has a ton of other critical stuff to do, it takes time. And, it’s definitely no fun at all considering the number of help desk tickets that you resolve for this task each day.

Why Do These Annoying Passwords Get Locked?

More often than not, you’ll have issues with passwords because one of the following might have happened.

  • Your end-users have simply forgotten their password, and entered it incorrectly enough times to get the account locked.
  • End-users may have forgotten to change their system password, which has now expired. Now, it has to be reset.

Watch this video, and you may be able to relate to Pete’s frustrations on what he has to deal with every day.

Does it resemble what happens in your life?

Tips to Avoid Password Reset Scenarios

  • Set up automatic password change reminders that prompt the end-users in advance of password expiry.
  • Provide self-service options to end-users to reset password using a Web interface.
  • Have KB articles built into your help desk software so that the user gets tips to reset the password on their own.
  • Institute an automated system in place that can help reset AD password automatically when a user is locked out of their account.

The operational hurdle is one. Then, there’s the security threat. Do you know if unauthorized users are trying to break into network workstations? How will you detect that and keep your servers and workstations secure?


Security Tips to Monitor User Logon Actions

  • Look out for multiple incorrect password attempts. They can indicate a security breach or an unauthorized user trying to access the system.
  • Get visibility into logs from your domain controller and user workstations. This will show you the number of password attempts made, when they happened, which systems show suspiciously repeated incorrect password entries, etc.
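The second tip can be sketched as a small log-correlation routine; the event tuples below stand in for parsed log entries and are purely illustrative.

```python
# Hypothetical sketch: flag hosts with suspiciously many failed logon attempts.
from collections import Counter

# assumed shape: (host, event) tuples parsed from your domain controller's log
events = [
    ("WKSTN-12", "failed_logon"), ("WKSTN-12", "failed_logon"),
    ("WKSTN-12", "failed_logon"), ("WKSTN-12", "failed_logon"),
    ("WKSTN-04", "failed_logon"),
]

def suspicious_hosts(events, threshold=3):
    """Return hosts whose failed-logon count meets or exceeds the threshold."""
    fails = Counter(h for h, e in events if e == "failed_logon")
    return [host for host, n in fails.items() if n >= threshold]

print(suspicious_hosts(events))  # ['WKSTN-12']
```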

The Solution: SolarWinds Log & Event Manager

These tips will be of use especially for someone like Pete (and you of course!). SolarWinds Log & Event Manager (LEM) addresses both the operational and security perspectives of password lockout issues.

  • LEM has built-in Active Responses to reset passwords automatically.
  • LEM collects and correlates logs from various entities such as the AD domain controller and user workstations to alert you when suspicious user logon actions are recorded on systems.


LEM WP.jpg



Network configuration management could be described as “hours of boredom punctuated by moments of terror.” Certainly it’s no fun to remote into a couple hundred routers or switches and manually enter config commands. And it’s absolutely terrifying to watch one of those teeny-tiny changes ripple into a full-blown network disaster! But it can happen all too easily. Unfortunately getting fingered for having fat fingers doesn't improve your career prospects either. But rest easy. This doesn't have to happen to you.

This seven-part series will explore frequently overlooked, yet proven and highly effective, network configuration practices that will help keep your network humming and your users happy, and possibly make you the stuff of which career fast-track legends are made.

The Problem

Today we’ll start by talking about why network configuration errors are the leading cause of network downtime. Next week we’ll explore what needs to be done, and in the remaining posts we’ll dive into the specific practices that too many network engineers and admins overlook at their own peril. So let’s begin.

It’s a well-publicized fact that the number one cause of network failure is human error – the kind of error that results in device misconfiguration and produces 80% of network downtime. One thing this statistic makes clear is that we all need to work on eliminating human error.




You might be asking yourself, “What’s the big deal if the network is down for 5 minutes here and 10 minutes there?” Simply put, all of those many small outages add up to one huge expense.

If you've been working to improve network availability, then you know how difficult it can be to achieve 100% network uptime. While an annual uptime of 99.9% is good, it still represents about nine hours of network downtime a year.  And with downtime costs ranging from $100K to $300K per hour, this represents $900,000 to $2.7 million a year in unnecessary expense. Not exactly packet change.
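The arithmetic behind those figures is easy to reproduce:

```python
# 99.9% uptime leaves ~8.8 hours of downtime a year; at $100K-$300K per hour
# that is roughly $0.9M-$2.6M in annual cost.
HOURS_PER_YEAR = 365 * 24  # 8,760

def downtime_cost(uptime_pct, cost_per_hour):
    """Return (downtime hours per year, annual downtime cost)."""
    downtime_hours = HOURS_PER_YEAR * (1 - uptime_pct / 100)
    return downtime_hours, downtime_hours * cost_per_hour

hours, low = downtime_cost(99.9, 100_000)
_, high = downtime_cost(99.9, 300_000)
print(round(hours, 1), round(low), round(high))  # 8.8 876000 2628000
```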




If network availability is so important, then why can’t more organizations achieve 100% uptime?

There are three primary reasons why: lack of standardization, sheer quantity and diversity of devices, and the complexity of the configuration command set.

Today’s networks are large, very complex, and may utilize thousands of network devices including firewalls, routers, switches and more. To make things even more complicated, network devices can come from a variety of vendors and each has its own unique rule set. Furthermore, many devices use a remote command line interface (CLI) where each command must be entered separately. On top of that, many devices use hundreds of complex command statements. (Did you know there are roughly 17,000 Cisco® IOS commands?) Finally, there is no end-to-end view.  Each device is administered separately without any insight into how a change to a firewall can affect a down-line router or switch.




Like an intricate mosaic, your network has many tiny pieces which all must fit and work together perfectly.  There is no margin for error.  If everything doesn’t work just right, then your network breaks and when it does the business is also broken.

What has been your experience? Leave a comment and add to the discussion. In our next post, we’ll talk about what the ideal solution is and introduce often overlooked best practices that can help improve network availability.

In the meantime, check out this video showing how a Cisco network engineer uses SolarWinds Network Configuration Manager (NCM) to make complex network changes easily and accurately or Download and evaluate a 30-day fully functioning version of NCM today.

You can also find and read past posts in this 7-part series here


Post 2:

Post 3:

Post 4:

Post 5:

Post 6:

Post 7:


Well, it’s true! Normally tough network admin Vernon here is one such soul who had to check himself into IT group therapy. Unruly users taking advantage and indulging in malicious network activity were too much for him to handle. They sent the poor chap right off his rocker!

  • Endless Nagging – For more network bandwidth and speed
  • Uncontrollable Acts – Plugging in their personal devices anywhere, anytime
  • Wild Behavior – Setting up their own wireless network without consent
  • Roguish Conduct – Streaming videos and downloading files


Struggling to track and stop these impish culprits, Vernon has been having a hard time dealing with security loopholes and access leaks in the network.

Click on the video below to hear Vernon's side of the story.


Can Vernon save his sanity and show them “Who is Boss?”


The answer is yes, and you don’t have to drive yourself nuts dealing with these network wrongdoers. Take charge and show them who’s tough. Support yourself with the necessary measures and means to know who is connected.


Tips to Monitor, Track and Control Devices on your Network:

  • Create means to know who and what connects to your network – retrieve and maintain up-to-the-minute information on all devices entering and leaving the network.
  • Keep a watch on suspicious devices in the network – tag a suspicious device or user and be alerted every time that device connects to the network.
  • Build alerting methods and quickly pull out information to locate a device – create whitelists, be alerted on unauthorized device entry, and track down a device right to its switch port.
  • Restrict and monitor Wi-Fi access – create wireless device whitelists and keep an eye on unauthorized endpoint entry.
  • Maintain data to track device usage history – investigate a suspicious device's previous connections and determine if it's rogue.
  • Set up mechanisms to immediately detect and remediate rogue activity – identify the switch port to which the rogue device is connected and remotely shut down the port to immediately stop activity.
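The whitelist tips above boil down to a set comparison; this sketch uses invented MAC addresses for illustration.

```python
# Hypothetical sketch: compare MACs seen on the network against an approved
# whitelist and report the unknowns for follow-up (e.g., port shutdown).
whitelist = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}  # approved devices

def rogue_devices(seen_macs, whitelist):
    """Return the sorted list of MACs that are not on the whitelist."""
    return sorted(set(seen_macs) - whitelist)

seen = ["00:1a:2b:3c:4d:5e", "aa:bb:cc:dd:ee:ff"]
print(rogue_devices(seen, whitelist))  # ['aa:bb:cc:dd:ee:ff']
```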


So, no more fear or worry. Take control of the devices in your network!

Tune in to hear other network admins share their experiences at SolarWinds Security Week (Aug 19-23, 2013).


For more tips check the following:

  1. 3 Simple Steps to take Charge of Your Network Access Security - eBook
  2. Detecting and Preventing Rogue Devices - Whitepaper

Now that SolarWinds has released its own ad hoc file sharing solution, where does it fit in an increasingly crowded market?


What Exactly IS File Sharing?


When we're talking about file sharing, we're talking about the ability for individual end users to send files to other people: literally to ANYONE with an email address, at ANY TIME they feel like it.


Ad Hoc File Sharing Diagram

We're also talking about the ability for end users to REQUEST that other people send them files.  In both cases, an email with a link to files or a link to a site where someone else can upload files is essential.


To Cloud Or Not to Cloud?


The first thing you'll note about SolarWinds' file sharing offering is that you don't "sign up" for it - instead you buy a copy of Serv-U MFT Server, install it in your datacenter, plug it into your Active Directory, and roll it out to your end users.  You are certainly welcome to try our file sharing offering online, but you deploy it in your own infrastructure - under your complete control.


Saving Serious Money


OK, so there are dozens of online file sharing offerings and even a few other server products like Serv-U that you deploy onsite.  So what can Serv-U MFT Server do for you that no other solution can?  The answer is SAVE MONEY.  Unlike almost every cloud offering and many on-premise offerings, Serv-U MFT Server lets an unlimited* number of end users share files and request files with anyone.


In practical terms, Serv-U makes it easy to calculate an ROI against almost any hosted offering on a per user basis.  For example, you can easily create a "what's a better deal" chart based on Serv-U's current retail price of $2995**, Serv-U's annual renewal charge of $599**, and a three-year timeframe (e.g., total Serv-U MFT Server cost of $4,193 or $1,398/yr over three years).


Serv-U File Sharing ROI vs. Hosted Offerings (Over 3 Years)


Cost per user per month | 5 Users | 10 Users | 15 Users | 20 Users | 25 Users
$5 | not Serv-U (pay $300/yr) | not Serv-U (pay $600/yr) | not Serv-U (pay $900/yr) | not Serv-U (pay $1,200/yr) | Serv-U ROI: $102/yr (vs. $1,500/yr)
$10 | not Serv-U (pay $600/yr) | not Serv-U (pay $1,200/yr) | Serv-U ROI: $402/yr (vs. $1,800/yr) | Serv-U ROI: $1,002/yr (vs. $2,400/yr) | Serv-U ROI: $1,602/yr (vs. $3,000/yr)
$15 | not Serv-U (pay $900/yr) | Serv-U ROI: $402/yr (vs. $1,800/yr) | Serv-U ROI: $1,302/yr (vs. $2,700/yr) | Serv-U ROI: $2,202/yr (vs. $3,600/yr) | Serv-U ROI: $3,102/yr (vs. $4,500/yr)
$20 | not Serv-U (pay $1,200/yr) | Serv-U ROI: $1,002/yr (vs. $2,400/yr) | Serv-U ROI: $2,202/yr (vs. $3,600/yr) | Serv-U ROI: $3,402/yr (vs. $4,800/yr) | Serv-U ROI: $4,602/yr (vs. $6,000/yr)
$25 | Serv-U ROI: $102/yr (vs. $1,500/yr) | Serv-U ROI: $1,602/yr (vs. $3,000/yr) | Serv-U ROI: $3,102/yr (vs. $4,500/yr) | Serv-U ROI: $4,602/yr (vs. $6,000/yr) | Serv-U ROI: $6,102/yr (vs. $7,500/yr)
$30 | Serv-U ROI: $402/yr (vs. $1,800/yr) | Serv-U ROI: $2,202/yr (vs. $3,600/yr) | Serv-U ROI: $4,002/yr (vs. $5,400/yr) | Serv-U ROI: $5,802/yr (vs. $7,200/yr) | Serv-U ROI: $7,602/yr (vs. $9,000/yr)
$35 | Serv-U ROI: $702/yr (vs. $2,100/yr) | Serv-U ROI: $2,802/yr (vs. $4,200/yr) | Serv-U ROI: $4,902/yr (vs. $6,300/yr) | Serv-U ROI: $7,002/yr (vs. $8,400/yr) | Serv-U ROI: $9,102/yr (vs. $10,500/yr)
$40 | Serv-U ROI: $1,002/yr (vs. $2,400/yr) | Serv-U ROI: $3,402/yr (vs. $4,800/yr) | Serv-U ROI: $5,802/yr (vs. $7,200/yr) | Serv-U ROI: $8,202/yr (vs. $9,600/yr) | Serv-U ROI: $10,602/yr (vs. $12,000/yr)


* = OK, not quite unlimited; hardware and performance will eventually limit the number of users that a single Serv-U can support.  But at least there are no limits built into the software or licensing. 

** = US pricing - legal also makes me say "subject to change"
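If you want to run the same break-even math for your own user counts and pricing, the arithmetic behind the table is just a few lines of Python. The license and renewal figures are the retail prices quoted above, and the three-year amortization matches the $1,398/yr figure:

```python
def servu_annual_cost(license_price=2995, renewal=599, years=3):
    """Average annual cost of Serv-U over the amortization window:
    one license purchase plus renewals in the remaining years."""
    total = license_price + renewal * (years - 1)   # $4,193 over 3 years
    return round(total / years)                     # ~$1,398/yr

def hosted_annual_cost(users, per_user_per_month):
    """Annual cost of a per-user, per-month hosted offering."""
    return users * per_user_per_month * 12

def roi_per_year(users, per_user_per_month):
    """Positive means Serv-U saves money vs. the hosted offering."""
    return hosted_annual_cost(users, per_user_per_month) - servu_annual_cost()

print(roi_per_year(25, 5))   # -> 102, matching the table's $5 / 25-user cell
```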

Flash drives and other USB removable media are common in any workplace, but on closer inspection they are a potential internal security threat. In the past few years, many organizations have traced the loss of sensitive or confidential information back to the use of USB drives and other mass storage media. Cyber-security breaches and data theft are making IT leaders more paranoid about security than ever before.


Do You Know How Data Leaves Your Organization?

  • When your employees plug in USB devices to back up their data without involving the IT team
  • When a disgruntled employee decides to copy sensitive information and tries to leak it externally
  • When an employee unknowingly plugs in a USB device carrying malware, which can automatically trigger a script or code to install or run on your system and steal data
  • When BYOD policies are used irresponsibly, especially when personal devices are used as mass storage devices to transfer data

The Impact: You Get Hurt Pretty Bad!

The loss or theft of sensitive information is not limited to emails and contacts; it can extend to more sensitive assets such as:

  • Copyrighted material
  • Intellectual property data
  • Data covered by compliance regulations
  • Access codes and secure login credentials


All of these severely impact the victim organization, both financially and reputationally.


Watch this video where John, owing to his past experience as a victim of USB data loss, gets paranoid about sharing any information.



Don’t Allow Data to Walk Out of the Door – Tips to Stay Secure!

Here are some tips to ensure you keep your data protected on your network, servers and workstations.

  • Set up access rules and policies so only authorized employees have USB access
  • Revoke employees' access to sensitive information once its purpose has been fulfilled
  • Do not leave old or unattended data on end-user systems
  • Build a strong BYOD usage policy and disallow the use of employee-owned handheld devices as mass storage for data transfer
  • Monitor the log activity of all your enterprise workstations and USB endpoints

How Does Log Management Help?

Continuous log monitoring of your IT infrastructure will help collect logs from all your workstation endpoints and trigger real-time alerts to notify you of USB activity on the network. With automated incident response available in log management tools, it’s easier to take preventative action and automatically disable USB connection in real time.
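As a sketch of the idea, the core of such a USB-activity rule is only a few lines. The record format, event name, and allow-list below are illustrative assumptions, not any particular product's schema:

```python
ALLOWED_HOSTS = {"WS-FINANCE-01"}  # workstations approved for USB access (illustrative)

def usb_alerts(records):
    """Yield an alert for each USB mass-storage insertion
    recorded on a host that is not on the allow-list."""
    for rec in records:
        if (rec.get("event") == "usb_mass_storage_inserted"
                and rec.get("host") not in ALLOWED_HOSTS):
            yield f"ALERT: USB device plugged into {rec['host']} at {rec['time']}"

records = [
    {"event": "usb_mass_storage_inserted", "host": "WS-SALES-07", "time": "09:14"},
    {"event": "usb_mass_storage_inserted", "host": "WS-FINANCE-01", "time": "09:20"},
]
for alert in usb_alerts(records):
    print(alert)  # only the WS-SALES-07 insertion is flagged
```

A real log management tool feeds rules like this from continuously collected endpoint logs, and can go one step further by disabling the USB connection automatically.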

Watch out for more posts and security tips throughout this week!


Learn More:

You don’t need to believe in Murphy’s Laws to realize that one or all of the following will happen, just at the right time:


  1. Server failure
  2. Application failure
  3. SQL crashes


Now, multiply each of the above by the number of servers, applications, and SQL databases in your IT environment. Nightmarish, right? As much as you'd hate to accept it, these failures are bound to happen, however careful and watchful you may be.


Of course, your IT environment would have a help desk solution that caters to the IT support personnel in accepting and resolving incoming issues. If there’s a server failure, the system administrator can notice the glitch and start fixing it based on the severity and application dependencies. In the meantime, those users/groups affected by this issue will create help desk tickets. The fact is, the help desk professional might not even know that there’s a server issue and that a system administrator is working on resolving it. Did you notice the disconnect between the server failure and the help desk?


Imagine this. A server monitoring system detects a failing server, and…




…all of the above happening in real-time! Incredible, isn’t it?




SolarWinds® Web Help Desk™, with its powerful asset discovery and management, does exactly this when natively integrated with SolarWinds Server & Application Monitor (SAM). Once you establish a connection to the server node that's being monitored by SAM, you can define simple rules to filter alerts based on parameters such as severity, alert field messages, etc.


Once integrated, communications between Web Help Desk and SAM are bi-directional. This means that whenever a ticket created from an incoming alert is assigned or updated by a help desk technician, the change is visible from within the SAM console. This ensures a closed loop between these two otherwise disparate systems, which makes the life of the help desk professional a lot easier!


Web Help Desk not only integrates with SAM, but also with SolarWinds Network Performance Monitor (NPM), making it a simple, yet powerful help desk software.


What are you waiting for? Take a test drive today!

SolarWinds Firewall Security Manager (FSM) is a powerful tool for analyzing firewall configurations and logs to isolate redundant, covered, and unused rules and objects. Without touching production devices, FSM can also model how a new rule, or change to an existing rule, will affect your firewall policy.  FSM simplifies firewall troubleshooting for your multivendor, Layer 3 network devices, and helps fill gaps in your security rules.

Importing Configurations from Network Configuration Manager

After you set up Network Configuration Manager (NCM) to monitor your device configuration changes, and have installed FSM, you can import the device configuration files from NCM into FSM.

To import the device configuration files from NCM into FSM:

In the menu bar, click Add Firewall.



The Import New Firewall window opens.



Choose Import from NCM Repository, then click Next.
The NCM Repository Connection Parameters window opens.


Enter the NCM Server URL, your Username and Password, then click Next.

Choose the device configurations to import into FSM.

After you choose device configurations to import, click Finish.
FSM begins the import process.

When the import process is complete, the imported device configurations are available on the FSM Firewall Inventory tab.


You can now start analyzing the configurations you imported from NCM.

Generating Change Scripts to Run from NCM

To make changes to these configurations in FSM, start a change modeling session.

To start a change modeling session:

In the Firewall Inventory, choose the device.

In the menu bar, click Analyze Change > New Change Modeling Session.

A new Change Modeling Session opens.


You are ready to make changes to the configuration, and test the changes offline. Testing offline enables you to see the effect of your changes before you put them into production.

After you make changes to your configuration, click Generate Change Scripts.
FSM generates a new Change Script. This script includes all the actions required to implement the changes you made to the specified device configuration.

Use NCM to load the change script. Choose the target device, and click Execute.



Together NCM and FSM make a great team to efficiently manage your device configurations!

For more details on Change Modeling, Virtual Packet Tracing, and other Firewall Security Manager functionality, visit our other resources on Thwack, the SolarWinds community.

Network administrators have never been confined to working on networks alone. A huge number of small and medium-sized organizations prefer to have a single admin take care of both the network and the applications as part of their cost-cutting measures. But even with one person taking care of it all, enterprises ask for 99.9% uptime!



Application monitoring can sometimes become a nightmare for admins. To ensure application performance, you need to keep an eye on both the server and the application, in addition to the network. You need to track CPU, disk performance, system load, memory, packet loss, response time, etc. on the server side, and then database connections, running processes and threads, service status, queries per second, and more for the application. It does not end with collecting stats alone - you also need to be able to correlate the stats collected from different elements and quickly spot issues before users start complaining. Working from guidelines alone is not always effective and can be time consuming.  Let us look at the factors that can help ensure better application performance.
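Correlating the server-side and application-side stats largely means lining up samples by time and looking for co-occurring spikes. Here is a minimal sketch of the idea; the metric names and thresholds are illustrative, not prescriptive:

```python
def correlated_incidents(cpu_pct, response_ms, cpu_limit=90, resp_limit=500):
    """Return the sample indexes where high CPU coincides with slow
    responses, suggesting a server-side bottleneck rather than the network."""
    return [i for i, (c, r) in enumerate(zip(cpu_pct, response_ms))
            if c >= cpu_limit and r >= resp_limit]

cpu_pct     = [35, 42, 95, 97, 40]      # % utilization, one sample per polling interval
response_ms = [120, 130, 650, 700, 140]  # application response time at the same instants
print(correlated_incidents(cpu_pct, response_ms))  # -> [2, 3]
```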

Real-Time Visibility into Critical Elements:

Business-critical applications like CRM, Citrix, ERP, etc. need continuous monitoring of both server and application elements. Critical applications are used by hundreds of users across the organization, with processes like adding, modifying, or deleting data, updates, backups, etc. running at all times. To ensure uptime, you need to make sure the server is never overloaded; any lack of resources on the server creates a bottleneck that makes the application slow. Hence, holistic, real-time visibility into your critical application servers is necessary.

Proactive Alerting:

How can you ensure zero downtime? Proactive monitoring is the answer. You need a monitoring tool with good analytical and alerting functionality; you should not end up waiting for end users to pinpoint the problem. Analyze applications and their normal behavior, and set thresholds that alert you before a change in behavior turns into a problem. Make sure the alerts can reach you at all times, wherever you are. Some monitoring tools even set thresholds automatically based on observed behavior and alert on those thresholds. And when you create your alerts, remember not to overdo it and find yourself with a flooded mailbox.
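Tools that set thresholds automatically from behavior typically derive them from a baseline, most simply as the mean plus a few standard deviations. A sketch of that rule, where the multiplier of 3 is an illustrative choice:

```python
import statistics

def dynamic_threshold(baseline_samples, k=3):
    """Alert level = baseline mean + k standard deviations, so alerts
    fire only well outside normal variation (no flooded mailbox)."""
    return statistics.mean(baseline_samples) + k * statistics.stdev(baseline_samples)

baseline_ms = [100, 110, 105, 95, 100, 108, 102]  # a week of normal response times
print(round(dynamic_threshold(baseline_ms)))       # alert only above ~118 ms
```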

Learn from Outages:

Outages are bound to happen in spite of your best efforts. Understand why an outage occurred and why you did not see it coming. Most admins leave their monitoring tool as-is after an outage, believing the tool simply could not have caught it. That is not always the case - look at the thresholds and alerts you created and try to understand why the scenario was missed. Create alerts so that if such a scenario recurs, you are not caught off guard. You should also think ahead about problems that could occur and create alerts for them.

Read your Reports:

Reporting not only helps identify problems but also helps you understand the baseline behavior of your systems. With knowledge of baseline behavior, you will know when something is nearing failure. Remember to go through your daily reports and ensure you are getting reports on critical metrics for both the application and the server hosting it.




Application maintenance and uptime can be easy if you remember the factors we discussed. Not only do you need to monitor your important assets and critical factors, you also need an understanding of normal vs. problematic behavior. This becomes even easier if you have the right tool for server and application monitoring in your network.




30 Day Full Feature Trial | Live Product Demo | Product Overview Video | Twitter

You Can Be a Victim of Social Engineering

Social engineering is a human hacking tactic; rather than relying on brute-force attacks, social engineers take advantage of the gullible nature of the victim (you!) to extract information such as credentials, access codes, financial and trade secrets, and any other sensitive data the victim is privy to. Humans are the weakest link in the security chain of an organization. A security appliance may be difficult to break into, but an employee who is easy to manipulate is the hacker's key to Fort Knox. Social engineering also includes commonplace--but highly overlooked--threats such as phishing, hoaxes, shoulder surfing, tailgating, etc.


Common Social Engineering Traps



  • You could receive a call from a trusted source to reveal sensitive data
  • The caller can be a phony pretending to be someone else to con information from you
  • You could get an unsolicited email requesting credit card numbers and passwords to be filled in
  • It can be a phishing attack to obtain sensitive information from you
  • You could happen to meet with an unassuming stranger who wants to conduct a survey, or just earnestly seeks help
  • It could be a social engineer trying to con you with his guile of speech and false identity

Watch this video where Greg, a naïve and helpful IT administrator, gets hoodwinked by an expert telephonic trickster. Funny, and yet enlightening!



Help the Hacker Not! – Tips to Stay Protected

You don’t have to turn paranoid and be alarmed at every single phone call or email. It just takes more awareness and education on social engineering, and some secure online and social practices to stay protected.

  • Be aware of social engineering attacks. Educate your peers, employees and friends.
  • Do not divulge personal information and company data to any untrusted source, however convincing and genuine it may look.
  • If you are suspicious of any person or specific email, report the case to your organizational authorities and IT security teams.


If a social engineering attack does occur, monitor logs from all devices and workstations for any unusual behavior patterns or non-compliant activity that may lead to data theft or other cyber-crimes. It's nice to be helpful, but do you really want to help the hacker? (Unless you want to end up holding the golden crowbar like Greg does!)



Security Week

This is the first day of SolarWinds Security Week (August 19-23). Stay tuned for more security tips and entertaining videos throughout this week!


Security Week v2.PNG


Learn More

By default, the Storage Manager website uses port 9000. Some customers may wish to change this to another port due to port availability or firewall issues. A configuration file within the install subdirectory of Storage Manager allows users to change from the default port 9000 to another port.


In this example we will change the Storage Manager Website port to 80. This example will also show you how to change the port assignment for Storage Manager running on Windows or Linux.

The file we must modify is called server.xml and it can be found in the following locations:


  • Windows - <installed drive>:\Program Files\SolarWinds\Storage Manager Server\conf
  • Linux - /opt/Storage_Manager_Server/conf

Open the file with a text editor and do a search for “9000.” You will see a section that looks like the following snippet:


<Connector port="9000" protocol="HTTP/1.1" URIEncoding="UTF-8"
           disableUploadTimeout="true" connectionTimeout="20000"
           acceptCount="100" redirectPort="8443" enableLookups="false"
           maxSpareThreads="75" minSpareThreads="25"
           maxThreads="150" maxHttpHeaderSize="8192"/>

The Connector port= attribute is where we change the port assignment for the Storage Manager website. Change the entry from 9000 to 80:


<Connector port="80" protocol="HTTP/1.1" URIEncoding="UTF-8"
           disableUploadTimeout="true" connectionTimeout="20000"
           acceptCount="100" redirectPort="8443" enableLookups="false"
           maxSpareThreads="75" minSpareThreads="25"
           maxThreads="150" maxHttpHeaderSize="8192"/>

Save your changes to the file. Finally, restart the Storage Manager Web service for the change to take effect. To restart the Storage Manager Web service, do the following:


  • Windows - Run services.msc and restart the SolarWinds Storage Manager Web Services service
  • Linux – From the Command Line Interface type “/etc/init.d/storage_manager_server restart webserver”

Open the Storage Manager website in a web browser referencing your new port assignment.

With the recent release of Virtualization Manager 6.0 (VMAN) and its integration with SAM/NPM, there is now a really cool way to view performance at every level, from application to VM to datastore. Having the data from Virtualization Manager presented in SAM provides a single pane of glass for monitoring and troubleshooting the full OS-to-application-to-VM-to-storage environment. Customers often ask me, "OK, so I have some VMs that are not running well. I think it's the storage, but how can I see this?"


SAM to VMAN Integration


To start, we can monitor the server in SAM. With insight at the OS level, we can assign out-of-the-box application templates such as SQL, Exchange, or IIS, or create custom ones depending on what we want to monitor. This gives really deep insight into how the applications are performing, or perhaps not performing.


When the Virtualization Manager integration is enabled with SAM, we can see any alerts from VMAN that exist for this server. We also get a Storage tab on the server, where deep insight into the underlying storage information from VMAN is displayed within SAM.



Once on this page, I can see very specific metrics regarding IOPS and latency. On this particular server, there is an alert from VMAN for disk latency.



If I click "View Performance" and select the latency metric, I am shown the results in the cool performance analyzer view in VMAN.



So what’s causing this latency?


If we close the chart on the top right, it takes us back to the Storage tab of this server in SAM. Here we can see which datastore this VM resides on.



If I select the datastore, I can see which applications are currently running. I can also see there are quite a few VMs; too many VMs on one datastore can cause read/write latency.



On the same page, I can see the overall datastore latency and the top VMs. I can quickly identify the top VMs experiencing latency, and with this information I can decide what to do next: migrate the top VMs to a separate, less busy datastore, or simply migrate them to a datastore running on faster disks. Whatever I decide, I now have the deep insight needed to make the correct decision.
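The triage step above boils down to sorting the datastore's VMs by observed latency and flagging the ones above a tolerance as migration candidates. A quick sketch; the VM names, numbers, and 20 ms tolerance are made up for illustration:

```python
def migration_candidates(vm_latency_ms, tolerance_ms=20):
    """Return the VMs whose average disk latency exceeds the tolerance,
    worst first -- the ones to consider moving to a less busy or
    faster datastore."""
    over = {vm: ms for vm, ms in vm_latency_ms.items() if ms > tolerance_ms}
    return sorted(over, key=over.get, reverse=True)

vms = {"web-01": 4, "sql-02": 48, "exch-01": 31, "file-01": 9}
print(migration_candidates(vms))  # -> ['sql-02', 'exch-01']
```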



This is just one of the queries we get from customers looking to solve issues in their virtualization environment. To read more about the challenges SysAdmins face and how SolarWinds can help solve them, read the eBook “A Day in the Life of a SysAdmin” now.

In an earlier blog, we saw what a blended threat is and which elements constitute it. To quickly recap, a blended threat is a composite security threat in which various threat vectors are coordinated to launch a high-impact security exploit that is both difficult to detect and difficult to contain.


How Does a Blended Threat Work?


Blended Threat 2.png

Let’s understand this with some examples:


Case 1: Email-Malware-Rootkit Blended Threat

  • A legitimate website is hacked and malware is placed there, via malicious ad links, cross-site scripting, SQL injection, etc.
  • The link to the website is sent to end users in unsolicited mail. This gets past the antivirus because there is no malicious attachment in the mail, only a legitimate-looking site URL.
  • Once the unsuspecting end user follows the link, it takes him to the infected web page and activates the malware hidden there.
  • This can trigger the download or installation of a rootkit that is invisible in the list of OS applications and processes, but surreptitiously runs malicious software such as spyware or a key logger to capture keystrokes and steal login credentials, or runs a bot to take control of the entire PC.


As you can see, this attack involves various threat elements cleverly disguised to infiltrate network security and compromise your system.


Case 2: Poisoned Search Engine Results

This is a similar pretense-of-authenticity attack, in which an end user's search for popular terms turns up poisoned search results that look authentic but may conceal malware underneath.

  • The user clicks an apparently genuine search result.
  • A URL redirect sends him to an infected website, which activates malware.
  • The malware can exploit unpatched versions of web browsers or OS applications to compromise system security.
  • Once the system is taken over, it can be infected further and even turned into a host for launching large-scale botnet attacks such as DDoS.


The complexity with blended threats is that they are difficult to detect, and even if a portion of the threat is detected and removed, the remaining components are still at large and can inflict damage.


Protection from Blended Threats

To protect against such sophisticated threats, you need best-of-breed security strategies such as layered security and defense in depth. These combine multiple layers of security on different fronts to stay protected against blended threats. For example, a layered security solution can encompass a firewall, antivirus software, an intrusion prevention system, and storage encryption controls. This follows the principle that the combined whole is greater than the sum of its parts.


Security Information & Event Management (SIEM) is an excellent way to add defense-in-depth protection to your existing security infrastructure. SIEM software collects logs from your security appliances, network devices, and workstations, and correlates them in real time to provide advanced threat intelligence. SIEM does not stop at detection; it covers real-time log monitoring, threat alerting, automated incident response for threat remediation, forensic analysis, and compliance reporting.
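One classic correlation rule a SIEM applies, many failed logons from a single source inside a short window, can be sketched in a few lines. The event shape and thresholds below are illustrative assumptions, not any product's rule syntax:

```python
from collections import defaultdict

def brute_force_alerts(events, limit=5, window_s=60):
    """Flag any source with more than `limit` failed logons inside a
    sliding `window_s`-second window -- a simple real-time correlation
    rule over collected logs."""
    recent = defaultdict(list)   # source -> timestamps of recent failures
    flagged = set()
    for t, source, outcome in events:      # (epoch seconds, source, outcome)
        if outcome != "logon_failure":
            continue
        recent[source] = [x for x in recent[source] if t - x <= window_s]
        recent[source].append(t)
        if len(recent[source]) > limit:
            flagged.add(source)
    return sorted(flagged)

events = [(i, "10.0.0.9", "logon_failure") for i in range(7)]
print(brute_force_alerts(events))  # -> ['10.0.0.9']
```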


Read this White Paper:



SIEM White Paper.png

I recently had the opportunity to interview Cole Lavallee of Waters Corporation. At Waters, Cole and his team use Server & Application Monitor, Log & Event Manager and DameWare to monitor and troubleshoot hundreds of servers, critical applications, and sites worldwide to reduce any downtime and increase customer satisfaction.


JK: What are some of the challenges you face every day in your job?

CL: Currently we have about 80 offices worldwide, and it's crucial that we receive alerts when there is a problem in our environment. Since we are in the life sciences business, there are FDA regulations and guidelines that require us to keep log data for certain amounts of time. This includes important files on many different servers and requires periodic validation.


JK: How did you find out about SolarWinds?

CL: Working in IT, I've known SolarWinds for years. When I came onboard at Waters a few months back, we already had Server & Application Monitor (SAM) up and running.

I am in the group that manages all corporate IT, and we monitor over a hundred servers in our datacenter using SAM.  We use VMware for our virtual infrastructure, with the majority of our servers being virtual, and we use SAM to monitor them. We also monitor Active Directory, which is a big thing for us, along with SQL servers, UNIX and Linux systems, and IIS web servers.


We monitor the Waters website and all the servers that go along with it, which is a lot of servers.  One site is used by customers to communicate with customer support, which makes it crucial. If any of those sites go down, we have a big issue, so it's important they stay up.


We use Lotus Domino internally, and because of that we use Log & Event Manager to manage Active Directory accounts in case of lockouts. This is really important for us to monitor because with Lotus Domino you actually have to change your password on your phones, unlike other environments. With Log & Event Manager, we can automatically reset passwords when accounts get locked, which saves a lot of time for us and the end user - probably 5+ hours a week, minimum.


JK: In terms of business benefits, what is the outcome of using SolarWinds products?

CL: We really try to stick with SolarWinds for anything we do, and our team at Waters is really happy with SolarWinds. The products are also easy to use. We've never really had any problems with the tools reporting something incorrectly or going down themselves. I've used a lot of server monitoring software and seen how awful it can be; SolarWinds is one of the easier ones to use. It definitely works for us.

FSM Web Interface - You Got It!

Soon (just hang on it's not far off) you'll get to meet the Firewall Security Manager (FSM) Web Interface. This is about the coolest thing that's happened to Network Configuration Manager since blade servers. FSM produces network device audit reports, enables you to model (test) the effect of changes you want to make, and so much more.

Coming to an NCM Near You

The same Network Configuration Manager you know and love has partnered-up with FSM. You'll soon be able to configure, run, and view Cleanup, Security Checks, and PCI Compliance reports using FSM in your browser. This package includes the ability to query traffic flow, run packet traces, and do a VPN Audit.

All for One, One for You

We put this pair of great tools together because we listened to you. I wish I could say more, but we're still busy fine-tuning and making this coming release sparkle. With that in mind, have a look at this:

Watch this space for more news about this new NCM/FSM integration coming Fall 2013.

If you've been working to improve network availability (MTBF and MTTR), then you know how difficult it can be.  While annual uptime of 99.9% is stellar, it nevertheless equals nearly nine hours of network downtime.  And with downtime costs ranging from $100K to $300K per hour, this represents $900 thousand to $2.7 million a year in unnecessary expense. Not exactly packet change. So whether you're striving for "four nines" (99.99%) or looking for high-impact ways to improve efficiency, it can really pay to learn how configuration management can help.
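The downtime arithmetic is easy to verify for any availability target:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def annual_downtime_hours(availability):
    """Hours of downtime per year at a given availability fraction."""
    return HOURS_PER_YEAR * (1 - availability)

three_nines = annual_downtime_hours(0.999)
print(round(three_nines, 2))                    # -> 8.76 hours per year
print(round(annual_downtime_hours(0.9999), 2))  # "four nines" -> 0.88 hours

# At $100K-$300K per hour, three nines costs roughly:
print(round(three_nines * 100_000), round(three_nines * 300_000))
```

Rounding those last two figures up to a nine-hour outage budget gives the $900 thousand to $2.7 million range quoted above.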


Next Tuesday, August 20, at 10am CDT, Francois Caron, Product Management Director at SolarWinds, and Edward Bender, Head Federal Systems Engineer at SolarWinds, will present a webinar packed with valuable tips and insights entitled Network Configuration Manager: Working to Eliminate the #1 Cause of Network Downtime. They will talk about the best practices we've identified to help improve network availability using network configuration management.


Here are just a few of the topics they'll be covering:

  1. How to build a great profile inventory of network devices
  2. Why standardization can help reduce configuration errors and improve maintainability
  3. What can be done to protect configurations from unwanted changes
  4. Ways to bring out-of-service devices back into service quickly
  5. How to automate device updates, changes and end-of-life tasks
  6. How to more easily create compliance reports
  7. Ideas on how to tune-up your change control processes


If you've ever wanted to improve network availability or better manage device configurations, then this event will give you the insights you need.  This is a free webinar.  Please plan to join us Tuesday, August 20th at 10am CDT by registering now.

What is the value of network configuration management and change control?  Perhaps it depends on who you ask. Senior managers will cite reliability, compliance, and performance benefits.  Network administrators will talk about how easy it makes their jobs. The reality is that managing device configurations in a complex, heterogeneous network can feel like a hamster wheel of pain: every network change means network risk, and the cycle never ends.  It's hard to mitigate this risk without a best-practices framework, which is why it should come as no surprise that the #1 cause of network downtime is configuration error!


Remember your old Networking 101 classes?  BYOD, virtualization, cloud computing, and SDN have certainly changed the way we manage networks, but errors resulting from routine configuration changes are still the most common and costly reliability problem in networks today.  Fortunately, it is possible to prevent these errors by adopting a few universal best practices based on familiar IT governance frameworks like ITIL.


In this video, Dex Manley, Product Manager at SolarWinds, offers five best practices for maintaining a high availability network. We hope you find it to be a helpful guide and reminder for ways to make your configuration management process smoother and more efficient overall.



In this video you'll also get an intro to the NEW SolarWinds Network Configuration Manager v7.2.  Make sure to take a look at cool new features like EOL, too!

SAM offers a detailed view of your SQL databases' performance without the use of agents or templates through its embedded AppInsight for SQL application. AppInsight for SQL provides a level of detail and expert knowledge far beyond what a SQL template can provide, allowing you to monitor virtually every aspect of your SQL instances and databases.


Like any unassigned application in SAM, AppInsight for SQL is considered a template until it is applied; therefore, it is a member of the Application Monitor Templates collection. Once applied to a node, AppInsight for SQL is considered an application. Like any SAM application, AppInsight for SQL is composed of multiple component monitors, also known as performance counters.

Make Sure You're Ready for it!


AppInsight for SQL Requirements and Permissions

AppInsight for SQL data is collected at the same default five-minute polling interval as traditional application templates. The requirements and permissions needed for AppInsight for SQL follow:

AppInsight for SQL Requirements

AppInsight for SQL supports the following versions of Microsoft SQL Server:


  • Microsoft SQL Server 2008 (without SP)
  • Microsoft SQL Server 2008 R2 (without SP)
  • Microsoft SQL Server 2012 (without SP)

AppInsight for SQL Permissions

The minimum SQL permissions required to use AppInsight for SQL are as follows:

  • Member of the db_datareader role on the msdb system database
  • View Any Definition permission
  • Connect permission on the master database
  • Execute permission on xp_readerrorlog
  • Connect permission on the msdb database
  • Connect permission on all databases


Note: AppInsight for SQL supports both the SNMP and WMI protocols and uses SQL to gather information about the application. Additional information is available for nodes managed via WMI.


The following script will configure the permissions (it assumes a SQL login named AppInsightUser already exists):

USE master
EXEC sp_adduser @loginame = 'AppInsightUser', @name_in_db = 'AppInsightUser'
GRANT EXECUTE ON xp_readerrorlog TO AppInsightUser
-- Grants the View Any Definition permission listed above
GRANT VIEW ANY DEFINITION TO AppInsightUser

USE msdb
EXEC sp_adduser @loginame = 'AppInsightUser', @name_in_db = 'AppInsightUser'
EXEC sp_addrolemember N'db_datareader', N'AppInsightUser'

-- Create the user in every database so it can connect to all of them
EXECUTE sp_MSforeachdb 'USE [?]; EXEC sp_adduser @loginame = ''AppInsightUser'', @name_in_db = ''AppInsightUser'''

Learn it!


Expert Knowledge!

Every SQL counter, both in SAM and the Admin Guide, contains expert knowledge. This will allow you to resolve issues more quickly than ever!

For example:


Lazy Writes/Sec

The lazy writer is a system process that flushes out buffers that contain changes which must be written back to disk before the buffer can be reused for a different page, and makes those buffers available to user processes.

This counter tracks how many times per second that the Lazy Writer process is moving dirty pages from the buffer to disk in order to free up buffer space. The Lazy Writer eliminates the need to perform frequent checkpoints in order to create available buffers.

Generally speaking, this should not be a high value, say more than 20 per second. Ideally, it should be close to zero. If it is zero, this indicates that your SQL Server's buffer cache is large and your SQL Server does not need to free up dirty pages.

Possible problems:
If the returned value is high, this can indicate that your SQL Server's buffer cache is small and that your SQL Server needs to free up dirty pages.

Check your SQL Server and verify that its memory is being used efficiently. Applications other than SQL Server may be using a great deal of memory. Try to recover memory by closing unnecessary applications. Installing additional memory may also help.
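The thresholds described above can be expressed as a simple check. This is an illustrative sketch, not an official SQL Server guideline; the function name and cutoffs are our own:

```python
def assess_lazy_writes(per_sec: float) -> str:
    """Classify a Lazy Writes/Sec reading using the rough thresholds
    described above (illustrative cutoffs, not an official guideline)."""
    if per_sec == 0:
        return "healthy: the buffer cache is large enough"
    if per_sec <= 20:
        return "acceptable: some buffer pressure, worth watching"
    return "warning: the buffer cache may be too small; check memory usage"
```

For example, a sustained reading of 35 would fall into the warning band, while anything at or near zero suggests the buffer cache is comfortably sized.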

You will find this type of information on the Component Details page for every AppInsight for SQL performance counter!

Ready to take your mapping skills to a whole new level? You learned a lot from the Admin guide and thwack, and now this episode builds on that foundation.


In this episode, get step by step demos of:

    • How to Add & Embed a Sub-Map
    • How to Create Maps for Specific Users/Permissions
    • Why You’ll Want to Map Racks, Apps, & More
    • How to Create the Big Green Button (and the Boss Log On)



After the episode, you might want to check out Network Topology Mapper.

Any growing organization needs some sort of directory service to maintain and manage a smooth IT environment. As the IT environment grows, it becomes hard to maintain and manage users, systems and servers. Windows Active Directory® (AD) is a distributed management service designed by Microsoft® to help manage the IT environment. Active Directory is designed as a scalable, multi-master database management system, and it helps administrators maintain their entire IT environment from a single source, from creating a new user to updating user systems and securing user logins. There are other directory services, such as Novell’s Directory Service (NDS); however, directory services generally share the same features and benefits. Because Windows Server is the most prominently used, let’s just talk about Microsoft Active Directory.


Why implement Active Directory?

Active Directory was first introduced in Windows 2000. It acts as a central hub that manages all network activity and user data, and it enables connecting different directory hubs together for an integrated IT environment. Without Active Directory, administrators may find it hard to manage a large IT environment. Administrators need some kind of directory service in a growing organization to keep up with growing IT needs:

• Active Directory provides a single top-down view of the entire IT infrastructure, with a single link to all users, groups, computers, printers, servers and applications.

• Active Directory acts as a management framework for all domain controllers in the domain, bridging the various domain controllers and domains in the organization.

• Active Directory provides secure login access for all users on the network. It allows administrators to allocate resources to users, administer email, and manage users and groups using group policies.


Beyond Active Directory

• Active Directory is needed by various other IT tools and software for developing a robust IT infrastructure.

• Active Directory can be used beyond being a centralized IT management tool; it can also serve as a reliable tool for monitoring domain controllers.

• Application listings in Active Directory help administrators calculate and allocate appropriate resources to users.

• User directory services help administrators understand how many users have logged in at a particular time. Pairing Active Directory with an event management tool makes it possible to monitor user activities.

• Active Directory, along with a help desk tool, helps resolve ticketing and support issues.

• WSUS and SCCM use Active Directory to check inventories on user systems and to push regular updates.


Active Directory is like a secondary root for a tree: as the tree grows, it needs additional roots to support it. Similarly, when an IT environment grows, directory services are needed to support and connect all the elements of the growing network.  My next post on Active Directory will discuss what is needed to adequately support and monitor Active Directory.

Virtualization and automation have become mainstays of the modern data center with the latest versions of vSphere® and, more recently, Hyper-V®, as well as increasing automation built into applications, storage arrays and networking. These technology innovations have greatly increased the speed and flexibility with which changes can occur in each of these areas. Today, IT professionals can create new virtual machines in minutes and move existing machines across town without downtime.


The problem is that this innovation and flexibility has developed largely within silos, such as those created by compute, storage and applications. While the ability to make changes within an individual technology area rapidly and automatically has evolved, the ability to coordinate with the other technology areas affected by those changes has not kept up. For example, if one VM is overloading the CPU of a given host, the hypervisor can move the VM easily and automatically to a host that has more CPU capacity. However, that move may not be optimal for the underlying storage systems, so that fixing the CPU problem creates a storage bottleneck.
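To make the example concrete, here is a toy sketch of why a CPU-only placement decision can backfire. The host names, metrics and the 20 ms latency threshold are all hypothetical; a real scheduler weighs far more signals:

```python
# Hypothetical inventory: CPU headroom (%) and datastore latency (ms) per host.
hosts = {
    "esx01": {"cpu_free": 5,  "ds_latency_ms": 8},
    "esx02": {"cpu_free": 60, "ds_latency_ms": 45},  # lots of CPU, slow storage
    "esx03": {"cpu_free": 40, "ds_latency_ms": 10},
}

def best_host_cpu_only(hosts):
    # What a single-silo scheduler sees: pick the host with the most free CPU.
    return max(hosts, key=lambda h: hosts[h]["cpu_free"])

def best_host_cross_silo(hosts, max_latency_ms=20):
    # Cross-domain view: rule out hosts whose storage is already struggling,
    # then pick by CPU headroom among the remaining candidates.
    healthy = {h: v for h, v in hosts.items() if v["ds_latency_ms"] <= max_latency_ms}
    return max(healthy, key=lambda h: healthy[h]["cpu_free"])
```

With this inventory, the CPU-only choice lands on the host with the slowest storage, while the cross-silo check steers the VM to a host that is good enough on both axes.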


What is needed to address this problem? Many groups and companies are working on it, with capabilities such as software-defined data centers or cloud orchestration aimed at improving the cross-domain automation and coordination.


Unfortunately, having one company’s technology control and manage (and some would say commoditize) all the supporting infrastructure of the data center is not in the best interest of many of the big IT players, or for that matter, the consumer. As a result, cross-silo automation has not yet materialized, which leaves the responsibility for much of that cross-domain coordination with the administrators running the systems. That increases the need for cross-domain visibility and monitoring, so that administrators can get the information they need to coordinate between the various infrastructure and application environments in the data center quickly.


That is where SolarWinds® comes in. With SolarWinds Virtualization Manager Version 6.0 releasing today, we have begun putting together the end-to-end visibility that is becoming more and more necessary to keep up with the rapid change occurring in the individual technology silos. Virtualization Manager 6.0 includes integration with Server & Application Monitor (SAM) to provide visibility across the full application stack from Applications to VMs, hosts and datastores to physical server hardware and even all the way down to the storage disks via integration with Storage Manager (STM).


In a recent survey of our systems admin customer base, 47% of respondents indicated that the hardest problem for them to answer was, “Is the problem with my application being caused by virtualization or storage?” Additionally, having a “Single View of Application, Virtualization and Storage Data” ranked as the most important capability needed by the survey respondents.


With this latest release, a customer can now see the key components of the application’s underlying infrastructure in context of that application. Alternatively, the virtualization administrator can quickly understand all of the applications that are running on a given resource, such as a host or datastore, and then take appropriate actions to optimize resources and performance.


By providing that integrated, single view of the end-to-end application stack from within the SAM console with Virtualization Manager, administrators can now better coordinate across the silos. In this way, they can do more than optimize one resource area—they can optimize for the application (i.e., the business service) that customers need.


Specifically, the new capabilities in Virtualization Manager 6.0 include:


  • Integration with Server & Application Monitor (SAM) for in-context views of the virtualized application stack from application to virtualization to the datastore/storage
  • Deeper virtualization visibility to the datastore for SAM & NPM users
  • Enhancements to Microsoft® Hyper-V storage objects (e.g., clustered storage volumes)
  • Updated libraries and components, and improvements to the GUI, collection speeds and supportability


The integration will be delivered with Virtualization Manager 6.0 and will be part of SAM, so implementation should be simple. Combined with the very affordable price of both SAM and Virtualization Manager, end-to-end visibility should be very accessible to all types of users. At SolarWinds, we are still trying to keep it simple, but that doesn’t mean we aren’t taking aim at the big problems!

When it comes to managing your IT security, the first thing you need to be wary of is the security threats lurking in your network. Proactive threat management is a must-have in today’s networks, and log files hold the key. With compliance requirements continually tightening, information extracted from log data has become one of the most valuable sources not only for IT security departments, but also for IT administration teams and compliance auditors.


Although log management is gaining more attention than ever as a key element of IT security strategy, not many organizations have developed and implemented best practices for log management.

Well… we thought we’d do it for you through this webinar!


Join this webinar to learn how to make your log files work for you to secure your network in an increasingly threat-ridden landscape.

In this webinar, you will learn:

  • The potential internal and external threats to your network security and workstations
  • The best practices for log & event management
  • How to significantly increase information security and proactively manage threats

Register for the webinar here!

When you're setting up a new Web Help Desk v12.0.X installation or upgrade, how carefully should you consider your database options? It depends on what's most important to you. Are you looking for the easiest installation? Are you upgrading or performing a brand new installation? Are you already running a supported external database?

About Web Help Desk-supported Databases


WHD v12.0.X uses an embedded PostgreSQL database as its standard database and no longer supports FrontBase, OpenBase, or Oracle databases. If you are upgrading to WHD v12.0.X and have been using an embedded FrontBase database, the WHD installation wizard walks you through upgrading from the FrontBase database to an embedded PostgreSQL database.


WHD also supports Microsoft SQL Server versions 2008 and 2012 and MySQL version 5.5 as external databases. You can use an external PostgreSQL 9.2 or 9.3 database with WHD v12.0.X as well.




For most installations, SolarWinds recommends using the standard embedded PostgreSQL database. The Web Help Desk Getting Started Wizard walks you through the complete installation of this database when you do a fresh install. Or, if you're upgrading from FrontBase, the Getting Started Wizard automatically performs the database upgrade for you, and you don't have to do anything!


If you already have an external PostgreSQL 9.2 or 9.3, SQL Server 2008 or 2012, or MySQL 5.5 database up and running, you may want to use that database as your Web Help Desk database. If you are using an external database, SolarWinds recommends installing WHD and the WHD database on separate servers, with the WHD database hosted on a Microsoft SQL Server or MySQL server.


If you are running an unsupported database, such as Oracle or FrontBase, you will need to migrate to one of the supported databases. Not an impossible thing to do, but there is some work involved. One of the things you'll need to do is convert the datatypes in your existing database to the datatypes used in the new database. You can perform these conversions using third-party tools such as PGAdmin3 or PostgreSQL Data Wizard. For details on migrating from an external or unsupported database to PostgreSQL, see Converting from other Databases to PostgreSQL on the PostgreSQL website. This article covers database migration from many databases, including:


  • FileMaker Pro
  • IBM DB2
  • Microsoft Access
  • Microsoft SQL Server (SQL Server)
  • MySQL
  • Oracle
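At its core, the datatype conversion step is a mapping from source types to PostgreSQL types. The sketch below shows the idea with a few illustrative entries; real migrations involve many more cases (precision, LOBs, identity columns) and are best left to the tools mentioned above:

```python
# Illustrative subset of a source-type -> PostgreSQL type mapping.
TYPE_MAP = {
    ("oracle", "VARCHAR2"):    "varchar",
    ("oracle", "NUMBER"):      "numeric",
    ("oracle", "DATE"):        "timestamp",
    ("sqlserver", "NVARCHAR"): "varchar",
    ("sqlserver", "DATETIME"): "timestamp",
    ("sqlserver", "BIT"):      "boolean",
}

def to_postgres(source_db: str, source_type: str) -> str:
    # Fall back to "text" for anything we have no rule for.
    return TYPE_MAP.get((source_db.lower(), source_type.upper()), "text")
```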


For more information on making the most of your Web Help Desk installation, check out these free Web Help Desk use case and training videos.

It’s no surprise that security attacks are getting more complex and sophisticated. Such advancement in the technology of cyber-crime makes it paramount that IT security teams understand new-age threats and equip themselves with proper strategies to counter attacks. Blended threats are among the most complex attacks to detect and contain. A blended threat is one that combines several types of malware exploits and inflicts a multi-pronged attack against network computers. Hackers introduce threat vectors in various parts of your IT infrastructure and use multiple methods to coordinate and propagate them across your network.


Constituents of a Blended Threat

A blended threat may comprise a combination of viruses, worms, Trojan horses, or pieces of malicious code such as bots, rootkits and spyware. In Part 1 of this two-part blog series, let’s understand the differences between each of these threat vectors; then, in Part 2, we’ll see how a blended threat works and how it can be prevented.


What is a Computer Virus?

A computer virus is malware that is available, in most cases, as an executable file that, when run, causes damage to your computer. Viruses can also spread, like an infection, to other systems attached to your network and affect them. A virus is generally activated by human action, i.e., when the malware executable is accidentally or intentionally executed. The defining characteristic of viruses is that they are self-replicating computer programs which install themselves without the user's consent.

Blended Threat.png


What is a Computer Worm?

A computer worm is similar to a virus in its characteristic of propagating from one system to another and causing damage, but differs in the way it is activated. In contrast to the virus, a worm need not always be executed by human action. Worms are standalone software that exploit a vulnerability on the target system by taking advantage of your system’s information sharing and transport features, allowing it to spread unaided through the network.


What is a Trojan Horse?

A Trojan horse is a type of malware that tricks computer users into loading or executing it. A Trojan conceals harmful and malicious code and can pose a number of threats ranging from annoying window pop-ups to deleting files and stealing data.


What is a Rootkit?

A rootkit is a type of malware that is designed to conceal viruses and other malware from your anti-virus software. Rootkits also prevent malicious processes from being visible to the system administrators. Rootkits achieve this concealment by modifying the host’s operating system, and they are activated before the OS finishes booting.


What is a Bot?

Bots (short for robots) are automated programs that are used by hackers to simulate user activity on the target system. As defined by Cisco, a malicious bot is self-propagating malware designed to infect a host and connect back to a central server or servers that act as a command and control (C&C) center for an entire network of compromised devices, or "botnet." [1]


What is a Backdoor?

In a normal computer operating system, a backdoor is a method of bypassing normal system authentication and security mechanisms. This is made available during the development phase of an OS, and programmers use it for testing and troubleshooting purposes. The backdoor is typically removed when development is over and the OS is ready to be shipped. Hackers exploit undetected backdoors and associated vulnerabilities to gain unauthorized access into your system and secure remote access.


What is Spyware?

Spyware is also a type of malware; it aids the hacker in gathering or stealing information from the host computer. Spyware can get into a computer as part of an untrusted download of executables. It secretly gets into your system and relays information back to the hacker.



A blended threat combines several of the above attack vectors and is carefully planned and coordinated to cause maximum damage and financial loss to the victim organization, network and computers. We’ll learn more about blended threats in Part 2.


Read this White Paper


Sec Mgmt Checklist.png

In the event of network failure caused by errors or discrepancies in device configuration, it is essential to have your device configurations backed up. Network downtime due to faulty configurations can be attributed to:

  • Delays in getting a failed device back on the network
  • Bad or unapproved configuration changes
  • Errors while making or executing configuration changes
  • Configuration changes triggered by unauthorized or rogue users

How can you thwart the impact of such problems and prevent downtime?

Device Hardware Failure

Problem: A critical device, like a core switch, has failed, and the switch has to be replaced. Your objective is to restore operations as quickly as possible, typically by locating a spare from inventory, racking the replacement, plugging it in, and then configuring the switch. In addition, if there is a problem or error in the re-configuration, further trouble awaits.

Solution: The failed switch can be replaced with a spare from inventory. Quickly rack the switch, plug it in, and download the failed switch's configuration from your stored configuration backup database onto the replacement switch, and you're good to go!

Bad or Unapproved Configuration Changes

Problem: Your network monitoring detects very high CPU utilization and packet loss on a particular node. After a lot of analysis, you finally trace the issue to a particular switch. Drilling down further reveals that it originated from a faulty line in the switch configuration. You did not carry out this change. Then, who did?

Solution: With a good baseline configuration of the device in place, you can immediately replace the faulty configuration with the last known good configuration and get your network up and running in no time.

Human errors in executing configuration changes

Problem: You are engaged with the task of enabling NetFlow across the entire network. This involves executing at least 10-12 lines of configuration through the Command Line Interface (CLI). Having to do this on 20 or more routers is a cumbersome and error-prone task, and exactly the kind of situation where erroneously executed configuration changes can cause havoc in your network.

Solution: The best possible immediate solution for such a scenario is to revert to a previously working configuration from a backup. Making configuration changes with a tool that can roll back the change on all the switches in one go brings your network back up quickly, before any major damage is done.
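The roll-back-as-you-go pattern described in this solution can be sketched as follows. The push and backup callables are hypothetical stand-ins for whatever your configuration tool actually does:

```python
def apply_with_rollback(devices, change_lines, push, backup):
    """Push change_lines to every device; if any push fails, restore each
    already-changed device from the backup taken just before the change."""
    saved, changed = {}, []
    try:
        for dev in devices:
            saved[dev] = backup(dev)   # snapshot the last known good config
            push(dev, change_lines)
            changed.append(dev)
    except Exception:
        for dev in changed:            # roll back everything we touched
            push(dev, saved[dev])
        raise
    return changed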

Hacker in the network

Problem: Your network is under attack and the hacker gains entry and alters the routes and configurations to further access your network.

Solution: Have a system in place that notifies you in real time of any unapproved changes and who made them. Once notified, verify the change and immediately revert to the good configuration, or make changes to rule out any further access by the offender.

In short, backing up your configuration should be the No. 1 task on your checklist for network preparedness. SolarWinds Network Configuration Manager (NCM) provides you with features to efficiently execute bulk configuration changes, configuration rollback, configuration comparisons, real-time alerts and more. Additionally, SolarWinds Network Performance Monitor (NPM) helps detect, diagnose and alert you to device or network performance issues, enabling immediate remediation with NCM.



Device management and network monitoring together help administrators combat network issues, carry out routine device management tasks and stay compliant.

Take a look at what this happy SolarWinds NCM customer has to say!

IT professionals are under constant pressure to improve the efficiency of IT operations, which usually means two things:

  • Reduce costs
  • Improve productivity

Striking a balance between the two is key, especially when the investments revolve around security. You need to deliver greater service levels, enhance security initiatives, as well as conform to compliance policies, while keeping costs in check.



But, it’s very important to remember that investing in security is not just about the direct costs—it’s crucial to take indirect costs into account, as well. These costs include the value of the data that tends to be lost during a breach and the subsequent fallout, including the loss of customer trust, damage to corporate reputation, financial and legal penalties, and more. The costs associated with data loss can be far greater than the investment that was made in security.




More and more organizations are realizing the need for a proactive approach when it comes to keeping a watchful eye on all the activities happening on their network. As a result, the demand for SIEM tools continues to increase as a means to streamline the process of collecting and analyzing event data generated throughout the network and to produce actionable security intelligence.

Top 3 Factors that Determine the ROI of your SIEM Solution

When choosing a SIEM solution, you need to know the factors that go into measuring the ROI of the chosen solution.


Reduced Costs & Increased Productivity

The amount of data that needs to be collected and correlated is growing exponentially in today’s ever-expanding network infrastructures. The amount of time security admins have to spend per day per server to retrieve, normalize and analyze event log data leaves them little to no time for anything else. As a result, other crucial administrative tasks are put on the back burner or additional admins have to be brought in to help. A SIEM solution will automate and consolidate the log management process, thereby reducing the man-hours and resources needed to accomplish this vital task. Translation—increased efficiency and decreased costs.


Enhanced Security & Compliance

Network threats don’t just come from the outside—insider abuse is on the rise. Security concerns span external hackers, internal data theft, policy violations, malware, rogue devices, and the list goes on.  Additionally, regulatory mandates and corporate policies have become increasingly stringent.  The fact is that a security attack of any kind can have a direct impact on your organization’s integrity and reputation, which is why a comprehensive security solution must be put in place. A critical piece of any security plan should be a SIEM tool with real-time analysis and cross-event correlation, which will:

  • Reduce the time taken to identify attacks, thereby reducing their impact
  • Reduce the time spent on forensic investigation and root cause analysis
  • Reduce the time and cost incurred on maintaining compliance
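Cross-event correlation is easier to picture with a toy rule. The sketch below flags a classic brute-force pattern (several failed logons followed by a success); the event field names and the threshold are hypothetical, not any particular SIEM's schema:

```python
def brute_force_suspects(events, threshold=3):
    """Return accounts with `threshold` or more consecutive failed logons
    that are then followed by a successful logon."""
    failures, suspects = {}, set()
    for event in events:  # events are ordered dicts with 'user' and 'outcome'
        user = event["user"]
        if event["outcome"] == "failure":
            failures[user] = failures.get(user, 0) + 1
        elif event["outcome"] == "success":
            if failures.get(user, 0) >= threshold:
                suspects.add(user)
            failures[user] = 0  # reset after any successful logon
    return suspects
```

A real SIEM applies many such rules at once, across sources, in real time; the point here is only that correlation means looking at sequences of events rather than single entries.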

Network Uptime & Application Availability

Ensuring the highest levels of network uptime and the availability of business-critical applications is a big concern for any security admin. The right SIEM security tool can help by continually monitoring event log data of critical devices and application servers in order to detect network degradation, failing devices, and application errors before they impact end-users and cause a disruption in service.


As you can see, investing in the right SIEM solution not only helps defend your network from an ever-growing list of security threats, but also helps ensure network and application availability, as well as safeguards critical data and protects your company’s reputation!

We compared vSphere 5.1 and Hyper-V 2012 in terms of their capabilities of Storage Management, Memory Handling & CPU Scheduling earlier in this blog series. In this blog post, we’ll discuss how both the hypervisors help manage data and workload migration and provide virtual machine (VM) mobility.


VMware® has been offering vMotion for a long time; it allows moving running VMs from one host to another with no—or just a few milliseconds of—downtime. Though earlier versions of Hyper-V were not able to match vSphere’s VM migration capability, Hyper-V 2012 closes the gap with its Live Migration feature, which is similar to vMotion. Let’s discuss how both hypervisors execute workload migration and understand the differences and similarities between them.


vMotion in vSphere 5.1

vSphere 5.1 vMotion allows you to transfer the execution state of a running VM from the source ESXi™ host to the destination ESXi host over a high-speed network.


The Execution State of a VM Consists of:

1.   Virtual disks

vSphere 5.1 uses Storage vMotion for the transfer of virtual disks. This involves a synchronous mirroring approach to migrate a virtual disk from one datastore to another on the same physical host.

2.   Physical memory

vSphere 5.1 vMotion uses an iterative pre-copy approach to transfer physical memory, just like earlier versions of vSphere:

  • Guest Trace Phase: Traces are placed on the guest memory pages to track any modifications by the guest during the migration.
  • Pre-copy Phase: The memory contents of the VM are copied from the source to the destination ESXi host in an iterative process.
  • Switch-over Phase: The last set of memory changes is copied to the target ESXi host, and the VM is resumed on the target ESXi host.

3.   Virtual device state

The virtual device state includes the state of the CPU, network and disk adapters, SVGA, etc. vSphere 5.1 vMotion serializes the VM’s virtual device state and transfers it over a high-speed network.

4.   External network connections

vSphere virtual networking architecture makes it very easy to preserve existing networking connections even after a VM is migrated to a different host. Each vNIC has its own MAC address (which is independent of the physical NIC’s MAC address). This allows the VM to keep the networking connections alive after migration, as long as both the source and destination hosts are on the same subnet.
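The pre-copy phases above amount to a loop that keeps re-copying whatever the guest dirtied during the previous pass, until the remaining set is small enough for a brief switch-over. This toy simulation illustrates the convergence; the dirty ratio and cutoff are made-up numbers, not VMware's actual algorithm:

```python
def precopy_rounds(total_pages, dirty_ratio, cutoff=50, max_rounds=20):
    """Simulate iterative pre-copy: each round copies the currently dirty
    pages while the guest re-dirties a fraction (dirty_ratio) of them.
    Switch-over happens once the dirty set drops below `cutoff` pages."""
    rounds, dirty = 0, total_pages
    while dirty > cutoff and rounds < max_rounds:
        dirty = int(dirty * dirty_ratio)  # pages dirtied during this copy pass
        rounds += 1
    return rounds, dirty  # switch-over copies the final `dirty` pages
```

For instance, a 100,000-page VM that re-dirties 10% of the copied pages per pass converges in four rounds, leaving only 10 pages to move during the brief switch-over pause.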



Live Migration in Hyper-V 2012

Windows Server 2012 provides a capability similar to vMotion with a technology called Live Migration, which allows you to configure a VM to be stored on an SMB file share and then perform live migration of that VM between non-clustered servers running Hyper-V. In this process, the VM’s storage remains on the central SMB share.


Windows Server 2012 allows you to select optimal performance options when moving VMs to a different server. In a larger virtualization setup, this can reduce overhead on the network and CPU usage in addition to reducing the amount of time a live migration takes. Shared Nothing Live Migration in Hyper-V 2012 allows you to move VMs between systems that don’t share common storage, including two non-clustered hosts, a non-clustered host and a clustered host, and two clustered hosts. It is also possible to perform multiple live migrations of VMs and queue them up so they move in sequence.



Similarities between vSphere 5.1 & Hyper-V 2012

Note that shared storage is no longer required for either Live Migration in Windows Server 2012 or vMotion in vSphere 5.1.


Recent versions of both hypervisors support workload migration over 10 Gigabit Ethernet (GbE) networks. The maximum number of concurrent vMotions per ESXi host is:

  • Four with a 1 GbE network connection
  • Eight with a 10 GbE network connection

Both vMotion and Live Migration minimize downtime and the impact on service availability while the VM and its workload are moved between hosts.


None of this is to say that one hypervisor is better than the other. Although VMware has been the pioneer in server virtualization, with the evolution of Hyper-V 2012 Microsoft has positioned itself as a challenger, and we’ll have to wait and see how IT teams run and manage both of them in a mixed hypervisor setup. If you are interested in virtualization performance monitoring, learn about VMware monitoring and Hyper-V monitoring.


To learn more about how vSphere 5.1 and Hyper-V 2012 differ and compare,



Read this White Paper:

vsphere vs hyper-v white paper.png


Watch this Webcast:

vsphere vs hyper-v webinar.png


Other parts of the vSphere 5.1 vs. Hyper-V 2012 series:

In this PatchZone article I discussed how to configure a custom update view to allow update approvals to be captured from a source group (e.g. a Test Group) and duplicated into one or more additional groups (e.g. production groups).

Patch Manager can also use this technique, but one complication with update views is that you need to create a separate view for each source group. Patch Manager offers a simpler, more direct methodology that works for any number of source groups.

WSUS Server -> Update Approvals tab

From the WSUS Server node of the console, select the Update Approvals tab. This view shows the entire approval event history of the WSUS server, including automatic approvals, and any changes in approval status, such as removing an approval. The list has one entry for each target group where an approval has been set and also provides the date of the approval and the identity of the console user who issued the approval.

Filter the Update List

In the Computer Group column, open the filter selection dialog and select your source group. In this example, we’re using the group “Test Computers”.

Filter Update Approvals by Group.png

With the list filtered by the source group, you can then sort or filter by the Approved Date. One of the particularly useful features of date filtering in Patch Manager is the ability to filter on more than one specific date. The date filter provides a tree view of the actual date values in the column and you can include or exclude entries by a specific date.

Date Filter Dialog.png


Select the Updates

In this scenario we only have three updates approved for our “Test Computers” group, so no additional date filtering will be necessary. Select the updates and click on the “Approve” action in the “Update Approvals” section of the Action Pane. (Notice that I have collapsed the top half of the Action Pane which relates to server actions, in order to see the “Update Approvals” specific actions more clearly.)

Launch Approve Dialog.png

Add Approvals

This will open the standard Approve Updates dialog where you can now select additional groups to add approvals. In this case I’ve selected “Win2008R2” (my production group).

Approve Updates.png

And that’s all there is to it!
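
Conceptually, the console steps above amount to a simple set operation: collect the updates approved for the source group (optionally on chosen dates), then add approvals for each target group that doesn't already have them. A hedged Python sketch of that logic, using a made-up Approval record rather than any real Patch Manager or WSUS API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Approval:
    update_id: str
    group: str
    approved: date

def approvals_to_copy(history, source_group, target_groups, on_dates=None):
    """Mimic the console workflow: filter the approval history by the
    source group (and optionally by approval date), then produce the
    approvals to add for each target group, skipping duplicates."""
    picked = {a.update_id for a in history
              if a.group == source_group
              and (on_dates is None or a.approved in on_dates)}
    existing = {(a.update_id, a.group) for a in history}
    return [Approval(u, g, date.today())
            for u in sorted(picked)
            for g in target_groups
            if (u, g) not in existing]

history = [
    Approval("KB1", "Test Computers", date(2013, 9, 1)),
    Approval("KB2", "Test Computers", date(2013, 9, 2)),
    Approval("KB1", "Win2008R2", date(2013, 9, 1)),
]
new = approvals_to_copy(history, "Test Computers", ["Win2008R2"])
print([a.update_id for a in new])  # ['KB2'] — KB1 is already approved there
```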

A lot has been discussed thus far about the need to secure your network and firewalls, which are prone to vulnerabilities and attacks. Before getting deep into the solutions you need to secure your network, you need to understand the factors that determine your network security.


Typically, network security starts with monitoring your network for vulnerabilities that attackers can exploit to access potentially sensitive information.


Let us look at some of the vulnerable components in your network.

  • Your routers can be easily breached without proper configuration and restrictions. As for switches, attackers tend to undermine security by reconfiguring unmonitored switching rules, allowing “sniffing,” wherein an intruder can potentially capture all network traffic.
  • If a firewall isn’t correctly configured, a hacker can find a port accidentally left open and gain access to the network.
  • Servers are a target for Denial of Service (DoS) attacks. When attacks succeed, server performance degrades and the server can crash. Between outdated software, viral attachments, and smuggled-in flash drives, individual workstations are the Wild West of IT security frontiers.
  • BYOD adds its own security concerns to the list. Rogue access points, social engineering, and botnet malware can make your wireless a portal into your LAN.
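
To illustrate the firewall point above: a minimal Python sketch of the kind of check a scanner (or an auditor) performs – a plain TCP connect to see whether a port answers. The port list is illustrative; run it only against hosts you own:

```python
import socket

def is_port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds --
    the same probe an attacker's port scanner performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Audit a handful of well-known ports on a host you own:
for port in (22, 80, 443, 3389):
    print(port, "open" if is_port_open("127.0.0.1", port) else "closed")
```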


The very purpose of IT security is to be proactive and responsive. You need to collect and analyze event data from your network devices, and you need to protect your network from unauthorized configuration changes. It’s also important to efficiently deploy and manage service packs and updates to third-party software applications on a regular basis.


You need well-planned configuration management, continuous monitoring of your network activity with the back doors kept closed, and the ability to rapidly identify, quarantine, and mitigate threats.


Check out How Secure is your network, our new network security micro-site, and learn how SolarWinds can help you secure your network!

Do you find it difficult to manage your increasingly complex network? Does your daily network maintenance routine feel like an uphill task? Then do not worry, you are just like any other network administrator in this world trying to find a solution to a never-ending problem.

Network managers and administrators are plagued by seemingly endless network issues that constantly interrupt critical business processes. Many of these issues originate from users who overutilize bandwidth. In addition, administrators are also faced with sprawling systems that depend on both devices and applications within their network. If anything goes wrong, the adverse impact on the business will be blamed on the network administrators.

Proactive network and bandwidth monitoring will help you solve day-to-day problems and prevent unexpected network scenarios.  Here are a few considerations:

#1 Great Visibility ensures Greater Performance

One of the keys to resolving network issues quickly is to have in-depth visibility into your network, from the device to the application. Being aware of what is happening in your network will help you understand the problem areas and solve them immediately. Network administrators gain more insight into network performance by finding out which applications, devices, protocols, and users are most active, and can plan for future needs accordingly.


#2 Ensure High Network Availability

Uninterrupted network monitoring will help you ensure continuous uptime and service availability for your network by detecting network issues, determining root cause, and then troubleshooting. By analyzing network performance metrics through indicators such as bandwidth utilization, packet loss, latency, errors, and discards, administrators can also have a contingency plan in place in case the system fails.
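
For example, bandwidth utilization – one of the indicators just mentioned – is typically derived from two readings of an interface's octet counter (ifHCInOctets-style). A small Python sketch; the function and its arguments are illustrative, not any particular tool's API:

```python
def utilization_pct(octets_t0, octets_t1, interval_s, if_speed_bps,
                    counter_max=2**64):
    """Percent link utilization from two readings of an interface
    octet counter, taken interval_s seconds apart. The modulo
    handles a single counter wrap between readings."""
    delta = (octets_t1 - octets_t0) % counter_max   # octets moved
    return delta * 8 * 100 / (interval_s * if_speed_bps)

# 375 MB moved in 300 s on a 100 Mbps link -> 10% utilization
print(utilization_pct(0, 375_000_000, 300, 100_000_000))  # -> 10.0
```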


#3 Quick-to-Resolve with Intelligent Alerting and Hardware Monitoring

Recognizing and resolving issues even before your users experience network performance problems can be achieved with intelligent alerting. It ensures that network administrators are notified when a specific predefined condition occurs that can potentially affect the network. Having intelligent alerting is a must in order to ensure continuous network uptime. This also includes monitoring key device sensors such as temperature, fan speed, and power supply, so that you can proactively address issues before they negatively impact your network performance.
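
A predefined alert condition of this kind is easy to picture in code. The sketch below is purely illustrative – the sensor names and thresholds are made up:

```python
def check_sensors(readings, thresholds):
    """Return an alert for each sensor whose reading crosses its
    predefined threshold -- the 'specific predefined condition'
    described above. Thresholds specify a limit and whether a
    breach means going above or below it."""
    alerts = []
    for name, value in readings.items():
        limit, direction = thresholds[name]
        breached = value > limit if direction == "above" else value < limit
        if breached:
            alerts.append(f"{name}: {value} ({direction} {limit})")
    return alerts

thresholds = {"temperature_c": (75, "above"), "fan_rpm": (2000, "below")}
print(check_sensors({"temperature_c": 82, "fan_rpm": 3400}, thresholds))
# -> ['temperature_c: 82 (above 75)']
```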

How SolarWinds Can Help

Now you can monitor your network devices and bandwidth with SolarWinds Bandwidth Analyzer Pack, a comprehensive network bandwidth analysis and performance monitoring tool. Bandwidth Analyzer Pack allows you to:

  • Detect, diagnose, and resolve network performance issues
  • Track response time, availability, and uptime of routers, switches, and other SNMP-enabled devices
  • Monitor and analyze network bandwidth performance and traffic patterns
  • Identify bandwidth hogs and see which applications are using the most bandwidth
  • Graphically display performance metrics in real time via dynamic interactive maps

You can download the fully functional 30-day free trial or test drive the demo.

Active Directory (AD) is one of the most heavily used applications in IT for user profile and login access management, but it is also a challenging one to manage. Take the example of user account lockouts: if an employee gets locked out of their account – no questions asked, no matter what time of day it is – you’ll have to reset the password and unlock the user account straightaway. And what if this is a frequent problem? Beyond lockouts, there can be several other issues involving the performance of the AD server. This calls for proactive Active Directory monitoring to help you detect problems before they are reported by your users, and before they impact productivity.

Monitor Active Directory: Avoid Performance Issues

Monitoring AD comprises many key aspects, such as keeping a close watch on application and service availability and ensuring various AD performance metrics stay within accepted thresholds. SolarWinds Server & Application Monitor (SAM) provides intuitive dashboards to monitor the status and performance of AD servers. You can leverage the out-of-the-box AD monitoring templates and component monitors to monitor several aspects of your AD environment, including (but not limited to):

  • File Replication Service: Identify failures on a replication link, or whether there is an issue with the network leading to slow replication rates between sites.
  • LDAP Client Sessions: Monitors the NTDS object counters and the number of clients connected to an LDAP session. It provides statistics and performance metrics for speed and response times of specific sessions.
  • Directory Services: Monitoring critical directory services makes sure your email and phone contacts are always in sync.
  • Service Outages: Monitor the domain controllers continuously and prevent service outages. SolarWinds SAM will monitor this within DNS servers and clients, servers and workstations, distributed file systems, inter-site messaging, etc.
  • DNS Server Service: Look for issues in the DNS server related to downtime or performance problems and immediately get notified for taking corrective action.




Manage Active Directory Logs: Automate Issue Remediation

Monitoring Active Directory logs is another crucial part of AD management, as this gives a wealth of knowledge about the specific events that caused the AD application or server to fail, or to suffer latency or other issues. Monitoring logs from the domain controller and AD clients on user workstations allows you to get real-time visibility into events such as:

  • Users being added to or removed from domain groups
  • User groups being created or removed from the domain controller
  • User account privileges or Group Policy Objects modified or changed
  • Account password being changed or reset

Resetting user account passwords is a cumbersome thing to do every time there is a password issue or account lockout. It’s best to have a process in place to automatically reset passwords when an account lockout or unauthorized password change is detected from the AD domain controller’s event logs.
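
As a sketch of what such automation keys on: on Windows Server 2008 and later domain controllers, an account lockout is recorded as security event ID 4740. The Python below models scanning collected events for that ID; the event-record structure is a simplified stand-in for illustration, not LEM's actual data format:

```python
def lockout_alerts(events, lockout_event_id=4740):
    """Scan collected Windows security events for account-lockout
    records (event ID 4740) and return the affected accounts, so
    an automated response can unlock them."""
    return sorted({e["account"] for e in events
                   if e["event_id"] == lockout_event_id})

events = [
    {"event_id": 4740, "account": "jdoe", "source": "DC01"},
    {"event_id": 4624, "account": "asmith", "source": "DC01"},  # normal logon
    {"event_id": 4740, "account": "jdoe", "source": "DC02"},
]
print(lockout_alerts(events))  # -> ['jdoe']
```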

SolarWinds Log & Event Manager (LEM) collects log data from your AD servers and user workstations and correlates AD and user activity events in real time to provide insight into AD issues. Additionally, LEM provides built-in Active Responses that can be automated to reset AD passwords and unlock user accounts.

Think of how much time you will save by not having to reset user passwords. Try the combination of SolarWinds SAM and LEM, two powerful solutions to monitor the health of your AD and entire Microsoft® environment and stay ahead of performance and security issues!

Case Study

AD Case Study.png

After installing Log & Event Manager (LEM) v5.6, you may need to download and install these additional pieces of software for further data collection and analysis.

LEM Desktop Console

The LEM desktop console is identical to the web console, except that you install it on a Windows computer. Download and install the Adobe AIR Runtime and/or the Log & Event Manager desktop console if you want a locally installed version of the LEM console.

Note: Both items below are required to run the LEM desktop console; however, you do not need to download and install the runtime component if you already have it on your system for another application.



Deploying agents allows you to collect data directly from different operating systems, and to connect to the appliance for monitoring, notification, and response. After deploying agents, you can configure the desktop software from Manage > Nodes to enable your different data sources.

MSSQL Auditor

MSSQL Auditor allows you to audit Microsoft SQL 2000, 2000 MSDE, 2005, 2005 SQL Express, and 2008 databases for changes and failed modification attempts. Install MSSQL Auditor on your MSSQL server or on a remote system with SQL Profiler installed.

LEM Reports

LEM Reports is a standalone reporting application used to access alert information on the LEM database. Download and install the Crystal Reports Runtime and/or Log & Event Manager Reports if you want to run pre-configured security and compliance reports.

Note: Both items below are required to run LEM Reports; however, you do not need to download and install the runtime component if you already have it on your system for another application.

LEM Connectors

Connectors allow LEM to normalize the data it collects from your agents and network devices. Download and apply the LEM connector update package any time SolarWinds updates a connector you use, usually when Support informs you to do so.


I had the opportunity to recently interview Jim Shank of Douglas County School District, Castle Rock, CO. Jim is part of Douglas County’s infrastructure team which uses SolarWinds Server & Application Monitor to proactively monitor the schools’ servers and databases.

JK: What are some SolarWinds products you’re currently using and how do you use them?
JS: We started using Orion (NPM) to monitor network switches and monitor the performance of operating systems. We’ve also been using Server and Application Monitor (SAM) for over a year now. It’s definitely providing us great insights on how our servers and databases are performing. SAM helps us know when our databases are busy, whether there is an abnormal memory condition, and it alerts us when something goes wrong within our infrastructure.

We’re also using Alert Central for alert management and escalations, and I like how it’s integrated with SAM so we don’t have to watch the dashboard all the time. When an alert is raised, it’s automatically routed to the concerned team and they immediately see it.

JK: What was your initial reaction after using AppInsight (SQL monitoring feature of SAM)?
JS: With AppInsight, we’ve been able to drill down specifically to which database instance is having an issue, which one is taking up a lot of RAM, which queries are expensive, and so on. So it’s been huge for us. We also get requests from the software team saying there is a network problem which is affecting the database performance. Having AppInsight allows our team to tell them the exact query that is causing a slowdown to the database. It eliminates the finger pointing and allows us to show where the problem is occurring and the reasons for it.

As a result of having AppInsight, we’re able to be proactive. We share access to the console with various teams, which alerts them when an issue comes up. The database team can now take care of the databases and monitor them proactively before a user reports there is an issue with the app.

Another fun thing we’ve been able to do with AppInsight is we’re able to look at slow procedures that are really sloppy. We’re able to bring this to the attention of the off-the-shelf software vendors, like software that helps with our student information system. We can tell them a particular query or a stored procedure built within the product is taking a very long time to load. If they tell us it’s a server or a network or a memory issue, we immediately tell them the specific query in their software program that was not built very efficiently, and that it is likely causing the problem. This also helps our staff because they don’t have to chase the vendor to try to fix the problem. We just look at the stored procedure that’s causing the delay. When we call the vendor, we can now tell them that the stored procedure data index student is taking 800 seconds to load. That’s a huge difference in getting the call moved to the right person in the vendor organization.


JK: The SAM 6.0 release candidate (fully supported in production) is now available, which means you can have deep SQL visibility.  Check out the details and sign up for the SAM 6.0 RC here.

Most network admins might have faced a “finding Waldo” situation – where Waldo is an unknown device that sometimes appears in your reports or logs: in your ‘ACL Deny’ logs as using non-business applications, in your NetFlow report as a bandwidth hog, as a problematic server in the cluster, as an unapproved NAS device in your campus, or for some extra drama, as that “forgotten server”. And you have no idea who the user is, where the device is located or to what it is connected.

It is somewhere out there, part of a VLAN with multiple member ports, connected through all those “x, y and z layer switches”, in a daisy chain, and with the Ethernet cables behind the wall! You of course cannot locate such a device by walking around the office and datacenter, looking for that distinctive red and white striped shirt. So, Where’s Waldo?

The Traditional Method:

The first step for the network admin on finding an unknown IP address in his reports or logs is to block it. This is necessary until you find out whether the device is safe and approved. Though you can block traffic from its IP address using ACLs, that definitely is not the best solution, because the IP could be from your network’s DHCP range. Due to the possible security risks associated with such devices, it is imperative that you find where in the network the device is and the switch port to which it is connected.

The process starts with a ping! Log in to your core switch and ping the unknown IP address. This lets your core switch learn the MAC address of the device and add an entry for it to its ARP table. You can then do an ARP lookup to find the IP-to-MAC mapping. The ARP table shows the MAC address mapped to that IP as well as the port on the switch which points to the MAC address. Find what is connected to that port, and if it is another switch, repeat the process till you find the device.

ARP Table.png

If you happen to have Layer 3 as well as Layer 2 switches in your network, remember that an ARP table is available only on your Layer 3 switch, whereas on a Layer 2 switch you have to look up the MAC address table. The reason is that an ARP table is used for Layer 3-to-Layer 2 mapping, which is needed only by a Layer 3 switch. On the other hand, the MAC address table holds the mapping from Layer 2 to switch port, and that is usually seen on a Layer 2 switch. Once you reach the rogue device after going through all the different layers and switches, you know what your action ought to be.
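
The whole hunt can be sketched as a short loop: resolve the IP to a MAC in the core switch's ARP table, then follow the MAC-address table from switch to switch until the port is an edge port rather than an uplink. A toy Python version with made-up switch names, ports, and tables:

```python
def find_waldo(ip, arp_table, mac_tables, uplinks):
    """Walk the switch fabric: resolve IP -> MAC on the core (ARP),
    then follow each switch's MAC-address table hop by hop until
    the port is an edge port rather than a link to another switch."""
    mac = arp_table[ip]                    # core switch ARP lookup
    switch = "core"
    while True:
        port = mac_tables[switch][mac]     # MAC table: MAC -> port
        next_switch = uplinks.get((switch, port))
        if next_switch is None:            # edge port: Waldo found
            return switch, port, mac
        switch = next_switch               # repeat on downstream switch

arp_table = {"10.0.5.77": "aa:bb:cc:dd:ee:ff"}
mac_tables = {
    "core": {"aa:bb:cc:dd:ee:ff": "Gi0/24"},
    "sw2":  {"aa:bb:cc:dd:ee:ff": "Fa0/3"},
}
uplinks = {("core", "Gi0/24"): "sw2"}      # Gi0/24 leads to switch sw2
print(find_waldo("10.0.5.77", arp_table, mac_tables, uplinks))
# -> ('sw2', 'Fa0/3', 'aa:bb:cc:dd:ee:ff')
```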

Now, to add a bonus difficulty level to the game – what if you don’t find the unknown device when you search for it in the morning, after seeing its IP address in the logs? So, Where was Waldo?

The Alternative:

What we discussed is the traditional, widely used, time-trusted and time-consuming method for finding a rogue device on your network. There are alternatives, like using scripts that grab the output of “show arp” or “show mac address-table” from each switch at different time intervals and store it as logs. Again, not the best or easiest approach.
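
Crude as it is, the snapshot approach does answer "Where was Waldo?". A minimal Python sketch of searching timestamped MAC-table snapshots; the data layout is invented for illustration:

```python
from datetime import datetime

def where_was_waldo(snapshots, mac):
    """Search timestamped MAC-table snapshots (e.g. periodic
    'show mac address-table' captures per switch) for every place
    a MAC address was seen, newest first."""
    hits = [(ts, switch, port)
            for ts, tables in snapshots
            for switch, table in tables.items()
            for m, port in table.items() if m == mac]
    return sorted(hits, reverse=True)

snapshots = [
    (datetime(2013, 10, 1, 2, 0), {"sw2": {"aa:bb:cc:dd:ee:ff": "Fa0/3"}}),
    (datetime(2013, 10, 1, 9, 0), {"sw2": {}}),  # gone by morning
]
print(where_was_waldo(snapshots, "aa:bb:cc:dd:ee:ff"))
# -> [(datetime.datetime(2013, 10, 1, 2, 0), 'sw2', 'Fa0/3')]
```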

In an era of advanced threats, data theft and zero-day malware, you cannot afford to play the waiting game. As soon as you see an unknown device on the network, you first need to block it by shutting down the port it is connected to. Only then comes the part of trying to find who the device belongs to, whether it was an approved device or not, and what it was actually doing in your network. Methods like manual or script-based ARP and MAC address table lookups are neither the fastest nor the easiest solution. And if this device is a bot taking part in a DDoS attack or sending out SPAM emails, you really need to act fast.

This is why network administrators should deploy tools that can help detect unauthorized network devices, shut down ports as soon as you find a rogue device, track suspicious activity or even create a whitelist for trusted devices. With a user device tracking tool for the network, you can be sure of finding unknown devices within minutes. So, There’s Waldo!

To overcome that bonus difficulty level we added, the tool should also be capable of storing the history of the device, like where it was connected prior to disappearing. Even more importantly, if you see that the device is wreaking havoc in your network, or if you think the device is suspicious, rather than having to open PuTTY, log in to the switch, search for the port and then perform the shutdown action from the switch, you should be able to do a remote shutdown with a click from the tracking tool. And finally, if the tool can integrate with your DDI solution to help with IP address management, even better!

UDT-without SW logo.png

Remember to use a tool that is feature rich and meets all your requirements. You not only ensure that an unknown device will not be a hindrance to network uptime, you could also spend some time playing the real game. For more information on protecting your network, take a look at our “Detecting and Preventing Rogue Devices” whitepaper.


30 Day Full Feature Trial | Live Product Demo | Product Overview Video | Twitter

Testing patches before deployment is perhaps the most critical step in the patch management process.  Although patch management has become a reliable solution for end-point vulnerability protection, organizations often fail to realize the impact of incorrectly patched applications and failed updates. As much as we want applications updated, we need to be careful about how we introduce a new patch update into the application environment.

Consider the possibilities below, which need to be analyzed before deploying a patch update.


  • Will the patch be successfully deployed in the target environment?
  • Will the patch be compatible with the OS platform?
  • Will the patch cause any unanticipated issues on the target platform post-deployment?
  • How will you know if the patch deployment has failed?


Failing to analyze these questions could invite more complaints and issues from end-users and cause administrative headaches for IT admins.



Best Practices for Pre-Deployment Patch Testing


The goal of testing patches before deployment is to ensure the system's applications and operations are not impacted, and business services are not interrupted. Proper testing of security updates is an industry-standard best practice that allows you to understand the possible impact of the patch update on your target environment.


Pre-deployment patch testing should consist of the following:

  • Simulate test cases and check if the patches are getting deployed successfully on the target platform(s)
  • Compare application performance before and after patch deployment and check if there are any issues
  • Test if other applications running on the target environment are impacted by the patch update
  • Ensure that the patch can be cleanly removed without causing any application or system issues
  • Incorporate patch testing as part of your IT security risk assessment plan
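
The checks above lend themselves to a simple go/no-go gate before deployment. A hypothetical Python sketch – the check names are placeholders for your own test cases:

```python
def gate_deployment(test_results):
    """Simple go/no-go gate: approve deployment only if every
    pre-deployment check passed, and report what failed."""
    failures = [name for name, passed in test_results.items() if not passed]
    return (len(failures) == 0, failures)

results = {
    "deploys_on_target": True,
    "app_performance_unchanged": True,
    "other_apps_unaffected": False,   # regression found in testing
    "clean_rollback": True,
}
ok, failed = gate_deployment(results)
print("deploy" if ok else f"hold: {failed}")
# -> hold: ['other_apps_unaffected']
```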


Additionally, as a post-deployment check, implement a mechanism to notify you of failed patch deployment.



Leverage Pre-Built, Pre-Tested Patches for Hassle-Free Deployment


SolarWinds Patch Manager allows you to leverage pre-tested, pre-built, ready-to-deploy patches for common third-party applications. SolarWinds does all of the research, scripting, packaging, and even much of the testing and makes patches available on the Patch Manager console ready for deployment. These patches can be automatically synchronized with the WSUS server right along with your Microsoft® patches.


Click here to see the list of all third-party applications and latest patch updates that SolarWinds Patch Manager can help you deploy right out of the box.


Patch Testing v2.png


SolarWinds Patch Manager automates and simplifies your organizational patch management process and allows you to:

  • Create advanced before-and-after package deployment scenarios to ensure that complicated patches deploy successfully without requiring any complicated scripting
  • Get automatic notification of failed patch updates
  • Implement bulk patch deployment to your choice of target computers
  • Wake sleeping or powered-off workstations and servers using built-in Wake-on-LAN and deploy patches


Try out the fully-functional evaluation of Patch Manager!
