Before we get into how this works, you should know that SolarWinds Alert Central is FREE software - totally, completely, free for life!


Once you have installed Alert Central and linked it with your Orion deployment (NPM, SAM, IPAM, NCM, or other Orion-based products from SolarWinds), all your Orion alerts will be sent to Alert Central, and any updates made to an alert in Alert Central will be reflected in Orion. It's just a simple 3-step process to establish this integration:


#1 Configure Alert Central’s Email Settings

  • You need a new email account for managing alerts with Alert Central
  • Alert Central can receive email from POP or IMAP, with or without SSL, or with a direct connection to Microsoft® Exchange Web Services
  • Alert Central uses standard SMTP with or without SSL and authentication to send outbound email


#2 Configure Orion to Generate Alerts for Different Alert Conditions

  • Use the Orion Basic Alert Manager or Advanced Alert Manager to define alert conditions and determine when alerts are triggered
  • By default, Orion is set up to send alerts to Alert Central if you are running Orion Core 2012.2.0, or NPM 10.4 / SAM 5.5 or above.


#3 Configure Routing Rules in Alert Central

  • Add your Orion source credentials to Alert Central
  • Configure the routing rules for Alert Central to route alerts to recipients based on custom conditions.
  • The most common scenarios for configuring alerts are:
    • Routing all alerts to a single group
    • Routing alerts from different locations or network segments to different groups
    • Routing alerts with different keywords to different groups




  • You can also add different Orion properties in Alert Central to route specific alerts and test them on the Alert Central console to ensure they work well
  • Best Practice: Set up a default alert group to receive all alerts which do not meet any of your custom routing conditions
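As a rough illustration, the keyword- and location-based routing described above, with a default fallback group, might look like this in Python. The rule fields and group names here are hypothetical, not Alert Central's actual schema:

```python
# Minimal sketch of keyword/location-based alert routing with a default
# fallback group. Rule fields and group names are hypothetical examples.
def route_alert(alert, rules, default_group="default-alerts"):
    """Return the group of the first rule that matches the alert."""
    for rule in rules:
        location_ok = "location" not in rule or alert.get("location") == rule["location"]
        keyword_ok = ("keyword" not in rule
                      or rule["keyword"].lower() in alert.get("message", "").lower())
        if location_ok and keyword_ok:
            return rule["group"]
    return default_group  # best practice: a catch-all group gets everything else

rules = [
    {"location": "Austin", "group": "austin-netops"},
    {"keyword": "disk", "group": "storage-team"},
]
```

The default group at the end mirrors the best practice above: no alert should ever fall through unrouted.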


Once you have configured your alert routing conditions you can set up escalation policies and on-call calendaring for each group to ensure alerts go to the right user.
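The escalation idea can be sketched as a purely illustrative model: an unacknowledged alert moves to the next tier after each tier's timeout. The group names and timeouts below are made up, not Alert Central defaults:

```python
# Hypothetical tiered-escalation sketch: an alert that is not acknowledged
# within a tier's timeout moves on to the next tier.
def escalation_target(tiers, minutes_since_alert):
    """tiers: list of (group, ack_timeout_minutes).
    Returns the group currently responsible for the alert."""
    elapsed = 0
    for group, timeout in tiers:
        elapsed += timeout
        if minutes_since_alert < elapsed:
            return group
    return tiers[-1][0]  # after all timeouts, the alert stays with the last tier

tiers = [("on-call-primary", 15), ("on-call-secondary", 15), ("team-lead", 30)]
```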


Watch this short video to learn how to configure Orion alerts with Alert Central.



About SolarWinds Alert Central

SolarWinds Alert Central is centralized IT alert management software that provides amazingly streamlined alert management for your entire IT organization. It consolidates and manages IT alerts, alert escalation, and on-call scheduling to help ensure all your alerts get to the right people, in the right groups, at the right time. And Alert Central integrates with just about every IT product that sends alerts, using a simple setup wizard to walk you through the process.


  • Centralize alerts from all your IT systems
  • Automatically route alerts to the right group
  • Filter alerts from noisy monitoring systems
  • Automatically escalate unanswered alerts
  • Easily create and use on-call calendars


And the best part is that SolarWinds Alert Central is completely FREE! Download Alert Central - for VMware or Hyper-V



To catch network issues early, before they turn into a nasty outage, you need to watch a few important network monitoring parameters. With so many factors, objects, and interfaces, the burning question is: what should you monitor?


Knowing the main causes of network downtime helps, but a lot depends on the design of your network, the devices, the services running, and so on. In general, though, which critical parameters need steady monitoring?


Be the first to know, before issues affect your users! Here are a few pointers on the most important monitoring parameters:


Availability and Performance Monitoring: Monitoring and analysis of network device and interface availability and performance indicators help ensure that your network is running at its best. Some of the factors that influence good network availability are:


  • Packet loss & latency
  • Errors & discards
  • CPU, memory load & utilization


Detailed monitoring and analysis of this data for your network elements and timely alerting on poor network conditions like slow network traffic, packet loss, or impaired devices helps safeguard your network from unwarranted downtime.


Errors & Discards: Errors and discards are two different things: errors designate packets that were received but couldn't be processed because there was a problem with the packet, while discards are packets that were received with no errors but were dropped before being passed on to a higher-layer protocol.


A large number of errors and discards reported in the interface stats is a clear indication that something has gone wrong. Investigating the root cause will help you identify and quickly resolve the issue.
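The arithmetic behind error and discard monitoring is simple: compare two successive counter samples. Here is a sketch, assuming 32-bit IF-MIB style counters (the sample field names mirror the MIB objects):

```python
# Sketch: derive an interface error/discard rate from two successive SNMP
# counter samples (ifInErrors, ifInDiscards, ifInUcastPkts).
# Assumes 32-bit counters that may wrap at most once between polls.
COUNTER_MAX = 2**32

def delta(old, new):
    """Counter delta, allowing for a single 32-bit counter wrap."""
    return new - old if new >= old else new + COUNTER_MAX - old

def error_rate(old_sample, new_sample):
    """Errors plus discards as a fraction of packets seen between polls."""
    pkts = delta(old_sample["ifInUcastPkts"], new_sample["ifInUcastPkts"])
    errs = delta(old_sample["ifInErrors"], new_sample["ifInErrors"])
    disc = delta(old_sample["ifInDiscards"], new_sample["ifInDiscards"])
    return (errs + disc) / pkts if pkts else 0.0
```

An NMS effectively does this on every poll cycle and alerts when the rate crosses a threshold.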


Device Configuration and Change: Non-compliant configuration changes are a major cause of network downtime. Not knowing what changes were made, and when, is even more dangerous for your network. Create a system that monitors and approves device configuration changes before they are applied to production. Setting up alerts to notify you whenever a change is made helps you keep device configuration from becoming the cause of network downtime.


Syslog and Trap Messages: Syslog and traps serve separate functions: syslog messages come in for authentication attempts, configuration changes, hits on ACLs, and so on. Traps are event-based and arrive when some device-specific event has occurred, such as an interface logging too many errors or CPU usage running high.


The advantage here is that, instead of waiting for your network management system (NMS) to poll for device information, you can be alerted on unusual events based on these syslog and trap messages.
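For example, the urgency a syslog message carries is encoded in its PRI field (facility × 8 + severity, per RFC 3164/5424), which is what alerting typically keys on. A minimal decoder:

```python
# Decode the PRI field of a syslog message (RFC 3164/5424):
# PRI = facility * 8 + severity. Severities 0-3 are the "act now" levels.
SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def parse_pri(message):
    """Return (facility, severity_name) from a raw '<PRI>...' syslog line."""
    if not message.startswith("<"):
        raise ValueError("missing PRI field")
    pri = int(message[1:message.index(">")])
    return pri // 8, SEVERITIES[pri % 8]
```

So `<34>` decodes to facility 4 with severity "crit", the kind of message worth an immediate alert.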


Network Device Health Monitoring: Monitor the state of your key device components including temperature, fan speed, and power supply so that you do not find yourself in a network outage caused due to faulty power supply or an overheated router. Set pre-defined thresholds to be alerted every time these values are crossed.


So, start monitoring the critical factors that impact the availability and performance of your network, and minimize network downtime!


To learn more: Unified Network Availability, Fault and Performance Monitoring

Alright Security folks, WEBCAST TIME!!


What is this all about?

This webcast will discuss and showcase how various organizations are dealing with IT security. You will learn whether these organizations are implementing tools and techniques to deal with their security data analytics problems, whether they are automating pattern recognition, and more.



When: Thursday, October 03 at 1:00 PM EDT



Why shouldn't you miss this?

During June and July, along with SANS, we conducted a survey that was taken by 600+ security professionals to explore how organizations are dealing with their security data for better analysis and detection. In fact, we even found some shocking facts, for example, almost one-third of the security pros out there are still acting upon hunches when it comes to detecting security threats.



To stay updated, all you have to do is attend this webcast.



Registration link:







Along with SANS analyst Dave Shackleford, you will also hear from Nicole Pauls of the SolarWinds crew.



Nicole Pauls

Nicole Pauls is a Director of Product Management for Security Information and Event Management (SIEM) at SolarWinds, an IT management software provider based in Austin, Texas. Nicole has worked in all aspects of IT from help desk support, to network, security, and systems administration, to complete IT responsibility over the span of 10 years. She became a product manager to help bring accessible IT management software to the masses.

Because Microsoft® SQL Server® is such a widely used database, slowdowns within its environment can lead to issues for multiple applications. More often than not, the root cause of such a slowdown is a memory bottleneck. Many issues affect SQL Server performance and scalability; let's look at a few of them.


  • Paging: Memory bottlenecks can lead to excessive paging, which can impact SQL Server performance.
  • Virtual memory: When your SQL Server consumes a lot of virtual memory, data constantly moves back and forth between RAM and disk. This puts the physical disks under tremendous pressure.
  • Memory usage: No matter how much memory is added to the system, SQL Server appears to use all of it. This can happen when SQL Server caches the entire database in memory.
  • Buffer statistics: When other applications consume lots of memory and your SQL Server doesn't have enough, there can be issues related to page reads, buffer cache, etc.
  • Other: Memory bottlenecks can also occur if databases lack good indexes, or if applications or programs constantly process user requests.


Monitor SQL Server Memory

You must continuously monitor your SQL Server in order to improve its overall performance. During this process, it’s vital to check the statistics (optimally, through alerts) of various performance counters related to SQL Server memory. This is especially true when you’re constantly adding more databases to your SQL Server. Along with memory resources you’ll also have to monitor CPU load, storage performance, physical and virtual memory usage, query responsiveness, etc., which also cause performance issues in the database.
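As an illustration of threshold-based alerting on memory counters, here is a sketch using two commonly watched SQL Server counters, Page Life Expectancy and Buffer Cache Hit Ratio. The threshold values are illustrative rules of thumb, not product defaults:

```python
# Hypothetical threshold checks for two memory-related SQL Server counters.
# Threshold values are illustrative, not official recommendations.
def memory_alerts(counters, min_ple=300, min_hit_ratio=95.0):
    """Return a list of alert strings for counters that breach thresholds."""
    alerts = []
    if counters.get("page_life_expectancy", float("inf")) < min_ple:
        alerts.append("Page life expectancy low: buffer pool under memory pressure")
    if counters.get("buffer_cache_hit_ratio", 100.0) < min_hit_ratio:
        alerts.append("Buffer cache hit ratio low: excessive physical reads")
    return alerts
```

A monitoring tool evaluates checks like these on every sample so you are alerted instead of having to eyeball the counters yourself.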


Improve Your SQL Server Performance

If you’re really looking to improve your SQL Server performance, it’s imperative to understand your existing environment, and which performance counters you really need.


A server monitoring tool should provide out-of-the-box user experience monitoring for your SQL Server database, allowing you to simulate end-user transactions and proactively measure the performance and availability of your SQL Server. The tool will also:

  • Ensure the availability and performance of your SQL Server.
  • Give you visibility into statistics and the health of your SQL Server, and let you set performance thresholds.
  • Build custom reports to show SQL Server availability and performance history.
  • Provide real-time remote monitoring of any WMI performance counter to troubleshoot application issues.


SolarWinds Server & Application Monitor (SAM) comprehensively monitors SQL Server, along with other Microsoft applications. Try the fully functional, free 30-day trial.

If you are considering HP Network Node Manager i (NNMi) for your network monitoring requirements, or if you are already using NNMi in your network and looking for a change, here are 5 concrete reasons why you should consider SolarWinds Network Performance Monitor (NPM).


#1 NPM is an easy-to-use network management solution

    • Intuitive Web interface
    • Customizable and interactive dashboards and charts
    • No training cost or product management overheads


#2 NPM is affordable enterprise-class software

    • Transparent pricing
    • Flexible licensing
    • No hidden costs


#3 NPM can be installed and deployed typically in under an hour

    • Do-it-yourself deployment
    • No professional services, or product delivery time
    • Download, install and start monitoring in under an hour


#4 Built by network engineers for network engineers

    • Purpose-built based on the needs of the IT community
    • Customer and community-driven product enhancement


#5 Buy only what you need

    • Unified network fault, availability and performance monitoring in one single software
    • No add-ons for network monitoring functionality
    • Modular architecture allows you to add other network management solutions and integrate them with NPM


More information on the product comparison is available in this SlideShare


That’s not all!


For network performance and combined bandwidth monitoring and network traffic analysis, you can save up to 75%[1] over HP NNMi with SolarWinds Bandwidth Analyzer Pack (Network Performance Monitor + NetFlow Traffic Analyzer).


[1]^ Estimated cost savings for 500 nodes using SolarWinds Network Performance Monitor SL2000 and SolarWinds NetFlow Traffic Analyzer for Network Performance Monitor SL2000 vs. HP Network Node Manager Advanced + iSPI Performance For Metrics + iSPI for Traffic + iSPI for Multicast. Based on available August 2013 data.


Learn More About SolarWinds NPM

Troubleshooting polling issues can involve many steps, and there are a few quick things we can try before placing a support call. For example, say we have a piece of networking equipment, such as a router or switch, that is being monitored in Orion, and suddenly the interfaces go unknown (gray) on the network device. This tells us that SNMP information is not being recorded for the interfaces; more than likely we will not see any CPU or memory utilization being reported either. To troubleshoot this issue we can do the following.

  1. Verify that the network device is configured correctly for sending SNMP traffic to the Network Management Software. A quick example would be verifying that the community string is correct.
  2. Run a packet capture on the server hosting the Network Management Software. We can poll the device from the Orion website by selecting “poll now” from Node Details. This sends SNMP requests (UDP 161) to the destination network device. While you are polling the device, run a sniffer trace using a program such as Wireshark (formerly Ethereal). If there is a network issue, you will see SNMP frames leaving your Network Management Server with no response returned from the network device.
  3. If you do see frames returning from the network device, this rules out network connectivity issues. Next, verify there is no firewall blocking the SNMP frames on the server hosting the Network Management Software. If you are running a sniffer such as Wireshark, the incoming frames are being read at the network interface card; a local server firewall such as Windows Firewall can still block SNMP frames from reaching the Network Management Software even though we see them hitting the NIC. Note also that if a firewall is blocking SNMP traffic on the local server, this will prevent any SNMP-related information from being reported in your Network Management Software.

In summary, verifying end-to-end connectivity between the Network Management Server and the device in question is key, and these few steps can verify and possibly fix polling issues, saving the end user some time.
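The decision flow of the three steps above can be summarized in a small sketch. The observation names and suggestion strings are hypothetical, just a way of encoding the troubleshooting logic:

```python
# Sketch of the troubleshooting decision flow: given what a packet capture
# on the management server shows, suggest the likely cause of a polling gap.
# Parameter names and messages are hypothetical.
def diagnose_polling(request_sent, reply_seen_on_nic, data_in_nms):
    if not request_sent:
        return "NMS is not polling - check node settings and community string"
    if not reply_seen_on_nic:
        return "no reply from device - check device SNMP config or network path"
    if not data_in_nms:
        return "reply reaches the NIC but not the software - check local firewall"
    return "polling is healthy"
```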

WSUS Inventory collects server information and status information from the WSUS server and populates that data into the Patch Manager database. This data is collected via the WSUS API. An inventory task must be executed in order to use reporting.


Create the WSUS Inventory Task

There are several methodologies that can be used to create a WSUS Inventory task, but the simplest is to right click on the WSUS node in the console, and select "WSUS Inventory" from the menu.


The first screen presented is the WSUS Inventory Options. They provide the ability to handle certain advanced or complex inventory needs, but in 99% of instances, these options will be left at the defaults. In a subsequent post I'll discuss these four options in greater detail. Click on Save.


On the next screen you have the standard Patch Manager dialog for scheduling the task. Schedule the inventory to occur as needed. Typically the WSUS Inventory task is performed daily, but there are scenarios in which you may wish to perform the inventory more, or less, frequently. Be careful not to schedule the WSUS Inventory concurrently with other major operations, such as backups or WSUS synchronization events.


WSUS Extended Inventory (Asset Inventory)

In addition to the WSUS server, update, and computer status data, it is also possible to collect asset inventory data via the WSUS Inventory task. If the WSUS server is configured to collect this asset inventory data, it will be automatically collected by the WSUS Inventory task. To enable the WSUS server to collect asset inventory data from clients, right-click on the WSUS node in the console, select "Configuration Options" from the menu, and enable the first option "Collect Extended Inventory Information".


Using System Reports

With the inventory task completed, you now have access to several dozen pre-defined report templates in the two report categories named "Windows Server Update Service" and "Windows Server Update Service Analytics". The data that is obtained from the WSUS server is re-schematized within the Patch Manager database, optimized for reporting, and presented as a collection of datasources that are used to build reports. To run a report, right click the desired report, and select "Run Report" from the context menu.


Category: Windows Server Update Services

In the "Windows Server Update Services" report category there are 24 inter-dependent datasources available. Ten of them provide 327 fields of basic update and computer information, along with WSUS server data. The fourteen datasources named "Update Services Computer..." provide access to 111 fields of asset inventory data collected by the WSUS Extended Inventory.


Category: Windows Server Update Services Analytics

In the "Windows Server Update Services Analytics" report category there are nine self-contained, independent, datasources. The "Computer Update Status" datasource is the basic collection, and the other eight are based on modifications of this datasource, either by adding additional fields, or filtering the data.


In subsequent articles we'll look in more detail at how to customize existing reports and how to build new reports, including a more in-depth look at datasources and the WSUS Inventory Options.


If you're not currently using Patch Manager in your WSUS environment, the rich reporting capabilities are a great reason to implement Patch Manager.

It's been 35 years since the very first solid-state drive (SSD) was launched, under the name "solid-state disk". These drives were called "solid-state" because they contained no moving parts, only memory chips. The storage medium was not magnetic or optical but solid-state semiconductor: battery-backed RAM, RRAM, PRAM, or other electrically erasable, RAM-like non-volatile memory chips.


In terms of benefits, SSDs stored and retrieved data faster than a traditional hard drive could, but this came at a steep cost. Over the years, the industry has been on a constant quest to make SSD technology cheaper, smaller, and faster, with higher storage capacity, and the transformation of SSDs from their first availability until now has been well chronicled.


Why Should Storage and Datacenter Admins Care?

It's the storage admins, more than PC or notebook users, who spend time managing and troubleshooting drives, detecting storage hotspots and other performance issues. It's imperative that storage and datacenter admins understand SSD technology so they can better apply and leverage it in managing the datacenter.


Application #1 – Boosting Cache to Improve Array I/O

A cache is temporary storage placed in front of the primary storage device to make storage I/O operations faster and transparent. When an application or process accesses data stored in the cache, it can be read and written much more quickly than from the primary storage device, which may be a lot slower. All modern arrays have a built-in cache, but SSD can be leveraged to "expand" this cache, speeding up all I/O requests to the array. Although this approach has no way to distinguish between critical and non-critical I/O, it has the advantage of improving performance for all applications using the array.
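The read-through behavior described here can be illustrated with a toy model, where a plain dict stands in for the fast SSD tier and another for the slow primary array:

```python
# Toy read-through cache: reads served from the (SSD-backed) cache skip the
# slow primary array; misses populate the cache for next time.
class ReadCache:
    def __init__(self, backend):
        self.backend = backend     # slow primary storage (a dict stand-in)
        self.cache = {}            # fast SSD tier (a dict stand-in)
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self.cache[block] = self.backend[block]  # populate on miss
        return self.cache[block]
```

Every hit is an I/O that never reaches the array, which is exactly why expanding the cache with SSD speeds up all requests indiscriminately.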


Application #2 – Tiering – Improving Performance at the Pool Level

SSDs help storage arrays with tiering: dynamically moving data between different disks and RAID levels to meet different space, performance, and cost requirements. Tiering lets a storage pool (RAID group) span drives of different speeds (SSD, FC, SATA) and then uses analysis to put the most frequently accessed data on SSD, less frequently accessed data on FC, and the least frequently accessed on SATA. The array constantly analyzes usage and adjusts how much data sits on each tier. SSD is used where applications demand high I/O performance, and it is often the top tier in an automated storage tiering approach. Tiering is now available in most arrays because of SSD.


Application #3 – Fast Storage for High I/O Applications

Since arrays generally treat SSD just like traditional HDD, if you have a specific high-I/O performance need, you can use SSDs to create a RAID group or storage pool. From the OS and application perspective, the operating system sees the RAID group as one large disk, while the I/O operations are spread out over multiple SSDs, enhancing the overall speed of the read/write process. With no moving parts, SSDs contribute to reduced access time, lower operating temperature, and enhanced I/O speed. Keep in mind that SSDs cannot hold huge amounts of data, so data for caching should be chosen selectively based on what will require faster access: performance requirement, frequency of use, and level of protection.


SSD Benefits in Virtualization

In a virtualized environment, there is a fundamental latency problem with host swapping to traditional disks compared to memory: data retrieval takes only nanoseconds from memory but milliseconds from a hard drive. When SSDs are used for swapping to host cache, the performance impact of VM kernel swapping drops considerably: when the hypervisor needs to swap memory pages to disk, it swaps to the .vswp files on the SSD drive. Using SSDs to host ESX swap files can eliminate much of that latency and help optimize VM performance.



Solid-state drives have become significant in achieving high storage I/O performance. Proper usage of SSDs in your storage environment can ensure your datacenter meets the demands of today's challenging environments. If you are interested in learning more about the advantages of SSDs over hard disk drives (HDDs), look for a comparative post on the topic.

Network administrators work endlessly to maintain high-quality network services so that business-critical processes and applications run smoothly. They are expected to monitor and deliver high network performance without any downtime. While achieving this has always been daunting for network administrators, the need for ‘high network availability’ has increased manifold in recent years.


What is High Availability for a Network?

High availability (HA) can refer to the degree to which a network device, application, or other component in an IT infrastructure is operable and meeting service level objectives. Network Management Systems (NMS) are often used to track the performance and responsiveness of applications and the servers on which they run and to document that high availability is being maintained in a network. Network availability is typically included in SLAs established for IT departments. Hence, monitoring your network performance on a regular basis becomes a necessity.


How can an NMS help achieve HA?

Enterprises struggle to sustain their target network availability objectives as they scale up their networks, and when network bottlenecks occur it can be difficult to ensure availability. A network monitoring system can help administrators by monitoring for and detecting network issues, helping them determine the root cause and troubleshoot.


Network device availability can be monitored by an NMS using ICMP to poll devices for response time and status. By monitoring various network performance indicators (disk space, CPU load, memory utilization, bandwidth utilization, packet loss, latency, errors, discards, and quality of service on SNMP-enabled devices), an NMS helps administrators make sure nothing affects network availability. For instance, monitoring hardware performance with an NMS will alert you when the CPU is overloaded with tasks, which can put the organization's network performance and availability at risk.


What if your NMS fails?

When network administrators monitor and analyze network performance or uptime, the quality of the network data entirely depends on the continuity of the NMS in their environment. To make the NMS available at all times, administrators must have a contingency plan in the event of a system failure.

A failover solution can ensure NMS availability if the server goes down, but only for a particular time period: it keeps network availability monitoring running for a short while, until network administrators solve the underlying issue. There are three different approaches that can help administrators implement failover protection.

  • Active – Active Solution: Administrators run two active servers, both with the NMS up and running, mirroring the application, database, etc.
  • Active – Passive Solution (High Availability): The NMS runs on an active/primary server; in the event of failure, the secondary/passive server takes over all processes for continuity. This is normally implemented over the LAN.
  • Active – Passive Solution (Disaster Recovery): Implemented over the WAN; at the time of failure, a secondary/passive server in a different location takes over, so end users simply continue to access applications as they normally would.

Normally, an NMS follows the Active – Passive solution for achieving high network availability, while Disaster Recovery serves as a backup strategy for when the primary failover falls apart. As businesses largely depend on the availability of various services, enterprises can use an NMS and a failover solution together for uninterrupted network monitoring and high network availability.
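In active-passive setups, the takeover decision typically hinges on a heartbeat timeout. A minimal sketch of the passive node's logic (the timeout value is illustrative):

```python
# Hypothetical active-passive heartbeat check: the passive node promotes
# itself once the active node has been silent longer than the timeout.
def should_take_over(now, last_heartbeat, timeout_seconds=30):
    """Passive node's decision, given timestamps in seconds."""
    return (now - last_heartbeat) > timeout_seconds
```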


NMS and Failover Protection

A network monitoring solution helps you maintain high network availability, and with a failover solution, network administrators can keep the NMS itself up and running.


SolarWinds Network Performance Monitor helps you monitor your network, proactively detect problems, and troubleshoot them. Now, with a Failover Engine, network administrators can be sure they never lose network visibility, even for a short time frame.

By now, you are no doubt aware that SAM 6.0 and NPM 10.6 are both out. Yes, we're pretty excited about it around here, and I want to highlight one particular feature you may have missed: web-based reporting.


Report Writer? Yeah, It Works, But...

Yeah, we've been giving you the ability to create solid, information-packed reports for years; it's basic network monitoring, and Report Writer was handling it. The process would go roughly as follows:

  1. Select a type of data (e.g. Historical Application or Node Availability, Interface Traffic, Active Alerts, etc)
  2. Select the devices you want to report on
  3. Apply some limited filtering and formatting
  4. Execute the report query

It was pretty straightforward, but somewhat limited: you couldn't include a chart of reported data, and you'd need to configure a specific resource just to see the report in the web console. Some of you let us know how limited reporting was, and we listened. After some significant effort we are quite pleased to give you web-based reporting.


Web-Based Reporting. Sounds Good. What is it?

Web-based reporting in SolarWinds products using Orion Platform 2013.2 and higher is a radical reworking of SolarWinds' established approach to IT management reporting. Previously, you designed and generated your reports in a separate application--Report Writer--and then you either printed them out, emailed them to interested parties, or linked to them from designated resources in the web console. Now, with web-based reporting you can construct reports directly from the Orion Web Console using the very same resources you have already been configuring and using in the web console. Any resource in the web console is eligible for inclusion in a new web-based report, and you are no longer limited to a single list of data, as you can include as many charts and resources as you want. Yes, it's pretty exciting. For a thorough walk-through of the process, check out the SolarWinds Orion Web-Based Reports Technical Reference and the chapter "Creating Reports in the Web Console" in the SolarWinds Orion.NPM Administrator Guide. After you've read up, or maybe even before, download your own evaluation of NPM 10.6 or SAM 6.0 and give it a try yourself.

IP Address Management (IPAM) solutions today call for management capabilities beyond just IP address management. To enhance overall administrative effectiveness, network administrators look for consolidated IP, DHCP, and DNS management and monitoring abilities.


So, taking a step beyond IPAM into the DHCP and DNS parts of ‘DDI’, let's examine a few practicalities involved in monitoring and managing these services in your network.


How are DHCP and DNS important to IPAM?


Domain Name System (DNS) is the way in which domain names are located and translated into IP addresses. DNS comprises DNS zones and records that are individual to each DNS service.


These DNS zones and records often need to be updated, created, and sometimes removed. When done manually, these records can become corrupted or contain incorrect information. Also, in large networks where multiple admins operate, the follow-up tasks of IP address allocation, like updating DNS records, may be missed. The worst case is when an admin encounters a "website down" or "domain name not resolving" issue and finds the cause to be a wrong IP in the DNS records.


Network admins managing BIND DNS know the pain of using the command-line interface every time they have to push changes to DNS zones and records.


Dynamic Host Configuration Protocol (DHCP) is a protocol that allows network administrators to centrally manage and automate the assignment of IP addresses to devices in a network. This eliminates the manual task of assigning IP Addresses and enhances the overall management of the IP administration.


In short, the DHCP server takes care of the central management and automation of tasks such as allocating IP addresses, lending out IPs to transient users, and reclaiming addresses that are no longer in use. To set up a DHCP server, the admin calculates subnets, determines scopes, splits scopes, allocates IP reservations, and so on.
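The subnetting arithmetic behind scope planning (splitting a network into scopes, counting usable addresses) can be done with Python's standard ipaddress module:

```python
# Scope-planning arithmetic with the standard library's ipaddress module.
import ipaddress

def split_scopes(network, new_prefix):
    """Split a network into smaller scopes, e.g. a /24 into /26s."""
    return [str(s) for s in ipaddress.ip_network(network).subnets(new_prefix=new_prefix)]

def usable_hosts(network):
    """Usable IPv4 host addresses (network and broadcast excluded)."""
    return ipaddress.ip_network(network).num_addresses - 2
```

For example, splitting 10.0.0.0/24 into /26 scopes yields four scopes of 62 usable hosts each.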


Consolidated IP Address Management Solution


With DNS and DHCP services running in your network, IP address or IP space management can be combined and centralized to increase efficiency in managing these services from a single interface.


Some of the functions present in a consolidated IP Address Management, DHCP/DNS Monitoring and Management solution are:


  • Management - Visibility into scope utilization and management, provision to add new scopes and subnets, subnet scan and corresponding updates on the DHCP/DNS server. Add, edit or delete DNS servers, zones and records alongside IP address and DHCP information in the same integrated interface.
  • Monitoring - Monitor the availability and performance of DHCP and DNS services in real time, such as scope utilization, zone status, etc.




A consolidated solution like this lets you:

  • Manage and monitor DHCP/DNS services in real time alongside IP space management.
  • Stop maintaining multiple management consoles and seamlessly integrate with existing DHCP/DNS environments.
  • Easily propagate DHCP/DNS changes to the servers.
  • Avoid expensive investments in appliances requiring vendor expertise and costly upkeep.


One place to start is SolarWinds IP Address Manager (IPAM). IPAM provides a user-friendly GUI to easily add, edit, and delete BIND DNS zones and records, eliminating the struggle of having to use the CLI. Download a trial version and start managing and monitoring Microsoft DHCP/DNS, BIND DNS, and Cisco DHCP services, including Cisco ASA devices - all from a unified, easy-to-use web console.

Over the last 3-4 days, we have been discussing several revelations that came out of the SANS Security Survey 2013. We had a glimpse of how organizations used security reports, how well organizations were equipped to collect security data and correlate that for threat intelligence and more.



Now, that takes us to the final part of the findings:

  • Satisfaction of organizations on their current Security Analytics & Intelligence capabilities
  • Top IT Security investments planned by organizations for the future



Organizations tend to place too much focus on data protection and end up not monitoring the event logs on their network. Log messages like syslogs, server logs, and system logs are the means to actionable results.


Typically, IT security capabilities are measured on three factors:


  1. Ability to identify potential risks across your IT infrastructure
  2. Intelligence to identify anomalies and suspicious behavior in your network patterns
  3. Ability to respond in time


For this to happen, you need to have visibility across the security events on your network and the intelligence to correlate the suspicious activities.



But the mind-boggling results that came out of this survey were:



Need for a Security Solution

The above results clearly show the need for a strong security solution that alerts you when a specific security condition is encountered, helps you troubleshoot issues and react to policy violations, and performs event forensics and root cause analysis to identify suspicious behavior patterns and anomalies. Ultimately, organizations need to invest sensibly in securing their networks, sensitive data, and systems against potential threats and risks.



Here are some statistics that were revealed regarding the security investment intentions of the participant organizations.




How would a SIEM Solution help?

  • Event correlation for event context and actionable intelligence
  • Real-time analysis for immediate threat detection and mitigation
  • Advanced IT search to simplify event forensics and expedite root cause analysis
  • Built-in reporting to streamline security and compliance



SolarWinds Log & Event Manager (LEM) is powerful SIEM security software delivered as a highly affordable, easy-to-deploy virtual appliance. It collects, correlates, and analyzes log data and alerts you in real time. With its Active Response technology, you can also automate incident responses.


Join us at Las Vegas for SANS Network Security 2013

Are you already there? Well, look for us at Booth 14. We'll be the ones with awesome t-shirts, buttons, and giveaways! Make sure you stop by, have a chat with us, and check out our line-up of security products. That’s not all: meet our security experts and attend live product demos! Come, get geeky!!





Typically, IT admins spend a significant amount of time managing SQL servers because of the volume of data usage and the number of applications accessing SQL databases. This also makes it difficult to optimize SQL server performance. Let’s get into the details of what you need to optimize and how to improve the performance and availability of your SQL servers.


  • Index Fragmentation: SQL Server uses indexes to make data searches faster. When data is modified, index contents scatter, which causes fragmentation. Heavy index fragmentation not only slows down database searches, it also consumes more disk space, degrading performance.
  • Storage Capacity of Temporary Database: Every SQL server has a shared database called tempdb, which is used for storing temporary user and internal objects. Bottlenecks can occur in tempdb because there is only a single tempdb per instance. To avoid bottlenecks and improve performance, you will have to optimize the size of the tempdb file and monitor it from time to time.
  • Top Expensive Queries: One bad query from an application using the SQL database will affect the performance of the whole database server itself—not just the individual database belonging to that application. Monitoring expensive queries lets you troubleshoot possible issues before the SQL server itself is brought down by a single query from one of the hosted applications.
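As a rough illustration of the index-fragmentation point, the commonly cited maintenance thresholds (reorganize between 5% and 30% fragmentation, rebuild above 30%, and leave very small indexes alone) can be encoded in a few lines. The exact numbers are rules of thumb, not hard requirements:

```python
def index_action(fragmentation_pct, page_count, min_pages=1000):
    """Suggest index maintenance using commonly cited thresholds:
    reorganize between 5% and 30% fragmentation, rebuild above 30%.
    Small indexes (below min_pages) are usually not worth touching."""
    if page_count < min_pages:
        return "ignore"
    if fragmentation_pct > 30:
        return "rebuild"
    if fragmentation_pct >= 5:
        return "reorganize"
    return "none"

# e.g. a heavily fragmented 5,000-page index:
print(index_action(42.5, 5000))   # rebuild
```

A maintenance job would feed this decision with real fragmentation figures gathered from the server, then run the corresponding rebuild or reorganize statement.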

Monitor SQL Servers and their Key Components

If you want to improve your database performance, you should monitor certain performance counters. These counters track the performance of your database server and the server hardware, report the value of each counter, and tell you whether you need to take steps to improve performance.

  • Processor Time: Monitors the CPU load on the server.
  • Memory Utilization: You’ll know if there are memory bottlenecks which can ultimately lead to paging.


  • Storage Utilization: You can find out how much storage space is used and how much is free. This way you can allocate space accordingly.


  • Average Response Time: This counter will tell you the response time of the SQL server–what is the average response time for specific queries?
  • Page Life Expectancy: Monitoring this counter tells you how long the average page stays in the buffer cache before being removed. If this falls below the threshold, it is an indication that your SQL server may require more RAM to improve performance.
  • Blocked Queries: You can check the number of queries the database server blocked. They are blocked for a reason, such as poorly written queries, queries that take too long to respond, queries consuming too much CPU time, or efficient queries slowed down by slower ones.
  • Transactions per Second: You will get to know the number of database transactions that have started every second.
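A monitoring tool evaluates counters like these against thresholds on every sample. The sketch below shows the idea with made-up readings and warning levels; the 300-second page life expectancy floor is a commonly cited starting point, and the other limits are illustrative only:

```python
# Hypothetical warning thresholds; a monitoring tool would sample the
# counters continuously rather than hard-code a single reading.
THRESHOLDS = {
    "processor_time_pct":     ("above", 85),
    "memory_utilization_pct": ("above", 90),
    "page_life_expectancy_s": ("below", 300),  # a commonly cited floor
    "blocked_queries":        ("above", 0),
}

def evaluate(sample):
    """Return the counters in `sample` that breach their thresholds."""
    alerts = []
    for counter, (direction, limit) in THRESHOLDS.items():
        value = sample.get(counter)
        if value is None:
            continue
        if (direction == "above" and value > limit) or \
           (direction == "below" and value < limit):
            alerts.append((counter, value))
    return alerts

sample = {"processor_time_pct": 91, "page_life_expectancy_s": 120,
          "memory_utilization_pct": 70, "blocked_queries": 0}
for counter, value in evaluate(sample):
    print(f"ALERT: {counter} = {value}")
```

Here the CPU and page-life readings would trigger alerts, while memory and blocked queries stay within bounds.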

Advantages of Monitoring SQL Server Performance Counters

  • Keeps you aware of the performance and availability of the database server at any given time.
  • Increases the effectiveness of the database server.
  • Helps you avoid performance bottlenecks.
  • Scales to monitor more databases and instances.
  • Helps you maintain your server hardware and keep it healthy.

Watch this video to find out how to optimize and improve your SQL server performance.


In the earlier blogs of this series, we saw a snapshot of the SANS security survey showing how prepared organizations are in their methods of threat detection and response and in generating security reports. Now, we’ll see how organizations are equipped to collect security data and correlate it for threat intelligence.


A startling result that we found from the survey is that,

Image 1.png


Security data collection is the process of gathering logs from across your IT infrastructure, including network devices, security appliances, servers, workstations, virtual machines, and databases. It doesn’t just stop with log data collection, but focuses on how quickly the logs are collected. There are attacks that cause havoc in just a few minutes, and not being able to collect data and gain knowledge of the attack quickly is all the more detrimental to your sensitive data and systems.


Log collection is just the first step. Once the data is collected, there should be a mechanism to process the logs, normalize them, and correlate them as quickly as possible, generating the intelligence to diagnose anomalies in network patterns and isolate suspicious events. Real-time event log correlation is most efficient when it is automated and happens in memory, so that threat analysis becomes extremely fast.
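To make the in-memory correlation idea concrete, here is a toy sliding-window rule in Python that flags an account generating repeated failed logins. Real SIEM correlation engines support far richer conditions; the field names and thresholds here are illustrative only:

```python
from collections import defaultdict, deque

class BruteForceRule:
    """Toy in-memory correlation rule: alert when one account produces
    `threshold` failed logins within `window` seconds. Illustrative
    only; not modeled on any particular product's rule engine."""
    def __init__(self, threshold=5, window=60):
        self.threshold = threshold
        self.window = window
        self.events = defaultdict(deque)   # user -> recent timestamps

    def feed(self, user, timestamp):
        q = self.events[user]
        q.append(timestamp)
        # drop events that have aged out of the correlation window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold    # True = raise an alert

rule = BruteForceRule(threshold=3, window=60)
alerts = [rule.feed("svc-backup", t) for t in (0, 10, 20, 200, 210, 215)]
print(alerts)
```

Because the state lives in memory and each event is evaluated as it arrives, the alert fires the moment the third failure lands inside the window, not minutes later after a batch search.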

Another interesting survey finding was that,

Image 2.png


This is a challenge for security admins, as they cannot manually search for logs and hope to respond to attacks in time. Security information & event management (SIEM) tools help you automate event log correlation and alert you when there’s a security threat or policy violation. Log management becomes simpler and automated when you use dedicated SIEM software. The survey showed that,

Image 3.png


Your security needs will not be fulfilled if you are not ready to invest in a security solution that’s going to help protect your network, systems, and sensitive data. Organizations need more awareness of how an automated SIEM system can provide real-time visibility into the security and operational events on their networks. Try SolarWinds Log & Event Manager, fully functional SIEM software that collects and correlates log data, alerts you in real time, and helps remediate threats with built-in automated incident responses.


Join SolarWinds at SANS Network Security 2013 Las Vegas

You are invited to stop by at booth No. 14 TODAY (September 18th, 2013) to meet our security experts and geeks, and attend live product demos and find a solution to your security challenges. And yes, there is a lot of cool geek gear to grab and wear – complimentary of course!


SANS event.png

The previous article on the SANS Security Survey 2013 discussed the security needs and challenges enterprises face in detecting threats and the complexity of responding to breaches and attacks. Beyond detecting threats and responding to them, we got some insights into the kind of data organizations use for security analytics.

Interestingly, the most common data used to investigate security issues were:


  1. Log data from network (routers/switches) and servers, applications and/or endpoints
  2. Monitoring data provided through firewalls, network-based vulnerability scanners, IDS/IPS, UTMs, etc.
  3. Access data from applications and access control systems


By doing log analysis, you can understand what transpires within your network. Each log file contains many pieces of information that can be invaluable, especially if you know how to read and analyze them. With proper analysis of this actionable data, you can identify intrusion attempts, misconfigured equipment, and more.
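For example, a few lines of Python can pull intrusion-attempt candidates out of authentication logs. The log lines below mimic a typical Linux "Failed password" syslog entry but are fabricated for this example:

```python
import re
from collections import Counter

# Fabricated auth-log lines in the style of a Linux sshd syslog.
log = """\
Sep 18 10:01:02 web01 sshd[311]: Failed password for root from 203.0.113.9 port 52100 ssh2
Sep 18 10:01:05 web01 sshd[311]: Failed password for root from 203.0.113.9 port 52101 ssh2
Sep 18 10:02:17 web01 sshd[340]: Accepted password for alice from 198.51.100.4 port 40022 ssh2
Sep 18 10:03:44 web01 sshd[355]: Failed password for admin from 203.0.113.9 port 52110 ssh2
"""

# Count failed-login attempts per source address.
failed = Counter(
    m.group(1)
    for line in log.splitlines()
    if (m := re.search(r"Failed password for \S+ from (\S+)", line))
)

# Sources with repeated failures are candidates for investigation.
suspects = [ip for ip, n in failed.items() if n >= 3]
print(suspects)
```

A single source hammering multiple accounts, as here, is exactly the kind of pattern that manual log reading misses at scale but a simple parse surfaces immediately.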


Security Reports

Next, you cannot afford to underestimate the importance of security reporting, as it gives you critical information such as vulnerabilities, suspicious behavior on your network, network traffic, etc.


Satisfaction with Current Analytics and Intelligence Capabilities

survey pic.png

The above statistics are based on the SANS Security Survey conducted early this year. For detailed survey results and reports, please click here.

59% of respondent organizations

  • Not satisfied with their library of appropriate queries and reports

56% of respondent organizations

  • Not satisfied with their relevant event context intelligence
  • Have no visibility into actionable security events


How do Security Reports help?

From the above chart, you can see the various factors organizations look for when it comes to reports. While effective security reporting is an absolute necessity for staying informed about security issues, it is also important to understand the different areas where reports can be used.


Compliance Reporting:

Staying in line with IT compliance regulations such as PCI DSS, GLBA, SOX, NERC CIP, and HIPAA requires businesses to monitor and control access to and usage of sensitive information. Scheduling periodic report generation can help you gain visibility over your network and adhere to various compliance regulations, which in turn means protecting your customers’ data.

Security Auditing:

Security auditing is a continuous process, so you need to conduct audits regularly. Reports help you audit network events and establish a security baseline. You can make auditing even more effective by automating the process with SIEM tools.

IT Security Forensics:

You can use reports to identify suspicious behavior patterns on your network, traffic patterns, malicious code, summaries of various events on your network, and more.


Are you all set to meet us at the SANS Network Security meet? Look for us at Booth 14. We'll be the ones with awesome t-shirts, buttons, and giveaways! Make sure you stop by and have a chat with us, and check out our line-up of security products.


Come, grab some!!


top 5 reports.PNG

Finally, SAM 6.0 has been realized and released! The improvements and additions are spectacular. Unlike Microsoft and their Windows 8 debacle, we listen to the people who use our software. That is the very reason all of the new features in this release came from users in one form or another, including a lesser known one highlighting the differences between each version of the Administrator's Guide. A sole voice emailed me requesting this little improvement and explained why it was important to him. My thought process was simple: "Done." Now every SAM user can benefit from this simple little improvement requested by one user. And here it is:


Granted, this won't garner the praise that actual features will, but it is indicative of something larger. We listen and we care. One voice was heard. Was it yours? In fact, I continue to ask for more input from you. Take this article for example. In the article I asked users to demonstrate how they do things. This wasn't for my amusement. I wanted the responses I got to become a collection of "How To" topics so I could add them to a new section of the Administrator's Guide, or even better, create a separate book on the subject. This How To book would demonstrate examples, rather than me telling you how to do something. Here you would be able to "see" something that works, rather than be told. The difference is important because seeing a working version of something allows you to take away what you need from it. Being told how to do something is certainly more restrictive.


And while I'm here, let me ask you for How To videos! Yup, I want to start adding videos to the Help system so you can see how things are done. I think a How To section and a How To video section would help users immensely. What do you think? Share your videos and How To operations in the comment section below, or feel free to DM me and we can talk about more ambitious ideas you may have offline.


BTW, enjoy SAM 6.0MG.

In conjunction with SANS, SolarWinds recently conducted a security survey amongst 647 respondents who are security and network administration professionals from various public and private organizations including federal government agencies, banking, financial, and healthcare institutions across the US and Canada. The results of this survey gave us a deep understanding of the pressing security needs in enterprises, the challenges faced to deal with breaches and attacks, and preparedness of the IT infrastructure teams to contain and respond to security threats.


Threat Detection

As security professionals, we know it’s paramount to have a mechanism in place to detect threats as early as possible so we can contain them or respond with corrective or preventive action. This is where organizations face a challenge: they are not able to detect threats in time, which gives an attack a longer window to wreak maximum damage.


Difficult Threats to Detect

In the past couple of years,

Image 1.png


This is an alarming figure, as it shows how many threats couldn’t be detected quickly. Imagine the impact of those attacks before they were discovered. Until a threat is detected and acted on, there can be significant data loss, system malfunction, failure, and even compromise.


Impact on Systems

Image 2.png


Threat Response & Remediation

The challenge doesn't stop with just detecting the threat. From this survey we found that organizations are also finding it hard to respond to attacks after discovering them.

Image 3.png


We didn't just stop with detecting and responding to threats. We wanted to find out what was stopping organizations from getting this visibility.


Top 3 Impediments for Organizations to Discover & Follow Up on Attacks


Top 3 Impediments.png


As we can see from all these statistics, there is a clear lack of preparedness among IT teams to defend their data and systems from breaches and attacks. Log management is an efficient way to identify abnormal behavior patterns on the network and spot threats. Security information & event management (SIEM) software helps you collect and correlate log data in real time to isolate zero-day threat vectors, and lets you remediate threats with automated responses. Threat detection, response, and remediation, simplified!


Join SolarWinds at SANS Network Security 2013 Las Vegas

You are invited to stop by at booth No. 14 on September 18th 2013 to meet our security experts and geeks, and attend live product demos and find a solution to your security challenges. And yes, there is a lot of cool geek gear to grab and wear – complimentary of course!


SANS event.png

If you own more than one Orion platform product, chances are there are benefits to be had via their inherent integrations that you aren't leveraging. What's that, you say? I can get more out of what I have right now? It's true! Because so many SolarWinds products have built-in integration points, there's a lot of power that is yours for the taking. So if you ever find yourself asking:


  • What products integrate with the product that I own?
  • What products does SolarWinds have, in general?
  • What would these integration points do for me?
  • How do I set it up?


Well, we're going to tell you. In the "Admin" section of your Orion products, you'll see a new resource.



Click on the "Integration Overview" to see a large diagram of our products and how they fit together. It's a large diagram, so you'll only see part of it here.



Click on any link (any product) to read about commonly integrated products and to get step-by-step, illustrated guides to implementing those integrations.


Let us know what you think of this new feature at

It’s not a vendor-dependent network management world anymore. There are many players in the market: vendor-specific enterprise solutions, third-party software, and open source tools are all used across corporate networks, subject to individual network requirements. Network management systems (NMS) are becoming vendor-agnostic, offering the capability to support networking hardware from a wide variety of manufacturers and device models.

When we spoke to some of our customers who had previously used Big 4 (HP®, Cisco®, IBM®, CA®) network management solutions, some of the foremost reasons they switched to SolarWinds were the difficulty of managing an enterprise NMS suite, affordability, and the total cost of ownership of these products, which over time started choking organizations' IT management budgets. Big 4 users felt they paid for features they did not use and were not getting an adequate return on investment.

With increasing network complexity, users seldom need a tool that overshoots their existing problems. To find an easier way to achieve what users want at a reasonable cost, we had to dig deep into the difficulties they faced while using one of the Big 4 products.


Affordability and Total Cost of Ownership

Big 4 network management solutions are high priced, from the initial license cost through ongoing maintenance and support. Big 4 customers end up spending too much on features and capabilities they don’t use, not to mention the charges for the additional support they require, which results in a much higher total cost of ownership (TCO). The following are some examples of where Big 4 customers feel the heat of additional costs.

Maintenance – Traditionally, Big 4 enterprise solutions carry expensive annual maintenance charges. Users have to subscribe in order to receive the latest patches and updates.

License – Organizations that purchase Big 4 products end up paying for software licenses they rarely use. In large organizations, the person who made the initial purchase often isn’t even involved in the implementation process.

Consulting/Services – Additional services rendered by the vendor usually draw costs that weren’t included in the initial purchase. Services may include user training sessions, implementation assistance, deployment, etc.

Return on Investment (ROI) – Most of the time, Big 4 customers were unsure how to justify the ROI of their network management solution given its total cost of ownership. To maximize ROI, they have to spend extra money training their own staff. And with higher maintenance fees, organizations face diminishing ROI as they look for more scalability and integration within their environments in the future.


Big 4 users frequently complain about the complex user interface and dashboards that aren't easy to read. Often, they need special training to understand the processes involved in operating the network management tool, which consumes additional time and effort. Users go through a series of tedious tasks just to operate basic functionality like reports, alerts, etc. Some of the factors that inhibit ease of use are:

Complex User Interface – The absence of clear, easy-to-read, real-time network information has been the Achilles' heel of Big 4 products. Lack of visibility, caused by a fragmented dashboard view of network infrastructure and operations, can keep users from easily finding network issues.

Integration – Expensive enterprise network management solutions are difficult to integrate with other products based on different platforms, even though such integration can be critical to making it easier for users to manage their network environment.

Reporting – Big 4 users also have difficulty generating reports because of the lack of out-of-the-box reporting functionality. Users sometimes create reports manually based on their needs, which is time consuming.

Management Overhead – With extra features that see little or rare use, organizations still need resources to manage them if the need arises. A lack of staff time and resources creates an operational nightmare when users don’t know how to manage these features due to their complexity.

Ideally, customers want a network monitoring tool that’s affordable, easy to download, easy to install, and easy to deploy with minimal effort. SolarWinds Network Performance Monitor (NPM) is an easy-to-use network monitoring tool that is modular and scalable. NPM is an affordable enterprise-class network monitoring solution; once installed, it's up and running in under an hour. You can get 80% of the features present in Big 4 NMS products at 20% of the cost. You can download our fully functional 30-day free trial or test drive our demo.



Learn More

Save Big Bucks over the Big 4

Rightsizing Your Network Performance Management Solution: 4 Case Studies

According to this article, SMB hiring of IT staff is at a standstill, with only 26% planning to add head count. This can only mean that SMBs expect more out of their current IT staff. On the other hand, SMB IT budgets have increased by 7% to 162K. As a result, you won’t get to pawn off your unwanted projects on the newbie. But, you may have some extra cash in the budget to buy software to take some of the work off your hands.


5 Easy Tips to Do More with Less


  1. Spend Less Time on the Phone: Get the most out of a server monitoring tool that will make you look like a rockstar so you’re not stuck on the phone troubleshooting all day.
  2. Don’t Buy Hardware Until It’s Time: Manage your virtual environment in an organized manner. That way you can ensure there is less or no clutter.
  3. Consolidate Your Inventory: Streamline your inventory so your workload of consolidating hardware, OS, and apps becomes easier.
  4. Automate Patching of 3rd Party Apps: Think about ways of automating patch management. That way your productivity goes through the roof and you end up saving hours of manual work.
  5. Automate Mundane Active Directory® Tasks: You can leverage free tools that will automate Active Directory tasks which are otherwise boring, tedious, and time consuming.


Check out this short presentation that teaches you five simple ways to add extra hands without hiring an extra body.




Today we conclude our seven-part series discussing how to use a handful of overlooked best practices to improve network configuration management. Over the course of these posts, we have highlighted the difficulty involved in managing hundreds or even thousands of switches and managing complex configurations consisting of hundreds of command-line statements. The probability of human error is high, and even the smallest of errors can adversely affect service. Therefore, every step must be taken to get it right. This is where Network Configuration Manager (NCM) and our five overlooked best practices come into play.




Control Change

Today we’ll look at best practice #5, which advocates using a well-defined change control process for reviewing, approving, and making changes to your device configurations, plus a process for tracking device end-of-life (EOL). This practice is so powerful because once you have spent a great deal of time and effort implementing and stabilizing your configurations, you want to maintain that stability even as your environment evolves. Changes will be necessary, so why not make them in a controlled fashion?


For this best practice, there are four activities we recommend you consider adding to your management regimen. These four activities are:




The first step is to create a baseline. To baseline a configuration is to create an internal standard against which you can measure other configurations and future changes. It goes without saying that your baselines should be error-free and stable. Once you designate a configuration baseline, you can detect changes and determine whether those changes followed your change control processes.
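Detecting drift from a baseline can be as simple as a text diff. The sketch below uses Python's standard difflib; the configuration commands are generic, made-up lines, not a full running-config:

```python
import difflib

# A stored "golden" baseline and a config pulled from a device.
# (Both snippets are fabricated illustrations.)
baseline = """\
hostname core-sw1
ntp server 10.0.0.5
snmp-server community public RO
""".splitlines()

current = """\
hostname core-sw1
ntp server 10.0.0.9
snmp-server community public RO
ip http server
""".splitlines()

# Keep only the added/removed lines, dropping the diff headers.
drift = [line for line in difflib.unified_diff(
            baseline, current, fromfile="baseline", tofile="running", lineterm="")
         if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]

for change in drift:
    print(change)
```

Every non-empty `drift` result is a change to reconcile against the change control record: either it was approved, or someone went around the process.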



That is a great segue into our next practice – creating a well-disciplined change control process. Ideally, you want to review and approve all changes prior to implementing them in your production network. This is useful if you have teams of admins or engineers doing work; a change control process will help coordinate activities between teams. Or you may have less experienced admins or engineers making changes. Again, a formal change control process allows you to review all changes and detect and fix errors before the change is made.




Our next best practice suggests using automation to deploy configuration changes – especially if the change needs to be deployed across many systems. Automation helps ensure the change is made the same way everywhere, error free. Automation is your friend.




The last recommendation deals with using change control to manage end-of-life (EOL) hardware devices. You may be wondering why tracking EOL devices is so important. Well, it is, for the following reasons:


  • Excessive Support Costs. The primary driver of increasing support costs for EOL hardware is vendor end-of-sale and end-of-life policies. As a device approaches end-of-life, support services can become both explicitly and implicitly more expensive. Failure to secure or renew a maintenance agreement before critical end-of-life dates pass will prevent you from receiving vendor technical support and maintenance upgrades. You may therefore be forced to develop or maintain more expensive in-house skills or contract externally for needed services.
  • Regulatory Non-compliance. Non-conformance costs will become an issue if the device is unable to achieve control objectives defined by your policies.  This may be due to a lack of technical capability or because the device is no longer able to receive updates that address security vulnerabilities.
  • Business Disruption. This risk often produces a broad spectrum of effects caused by catastrophic device failure and can lead to business disruption and accompanying lost revenue and/or brand damage. These problems are amplified when remediating a legacy device consumes even more time because spares cannot be located or the replacement device requires extensive installation and configuration effort.
  • Diminished Productivity. IT technology is a significant driver of business productivity. When new IT technologies are not adopted and utilized, opportunity costs may negatively affect bottom-line financial performance. This problem also surfaces when the business wants to expand a service only to discover that the underlying infrastructure won’t support the business requirements because it is no longer supported. That discovery then forces unplanned expenditures and cost overruns.


By carefully tracking EOL hardware you can work to eliminate these problems.
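Tracking EOL milestones doesn't require special tooling to get started; even a small script over an inventory list can flag devices approaching or past support. The inventory, dates, and status labels below are hypothetical:

```python
from datetime import date

# Hypothetical inventory: device -> key end-of-life milestones.
inventory = {
    "edge-rtr-01": {"end_of_sale": date(2012, 1, 31), "last_support": date(2015, 1, 31)},
    "core-sw-02":  {"end_of_sale": date(2016, 6, 30), "last_support": date(2021, 6, 30)},
    "dist-sw-07":  {"end_of_sale": date(2025, 3, 1),  "last_support": date(2030, 3, 1)},
}

def eol_report(inventory, today):
    """Flag devices past last support, and devices whose end-of-sale
    date has passed (a cue to plan replacement and lock in maintenance)."""
    report = {}
    for device, dates in inventory.items():
        if today > dates["last_support"]:
            report[device] = "unsupported - replace"
        elif today > dates["end_of_sale"]:
            report[device] = "end-of-sale - plan replacement"
        else:
            report[device] = "ok"
    return report

for device, status in eol_report(inventory, date(2013, 9, 18)).items():
    print(device, status)
```

Run against real vendor milestone dates, a report like this turns EOL surprises into planned, budgeted replacements.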






Experience tells us that following these overlooked practices can eliminate network downtime. And if you are the one who introduces your team to these practices and are noticed for it, you will likely find favor with your boss – which is always a good thing when you want to ask for a raise.


Of course SolarWinds can help you with NCM v7.2.  SolarWinds Network Configuration Manager (NCM) is a network configuration management solution.  NCM is part of the SolarWinds Orion Management platform.  The Orion platform offers integrated network performance monitoring, systems and application monitoring, network configuration management, security event monitoring and more.  Using Network Configuration Manager, you can increase efficiency, reduce network downtime and manage configuration compliance by managing and automating major configuration management and change management tasks.


Why not try it today? Click here to download your free 30-day trial!



You can also find and read past posts in this 7-part series here


Post 1:

Post 2:

Post 3:

Post 4:

Post 5:

Post 6:

Don't miss our New Release Roundup tomorrow at 1pm CT. Learn how you can easily add more power, deeper insight, more accurate alerting, and faster time to root cause with snap-in applications from SolarWinds. In this short webcast, we take you through a few of our newest releases and discuss how these products could power up your existing infrastructure. Want to be more proactive? Solve problems faster? Free yourself from routine tasks and focus on more strategic ones?


Bring any questions you have about your own infrastructure or product integrations and we’ll help you solve them.


Sept 13 @ 1pm CT REGISTER HERE
Sept 20 @ 1pm CT REGISTER HERE

We have expanded the Content Exchange for Web Help Desk to now include FAQ articles and helpdesk articles for IT staff.  Now through the end of October you can earn an extra 50 thwack points per article (totaling 100 points per article)! New sections added:


FAQ Articles - Share the most common and repetitive help desk questions and the workarounds.

Help Desk Articles for IT Staff - Share tips & tricks on getting things done, fast! Help your peers leverage your knowledge, like for example, Windows 8 Tips & Tricks or a workflow for VMware troubleshooting.

If you have ideas on how to improve the Web Help Desk Content Exchange, please comment.

If you're like this guy and you fear that sensitive data is walking out of your network on USB thumb drives,


you will be happy to know that SolarWinds' software portfolio includes an alternative to thumb drives.


Lock Down Those USBs

SolarWinds provides a technology called "USB Defender" within its Log and Event Manager software. USB Defender protects sensitive data using real-time notification and other security features when USB devices are detected, including

  • Automatically disabling user accounts
  • Imposing quarantines on work stations
  • Automatically or manually ejecting USB devices

USB Defender also audits and reports on USB usage over time.
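As an illustration of the detect-and-respond pattern described above (not USB Defender's actual implementation), the idea can be sketched as a mapping from event types to response actions; the event names and actions here are invented:

```python
# Hypothetical sketch of a rule-driven USB response policy; the event
# names and actions are illustrative, not USB Defender's API.
RESPONSES = {
    "usb_unauthorized": ["disable_account", "quarantine_workstation", "eject_device"],
    "usb_authorized": ["audit_log"],
}

def respond(event_type, user, host, audit_log):
    """Look up the configured actions for a USB event and record each for audit."""
    actions = RESPONSES.get(event_type, ["audit_log"])
    for action in actions:
        audit_log.append((action, user, host))
    return actions

log = []
actions = respond("usb_unauthorized", "jdoe", "WS-042", log)
# All three configured responses fire, and each is recorded for the audit trail.
```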


An Alternative to Thumb Drives

Regardless of whether your organization allows USB drives, people might still have business protocols that require them to exchange large files. One alternative to using USB thumb drives is to provide employees with universal access to their home folders and/or selected folders on existing file shares. By giving them secure access to the same files they use both inside and outside the office, you reduce the incentive to make copies on removable media or use 3rd-party web sites.


A second alternative is to provide employees with secure "ad hoc" file sharing. This allows your end users to safely send files and request files from their daily business contacts, again without using 3rd-party web sites.

Fortunately, SolarWinds offers both capabilities in the same product: Serv-U Managed File Transfer (MFT) Server.  When you deploy Serv-U in your data center, you can reuse the same security policy, procedures, people, and infrastructure that protect the rest of your data. This will enable you to finally retire those pesky USB thumb drives.


Do You Have Other Security Challenges?

Be sure to check out SolarWinds' new Security site, or leave your thoughts and comments below.


This is part six of a seven-part series discussing how to use a handful of overlooked best practices to improve network configuration management.  Why?  Human error is the leading cause of network downtime.  Eliminate the error with these overlooked practices and you not only improve network up-time but also prove to the boss you are a natural-born leader!

Today we’ll look at best practice #4 which recommends auditing your configurations for standards compliance.




How Configuration Compliance Can Help


Our objective with an audit is to ensure compliance with all applicable policy standards.  There are a variety of security policies and standards that each organization may choose to follow.  Almost all of these are designed to protect the confidentiality, integrity and availability of company systems, data and other resources.




Many of the standards we implement are based on industry requirements, internal risk mitigation measures and other “best practices”.  These standards are expressed as controls which are implemented as configuration settings.  Therefore, by auditing selected configuration settings you can determine your compliance with the standards you follow.








Audits are notoriously unpleasant.  They are time consuming and often reveal shortcomings that can reflect poorly on managers and administrators alike.  However, for many they are a fact of life.  By regularly reviewing your own audit reporting you can discover problems before they are noted by the auditor.  By proactively finding and correcting violations, you can dramatically reduce risk and receive higher scores.

Looking forward to our next post, we will take a look at practice #5, which deals with using change controls to manage changing business requirements and configuration updates. In the meantime, if you've joined this discussion in progress, you can visit our earlier postings.  You can also learn what's new in our recently released NCM v7.2 or download your own fully-functional 30-day trial and start to put these practices to work in your own network.

You can also find and read past posts in this 7-part series here

Post 1 of 7

Post 2 of 7

Post 3 of 7

Post 4 of 7

Post 5 of 7

Post 7 of 7

We are pleased to announce that NPM version 10.6 is now available for download.


In the previous version, NPM 10.5, we shipped some strong features such as IP multicast monitoring and network route monitoring with support for the RIP v2, OSPF v2, and BGP protocols. So, what’s new in NPM 10.6?


Web-based Reporting

NPM 10.6 allows you to create, edit and manage reports right from the Orion® Web console. We have added such flexibility and simplicity on the Web UI that you will love it more than the desktop-based Report Writer. (The original Orion Report Writer is still available.) Using the new Web-based Report Writer you can:

  • Create new custom reports
  • Edit existing reports
  • Duplicate existing reports
  • Add custom tables and custom charts on reports
  • Add data series to charts
  • Duplicate individual reporting resources (CPU load, packet loss, etc.) to add reporting data for more objects (nodes or interfaces)
  • Change report layouts
  • Preview reports before running them
  • Save and execute reports manually
  • Schedule reports for automated delivery

Web-based Reporting.png


New Out-of-the-Box / Built-in Reports

There’s more to reporting. We have added a bunch of new built-in reports for out-of-the-box value. These new reports can also be edited and duplicated as required. Some of these are:

  • Top 10 Interfaces Transmitting Traffic
  • Top 10 Interfaces Receiving Traffic
  • Top 10 Least Available Interfaces
  • Top 10 Interfaces Discarding Traffic


Worldwide Map (Integration with OpenStreetMap)

Worldwide Map can display the status of nodes, or aggregated groups of nodes, over dynamically updated street data. With the 10.6 release, NPM integrates with OpenStreetMap, making it easy to lay out and view where your equipment is and its relative status. You can drill down from the world map to the country, state, town, and street to get a bird’s-eye view of your location/site and see device status on real-time maps. You can additionally show NPM objects on MapQuest maps added as a resource on NPM's Web Console.

Worldwide Map.jpg


Universal Device Poller (UnDP) on Maps

With NPM 10.6 you can now add UnDP objects on network maps using Orion Network Atlas™. There’s a new map tooltip for UnDP which displays UnDP statistics such as OID being polled, current value, and computed status (using thresholds). There’s also a new page for setting the "warning" and "critical" UnDP thresholds.

UNDP on Network Atlas.png



Other New Enhancements and Feature Improvements

  • F5® device support (Interface monitoring for F5 APM® via F5-BIG-IP®-SYSTEM-MIB)
  • Functionality to cancel “scheduled unmanage actions” directly from the node management resource
  • New reporting entities (wireless devices, hardware health, F5, fibre channel switches)
  • Improvements on Syslog & SNMP trap rules


NPM 10.6 is just a click away – Download Now!


In my last blog post, I explained how Virtualization Manager (VMAN) is now integrated with SolarWinds Server & Application Monitor (SAM) and Network Performance Monitor (NPM), showing how we can now view application-to-VM-to-datastore performance, configuration, and right-sizing all in a single pane of glass.

SolarWinds Virtualization Manager (VMAN) to Storage Manager (STM) Integration

I wanted to take this a step deeper and introduce VMAN’s integration with Storage Manager (STM). For some time, VMAN has been able to link from a datastore in VMAN to the LUN/volume view in STM. This is enabled by adding the STM IP address and username/password in VMAN. Once done, the datastores are hyperlinked; selecting a link opens the underlying LUN or volume view in STM.

By having a flow from VMAN to STM, we can view datastore performance in VMAN and then simply click the hyperlink to open the LUN view in STM. Here we can see how the underlying LUN is performing at the array level. This is a great way of having end-to-end mapping: it takes the guessing game out of which datastore is on which LUN and provides quick performance analysis.





SolarWinds Server and Application Monitor (SAM) to Virtualization Manager (VMAN) to Storage Manager (STM)

If we have the link from VMAN to STM enabled, and we also enable the integration from VMAN to SAM, then we will have a hyperlinked datastore to STM from SAM as well.

This adds really nice functionality and workflow from the application to VM to datastore and LUN all from SAM.

In SAM, we can click on a server and choose the storage tab, or simply click on the storage submenu.



I now have the LUN view and it is hyperlinked, so from here I can simply click on the link and the LUN/volume will open in STM!



SolarWinds Systems Management solutions like SAM, VMAN, and STM help solve the daily challenges of the sysadmin. To find out more, read the eBook “A Day in the Life of a SysAdmin” now.

Please join us for a monthly product update from the SolarWinds Product Management team. The team will cover what’s new, what’s coming, and what we’re thinking about for future releases. Each session  will be very collaborative. We want to hear your thoughts, questions, and requests.


September 11  

In September, we’ll show you some interesting new developments centering around Orion platform product integrations. Plus, our NCM PM will give a brief overview of what’s new and what’s coming up. Be sure to bring any questions on our recent releases, and we’ll get them answered.


Register here.

What is IP Multicasting?

While traditional IP communication allows a host to send packets to a single host (unicast transmission) or to all hosts (broadcast transmission), IP multicast is a bandwidth-conserving technology that allows a host to send packets to a subset of all hosts as a group transmission. Multicast transmission reduces traffic by simultaneously delivering a single stream of information to multiple receivers.



Key Benefits

  • Considerable bandwidth savings as there’s just a single stream of traffic transmitted
  • Elimination of redundant traffic on the network
  • Reduced load on servers and CPUs
  • Functionality to choose individual receivers or group of receivers


Thus, IP multicasting allows you to efficiently distribute video, voice and data to virtually any number of corporate recipients and homes. Multicast is increasingly being deployed in enterprises for services such as multimedia distribution, finance, education and desktop imaging.
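A quick back-of-the-envelope sketch (in Python, with illustrative numbers) shows where the bandwidth savings come from: unicast sends one copy of the stream per receiver, while multicast sends a single copy:

```python
def unicast_bandwidth(stream_kbps, receivers):
    # One copy of the stream per receiver must cross the sender's uplink.
    return stream_kbps * receivers

def multicast_bandwidth(stream_kbps, receivers):
    # A single copy leaves the sender; routers replicate it downstream.
    return stream_kbps if receivers > 0 else 0

# A 500 kbps video feed to 200 corporate receivers:
assert unicast_bandwidth(500, 200) == 100_000   # 100 Mbps of unicast traffic
assert multicast_bandwidth(500, 200) == 500     # vs. a single 500 kbps stream
```

The gap widens linearly with the number of receivers, which is why multicast matters most for one-to-many services like video distribution.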


Why Monitor Multicast Traffic?

It’s important to monitor multicast traffic because you need to be sure of the availability of the multicast receivers, and whether there’s any packet loss or latency with packet transmission via protocols such as IGMP and PIM. As a network admin, you also need to know what the multicast route path is, and whether the network devices and interfaces transmitting the multicast data have any performance issues that inhibit multicast data delivery.


Which Multicast Metrics Do You Need to Monitor?

Network Devices by Multicast Traffic

At a high level, this statistic keeps you informed of the top consumers of multicast traffic on your network.

pic 1.png


Multicast Group Members

To be able to easily isolate issues happening in multicast groups, you need to know the multicast devices and interfaces that are part of a multicast packet transmission, and monitor their availability, health, and performance right alongside multicast data.

Pic 2.png


Multicast Traffic Metrics at the Device Level

  • How does a network device share multicast traffic from various transmissions?
  • What is the node utilization over time by multicast traffic?

Pic 3.png


Multicast Traffic Metrics at the Interface Level

  • Current rate of transmitted multicast (pps)
  • Current incoming multicast traffic to interface (pps)
  • Current outgoing multicast traffic from interface (pps)


Multicast Topology

For any given network device routing multicast traffic, knowing what the upstream and downstream routers are can help you get a view of the immediate topology. From the topology you can see which interfaces are used for packet transmission and reception, whether they are available, how much traffic is passing through them, etc.

Pic 4.png


Troubleshooting multicast-related performance issues is a time-consuming, manual process requiring knowledge of the CLI and advanced scripting. SolarWinds Network Performance Monitor (NPM) gives you the ability to automatically monitor your multicast network and alerts you when performance issues arise, allowing you to reduce your time to resolution. SolarWinds NPM combines views of real-time multicast information alongside device information so you can drill down and see route details of multicast nodes and monitor routers, switches and end-points that receive and forward multicast packets.


Learn More

Read this white paper to further understand how to monitor IP multicast traffic.

Multicast White paper.png

Workflow within an IT team often involves both separation and coordination between management-related tasks and those performed by other team members. Within a config change approval system, for example, a manager might set the policy and strategy based on which team members make changes to their assigned areas of the network. Conversely, when a team member sets up a device config change that targets a specific set of nodes, a manager often provides approval for the change and may schedule the day and time for the change to take place.


One division of labor that often goes unintegrated in the daily processes of network maintenance is tracking the support status for the different devices running on the network. It’s common for a manager to track support and maintenance of equipment as an activity isolated from other more integrated network monitoring processes.


The disadvantage of isolating support, planning, and procurement to management oversight is that it creates a single point of failure. While a manager checks the activity of team members, nobody tends to check the manager’s awareness of which devices on the network are nearing end of life or end of vendor support.


Tracking Device Support Status

The significant advantage of delegating consistent tracking of device support status to a team member with responsibility for the relevant area of the network is that the responsible team member consistently keeps support status in mind when planning device configuration work. As a result, in planning changes in the network, different team members can appropriately engage managers to plan and procure device upgrades as an integral part of maintaining the integrity of the network. Also, a manager who receives reports on end of life and end of sales related to network devices can trust the point person on the team to provide the most strategic information about what to do about the devices that appear in a report. Team members gain additional ownership over the devices they maintain and managers gain an overview that helps them focus without being bogged down with unnecessary details.


SolarWinds Network Configuration Manager version 7.2 introduces an End of Life and End of Sales tracking feature that integrates this additional awareness into the IT team’s daily monitoring workflow.

Monitoring the devices in your underlying network infrastructure is vital to maintaining network uptime. The daily routine for administrators is to anticipate potential network issues that might bring business services down at any time. But how can they be proactive and find issues that can cause downtime? Today, we will discuss why it is important to take a step back and have a macro view of monitoring network devices, by understanding how customizable views for devices, network grouping, and dashboards can help us focus on what really matters to the business. Any organization with a huge network environment is bound to receive large amounts of information on the status of devices, interfaces, connectivity, etc. Finding the root cause of a problem will be difficult with all those events constantly pestering you if you do not have network grouping already implemented.


Shown below is an issue of an Outlook application not responding properly. This is a business service failure. Only if you have set up alerts to notify you from different parts of your IT infrastructure - whether your network, your application server, etc. - will you be able to isolate the right cause of the issue for faster troubleshooting.


Business Service.PNG


Why Is Grouping Necessary?

By including all of your network components and adding new devices to your database as they are discovered, you will be able to monitor, alert, and report on the network as a unit and thus ensure optimal service delivery. Going one step further, grouping will give you the benefit of classifying those network devices based on commonalities or dependencies. Grouping allows you to:

  • Organize devices, interfaces, and servers into groups. This helps network administrators roll up service-level status across the business and constantly monitor critical processes.
  • Group objects by department, business center or geography to analyze the impact of network issues.
  • Prevent multiple alerts on the same issue and help administrators identify the root cause of the problem.


For instance, if a router or switch is down at a particular location, dynamic grouping helps network administrators quickly analyze the impact of the issue by aggregating the views. Without grouping, they have to go through the hassle of a manual process, which risks prolonged business service downtime. Setting up dependencies establishes parent-child relationships in a group, which in turn reduces the number of alerts by pinpointing the problem immediately. This helps you find the root cause of the problem.
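The parent-child suppression idea can be sketched in a few lines of Python; the topology and node names here are purely illustrative:

```python
def filter_alerts(down_nodes, parent_of):
    """Suppress alerts for nodes whose parent is also down;
    the parent's alert pinpoints the root cause."""
    down = set(down_nodes)
    root_causes = []
    for node in down_nodes:
        parent = parent_of.get(node)
        if parent is None or parent not in down:
            root_causes.append(node)
    return root_causes

# A site router fails, taking two dependent switches "down" with it:
parent_of = {"switch-a": "router-1", "switch-b": "router-1"}
alerts = filter_alerts(["router-1", "switch-a", "switch-b"], parent_of)
assert alerts == ["router-1"]   # one alert instead of three
```

Three raw "down" events collapse into a single actionable alert on the parent, which is exactly how dependencies reduce alert noise.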


How Customized Network Views Can Help You

Complex networks can be a headache if network administrators do not understand how they impact business critical applications and services. When issues arise, faster response & time-to-resolution can be achieved if administrators customize resource views to meet their specific network environment. You can ease network troubleshooting by:

  • Looking for Top 10 users and applications – Monitoring users/applications who hog most of the bandwidth will help administrators to keep the lid on consumption and maintain business critical processes.
  • Managing by exception and thresholds – Customize network views by creating exceptions. This helps you focus on issues that result from exceeding predefined thresholds. For instance, a sudden increase in third party application traffic can cause a disruption in VoIP calls. This can be avoided if you have a customized view for tracking application traffic.
  • Dashboard – A single window view can help you monitor all the devices, events, etc. It helps administrators resolve issues even before they are escalated.


A robust network monitoring tool will provide you with a good choice of customizable views, grouping and dashboards to simplify network troubleshooting. With SolarWinds Network Performance Monitor, you can group your devices, customize and view data in graphs, tables, maps and top 10 lists with an easy to use web interface. You can download the fully functional 30 day free trial or checkout our product demo.


Watch this video to learn more about IT infrastructure monitoring and the impact on your business service.


This series takes a fun and lighthearted look at the very serious topic of how to be a rock star on your team. We all want to be respected by our fellow geeks and we all know that rock stars drive fast cars and make lots of money.  Unfortunately, making a network configuration mistake is a sure way to attract the wrong kind of attention!  But making these kinds of mistakes is all too easy to do.  Did you know there are over 17,000 Cisco IOS commands alone?  That’s why we have introduced these five best practices to help you manage network configurations like a rock star.



Configuration Backups are Essential

Today we’ll look at practice #3, which suggests that you should defend your device configurations from unwanted or harmful changes, and we’ll offer you some ideas and tips on how to do this.

Protecting your configurations makes sense for a number of reasons.  The foremost is that once you have everything running smoothly, you want to keep it that way.  But there are other reasons, like being able to quickly reverse a mistake or stand up a spare replacement device following a catastrophic failure.


In order to reverse a mistake or quickly provision a spare, you need to have a copy of the most recent device configuration to restore.  Doing backups of device configurations is one of the most popular and compelling reasons why customers purchase SolarWinds Network Configuration Manager (NCM). From a central location you can automatically and remotely backup these configurations and restore them as the need arises.


Suggestions 2-4 relate to monitoring active configurations so you can detect changes and determine whether they are intended.  If they are not, you have the tools to reverse the change by restoring the most recent previous configuration.  In addition to simply knowing that a change has occurred, you can also isolate the change and determine if it is merited.


When you protect your working device configurations from change you save yourself a great deal of time and effort.  Whether an unintended change occurs or a device fails, you will have the tools to easily assess what has happened and the ability to quickly restore service.

Looking forward to our next post, we will review practice #4 which deals with auditing configurations for standards compliance. In the meantime, if you've joined this discussion in progress, you can visit the other postings which are part of this series.  You can also download your own fully-functional 30-day trial and start to put these practices to work in your own network.


You can also find and read past posts in this 7-part series here


Post 1:

Post 2:

Post 3:

Post 4:

Post 6:

Post 7:

What makes Server & Application Monitor (SAM) awesome is its ability to monitor applications out of the box using templates. A template is a set of component monitors that track the performance and current status of your applications. There are over 150 templates, and SAM provides clearly defined settings for what each template monitors, along with best-practice thresholds. You have the flexibility to choose a template with pre-set component monitors, as opposed to creating individual component monitors one by one. And you can customize existing templates to suit your current infrastructure.


With SAM, you can set thresholds on component monitors. These thresholds are extremely helpful in indicating when a component or parameter is reaching a critical state. For example, if you’re monitoring the percentage of free space remaining on a volume, you can set a warning threshold at 15% and a critical threshold at 5%. If disk space reaches a threshold limit, you will receive an instant alert on the condition. You can also track what the normal baseline for a metric is and set statistical thresholds from the baseline: two standard deviations for warning and three standard deviations for critical alerts.
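The baseline-derived thresholds described above can be sketched in a few lines of Python (the CPU samples are illustrative):

```python
import statistics

def thresholds(samples):
    """Warning at mean + 2 standard deviations, critical at mean + 3,
    mirroring the baseline approach described above."""
    mean = statistics.mean(samples)
    sd = statistics.pstdev(samples)
    return mean + 2 * sd, mean + 3 * sd

# Baseline CPU samples (%) collected during normal operation:
cpu = [40, 42, 44, 38, 41, 43, 39, 41]
warn, crit = thresholds(cpu)
assert warn < crit
assert all(sample < warn for sample in cpu)   # normal samples stay below warning
```

Deriving thresholds from the metric's own baseline avoids one-size-fits-all limits: a server that normally idles at 40% CPU warns long before a hard-coded 90% threshold would.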


Compare this to other application monitoring software. The first thing you may notice is that most monitoring software uses “monitoring agents.” The problem with these agents is you can’t pull all the data you want to monitor in an application. For example, if you want to monitor a specific component in your SQL server, the agent will not allow this because of its limitations with customization.

This is where you may find SAM to be far superior to the rest of the pack. The template approach really gives you the freedom to monitor any application in your environment. Here are a few reasons why templates are the real deal:

  • Flexibility: Monitor out-of-the-box applications.
  • Customization: Customize, edit, and modify existing templates.
  • Scalability: There is no limit to the number of components you can add to a template for monitoring applications.
  • Simpler licensing model: Pay only for the number of component monitors you need.
  • Templates on thwack: Get access to hundreds of out-of-the-box thwack templates that are created and shared by SolarWinds® customers and partners.

Check out this short video and learn how to easily add a node and assign custom application templates using SAM.


We began this series by describing network configuration management as “hours of boredom punctuated by moments of terror” -- meaning that manually configuring hundreds or even thousands of routers and switches can be sheer boredom.  But make a single mistake and that boredom suddenly turns into a frantic search to find out what went wrong!  What’s potentially even worse is having to own the mistake.  Which brings us to the reason for this series: you can turn disaster into opportunity by showing your peers how to avoid these all-too-common problems.  That’s why we introduced five best practices, and why we discussed the first of these last week.




How Standardization Can Help

Today we’ll look at practice #2 which deals with standardizing your configurations.  The reason why you want to standardize your configurations is to improve uniformity, which reduces potential error.  To achieve this objective, here are three ideas you can start with.


The first recommendation looks at how you remotely access your devices and standardizes those methods, which include the device login information, communication protocol and IP service ports.  By standardizing how you remotely access devices, you can accomplish the following:

  • Make sure all devices are accessed using a secure communication protocol
  • Make sure no devices are using vendor-supplied IDs and passwords, and that all passwords are strong and conform to your security policy
  • Make sure all account IDs and passwords are synchronized and easily and routinely updated




The second and third recommendations call for using script-based templates to standardize and automate complex configuration changes. Using templates is an excellent way to reduce error because you can spend time developing and testing the template and then consistently apply it to a number of network devices.  You can use templates to perform routine tasks like changing VLAN memberships by port, configuring device interfaces and enabling a variety of services like IP SLA, NetFlow and more. In addition to building a template, you can also schedule its execution to perform ongoing changes.  By using templates you are able to improve configuration management in the following ways:

  • Reduce hundreds of command statements into a single script that can be tested and consistently applied error-free to as many devices as required
  • Perform repetitive tasks with consistency
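As an illustration of the idea (not NCM's actual template syntax), a script-based template reduces a repetitive per-port change to one tested snippet plus a substitution:

```python
from string import Template

# Hypothetical template; the variable names and commands are illustrative.
VLAN_TEMPLATE = Template(
    "interface $interface\n"
    " switchport mode access\n"
    " switchport access vlan $vlan_id\n"
)

def render_for_ports(assignments):
    """Expand one tested template into per-port config, identically each time."""
    return [VLAN_TEMPLATE.substitute(interface=i, vlan_id=v)
            for i, v in assignments]

configs = render_for_ports([("GigabitEthernet0/1", 10), ("GigabitEthernet0/2", 20)])
assert "switchport access vlan 10" in configs[0]
assert "switchport access vlan 20" in configs[1]
```

The template is tested once; every subsequent expansion is mechanically identical, which is where the error reduction comes from.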




To summarize, when you standardize your management of device configurations you introduce uniformity which is key when working with hundreds or thousands of devices and hundreds of unique configuration commands per device.  This standardization will help drive down human error and result in more network up-time.

Next post, we will look at practice #3 which deals with ways to protect your device configurations from harmful changes.  In the meantime, have you taken the opportunity to play with the interactive online NCM demo?  Try it here.  You can also download a 30-day trial that is fully-functional and start implementing these recommendations right now.

You can also find and read other posts in this 7-part series here


Post 1:

Post 2:

Post 3:

Post 5:

Post 6:

Post 7:

Access control through whitelists can limit unwarranted users from gaining access to an organization’s network. But once it is found that users belong to the network, additional authorization determines what services they have access to. Even then, the possibility of a threat or a breach from a rogue within is something that should be anticipated at all times.

Who are these internal troublemakers?

They are those who want to access confidential company information, unethically use company resources, or upload or download data for unofficial/unauthorized use - some of which can be damaging to the organization. Sometimes they are simply unruly employees refusing to abide by policies.

The consequences of each of these ultimately land on the shoulders of the network admin, who is held responsible for restoring normalcy and answering for all the mayhem caused.

How do you determine and curb such disruptive activity?

Say your network management system alerts you to abnormally high traffic that is slowing down all users, or you notice that an IP address from another subnet is trying to access a restricted network. How do you go about determining whether this is a result of rogue activity? And if it is, how do you put a stop to it?

Three simple steps to help you regain control of your network are:

  1. Acquire Information - To gain visibility and determine the current location of the user, use the IP address or MAC address to retrieve more information on the user’s connection details, such as which switch or access point the user is connected to, the port or SSID, the host name and even endpoint details.
  2. Round up Evidence - Pull out data about where the user has been connecting to in the past, or drill down to the port level for a connection history. If you maintain a list of IP address details with the MAC and hostname assignment history, then it is easier to track the activity of a suspicious IP address/user.
  3. Seize Control - Once determined as rogue, immediately block and cut off this user from the network. Being able to do this immediately is vital to reduce or prevent damage to the network. The efforts put in the first two steps are rendered useless if the admin cannot immediately block rogue access or activity in the network.
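The three steps can be sketched as a toy lookup-and-block routine; the inventory data and MAC addresses are invented for illustration, since in practice this information comes from your switches and access points (e.g. via UDT):

```python
# Hypothetical inventory; in practice this data comes from your network gear.
CONNECTIONS = {
    "00:1B:44:11:3A:B7": {"switch": "edge-sw3", "port": "Gi0/12", "host": "LAB-PC-7"},
}
HISTORY = {
    "00:1B:44:11:3A:B7": ["edge-sw1:Gi0/4", "edge-sw3:Gi0/12"],
}
BLOCKED_PORTS = set()

def investigate_and_block(mac):
    info = CONNECTIONS.get(mac)                        # 1. acquire information
    if info is None:
        return None
    evidence = HISTORY.get(mac, [])                    # 2. round up evidence
    BLOCKED_PORTS.add((info["switch"], info["port"]))  # 3. seize control
    return info, evidence

result = investigate_and_block("00:1B:44:11:3A:B7")
assert result is not None
assert ("edge-sw3", "Gi0/12") in BLOCKED_PORTS
```

The point of the sketch is the workflow: location lookup, connection history, then an immediate port-level block, all driven from one place.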

Hence, being able to quickly determine, locate and remediate an internal threat in as little time as possible is the essential step in finally busting these internal culprits. Now if only all this were possible from a single console!

SolarWinds User Device Tracker (UDT) provides network admins the ability to track, locate and block any of these internal unwarranted users. Integration with SolarWinds IP Address Manager (IPAM) can  further support you by providing detailed reports on usage history for a particular IP address. So, download a trial version today and start blocking off unruly users from your network!


If you hate user provisioning and deprovisioning and resetting passwords as much as this guy,



you will be happy to know that SolarWinds has identified five tried-and-true methods you can use to cut your daily FTP Server user administration chores down to almost nothing.


1. Authenticate Company Employees Through Active Directory

You already have all your internal end users configured in Active Directory (AD).  If you have an FTP server like SolarWinds' Serv-U MFT Server that can authenticate users, pull email addresses, and hook directly into end users' existing home folders, go ahead and hook it up to AD. This allows existing employees to immediately authenticate to your FTP server. It also means that employee access is revoked as soon as a user is turned off in AD. Of course, this leaves your external partners and "service" or "automation" users out, but there are additional steps we can take to reduce administrative hassle there too.

2. Authenticate External Partners Through a DB Connection

Serv-U and some other FTP servers can use database entries to authenticate end users not found in AD. This allows you to tie your FTP server into existing web portal or customer service applications, so external partners only need to remember a single set of credentials to authenticate to your web properties.
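To illustrate the general idea, here is a minimal sketch of database-backed authentication in Python, using an in-memory SQLite table as a stand-in for a portal's existing user store. The `partners` table and salted-hash scheme are hypothetical; Serv-U's actual database integration is configured through its own admin console, not code like this:

```python
import hashlib
import hmac
import sqlite3

# Hypothetical partner-credentials table, as a web portal might maintain it.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE partners (username TEXT PRIMARY KEY, salt TEXT, pw_hash TEXT)")

def store(username, password, salt="f3a1"):
    """Store a salted SHA-256 hash of the partner's password."""
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    db.execute("INSERT INTO partners VALUES (?, ?, ?)", (username, salt, digest))

def authenticate(username, password):
    """Return True if the candidate password matches the stored hash."""
    row = db.execute("SELECT salt, pw_hash FROM partners WHERE username = ?",
                     (username,)).fetchone()
    if row is None:
        return False
    salt, stored_hash = row
    candidate = hashlib.sha256((salt + password).encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_hash)  # constant-time compare

store("acme-partner", "s3cret")
print(authenticate("acme-partner", "s3cret"))  # True
print(authenticate("acme-partner", "wrong"))   # False
```

Because the FTP server reads the same table the portal writes, disabling a partner in the portal revokes their FTP access at the same time.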


3. Allow End Users to Change Their Own Passwords

Most modern Web applications allow end users to change their own passwords, and FTP servers should be no exception. FTP servers that also feature a web transfer interface usually have a link or button that allows end users to change their own passwords, and many also allow advanced end users to change passwords via FTP commands.


4. Send Password Expiration Notifications via Email

Security best practices typically state that all accounts that use only passwords for credentials (as opposed to a client key or client certificate) should change their passwords periodically. Turning on user password expiration is easy in any modern FTP server, but a good way to avoid extra help desk tickets is to proactively notify end users of required password changes BEFORE they get locked out.
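The check behind such a notification is simple date arithmetic. In this sketch, the account list, the 90-day maximum age, and the 14-day warning threshold are all hypothetical stand-ins for your FTP server's user store and policy settings:

```python
from datetime import date, timedelta

MAX_PASSWORD_AGE = 90   # days a password remains valid (illustrative policy)
WARN_BEFORE = 14        # start warning this many days before expiry

def days_until_expiry(last_changed, today):
    """Days remaining before the password expires (negative if already expired)."""
    return (last_changed + timedelta(days=MAX_PASSWORD_AGE) - today).days

def expiry_notices(accounts, today):
    """Yield (email, days_left) for users whose password expires soon."""
    for email, last_changed in accounts:
        left = days_until_expiry(last_changed, today)
        if 0 <= left <= WARN_BEFORE:
            yield email, left  # hand off to your mail gateway here

accounts = [("ann@example.com", date(2013, 6, 10)),
            ("bob@example.com", date(2013, 8, 25))]
print(list(expiry_notices(accounts, date(2013, 9, 4))))
# [('ann@example.com', 4)] – ann's password expires in 4 days; bob's is fine
```

Run daily from a scheduler, a check like this turns lockout help desk tickets into routine reminder emails.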


5. Allow End Users to Trigger Their Own Password Reminders

Finally, you can use built-in password reminders on the web interface of your FTP server to avoid the most common of help desk issues: "I forgot my password."


Do You Have Other Security Challenges?

Be sure to check out SolarWinds' new Security site, or leave your thoughts and comments below.

What is Route Flapping?

Route flapping is a stream of fluctuating routing updates received by the routers on your network as they route traffic based on pre-defined routing policies. A route flap occurs when a router alternately advertises a destination network via one route and then another, or when an interface error on the router alternates the router's availability between up and down. When this happens repeatedly, the routing topology is distorted and it becomes difficult for the traffic-sending device to determine the next route hop. The longer it takes to determine the next possible route path, the greater the network service latency or downtime.


Some level of route flapping is unavoidable and quite common. But when flaps are very frequent and the destination router's availability keeps fluctuating, the routing topology cannot converge, and traffic may be re-routed to other devices, creating routing loops that make the situation even worse.


What Causes Route Flapping?

The major reasons for route flap are:

  • Hardware errors
  • Software errors
  • Configuration errors (such as misconfigured Channel Service Units)
  • Intermittent errors in communications links
  • Unreliable connections


Route flapping is a common condition in networks that use dynamic adaptive routing. This approach dynamically propagates information on topological changes to routers, causing them to advertise or withdraw availability based on those changes. If the topology changes too frequently, routes will flap.


How to Control Route Flapping?

There are two ways to control route flapping:

#1 Route Dampening

Route dampening is a way of suppressing flapping routes so that they are "suppressed" instead of being advertised. To accomplish this, we define criteria to identify poorly behaved routes. A route that is flapping gets a penalty for each flap. As soon as the cumulative penalty reaches a predefined "suppress-limit", advertisement of the route is suppressed. The penalty then decays exponentially based on a preconfigured "half-life". Once the penalty falls below a predefined "reuse-limit", the route advertisement is un-suppressed.[1]

Route dampening is turned off by default. You can use the following commands to turn on and control route dampening for the BGP protocol (for example, in Cisco IOS router configuration mode):

  • bgp dampening – turns on dampening
  • no bgp dampening – turns off dampening
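The penalty mechanism described above can be sketched as a small simulation. The parameter values below mirror common vendor defaults (a penalty of 1000 per flap, suppress-limit 2000, reuse-limit 750, 15-minute half-life), but they are illustrative only; actual defaults and tuning vary by platform:

```python
# Illustrative dampening parameters; real defaults vary by vendor/platform.
FLAP_PENALTY = 1000
SUPPRESS_LIMIT = 2000
REUSE_LIMIT = 750
HALF_LIFE_MIN = 15.0

class DampenedRoute:
    """Tracks the dampening penalty for a single route prefix."""
    def __init__(self):
        self.penalty = 0.0
        self.suppressed = False

    def flap(self):
        # Each flap adds a fixed penalty; crossing the suppress-limit
        # stops the route from being advertised.
        self.penalty += FLAP_PENALTY
        if self.penalty >= SUPPRESS_LIMIT:
            self.suppressed = True

    def decay(self, minutes):
        # The penalty decays exponentially with the configured half-life.
        self.penalty *= 0.5 ** (minutes / HALF_LIFE_MIN)
        if self.suppressed and self.penalty < REUSE_LIMIT:
            self.suppressed = False  # advertisement is un-suppressed

route = DampenedRoute()
route.flap()             # penalty 1000 -> still advertised
route.flap()             # penalty 2000 -> hits suppress-limit
print(route.suppressed)  # True
route.decay(30)          # two half-lives: 2000 -> 500, below reuse-limit
print(route.suppressed)  # False
```

A route that flaps twice in quick succession is suppressed, then re-advertised about half an hour after it stabilizes, which is exactly the behavior that keeps transient flaps from churning the wider topology.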


#2 Route Aggregation

Route aggregation (or route summarization) is the process of limiting the visibility of topology details so that routing updates caused by topology changes do not reach the router. This process consolidates selected network routes into a single route advertisement and improves network stability by reducing unnecessary routing updates when a part of the network undergoes a change in topology.
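Python's standard ipaddress module can demonstrate what summarization does: four contiguous /26 subnets behind one router collapse into a single /24 advertisement, so a flap inside any one of the /26s never changes what the rest of the network sees:

```python
import ipaddress

# Four contiguous /26 subnets that together cover 10.1.0.0/24.
subnets = [
    ipaddress.ip_network("10.1.0.0/26"),
    ipaddress.ip_network("10.1.0.64/26"),
    ipaddress.ip_network("10.1.0.128/26"),
    ipaddress.ip_network("10.1.0.192/26"),
]

# collapse_addresses merges adjacent networks into the smallest covering set.
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('10.1.0.0/24')]
```

One summary route advertised upstream means one routing update, however the details behind it churn.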


Route flapping is one of the reasons networks face latency and downtime, impacting network performance and VoIP call quality. SolarWinds Network Performance Monitor (NPM) allows you to discover and view routing table information for monitored nodes, identify flapping routes, and create alerts for detected routing table changes. NPM supports the RIP v2, OSPF v2 and v3, and BGP protocols for route monitoring.


[Image: Route Flapping]


Learn More

Read this white paper to understand the importance of network route monitoring and ways to troubleshoot route errors.

[Image: Route Monitoring White Paper]

Managing the IT infrastructure for your organisation and shielding your network against security threats can be a thankless job. It requires a lot of time and expertise. Below are some key pain points that IT professionals face on a daily basis when trying to manage a secure network.


1. Extracting useful information out of events

Consider the logs from AD or a firewall: the sheer volume presents many challenges. Every admin wastes a lot of time trying to extract useful information from the logged events and to understand the root cause. There may also be situations where IT admins need to track down how many VPN connections happened after hours, the top sources for specific firewall ACLs, and so on.

Solution: SolarWinds Log & Event Manager (LEM) aggregates all the logs into a single location, thus making troubleshooting, root-cause analysis and forensics much easier. These log entries are processed (or normalised) to extract information and display the data in a common column/field-based format, rather than the complex format that you see in the source data.
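As a rough sketch of what normalisation means in practice, a vendor-specific line is mapped onto a common set of named fields. The raw log format, regex, and field names here are hypothetical illustrations, not LEM's actual schema:

```python
import re

# Hypothetical raw firewall log line; real formats vary widely by vendor.
RAW = "Oct 02 14:31:07 fw01 DENY TCP 203.0.113.9:51513 -> 10.0.0.5:22"

PATTERN = re.compile(
    r"(?P<time>\w{3} \d{2} [\d:]{8}) (?P<host>\S+) "
    r"(?P<action>\w+) (?P<proto>\w+) "
    r"(?P<src_ip>[\d.]+):(?P<src_port>\d+) -> "
    r"(?P<dst_ip>[\d.]+):(?P<dst_port>\d+)"
)

def normalise(line):
    """Map a vendor-specific log line onto a common field schema."""
    m = PATTERN.match(line)
    return m.groupdict() if m else {"raw": line}  # keep unparsed lines intact

event = normalise(RAW)
print(event["action"], event["src_ip"], event["dst_port"])  # DENY 203.0.113.9 22
```

Once every source is reduced to the same columns, "show me all DENYs to port 22, from any device" becomes a single filter rather than a per-vendor parsing exercise.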


These normalised events are processed against your Rules, sent to your Database for archiving, and sent to the LEM Console for monitoring. As there can be millions of events each day, LEM Console uses filters to categorise the type of events in real time.


LEM also helps you detect the source IP of account lockouts, and then allows you to re-enable the account from the web console or automate that process.


2. Event Consolidation & Correlation

Typically, security admins spend a lot of time searching through events across their network and systems, and it becomes difficult to identify the issue and take responsive action.


Solution: SolarWinds LEM offers an easy way of searching through millions of events across your network, and it gives you the ability to ignore the type of device and focus on behavior patterns. Event correlation is the key to an effective SIEM solution, and LEM provides hundreds of pre-built rules with an easy-to-use interface for customisation, giving LEM a significant advantage over Splunk. Also, the real-time, in-memory analytics ensure that when there is a security issue, the notification and response are instantaneous.
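To make event correlation concrete, here is a minimal sketch of one such rule: alert when a single source produces five failed logons within a 60-second sliding window. The thresholds, event fields, and rule name are illustrative, not LEM's actual rule schema:

```python
from collections import defaultdict, deque

WINDOW_SECS = 60   # illustrative sliding-window length
THRESHOLD = 5      # illustrative failure count that triggers the rule

failures = defaultdict(deque)  # src_ip -> timestamps of recent failures

def on_event(event):
    """Feed normalised events in; return an alert dict when the rule fires."""
    if event["type"] != "logon_failure":
        return None
    q = failures[event["src_ip"]]
    q.append(event["ts"])
    while q and event["ts"] - q[0] > WINDOW_SECS:
        q.popleft()  # drop failures that fell outside the sliding window
    if len(q) >= THRESHOLD:
        return {"rule": "brute-force-logon", "src_ip": event["src_ip"]}
    return None

# Five failures from one source in 50 seconds trips the rule on the fifth event.
alerts = [on_event({"type": "logon_failure", "src_ip": "10.0.0.7", "ts": t})
          for t in range(0, 50, 10)]
print(alerts[-1])  # {'rule': 'brute-force-logon', 'src_ip': '10.0.0.7'}
```

The point of correlation is exactly this: no single event is alarming, but the pattern across events, evaluated in memory as they arrive, is.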


With LEM’s nDepth visualization techniques, you can examine your log data from several perspectives and respond to events in real time. The nDepth view contains a powerful search engine that lets you search all of the event data and displays the results with several different visual tools, which can also be combined into a customisable dashboard. You can also analyse the root cause of events using historical data, and compare raw data and normalised events side by side.


3. Workstation Edition Security Issues

Workstation security has long been a concern for security admins, given how vulnerable workstations are. Your employees process content from the Internet and email and may come in contact with infected files; sometimes they share files or use external mass storage devices. Monitoring this activity becomes very difficult, especially as your IT environment grows.

Solution: LEM monitors both server logs as well as workstation logs and tracks key information like:

  • Logon/logoff attempts
  • Non-compliant folder sharing
  • URLs accessed
  • Insecure file transfers
  • Unauthorised software installation
  • Malicious processes
  • Misappropriation of user privileges

LEM helps you effectively troubleshoot issues by understanding the relationship between various activities through multi-event correlation, and alerts you whenever it encounters a security threat. Based on the log information, LEM provides many useful built-in Active Responses that help combat critical workstation security threats on your network: they react in real time to counter anomalies, threats, and policy violations without requiring human intervention to confirm or activate any action.

Some key active responses include:

  • Delete user account and user group
  • Block IP address
  • Log off user
  • Restart/shut down machine
  • Disable USB devices



When: September 04, 2013 (Wednesday)


Webcast Agenda:

Lawrence Garvin, SolarWinds Head Geek and WSUS MVP, will discuss:

  • How to create basic and advanced packages with the Package Wizard feature of Patch Manager
  • How to use the PackageBoot™ function to create complex before- and after-deployment scenarios

Bring your questions and we’ll answer them all!


Webcast Registration:

[Image: Patch Webcast - Register Here]


SolarWinds Patch Manager is easy-to-use patch management software that allows you to simplify, centralize, and automate updating all your Microsoft and other third-party applications. Patch Manager has a built-in Package Creation Wizard that allows you to:

  • Easily create custom packages – Microsoft SCUP and complicated scripting not required
  • Support deployment of any MSI, EXE, or MSP through WSUS or Configuration Manager

[Image: Package Creation]
