Welcome to the SolarWinds blog series “The Best Practices for Optimizing Alerts”. This is the last part of a 3-part series where you can learn more about best practices for optimizing the network alerts in your IT environment.

In the previous blog, we discussed network-based alerting and how it helps administrators maintain continuous network uptime and service availability. In this blog, we will explore activity-based alerting and its importance in monitoring critical servers, applications, and processes in your network environment.

One of the responsibilities of a network administrator is to identify network issues by staying up to date on events that could affect network performance. Administrators usually have a pre-defined set of conditions they know will affect their network, and they set up alerts that notify them of unusual activity. Activity-based alerting helps network managers be proactive and resolve issues quickly.


What is Activity-based Alerting?

Network administrators can set activity-based alerts to be informed when specific actions are performed, such as unauthorized access to a user file or a port, based on event logs, SLA performance, or syslogs. Activity-based alerting notifies the network administrator when an event matches specific pre-defined criteria.


Troubleshoot network issues faster!

Activity-based alerting helps you ensure that critical processes operate continuously without interference. For example, if network administrators set restricted access to certain business applications and an unauthorized IP address tries to access confidential data, administrators can receive an alert on that specific event and respond immediately. It also helps formulate network policies and procedures based on user activity.

Other examples of activity-based alerting include network bandwidth usage and unauthorized USB usage.


Get Intelligent Alerts on your Network Activity

Activity-based alerting keeps you informed about the specific cases in which you need to receive alerts. Intelligent network alerting helps you recognize and correct issues before your users experience performance or availability problems. It not only keeps you updated on the events that happened, but also ensures you don’t receive unnecessary alerts.

By monitoring your network, you can find and correct issues before degradation or availability problems affect network performance. Also, learn more about threshold-based and network-based alerting and their significance in troubleshooting network issues.

The first and foremost reason "WHY" any organization and its IT security team would want to monitor workstation logs is to Protect Confidential Data & Prevent Data Loss!


Workstations are vulnerable to many security threats and there are many compliance mandates that demand monitoring workstation activity for policy violations. Some potentially risky security events on your enterprise workstations include:

  • Unauthorized users logging onto workstations
  • Multiple failed logon attempts
  • Use of unauthorized USB drives
  • Launch of prohibited applications (IMs, games, etc.)
  • Changes to local accounts and groups
  • System changes such as installation of unexpected software, and changes to local policies, etc.


If you're wondering "HOW" to monitor workstation logs, we give you SolarWinds Log & Event Manager Workstation Edition.


Workstation Edition is the way to extend your existing Log & Event Manager (LEM) installation and make workstation log management more affordable than ever. LEM Workstation Edition provides all the functionality of LEM to help you cost-effectively collect, correlate, analyze, and store logs from a greater number of workstations.


View this SlideShare to learn more about LEM Workstation Edition and how LEM users can benefit from simplified log management and enhanced workstation security!

Alright, are you all set for BSides Las Vegas (July 31st – August 1st 2013)?


This is one event you don’t want to miss--a security conference run by the security community for the security community! It’s a chance to exchange ideas, ask questions, and learn from others just like you.

We’ll be in the main chill-out space! At our table, you’ll have an opportunity to discuss how SolarWinds is tackling the real-world security challenges you face each day, and learn more about our different security solutions, including SolarWinds Log & Event Manager.


Now, the goodies!!


Walk in and you don’t have to walk out empty-handed!! Yes, you could easily win raffle prizes, some of them being:

  • iPad Mini
  • Roku 3


It doesn’t end there, we also have SolarWinds Surprise prizes!! Are you the lucky one??


Earlier in this blog series, we compared vSphere 5.1 and Hyper-V 2012 in terms of their storage management and memory handling capabilities. In this blog post, we’ll discuss how the two hypervisors perform CPU scheduling and manage CPU resources.


When there’s workload running on a virtual machine (VM), the hypervisor needs to efficiently schedule the load for execution lest there be any CPU contention and resource bottlenecks. CPU contention impedes the ability of the VM to respond to processing requests, and both vSphere and Hyper-V have their own CPU scheduling mechanisms to avoid this problem.



CPU Scheduling in vSphere 5.1

vSphere 5.1 has a CPU scheduler that helps maintain the performance of the VM and meet system objectives such as responsiveness, throughput, and utilization. You can change the amount of CPU resources allocated to a VM by using the shares, reservation, and limit settings.

  • CPU Shares: Shares specify how the CPU is allocated to a VM in relation to another VM. If a VM has n times the amount of CPU shares as another VM, it is entitled to consume n times as much of the CPU resource when these two VMs are competing for resources. When assigning shares to a VM, you specify the priority of that VM relative to other powered-on VMs.
  • CPU Reservation: Reservation specifies the guaranteed minimum allocation for a VM. You can make a Reservation if you need to guarantee that the minimum required CPU or memory are always available for the VM.
  • CPU Limit: Limit specifies an upper bound for CPU resources that can be allocated to a VM. A server can allocate more than the reservation to a VM, but never allocates more than the limit, even if there are unused resources on the system.
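The interplay of shares, reservation, and limit can be illustrated with a simple allocation model. This is a sketch of the general idea, not VMware's actual scheduler; the MHz figures and VM names are made up:

```python
def allocate_cpu(capacity_mhz, vms):
    """Hand out CPU in proportion to shares, honoring each VM's
    reservation (guaranteed floor) and limit (hard ceiling)."""
    # Every powered-on VM starts at its reservation.
    alloc = {name: spec["reservation"] for name, spec in vms.items()}
    spare = capacity_mhz - sum(alloc.values())
    assert spare >= 0, "reservations exceed host capacity"
    while spare > 1e-9:
        # Only VMs below their limit compete for the spare CPU.
        open_vms = [n for n in vms if alloc[n] < vms[n]["limit"]]
        if not open_vms:
            break
        total_shares = sum(vms[n]["shares"] for n in open_vms)
        leftover = 0.0
        for n in open_vms:
            grant = spare * vms[n]["shares"] / total_shares
            take = min(grant, vms[n]["limit"] - alloc[n])
            alloc[n] += take
            leftover += grant - take  # capped VMs hand back their excess
        spare = leftover
    return alloc

vms = {
    "web": {"shares": 2000, "reservation": 500, "limit": 2000},
    "db":  {"shares": 1000, "reservation": 0,   "limit": 2000},
}
alloc = allocate_cpu(3000.0, vms)   # web gets about 2000 MHz, db about 1000 MHz
```

Note how the "web" VM's 2:1 share advantage would entitle it to more, but its limit caps it at 2000 MHz and the excess flows back to "db".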


CPU Scheduling in Hyper-V 2012

Hyper-V uses its own set of methods to address CPU contention issues.

  • VM Reserve: This CPU configuration setting reserves a percentage of the host’s CPU capacity for a VM. VM Reserve comes into play in two scenarios:
    1. When a VM is powered on, the host must have at least as much CPU available as the reserve.
    2. When there is CPU resource contention, the VM is guaranteed at least the reserved amount of CPU.
  • VM Limit: This is another CPU allocation setting that places a fixed cap on a VM’s CPU consumption, preventing it from consuming excess CPU resources.
  • Relative Weight: This is similar to CPU Shares in vSphere. Relative Weight is another CPU control setting which, unlike VM Reserve and VM Limit, has no unit; it is just a number between 1 and 10,000. It is useful when VMs need to be prioritized by workload and usage. You can assign a higher Relative Weight to one VM than to another, and when there’s resource contention, Hyper-V will allocate more CPU resources to the VM with the higher weight.


In the next part of the blog series, we’ll discuss the “Workload Migration” capabilities of both vSphere 5.1 and Hyper-V 2012. To learn more about how vSphere 5.1 and Hyper-V 2012 compare:



Read this White Paper:


Watch this Webcast:


If you are interested in virtualization performance monitoring, learn about VMware monitoring and Hyper-V monitoring.

Other parts of the vSphere 5.1 vs. Hyper-V 2012 series:

As an organization grows, so do the problem tickets on the help desk software console. In an ideal situation, all help desk tickets can be addressed one by one. Inevitably, though, you get ticket flooding. Fortunately, there are ways to avoid drowning.


A recent SolarWinds help desk survey (April 2013), showed:

  • 78% of the respondents’ organizations (of 500 employees or fewer) process up to 50 help desk tickets a day
  • 65% of the respondents’ organizations (of more than 500 employees) process more than 50 help desk tickets a day
  • Over 30% of the help desk professionals work on more than 10 tickets a day
  • Over 75% of the respondents spend more than an hour on troubleshooting and ticket resolution
  • More than 48% of the respondents spend more than an hour just managing and tracking tickets




Now, it’s obvious that automation is a key aspect in addressing ticket volumes. But, in which areas is it most useful?

  1. Employee self-service: Effective knowledge base management helps the end-user to automatically find fixes to common/recurring problems. This feature not only enables shorter turnaround-time for ticket resolutions, but also reduces the load on help desk professionals.
  2. Email to tickets: The help desk software should support IMAP, POP, and Exchange protocols to automatically convert incoming incident emails into trouble tickets. Also, automatic email alerts ensure that SLAs are honored and no job is left unattended or unassigned. It’s a known fact that unattended/unassigned tickets pile up!
  3. Automatic alerts to tickets: This is the best part. Imagine ticket integrations with network and systems management software in your IT environment. Help desk tickets can be triggered automatically based on network and system alerts. This enables help desk professionals to get notified of the issue immediately and act on it (e.g. notifying users of downtime duration), evidently avoiding multiple tickets being raised for the same issue.
  4. Task management: In situations where there are repetitive tasks with multiple workflow dependencies and numerous support teams (e.g. new employee induction involving the HR, IT and finance teams/procedures), automated task management comes to the rescue. Custom task elements and event-based workflows help you streamline ticketing and task management with ease.
  5. Asset management: The ability to automatically discover assets based on IP range/subnet, and to keep the data fresh by polling periodically, solves most headaches associated with error-prone manual procedures. Apart from introducing errors, manual processes increase the time taken to troubleshoot and resolve tickets, resulting in missed SLAs.
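The email-to-tickets idea from point 2 above boils down to parsing an incoming message into a structured ticket. Here is a minimal, generic sketch using Python's standard library (this is an illustration of the concept, not Web Help Desk's implementation; the addresses and field names are made up):

```python
from email import message_from_string
from email.utils import parseaddr

def email_to_ticket(raw_message: str) -> dict:
    """Turn a raw incident email into a minimal trouble-ticket record."""
    msg = message_from_string(raw_message)
    return {
        "requester": parseaddr(msg["From"])[1],   # bare email address
        "summary": msg["Subject"],
        "description": msg.get_payload().strip(),
        "status": "open",
        "assignee": None,  # unassigned until triaged; alert if it stays that way
    }

raw = """From: jdoe@example.com
To: helpdesk@example.com
Subject: VPN down

Cannot connect to the VPN since 9am."""
ticket = email_to_ticket(raw)
```

A real system would poll a mailbox over IMAP or POP and handle multipart messages, but the core transformation looks like this.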


SolarWinds® Web Help Desk™ is 100% Web-based and packs in all the aforementioned features, enabling an automated help desk environment that will address a variety of ticket volumes with simplicity and ease.


What are you waiting for? Take a test drive today!

What Are the Key Front-End Components of a Webpage?







Proactively monitor website end-user experience, find causes of Web page latency, monitor database, Web server, and hardware health.

Keep your websites and Web applications running at warp speed using SolarWinds Web Application Monitoring Pack.

Try it free for 30 days.


Sometimes files are left behind when uninstalling Storage Manager on Linux, and when reinstalling on the same server, a message appears saying it is already installed. To resolve this:



    • Check the /usr/bin subdirectory and verify there are no Storage Manager scripts present. If there are, delete them.
    • If you are installing Storage Manager Server, make sure the opts/Storage_Manager_Server directory is gone. If you are installing the Storage Manager Agent software, make sure the opts/Storage_Manager_Agent directory is gone. If it is present, delete it.
    • Go to etc/init.d and, if you are installing Server, make sure the storage_manager_server file is not there. For Storage Manager Agent, make sure the storage_manager_agent file is not present. If it is, delete it.
    • Proceed with reinstalling Storage Manager Agent or Server software.
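The checks above can be scripted as a dry run that only reports what still exists, leaving the actual deletion to you. This is an illustrative sketch; the paths mirror the article's wording (server install shown), so adjust them for your own layout before trusting the output:

```python
import os

# Leftover paths named in the steps above (server install; for an agent
# install, swap "Server"/"server" for "Agent"/"agent").
LEFTOVER_PATHS = [
    "opts/Storage_Manager_Server",
    "etc/init.d/storage_manager_server",
]

def find_leftovers(root="/", scripts_dir="usr/bin"):
    """Dry run: list paths that still exist and would block reinstallation."""
    found = [os.path.join(root, p) for p in LEFTOVER_PATHS
             if os.path.exists(os.path.join(root, p))]
    scripts = os.path.join(root, scripts_dir)
    if os.path.isdir(scripts):
        found.extend(os.path.join(scripts, f) for f in os.listdir(scripts)
                     if f.lower().startswith("storage"))
    return found
```

Run it, review the list it prints, and delete only what you have verified belongs to the old installation.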

Loopholes in the configuration of network devices like firewalls and routers pave the way for attackers to gain access into your network. Once in, they can easily lay hands on confidential data, alter important information, or even gain control over a machine and pose as a trusted system on the network. How can you protect your network from being exposed?

One way to avoid such gaps and keep attackers out is to follow and maintain a controlled configuration and change process environment.

A Typical Problem Scenario

In a network comprising 200-300 devices, the network admin has to invest a tremendous amount of time and effort to maintain end-to-end device configurations and changes. For example, to perform a configuration cleanup task for all firewalls in the network, the administrator has to:

  1. Take an inventory of all devices in the network
  2. Decide which devices need a change and which will be impacted
  3. Manually obtain the configuration of each device individually
  4. Analyze each configuration and try to make sense of all the ACLs
  5. Make the changes and pray that nothing breaks!
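Step 4 in particular is tedious by hand. Even a tiny script can help make sense of the ACLs in a saved configuration. The sketch below groups Cisco-style `access-list` lines by ACL number; it is a toy illustration only (real configs also have named ACLs, object groups, and other forms), and the sample config is made up:

```python
def extract_acls(config: str) -> dict:
    """Group Cisco-style 'access-list' lines by ACL number/name."""
    acls = {}
    for line in config.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == "access-list":
            acls.setdefault(parts[1], []).append(" ".join(parts[2:]))
    return acls

sample = """\
hostname fw-edge-01
access-list 101 permit tcp any host 10.0.0.5 eq 443
access-list 101 deny ip any any
access-list 120 permit udp any any eq 161
"""
rules = extract_acls(sample)   # {"101": [...2 rules...], "120": [...1 rule...]}
```

Scaling this to hundreds of devices (fetching configs, diffing them, validating changes) is exactly the kind of work a configuration management product automates.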

A particular network may have thousands of configurations, and performing the above-mentioned tasks for every device in the network can take forever. There is also the possibility (and high likelihood) of human error. In a network with changes occurring continuously, administrators can end up spending their entire day on configuration and maintenance tasks.

If all these tasks can be automated, why not do so and utilize valuable time and resources for other pressing needs?

The Solution

The SolarWinds Network Guardian Bundle, a powerful combo of SolarWinds Network Configuration Manager (NCM) and Firewall Security Manager (FSM), is the way out of time-intensive, error-prone, headache-ridden manual change processes that can leave dangerous security gaps in your network.


With the SolarWinds Network Guardian Bundle, you get a solution that guards the integrity of your network configuration by automating the change management process, end-to-end. The result is streamlined administrative tasks, increased operational efficiency, and reduced risk of human error and unauthorized changes—saving you valuable time and effort, while helping ensure the security and compliance of your network.

A usable, affordable and simple solution for your configuration and change management needs. Download a 30-day trial version today!

We all know that patch management is a vital part of an organization's security management process, not just to stay abreast of bug fixes and new features but also to bring down the vulnerabilities. All said and done, there are two areas that remain a challenge:

  • Getting updates regularly can be difficult and complex when it comes to 3rd-party products.
  • Distributing and deploying patches across multiple machines on your network when they are shut down.


SolarWinds Patch Manager extends your existing WSUS or SCCM environment to help you efficiently deploy and automate 3rd-party patches.



Why is offline patching critical?

Typically, end users tend to postpone patch updates so they don’t disturb their busy schedules, which results in patches being left unapplied. There can also be situations where some of your enterprise workstations are shut down or virtual machines are powered off, leaving them unpatched. In such situations, offline patching comes in handy.

SolarWinds Patch Manager makes your offline patching easy!!

When systems are shut down, it becomes a very time-consuming task to physically turn on offline systems or dormant virtual machines. Also, if your organization uses WSUS™, end users with administrative privileges can cancel the update. SolarWinds Patch Manager uses Wake-On LAN (WOL) to make your life easy when it comes to patching, as it allows you to power systems on to apply the patch.

Here’s how Patch Manager makes offline patching easy:

  • It allows you to remotely power on WOL-enabled systems using a UDP broadcast on the local subnet.
  • Patch Manager provides WOL functionality in conjunction with other scheduled tasks, so computers and virtual machines that are not powered on can be woken and patched.
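Under the hood, a WOL request is just a UDP broadcast of a "magic packet": 6 bytes of 0xFF followed by the target's MAC address repeated 16 times (102 bytes total). Here is a minimal sketch of that mechanism in Python, purely for illustration (this is not Patch Manager's code, and the MAC address is made up):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    assert len(mac_bytes) == 6, "MAC must be 6 bytes"
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Send the packet as a UDP broadcast on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("00:1A:2B:3C:4D:5E")   # 102-byte payload
```

The target's NIC must have WOL enabled in firmware, and the broadcast has to reach the target's subnet, which is why tools like Patch Manager need the subnet's broadcast address and the MAC addresses discovered up front.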


4 Simple steps to make WOL work for you:


  #1 After running the Patch Manager discovery task to populate the target systems’ MAC addresses in the Patch Manager console, you can open the Wake-On LAN console by simply selecting the “Wake-on LAN” Configuration Management task from the Actions menu on the right side of the console.





#2 Enter the Broadcast Address for the subnet containing the managed computers you wish to wake up. You will also need to enter the ‘UDP port’ for the WOL request and the ‘Secure on Password’ (if you have configured your managed systems to require one).

#3 Once you do that and click ‘OK’, the ‘Task Options Wizard’ is launched, which will first prompt you to select the managed systems you want to wake up.


#4 Clicking ‘Next’ will launch the scheduling dialog, where you can specify how long this task should run. You can also set up a recurring WOL task.


No productivity loss, hassle free patching!!



SolarWinds is going to BSides:

Don’t forget to visit SolarWinds at BSides Las Vegas 2013: July 31st to August 1st! We will have a table in the main chill-out space. See you there!

As IPv6 migration picks up across enterprise networks, let’s take a look at some operational best practices for migrating to and subnetting IPv6 addresses.


IPv6 Allocation

The Internet Assigned Numbers Authority assigns IPv6 addresses in large blocks to the Regional Internet Registries (RIRs). These registries include the Asia Pacific Internet Community (APNIC), American Registry for Internet Numbers (ARIN), Réseaux IP Européens (RIPE), and others. The RIRs allocate addresses to internet service providers (ISPs), whose default allocation from the registry is a /32. The ISPs, in turn, allocate a /48 to customers. The allocation process for IPv6 provides far more bits than the IPv4 address space because the IPv6 address space is 128 bits (2^128) in size, containing 340,282,366,920,938,463,463,374,607,431,768,211,456 IPv6 addresses.


The best practice is to get at least a /48 prefix from an ISP, which gives you an address space of 2^80 addresses to work with. The address has 128 bits, of which the first 48 can’t be changed. This means you get 80 bits to use (128 – 48 = 80 bits).
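The arithmetic is easy to confirm with Python's standard `ipaddress` module (shown purely as an illustration; the prefix below is the reserved documentation prefix, not a real allocation):

```python
import ipaddress

site = ipaddress.ip_network("2001:db8:abcd::/48")   # documentation prefix
host_bits = site.max_prefixlen - site.prefixlen     # 128 - 48 = 80
addresses = site.num_addresses                      # 2 ** 80
```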

IPv6 Subnet Masking

IPv6 subnet masking is similar to IPv4, but there are 2 key differences: how an IPv6 mask looks and what gets masked.

What Does an IPv6 Mask Look Like?

IPv6 uses 128 binary digits for each IP address, as opposed to IPv4's 32 binary digits. The 128 binary digits are divided into 16-bit words. Since using IPv4’s octet notation to represent 128 bits would be difficult, we use a 16-digit hexadecimal numbering system instead.


Each 4-digit hex word represents 16 bits (4 characters at 4 bits each), as in the following examples:

  • Bin 0000000000000000 = Hex 0000 (or just 0)
  • Bin 1111111111111111 = Hex FFFF
  • Bin 1101010011011011 = Hex D4DB


So, an IPv6 128-bit binary address is represented by 8 hex words separated by colons:
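The word-by-word conversion can be demonstrated with a short snippet (illustrative Python; the address built here is from the documentation prefix):

```python
# Eight 16-bit binary words, converted to 4-digit hex and joined with colons.
words_bin = ["0010000000000001", "0000110110111000",
             "0000000000000000", "0000000000000000",
             "0000000000000000", "0000000000000000",
             "0000000000000000", "0000000000000001"]
words_hex = [format(int(w, 2), "04X") for w in words_bin]
address = ":".join(words_hex)   # 2001:0DB8:0000:0000:0000:0000:0000:0001
```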



What Gets Masked?

In IPv4, every IP address comes with a corresponding subnet mask. IPv6 also uses subnets, but the subnet ID is built into the address.

  • The first 48 bits are the network prefix – used for Internet routing.
  • The next 16 bits (49th to 64th) are the subnet ID used for defining subnets.
  • The last 64 bits (65th to 128th) are the interface identifier, which is also known as the Interface ID or the Device ID.


For example, if you want to break your network into 64 subnets, the binary mask just for the subnetting range would be 1111110000000000, which translates to a hex value of FC00.

The full 128-bit hex mask would be FFFF:FFFF:FFFF:FC00:0:0:0:0
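You can verify this mask arithmetic in a couple of lines (illustrative Python, not part of the original example):

```python
import ipaddress

# 64 subnets need 6 bits: the 16-bit subnet word is six 1s then ten 0s.
subnet_word = int("1111110000000000", 2)
hex_word = format(subnet_word, "04X")               # "FC00"

# The full mask is 48 network bits + 6 subnet bits = a /54.
full_mask = ipaddress.ip_network("::/54").netmask   # ffff:ffff:ffff:fc00::
```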

IPv6 Subnetting Practices

  • Every individual network segment requires at least one /64 prefix: The IPv6 equivalent of an IPv4 /24 subnet is a /64. You should break your network segments into this space. A /64 is an IPv6 subnet that has 64 network bits and 64 host bits. Regardless of the number of hosts on an individual LAN or WAN segment, every multi-access network requires at least one /64 prefix.
  • Subnet only on nibble boundaries: Each character in an IPv6 address represents 4 bits (a nibble). A nibble boundary is a network mask that aligns on a 4-bit boundary. This makes the IP address sequence easier to understand and follow, reducing incorrect configurations. Since the smallest subnet mask used for multi-access networks is a /64, you can count down in multiples of 4 to set the nibble boundary masks (e.g. 64, 60, 56, 52, …)
  • Implement a hierarchical addressing plan to allow for aggregation: In a 3-level hierarchy enterprise network having a site, region, and Autonomous System (AS), each site should receive one /48. This is to accommodate additional aggregation points as the network grows.
  • Consider grouping your IP addresses by type of usage: In a large IP address space, grouping IP addresses by usage helps track them for allocation and management. You can group your space by customer space, internal space, infrastructure space, etc.


As we try to understand the concepts of IPv6, we should also start preparing for the migration. You can read this thwack post to get some tips on migrating from IPv4 to IPv6.


If you are looking for an IPAM solution to help you through the process, learn how SolarWinds IP Address Manager can benefit you. IPv6 is there already. Are you?

After we have collectively congratulated ourselves for the D-Wave folks producing what may be the first quantum processor, capable of solving computing problems 10,000 times faster than conventional computers, we might want to put some serious brain power into thinking about what "solving computing problems 10,000 times faster" means for security, especially during the transition period between conventional computing and the widespread adoption of quantum computing.


If the initial offerings of the D-Wave processor can solve problems at 10,000 times the speed of conventional computers, and we’re not even 100% sure it is a quantum processor (hello, Schrödinger), what impact are these new, quantum-like processors going to have on data security?


Even if quantum computing takes 10 or more years to be viable for businesses or governments, IT pros will still have to address security concerns around hackers with quantum computers. Even if you have no plans or desire to migrate to quantum computing or leverage it in your organization, you will likely need to either be involved in quantum security measures or be aware enough of them to choose an appropriate third-party product. If you are part of an organization that collects private data, financial data, or medical data, you have sensitive information that you are obligated (usually by law) to protect. That is pretty much every organization that can afford an IT department.


As soon as criminals have access to quantum computers, conventional security wisdom and policies will no longer be viable because criminals will be able to breach security faster by multiple orders of magnitude. To combat that, the rest of us will basically be forced to invest in quantum security measures.


So, what will the future look like? Are security firms going to be the new IT rock stars, like Google, Apple, and Microsoft? Will new computers ship with biometric security devices? Will employers hand out smart cards?


While it is highly unlikely that we can know the future (going by the principles of quantum mechanics), the security field is definitely going to be impacted by this new technology.

Data Loss Prevention (DLP) is a computer security term referring to systems that enable organizations to reduce the corporate risk of the unintentional disclosure (or data loss) of confidential information.


How Does Data Loss Happen?

Data loss happens when security is compromised and sensitive corporate data is accessed. Technically, this can be termed the unauthorized, intentional or unintentional exfiltration of confidential information from a secure network. Other terms for this include unintentional information disclosure, data leak, and data spill.



We can classify secure data into 3 main categories:

  1. Data in Motion (DiM) – Any data that is moving through the network to the outside via the Internet
  2. Data at Rest (DaR) – Data that resides in file systems, databases, and other storage media
  3. Data at the Endpoint/Data in Use (DiU) – Data at the endpoints of the network (e.g. data on USB devices, external drives, MP3 players, laptops, and other highly mobile devices)


Loss or leakage of any of this data can be termed data loss. It can happen due to illegal cyber-crime practices such as hacking, malware infection, and physical attacks, and even through employee privilege misuse.





Data Loss Prevention (DLP)

Organizations are fighting hard to protect data from breach and leakage at all stages, whether it be in motion, at rest, or in use. Fortunately, DLP has evolved to address data protection at each one of these stages.

  • Network DLP (for DiM): At this stage a DLP tool that’s installed at network egress points analyzes network traffic to detect sensitive data that is being sent in violation of information security policies.
  • File-Level DLP (for DaR): At this stage DLP software identifies the sensitive files and then embeds the information security policy within the file, so that it travels with it whether the whole file or only part of it is sent, copied or downloaded.
  • Endpoint DLP (for DiU): At this stage a DLP system runs on end-user workstations or servers in the organization and is used to prevent unauthorized access to the data stored on hard drives, USBs, and external mass storage devices.

IT Security Survey: 2013

In an IT security survey conducted by SolarWinds earlier in 2013, we found that data loss was the major priority for IT security teams. More details on the survey can be found below.



SIEM the Seer

Security Information & Event Management (SIEM) systems are a good solution to detect, block, and prevent data loss in your network. SIEM tools capture log data from disparate sources across the IT infrastructure and correlate it for meaningful insight and data loss intelligence.



SolarWinds Log & Event Manager (LEM) is a full-function SIEM solution that automates real-time preventive mechanisms to counter data loss and alerts on suspicious network and user behavior patterns.



With LEM, you can:

  • Detect and disable unauthorized USB device connections on endpoints and prevent data loss
  • Be alerted on unauthorized system logon attempts and unscheduled server reboots, as these may be symptoms of malware attacks
  • Shut down and even disconnect infected computers from the network and avoid bot attacks




Welcome to the SolarWinds blog series “The Best Practices for Optimizing Alerts”. This is the second part of a 3-part series where you can learn more about best practices for optimizing the network alerts in your IT environment.

In the previous blog, we discussed threshold-based alerting, its significance in faster troubleshooting of problematic network situations, and how you can keep your problems at bay. In this blog, we will dive deep into network-based alerting and explore its uses in maintaining continuous network uptime and service availability.

Today’s world expects businesses to run 24x7, and it has never been easy to support business-critical processes and applications while managing user expectations. The backbone of such businesses is the IT network, which helps people run all their processes without service interruption. If anything disrupts the network, the first person to blame is the network administrator; to avoid this situation, administrators constantly keep themselves updated on what is happening in their network.


What is Network-based Alerting?

Network-based alerts are set up by administrators to be notified of connection issues between networked devices in an enterprise, such as packet loss, unreachable interfaces, and high latency. With this information, network administrators have actionable knowledge to identify affected devices and avoid downtime in critical applications.


Importance of Network-based Alerting

Network-based alerting plays a big role in enabling network administrators not only to maintain the network, but also to find and resolve issues quickly. It also helps them keep track of events and errors that have occurred in their network.

In addition, it helps you reduce troubleshooting time by finding issues at the device level. For example, if a business-critical application running at one location accesses a database on a server at another location and there is high latency between them, the network administrator will be alerted to a probable cause of the application slowdown. Similarly, in a geographically distributed network environment where users access information from different locations, network-based alerts help you find events behind errors, discards, or packet loss between routers or servers. You can also be alerted on hardware issues such as CPU or memory utilization exceeding a pre-defined limit, such as 90%. This helps network administrators find the exact device causing the network problem.


Ensuring continuous Network Availability through Intelligent Alerts

Network-based alerting helps you troubleshoot your network faster and proactively avoid future issues. Intelligent network alerting not only keeps you updated on the errors or events that happened, but also allows you to define device dependencies to ensure you don’t receive unnecessary alerts. You can configure alerts for correlated events and escalate them through a variety of delivery methods. By monitoring your network, you can find and correct issues before degradation or availability problems affect network performance.

Learn more about Threshold based Alerting and its significance in troubleshooting network issues.

Simply put, the Uniform Resource Identifier (URI) cache stores information about a website address, including the set of characters used to identify the website/URL. When it comes to monitoring Internet Information Server (IIS) performance, the URI cache is a critical component to monitor along with hardware, website availability, Web transaction responsiveness, and others.



Relationship between Caching and IIS

Caching speeds up request processing and improves response times for frequently requested data from the IIS Web server. Without caching, IIS has to read the information from the websites every time the same request is made, which also delays the authentication process.



Importance of Monitoring IIS Counters

There are several counters on an IIS server that you can monitor to improve the server's performance. If you don't monitor these counters, you won't know which problem is occurring: whether users are having trouble connecting to the server, whether there is a hardware fault, or whether a web application is causing the issue. These metrics provide critical information about user-requested data. More specifically, you can measure the quantity of data and its attributes, such as size, duration, and the rate at which data is requested and received.

IIS has the following performance counters:

  • Web Service Counters
  • Web Service Cache Counters
  • Port Monitoring Counters
  • System Monitoring Counters


URI Performance Counters

It is important to monitor the URI cache counters because they provide the performance metrics needed to optimize web servers, web applications, and websites. The URI cache counters are part of the Web Service Cache counters and process information about the data being requested from the web server. They are designed to monitor server performance only; you cannot configure them to monitor individual websites. A server management tool typically monitors server performance and the following URI cache counters within the IIS environment:

  • URI Cache Flushes Counter: Counts the number of URI cache flushes since the server started. A cached file is flushed if its response takes longer than the specified threshold or if the file is edited or modified.
  • URI Cache Hits Counter: Counts the number of successful lookups in the URI cache since the server started. This value should increase steadily; if it remains very low, investigate why requests are not finding cached responses.
  • URI Cache Hits Percent Counter: Calculates the ratio of URI cache hits to the total number of cache requests. For example, if approximately one-fifth of the requests to your sites are for cacheable content, the value of this counter should be close to 20%.
  • URI Cache Misses Counter: Counts the number of unsuccessful lookups in the URI cache. If misses are high and hits are low, investigate why responses are not being cached.
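The hits-percent counter is just arithmetic over the hits and misses counters. Here is a quick sketch; the function name is ours, and real counter values would come from Windows performance-monitoring APIs rather than hard-coded numbers.

```python
def uri_cache_hit_percent(hits, misses):
    """Ratio of URI cache hits to total cache requests, as a percentage."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

# If roughly one-fifth of requests are for cacheable content,
# the hit percentage should sit near 20%.
print(uri_cache_hit_percent(200, 800))  # 20.0
```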


In general, IIS performance monitoring ensures:

  • Performance of your web servers, websites, and web applications is optimized
  • Server hardware is monitored for failures
  • Alerts arrive when problems are detected
  • You remain vigilant about the overall IIS server performance
  • You understand how your web servers are performing


IIS - URI Blog.jpg


SolarWinds server monitoring software comprehensively monitors your IIS web servers and their components. It provides insight into what to monitor on your servers and leverages Microsoft® recommendations for component threshold values.


If you haven’t experienced Server & Application Monitor, you can always try the fully functional, free 30-day trial.

Scott Adams is no stranger to controversy, and his latest strips may be swinging at offsite storage services - a.k.a. "the cloud."


Read Thursday's full Dilbert cartoon here: http://www.dilbert.com/2013-07-18/



Read Friday's full Dilbert cartoon here: http://www.dilbert.com/2013-07-19


If it's in Dilbert, it's likely that companies are wising up to the rising costs and risk of lock-in from cloud services, like those providing offsite storage or file sharing.


What Are the Alternatives?


Savvy sysadmins know there's very little cloud services can do that can't be done better, and often cheaper, by well-managed datacenters and disaster recovery sites.


As one anonymous poster noted on Slashdot this morning:


Your Datacenter + Your HA/DR site = You control where data is replicated.

Your data + Someone's cheap cloud service = You not having a damn clue when/where your data is replicated.


How Can SolarWinds Help?


SolarWinds applications such as Server & Application Monitor and Log & Event Manager can help you monitor and control your infrastructure better than any cloud service. Other SolarWinds applications such as Serv-U MFT Server can provide storage accessible from anywhere and secure file sharing. (For more on that, join our file sharing webinar on Thursday.)

Picking up the discussion from where we left in Part 1 – Storage, let’s move on to explore how vSphere 5.1 and Hyper-V 2012 handle virtualization memory management.



Memory Management in Hyper-V 2012

Hyper-V 2012 offers some distinct memory management options that help avoid memory over-commitment and enable efficient virtual machine (VM) restart operations.



  • Dynamic Memory treats memory as a shared resource and allocates it to the VMs running on Hyper-V hosts. This technique provisions VMs with only the memory they need. For example, if a VM is not running at peak capacity and doesn't need all of the memory committed to it, Hyper-V can dynamically reallocate that memory to VMs that are carrying more load and are in real need of the resource. This avoids over-committing memory to VMs.
  • Smart Paging is another memory-handling mechanism in Hyper-V 2012, used for reliable VM restarts. Sometimes when a VM is rebooted there is no available physical memory and no option to reclaim any from other VMs on the host. Smart Paging enables the temporary use of storage as a memory cache so the VM can be given enough RAM to start. Once started, the workload falls back to its normal operational needs and stops using the relatively slow disk as a temporary paging file. This helps maintain high VM density without worrying about boot-time RAM needs.



Memory Management in vSphere 5.1

VMware has perfected some powerful techniques by which RAM can be managed and optimized on a vSphere host to provide additional per-host scalability and keep a host operating at peak levels. vSphere 5.1 offers four memory management techniques:


  • Transparent Page Sharing: This is a de-duplication memory management technique that eliminates redundant copies of memory pages across the VMs running on a host. vSphere discovers redundancies by assigning hash values to pages and then comparing candidate matches bit-by-bit. Once a redundancy is found, the hypervisor shares the single page with all the VMs on the host via a pointer, freeing up memory for new pages.
  • Memory Ballooning: This technique frees unused VM memory when the host is facing a memory shortage. A balloon driver installed in each VM transfers memory from the VM back to the host so it can be reassigned to other VMs that need it.
  • Memory compression: When there is memory contention on the host, right before swapping a page to the disk, the hypervisor reclaims memory by compressing the pages that need to be swapped out. This improves application performance when the host is under heavy memory pressure.
  • Hypervisor Swapping: When ballooning and page sharing are NOT sufficient to reclaim memory back to the host, the ESX/ESXi host swaps memory pages out to physical disk in order to reclaim memory that is needed elsewhere. The hypervisor creates a separate .vswp swap file in the home directory and swaps out guest physical memory to this file. This frees host physical memory for other VMs.
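As a rough illustration of the page-sharing idea, here is a simplified Python sketch. It is not VMware code: real Transparent Page Sharing also verifies candidate matches bit-by-bit before sharing, and all names here are ours.

```python
import hashlib

def share_pages(vm_pages):
    """Deduplicate identical memory pages across VMs, as in transparent
    page sharing: pages are matched by hash, and VMs then share one copy
    through a pointer to it."""
    store = {}    # hash -> the single shared physical copy
    ptrs = {}     # (vm, page_number) -> hash (the "pointer")
    for vm, pages in vm_pages.items():
        for i, page in enumerate(pages):
            h = hashlib.sha256(page).hexdigest()
            store.setdefault(h, page)   # keep only one physical copy
            ptrs[(vm, i)] = h
    return store, ptrs

# Two VMs each hold an identical zero-filled page plus one unique page.
vms = {"vm1": [b"zeros" * 100, b"app-a"],
       "vm2": [b"zeros" * 100, b"app-b"]}
store, ptrs = share_pages(vms)
print(len(store))  # 3 -- four guest pages, but only three physical copies
```

The freed copy of the duplicated page is exactly the memory the hypervisor gets back for new pages.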


One key difference between the two hypervisors is that Hyper-V's Dynamic Memory allocates memory on demand and in real time, whereas vSphere pre-allocates memory and applies the memory management techniques above to reclaim unused memory.



To learn more about how vSphere 5.1 and Hyper-V 2012 compare,



Read this White Paper:

vsphere vs hyper-v white paper.png

Watch this Webcast:

vsphere vs hyper-v webinar.png




If you are interested in virtualization performance monitoring, learn about VMware monitoring and Hyper-V monitoring. In the next part of the blog series, we’ll discuss “CPU Scheduling Control”.


Other parts of the vSphere 5.1 vs. Hyper-V 2012 series:

"60% of availability and performance errors are the result of misconfigurations"


Enterprise Management Associates (EMA) reports that the main cause of network downtime and performance errors is misconfigurations or badly executed network changes.

End users today expect the network to be available at all times. The network operations team is under constant pressure to maintain the highest levels of uptime and performance, even while carrying out software or hardware deployments and changes.

A typical network involves various types of devices from multiple vendors, and trying to manually manage each device individually is a daunting, error-prone task. Each bad configuration can lead to costly downtime. Add to this the amount of time and effort that goes into stabilizing the network after a high-impact event. The end result is a slowdown in network operations and an unreliable IT infrastructure, all of which have a negative impact on the business. In short, network downtime caused by configuration changes hurts the overall performance and productivity of the business.

So, what can you do to recognize configuration issues, take timely action, and stay ahead of network problems?

Five Simple Steps to Ensure Control of Configuration Changes in Your Network

  1. Maintain a last known good configuration state and perform regular backups
  2. Record all changes: when they were made, by whom, and the business justification
  3. Test changes for impact before applying them to production devices
  4. Be notified when an unplanned or unauthorized change is made
  5. Regularly check whether configurations comply with standards and best practices
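Steps 1 and 4 above can be sketched in a few lines: keep a last-known-good backup and diff the running configuration against it to detect changes. This is an illustrative Python sketch, not how any particular product implements it.

```python
import difflib

def config_drift(backup, running):
    """Return unified-diff lines showing how the running config has
    drifted from the last known good backup (detecting changes that
    should trigger a notification)."""
    return list(difflib.unified_diff(
        backup.splitlines(), running.splitlines(),
        "last-known-good", "running", lineterm=""))

golden = "hostname core-sw1\nsnmp-server community public\n"
live   = "hostname core-sw1\nsnmp-server community s3cret\n"
drift = config_drift(golden, live)
print(any(l.startswith("+snmp") for l in drift))  # True -- change detected
```

An empty diff means the device still matches its last known good state; any output is a change worth recording and alerting on.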

Following a systematized and automated methodology to handle network change processes aids in eliminating time-consuming, tedious, and repetitive configuration tasks, and of course, the risk of human error.

Business Value-add of Network Change and Configuration Management

  • Reduce network downtime - Monitor network configuration changes in real-time and be automatically alerted when changes occur. Proactively identify potential network issues and quickly fix them before they cause major downtime.
  • Increase operational efficiency - Eliminate inefficient manual processes and meet increased demand for service requests through automation and consolidation. Manage your change process with change request approvals and verification prior to execution.
  • Improve network security and stability - Detect and report on policy violations to ensure network compliance with federal regulations and internal standards. Identify devices that do not comply with configuration policies or those that pose a security risk.  Protect your network from unauthorized, unplanned, or erroneous configuration changes.


Current-day change management capabilities can help reduce the risk of network downtime by providing you with the necessary tools to stay in control of changes occurring in your network. SolarWinds Network Configuration Manager (NCM) is one such solution that automates your network configuration and change management activities. Stay confident knowing that you are in charge and can effectively tackle network changes and roll out new technologies.

Join SolarWinds Sales Engineer Rob Johnson on July 25th at 1pm for an exclusive presentation for LEM Customers on LEM’s new Workstation Edition Features. There will be a live demo on expanding your compliance and log management to desktops. This live event will allow you to ask questions and get your answers right away! Register now.

This series relates technology trends and their implications (face recognition technology, Big Data storage and findability, Wearable Computing and Cyber-citizenship, Surveillance) to perennial IT concerns and challenges. The film Minority Report has served as a helpful point of reference in making the relevant connections.


It’s been a few weeks since the British news organization The Guardian began publishing stories about and sourced through a Booz Allen Hamilton contractor named Edward Snowden.


The story has many aspects and implications. In this case I want to simply point out Booz Allen Hamilton’s obviously inadequate IT policies and practices.


Booz Allen Hamilton is no novice in working within the US intelligence bureaucracy; it has a long history of securing very lucrative contracts since National Security Agency director John Poindexter made the decision to have private technology companies modernize that agency’s computing infrastructure.


Yet a relatively junior contract IT analyst, Snowden, was able to use his routine access to BAH computing resources to download a trove of classified documents onto a thumb drive. Apparently, the data related to the surveillance of all civilian US telecommunications traffic is so easily available to BAH employees that Snowden could take what he wanted without raising any flags.


In the sense that he apparently violated his employment contract, Edward Snowden might be called a rogue IT professional. But Booz Allen Hamilton made it very easy for him. The obvious conclusion is that BAH merely excels at getting US government contracts; having a credible program for ensuring that their customer’s information remains secure is an afterthought BAH didn’t have until now.


No Booz(e)

There are many ways to manage the security of the data within your network. One simple but very effective way is to encrypt all data passing through your network and make access to it role-based. And that includes access to data flowing through the tools that monitor and manage your network systems. Network monitoring and management products like SolarWinds Network Performance Monitor and SolarWinds Network Configuration Manager support AES 256 data encryption and impose a role hierarchy that controls which accounts within your IT systems can view and manipulate data.

These days, security breaches, vulnerabilities, and threats seem to be in the news more frequently than ever before. The foremost concern that comes to mind is what you can do to protect your network from data breaches. Attackers and hackers don't seem to take a break, so it becomes all the more important to stay hawk-eyed on your network rather than react after the damage is done. Every device and every system on your network generates tons of logs. Having a centralized solution in place to monitor, analyze, and respond to them is the way to go.



Security Information and Event Management (SIEM) has emerged and come a long way, playing a major part in your security strategy. It spans regulatory compliance, log management and analysis, troubleshooting, and forensic analysis. Typically, it starts by classifying your IT assets based on the information they contain, then collating relevant information to provide meaningful context.


So the core aspects that you need to consider when you are choosing a SIEM solution are:

  • Log Collection & Correlation
  • Analysis & Forensics
  • Response & Compliance


Most organizations find it tough to get the right combination going when it comes to log management and SIEM. Even before choosing a vendor, you need to have a SIEM strategy in place. If properly planned and executed, good log management and SIEM software, combined with process automation, can offer an excellent ROI.



To be really effective in network defense, and not just from a forensic analysis standpoint, you need to make sure that the security event data is analyzed and correlated in real time. Also, you need to capture threats in real time, correlate them in-memory and respond to the attacks in a timely manner.



Not many organizations think in terms of correlation rules because constructing them has been an obstacle. For example, you might be familiar with your network policies and could even describe the business rules and objectives, but the challenge is bridging those objectives to the construction of correlation rules.
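To make the idea concrete, here is a hypothetical correlation rule sketched in Python: flag any user with three or more failed logins inside a sliding 60-second window. The function and field names are ours, not those of any SIEM product.

```python
from collections import defaultdict

def correlate_failed_logins(events, threshold=3, window=60):
    """Flag users who rack up `threshold` or more failed logins inside a
    sliding `window` (in seconds) -- a classic SIEM correlation rule."""
    by_user = defaultdict(list)
    alerts = []
    for ts, user, outcome in sorted(events):
        if outcome != "failure":
            continue
        by_user[user].append(ts)
        # keep only the attempts that fall inside the sliding window
        by_user[user] = recent = [t for t in by_user[user] if ts - t <= window]
        if len(recent) >= threshold:
            alerts.append((user, ts))
    return alerts

logs = [(10, "bob", "failure"), (25, "bob", "failure"),
        (40, "bob", "failure"), (500, "alice", "failure")]
print(correlate_failed_logins(logs))  # [('bob', 40)]
```

The business objective ("catch brute-force attempts") maps directly onto the threshold and window parameters, which is exactly the bridge a correlation rule provides.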



We understand this, and we insist that correlation and real-time log analysis are the heart of SIEM technology. In the following blogs, we will discuss each of these areas in detail:

  • Infrastructure surveillance and threat intelligence from log data
  • Log correlation, threat response and remediation
  • Regulatory compliance for IT security


Keep an eye on this space; there is more on the way!

In a Product Blog article last August I talked about why you might want to deploy additional Automation Role servers when using SolarWinds Patch Manager. In this article I’m going to describe exactly how to do that.



The first step is to install the server. Installation of an additional Automation Role server is very similar to how you installed the original Patch Manager Primary Application Server (PAS), with only a couple of minor variations.


Launch the Patch Manager installer, and on the Express/Custom screen, select Custom.


Proceed through the installer screens as you did for the original server. When you arrive at the database selection screen, select Use a LOCAL instance of SQL Server Express Edition. Each Patch Manager server requires its own instance of SQL Server, and there’s no need to use anything except SQL Server Express Edition for an Automation Role server.


When you arrive at the role selection screen, select the “Automation Server” option.


When the installation reaches the point where it needs to register the new Automation Role server with the PAS, it will prompt you to provide the name of the PAS (again) as well as the credentials to authenticate with the PAS. The logon account must have membership in the Enterprise Administrators security role. Typically the local administrator account of the PAS, or a domain administrator account will serve this purpose.


When the installation is completed, the final screen will offer you the opportunity to launch a local console and connect to the PAS.


You can continue configuring the Automation Role server from this console connection on the Automation Role server, or you can use another console session that connects to the PAS.



To begin configuration of the Automation Role server, connect a console session to the PAS.


Navigate to the Patch Manager System Configuration -> Patch Manager Servers node. You should see your new Automation Role server listed in the details pane of this node.


Note that the Automation Role server is not yet assigned to a Management Group, and it displays an icon with a red exclamation mark indicating that the configuration is not yet complete.


Launch the Patch Manager Server Wizard utility from the Action Pane.


Select the option “Edit an existing Patch Manager Server’s configuration settings”. Click on Next.


Select the new Automation Role server from the “Server Name:” dropdown menu, and click on Resolve if the remainder of the dialog does not automatically populate with the server’s attributes. Click on Next.


Assign this Automation Role server to a Management Group. In most instances, there will only be one management group, the default group “Managed Enterprise”; however, if you have multiple management groups defined, the Automation Role server must be assigned to one of them. It will manage tasks only for members of that management group. Select the correct management group from the “Management Group:” dropdown menu.


The “Server Role:” value should be automatically set to Automation. This option is used to add or remove roles from a Patch Manager server after deployment and registration. “TCP/IP Port:” defaults to 4092 and should not be changed.


Set the option “Include this Patch Manager server in the default round-robin pool” depending on whether the Automation Role server is being deployed for a specific purpose or simply as an additional server for load-sharing. If the option is disabled, only tasks matched by an Automation Server Routing Rule (ASRR) will be assigned to this Automation Role server. If the option is enabled, any task that does not match an existing ASRR may be assigned to this Automation Role server.
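The routing behavior described above amounts to: a task matching an ASRR goes to that rule's server, and anything else falls back to the round-robin pool. A simplified sketch, illustrative only and not Patch Manager code:

```python
import itertools

def make_router(rules, pool):
    """Pick an Automation Role server for each task: a matching routing
    rule wins; otherwise fall back to the round-robin pool.
    `rules` maps a target prefix (e.g. a subnet string) to a server."""
    rr = itertools.cycle(pool)          # default round-robin pool
    def route(target):
        for prefix, server in rules.items():
            if target.startswith(prefix):
                return server           # ASRR match: dedicated server
        return next(rr)                 # no rule matched: round-robin
    return route

route = make_router({"10.1.": "branch-auto1"}, ["auto1", "auto2"])
print(route("10.1.4.7"))   # branch-auto1 -- matched the routing rule
print(route("10.2.9.9"))   # auto1 -- round-robin fallback
print(route("10.2.9.10"))  # auto2
```

Disabling the round-robin option for a server corresponds to leaving it out of `pool` while still naming it in `rules`.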


The last set of options is useful when the Automation Role server is being deployed across a bandwidth constrained connection such as a slow WAN link or a site-to-site VPN connection. It allows you to restrict the maximum amount of bandwidth used by the Automation Role server, and you can set the values differently for incoming/outgoing (i.e. download/upload) traffic. Click on Next. You'll then be presented with the configuration summary screen. Review the configuration options and click on Finish.


After clicking the Finish button from the wizard’s summary screen, you will be presented an information dialog reminding you about the service restart requirement for the Patch Manager server.


After the changes are synchronized to the Automation Role server (e.g. management group assignment, round-robin option, and bandwidth restrictions), it is necessary to restart the Data Grid Service (or reboot the Automation Role server).


You should wait approximately five minutes after completing the wizard before initiating the restart. There are three ways you can restart the service:

  • Using the Services MMC.
  • From the command line using “net stop ewdgssvc” and “net start ewdgssvc”.
  • From the Services tab of the Computer Explorer in the Patch Manager console.


The last step is to create any needed Automation Server Routing Rules. ASRRs are not required unless you want to dedicate an Automation Role server to a specific machine or group of machines (by domain, workgroup, organizational unit, or IP subnet). You can also assign a WSUS server to an Automation Role server, but note that this rule only assigns WSUS administration tasks to the Automation Role server; it does not assign computer management tasks, nor clients of the WSUS server.


Navigate to the Management Groups -> Managed Enterprise node (or the node representing the management group that this Automation Role server was assigned to), and select the Automation Server Routing Rules tab.
From here you can create, edit, and delete ASRRs.


In a future article I'll talk in more detail about using ASRRs. In the meantime, for more information, and examples, of the use of ASRRs, please see the Product Blog article Patch Manager Architecture – Deploying Automation Role Servers.

Check out our Head Geeks in SolarWinds Lab Episode #4: Floapalooza, as they discuss the latest flow technologies and walk you through the command-line configuration of JFlow, sFlow, and NetFlow, with tutorials and hands-on demonstrations. They'll also cover how to set up NetFlow on a Cisco device and JFlow on Juniper devices, and provide a setup tutorial for sFlow on an HP device.





Here’s the document referenced in the video.


Great flow analysis gives you in-depth visibility into your network traffic and helps you troubleshoot faster. Watch the episode here and get everything you ever wanted to know about flow technologies of all kinds.


You can learn more about SolarWinds Labs and sign up for calendar reminders for upcoming shows at lab.solarwinds.com.

Perhaps you have wished for a one-stop shop for answers to any questions you may have about our products?

Well, for any customers who would like additional information on the products they have, check out these great links, which provide a compendium of answers.



Drill down into a product page and you'll see many options. For you visual learners, Video Zones have been added to all the pages; these videos include webcasts and YouTube videos that are relevant to using the product. In the upper right corner you'll see the version number for the page, which changes when new content is added.


How-to articles are divided into subcategories.


If there is something else you would like to see added to these pages, let us know:

thwack.com help   solarwindscommunityteam@communications.

Receiving SNMP Alerts from SAM in LEM

Some examples of using the systems together:

  • SAM detects an issue with a service; use LEM to determine whether errors are being generated by that service and when the issue started, then respond by restarting the service and building a rule to detect and notify you of future outages before the service goes down completely.
  • Build rules inside of LEM that combine data from SAM with your event log, device log, and application log data, to combine the power of what's happening in the log with the knowledge that something's gone wrong.
  • Respond to an event detected from SAM in the LEM Console to isolate an issue, quarantine a user or system, restart a service, or kill a process.

To send data from SAM to LEM:

  1. On your LEM appliance, enable SNMP if you don't already have it enabled.
     a. From the virtual/hardware appliance Advanced Configuration console, enter service at the cmc prompt.
     b. At the cmc::scm# prompt, enter enablesnmp.
  2. Configure the SolarWinds tool on your LEM appliance via Manage > Appliances, then click the Gear icon and select Tools.
  3. Select Network Management from the Category list.
  4. Click the Gear icon next to the bottom Network Management line, select New, and create a new SolarWinds Orion tool.
  5. Click Save to save the configuration (the default name/alias that appears in all of the messages from these tools can be changed).
  6. Click the Gear icon and select Start to enable the tool/connector to monitor for incoming data.

For more information on setting up alerts with SAM, check out the "Creating Alerts" section in the SAM User Guide.

With targeted security attacks getting increasingly advanced and persistent, businesses have started investing more in procuring the right security technologies to protect them from these new threats. According to Gartner Inc., the global security technology and services market will reach $67.2 billion in 2013, up 8.7% from 2012. The market is expected to grow even higher to over $86 billion by 2016 as companies continue to expand their security infrastructure.


As the technologies used by hackers become more and more sophisticated, IT teams must counter by implementing stronger defense mechanisms that proactively prevent breach attempts and intrusions.

Some of the challenges IT security teams face today include:

  • Gathering and analyzing enough data to detect advanced attacks
  • Identifying patterns of potential risk across diverse data sources for actionable insight
  • Implementing automated incident response to proactively counter zero-day threats


Increasing Demand for Security Tools

With the increasing demand for implementing more security tools and solutions, organizations have started opening up their wallets more generously—with the intent to get the best bang for their buck. From antivirus software, firewalls, VPNs, IDS/IPS systems to logging and auditing tools, there’s a growing need to strengthen security and stay protected. This comes with the important consideration to choose the best technology and strategy well before it’s deployed.



Defense-in-Depth ‘SIEM’ Strategy

The guiding principle of a defense-in-depth strategy is that it's more difficult for an attacker to breach a complex, multi-layered defense system than to penetrate a single barrier. Here's where Security Information & Event Management (SIEM) can help you build a robust security architecture. SIEM tools provide a holistic view of your organization's IT security by integrating with all of your security systems, network devices, servers, and workstations to collect log data, correlate it in real time to inspect distinct time- and transaction-based events, and flag anomalies.


SIEM provides the additional layer of defense necessary to combat security breaches.



SolarWinds Log & Event Manager (LEM) is a full-function SIEM solution that is affordable and meets your IT security needs and budget. As organizations start to shell out more money on expensive appliances, you can be the smarter IT pro: invest in a robust, affordable solution like LEM and keep your entire IT infrastructure safe and secure!

Imagine a complex IT network environment supported by a help desk system, each working in a silo. Here are the top three challenges in such a scenario:

  1. Manual ticket creation for the network issue(s)
  2. Fragmented/duplicate help desk tickets for similar network issues
  3. Conventional/time-consuming ticket follow-up, status notifications, and closure processes


Well, according to a discussion here, IT pros think that integration with network management systems is a must. This is one of the main reasons over 82% of respondents in a recent SolarWinds help desk survey highlighted the need for help desk tickets to be triggered automatically by network node alerts/failures.


orion alert integration.png


SolarWinds® Web Help Desk™ seamlessly integrates with SolarWinds Network Performance Monitor (NPM) and Network Configuration Manager (NCM), and the aforementioned challenges vanish, just like that! All you have to do is create a connection and define a few rules to accept and create tickets based on parameters like severity, alert field messages, and so on.


filters orion.png


For example, if you create a filter to accept all NPM alerts whose severity is not ‘low’, Web Help Desk will automatically create problem tickets that appear just like any other help desk ticket. It’s as simple as that!
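The filter logic amounts to: accept an alert unless its severity is rejected, and turn each accepted alert into a ticket. A hypothetical sketch of that rule (field names are ours, not Web Help Desk's):

```python
def tickets_from_alerts(alerts, reject_severity=("low",)):
    """Turn monitoring alerts into help desk tickets, skipping any
    severity the filter rejects (mirrors a "severity is not low" rule)."""
    return [{"subject": a["message"], "severity": a["severity"]}
            for a in alerts if a["severity"] not in reject_severity]

alerts = [{"message": "Node core-rtr1 is down", "severity": "critical"},
          {"message": "Interface utilization 62%", "severity": "low"}]
print(len(tickets_from_alerts(alerts)))  # 1 -- only the critical alert becomes a ticket
```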


What’s more, the integration is bi-directional! When a help desk technician takes up a ticket and adds a note to it, Web Help Desk acknowledges the note within the NPM console.


Now you know: help desk integration with network management software is well within reach. SolarWinds Web Help Desk is a simple, streamlined, and powerful help desk solution that makes such integrations easy.


Take a test drive today!

Welcome to SolarWinds blog series “The Best Practices for Optimizing Alerts”. This is the first part of a 3-part series where you can learn about the best practices for optimizing network alerts in your environment. In this blog, we will discuss threshold based alerting and its importance in maintaining ideal network performance.

What are Network Alerts?

A network alert is a notification that informs the network administrator of an abnormal event in the network. Alerting makes network administrators aware of events that directly affect the health of their network so they can resolve issues proactively, before a problem directly affects users. Automated alerts can be delivered through a variety of methods, such as email or text.

Monitoring and managing alerts is an important function for any network administrator who wants to ensure continuous network availability. As networks grow more complex, alerting can become very “noisy,” with network managers bombarded by a stream of notifications about every activity and issue in the network. Intelligent alerting helps you avoid unnecessary notifications so you can focus on those that are most important.

How Threshold based Alerting helps Network Administrators

Threshold based alerting can help reduce the “noise” by limiting notifications to only those that cross a pre-defined threshold. For example, if a network administrator sets an 80% limit on the bandwidth usage of a critical interface, he will be alerted when usage exceeds that limit. Similarly, if CPU utilization exceeds an 80% limit, you will be alerted to the impending situation so you can check which device or application is consuming the CPU.
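The underlying check is simple: compare each metric against its pre-defined limit and raise an alert on any crossing. A minimal illustrative sketch, not how any specific monitoring tool implements it:

```python
def check_thresholds(metrics, limits):
    """Return an alert message for every metric that crosses its limit."""
    return [f"{name} at {value}% exceeds {limits[name]}% threshold"
            for name, value in metrics.items()
            if name in limits and value > limits[name]]

limits  = {"interface_bandwidth": 80, "cpu_utilization": 80}
metrics = {"interface_bandwidth": 91, "cpu_utilization": 42}
print(check_thresholds(metrics, limits))
# ['interface_bandwidth at 91% exceeds 80% threshold']
```

Only the crossing metric fires; the healthy CPU reading stays silent, which is exactly how thresholds cut alert noise.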

Maintaining High Performance through Intelligent Alerts

Threshold based alerting helps you troubleshoot a network situation and keep problems at bay. Almost all network monitoring tools include alerting to notify you when an event occurs.

What you really need, however, is intelligent network alerting.  With intelligent alerting you can define device dependencies to ensure you don’t receive unnecessary alerts, configure alerts for correlated events, and escalate alerts through a variety of delivery methods.


By monitoring your network and optimizing your intelligent alerts, you can find and correct issues before degradation or availability problems affect your network performance.


Last week I interviewed Joe Kline of Maritz. Joe is a Senior Infrastructure Specialist and manages the Network Management Group which is responsible for deploying new IT solutions, tools, upgrades, and more. Joe and his team use Web Performance Monitor from SolarWinds to manage hundreds of web applications and websites.


Jennifer: What SolarWinds products do you use to monitor your environment and how do you use them?

Joe: We have Server & Application Monitor (SAM), Web Performance Monitor, Network Performance Monitor, Network Configuration Manager, and VoIP & Network Quality Manager.

We previously used HP SiteScope and their Business Availability Center (BAC) products.  These products were pretty expensive and the SolarWinds Orion toolset offered more flexible licensing options.


Orion offers a lot of flexibility for monitoring. I do like how flexible SAM is, we can pretty much do what we want to do. It is just basically limited by whatever we want to put our heads and minds together to work on. HP kind of limited you on what you could do.


We use Network Performance Monitor (NPM) for the core CPU, memory, and disk space for pretty much everything. We use Server & Application Monitor (SAM) to cover a lot of application specific monitoring like Windows services, processes, and some performance monitoring counters. We use Web Performance Monitor (WPM) to monitor all our web applications, which is quite a few.


Jennifer: Could you explain how you monitor your websites, what you’re looking for, issues you uncover?

Joe: We are currently monitoring about 315 websites with WPM today. That could easily double if WPM improves its scalability. And for SAM, we probably have about 25,000 component monitors deployed across roughly 1,700 servers. We are several different business units operating under one big parent company, so we have different application development groups supporting each of those lines of business. We deal with a lot of different architectures when it comes to web applications. We are very heavy on the IIS and .NET side but we also have a pretty sizeable installation of JBoss.  Our business units under Maritz all develop applications that you probably use today if you have a credit card with any kind of points reward program. That’s the kind of thing we do. We host a lot of these applications at our HQ in Missouri, but we also have onsite deployments and we are looking at venturing into the cloud.

We have hundreds and hundreds of those applications and SLAs we have to adhere to. Some of which, especially in the financial sector, are very strict with significant financial penalties if we don’t adhere to SLAs.


With our website monitoring, we are primarily looking at availability. We do look at performance on a limited basis and if we see a performance issue, we bring it to the business unit’s attention.  For example, we look at every step in the transaction and try to put content matches in where it makes sense.  We find issues with the websites every day, at least 10 to 15 alerts every day.


Jennifer: If you didn’t have web application monitoring in your environment, would you be getting a lot more calls?

Joe: Over the last 4 years, we’ve really matured as a Network Operations Center in how we monitor everything. I think the customers whose applications we monitor rely on us a lot more comfortably than they used to. Monitoring web applications 5 years ago was kind of an afterthought. We did it as requested and now we pretty much monitor everything that we know about.


The obvious benefits are if we’re notifying them (clients) of issues before the client does.  We provide the business with a lot of reporting (internally built) driven off the Orion data, largely availability and performance on a monthly level.  We also take our change management infrastructure changes, and correlate that with downtime events and manipulate the availability metrics based off those windows, so we can have more realistic SLAs which exclude maintenance windows.




The following illustration explains the Template and Application relationship.


Here you can see that if you change something at the template level, the applications based on that template will be affected. Conversely, if you change something on the application level, only the individual application will be affected. This inheritance relationship is beneficial if you need to make a great deal of changes quickly. For example, rather than change one item on 100 applications that are based on a single template (which requires 100 changes), you can more easily change the one item on the template. That one change in the template will trickle down to all 100 applications that are based on the template.
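The inheritance relationship described above can be illustrated with a small sketch. The class and attribute names below are hypothetical, not SAM’s actual data model; the point is only the override-or-inherit lookup:

```python
# Hypothetical sketch of template/application inheritance (not SAM's real model).
class Template:
    def __init__(self, **settings):
        self.settings = settings

class Application:
    def __init__(self, template, **overrides):
        self.template = template
        self.overrides = overrides

    def setting(self, key):
        # Application-level overrides win; otherwise inherit from the template.
        return self.overrides.get(key, self.template.settings.get(key))

iis = Template(poll_interval=300, timeout=30)
apps = [Application(iis) for _ in range(100)]   # 100 apps based on one template

apps[0].overrides["timeout"] = 60      # affects only this one application
iis.settings["poll_interval"] = 120    # one template change reaches all 100 apps

print(apps[5].setting("poll_interval"))  # 120, inherited from the template
print(apps[0].setting("timeout"))        # 60, the application-level override
```

One edit to `iis.settings` “trickles down” to every application, while the override on `apps[0]` stays local, mirroring the 1-change-versus-100-changes example in the text.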




Now you know the secret that makes SAM so powerful and flexible!

Have you had a chance to check out the 30+ free tools on DNSstuff.com? If not, there's no better time than the present. DNSstuff has all the tools you'll ever need to troubleshoot DNS and email issues - all in one super easy interface.

And best of all... the tools are now totally free! Yes, we love you guys.

There's more information about the tools here. And you can check out the site itself here. Let us know what you think!

This is one of the most highly trending topics storage and virtualization admins are trying to follow and cash in on. Server virtualization has evolved over the years, beyond myths and apprehensions, into a stable, secure, and much-sought-after technology for improved productivity and resource efficiency. VMware® has been the major player in this market, but with the latest release of Hyper-V®, Microsoft® has become a credible alternative.


In this blog series, we’ll compare the latest versions of these hypervisors, vSphere 5.1 and Hyper-V 2012. Hyper-V has come a long way to be considered a competitor to the long-dominant vSphere. This might be kind of a bakeoff, but one that will help virtualization shops take advantage of both technologies and benefit from a multi-vendor virtual infrastructure.


In Part 1 of this series, we’ll compare the storage capabilities and limitations of both products. Let’s dive in.



Supported Storage Types



The table below shows the various storage types supported by both vSphere 5.1 and Hyper-V 2012. Hyper-V may not have supported all of these in earlier releases, but Hyper-V 2012 now matches every storage technology that vSphere supports.


[Table: vSphere 5.1 vs. Hyper-V 2012 supported storage types]



Storage Thin Provisioning


Thin provisioning increases virtual machine storage utilization by enabling dynamic allocation and provisioning of physical storage capacity. This helps you cut down the amount of space that is allocated to VMs but never used.

  • Hyper-V 2012 provides thin provisioning at the virtual layer via the VHDX file format and at the physical storage layer when your storage supports it. You can add virtual disks to SCSI controllers while the VM is running (IDE-attached disks can only be added while the VM is powered off). Though a VHDX disk can grow up to 64 TB, its dynamic expansion means it grows only when required, saving a lot of storage capacity.
  • vSphere 5.1 introduces a new virtual disk type, the space-efficient (SE) sparse virtual disk. This has the ability to reclaim previously used space within the guest OS and the ability to set a granular virtual machine disk block allocation size according to the requirements of the application.



Support for Linked Images


Instead of creating a separate full image for every VM you run, you can save tremendous storage by creating a single base image and linking the other VMs to it.

  • VMware Linked Clones are simply “read/write” snapshots of a “master or parent” desktop image. This feature is not available in vSphere yet and is native only to vCloud Director and View. With vSphere 5.1, the maximum number of hosts that can share a read-only file has been increased to 32. Now VMware View and vCloud deployments using linked clones can have 32 hosts sharing the same base disk image.
  • Hyper-V Differencing Disks are like VMware Linked Clones. There’s a parent VHD and a number of linked VHD/VHDX files for VMs that only record the changes from the parent. Virtual admins can create a master base image and simply link other VMs to this image, saving a lot of disk space.



Storage Management


Both vSphere and Hyper-V offer centralized management capabilities for datastores. Both offer SMI-S support.

  • vSphere has vCenter Server
  • Hyper-V 2012 needs System Center 2012 - Virtual Machine Manager (VMM)


This analysis DOES NOT come down to a winner. Both hypervisors are suited to different requirements based on cost, infrastructure and virtualization management. Watch out for Part 2 of the vSphere 5.1 vs. Hyper-V 2012 series, where we will discuss memory handling.





Can’t Wait to Read the Next Part Already?


This blog is based on information in the whitepaper “Hyper-V® 2012 vs. vSphere™ 5.1: Understanding the Differences” provided by SolarWinds and blogger Scott Lowe. Download the whitepaper to get the whole story.




If you are interested in virtualization performance monitoring, learn about VMware monitoring and Hyper-V monitoring.

Other parts of the vSphere 5.1 vs. Hyper-V 2012 series:

Do you ever feel like you’re not taking full advantage of the power of your alerting system?  Have you seen snippets of coolness from other users and wondered how they did it?

Check out our Head Geeks in SolarWinds Lab Episode #3 as they discuss all four types of alerts in a single episode, with tutorials and hands-on demonstrations. They’ll cover syslog alerting, SNMP trap alerting and, more importantly, the difference between basic and advanced alerts, with an in-depth walk-through of Orion platform alert management.





Great alerts allow quick response, and managing them shouldn’t be a chore.  Learn how to create and configure the alerts you need and suppress the ones you don’t.

Make your manager think you have psychic powers by day, and sleep better on nights and weekends. Get the details here.

As a security admin, you’re always committed to the security and privacy of the users in your network. However, despite your commitment, the fact remains that without comprehensive monitoring in all the right areas, your network will still be prone to targeted attacks.



The Opera Security Breach

The recent incident at Opera Software stands as a perfect example of a breach caused by inadequate network monitoring. Initially, it appeared that the attack had been contained and no user data was compromised. However, upon closer inspection the security team found that this was no ordinary attack. The attackers had not tried to steal intellectual property; instead, they wanted to use Opera’s auto-update mechanism to propagate malware of a kind normally associated with Trojans. As a result, users received the malware as part of a browser update.



How to stay guarded?

You need to identify and respond to threats in real time rather than reacting after the fact. The Opera breach reiterates the importance of continuously monitoring the activities on your Web servers, firewalls and endpoints. It’s advisable to use SIEM security software that can help you identify anomalies and deviations from policy definitions, baseline your IT environment, and shield it from vulnerabilities. SIEM security software also helps you analyze and correlate activities across the various components of your IT infrastructure.

Further, it makes utmost sense if the SIEM software uses active responses to react to critical events and shut down threats immediately; for example, automatically blocking an IP address upon identifying a policy deviation.
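The active-response idea boils down to a simple rule engine: count policy violations per source, and block the offender once a limit is reached. The sketch below is purely illustrative; the event format, limit, and `block_ip` stub are hypothetical, and a real SIEM would call a firewall API instead:

```python
# Hypothetical active-response sketch: block an IP after repeated policy violations.
from collections import Counter

VIOLATION_LIMIT = 3          # hypothetical threshold before we block
blocked = set()
violations = Counter()

def block_ip(ip):
    # A real deployment would push a rule to a firewall; here we just record it.
    blocked.add(ip)

def handle_event(event):
    """Feed each incoming SIEM event through the response rule."""
    if event["type"] == "policy_violation":
        violations[event["src_ip"]] += 1
        if violations[event["src_ip"]] >= VIOLATION_LIMIT and event["src_ip"] not in blocked:
            block_ip(event["src_ip"])

# Three violations from the same source trip the response.
for _ in range(3):
    handle_event({"type": "policy_violation", "src_ip": "203.0.113.9"})
print(blocked)  # {'203.0.113.9'}
```

The value of doing this automatically, as the text argues, is speed: the offending address is shut out at event time, not after a human reads a report.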



Stay updated but cautiously!!

Although the Opera security update incident led to a breach, it doesn't mean you should be skeptical about every update you get from vendors. The longer you fall behind on security updates from your software vendors, the more vulnerable you become. Unpatched applications are prone to become entry points for security attacks, making patch management one of the most critical processes in vulnerability management.



SolarWinds® Patch Manager to the rescue!!

SolarWinds Patch Manager is an easy-to-use tool that helps you report on, deploy, and manage Opera patches. It simplifies and centralizes the patch management process by:

  • Preparing patch deployment packages for Opera with the latest updates and bug fixes
  • Testing the Opera patches before actual deployment and analyzing the post-implementation results, thereby helping bulk deployment
  • Creating advanced before-and-after package deployment scenarios to ensure that complicated patches are deployed without complicated scripting
  • Ensuring compliance through automated patching with the help of built-in reports and custom reports that allow you to check whether the patch was successful or failed during deployment



In order to achieve comprehensive network monitoring, it’s best to have a hawk-eye on the events in your network and update your security software regularly to minimize vulnerabilities.

Suspicious devices on your network? How can you track them down? You basically have two options: the easy way (my personal favorite) or the hard way.

Option 1: The Hard Way

The hard way involves pings, traceroutes, and ARP tables before you can actually locate and disconnect the device or user. It can be a very frustrating game of cat and mouse, with the mouse trying every last bit of your patience! If you’ve done this before, then you know how time-consuming and painstaking it can be. And, by the time you finally track down the suspicious device, the damage could already be done.
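The first manual step is usually correlating a suspect IP with a MAC address from the ARP table, then chasing that MAC through switch port tables. Just the lookup step might look like this; the ARP output below is fabricated sample data for illustration:

```python
import re

# Fabricated `arp -a`-style output for illustration only.
ARP_OUTPUT = """\
Internet Address      Physical Address      Type
10.0.0.1              00-1a-2b-3c-4d-5e     dynamic
10.0.0.57             aa-bb-cc-dd-ee-ff     dynamic
"""

def arp_lookup(text, ip):
    """Return the MAC address recorded for `ip` in ARP-table text, or None."""
    for line in text.splitlines():
        # Match "<ip> <mac>" where the MAC is 12 hex digits plus 5 separators.
        m = re.match(r"\s*(\S+)\s+([0-9a-f-]{17})\s", line, re.IGNORECASE)
        if m and m.group(1) == ip:
            return m.group(2)
    return None

print(arp_lookup(ARP_OUTPUT, "10.0.0.57"))  # aa-bb-cc-dd-ee-ff
```

And that only gets you the MAC; you would still have to query each switch to find the port it hangs off, which is exactly the drudgery the "easy way" below eliminates.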

Option 2: The Easy Way

The easy way involves leveraging the integrated power of SolarWinds IP Address Manager (IPAM) and User Device Tracker (UDT). With this dynamic duo, you can track down a device and user in a flash! Simply use the integrated view to see IP address information along with the corresponding switch port details and user information, all within the same window. You get both current and past connection details. You can even shut down the compromised port directly through the SolarWinds Web UI with the click of a button. All this right from your desk!

The only thing left to do is figure out how you’re going to spend all that time you saved by doing things the easy way.

For more on IPAM and UDT, check out these videos:

IPAM Overview Video

UDT Overview Video


I was working late the other night and was getting really hungry. The snacks in the break room were all gone. The fridge was empty. I ended up running out for a burger and came back to meet my deadline. The burger was not especially satisfying, but it was fast. What I really wanted was a nice plate of lasagna with a green salad and some garlic bread. And a slice of chocolate mousse cake for dessert. “Where are those food replicators when you need them?” I wondered.

Almost 50 years after Star Trek brought us the first fictional food replicators, it looks like they could become a reality that helps feed the world. According to the CNET article, NASA funds attempt at 3D food printer for pizza, NASA has provided a grant to Systems and Materials Research, a materials and technology firm, to come up with a 3D food printer. The goal here is to be able to provide food with a super-long shelf life, suitable for long-term space travel. The printer would use powdered nutrients to print out 3D meals. Anjan Contractor, an engineer with Systems and Materials Research, has already created a 3D printer that prints chocolate onto cookies. His next assignment is to print a 3D pizza.

Food for Thought

In the Quartz article, The audacious plan to end hunger with 3-D printed food, Contractor says that the dried nutrient powders the printer uses for “ink” can last for up to 30 years, making them perfect not only for feeding space travellers, but for feeding the 12 billion people on earth as well. Creating food in this way cuts waste, is sustainable (because it can come from sustainable nutrient sources, like algae), and uses open source hardware. Contractor bases the prototype 3D food printer on the second-generation RepRap 3D printer.

Algae lasagna could taste a lot like spinach lasagna, don't you think?

Who is Ubisoft?

Ubisoft Entertainment is a French global video game publisher and developer, and is one of the largest independent game publishers in Europe and the United States. Do you remember playing Assassin’s Creed and Prince of Persia? Those were from Ubisoft.


Hacking? What Happened?

One of Ubisoft’s websites was exploited to gain unauthorized access to some of its online systems, and during this process confidential customer account data was illegally accessed from the account database. Usernames, email addresses and encrypted passwords have potentially been exposed, and users have been advised to change their passwords. Ubisoft has claimed that financial information, credit and debit card data, users’ real names, and home addresses were not affected. Forensic tests and analyses are under way to uncover how the attack happened.



Ubisoft is Not Alone

This type of hacking attack is becoming more common in the digital world, and technology giants such as Facebook®, Microsoft®, Apple® and Twitter® have also been the victims of similar hacking incidents in the past.



Some Scary Factoids

According to the 2013 Data Breach Investigations Report (DBIR),

  • Hacking constituted 52% of breaches that happened in 2012
  • 48% of hacking incidents involved authentication-based attacks and stolen credentials (guessing, cracking, or reusing valid credentials)
  • 66% of breaches that happened in 2012 remained undetected for months


What is the Lesson Learnt?

Organizations are NOT prepared enough to detect zero-day attacks. Only after the attack has been perpetrated and the damage done do we learn its impact. The reason is that there’s not much real-time, actionable data for IT teams to monitor. Forensics alone are not enough. Detecting and stopping today’s zero-day, multi-vector, and blended threats requires real-time, in-memory analytics that can capture data and respond to network attacks and insider abuse at network speed.



Get Access to Actionable Data & Real-time Log Analytics

Actionable data is found in all the system and device log files. All security, operational and policy-driven events are captured in the log files. To be effective in network defense, and not just for forensic analysis, the network and security event data must also be analyzed and correlated in real time. Security Information & Event Management (SIEM) systems help you get real-time insight into network activity by collecting logs from various network entities, correlating them in memory, and providing meaningful incident awareness to isolate anomalous events, threat vectors and non-compliant behavior patterns.
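In-memory correlation can be as simple as joining events from different log sources on a shared key. The toy sketch below flags a source IP that fails authentication on several distinct systems; the event format and threshold are hypothetical, and real SIEM correlation rules are far richer:

```python
# Toy in-memory correlation sketch (event format is hypothetical).
from collections import defaultdict

DISTINCT_HOST_LIMIT = 3   # hypothetical: failures on 3 hosts looks like an attack

def correlate(events):
    failures = defaultdict(set)   # src_ip -> hosts where auth failed
    alerts = []
    for ev in events:
        if ev["type"] != "auth_failure":
            continue
        failures[ev["src_ip"]].add(ev["host"])
        if len(failures[ev["src_ip"]]) == DISTINCT_HOST_LIMIT:
            alerts.append(f"Possible credential attack from {ev['src_ip']}")
    return alerts

# Failed logins collected from three different systems' logs.
events = [
    {"type": "auth_failure", "src_ip": "198.51.100.7", "host": h}
    for h in ("web01", "db01", "mail01")
]
print(correlate(events))  # ['Possible credential attack from 198.51.100.7']
```

No single host's log looks alarming on its own; the incident only becomes visible when events from all three sources are correlated, which is the point of collecting logs centrally.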


Try SIEM, and monitor & defend your network against hacking, intrusions, breaches, data loss and other malicious security threats!

Hats off to Canada! It seems that D-Wave has produced the first commercial quantum processor, or at least they've presented enough evidence that NASA, Google, and Lockheed Martin have purchased, or are in the process of purchasing, one of their processors.


When D-Wave first started out, there was, and continues to be, a great deal of skepticism regarding its claims of creating a quantum processor. Most research labs have only succeeded in building general-purpose quantum computers that use a few qubits (quantum bits). D-Wave claims to use hundreds of qubits in their processor.


D-Wave seems to have overcome the limitations seen in research labs by creating a processor that solves a specific type of problem - optimization problems - though it is comparable to a high-end classical processor for general problem solving. A recent study in Nature Communications supports D-Wave's claims about creating the first commercial quantum processor. At minimum, the study lends credence to D-Wave's claims  by acknowledging that "quantum effects play a functional role" in the processor.


Regardless of its status as quantum processor, it has performed up to 10,000 times faster on some optimization problems during testing. If you are involved in machine learning or identifying exoplanets from satellite images, there might be a D-Wave computer in your future. Unfortunately, the high price tag means you have to have extraordinarily deep pockets to purchase one. Heck, you might even need to have deep pockets to look at one, considering the reported $10 million asking price.


So the general IT field probably won't have to worry about monitoring quantum devices for decades, but with this viable quantum entry into the field, we're going to have to start thinking about how to apply IT management principles to quantum computing, especially in regards to monitoring quantum equipment and new security protocols. And really, after quantum computing, we're going to have to worry about quantum networking to move this new wealth of information around more efficiently.

Flow technologies deliver one of the best ways to see what is going on with your network traffic. In the next episode of SolarWinds Lab, we'll introduce our newest Head Geek and resident flow technology guru, Don. Our Geeks will walk through side-by-side, command-line configuration of JFlow, sFlow, and NetFlow. They'll also discuss their relative merits and use cases. As always, bring your questions and we'll answer them!


Next Episode: July 10, 1pm CDT.

Technical Topics. Live Chat.

Sign up for a calendar reminder.

Last time, we spoke about how Web components such as CSS, HTML, JavaScript, and Third Party Content affect website performance. Let’s continue to talk about a different issue, images, to understand why they create problems, and tips to identify these problems.


Images definitely enhance the look and feel of a website. They significantly contribute to the overall user experience. If images don’t load when you visit a website for the very first time, chances are it’s going to affect your perception of that site and its content. You might never return to that site. Your users are going to be thinking the same thing when they visit your website. Sometimes images that tell an important story about your product or service won’t load, leaving the page with just text. Let’s look at why this issue occurs.


The image loads but it looks incorrect. Sometimes when images load on a website, they don’t look the way they do in other browsers. This could happen if you’re using Web accelerator software, which reduces image quality.

Plug-in issues. Some plug-ins installed in the browser allow images to load only on the very first viewing. They may not load during successive visits to the same page, even after refreshing.

Cache and cookies. A corrupt cache file or cookies can sometimes prevent images from loading.

Image permissions. Some browsers prevent certain websites from loading images just to increase load speed.

Internet Security. Antivirus, firewall, and other security programs may block images and prevent them from loading.

Pathnames to image files. Images that contain backslashes in their URL might have issues displaying in the browser. This may vary from browser to browser.


4 Tips for Monitoring Image Issues

1. Record your Web transaction. Recording a transaction will establish how well your applications are performing. You can then compare this to your baseline and identify what page element is causing the issue.

2. Use image matching. You can define the number of seconds it takes for an image to load using image matching. Monitoring this will tell you if the image has loaded within the specified time. Then you know if the transaction passed or failed.


                 [Screenshot: Set thresholds to monitor image loading times]

3. Monitor page load times. Establish a baseline for how much time it should ideally take applications to load. Then monitor the load times of each step in the page. If a step loads slowly or fails to load, you should receive an alert about the problem.

4. Enable JavaScript settings. Ensure JavaScript is enabled in your browser. Monitoring JavaScript will make sure the browser displays images and other features in your application.
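Tips 2 and 3 boil down to timing each transaction step against a threshold and verifying expected content arrived. A bare-bones sketch of that check (the timeout, sample page body, and helper name are hypothetical illustrations, not WPM's implementation):

```python
import time

# Hypothetical per-step timeout for a synthetic web transaction check.
STEP_TIMEOUT_SECONDS = 5.0

def timed_step(fetch, content_match=None):
    """Run one transaction step; return (elapsed_seconds, passed).

    `fetch` is any callable that returns the page body as a string,
    e.g. a wrapper around urllib.request.urlopen for a real site.
    """
    start = time.monotonic()
    body = fetch()
    elapsed = time.monotonic() - start
    passed = elapsed <= STEP_TIMEOUT_SECONDS
    if content_match is not None:
        # The step only passes if the expected content actually rendered.
        passed = passed and content_match in body
    return elapsed, passed

# Simulated step: the "page" is a local string so the sketch runs offline.
elapsed, ok = timed_step(lambda: "<img src='logo.png'> welcome", content_match="welcome")
print(ok)  # True
```

A monitoring loop would run such steps on a schedule, compare the elapsed times against a baseline, and raise an alert whenever a step fails its content match or exceeds its threshold.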


Efficiently & Effectively Monitor Websites

Learn to monitor website image issues using SolarWinds web performance monitoring software.  Check out the registration-free online demo.



When you install EOC, it also installs Orion modules.



When you uninstall EOC via "Add/Remove Programs" or "Programs and Features", the uninstall process only removes "SolarWinds Enterprise Operations Console." The modules remain. If you then install EOC to a different drive and run the Configuration wizard, you will receive the error:


"error while executing script- column name or number of supplied values does not match table definition"


Make sure EOC and all of its associated modules are uninstalled before moving EOC to a different drive on the same system.

I’m sure by now you’ve heard the constant buzz about hackers targeting vulnerabilities on your systems. You’re probably nodding your head and saying, “Yes, but what are some of the most common targets?” Well, one of the most common targets is actually end-user applications. Yes, you heard it right: end-user applications with well-known vulnerabilities are among the most common targets. The simple reason is that applications with a large user base give a hacker a better chance of finding a vulnerability to exploit. Such application vulnerabilities can lead to dreadful security issues like data theft, service downtime, and more.


Do you know which applications on your network are vulnerable?


It’s time that you keep tabs on the vulnerability levels of the applications on your network. Ideally, you should scan your network environment for vulnerabilities. “Catches win matches,” goes the clichéd saying, and it applies to catching vulnerabilities too. In addition to running the vulnerability scan, it’s extremely important for you to understand the target systems. The more heavily applications are customized during implementation, the less vendor support they typically receive. Hence, vulnerability scanning becomes the foundation of the security activities that follow.


Once you analyze the severity of the vulnerabilities, you need to prioritize which need to be addressed first. Looking at current trends, Java® and Adobe® products appear to be an abundant source of exploitable vulnerabilities. Given the estimate that Java is implemented on approximately three billion devices and the significant increase in Java-based exploits, it becomes critical to address these promptly.


Guard your applications against vulnerabilities


You need to stay updated on your software patches in order to stay protected from the latest security threats. Unpatched applications are prone to become entry points for security attacks, making patch management one of the most critical processes in vulnerability management. For example, the Java 7 Update 21 patches have been available for quite a long time now, but only 7% of users are running the latest version. For those still running older versions of Java, patching might get even harder, because upcoming security fixes will treat the current version (JRE 7) as the baseline.

With the help of automated and centralized patch management software, you can easily discover systems that are not running the latest updates and patch them accordingly. SolarWinds Patch Manager researches, scripts, packages, and tests patches for common third-party applications and automatically delivers them as ready-to-deploy patches within its console. Patch Manager also helps you in customizing patch reports, scheduling, and emailing them.

Protect your applications against vulnerabilities, stay secure!!

In the era of IPv6, as we find corporate networks expanding with an increase of IP-enabled devices, the entire IT infrastructure is evolving into a large and complex framework that will be unwieldy to manage without the right resources. As a network administrator, you need to be able to monitor and manage the configuration and changes happening on your network.


IP address allocation, reservation, tracking and management can be really hard if you are still using spreadsheets to manually track IP address changes and subnet masks. A centralized, automated IPAM solution is the need of the hour. Let’s see what an IPAM solution offers and why IP address management is no longer optional for businesses and organizations of all sizes and sectors.



Key Benefits of IP Address Management

  • Automation: Automated IP address scanning enables you to scan your subnets and update your server with all the dynamic changes happening on IP addresses as they are allocated and reserved. Automation saves time and effort while reducing errors to help improve operational efficiency and ensure the accuracy of your IP space data.




  • Centralized IP Address Management: When the organization is spread across locations, centralized IP address management will help keep all network admins in sync with the IP address allocation and changes. You need an enterprise-class system to:
    • Keep track of used IP addresses, IP address ranges, IP exclusions, etc.
    • Alert on IP address space utilization by subnets, DHCP scopes, etc.
    • Locate IP addresses quickly and easily for management




  • DHCP/DNS Server Management: In addition to IP address management, there’s need for comprehensive DDI software that will offer integrated and consolidated management and monitoring of all your DHCP and DNS servers, such as:
    • Managing automatic and dynamic assignment of IP addresses to devices with DHCP
    • Monitoring name-to-IP with DNS and IP-to-name with reverse DNS




  • IPv6 Migration: IPAM tools can help you plan and coordinate your migration from IPv4 to IPv6 smartly and effectively, and allow you to create and test multiple scenarios before implementation to improve preparedness.




  • Tackling BYOD: The Bring Your Own Device (BYOD) trend causes an explosion of IP addresses to manage as employees bring personal notebooks and smartphones onto the corporate network. IPAM software will help you efficiently and effectively monitor and manage all the dynamic IP address assignments and changes occurring on the network as a result of mobile device proliferation.





  • Security: IP addresses are the foundation of modern network connectivity and communication. Without them, you’d have no network. As such, your IP space is a vital resource that needs to be properly secured, including:
    • Controlling access privileges to personnel managing IP addresses
    • User delegation to keep a check on who makes changes, when and where




  • Compliance, Reporting & Audit Trail: Organizations today must comply with increasingly stringent internal policies, as well as external regulatory mandates. As such, regular security audits and compliance checks are a crucial and required part of network management. This is especially true for federal and government agencies. IPAM tools help you to:
    • Perform detailed event recording to track the IP address changes across your network
    • Store historical IP address tracking data for compliance and reporting purposes
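The automated IP address scanning described in the first bullet above can be sketched with Python's standard library. This is a minimal ping sweep, not how any particular IPAM product is implemented; the subnet and the Linux-style `ping` flags are illustrative assumptions:

```python
import ipaddress
import subprocess

def enumerate_hosts(cidr):
    """Return every usable host address in a subnet."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

def ping_sweep(cidr, timeout_s=1):
    """Ping each host once and return the set that responded.
    Flags assume the Linux 'ping'; adjust for other platforms."""
    alive = set()
    for host in enumerate_hosts(cidr):
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if result.returncode == 0:
            alive.add(host)
    return alive

# A /29 contains six usable host addresses.
print(enumerate_hosts("192.168.1.0/29"))
```

Running the sweep on a schedule and diffing the result against the previous run is the essence of keeping IP space data current automatically.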




Requirements of IT Teams in Organizations


  • Small companies are starting to move away from spreadsheet-based and homegrown tools toward automated, centralized IP address management.
  • Mid-market organizations are looking to take control of the BYOD scenario, alongside the need for automated and consolidated DHCP, DNS, and IP address management, monitoring, alerting and reporting.
  • Large enterprises will continue to require the highest levels of automation and centralization to improve operational efficiency and productivity in increasingly complex and disparate multi-vendor IP infrastructures, in addition to the growing need to migrate to IPv6 as they outgrow their existing IPv4 space.
  • Government and federal agencies face a more stringent requirement of security, compliance and reporting in all areas of network management.


SolarWinds to the Rescue: IPAM for ALL!


SolarWinds IP Address Manager provides the agility and scalability to monitor and manage the entire IP space of your corporate network. With growing pressure to tackle BYOD, IP conflicts, and IPv6 migration, and to retire spreadsheet-based IP address management, organizations and agencies of all sizes need IPAM software. IPAM gives you the opportunity to realize increased ROI and significantly simplified network administration, quickly.


Users of SolarWinds network products know that a Microsoft SQL Server database is a required component. Though all SolarWinds network products still include a utility called Database Manager, its feature set is now limited to query operations.


So in this article, as a courtesy to those who have been relying on Database Manager for backing up and restoring a SolarWinds product database, I'm going to provide steps for performing these operations using Microsoft SQL Server Management Studio, in the context of moving an existing database from one SQL Server to another. The procedures apply to databases created in SQL Server 2005 and 2008.




  1. Using an administrator account, log on to the SQL Server database server where your SolarWinds product database currently resides.
  2. Click Start > All Programs > Microsoft SQL Server 200X > SQL Server Management Studio.
  3. Specify the server name of the current SolarWinds database server on the Connect to Server window.
  4. If you are using SQL Server Authentication, click SQL Server Authentication in the Authentication field, and then specify your credentials in the User name and Password fields.
  5. Click Connect.
  6. In the pane on the left, expand the name of the server hosting the SQL instance you are using for your SolarWinds product, and then expand Databases.
  7. Right-click the name of your SolarWinds database (for example, right-click "NCM_database"), and then click Tasks > Back Up.
  8. In the Source area, select Full as the Backup type.
  9. In the Backup set area, provide an appropriate Name and Description for your database backup.
  10. If there is not already an appropriate backup location listed in the Destination area, click Add, and then specify and remember the destination path and file name you provide. This is the location where your backup is stored. Note: Remember, if your database is on a remote server, as recommended, this backup file is also created on the remote database server; it is not created locally.
  11. Click Options in Select a page pane on the left.
  12. In the Reliability area, check Verify backup when finished.
  13. Click OK.
  14. Copy the .bak file from your current SolarWinds database server to your new database server.
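For admins who prefer a query window to the GUI, the backup in steps 7-13 corresponds to a pair of T-SQL statements. Here's a sketch that composes them in Python; the database name, path, and backup name are example values from the steps above, and `RESTORE VERIFYONLY` plays the role of the "Verify backup when finished" option:

```python
def build_backup_sql(db_name, bak_path, backup_name):
    """Compose T-SQL roughly equivalent to the GUI backup steps:
    a full backup to the chosen destination file, followed by a
    verification pass over the newly written .bak file."""
    backup = (f"BACKUP DATABASE [{db_name}] "
              f"TO DISK = N'{bak_path}' "
              f"WITH FORMAT, NAME = N'{backup_name}';")
    verify = f"RESTORE VERIFYONLY FROM DISK = N'{bak_path}';"
    return backup, verify

backup, verify = build_backup_sql(
    "NCM_database", r"D:\Backups\NCM_database.bak", "NCM full backup")
print(backup)
print(verify)
```

Run the generated statements in a Management Studio query window connected to the current database server; the .bak file still lands on that server, as noted in step 10.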




Restoring a database happens differently depending on the version (2005/2008) of SQL Server you are running.


SQL Server 2005


To restore your database backup file:

  1. Log on to the new database server using an administrator account.
  2. Click Start > All Programs > Microsoft SQL Server 2005 > SQL Server Management Studio.
  3. Click File > Connect Object Explorer.
  4. Specify the name of the new SolarWinds database server on the Connect to Server window.
  5. If you are using SQL Server Authentication, click SQL Server Authentication in the Authentication field, and then specify your credentials in the User name and Password fields.
  6. Click Connect.
  7. Click the name of your server to view an expanded list of objects associated with your server, and then right‑click Databases.
  8. Click Restore Database.
  9. Leave To database blank.
  10. Click From device, and then browse (…) to the location of your .bak file.
  11. Click Add, and then navigate to the .bak file and click OK.
  12. Click OK on the Specify Backup window.
  13. Check Restore.
  14. Select the name of your database from the To database field. It will now be populated with the correct name. For example, select "NCM_database".
  15. Click Options in the left Select a page pane.
  16. Check Overwrite the existing database.
  17. For each Original File Name listed, complete the following steps to ensure a successful restoration:
    1. Click Browse (…).
    2. Select a directory that already exists.
    3. Provide a name for the Restore As file that matches the Original File Name, and then click OK.
  18. Select Leave the database ready to use by rolling uncommitted transactions…(RESTORE WITH RECOVERY).
  19. Click OK.
  20. Open and run the appropriate SolarWinds Configuration Wizard to update your SolarWinds installation.
  21. Select Database and follow the prompts. Note: Due to the nature of security identifiers (SIDs) assigned to SQL Server 2005 database accounts, SolarWinds recommends that you create and use a new account for accessing your restored Orion database on the Database Account window of the Orion Configuration Wizard.


SQL Server 2008

To restore your database backup file on a server running SQL Server 2008:

  1. Log on to the new database server using an administrator account.
  2. Click Start > All Programs > Microsoft SQL Server 2008 > SQL Server Management Studio.
  3. Click File > Connect Object Explorer.
  4. Specify the name of the new SolarWinds database server on the Connect to Server window.
  5. If you are using SQL Server Authentication, click SQL Server Authentication in the Authentication field, and then specify your credentials in the User name and Password fields.
  6. Click Connect.
  7. Click the name of your server to view an expanded list of objects associated with your server, and then right‑click Databases.
  8. Click Restore Database.
  9. Leave To database blank.
  10. Select From device, and then click Browse (…).
  11. Confirm that File is selected as the Backup media.
  12. Click Add.
  13. Navigate to the .bak file, select it, and then click OK.
  14. Click OK on the Specify Backup window.
  15. In the Destination for restore area, select the name of your database from the To database field. Note: The To database is now populated with the correct name. For example, select "NCM_database".
  16. Check Restore next to the database backup you are restoring.
  17. Click Options in the left Select a page pane.
  18. Check Overwrite the existing database (WITH REPLACE).
  19. For each Original File Name listed, complete the following steps to ensure a successful restoration:
    1. Click Browse (…).
    2. Select a directory that already exists.
    3. Provide a name for the Restore As file that matches the Original File Name, and then click OK.
  20. Select Leave the database ready to use by rolling uncommitted transactions…(RESTORE WITH RECOVERY), and then click OK.
  21. Open and run the appropriate SolarWinds Configuration Wizard to update your SolarWinds installation.
  22. Select Database and follow the prompts.  Note: Due to the nature of security identifiers (SIDs) assigned to SQL Server 2008 database accounts, SolarWinds recommends that you create and use a new account for accessing your restored Orion database on the Database Account window of the Orion Configuration Wizard.

What's red, white and blue and warns you when your log traffic is spiking?


Kiwi Syslog Server, of course!  (Red for errors, blue for info...) 

New this week: a two-minute guided tour (YouTube video) that shows you how Kiwi Syslog can monitor, archive and alert on your log files. 



Syslog Server Guided Tour


If you're in a hurry, here's a quick schedule of the guided syslog server video tour.

  • 0:10 - what types of messages Kiwi Syslog Server can support
  • 0:20 - how many messages Kiwi Syslog Server can support
  • 0:25 - what the real time display looks like
  • 0:30 - a filtered real time display
  • 0:35 - the optional web interface
  • 0:40 - log file handling (split logs and archiving)
  • 0:55 - complying with retention policy requirements
  • 1:15 - reacting to message events with email, sounds and scripts
  • 1:25 - forwarding messages to database servers or other log servers
  • 1:45 - secure forwarding utility
  • 1:55 - conclusion, including overview/summary diagram (at 2:05)

If you’re a security practitioner, you should be reading this.



The 2013 Data Breach Investigations Report (DBIR) published some alarming statistics that call into question our preparedness to combat new-age security attacks. The speed and sophistication of today’s attacks, and the new threat vectors being introduced, are causing financial and reputational disasters across geographies and organizations.



The report found that:

  • 19% of breaches combined phishing, malware, hacking, and entrenchment. This is known as the Assured Penetration Technique.
  • 78% of intrusions required little or no specialist skills or resources. This means companies weren’t prepared enough and had no preventive mechanism in place.
  • 66% of breaches remained undetected for months. Imagine the loss of data and resources during this period!
  • 84% of intrusions took just minutes to inflict damage. This means the threat response systems employed in companies were weak and slow to respond.



These meaningful numbers reinforce the need to be prepared for today’s advanced attacks. Most organizations don’t know how effectively their security systems avert threats and counter breaches and intrusions. The best place to start fishing for clues is the wealth of logs generated from various entities in the IT infrastructure.



Logs are the Means to an Actionable End

Logs provide a wealth of information about virtually everything that’s happening on your network. It’s only wise to take advantage of what’s available in the logs and get better visibility into the problems and security vectors that are impacting your IT infrastructure. You can achieve comprehensive log management and analysis by:

  • Aggregating log data from disparate sources across your IT environment
  • Correlating the collected logs to obtain meaningful information about device and user activity on your network
  • Setting up alerting to automatically notify you of suspicious or non-compliant activity on your network and systems
  • Programming automated active responses to counter and prevent threats in real time
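The aggregate-correlate-alert pipeline above can be sketched in a few lines of Python. This is a toy illustration, not a SIEM: the log lines, regex, and threshold are all invented for the example:

```python
import re
from collections import Counter

# Matches the source IP in an SSH-style failed-login message (example pattern).
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def failed_logins_by_ip(lines):
    """Correlate raw auth-log lines into a per-source-IP failure count."""
    counts = Counter()
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

def alerts(counts, threshold=3):
    """Flag any source IP that reached the failure threshold."""
    return {ip for ip, n in counts.items() if n >= threshold}

log = [
    "sshd[101]: Failed password for root from 10.0.0.5 port 22",
    "sshd[102]: Accepted password for alice from 10.0.0.9 port 22",
    "sshd[103]: Failed password for root from 10.0.0.5 port 22",
    "sshd[104]: Failed password for admin from 10.0.0.5 port 22",
]
print(alerts(failed_logins_by_ip(log)))  # {'10.0.0.5'}
```

A real SIEM adds the missing pieces: collection from many sources, normalization across log formats, persistence, and automated active responses such as blocking the offending IP.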



Security Information & Event Management (SIEM) tools provide all the protection you need to detect, alert, and respond to attacks by preventing or containing them. SIEM tools will further help you analyze log data for advanced incident awareness and perform event forensics to isolate the root cause of a threat or attack. For a full-function SIEM virtual appliance, try SolarWinds Log & Event Manager. Our solution will enhance your IT security and prepare you to face the onslaught of sophisticated zero-day attacks.

Welcome to SolarWinds blog series “Diving Deeper with NetFlow – Tips and Tricks”. This is the last part of the 6-part series, where you can learn new tips by understanding more about NetFlow and some use cases for effective network monitoring.


In the previous blog, we discussed Quality of Service (QoS) and how you can implement QoS policies across your network by analyzing NetFlow data. In this blog, we will dive into capacity planning using NetFlow and understand its impact on scaling up your network.

As organizations scale up, NetFlow helps administrators plan network capacity more accurately and deploy greater bandwidth for advanced networking services. Using NetFlow, one can easily check whether bandwidth growth is aligned with the resources utilized in the current environment and plan for the future. This also allows network managers to more easily monitor the bandwidth consumed by applications.

Capacity planning using NetFlow can also help network administrators implement QoS policies and prioritize mission critical applications by characterizing traffic. By distinguishing different types of network traffic like voice, email and other applications, administrators can analyze and understand the QoS policies they have implemented. Top applications and conversations based on NetFlow data can be stored for reference unlike PCAP, which requires extensive storage.

Capacity planning with NetFlow helps enterprises collect historical NetFlow data and compare trends across the organization’s network. This helps allocate enough bandwidth for business-critical applications and prevent anomalies from entering the network. Using NetFlow for capacity planning also assists in scaling up the network according to need and making better use of the available bandwidth, ensuring good resource alignment.
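The "top conversations" summary mentioned above boils down to aggregating flow records by conversation and keeping the heaviest. A minimal sketch, using invented flow records rather than real NetFlow export parsing:

```python
from collections import defaultdict

def top_talkers(flows, n=3):
    """Aggregate byte counts per (src, dst, app) conversation and return
    the n heaviest — the compact summary worth retaining for trending,
    unlike full packet captures, which require extensive storage."""
    totals = defaultdict(int)
    for flow in flows:
        key = (flow["src"], flow["dst"], flow["app"])
        totals[key] += flow["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

flows = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "app": "voice", "bytes": 4000},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "app": "voice", "bytes": 6000},
    {"src": "10.0.0.3", "dst": "10.0.0.4", "app": "email", "bytes": 1500},
]
print(top_talkers(flows, n=2))
```

Storing these per-interval aggregates over months is what makes the trend comparisons described above possible without PCAP-scale storage.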

Using NetFlow Traffic Analyzer, you can store aggregated historical network traffic data, which gives you the additional help you need during capacity planning in your network environment. With relevant network information at hand, scaling up to meet network requirements and implementing QoS policies becomes much easier, even in an enterprise-level environment.

To learn more about NetFlow, check out our NetFlow V9 Datagram Knowledge Series.

Watch the entire ‘Diving Deeper with NetFlow – Tips and Tricks’ webcast here and become an expert in understanding and implementing NetFlow in your enterprise networks.

Download a free fully functional 30-day trial of SolarWinds Bandwidth Analyzer Pack.

Last week I had the pleasure to interview Tracy Dennis of Antioch Unified School District, located in Antioch, California.

JK: How did you come to know SolarWinds for your network management needs?
TD: It happened when we were changing daylight savings time in California a few years ago.  We have a large array of older Cisco products which did not natively support the new daylight savings time so we were going to need to manually reconfigure about 500 network devices.  This would have taken several days to do.  Our integrator recommended we use SolarWinds to automate this process.  We downloaded and tried Network Configuration Manager (NCM) and were able to push a script out to all 500 devices in about 10 minutes.  This product changed my job!

JK: Aside from saving weeks of time in making configuration management changes, how else do you use Network Configuration Manager?
TD: Whenever we have a major project, we go out to bid and sometimes we have multiple firms helping us with our network.  Network Configuration Manager gives me peace of mind because if one of these firms makes a configuration change that results in loss of service, I can easily compare the configurations on all devices and pinpoint what happened in my environment, see who made the change, and resolve it quickly.

JK: What other SolarWinds products do you use?
TD: We use most of the products in the Orion suite – Network Performance Monitor, Server & Application Monitor, VoIP & Network Quality Manager, Integrated Virtual Infrastructure Monitor and NetFlow Traffic Analyzer.

JK: Why did you choose SolarWinds for your network and server performance monitoring needs?
TD:  Previously we used a product called InterMapper, but we needed to have better support for the number of network devices, applications and server resources we support.  We were so happy with NCM and we had seen online demos of Network Performance Monitor, and because of that good experience, we decided to buy a suite of SolarWinds products and we signed a 5 year maintenance agreement.

JK: What are the benefits of using Network Performance Monitor?
TD: We can manage our bandwidth - show the top talkers, where the traffic is going and we have visibility into how the network is performing.  With Orion’s flexible interface, this visibility can be shared across all teams.  For example, I have a customized Orion portal for me which looks at key network components, interfaces and utilization metrics; my co-worker who manages the server infrastructure has a custom view of server, application and virtual environment metrics, and we also have a custom view for the desktop support team.  This is very beneficial because the help desk does not always need to rely on me to understand what is going on in the environment.   They have visibility into network and server status and can often diagnose and remediate network issues without having to wait for me! It just allows us to provide better customer service.

Being proactive is also a huge time saver.  With so many aging facilities, one of our biggest challenges is power outages. Because of the proactive alerts, I’m often the first to see an outage and report it to Maintenance, so the problem is often fixed well before our teaching staff even starts their day.

JK: Being a K12 school district, you are probably faced with challenges of BYOD/BYOA.  How are you dealing with this challenge?
TD: The California Department of Education has a new mandate requiring all state testing to be performed online for all students starting in 2014.  This is one of the driving forces moving us away from thick textbooks and putting portable devices in the students’ hands with access to educational portals (textbooks, testing, etc.).

We are building out a large wireless network and we will need to monitor this environment to ensure there is adequate bandwidth, that the devices are up and running and healthy, and so on.  Right now our major challenge is figuring out how to support a wide array of devices and OS’s on the network, specifically the authentication of these devices to the network and to the various academic systems we support.



Get benefits like Antioch - try Network Configuration Manager or Server & Application Monitor - free for 30 days.

Remember when you turned on your computer for the first time and it was fast? (Even that wasn't fast enough for me.) Over time, most computers slow down. The reasons are usually software related in some way, shape, or form. The good news is you can get that speed back, and probably more!

Tips to keep your rig moving at warp speed:

  • If possible, turn the page file off. What's the page file? When a computer runs out of available memory (RAM), by default it uses free hard drive space to act as extra RAM. This is known as the page file, or swap file. RAM is strictly electronic and involves no moving parts, so having the page file enabled can slow your computer down because reading from and writing to the hard drive is far slower than accessing RAM. Needless to say, it also wastes hard drive space. I began disabling the page file back in ye olden days when 512MB came with my machine. Haven't had a problem since. With 16GB of RAM today, I feel pretty safe without the page file slowing things down.
  • Add RAM.  More RAM = More gooder (sic). RAM gives more breathing room to your programs, allowing them to operate more freely, and thus, faster.
  • Uninstall useless programs. Useless programs can start unnecessary processes and services in the background, wasting valuable resources. Tell them to hit the bricks!
  • Clear your Startup folder. Many programs like to start with Windows. They sometimes hide in the Startup folder in the Start menu. If you don't need anything starting with Windows, delete it.
  • Kill unnecessary services and processes. Once you've removed your useless programs and cleared your Startup folder, you still may have unnecessary processes and services running. I cannot tell you which ones you should terminate, but I will tell you how to do it:
    1. From the Start menu, type msconfig
    2. Navigate to both the Services and Startup tabs. For safety, you may want to check the Hide all Microsoft services box.
    3. From these tabs, determine which processes and services you can uncheck. By doing so, you are telling Windows not to run these programs once you reboot, thereby using fewer resources.
  • Optimize your internet connection. By default, your internet connection settings are not maximized. To make life easy, a free program called TCP Optimizer was created to achieve just this. There are many settings you can tinker with, or you can just use the program's recommendations. A great deal of good documentation is on their website for those of you who enjoy this type of stuff.
  • Get rid of the viruses and spyware. An obvious realization to be sure. I prefer Spybot - Search and Destroy and CCleaner.
  • Ensure your Antivirus solution has a small footprint. There are several AV programs out there that are resource hogs. I've removed them from my computers for just that reason. I prefer something that is lightweight as far as resources go. Do a little investigation and see which one is right for you. I prefer AVG and Avast. They're free, have a small footprint, and can be set as to not interfere with your surfing.
  • Defragment your hard drive and registry. Over time, data can become fragmented, meaning the way data is arranged on your hard drive is not optimal for reading and writing. Defragmenting ensures everything is where it should be and that hard drive reads and writes are optimal. Windows comes with a basic hard drive defragmenter, but offers nothing for the registry. Personally, I prefer the free defragmenting program from Auslogics over the Windows version.
  • Clean your registry. You'd be surprised how many junk registry entries your computer has. Last time I checked, I had over 3,000, all useless junk. Again, Auslogics offers a great free tool for just this purpose.
  • Update everything. Windows, and most other vendors, offer regular updates to their software. Getting the latest and greatest just may help speed things up.
  • Reboot once in a while. If you're like me, you keep your computer on 24/7. To a computer, a reboot once in a while is like you or me stepping out of the shower and putting on brand new clothes. It just feels nice and fresh.
  • Grab an internal Solid State Drive. Read about the benefits here.

You want more speed.

The aforementioned tips were designed to add a bit more pep to your computer. In fact, I use them all the time. Now you're thinking, "What more can I do? I still want more speed."


The next logical step would be to speed up your individual programs. Being a writer, you would think I would suggest reading the manuals for all of your software. (Actually that's a good idea.) I would also suggest navigating to the Settings or Preferences menu of your software and visiting each option provided. Usually, there are settings available to help make the software run more efficiently, or at least suit your needs. Take Server & Application Monitor (SAM) for instance. SAM is a great program with many settings and a large database attached. Here are some SAM optimization tips:

Let's get SolarWinds SAM up to speed.

Use these tips to help keep SAM happy:

  • Make sure to regularly re-index tables.
  • Try not to have the polling interval set to below 300 seconds on non-critical monitors.
  • Avoid using RAID 5 for your SQL Server. RAID 10 is recommended.
  • Make sure nightly database maintenance runs.
  • Do not increase the retention settings beyond what your server is capable of handling.
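The re-indexing tip above can be scripted instead of clicked through. A minimal sketch that composes the T-SQL maintenance statements; the table names are hypothetical, and `REORGANIZE` is noted as the lighter-weight alternative for mildly fragmented indexes:

```python
def build_reindex_sql(tables, rebuild=True):
    """Compose one index-maintenance statement per table.
    REBUILD recreates indexes from scratch; REORGANIZE (rebuild=False)
    is a lighter online option for mild fragmentation."""
    action = "REBUILD" if rebuild else "REORGANIZE"
    return [f"ALTER INDEX ALL ON [{t}] {action};" for t in tables]

# Table names are hypothetical examples, not actual SAM schema names.
for stmt in build_reindex_sql(["Nodes", "Traps"]):
    print(stmt)
```

Statements like these can be scheduled as a SQL Server Agent job alongside the nightly database maintenance mentioned above.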

Hopefully you'll incorporate as many of these tips as possible. Remember, if we wanted slow computers, we'd all still be running a 386. Yikes!
