As networks grow, manual network configuration and change management (NCCM) becomes extremely cumbersome and can cause costly network downtime.

Network administrators using manual processes can experience:

  • Increased time and effort to manage the growing number of devices
  • Human errors causing network outage and loss of productivity
  • Longer time to resolution of configuration issues
  • Incomplete or no information on device end-of-life (EoL)

All these management inefficiencies ultimately eat into your organization’s budget.

Does automating NCCM help improve network management ROI?

NCCM solutions are packed with features that support network visibility, accurate execution of configuration tasks, standardized configuration processes, rollback and remediation options, and more. Learn from real-world examples how NCCM automation can reduce effort and improve your network management ROI. View this SlideShare for a detailed understanding of ROI for NCCM solutions.

Download ROI for NCM Solutions in Enterprise Environments and read what industry experts say about improving network configuration management ROI.


Whenever there is an issue in your Windows® environment, one of the first places to troubleshoot is the Windows event logs. Issues are bound to arise, and they get logged whether they are caused by an internal or an external event. On the bright side, today you have server monitoring software that lets you browse through all the events in your event logs, as well as filter for specific events in a particular log.


It’s important to monitor Windows Event Logs because they indicate where the real issue lies. Here are a few examples of potential issues that get logged as events in your Windows environment.

  • Application Exceptions: Application exceptions are logged into application logs in Windows. They occur when an alert or an issue is raised while running the applications.
  • User Lockouts: When users have multiple incorrect password entries, they get locked out of their system. Both the password attempts and the lock-outs get logged as individual events.
  • Failed Backups: This usually happens when the server has reached its maximum limit or if you don’t have folder rights to back up your information. All these get logged as events.
  • Abruptly Failed Processes/Services: When a process or service demands more resources than the system can provide, it may stop responding or terminate abruptly, sometimes freezing the system. These failures are also logged as events.

Real-Time Event Log Viewer: Monitor Event Logs from One Place

Windows generates hundreds or even thousands of logs for a single server each day. It's not practical to visit each server and comb through its logs to determine what's causing an application to fail over and over again. A busy sysadmin needs something that monitors event logs proactively and presents them in an organized manner. Server monitoring software comes with a real-time event log viewer that gives sysadmins the flexibility to monitor any system's event logs remotely, from any location. In addition, a real-time event log viewer gives sysadmins the following benefits:

  • Lets you choose the type of log files you want to monitor—security logs, system logs, etc. After selecting the log files, you can filter based on sources and look at logs by event log message, event log ID, severity of the event, date and time of event occurrence, the computer or user that generated the event, etc.
  • Allows you to troubleshoot problems as they occur in your environment.
  • Connects to the server and starts collecting Windows event logs from a specific host—both current and historical logs.
  • Creates custom monitors for events that occur periodically.
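The kind of filtering described above can be sketched in a few lines. This is a minimal, platform-independent illustration; the record fields and sample events are invented for the example (4740 is the Security-log event ID Windows uses for account lockouts):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EventRecord:
    log: str          # e.g. "Security", "Application", "System"
    source: str       # provider that generated the event
    event_id: int
    severity: str     # "Information", "Warning", "Error"
    timestamp: datetime
    message: str

def filter_events(records, log=None, source=None, event_id=None, severity=None):
    """Return the records matching every criterion that was supplied."""
    def match(r):
        return ((log is None or r.log == log) and
                (source is None or r.source == source) and
                (event_id is None or r.event_id == event_id) and
                (severity is None or r.severity == severity))
    return [r for r in records if match(r)]

# Invented sample data for demonstration.
events = [
    EventRecord("Security", "Microsoft-Windows-Security-Auditing", 4740,
                "Information", datetime(2013, 10, 28, 9, 15),
                "A user account was locked out."),
    EventRecord("Application", "MyApp", 1000, "Error",
                datetime(2013, 10, 28, 9, 20),
                "Unhandled application exception."),
]

lockouts = filter_events(events, log="Security", event_id=4740)
print(len(lockouts))  # 1
```

A real event log viewer applies the same idea continuously against the live log stream instead of a static list.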


Finally, monitoring Windows event logs through server monitoring software spares you from relying on the built-in Event Viewer, which differs from one version of the Windows operating system to the next and logs events according to that version.

Cyber-attacks have become very common in today's technology world. According to Akamai's Q2 2013 State of the Internet Report, cyber-attack traffic during the second quarter of 2013 fluctuated across the globe, but places like Indonesia and China saw significant increases: 38% and 33%, respectively. Given the continuous rise in cyber-attacks, it is time to revisit the factors involved in cyber security. We often measure the effectiveness of our cyber security by our ability to detect attacks and, most importantly, how we respond to them.


Firewalls are your first line of defense, so it's important to manage your security policies in a way that ensures compliance and reduces risk. A typical network environment contains firewalls from multiple vendors, which creates a need for strong firewall security management. Here are some guidelines for isolating and validating policy changes to lessen the threat.


Best practices for isolating and validating firewall policy changes to close the threat:


  • Isolate the packet information from the threat signature: Isolate the address of the source of the threat, the targeted destination address, and the service ports present in the attack. By identifying this information, the user can block the attack from the source.


  • Import firewalls into your firewall management tool inventory: Import the configurations of the firewalls to be analyzed into your firewall tool inventory.


  • Run Object Query: Using the packet information from step 1, run Object Query across the firewalls in the firewall management tool inventory to quickly find objects that represent the source, destination addresses, and service elements of the packet.


  • Run ACL Rule Query: Using the packet information from step 1, run ACL Rule query across the firewalls to find ACL rules that allow or deny the packet.


  • Run Policy Query: If the destination is an internal private (RFC 1918) address, use the post-NAT data flow query to find all the firewalls in the inventory that allow the given packet. This identifies all ACL, NAT, VPN, and routing rules within each firewall that allow the given packet to pass through.


  • Run Rule Dependency Report: For the firewalls that require intervention, run the Firewall Cleanup and Optimization report to identify the rule-order dependencies that exist between the rules in each rule set.


  • Run Policy Difference Report on the new configuration: Once the changes are complete, run the policy difference report using the original and the modified configurations to make sure that only the packet(s) related to the threat are being blocked.
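The Object Query and ACL Rule Query steps boil down to matching the isolated packet information against each firewall's rule base. Here is a simplified sketch with invented rule and packet values; real firewall rule bases also involve zones, protocols, and NAT, which are omitted here:

```python
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class AclRule:
    name: str
    action: str             # "permit" or "deny"
    src: str                # source network in CIDR notation
    dst: str                # destination network in CIDR notation
    dport: Optional[int]    # destination port; None means any port

def matching_rules(rules, src_ip, dst_ip, dport):
    """Return every rule whose source, destination, and port cover the
    packet, in rule order (on most firewalls the first match wins)."""
    s = ipaddress.ip_address(src_ip)
    d = ipaddress.ip_address(dst_ip)
    return [r for r in rules
            if s in ipaddress.ip_network(r.src)
            and d in ipaddress.ip_network(r.dst)
            and (r.dport is None or r.dport == dport)]

# Invented rule base for demonstration.
rules = [
    AclRule("allow-web", "permit", "0.0.0.0/0", "10.0.0.0/24", 80),
    AclRule("deny-all",  "deny",   "0.0.0.0/0", "0.0.0.0/0",  None),
]

# A packet from the threat source to an internal web server:
hits = matching_rules(rules, "203.0.113.7", "10.0.0.5", 80)
print([r.name for r in hits])  # ['allow-web', 'deny-all']; first match wins
```

Running this kind of query across every firewall in the inventory is what tells you which rule is letting the threat traffic through on which device.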

A proper and timely response to an incident requires understanding the complexities of the firewall configurations in the context of your network, identifying the specific changes that will prevent it from recurring, and ensuring that no undesired effects occur due to the change.

Security is a shared responsibility and we all need to do our part. Stay secure, folks!

To dig deep into your organization's network behavior and the movement of traffic, you need to understand the NetFlow data from your routers. Enabling NetFlow on your routing and switching devices allows you to collect traffic statistics from those devices. When traffic passes through the interfaces of a NetFlow-enabled device, relevant information about the IP conversation is captured and stored in the NetFlow cache.


Quick Recap – What is NetFlow?

NetFlow is a network protocol developed by Cisco® Systems for collecting IP traffic information. Over time, it has become the universally accepted standard for traffic monitoring. NetFlow answers the questions of who (users) and what (applications) are using the network, and how bandwidth is being consumed.


What is NetFlow v5?

NetFlow v5, or traditional NetFlow, is the most widely used version and supports AS reporting and a few additional fields. The packet format is fixed, making it easy to decipher for most NetFlow collection and network traffic reporting packages.
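Because the v5 packet format is fixed, its 24-byte header can be decoded with a single struct unpack. A minimal sketch follows; the field layout reflects Cisco's published v5 header, and the sample values are invented:

```python
import struct

# NetFlow v5 header: 24 bytes, network (big-endian) byte order.
# version, count, SysUptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(packet):
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(packet)
    return {"version": version, "count": count, "sys_uptime_ms": sys_uptime,
            "unix_secs": unix_secs, "flow_sequence": flow_sequence}

# Build a sample header for demonstration: version 5, carrying 2 flow records.
sample = V5_HEADER.pack(5, 2, 123456, 1383000000, 0, 42, 0, 0, 0)
print(parse_v5_header(sample))
```

The fixed-format flow records that follow the header are decoded the same way, which is why nearly every collector supports v5 out of the box.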


What is NetFlow v9?

Version 9 is the Flexible NetFlow technology. The distinguishing feature of the NetFlow Version 9 format is that it's template-based. Templates provide flexible flow export with user-defined key and non-key fields, making it possible to monitor a wide range of IP packet information that traditional NetFlow cannot.
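To see what template-based means in practice, here is a hedged sketch that decodes a v9 template flowset body: the template advertises (field type, field length) pairs that describe the data records to follow. Field types 8 and 12 are the IPv4 source and destination address types from the v9 field-type table; the template ID and sample values are invented:

```python
import struct

def parse_template_flowset(data):
    """Parse a v9 template flowset body: template ID, field count, then
    (field_type, field_length) pairs, all 16-bit big-endian values."""
    template_id, field_count = struct.unpack_from("!HH", data, 0)
    fields = [struct.unpack_from("!HH", data, 4 + 4 * i)
              for i in range(field_count)]
    return template_id, fields

# Sample template 256 with two fields:
# IPV4_SRC_ADDR (type 8) and IPV4_DST_ADDR (type 12), 4 bytes each.
body = struct.pack("!HHHHHH", 256, 2, 8, 4, 12, 4)
tid, fields = parse_template_flowset(body)
print(tid, fields)  # 256 [(8, 4), (12, 4)]
```

A v9 collector caches each template it receives and uses it to interpret later data flowsets that reference the same template ID, which is what lets the exporter choose its own mix of fields.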


The SlideShare below gives you more details on how to configure NetFlow v5 and v9 on Cisco routers.




Once NetFlow is configured on the routers, the NetFlow packets are sent to the designated server or collector. Having a tool in place that collects all NetFlow packets and presents them in an easy to understand, comprehensive view helps you effectively manage your bandwidth. Efficient network operation lowers costs and drives higher business revenues through better utilization of the network infrastructure.


Learn More

Whitepaper – NetFlow Tips and Tricks

Seven Priorities of Network Management by EMA

Geek’s Guide to the NetFlow v9 Datagram

NetFlow Lab – Thwack Community

As IT folks, we all know the importance of keeping a network management system (NMS) available at all times. Without the NMS, we would not be able to diagnose and triage network issues and keep the network up and working well for the business. Although an NMS is not a network device, it is as important as the rest of the network infrastructure. Because an NMS is so critical for network administration teams in achieving better process automation and operational efficiency, we need to understand the implementation of network monitoring software really well to leverage its benefits and support a growing network.


We know the enterprise network is going to scale; it's only a question of time before it starts expanding. You need to prepare your NMS to support that growth. So, let's look at network growth before diving into what impacts NMS scalability.

Cisco® projects that by 2017, around 3.6 billion people will be online, representing 48% of the world's projected population. This is a startling number. With so many people online, organizational networks must keep pace, which means increased IT budgets, more network infrastructure, and more IT staff. Network growth can come from increased business demand and dependency on the internet: higher bandwidth, support for more users, more storage, expansion into more geographical regions, etc.


To deal with this rapid network expansion, you must understand various NMS deployment and scalability options to ensure your NMS scales along with your network growth. There are three primary variables that impact NMS scalability.


#1 Number of Monitored Network Elements

Obviously, network growth means adding more network elements. For the NMS, a network element is a single, identifiable node, interface, or volume. NMS licensing is typically based on the number of elements you monitor.


#2 Polling Frequency

This defines the interval at which the NMS polls the network devices for information. For example, if you collect statistics every few minutes, the system has to work harder and system requirements increase.
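As a rough illustration of why shorter intervals raise system requirements, polling load grows linearly with the number of elements and inversely with the polling interval. The element counts and intervals below are hypothetical:

```python
def polls_per_second(elements, interval_seconds):
    """Average polling operations per second the NMS must sustain."""
    return elements / interval_seconds

# Hypothetical deployment: 5,000 monitored elements.
for interval in (600, 300, 60):  # 10-minute, 5-minute, 1-minute polling
    print(interval, round(polls_per_second(5000, interval), 1))
```

Moving the same 5,000 elements from 10-minute to 1-minute polling multiplies the sustained polling rate tenfold, which is why polling frequency is a scalability lever in its own right.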


#3 Number of Simultaneous Users Accessing the NMS

An NMS is a centralized tool for network performance monitoring used by IT staff for different requirements. Because it is accessed by different IT staff from different network locations, the NMS should support enough concurrent Web sessions to meet increasing end-user demand.


By controlling these three levers, you will be able to ensure the NMS can scale according to the needs of the network. SolarWinds offers a lot of scalability options so you can affordably extend your Network Performance Monitor, Server & Application Monitor, or other Orion platform modules to meet your growing network requirements.


Register for this FREE webinar to learn more about how SolarWinds network management and application and server management software seamlessly scales to support your growing network infrastructure.


Webinar: "SolarWinds Scalability for the Enterprise"


When: Wednesday, November 6, 2013 10:00 AM - 11:00 AM CST





The webinar will cover:

  • Growing a single SolarWinds instance to support larger or distributed networks as well as additional users
  • Monitoring geographically distributed environments with a single SolarWinds instance
  • Monitoring geographically distributed environments with multiple SolarWinds instances
  • Consolidating multiple SolarWinds instances in a single view
  • Creating high availability and fault tolerant monitoring environments


Hear from our product experts how SolarWinds can easily and affordably support scalability requirements for your enterprise network!

Device end-of-life (EoL) management is a sizable activity that takes up a substantial amount of every network administrator's time, especially for those still using spreadsheets or other manual methods to record and maintain device data. Given the enormous effort involved and the scarcity of resources, organizations often fail to proactively maintain EoL data.


Why is EoL Management Difficult?

  • Effort involved in maintaining multi-vendor device information in one place
  • Need to constantly watch out for product announcements from the vendor
  • Frequent replacements and induction of new devices in the network

Every organization relies on a wide variety of network devices, and when a critical device that is out of its support contract fails, we are talking serious network downtime! Most manufacturers do not provide support after EoL, and organizations that don't track this information are at significant risk. Apart from business disruption and diminished productivity, there are also excessive support costs involved.

How can network administrators get rid of such predicaments and efficiently manage EoL data?

Network administrators need to adopt a mechanism that helps systematically identify and notify on devices reaching EoL. An EoL management tool easily serves the purpose of managing, tracking and verifying device EoL data.


Networks are made up of various devices from multiple vendors. Collating all device information in one place provides a convenient platform to plan for renewals much in advance. As an added advantage, integration between your network discovery, device profile manager and EoL process helps you correlate information and enable quick searches to take informed decisions.

Eliminate EoL mess-ups, get rid of time-consuming manual processes, and save time and money for your organization!

View this fun video and check out the EoL feature available in SolarWinds Network Configuration Manager (NCM)!


Visualizing Your Network

Posted by docwhite Oct 28, 2013

Every IT team scales its organization and practices to fit an evolving set of interrelated business, network, user, and security requirements. A team of any size must have tools that facilitate the basic work of deploying network devices (switches, routers, physical and virtual servers, desktop and laptop computers, IP phones and smartphones) and managing all of those devices, as well as the users who depend on the network for the work they do for their business.


Monitoring nodes, users, applications, and the different kinds of network traffic, and addressing issues that impact performance and security are twin daily challenges. And a good alerting system is the key to efficiently relating monitoring to management.


The Power of Visualized Information

Trends in graphs and percentages in charts effectively tell the story of what's happening with the different aspects of your network at the interface, node, and traffic level. Send the results of your measures to reports and your team gets a snapshot of daily, weekly, and monthly behavior. Policy adjustments, configuration changes, and capacity planning all depend on the metrics from which your monitoring systems generate their graphical information. Without these statistics, an IT team's anticipation and planning would be trapped on the edge of impending crisis; planning would amount to guesses occasioned by crises.


Seeing the Whole through the Particular

The most powerful view of your network is the one that shows how particular nodes are connected to each other. A set of alerts tells you what the team needs to triage; those same alerts, distributed as signals on a topology map, show how the pattern of alerts indicates, for example, that a particular switch sits in the path of all the nodes currently sending alerts. Triage becomes much more finely focused when you can see how impacted nodes are interconnected.


Among the requirements for a mapping tool that integrates with the other pieces of your monitoring system should be these:


  • Provides accurate, deep, and maintainable network discovery, using multiple discovery methods (SNMP, ICMP, WMI, CDP, VMware) to map all types of devices (switches, routers, servers, VMs, unmanaged nodes, desktop computers, peripheral devices) and their interconnections; and using scheduled rediscovery to regularly reconfirm topology details.
  • Enhances node management (by integrating with the primary node monitoring system), creating a visual analogue for all nodes being monitored; showing node details (including load stats) with rollover graphics down to the interface level; and capable of generating reports on switch ports, VLANs, subnets, and device inventory.
  • Facilitates IT monitoring, planning, and troubleshooting workflows by exporting maps to multiple formats (for example, Visio, PNG, PDF, and NTM map format).


Check out SolarWinds Network Topology Mapper, a mapping tool that satisfies all of these requirements.

Transmitting sensitive files online is becoming riskier every day. Your sensitive corporate data can easily be accessed by third parties, especially when file transfers happen over insecure channels or when data is stored in the public cloud. The most common methods, email and FTP, are vulnerable to attack because:

• Files are sent without encryption

• There is no control over the transferred files

• Advanced options such as session auditing are lacking


To ensure the security of your files during transfer, you can send them via Secure File Transfer Protocol (SFTP), a method that requires an SSH secure file transfer client. SFTP is secure because it encrypts the file transfer connection and provides stronger authentication. Typically, in an SSH file transfer, data is encrypted throughout the SSH connection and decrypted at both ends (server as well as client). Though these are among the most widely used protocols for securing data during transfer, uncertainties remain over data security when it comes to the operating medium and data storage.


The cloud is a common means of file transfer, and many organizations adopt it as the default. But the million-dollar question is: to cloud or not to cloud?



Why not to cloud your File Transfer?


Recent research revealed that about 48% of the surveyed organizations in the UK were using cloud-based file hosting services. Some major revelations were:

• For 74% of the surveyed organizations, the major concern with sharing files outside of the corporate firewall was malware infection.

• For 54%, the major concern was information theft.


Fueling their concerns further, cloud-based file transfer has been in the spotlight for quite some time and so much has been spoken about its security vulnerabilities, especially when it comes to sensitive customer data.


Consider a solution like Dropbox: it made headlines last year when a number of Dropbox usernames and passwords were compromised, resulting in a spam campaign against many Dropbox users. Similarly, a 2011 breach exposed hundreds of accounts without proper authentication. Adding to users' woes, the Dropbox service as a whole was unavailable earlier this year.


How secure are cloud based file transfer solutions? Not having your data secure can be costly because breaches can have a direct impact on your business reputation and customers’ trust level. The more sensitive and confidential data you handle, the more secure you need to be. Cloud-based solutions are not always able to accommodate your security requirements. Fortunately, that is not the only option you have.


Self-hosted managed file transfer (MFT) is a much more secure alternative as it provides certificate-based authentication.


Here are some criteria that you can use as guidelines to choose the right managed file transfer solution:

• Does it provide security for data that’s both at rest and in motion?

• Does it provide certificate-based authentication?

• Does it allow monitoring file transfer progress in real time?

• Does it report transfer activity and access?

• Does it provide internal resource protection?

• What B2B/E-Commerce protocols does it support?


You cannot afford to lose control of your data. Secure your files and stay safe!



In June 2013, SolarWinds released version 10 of DameWare Remote Support which includes an innovative new mobile remote desktop tool called DameWare Mobile. DameWare Mobile allows system administrators and help desk pros to support end-users and computers from mobile devices using DameWare’s award-winning remote control software, Mini Remote Control. Originally released only for iPhone and iPad, SolarWinds is happy to announce the addition of Android to DameWare Mobile. 

Now from an iPhone, iPad, Android smartphone, or Android tablet, you can remotely access and control computers on your network from anywhere.  This tool is great for IT pros that participate in on-call rotations. Instead of being stuck at home on nights and weekends, IT pros can now take back the time they lose to on-call rotations and live a little!


DameWare Mobile consists of two components – the DameWare Mobile Client for iOS or Android and the DameWare Mobile Gateway (DMG).  The Mobile Client for iOS and Android can be downloaded directly from the App Store and Google Play and installed on a mobile device or tablet.  The DMG is included with every DRS v10 download.  The DMG must be installed and configured to allow connections from mobile clients to Windows computers inside your firewall.  Instructions for configuring the DMG can be found here.


As with every DameWare product, DRS v10 with DameWare Mobile is available for a fully-functional 14-day free trial.

If you are considering Cisco® Prime Infrastructure for network management, here are some principal reasons to try SolarWinds Network Performance Monitor (NPM).


SolarWinds NPM can reduce your TCO and increase ROI


Additionally, SolarWinds NPM is

#1 Easy to Deploy, Easy to Use, and Improves Operational Efficiency

  • Intuitive Web Interface
  • Customizable Dashboards to focus on critical devices or issues
  • Drill-down performance and availability views
  • Reduced time spent on troubleshooting and analysis


#2 Built on a scalable and modular architecture

  • Suited for networks of all sizes – small, medium and large organizations
  • Integrates with other SolarWinds products seamlessly
  • NPM can support any number of devices on your growing network


#3 Built by IT Pros, for IT Pros

  • Thoughtfully built based on the needs of IT community
  • Customer and community driven product enhancement


Detailed information is available in the SlideShare below:

Comparison of SolarWinds® NPM and Cisco® Prime Infrastructure from SolarWinds

Now, you can Save up to 67%(1) over Cisco Prime Infrastructure with SolarWinds NPM

(1) Estimated cost savings for 1000 nodes using SolarWinds Network Performance Monitor SLX vs. Cisco Prime Lifecycle (part # R-PI12-Base-K9 & L-PI12-LF-1K) retail unit price


Learn more about SolarWinds NPM

NPM features and functionality

NPM Interactive Demo

Fully Functional 30-Day FREE trial of SolarWinds NPM

If you've been using SolarWinds products for any length of time, you're no doubt familiar with Network Performance Monitor. Hopefully, you're running it right now. If not, go demo it now. Yeah, that's right: demo it now, download a free trial, or just cut to the chase and buy it already.


There, that feels better...


RTFM: Read The (Free) Manual...or Don't

You've got NPM. You may have even gotten it up and running without so much as clicking open a .pdf. Excellent. We've been working hard to make that increasingly possible. Now what?


Typically, traditionally, in an ideal world, you'd sit down with a nice beverage and tuck into the NPM Administrator's Guide to see just what NPM can do. Problem is, it's not an ideal world; you do IT, not history or interior design, so traditional doesn't mean much to you; and you're probably quite atypical, at least in all the typical ways. No matter how brilliant its prose, you just don't have the time to sit down and read the whole NPM docset, or even the Admin Guide in all its glory, in one shot. It's simply not going to happen. You need quick answers; we have them.


Alternatives to Traditional NPM Documentation

As both thwack and our user base have grown, we've collected and generated quite a bit of additional information to help you with your NPM installation. In the past, these documents have lived in thwack, somewhat separate from our more conventional NPM Documentation page.


They are separate no longer.


As of today, you should be able to find links to a number of additional resources, including videos ("NPM Videos"), KB articles ("Popular & Most Recent Knowledge Base"), and more, directly from the Using NPM and Additional Resources sections of the SolarWinds NPM Documentation page. Check out these videos and articles, and let me know in the comments how we could improve the information we have. Do you want more videos? What kinds of examples would be helpful to have available? Don't be shy...


BTW: We should be updating most SolarWinds [Product] Documentation pages in a similar manner in the near future. Look for updates to your favorite product soon.

As more and more organizations plan to move their data to the cloud, apprehensions about vendor lock-in loom large. What if, after spending thousands of dollars moving corporate data to the cloud, the vendor's technology cannot support expansion or transition to another vendor? Vendor independence has not yet been achieved in cloud computing. As enterprises look for robust security options on both public and private clouds, it is no less imperative to assess the risk of vendor lock-in before taking the bold step toward the cloud.



Let’s look at some key factors to consider in the planning phase of cloud adoption that will help assess the risks involved in vendor lock-in, and equip us to be better prepared.


#1 Study and understand the nitty-gritty of the cloud agreement

Ultimately, it's the legally binding SLA that defines the cloud agreement between the procuring party and the cloud provider. Understanding the fine print and weighing it against the enterprise's cloud adoption plan, both short-term and long-term, will help uncover any hidden constraints or ambiguous points that could later lead to a lock-in scenario. Federal agency IT teams, especially, are doubly careful and cautious in signing cloud agreements with vendors.


In a post by TechNewsWorld, IDC Government Insights recommends vendors take the following actions.

  • Assume responsibility for meeting SLA requirements
  • Be knowledgeable about the government client's business
  • Provide money back or other credits for inadequate performance
  • Consider third-party verification of compliance with certain SLAs to ensure unbiased analysis.


Private organizations and federal agencies must be aware of these points and discuss with the provider and decide in advance on the conditions before signing the deal.


#2 Check if the cloud provider is equipped with data backup plans

In the event of a contingency, the cloud provider may be unable to secure the customer's data, leading to data loss. Cloud customers must ask vendors about data backup and disaster recovery plans, and understand the procedures well before entering a cloud agreement.


#3 Analyze cloud provider’s data migration plan

To prevent data lock-in should the vendor decide to move its cloud infrastructure, cloud procurement agencies must check what migration tools and portability services the vendor offers.


Even if the customer wants to migrate from one cloud vendor to another, assessing the vendor's migration plan will help. There may be cases where the cloud data has been restructured for compatibility with the vendor's infrastructure, and migration to another vendor may require rolling the data back to its original state. Check whether the cloud service provider is prepared for these scenarios.


#4 Assess if your cloud provider is financially supported

This becomes important if the provider's business falters and it ends up selling off assets. It is only wise to assess how well the provider is funded and capitalized.


At the end of the day, for the customer, cloud is all about security, functionality, cost, and performance. Make sure these attributes justify the spend and ROI, choose the right vendor, and make your move to the cloud. If you plan to host data on a private cloud, consider employing private cloud monitoring and management tools to ensure performance and scalability.

Last week I had the opportunity to spend another few days in New York City. I’m a big fan of the city, especially at night, and NYC is definitely the place to absorb that experience. But it wasn’t all touristy; in fact, I spent most of my time at the Interop New York trade show, which was held in the Javits Convention Center in Midtown West.

The show covered five days and included several co-located events: the Mac & iOS IT Conference, the InformationWeek CIO Summit, and LightReading's Ethernet & SDN Expo. As a result of the convergence of all four of these events, quite a cross-section of IT pros was present. I talked with CIOs, VPs, VCs, IT directors and managers, and network and system administrators working in the trenches every day (and even on those four days, courtesy of our 21st-century remote access technologies).

As perhaps expected, a significant theme of the show was the growing interest around Software Defined Networking (SDN) and Software Defined Data Centers (SDDC). SDN and SDDC are all about “the cloud” and methodologies for managing the cloud, but thoughts about the cloud still have notable polarization: Why does IT fear the cloud?

In the expo hall, the two most notable groups of vendors I saw were those who build products for network monitoring (including SolarWinds, of course), and vendors selling refurbished Cisco hardware.

The coolest thing I saw at the show was a “big screen” touch screen by nuPSYS. They build technologies for NOCs, including a multi-touch plate that sits on top of existing screens, effectively converting the display into a multi-touch display. This big screen consists of 40 individual rear-projection display units, synchronized together, with a touch screen mounted on the front of each projector box. In the photo you can see the first 12 panels removed.

For those who are attracted to keynote presentations at these events, you can find the entire Interop archive online, but Interop New York offered a couple of very interesting presentations: John Chambers, Cisco's Chairman and CEO, talked about networking and "the Internet of everything", and William Murphy, CTO of The Blackstone Group, revisited the ongoing question "Is IT Irrelevant?". Recently on Thwack, we had a similar conversation about The Future of IT Jobs. Where is your IT job headed?

For me, half of my duties involved setting up and tearing down the tech equipment for the SolarWinds booth, which included a humbling lesson about remembering to plug in the network cables before freaking out about why the network doesn't work right. The other half, the half I enjoy the most, was hanging out in the booth and talking to customers and not-yet-customers about SolarWinds products and technology in general. One thing unique about Interop, compared to other shows such as TechNet, Cisco Live!, or VMworld, is the higher number of executives and affiliated professionals (such as venture capitalists) in attendance, and while they're not the primary people we generally talk to, it's always interesting to have conversations about our products with them.

All in all, it was a great week in New York, and I'm looking forward to the next opportunity to come out and mingle with the people who are doing the real work in IT.

Week 4 of NCSAM, and it's time for some awareness on cybercrime!


With the continuous increase in cybercrime, nearly every enterprise is affected in some way. PricewaterhouseCoopers (PwC) recently conducted a survey on cybercrime in the US, which revealed the following:

  • Organizations are not well-informed about the kind of cyber-attacks and threats they are prone to.
  • Organizations are unknowingly increasing their vulnerabilities due to their increased social media collaboration.
  • Identifying IT assets that are vulnerable to security risks is becoming more complex.


Also, the costs resulting from cybercrime in 2013 are higher than ever. The Ponemon Institute's annual study of American companies found that the annualized cost of cybercrime averages about $11.56 million per company, a 26% increase over the average cost in 2012. The study also observed that 55% of all attacks were Denial of Service (DoS) attacks, internal information theft, and other Web-based attacks.


Know what’s happening on your network

If you want to shield your network against cyber threats, the first thing you need to know is what’s happening on your network. There is always a hunt for sensitive and personal information like credit card and social security numbers and patient records. Therefore, it is important to guard your network against various kinds of attacks like viruses, Trojans, malware, botnets, Web-based attacks, DoS attacks, malicious codes, phishing attempts, and many more.


The purpose of having an IT security workforce in place is to be proactive about security issues. Here are a few steps you can take to reduce the potential risks of cyber-attacks:

  • Monitor the systems and devices in your network. By doing this, you can create a baseline for your network behavior and identify anomalies. SIEM tools can help by collecting and correlating logs from the various devices in your network and providing you with actionable intelligence.
  • Ensure that all your systems are updated with the latest patches. Patch management plays a key role in managing vulnerability.
  • Manage your firewalls with appropriate rules and filters and prevent unauthorized configuration changes.
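The baselining idea in the first step above can be sketched in a few lines of Python. This is a minimal illustration, not a feature of any SIEM product: given hourly event counts gathered from your logs, it flags hours that deviate sharply from the baseline. The 2.5-sigma threshold and the sample counts are assumptions you would tune for your own environment.

```python
from statistics import mean, stdev

def find_anomalies(hourly_counts, threshold=2.5):
    """Flag hours whose event count deviates from the baseline
    (mean of all hours) by more than `threshold` standard deviations."""
    baseline = mean(hourly_counts)
    spread = stdev(hourly_counts)
    return [
        (hour, count)
        for hour, count in enumerate(hourly_counts)
        if spread and abs(count - baseline) / spread > threshold
    ]

# A typical day of roughly 100 failed-login events per hour, with one spike.
counts = [98, 102, 97, 101, 99, 100, 103, 950, 96, 100]
print(find_anomalies(counts))  # → [(7, 950)]
```

A real SIEM does this continuously across correlated log streams from many devices; the point here is only that a baseline turns raw event volume into actionable anomalies.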


For more on potential risks within your corporate network, check out the Cybercrime Section on the NCSAM page!!


Comparing Q2 2012 to Q2 2013, there was a 33% increase in the number of cyber-attacks. And based on the number of DoS attacks in the last two years, data-center operator and cloud services provider IPC predicts an even greater increase in cyber-attacks in 2014.


It is always better to take precautionary measures against cyber-attacks than to repair the damage afterward. Are you properly securing your network today?

“Over 52% of IT infrastructure pros other than networking cited IP address provisioning as being a task that took the most time to accomplish.” - Enterprise Management Associates® (EMA)

Too often, IPAM solutions have been considered optional rather than business critical. Most companies continue to make do with spreadsheets. However, this view is now shifting from manual toward automated IP management, creating the need for dynamic, centralized solutions.

Why do organizations look at IPAM solutions?

With the rapid growth in the number of IP-based devices on the network, the manual approach to managing IP addresses proves insufficient and unavoidably reduces operational productivity. IP address management through spreadsheets is slow, inefficient, and highly error-prone. Administrators often end up with a fragmented database full of orphaned IP addresses and incomplete information.
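To make the spreadsheet problem concrete, here is a small Python sketch (illustrative only, not part of any IPAM product) that audits a spreadsheet-style export of IP assignments for the most common defect: the same address handed out to two hosts. The row format and host names are assumptions.

```python
from collections import Counter

def find_conflicts(rows):
    """Return IP addresses assigned to more than one hostname
    in a list of (ip, hostname) rows exported from a spreadsheet."""
    counts = Counter(ip for ip, _host in rows)
    return sorted(ip for ip, n in counts.items() if n > 1)

rows = [
    ("10.0.0.5", "web01"),
    ("10.0.0.6", "db01"),
    ("10.0.0.5", "printer-3"),  # same IP handed out twice
]
print(find_conflicts(rows))  # → ['10.0.0.5']
```

An automated IPAM system performs this kind of reconciliation continuously against live DHCP and DNS data, rather than against a stale export.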

There are several evident reasons to invest in an IPAM solution for automated IP management.


ROI for IPAM Solutions?

View this SlideShare to understand in detail the advantages and ROI realization of IPAM, based on three case studies in which organizations benefited from migrating to an automated IPAM solution.


Download a FREE trial of SolarWinds® IPAM and see if you’d want to go back to spreadsheets again!

When enterprises scale up, network management becomes integral, as the majority of business processes and critical applications run on enterprise networks. This is when administrators and engineers face a diverse set of issues, most of them the result of a lack of transparency into new devices added to the network. To simplify network management and achieve more transparency, keep an eye on three important factors: Discover, Map, and Monitor.


Why are these factors important?

By discovering all the devices in your network, you gather information that gives you a glimpse of the devices you will monitor and the resources connected to them. Using a network management tool, you can scan the network and find device details like status, machine type, description, location, etc. Once you discover the devices on the network, you can organize them into logical groups. Network mapping, on the other hand, provides a schematic representation of those devices and the relationships between them. Creating a network map helps you understand the relationships and dependencies among your network devices. Together, these two factors help administrators monitor a large number of network devices with more transparency and visibility.


Network Management Mantra – Discover, Map & Monitor

Better network management can only be achieved by following best practices in network discovery, network mapping, and monitoring.

    • Discover
      • Network discovery while scaling up is cumbersome when the network environment constantly changes, and manually adding each device is impractical. Instead, opt for automatic discovery via scheduled periodic network scans that notify administrators whenever a device is added to the network. You can then choose to monitor newly discovered devices or ignore them.
    • Map
      • If you have a distributed network, it is important to organize the discovered network devices into logical groups (e.g., location-based grouping). This helps you find where problems have occurred. A network map fulfills the need for a graphical representation of your network from a centralized location, providing a visual guide to its operational status. Use maps to quickly isolate issues and analyze the scope and impact of a problem.
    • Monitor
      • With the help of network discovery and pictorial maps, you can ascertain network availability by monitoring all devices through ICMP and SNMP polling. This lets you collect a range of metrics on network health and activity. Using a network management tool, you can store historical data and compare it for trends and abnormal activity. Abnormalities are recorded as events, and if they exceed threshold limits, administrators are automatically notified via email, text, SNMP traps, syslog messages, etc.
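The threshold-alerting step above reduces to a simple comparison applied on every polling cycle. The sketch below is plain Python with made-up metric names and limits, not the logic of any particular monitoring product:

```python
def check_thresholds(metrics, thresholds):
    """Return the names of polled metrics that exceed their
    configured threshold and therefore need an alert."""
    return [
        name
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    ]

# One polling cycle for a single device (values are hypothetical).
polled = {"cpu_pct": 97, "response_ms": 40, "packet_loss_pct": 0}
limits = {"cpu_pct": 90, "response_ms": 200, "packet_loss_pct": 5}
print(check_thresholds(polled, limits))  # → ['cpu_pct']
```

A real NMS would feed this from ICMP/SNMP poll results and fan the output into email, text, or SNMP trap notifications.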


These steps will ease your network management and make sure your network isn't adversely affected as you grow. Use a network monitoring tool that can automatically discover new devices through scheduled scanning and visually track key performance statistics of your network devices. Then repeat this network management mantra to attain network nirvana.

Network access security is a must for all network admins, and we always spend a lot of time granting and managing device access permissions on the network. It requires ensuring that only authorized people and devices can connect to your network ports, to prevent break-ins from rogue devices. Operationally, this is a tedious task:


  • Know what new devices are being added to the network each day
  • Track who and what device is connected to which port
  • Decide which devices are authorized for network access
  • Devise a strategy to ensure unauthorized devices do not gain network access


Here are three simple steps you can execute using SolarWinds User Device Tracker (UDT) to help maintain proper access security control for your network.


Step 1: Create Device Whitelist

From the UDT Web console, specify which devices are allowed to access the network and create a whitelist. You can add devices by:

  • Individual IP, MAC address or hostname
  • IP address ranges
  • MAC address ranges
  • Subnets
  • Custom patterns

UDT 1.png


Devices added to this list are categorized as ‘Included’, meaning whitelisted. Devices not in the whitelist are marked as ‘Rogue’, and an immediate alert notifies you of the device connection and related port activity. There is also an ‘Ignored’ category for discovered devices that you don’t want UDT to whitelist or flag as rogue.
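The three categories behave like a simple classification rule. As a rough sketch using Python's standard ipaddress module (the subnets and addresses are hypothetical, and this is not UDT's actual implementation):

```python
import ipaddress

def classify(device_ip, whitelist_subnets, ignored_ips):
    """Place a newly seen device into one of the three categories:
    'Ignored', 'Included' (whitelisted), or 'Rogue'."""
    if device_ip in ignored_ips:
        return "Ignored"
    addr = ipaddress.ip_address(device_ip)
    if any(addr in ipaddress.ip_network(s) for s in whitelist_subnets):
        return "Included"
    return "Rogue"  # would trigger an immediate alert

print(classify("192.168.1.77", ["192.168.1.0/24"], set()))  # → Included
print(classify("10.9.9.9", ["192.168.1.0/24"], set()))      # → Rogue
```

The same rule extends naturally to MAC addresses, address ranges, and custom patterns, which is what the UDT whitelist options above provide.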

UDT 2.png


Step 2: Set Up Device Watch List

To watch for a specific user or device on the network, you can simply specify a MAC address, IP, hostname or username, and SolarWinds UDT will trigger alerts when the user or device being watched connects to the network.

UDT 3.png


Step 3: Shutdown Port

If you suspect a malicious user or believe a port has been compromised, you can shut down the port directly from the UDT console with the click of a button.



UDT also allows you to view device port details, user logins, and connection history to easily investigate and troubleshoot a network problem.


These steps will help you ensure robust network access security and give you control to shut down an unsafe port and terminate malicious connections and activity that could potentially harm your network. Try SolarWinds UDT today!

In a previous post, I provided our Scalability Engine Guidelines technical reference as an attachment. It has since been updated and published to the Network Performance Monitor (NPM) Documentation page. Check it out.


As an additional note, if you are installing Additional Polling Engines for NPM version 10.6 on Orion Platform 2013.2 (or higher), use the Smart Bundle installer.



Why is this important to you?

In recent years, you've probably heard about losses of sensitive information through USB drives and other mass storage devices, which make it easier than ever for data to walk out the door, especially now that thumb drives are so common in the workplace. In addition, your employees may accidentally leak data when they use third-party websites or cloud storage services, particularly during file transfers.

In this webcast, you will learn the various areas where data loss can happen and how to protect and secure your confidential/sensitive information.

When: Tuesday, October 22nd at 10:00 AM CDT


That's not all: you can win one of three $50 gift cards simply by attending the webcast!


Registration link:



Enterprise network architecture has certainly evolved: from flat networks where everything was interconnected, to hierarchical models with enhanced security, and now to a borderless world. Cloud, BYOD, telecommuting, and the Internet of Things have made the network perimeter effectively disappear. The one metric that has remained a priority despite these changes is bandwidth, and by extension the individual traffic flows that comprise it. Many enterprises have treated bandwidth like the elephant in the room, knowing they don't have enough awareness of its details but not always having the tools or time to analyze it. Here are a few reasons why it is important to keep an eye on network traffic details.

End-User Satisfaction:



Customer satisfaction is what every organization strives for. With more and more commerce shifting online, website or e-store outages or failed transactions will encourage your customers to look elsewhere. Equally important are your employees: remember how frustrating it was to browse during the dial-up era? Your workforce, be it engineering, sales, or marketing, demands frictionless access to do their jobs. Poor connectivity when accessing resources from the data center or while telecommuting can hurt employee satisfaction and productivity.

Application and Data Delivery:



Whether you choose on-premises, hosted, or cloud for your applications, bandwidth plays a critical role in service delivery. There is no point in investing in high-end servers or expensive cloud solutions if the applications cannot be accessed due to pegged bandwidth, often hogged by non-business applications. Looking at usage patterns tells you who and what is using your bandwidth, and whether your business applications have the priority they need to traverse the network.




Network Security:

Beyond ensuring available bandwidth, analyzing usage patterns improves network security by helping you spot possible security issues. Be it zero-day malware that breached your IDS/IPS, infected bots sending spam from your network, or even complex DDoS attacks, each leaves a very visible footprint on your network traffic. Keeping an eye on usage and traffic patterns can help detect network behavior anomalies that may indicate security issues.

Branch Office Connectivity:



For many organizations, remote offices are key to business in the regions where they operate. Then there are the DR sites, server farms, data centers, etc., all connected by limited WAN links. When your organization has a geographically distributed architecture, it is important to ensure that transactions such as accessing and sharing resources and information, voice and video communications, and data backups complete successfully. Here again, traffic analysis and bandwidth monitoring play a key role in ensuring connectivity between branches and other sites, safeguarding access and business continuity.

What can you do?



When it comes to effective bandwidth monitoring and traffic analytics, the options available are device interface statistics via SNMP, packet analysis, and flow analysis (NetFlow, J-Flow, and sFlow). SNMP tells you details such as how much of your link is utilized and the rate of total traffic, but gives no information on who or what was responsible for it. Packet analysis gives you the finest detail possible, at the packet level, but requires expensive tools, SPAN ports, and huge storage resources. When you need the finer details of bandwidth usage without the overhead associated with packet analysis, a technology like NetFlow is your best bet.



NetFlow technology can report on who is using the bandwidth; the endpoints, applications, ports, and protocols involved; the DSCP priority of conversations; and when something happened. Using NetFlow, you can ensure appropriate bandwidth for critical business apps, discover users hogging the pipe, verify that important applications have the right priority, and detect network behavior anomalies. Best of all, NetFlow is not resource intensive: you can store NetFlow data for extensive reporting windows without needing large data storage solutions.
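As a flavor of what flow analysis yields, here is a minimal Python sketch that aggregates already-decoded flow records into a top-talkers report. The record fields and byte counts are hypothetical; a real collector decodes the NetFlow wire format from your routers for you.

```python
from collections import defaultdict

def top_talkers(flows, n=3):
    """Aggregate flow records by source address and return the
    top-n bandwidth consumers as (src_ip, total_bytes) pairs."""
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["src"]] += flow["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

flows = [
    {"src": "10.1.1.10", "dst": "8.8.8.8",  "bytes": 5_000_000},
    {"src": "10.1.1.22", "dst": "10.2.0.4", "bytes": 120_000},
    {"src": "10.1.1.10", "dst": "10.2.0.4", "bytes": 2_500_000},
]
print(top_talkers(flows))  # → [('10.1.1.10', 7500000), ('10.1.1.22', 120000)]
```

The same grouping applied to applications, ports, or DSCP values is what produces the usage reports described above.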



The only prerequisite for getting started with NetFlow data analysis is a tool to receive and digest the flow data exported from your existing routers. Server-based tools like the SolarWinds Bandwidth Analyzer Pack, standalone apps for your laptop, or even portable handheld network analyzers make this easy. Before you know it, you'll sort out the flow data and get reports on your bandwidth usage; and if you export traffic details from multiple locations on your network, you can even get a holistic view of your entire network.




30 Day Full Feature Trial | Live Product Demo | Product Overview Video | Twitter

What is Business Service Management?

As IT professionals we know the importance of technology to successfully driving business. IT acts as a business-enabler that brings connectivity, automation, and process efficiency to all business operations. All businesses depend on network connections—be it LAN or WAN. There are client-facing and internal applications that facilitate business transactions and also drive technology development within enterprises. As much as a business is customer-centric, internally, it’s equally IT-centric. Beyond the infrastructure that IT provides for business functioning, there’s also the need for high-quality IT services in order to achieve smoother and more effective business operations. This enables you to benefit from the business value and ROI for your technology investment. For most organizations, business continuity depends on high availability of IT operations.


Nobody wants to have their network down or a slow-responding application when there’s a business transaction in process. It may be a customer engagement over the website, a trading portal that allows payment processing, a customer service offered via email, or any business transaction. There will be a negative impact on business when IT fails and the technology enabler for business is not up to snuff.




Business service management (BSM) is an approach used to manage business-aligned IT services. BSM is needed for high availability and performance of IT services that, in turn, facilitate business services to meet Service Level Agreements (SLAs). The basic IT fabric that holds the IT-business framework together is the network and the systems infrastructure. Enterprises need to be able to manage these resources to ensure continuous availability and high performance. You can’t deliver critical business services to your customers without providing robust IT service management.

The essence of IT service management is monitoring the core (IT infrastructure) of your business services and ensuring they operate at high performance levels. This will result in improved business service.


What to Monitor, When to Monitor, and How to Monitor?

  • What to monitor: Your network availability and performance, bandwidth utilization and network traffic quality, application performance, and server health
  • When to monitor: Throughout the course of your business operations (i.e. 24x7)
  • How to monitor: Employ IT operations management tools that let you know when your network and application infrastructure starts developing performance issues or tends to go down
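The 24x7 monitoring described above ultimately feeds an SLA number. A minimal sketch of the availability calculation, in plain Python with simulated polling data:

```python
def availability(poll_results):
    """Percent of successful polls in a monitoring window, i.e. the
    basic availability figure reported against an SLA."""
    if not poll_results:
        return 0.0
    up = sum(1 for ok in poll_results if ok)
    return round(100.0 * up / len(poll_results), 2)

# 24 hourly polls of a service, one missed response.
print(availability([True] * 23 + [False]))  # → 95.83
```

An IT operations management tool computes this per service from its poll history, which is what makes SLA reporting to the business possible.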


What’s Your Win?

For the IT Operations Team:

  • Better understanding of the dependencies between business processes, business applications, and the IT infrastructure
  • Continuous IT operations monitoring to ensure high IT performance, and in turn, business service availability
  • High operational efficiency achieved with IT management tools
  • Improved time-to-resolution of IT issues


For the Business:

  • Improved business service availability and performance
  • Cost and time savings on business service management as IT now has the right tools to do this job
  • Stronger business-to-IT connectivity leveraging the benefits of IT towards business enablement

Most people use some type of anti-virus software on their servers to protect against unwanted invasions. Having anti-virus security is good, yet it can come with a caveat when running alongside Storage Manager software: file locks caused by real-time scans, and other security measures such as blocked ports, can interfere with Storage Manager and its monitoring. To get around this, configure your anti-virus software to skip scanning the Storage Manager software and database.


It is recommended to add exclusions to your anti-virus and intrusion detection software when it is running on the same server as Storage Manager.




Storage Manager Server for Windows Exclusions:


  • Install Directory\Storage Manager Server\agent\systemic
  • Install Directory\Storage Manager Server\agent\administrative
  • Install Directory\Storage Manager Server\mariadb

                 Note: Storage Manager versions 5.6 and newer use MariaDB; previous versions use MySQL. For versions prior to 5.6, substitute MySQL for MariaDB in the paths above.

  • C:\Windows\Temp


Storage Manager Server for Linux Exclusions:


  • Install Directory/Storage_Manager_Server/agent/systemic
  • Install Directory/Storage_Manager_Server/agent/administrative
  • Install Directory/Storage_Manager_Server/mariadb


Storage Manager Agent for Windows Exclusion:


  • Install Directory\Storage Manager Agent\systemic
  • Install Directory\Storage Manager Agent\administrative


Storage Manager Agent for Non-Windows Exclusions:


  • Install Directory/Storage_Manager_Agent/systemic
  • Install Directory/Storage_Manager_Agent/administrative


Port Exclusions:


Your anti-virus software must allow access to the following ports:


  • TCP 4319
  • UDP 162, 10162, 20162
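A quick way to confirm that the TCP port is actually reachable after configuring your exclusions is a socket test. This is a generic Python check, not a Storage Manager utility:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. nothing (firewall or anti-virus) is blocking it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run on the Storage Manager server itself to check the web console port.
print(tcp_port_open("127.0.0.1", 4319))
```

The UDP ports can't be verified with a connect test; confirm those by checking that SNMP traps actually arrive at the receiver, or with a packet capture.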

Here we are in the third week of NCSAM, and it's time for some security education!



The whole point of education is to dispel myths. Talking to a lot of customers, we discovered some invariably common myths and confusion around the SOX regulations, despite the varied spectrum of industries those customers represent. Some of the common areas include:

  • Compliance with Section 404
  • Responsibilities of the auditors
  • Implications of Outsourcing, etc.



A very basic example: many organizations look at SOX compliance as a technology mandate, when in reality it is more of a financial reporting mandate. So this week, let us look at the top three myths and what SOX compliance actually implies in each case.



MYTH 1: SOX is all about defining Financial Business Practices and Data Security

To start with, SOX compliance doesn't emphasize financial practices or how to secure your financial records; rather, it specifies which records must be stored. Also, when it discusses in-house or internal control, it predominantly refers to financial controls, not data security. Unlike other compliance regulations such as HIPAA or PCI, SOX doesn't define specific data security requirements like password protection or encryption.



MYTH 2: Meeting the Section 404 Compliance once, means that you are compliant

Myths don't get any bigger than this: compliance is very much a continuous process. More precisely, Section 404 certification has to happen every year. As your organization grows, you need to regularly monitor, evaluate, and test your systems to comply with the policy requirements.

Secondly, outsourcing part of your process doesn't mean compliance is taken care of. If that process is likely to impact your financial systems, you are very much responsible for the controls at your outsourced unit. Hence, you need to constantly monitor and test the systems at your outsourced unit as well.



MYTH 3: My auditor is solely responsible for SOX

SOX clearly states that the organization, not the auditor, is accountable for financial reports and disclosures. Your auditor can only assist by checking your reports; your organization remains responsible for them. In fact, SOX explicitly prevents auditors from providing certain services to avoid conflicts of interest. To be clear, this doesn't mean SOX prohibits you from approaching your auditor for other services such as tax preparation; rather, it becomes your audit committee's responsibility to determine who provides those tax services.



For more on Education week, check out the NCSAM page!!



Also, if you missed your chance to catch the SANS Analytics & Security Intelligence webcast, watch it here.



People often need solid justification in terms of return on investment (ROI) when making a purchase decision. With increasing complexity in IT infrastructure, administrators are asked to do more with less, as well as show tangible ROI on IT investments by rightsizing their network management system (NMS). How do you assess the effectiveness of an NMS for higher ROI? A cost-benefit analysis can quantify your total cost of ownership (TCO) and help you understand the potential long-term benefits.


Why Is It Important to Calculate ROI of your NMS Implementation?

Organizations implementing network management and monitoring software experience strong ROI, improved data analysis and reporting, and faster time-to-resolution for network issues such as downtime and performance bottlenecks. Determining the right solution for your company requires finding a tool that not only satisfies your current priorities, but can also adapt to your future IT needs.


Learn How to Calculate ROI

The SlideShare below will help you understand more about calculating ROI for network management and monitoring.


Benefits of Higher ROI in NMS!

ROI for an NMS can be realized across a number of areas as follows:

  • Salary/Staff Time Savings
  • Reduced Network Downtime
  • Reduction in Support Calls
  • Decreased Time-To-Resolution
  • Managing Service Level Agreements


Learn More

This ENTERPRISE MANAGEMENT ASSOCIATES® (EMA™) white paper examines the importance of finding extensible, cost-effective network performance management solutions that fit deployment needs.



"Features and capabilities aside, the best way to understand both product fit and Total Cost of Ownership (TCO) advantages of replacing an existing network performance management solution with SolarWinds® NPM is to learn from the experiences of those who have made the switch."- Enterprise Management Associates


Who is EMA?

Enterprise Management Associates (EMA) is a leading industry analyst and consulting firm that specializes in going “beyond the surface” to provide deep insight across the full spectrum of IT and data management technologies.


Also, try the FREE network management ROI calculator from SolarWinds!!

We all know how critical it is, from the business and application service perspective, to monitor the virtualization environment. This is a full-time job that requires constant attention to keep the performance of the virtual infrastructure in check. With the far-reaching benefits of virtualization, we now have our choice of virtualization monitoring and management tools to keep the VM infrastructure under control, and a single pane of glass in the monitoring solution gives you all the statistics and metrics for your virtual infrastructure. That's all well and good during work hours and when in the office. But what about when a virtualization admin takes some time off - a vacation, a holiday, or just a break from work? You may be on holiday, but trouble in the virtual environment happens 24x7 and still demands administrative action. Vocation shouldn't have to compromise vacation. If a critical situation needs attention and nobody's around to do the job, how do we ensure it's taken care of while away from the work desk?




BYOD for Mobile IT Management

In a survey of 400 IT pros jointly conducted by Network World and SolarWinds®, the BYOD trend was found to be catching on fast in enterprises of all sizes. A startling 85% of the companies represented by respondents issued mobile devices and smartphones with network access to improve employee productivity. The impact of BYOD on employee productivity was charted (shown below) based on the survey findings.





Benefits of Mobile IT Management

By leveraging the BYOD trend and mobilizing your applications, you can be connected to them whenever needed and benefit from:

  • Faster response times to solve issues
  • Flexibility to work remotely
  • Simpler and easier after-hour support
  • Personalized work interface for users
  • Increased productivity for IT pros
  • Improved employee morale and job satisfaction


SolarWinds Mobile Admin™ for Mobile VM Administration & Management

Using a mobile IT management tool such as SolarWinds Mobile Admin, you can ensure you are remotely connected to your application in a way that is both secure in connectivity and scalable over various IT management applications. Within the simple-to-use Mobile Admin interface on your smartphone or mobile device (Android, Blackberry, iPhone and iPad), you can extend remote administration for your VMware® and Hyper-V® environment on the go.


For VMware Environment

For Hyper-V Environment

Virtual Machine Management:

  • Find VMs
  • View VM properties
  • Edit VM settings
  • View host summaries
  • Manage hosts
  • View hosts, clusters, and VIServer


ESX® Servers:

  • Maintenance mode
  • Restart the ESX server
  • Shut down the ESX Server


System Monitoring:

  • View events and event details
  • View triggered alarm and triggered alarm details

Hyper-V Management:

  • List VMs
  • Show/Configure VM settings (CPU, RAM)
  • Show thumbnail of VM screen
  • Restore, create, and delete snapshots
  • Manage VM OS directly from Mobile Admin


For advanced virtualization management functionality from the convenience of your workstations and servers, check out SolarWinds Virtualization Manager.

$634,000,000 if you're the US government - and the darned thing doesn't even work! (I assure you, there is nothing more I'd like to do than rant about the government and healthcare insurance. Sadly, this is not the forum. However, I will vent by discussing the technical and financial angles of the "Obamacare" website debacle.)


The Price Tag

I've built many websites (that worked, mind you) over the years, and the cost ranged from $0 to $0. Granted, my websites were not as complex as the Obamacare one, but $634,000,000? Really??


$634,000,000 can buy you the following items:

  • 27 Boeing 747 airplanes
  • 21,000 BMW cars
  • 634,000 really good computers
  • 4,226 houses fully paid for
  • 12,680 full time employees for one year @ $50,000/yr
  • 176,000 health insurance premiums paid for a family of four for an entire year
  • 1 broken website

Why Did It Cost So Much?

Most geeks have one thing in common: the desire to overcome obstacles. Take Napster and The Pirate Bay, for example. They were created as ways to overcome the cost of media. Geeks are usually poor in the early stages of their lives. Creating technology that overcomes personal expenses benefits said geek because a) the cost of what he's looking for goes down when he invents or finds a way, and b) his education and creativity skyrocket from the effort of achieving his goal. Legal issues aside, geeks usually look for ways to be free of cost and restrictions. With this knowledge and desire, a geek can convert that drive into profit by building efficient and creative technology. In other words, a website that works.


The government, on the other hand, has no such motivation to be efficient. With endless tax dollars pouring in and nary a profit to be seen, why would cost be an issue? Who worries about cost when the money being spent is not their own?


The Technology Used

For all you uber-geeks, you can view the code of the homepage here. (Looks fine to me...not!) It's being reported that the website was designed to handle 50,000 users. I would think, if the government were logical and optimistic, that they would want everyone in the country to sign up. The website should have had the ability to handle 300 million people. (They could have started with at least 10 million IMHO.) My rinky-dink site that cost me $0 can handle 50,000!


What Did We Learn?

  • Cost <> Quality
  • Necessity is the mother of invention. Really. Check out all the cool new features we added to SAM - and it cost us slightly less than $634 million.
  • SolarWinds is a company full of geeks that do have a profit motive. We worry about cost and pass the savings on to you
  • We are geeks and find ways to overcome obstacles every day, which transforms into a better product for you at a cheaper price
  • I'm sure I can build a better website at half the price. I wouldn't mind having 2,000 homes and a few thousand cars. Who's with me?

A lack of proper configuration management capabilities leads to many problems that result in serious network downtime and costs to your business. Some commonly faced problems:


#1 What happens if a configuration change results in downtime and the network engineer who made the change is not around? For example, the impact of a change made at the end of the day may not be noticed until the engineer arrives the next morning. By then it is too late: the network and its users are already affected. In the worst case, the administrator ends up coming in to fix things in the middle of the night!


#2 A change in device configuration is implemented without the required verification or approval. The unauthorized change brings the network down, business is impacted, and the approvers are held accountable. With no audit trail of who made the change or when, the wrong people have to bear the burden of the situation.


#3 Even if you manage to zero in on the impacted devices, consider the manual effort and time required to assess the damage before the process of fixing the issue can even begin. Furthermore, if the change involves multiple devices, the effort multiplies. The administrator must invest a huge amount of valuable time to implement a solution or workaround to get things back up and working.


#4 Management has absolutely no visibility into configuration changes made in the network. Device configuration holds data vital to the operation of other functions in the network, and a minor human error in configuration can lead to large business losses.



So, Why Is Configuration Management Important?


Configuration management brings about:


  • Implementation of a change approval process
  • Reduced downtime through change alert notifications
  • Improved productivity and reduction in errors while executing configuration changes
  • Compliance with internal and external standards for device configuration, software versions, and hardware
  • Execution of configuration changes in bulk, with rollback in case of errors
  • Improved visibility and accountability at all levels
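
The bulk-change-with-rollback idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the `Device` class and its `apply`/`restore` methods are toy stand-ins for whatever your tooling uses to push and restore configurations.

```python
class Device:
    """Toy stand-in for a managed network device (not a real vendor API)."""
    def __init__(self, name, fail=False):
        self.name, self.fail = name, fail
        self.running_config = "snmp community public"

    def apply(self, change):
        if self.fail:
            raise RuntimeError(f"{self.name}: change rejected")
        self.running_config = change

    def restore(self, config):
        self.running_config = config


def bulk_change(devices, change):
    """Apply `change` to every device; on any failure, roll back the ones already changed."""
    backups, applied = {}, []
    try:
        for dev in devices:
            backups[dev.name] = dev.running_config  # snapshot before touching the device
            dev.apply(change)
            applied.append(dev)
    except Exception:
        for dev in applied:  # undo the partial rollout
            dev.restore(backups[dev.name])
        raise


fleet = [Device("sw1"), Device("sw2", fail=True)]
try:
    bulk_change(fleet, "snmp community s3cret")
except RuntimeError as err:
    print(err)                     # sw2: change rejected
print(fleet[0].running_config)     # sw1 rolled back to 'snmp community public'
```

The key design point is taking the snapshot before each change: if any device rejects the change, every device touched so far is returned to its last known configuration instead of leaving the network half-migrated.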


With device configuration management playing a vital role in the successful functioning of your network and business, the right move at such a point would be to invest in a good network configuration and change management solution. Check out SolarWinds Network Configuration Manager (NCM) to meet your configuration management needs.

Easily manage bulk configuration changes, configuration change requests and approval, user roles, permissions and activity tracking, remote firmware and IOS transfers with multivendor device support and more. Download a trial version today!

What is Index Fragmentation?


SQL Server uses indexes to organize data within the database. A clean index helps you perform SQL operations efficiently. When you modify or update the data in your SQL Server database, the index content scatters. This scattering is what’s referred to as index fragmentation.


Why does it occur?


Fragmentation occurs because of the following:

  • Adding new information to the database
  • Modifying or editing the existing database
  • Removing information from the database

Even a simple operation like inserting a new row or updating a row can create empty spaces in the database causing indexes to fragment.

How do fragmented indexes affect SQL server performance?


Highly fragmented indexes can significantly decrease database performance. When one of the following occurs, you’ll know it could be due to fragmented indexes:

  • Queries take longer to respond
  • More disk space is utilized
  • Applications take time to respond
  • Database server may be slow to respond to modifications and additions made to the database

Whatever the issue is, it’s worth looking into the database indexes to check for fragmentation.


Repair Index Fragmentation

Just because there’s going to be fragmentation when you modify the database doesn't mean that you should stop updating or modifying it. It is good to proactively monitor SQL servers to ensure index fragmentation does not affect the overall database performance. Having a clean index will ensure data searches are faster in SQL server. Whether you’re making adjustments to the database or trying to retrieve information, having a proper index will get your job done faster. You can do the following to improve performance in case of an issue:

  • Rebuild and reorganize indexes periodically.
  • Monitor query performance; this helps to avoid bottlenecks.
  • Monitor what your storage drives are doing to the database. This will help you determine if the issue has to do with your indexes or your storage system.
  • Ensure you have a regular maintenance cycle in place.

Here is something to keep in mind: be sure your indexes are actually fragmented before you decide to defragment. By determining this, you’ll know what next steps you can take to improve your SQL Server performance. A server monitoring tool will tell you if your database indexes are fragmented, along with other critical metrics to improve overall SQL Server performance. With a server monitoring tool, you can also set baseline values for your database indexes. That way, you’re not only notified of issues, but you can also proactively identify and fix them before they impact more end users.
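
A commonly cited rule of thumb from Microsoft's index maintenance guidance (tune the exact thresholds for your own workload) is: below about 5% fragmentation, leave the index alone; between roughly 5% and 30%, reorganize; above 30%, rebuild. A minimal sketch of that decision:

```python
# Widely used rule-of-thumb thresholds for SQL Server index maintenance
# (adjust for your own environment):
#   < 5%   fragmentation -> no action needed
#   5-30%  fragmentation -> REORGANIZE (lightweight, always online)
#   > 30%  fragmentation -> REBUILD (more thorough, more expensive)

def index_maintenance_action(avg_fragmentation_pct: float) -> str:
    """Return the recommended maintenance action for a given fragmentation level."""
    if avg_fragmentation_pct < 5.0:
        return "none"
    if avg_fragmentation_pct <= 30.0:
        return "reorganize"
    return "rebuild"

# Example: fragmentation values of the kind reported by
# sys.dm_db_index_physical_stats (the numbers here are made up)
for name, frag in [("IX_Orders_Date", 2.1), ("IX_Cust_Name", 18.7), ("PK_Orders", 64.3)]:
    print(f"{name}: {frag}% fragmented -> {index_maintenance_action(frag)}")
```

In practice you would feed this from SQL Server's `sys.dm_db_index_physical_stats` view, which is what reports the average fragmentation percentage per index.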

The week of Mobile Security!!


Do you know how mobile devices affect your network security? The boundary of enterprise security doesn't end with your organization’s perimeter. You might have improved your defense mechanism against direct attacks on your network, but mobile devices can still be a potential threat, unless the endpoints are secured.


mobile devices.PNG


Threats are now Mobile!!

Over the last few years, we have seen a big jump in the number of mobile devices connected to organization networks. Now imagine a situation where a user downloads a vulnerable application on their smartphone and unknowingly installs malware: wouldn’t your data be walking out the door, right in front of you?


The Mobile Security week of NCSAM gives you an opportunity to revisit and manage mobile application and device risks, and to restrict their access to trusted networks.


It’s time to revisit your:

  • Data security mechanisms
  • Wi-Fi access points
  • User logons
  • BYOD policies


To defend your network at this level, here are some key areas that you need to focus on:

  1. Monitor network-connected devices and identify unauthorized ones.
  2. Watch for unusual access patterns or after-hours network activity; these can be a sign of corporate espionage or sabotage in progress, particularly if your system is logging higher-than-average login attempts on sensitive financial or R&D areas of the network.
  3. Track LAN traffic to help pinpoint malware introduced by BYOD devices, based on how it tries to access other ports or network hosts.
  4. Monitor all the event logs on your network continuously for anomalous, unusual, or non-compliant activities.
  5. Get alerted on issues in real time; ideally, use a SIEM tool with in-memory correlation that notifies you of anomalies and triggers actions based on thresholds set per event or group of events.
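
The kind of threshold-per-event correlation mentioned above can be sketched very simply. This is a hypothetical illustration, not any SIEM product's format: the event records, the `suspicious_accounts` helper, and the threshold of 5 are all assumptions for the example.

```python
from collections import Counter

# Hypothetical sketch: flag accounts whose failed-login count in a log
# window exceeds a threshold. The event shape and threshold are
# illustrative assumptions, not a specific SIEM's data model.
FAILED_LOGIN = "failed_login"

def suspicious_accounts(events, threshold=5):
    """Return accounts with more than `threshold` failed logins, sorted by name."""
    fails = Counter(e["user"] for e in events if e["type"] == FAILED_LOGIN)
    return sorted(user for user, n in fails.items() if n > threshold)

events = (
    [{"type": FAILED_LOGIN, "user": "alice"}] * 2        # normal fat-fingering
    + [{"type": FAILED_LOGIN, "user": "svc-finance"}] * 9  # well above baseline
    + [{"type": "login", "user": "bob"}]
)
print(suspicious_accounts(events))  # ['svc-finance']
```

A real deployment would correlate across many event types and time windows, but the core idea is the same: compare observed counts against a baseline and alert on the outliers.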

Check out our Week 2: Being Mobile section on our all new NCSAM page to learn more.


Network availability is always crucial for continuity of business operations. With more than 80% of network outages caused by configuration errors, network engineers are often challenged to configure network devices while ensuring the network is secure and operational at all times. Below are some of the key challenges network administrators face when manually configuring devices.


  • Configuration changes can be time-consuming and error-prone, and can result in hours of downtime for the company
  • As more and more devices are added to the network, it becomes difficult to track configuration changes, and troubleshooting becomes a daunting task, especially in the event of a network outage
  • Even the smallest configuration error can pose a big threat, as it exposes the network to hackers and malicious attacks


As a result, more network administrators are turning towards automated network device configuration management tools to better manage their configuration changes. Here are some key benefits that automated configuration management solutions offer:


#1 Minimized Downtime

NCM 2.png

Administrators can continuously track configuration changes and immediately identify the erroneous change that caused an outage. By reapplying the last known good configuration, devices can be back up and running in no time, reducing downtime for the company.
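
One simple way a tool can surface what changed between two configuration snapshots is a textual diff. Here is a minimal sketch using Python's standard `difflib`; the device configs shown are made up for illustration.

```python
import difflib

# Minimal sketch (not any particular product's implementation):
# diff two saved config snapshots to surface exactly what changed.
before = """hostname core-rtr-01
interface Gi0/1
 ip address 10.0.0.1 255.255.255.0
""".splitlines()

after = """hostname core-rtr-01
interface Gi0/1
 ip address 10.0.0.1 255.255.255.128
""".splitlines()

changes = [
    line for line in difflib.unified_diff(before, after, lineterm="")
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
for line in changes:
    print(line)  # the mis-typed subnet mask stands out immediately
```

With snapshots taken on every change, the erroneous line that caused an outage can be spotted in seconds rather than by eyeballing entire configs.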


#2 Reduced Human Errors

With the ability to automate backups, network admins need not worry about whether each device is being backed up. Automating bulk changes reduces human errors and saves time: changes take just a few clicks, with no need to remember scripts each time a change is made.


#3 High Operational Efficiency

The time to complete tasks that require multiple changes, such as patches or backups, can be significantly reduced. Network admins can continuously monitor changes to devices and receive real-time change notifications, increasing the efficiency of network operations and troubleshooting.


#4 Enhanced Security

Network admins need to be fully informed of attempts by unauthorized sources to gain access to the network. With the ability to monitor and track real-time changes to devices, it is easy to trace the actions of end users, ensuring accountability in case of a security breach.



While manual configuration of devices could leave the network team wondering which error caused the downtime, automated network configuration management solutions enable administrators to take complete control of configuration changes, ensure security, and improve efficiency.


Learn More:

NCM White Paper.png


Voice Over I.V.

Posted by LokiR Oct 7, 2013

No, the title is not a typo. October's wacky, weird, so-new-it-hurts technology is a subcutaneous cellphone that runs off of your blood.


The phone rather looks like a prototype for the awesome phones from the Total Recall remake. Essentially, the phone is a small, thin, silicon touchscreen that is inserted under the skin and only "lights up" when you make or receive a call. The other cool thing about the subcutaneous cellphone is that it would use a blood battery - a tiny, biological battery that converts the glucose in your blood to electricity.


How can this technology affect you? Not only is it unavailable (and there are no plans to make it available), but who wants to implant cellphone technology that will be out of date in a year or so? On the other hand, the battery is going to be very useful in the health care field.


There may come a time in the near future when you'll have to track medical implants and determine battery life or functionality. It would be awesome if you could monitor these things remotely like you can your server health.


Actually, the blood battery can act as a human health monitor since it directly interfaces with your blood. When this is implemented, there is probably going to be a wireless or Bluetooth element to it so that health care providers can monitor you for blood disorders. There will also be an app for that.


And in that eventuality when cellphone technology has plateaued, you will be able to get your very own subcutaneous cellphone. I hope by that time we might have some holograms or something to make it even more awesome. 

Well, as you might have heard, the final version of the PCI DSS 3.0 requirements will only be published in November 2013 and will take effect in January 2014. Alright, it’s time to get a glimpse of the proposed changes in the new version.


For each current PCI DSS requirement below, you’ll see the proposed update for 3.0 (on top of the existing standard) and the reasoning behind it:

  • Requirement 1: Install and maintain a firewall configuration to protect cardholder data.
    3.0 update: Have a current diagram that shows cardholder data flows.
    Why: To clarify that documented cardholder data flows are an important component of network diagrams.

  • Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters.
    3.0 update: Maintain an inventory of system components in scope for PCI DSS.
    Why: To support effective scoping practices.

  • Requirement 5: Use and regularly update antivirus software.
    3.0 update: Evaluate evolving malware threats for systems not commonly affected by malware.
    Why: To promote ongoing awareness and due diligence to protect systems from malware.

  • Requirement 6: Develop and maintain secure systems and applications.
    3.0 update: Update the list of common vulnerabilities in alignment with OWASP, NIST, SANS, etc., for inclusion in secure coding practices.
    Why: To keep current with emerging threats.

  • Requirement 8: Assign a unique ID to each person with computer access.
    3.0 update: Security considerations for authentication mechanisms such as physical security tokens, smart cards, and certificates.
    Why: To address feedback that requirements for securing authentication methods other than passwords need to be included.

  • Requirement 9: Restrict physical access to cardholder data.
    3.0 update: Protect POS terminals and devices from tampering or substitution.
    Why: To address the need for physical security of payment terminals.

  • Requirement 11: Regularly test security systems and processes.
    3.0 update: Implement a methodology for penetration testing, and perform penetration tests to verify that segmentation methods are operational and effective.
    Why: To address requests for more details on penetration tests, and for more stringent scoping verification.

  • Requirement 12: Maintain a policy that addresses information security.
    3.0 updates: Maintain information about which PCI DSS requirements are managed by service providers and which are managed by the entity; service providers to acknowledge responsibility for maintaining applicable PCI DSS requirements.
    Why: To address feedback from the Third Party Security Assurance SIG.

What do these changes mean to you?


  • Policy guidance and operational procedures will be provided with each requirement
  • You will have to maintain an inventory of all systems within your PCI scope
  • Redundant sub-requirements are eliminated
  • The testing procedures for each requirement are clarified
  • Requirements around penetration testing and validation of network segments are strengthened
  • You get more flexibility around risk mitigation methods, including password strength and complexity requirements


Is your IT infrastructure ready?

Well, looking at all these additions to the current PCI requirements, you may feel that it’s a big change, but on closer inspection, the change is more structural. So, the question you need to ask yourself is: how well are you equipped to embrace PCI 3.0?

Some key questions would be:

  • Have you constructed policies and procedures to limit the storage and retention time of PCI data?
  • Do you have constant assessment and reporting systems across employees of different levels?
  • Do you have a SIEM tool that will correlate events and alert you in real time upon any security breach?


PCI 3.0 will be effective from January 1, 2014, and will become a mandate in July 2015!!


Are you all set?



PCI Data security.PNG

As an IT environment grows, you will continue to add more hardware, software, and applications for end users. Your IT assets will keep increasing, and you will have to make continuous adjustments to the existing setup to expand and scale your infrastructure. As IT pros, not only do you have to proactively monitor hardware and application health, but you also have to keep an up-to-date inventory of all your IT assets. Even though keeping that inventory is a time-consuming and tedious manual process, most organizational policies require that it be done.

Not maintaining an inventory of your IT assets can have these results:

  • No visibility into your current hardware and software
  • Losing track of warranty information for critical hardware components
  • Missing important software, operating system, and firmware updates
  • No visibility into the overall hardware lifecycle and maintenance timelines


Asset Inventory in SAM 6.0

SolarWinds® Server & Application Monitor (SAM) has always been a comprehensive server hardware and application monitoring software. With the new SAM 6.0, you can now proactively monitor and manage your IT assets. Adding IT Asset Inventory Management to SAM means you can now automagically maintain a detailed inventory of your hardware and software.

With the Asset Inventory dashboard, you can look at various information about your IT assets. Major assets include:

  • Server Warranty: Track server warranties that have expired and that are going to expire. SAM periodically checks the status of each server warranty against the vendors it supports (HP, IBM, and Dell) on their online warranty validation servers.


  • Hardware Inventory: Get reports on your hardware such as hard drives, memory, volumes, removable media, graphics and audio, USB controllers, and other computer peripherals. SAM gives you hardware information such as manufacturer name, publisher, version, and serial numbers.


  • Software Inventory: SAM will check all the software installed on your servers, identify the publisher, version, and install date. This lets you clearly see all the software products that are regularly used, as well as those that are rarely used.


  • Operating System Updates: SAM monitors the operating system updates and populates the name of the operating system, the type of update applied (whether it is a system or a security update), date the update was installed, and the person who installed it.
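
The warranty-tracking idea above boils down to a simple date classification. Here is a minimal sketch; the hostnames, expiry dates, and the 90-day "expiring soon" window are assumptions for illustration, not SAM's actual logic.

```python
from datetime import date, timedelta

def warranty_status(expiry: date, today: date, soon_days: int = 90) -> str:
    """Classify a server warranty as expired, expiring soon, or active."""
    if expiry < today:
        return "expired"
    if expiry <= today + timedelta(days=soon_days):
        return "expiring soon"
    return "active"

# Illustrative inventory (made-up hosts and dates)
today = date(2013, 10, 15)
for host, expiry in [("db-01", date(2013, 6, 1)),
                     ("web-02", date(2013, 12, 1)),
                     ("app-03", date(2015, 3, 1))]:
    print(f"{host}: {warranty_status(expiry, today)}")
```

An inventory tool runs exactly this kind of check on every polled server, so expirations surface on a dashboard instead of being discovered when a part fails.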


To simplify the process, SAM will only collect inventory data once a day. You also have the option to configure the data collection interval weekly, bi-weekly, or even monthly depending on your preference. This way, inventory data does not have to be collected with the same level of frequency as availability and performance information. This is beneficial because it minimizes the impact on your polling engine.


Takeaway for IT Pros

  • Visualize IT asset inventory for both physical and virtual assets
  • Automatically track your assets and view software update cycles
  • Facilitate improved inventory lifecycle management
  • Respond to key IT questions that help drive your business decisions
  • Monitor and manage your IT assets in one place

Explore Asset Inventory Management and other features in SAM 6.0.

Network administrators find tracking and maintaining device End-of-Life (EoL) and End-of-Sale (EoS) data difficult because:


  • It is hard to collect, verify, and manage multi-vendor device information
  • Regularly tracking and watching for vendor announcements takes a big effort
  • Checking on a daily or regular basis for devices reaching EoL is impractical
  • Maintaining an up-to-date and detailed inventory of all devices is tedious
  • Devices in the network are replaced frequently


Manually maintaining a running, up-to-date network demands a great deal of time and resources. But, again, why is device EoL important?


As a network administrator, you wouldn’t want to find out one day that your core router is down because of a faulty part, and then discover that the device is out of vendor support! Spares are hard to obtain, and replacing the device itself may mean more downtime.


To avoid situations like this, and to plan and prepare in advance for device replacements or support renewals, it is beneficial to have a device inventory system in place. Such a solution can help:


  • Maintain a centralized inventory with all device details
  • Easily plug-in EoL/EoS data and prompt when devices are nearing expiry
  • Provide up to date information to plan and budget for replacements or renewals in advance
  • Ensure that your devices are running on current firmware and IOS
  • Ensure that the devices are covered under hardware warranty and technical support contracts


If you are struggling with manual device inventory management or if you are paying someone else to do the job, it is time to invest in a tool or solution that can help you take care of your inventory requirements efficiently.
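
The "prompt when devices are nearing expiry" idea above can be sketched as a date-window check. The inventory records and the 180-day planning window here are illustrative assumptions, not any product's behavior.

```python
from datetime import date, timedelta

# Illustrative sketch: flag inventory entries whose vendor-announced
# end-of-life date falls within a planning window (or has passed).
def nearing_eol(inventory, today, window_days=180):
    """Return device names whose EoL date is within `window_days` of today."""
    horizon = today + timedelta(days=window_days)
    return sorted(d["name"] for d in inventory if d["eol"] <= horizon)

inventory = [
    {"name": "core-rtr-01", "eol": date(2014, 1, 31)},   # needs budgeting now
    {"name": "edge-sw-07",  "eol": date(2016, 6, 30)},   # still fine
]
print(nearing_eol(inventory, today=date(2013, 10, 15)))  # ['core-rtr-01']
```

Run daily against a centralized inventory, a check like this turns EoL tracking from a manual chore into an automatic alert with months of lead time for budgeting and replacement.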


IP address management, as all of us networking professionals know, is an integral part of the enterprise network management system. With the explosion of IP addresses—thanks, but no thanks to the BYOD trend—and the encroaching need to migrate to IPv6, network admins are looking for an effective solution to monitor, track and manage IP addresses with the ability to:


  • Control the entire IP infrastructure from a centralized web console
  • Scan DHCP servers for IP address changes
  • Prevent subnets and DHCP scopes from filling up with preventative alert notifications
  • Create, schedule, and share reports showing IP address space percent utilization
  • Create IPv6 subnets and plan for IPv6 migrations
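
As a sketch of the kind of check behind the subnet-utilization alerts listed above, here is a small example using Python's standard `ipaddress` module; the subnets, usage counts, and 80% alert threshold are made up for illustration.

```python
import ipaddress

# Illustrative sketch of a subnet-utilization check of the kind an
# IPAM alert would run (subnets, counts, and threshold are made up).
def utilization_pct(cidr: str, used: int) -> float:
    """Percent of usable host addresses consumed in an IPv4 subnet."""
    net = ipaddress.ip_network(cidr)
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    return 100.0 * used / usable

for cidr, used in [("10.1.20.0/24", 213), ("10.1.30.0/25", 40)]:
    pct = utilization_pct(cidr, used)
    status = "ALERT: nearly full" if pct >= 80 else "ok"
    print(f"{cidr}: {pct:.1f}% used -> {status}")
```

A preventative alert fires before the scope actually fills, giving you time to resize the subnet or reclaim stale leases instead of debugging failed DHCP assignments.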


As I’m sure you’re already aware, trying to implement all of this using spreadsheets is impossible, but there are IPAM and DDI solutions available in the market that can help. In choosing the right solution for your specific needs, a decision will need to be made as to whether you should go with a software-based IPAM solution or a hardware-based appliance. Let’s dive into a comparison and find out which would suit your needs better.


IPAM SW vs. HW.png


#1 Price

As much as we all wish price didn’t have to be a factor in our decisions, the hard truth is that it always will be, especially for today’s IT departments who want to be seen as adding business value and not just as a cost center. Hardware-based IPAM appliances are significantly more expensive than their software counterparts. Closing a deal is a lengthy process likely requiring weeks for approvals, signoffs, product demonstrations and finally, installations. On the other hand, software-based IPAM solutions are far less expensive and easier to implement, making them ideally suited to businesses of all types and sizes. Why purchase a cost-prohibitive, proprietary hardware appliance when you can get a far more flexible and agile solution that will add scale to your existing infrastructure at a fraction of the cost?


#2 Installation & Setup

Hardware IPAM solutions are difficult to set up and install – in other words, time-consuming and likely headache-fraught!

    • First, you must wait for the device to be shipped to your location.
    • Upon delivery, it must be racked, stacked, and cabled properly.
    • Next, you need to ensure it’s communicating with the rest of your network (if it’s not, go back to the previous step).
    • Finally, you’ll need to configure the management UI from a separate computer.
    • For each additional device (which will likely be needed to scale), rinse and repeat the aforementioned steps.

With IPAM software, you can download and install the product immediately and be up and running in as little as an hour!


#3 Maintenance & Overhead

Anything hardware-oriented poses significant maintenance overhead. You need to ensure the IPAM appliance receives regular attention for maintenance and overhaul. You will need to factor in:

    1. Manual effort required for device inspection, maintenance and repair
    2. Time consumption in tending to hardware servicing needs
    3. Productivity and operational time loss during the hours of maintenance
    4. Cost involved in replacing any damaged or faulty hardware parts


The very nature of hardware makes it vulnerable to failure – one damaged part, no matter how small, could shut down the entire device. With a software-based solution, you don’t have to worry about a proprietary physical device being impacted or the associated time, effort and financial overhead required for maintenance and repair. Software can be installed on any existing server and can be easily scaled.


#4 Upgrade

IPAM software can be easily upgraded anytime.  It’s as simple as downloading and installing the new version. Whereas, hardware upgrades are often a painful task involving firmware and driver updates, which if done incorrectly, could render the device inoperable. In some instances, an upgrade isn’t even an option. Instead, the entire appliance may need to be swapped out for the latest and greatest hardware, costing you even more money!


#5 Technical Support

Some 3rd party IP address management software providers offer full first year maintenance and free on-call technical support 24x7. And, subsequent annual renewals are just a fraction of the cost. Plus, there’s no need for technical personnel to visit the site for support and repairs. Hardware, on the other hand, does require first-hand physical check-ups and repairs if there are technical problems. You lose time logging the issue with the vendor, waiting for the vendor technician to visit the site, then waiting some more for the issue to be diagnosed and resolved. And, during this whole time, the appliance is down and IPAM capabilities are lost.


#6 Scalability

Software-based IPAM solutions scale much easier with additional pollers that can be installed onto any web server, unlike hardware-based solutions that often require additional (and costly) appliances to scale up.


#7 Integration with other Network Management Systems

Many software-based IPAM solutions can easily integrate with other network monitoring systems to help achieve comprehensive network management.


Hardware appliances are not as flexible or easily integrated with existing network monitoring solutions, resulting in the need to have different monitoring consoles and added administration.

SolarWinds IP Address Manager (IPAM) is a scalable, flexible and nimble IPAM software solution that offers simple yet powerful IP address management, DHCP and DNS management from a centralized web console. IPAM leverages an intuitive point-and-click web interface that allows you to easily manage, monitor, alert, and report on your entire IP infrastructure, identify and resolve IP conflicts, as well as efficiently plan for your future IP address needs, including IPv6 migrations.

It’s October!!


Yes, you are right: it is National Cyber Security Awareness Month (NCSAM)!



This year, we are celebrating the 10th anniversary of NCSAM, which is sponsored by the Department of Homeland Security and the National Cyber Security Alliance.

With the continuous advancement in technology, those trying to access your personal information are also growing smarter. The core intention of NCSAM is to educate online consumers and businesses about cyber security issues and the best practices to avoid them.



NCSAM at SolarWinds – What does it mean to you?

It’s that time of the year, when you look back at your security and safety precautions, understand the consequences of your actions and behaviors online, yet enjoy the benefits of the Internet. To help you do this, we have compiled our best resources that can be readily used to measure your current level of security.


Have you visited our all new NCSAM page yet??

For the month of October, we will be discussing cyber security issues every week as follows:




  • Week 1 (October 1-6): General Security
  • Week 2 (October 7-13): Being Mobile: Online Safety & Security
  • Week 3 (October 14-20): Cyber Education
  • Week 4 (October 21-27)
  • Week 5 (October 28-31): Cyber Security



Keep an eye on this space; there’s more coming!!

