For a long time, patch management has lacked a strategic approach. It need not remain a purely operational routine: adding a security perspective tells you which patches to apply and when. Yet even with that progress, we still come across a whole range of systems, even mission-critical ones, that run unpatched versions.


One major reason is that hundreds of patches are released every month, many of which apply to the operating systems and applications on your network. Take Java as an example: several mission-critical applications are tied to the specific Java version running on an organization's systems. If Java were patched every time an update appeared, you would risk breaking the applications that depend on that version. This means committing resources to test and assess the possibility of failure, which adds overhead for IT staff and delays timely patch deployment.


Suggested Approach to Patch Management

Patch management can become complex if you do not have a standardized process in place. Let us consider two common scenarios:

  1. Many organizations deploy every critical patch that comes their way, without assessing or testing its impact before deployment.
  2. Several applications live outside the core operating system, and these applications can often be bigger threat vectors than Windows or Internet Explorer.


Effective patch management is all about prioritizing while still maintaining balance. You can break your patch management process into three areas:

  • Patch prioritization
  • Patch deployment
  • Patch testing


To start, you need to develop an up-to-date asset inventory of all the applications and operating systems on your endpoints. It is highly recommended that you use an efficient patch management tool that alerts you to applications needing updates, so you can separate those that affect your systems from those that don't. Patching every application in your IT environment as each update appears is not a viable solution either. Instead, classify the applications needing updates by severity and assign them criticality levels so the most critical ones are addressed immediately.
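
The classification step above can be sketched in a few lines. This is a minimal, hypothetical example: the patch IDs, CVSS scores, and asset-criticality weights are made up for illustration, and a real tool would pull them from your vulnerability feed and asset inventory.

```python
# A minimal sketch of severity-based patch prioritization: weight each
# patch's severity (CVSS) by how critical the affected asset is.
# All field names and values here are hypothetical examples.

def prioritize(patches):
    """Order patches so the most critical ones are addressed first."""
    return sorted(patches,
                  key=lambda p: p["cvss"] * p["asset_criticality"],
                  reverse=True)

patches = [
    {"id": "KB-101", "cvss": 9.8, "asset_criticality": 3},  # mission-critical server
    {"id": "KB-102", "cvss": 6.5, "asset_criticality": 1},  # lab workstation
    {"id": "KB-103", "cvss": 7.2, "asset_criticality": 2},
]

for p in prioritize(patches):
    print(p["id"])  # → KB-101, KB-103, KB-102
```

Even a crude score like this gives you a defensible deployment order, rather than patching everything at once or in release order.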


Now that you are ready to apply the patches, it is important to do so without disrupting uptime. Automated patch management tools can be really handy, especially if your IT environment is spread across multiple locations, as they can help you with:

  • Patch deployment packages for your target systems with the latest security updates and bug fixes
  • Testing the patch before actual deployment and analyze post-implementation results
  • Scheduling automated patching of all target systems with the prepared patch package


Of course, managing patch updates is not an easy task, but taking the approach above can help you stay ahead of the curve!

NSA-related news in recent months warrants revisiting an earlier discussion of AES. In an article on the NSA's Bluffdale, Utah datacenter, I noted that despite the petaflop processing power of those systems, AES-192 and AES-256 are still currently unassailable encryption schemes: "For now, however, nobody and no system on Earth can decrypt AES if used with 192 or 256 bit key lengths. In fact, the US federal government requires AES with a 256 bit key length for encrypting digital documents classified as 'top secret.'"
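
A quick back-of-envelope check makes the point concrete. Assume, generously, a machine that tests 10^15 keys per second (petaflop-scale, counting one key trial per flop, which is wildly optimistic); the expected cost of brute force is half the keyspace:

```python
# Expected years to brute-force a key at an assumed 1e15 trials/second.
# The 64-bit row is included only for comparison; AES itself uses
# 128-, 192-, and 256-bit keys.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
RATE = 1e15  # assumed key trials per second (hypothetical)

for bits in (64, 128, 192, 256):
    trials = 2 ** (bits - 1)  # on average, half the keyspace
    years = trials / RATE / SECONDS_PER_YEAR
    print(f"{bits}-bit keyspace: ~{years:.2e} years")
```

A 64-bit keyspace falls in hours at that rate, while a 128-bit keyspace takes on the order of 10^15 years, which is exactly the gap the quoted claim (and Schneier's remark later in this piece) rests on.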


Advanced Encryption Standard (AES) is the symmetric-key algorithm standardized by NIST. According to cryptographer Bruce Schneier, the NSA's Bluffdale systems are not aimed at breaking high-bit AES.


The only serious caveat in computer science is quantum computing: Peter Shor's factoring algorithm could defeat public-key schemes such as RSA were quantum computing a reality (against symmetric ciphers like AES, quantum search would "only" halve the effective key length). And keep in mind that two physicists were awarded the Nobel Prize in 2012 for advances in capturing and manipulating ions, an important milestone toward building a quantum computer capable of running Shor's algorithm. How far away that is from happening is a physicist's dream worth discussing.



For now, let me point out that besides symmetric-key AES, many software applications use asymmetric, or public-key, encryption; and RSA's implementation of public-key encryption is the most widely used.


While the NSA's Bluffdale wolf may not be able to blow down the house of AES, he is huffing and puffing at a plenitude of RSA-1024 encrypted data held among the storage arrays. Bruce Schneier puts it both more cryptically (as it were) and plainly on his blog: "I think the main point of the new Utah facility is to crack the past, not the present. The NSA has been hoovering up encrypted comms for decades and it may be that the combination of a petaflop computer plus terabytes of data might be enough to crack crypto weaker than 128-bit (and especially 64-bit)".


I'll say more in another article about the recent Snowden revelations surrounding RSA encryption. In conclusion, I'll reiterate this: use AES-based tools for handling any data that you really need to keep secure. For any systems that monitor your other systems carrying AES-encrypted data, SNMPv3 is your best option. And many of you already know that SolarWinds tools for monitoring nodes and configuring network devices support SNMPv3.

As a network administrator, you likely need to generate custom reports on network usage frequently. Traditional tools provide basic functionality, but you still need the ability to retrieve any network statistic from your devices. Seasoned network administrators use OIDs, MIB browsers, and SNMP polling to get custom data from devices.


Before starting an in-depth probe, you need to understand where MIBs and OIDs are used and their basic applications. For instance, suppose you have a group of network devices that you want to monitor regularly. In any type of network, administrators look for key information from monitored devices at predefined intervals, such as device status or hardware details, and generally employ a monitoring tool to automate the process. For example, if you want to monitor the temperature of a Cisco® switch, you can easily poll it using SNMP. The information you poll is available as raw data in your network device. Each piece of raw data (e.g., hardware temperature) is called an ‘object’ and resides in a database within the device, the Management Information Base (MIB); every object (the device statistic you are trying to poll) is uniquely identified by an object identifier, or OID.


What is a MIB?

A MIB is a collection of information used for managing devices on the network. Network devices maintain a database, called the ‘MIB’ (or ‘MIB table’ or ‘MIB tree’), holding a set of ‘objects’. These objects store valuable information, such as memory status and hardware status, within the network device.


What is an OID?

An OID, or object identifier, is an identifier used to name and point to an object in the MIB hierarchy. As mentioned earlier, SNMP-enabled network devices (e.g., routers and switches) maintain a database of system status, availability, and performance information as objects, each identified by an OID. An object is a piece of quantifiable data that gives you statistical information for maintaining an effective network.

For instance, suppose you want to retrieve a managed system's uptime (sysUpTime) from your device. Your request uses OID When an SNMP manager asks for the value of that OID, the SNMP agent on your network device reads the request and follows the OID path to find the sysUpTime object. After locating it, the agent retrieves the value from the MIB table and reports it back to you.


Where are MIBs & OIDs used? And how do you manage them?

Considering that each network device has its own MIB table, with multiple OIDs inside it, you have well over a million OIDs to account for. It is practically impossible to know all those OIDs and MIBs while managing your network. Network management systems usually provide both basic and detailed performance information about your network devices. But to go the extra mile and poll custom MIB objects, administrators have to use MIB browsers that help find OIDs. Custom pollers can then poll those OIDs to retrieve any information, helping administrators monitor devices across multiple statistics.

Have you ever wondered what your dog is really saying? There’s lots of research going on to find out what dogs really think, and technology has been a key tool.


Some doggy thoughts are pretty simple to figure out – such as when Sheena rings the bell hanging from the back door to let you know she wants out, or when Ruger sits next to the pantry door whining to let you know he is ready for dinner – but other thoughts and wishes may require some clarification.


For example, wouldn’t it be great if your dog could tell you what it is he’s actually scared of, so you could either avoid or do something about it? Or why she always barks at the neighbors across the street? (I really want to know the answer to this one.)


The Nordic Society for Invention and Discovery (NSID), a Scandinavian professional group working on new applications for technology, is working on a prototype of a product called No More Woof. No More Woof consists of a headset with EEG sensors and a speaker that you place on the dog’s head. The sensors search for specific brain waves for thoughts such as “Who is that?” and transmit those brainwaves to a Brain-Computer Interface (BCI) microcomputer in the headset.


The BCI collects, analyzes, and translates the brain waves into English and then broadcasts those thoughts through the headset’s speaker. NSID expects to eventually have No More Woof outputs in French, Mandarin Chinese, and Spanish. 


The implications of this technology could be pretty amazing, not only for our companion dogs, but also for working dogs. How might this type of technology improve what bomb-sniffing dogs, assistance dogs, and police dogs can do?


And what if the product could be reconfigured for humans and translate our thoughts into dog language? Creating “…a machine that translates human thoughts into dog language” is “a task that seems quite a challenge, to put it mildly,” says NSID.

In one of the most calamitous cyber-crime attacks ever faced by organizations storing personally identifiable information and cardholder data, Target Corporation, one of America’s leading retail chains, has succumbed to a data breach that allowed hackers to compromise an estimated 40 million customers’ credit and debit card records.


When: Between November 27th and December 15th, 2013


Where: The attack impacted almost all of the 1800 retail stores that Target runs in the USA.


What was stolen: Around 40 million customers’ credit and debit card records stored in Target’s data warehouse. The stolen data included names, card numbers, expiration dates, and three-digit security codes, which could allow criminals to make fraudulent purchases almost anywhere in the world.


The Impact: Brian Krebs, in his blog Krebs on Security, notes that the type of data stolen (aka “track data”) allows cyber-thieves to create counterfeit cards by encoding the information onto any card with a magnetic stripe.

It is also theoretically possible that the hackers could intercept PIN data for debit transactions and create phony debit cards and use them to withdraw cash from ATMs.


It is estimated that Target could end up spending almost USD 100 million to cover legal costs and fix whatever went wrong. The company will probably have to reimburse banks and their customers for the unauthorized transactions made by hackers using the stolen card data. In short, victims like Target have to face:

  • Financial penalties and settlements with banks and customers for their losses
  • Damaged customer and industry reputation for failing to safeguard customer data
  • The impact of the heist reflected in the company’s stock price
  • Possible lawsuits from the affected parties
  • Lost time and productivity in dealing with all these issues
  • Cascading attacks on other dependent applications and servers from the initial compromise


How it happened: Target has not disclosed the mechanism of the attack or any clear motive behind it. Security experts are performing forensic analyses to determine the modus operandi of the breach incident. According to the Wall Street Journal, this theft “may have involved tampering with the machines customers use to swipe their cards when making purchases.”


It’s not just Target that’s being targeted. Any organization whose security data protection measures are not sophisticated and advanced enough to defend against hack attacks could end up being compromised.


What Should We Learn from This Incident?

Security is not the prerogative of a few chosen systems; the entire IT infrastructure is an arena where hackers and cyber-criminals can inflict damage and find inroads for intrusion. We need comprehensive defense mechanisms and security measures to stay vigilant and monitor all aspects of the IT infrastructure, including servers, employee workstations, network devices, security appliances, cloud infrastructure, and so on. Logs are a good place to start. Every device and system keeps logs that record activities and events as they happen. If we have immediate access to these logs and can interpret suspicious behavior patterns or policy violations, it becomes easier to identify imminent attacks. Log management is a necessary call to action and a good entry point for implementing an enterprise information security strategy.


More info on Target breach (Mar 2014): Target Hackers Broke in Via HVAC Company — Krebs on Security

Targeted attacks generally come with an intent to spy on confidential or sensitive business information such as financial data, proprietary product information, and so on. Typically, a highly targeted attack like spear-phishing goes after an individual or an organization via emails that contain maliciously crafted executables.


Of late, we hear a lot about “waterholing” attacks, which are becoming a preferred form of attack mainly because they are less labor-intensive. Attackers do not have to socially engineer you into visiting a compromised site; they just need to compromise a website in your area of interest (a waterhole) and wait for you to fall prey in the normal course of things.


How does the attack happen?

It is not a completely new kind of attack and may be classified as a new APT-style attack. Visitors to the waterhole are most likely redirected to a number of infected sites that attempt to exploit Microsoft XML Core Services or a Java vulnerability. If the attack succeeds, there is a high chance the visitor will be infected with a version of Gh0st RAT. One key reason these attacks succeed is that most victims visit the site out of personal interest, and in most cases take no security precautions. RSA coined the term “water holing” after the infamous VOHO attack campaign of July 2012.


Waterhole attacks expected to be on the rise

Based on several security reports, the number of waterhole attacks has been rising continuously over the last two years and is expected to increase further in 2014. To be really effective in network defense, and not just from a post-attack forensic analysis standpoint, you need to make sure that security event data is analyzed and correlated in real time. This means capturing threats in real time, correlating them in-memory, and responding to attacks in a timely manner. A good starting point is monitoring your logs for activity across your servers, firewalls, and endpoints.


Organizations need to be more vigilant and ensure that measures are taken to identify malicious activity on the network. You also need a risk mitigation plan that automates the response the moment an anomaly is identified. You can opt for a SIEM tool that uses automated responses to handle critical security events and shut down threats immediately.


Some key built-in responses that you might need for sure are:

  • Send incident alerts, emails, pop-up messages, or SNMP traps
  • Add or remove users from groups
  • Block an IP address
  • Kill processes by ID or name
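
The responses above boil down to a rule table: for each event type, run a list of actions. This is a bare-bones sketch of that idea; the event names and action behaviors are hypothetical, and a real SIEM would be driven by correlated events, not hand-built dicts:

```python
# Rule-driven automated responses: map event types to action lists.
# Actions here just return strings; real ones would call firewall or
# OS APIs. All names are illustrative.

def alert(event):
    return f"ALERT: {event['type']} from {event.get('src', '?')}"

def block_ip(event):
    return f"BLOCK: {event['src']} added to firewall deny list"

RESPONSES = {
    "port_scan": [alert, block_ip],
    "failed_login_burst": [alert],
}

def respond(event):
    """Run every response action registered for this event type."""
    return [action(event) for action in RESPONSES.get(event["type"], [])]

print(respond({"type": "port_scan", "src": ""}))
```

Keeping the mapping declarative is the useful part: adding a new response to an event type is a one-line change, and unknown events safely do nothing.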


Stay proactive, stay secure!

Just something quick for y'all here on a warm last day of Autumn in the Northeast...




You can read more about it here:


The short of it is this: It's a post-it note, presumably to a cleaning crew, telling them not to touch the surge protector under the desk.


How many things can you find wrong with such a scenario?


Even my kids got a kick out of this "fix", and they are 9 and 10 years old. I wonder what the average age is for the folks working in that office...


Have a great weekend!

How would you like 100 Gb/s over wireless?



In an experiment funded by Germany's BMBF, researchers have transmitted 100 gigabits of data per second (the equivalent of about 2.5 DVDs) over the grand distance of 20 meters. Before you are underwhelmed: a sister project transmitted 40 Gb/s to a station over 1 kilometer away in a field experiment earlier this year. The researchers are designing these experimental wireless systems to integrate seamlessly with existing fiber-optic networks.
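
The DVD comparison checks out with quick arithmetic, taking the common single-layer DVD capacity of about 4.7 GB:

```python
# Sanity-checking "2.5 DVDs per second": a single-layer DVD holds
# about 4.7 GB, i.e. 4.7 * 8 = 37.6 gigabits.
link_gbps = 100
dvd_gbits = 4.7 * 8
print(round(link_gbps / dvd_gbits, 1))  # → 2.7, close to the quoted 2.5
```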



How are they getting these amazing speeds?



The earlier field test at 40 Gb/s used higher frequencies (200-280 GHz) and experimental transmitters and receivers. The higher frequencies allow the increased speed and large volume of data, and the experimental transmitter/receivers are what made those frequencies usable.


To reach 100 Gb/s, researchers used a photonic method - a photon mixer generates the radio signals for the transmitter - to produce the high-frequency radio waves. With photonic methods, data from a fiber-optic system can be converted directly to a wireless signal. Of course, the experimental transmitter/receivers are still required to use the signal.

Take a look at their sadly paywall-ed article in Nature Photonics for more information.



This is amazing, and I hope someone makes this into a wireless standard pretty fast. I would definitely volunteer to be part of that beta.

It’s normal for IT admins to use several operating systems to support different applications and user groups, particularly because certain operating systems are better suited to a given user load, easier to troubleshoot, faster to update, easier to diagnose, more cost effective, and so on. At the same time, supporting multi-vendor operating systems has its own drawbacks compared to strictly running a Windows® or Linux® environment. Let’s say your end users are having an issue with an application running on a Windows server, and it’s taking a significant amount of time to diagnose and resolve. The lack of visibility and the downtime will put a dent in your business. Similarly, imagine critical applications running in a multi-vendor environment. You can’t troubleshoot all the issues at the same time; it’s impossible, and your senior IT staff and other executives are going to come after you with serious questions.


Let’s assume you require a multi-vendor operating system environment because of its capabilities and flexibility. To start, the question you need to ask yourself is, “Do I have the right tool to help me get through issues if they arise during peak business hours?” The right server management solution for monitoring multiple operating systems gives IT personnel the operating-system-specific insight they need to dig deeper and correct issues quickly. For example:

  • Get complete visibility into various operating system performance, metrics, and key statistics
  • Ensure continuous application performance and availability
  • Leverage diagnostic capabilities to get insights into the health of your commonly used environments such as Windows, Linux, and UNIX
  • Get notified through advanced alerting so you can find and fix the issue before your users start raising trouble tickets
  • Quickly identify key performance bottlenecks by looking at rogue processes and services
  • View error logs to find the reasons for operating system issues and crashes


In addition, a server management solution offers a range of capabilities that will help you monitor your server operating systems. Look at real-time processes and services that are hogging resources, performance counters, real-time event logs, a range of business applications, and top components with performance issues.


Drilling down into server operating systems will show you the critical metrics that are putting a strain on the server, operating system, and applications.

  • CPU Utilization: See how much load is being placed on the server’s processor at any given time. If the CPU value is consistently high, underperforming hardware may be affecting operating system performance.
  • Physical Memory: RAM is where the operating system stores the information it needs to support applications. A server, whatever its operating system and applications, will face issues when there’s inadequate memory.
  • Virtual Memory: When virtual memory usage increases, data moves from RAM to disk and back to RAM, putting physical disks under tremendous pressure.
  • Disk Performance: Disk performance is also a leading cause of server and application issues, partly because a large virtual environment puts a strain on servers’ disk I/O subsystems.
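
The checks above reduce to comparing sampled values against per-metric thresholds. This sketch shows the shape of that logic; the metric names and threshold numbers are arbitrary examples, not recommendations:

```python
# Compare sampled server metrics against example thresholds and
# return the ones that are out of bounds. All values are illustrative.

THRESHOLDS = {
    "cpu_pct": 90,          # sustained CPU load
    "mem_pct": 85,          # physical memory in use
    "swap_rate": 100,       # pages/sec moving between RAM and disk
    "disk_latency_ms": 20,  # average disk response time
}

def check(samples):
    """Return the metrics that exceed their thresholds, sorted by name."""
    return sorted(m for m, v in samples.items()
                  if m in THRESHOLDS and v > THRESHOLDS[m])

print(check({"cpu_pct": 97, "mem_pct": 60, "disk_latency_ms": 35}))
```

A monitoring tool does exactly this at scale, with the added value of baselining the thresholds per server instead of hard-coding them.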

2014 is around the corner, and every organization is gearing up to face new types of threats, malware attacks, and other cyber-crimes, including data loss and identity theft, which are getting more advanced and harder to tackle. Let’s dive into what industry experts are saying about emerging threats, new IT security technology to combat them, and novel ways to keep the corporate network and IT infrastructure secure and protected.


The statistics, trends, and information below reflect the industry perspective as analyzed by information security organizations, individual security practitioners, and market research institutions, and will give you a fair idea of what IT teams can expect in 2014.



Threats Expected to be on the Rise

  • Social engineering attacks will increase significantly in 2014 given the fact that people are becoming more careless in safeguarding their Internet account passwords
  • Mobile malware will get more advanced and difficult to find – especially malware attacking Android OS
  • Novel threats like ransomware (already rampant in Europe), water-holing and spear-phishing will find more victims
  • There will be a surge in cyber-espionage, including against government agencies
  • Vulnerabilities in unsupported software will be challenging to address. For example, Microsoft is ending support for Windows XP on April 8th, 2014, and Oracle stopped releasing public patches for Java 6 in February 2013.
  • Cloud APIs expected to open up more vulnerabilities
  • In its 2014 security predictions report, Trend Micro stated that one major data breach will occur every month next year, and that advanced mobile banking and targeted attacks will accelerate


Security Trends to Look Out For

  • Big data security will get more attention: the more data that is stored, the greater the need for security technologies such as security information and event management (SIEM).
  • Mobile data security, due to BYOD explosion, will be a major concern for IT security teams – which includes BYOD management, mobile malware, and data loss from employee-owned devices.
  • FireEye predicts that detecting advanced malware will take even longer in 2014 than it does now. Currently, on an industry average, detecting a breach can take 80 to 100 days, and remediating it can take 120 to 150 days.
  • By 2015, Gartner predicts that the demand for greater security intelligence sharing for context-aware systems will form a marketplace for brokering security data.
  • Data privacy paranoia will definitely grapple you. A recent survey conducted by Carnegie Mellon University reported that 86% of internet users have taken steps online to remove or mask their digital footprints—ranging from clearing cookies to encrypting their email.
  • Internet of Things (IoT) will become more hacker-friendly and security challenges will be on the rise as there will be more devices to protect.
  • Multi-factor data authentication and password protection will be a common trend among social media users to protect their credentials from hackers.
  • Microsoft forecasts an increase in cyber-crime related to the 2014 FIFA World Cup, with hackers looking for illegal ways to make money and take advantage of the excitement surrounding the tournament.
  • Key and certificate management is expected to become more popular, according to an article on Forbes. This has traditionally been a cumbersome process; 2014 will open the opportunity for enterprises to adopt certificate discovery and management tools for IT security.


IT Security Spend Forecast for 2014

  • IDC Government Insights predicts that overall IT security spending will rise from $5.9 billion in 2012 to over $7.3 billion in 2017, and in 2014, IT security spending by the Federal Government is expected to top $6.1 billion.
  • According to new research by Tech Pro Research, 41 percent of IT managers say they will put more money into IT security in the New Year.


The threat landscape is definitely expanding, adding more sophisticated and difficult-to-detect threat vectors each day. We need to be prepared for a year of IT boom in 2014 and, with it, many security exploits and cybercrimes to deal with. Here’s to 2014: a year of intelligent IT investment and security planning!

The rapid developments in the network world can sometimes seem overwhelming. Plus, along with these new developments comes increased complexity in the challenges a network admin faces every day. In order to keep up, it's important to fully understand how your organization's network is utilized. In this blog, we'll discuss how you can use NetFlow to improve network traffic visibility and get a better grasp on the increasing complexity of your network.


Why is network traffic visibility important?

Times have changed. No longer are networks used solely for internal communication. Instead, networks now carry business-critical applications that are expected to run 24x7. As networks have evolved, so too has the role of the network admin. Network traffic visibility has become increasingly important for effectively managing and troubleshooting issues, optimizing resources, streamlining operations, ensuring network security, and planning capacity. Any traffic information that’s hidden from the admin’s view can end up causing serious damage to business processes.


How can you improve network visibility with NetFlow?

If you’re using NetFlow-enabled devices in your network, you can easily analyze the traffic that enters and leaves your routers and switches. Flow technology allows you to extract and analyze detailed information about the traffic passing through your network interfaces. Raw flow data extracted from network devices can be normalized to help administrators comprehend network traffic: normalization presents flow data in a readable format and provides a quick understanding of traffic patterns. This helps when you’re troubleshooting issues by locating the user or application responsible.

You can gather flow data from network devices manually, or use tools to automate this cumbersome process. The latter approach eases NetFlow monitoring and reporting, but make sure the tool you choose offers these three capabilities:

  • Real-time traffic monitoring of network bandwidth utilization by users, protocols, and applications
  • Network forensics that allow you to analyze historical granular flow data for any time period
  • QoS monitoring for effective management  of all your priority traffic and detailed reporting on flows


Here are some of the benefits of using an automated process:

  1. Increased Network Performance – By understanding bandwidth consumption, network admins can plan for future networking needs, i.e. capacity planning. Forecasting removes most performance issue surprises.
  2. Higher Security – Network admins can easily identify and isolate what creates ‘bad’ or unwanted traffic. This gives you the ability to discover where attacks on the network originated.
  3. Ease of Network Monitoring – By closely monitoring traffic, admins get a better understanding of what’s happening in their network and can respond effectively to fluctuating network traffic.
  4. Improved Automation – Deploying tools to monitor NetFlow helps network administrators and staff by automating routine monitoring processes.


With NetFlow support, you can quickly troubleshoot application and performance problems in your network, helping you manage and control network complexity in your organization.


Hello Thwack

Posted by sqlrockstar Employee Dec 17, 2013

Just taking things out for a test drive. Pay no attention to this post. Nothing to see here.


Very excited to be a member of the SolarWinds family and looking forward to helping as many people as possible in 2014.


My next post will be better, I promise.

Network bandwidth capacity planning is all about balancing user performance expectations against capital budgets.

Organizations try to control costs by investing in the minimum bandwidth necessary to handle user needs. In many cases, however, this leads to congestion or poor application performance. Moreover, when bandwidth consumption peaks several times in a single day, network performance issues occur, user productivity drops, and the overall negative effects are felt across day-to-day business.

Challenges to Network Bandwidth Capacity Planning

  • Obtaining accurate information on bandwidth consumption in the network
  • Increasing WAN bandwidth utilization affecting application and user performance
  • Understanding how voice and video application bandwidth is being shared with normal business applications
  • Inability to measure service levels and obtain information on bandwidth consumption needs per application
  • High bandwidth costs falling heavy on budgets

Categorizing Network Traffic

  • Legitimate traffic: Typical business or work-related traffic is termed legitimate traffic. Companies build networks and invest in bandwidth to accommodate business-related traffic. If existing bandwidth is currently exceeding capacity and all traffic is genuine, then an upgrade may be necessary. Then again, genuine traffic like backups, file transfers, and VoD replication can be scheduled outside of working hours. By scheduling such traffic outside working hours, you'll be able to better manage bandwidth consumption and postpone the need for an immediate bandwidth upgrade.


  • Illegitimate traffic: This includes traffic generated by employee activities like video streaming, social networking, downloads, and so on. Most organizations don’t impose restrictions on access to these applications, but when bandwidth consumption for such activities affects bandwidth availability, something should be done.


  • Unwise traffic: This is bandwidth consumed by applications like backup tasks and data sync operations performed during working hours. Traffic that is legitimate but scheduled to run during working hours is ‘unwise’ traffic. Safely moving or scheduling this traffic to operate outside of working hours helps ensure that it does not consume significant bandwidth during peak work time.

The network engineer in charge of planning for network capacity determines what traffic falls into what category. Information on volume and content of traffic helps size the network accurately to forecast and budget bandwidth capacity needs.
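One way to put this categorization into practice is to tally traffic volume per category before deciding whether an upgrade is really needed. The sketch below is illustrative only; the flow records, application names, and category mappings are invented:

```python
# Hypothetical sketch: tallying traffic volume per category to support
# capacity planning. Application names and byte counts are invented.
from collections import defaultdict

# Example flow records: (application, bytes).
flows = [
    ("erp", 120_000), ("backup", 900_000), ("video_stream", 400_000),
    ("erp", 80_000), ("file_sync", 300_000), ("social", 150_000),
]

# The engineer in charge decides which applications fall into which category.
CATEGORY = {
    "erp": "legitimate",
    "backup": "unwise",        # legitimate, but scheduled during work hours
    "file_sync": "unwise",
    "video_stream": "illegitimate",
    "social": "illegitimate",
}

def volume_by_category(flows, mapping):
    """Sum bytes per traffic category."""
    totals = defaultdict(int)
    for app, nbytes in flows:
        totals[mapping.get(app, "uncategorized")] += nbytes
    return dict(totals)

print(volume_by_category(flows, CATEGORY))
```

If most of the volume turns out to be 'unwise' traffic, rescheduling it may postpone the upgrade entirely.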

NetFlow Can Help:

Cisco NetFlow helps extract granular information on traffic in the network. This detailed traffic information is useful to characterize and analyze traffic. With Cisco NetFlow you can:

  • Establish a structured and proactive approach to bandwidth capacity planning
  • Better manage deployments in the network, e.g.: MPLS
  • Increase employee satisfaction
  • Save money on bandwidth costs


Are you taking advantage of Cisco® NetFlow for network capacity planning? If so, share your comments below.

Starting with v12.0.0, Web Help Desk (WHD) can integrate with SolarWinds NPM, SAM, and NCM to create WHD tickets from SolarWinds Orion alerts. Integrating these platforms means that Orion sends WHD an alert for any network change it is configured to watch. These changes could be, for example, a SolarWinds-monitored network node that fails or becomes unmanaged, an interface failure, or anything else that you configure. If you configure WHD to recognize these Orion alerts and configure Orion to share its alerts with WHD, WHD can convert Orion alert data into WHD tickets. WHD can then assign those tickets to the techs who can fix the issues described in the Orion alerts.


Whenever a change occurs in SolarWinds Orion, the change triggers, updates, or resends an Orion alert. After you have established a SolarWinds Orion-to-WHD connection, Orion sends an HTTP request with the change to WHD. An example of this type of change could be a WHD ticket (based on an advanced alert) updating when someone adds a note or acknowledgement or changes the status from the Orion interface.



Configuring your Orion product and WHD to work together requires four basic steps:

  1. Enabling SolarWinds Orion to share alerts
  2. Adding the Orion server link to WHD
  3. Configuring alert filtering rules in WHD
  4. Testing alert filtering rules in Web Help Desk



For details on how you can set up Orion-to-Web Help Desk communications and turn Orion alerts into WHD tickets, check out the full version of Making SolarWinds Orion Alerts into Web Help Desk Tickets in the Web Help Desk Administrator Guide.

Many times when administrators face network bandwidth constraints in distributed networks, the first solution that comes to mind is to increase bandwidth speed. However, even after increasing speed, administrators sometimes find that applications are still running slower than expected. The solution? Implementing WAN optimization.

Let's take a look at one example of how to take advantage of WAN optimization. Assume an administrator is responsible for managing a remote site. He decides to deploy a WAN optimizer at each end of the link, between the server and the router: one at the main data center and one at the remote site. He knows that there are different ways WAN optimization can be used, but he also knows that the most important method is caching. One day the administrator needs to transfer a file from his primary data center to the branch office. In this situation, the optimizer caches the file sent the first time and then serves the cached data on the second request. As a result, the speed of the information delivered to applications or users at the remote site is greatly increased.
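The caching step described above can be sketched in a few lines of Python. This is an illustrative model only; the class name, fingerprinting scheme, and payloads are invented rather than taken from any vendor's implementation:

```python
# Minimal sketch of the caching idea behind WAN optimization: the first
# transfer ships the full payload; repeat transfers of the same content
# ship only a short fingerprint. All names here are illustrative.
import hashlib

class WanCache:
    def __init__(self):
        self.store = {}  # fingerprint -> payload, kept at the remote end

    def send(self, payload: bytes):
        """Return what actually crosses the WAN for this payload."""
        key = hashlib.sha256(payload).hexdigest()
        if key in self.store:
            return ("ref", key)       # cache hit: tiny reference only
        self.store[key] = payload
        return ("data", payload)      # cache miss: full payload travels

cache = WanCache()
first = cache.send(b"big file contents")
second = cache.send(b"big file contents")
print(first[0], second[0])  # data ref
```

The second request never re-sends the file body, which is where the perceived speedup at the remote site comes from.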

There are also various other ways WAN optimization can be used to enhance network performance. Some of these methods include application acceleration, TCP optimization, data compression, etc.

Here's a list of WAN optimization benefits:

#1 Access your files faster – If users share files frequently, caching can allow faster file access by storing the data transferred earlier. When caching occurs, only new data is saved. This helps you avoid excess network bandwidth utilization and increases performance.

#2 Increase your application’s performance – Once implemented, WAN optimization allows users to immediately feel the effect on their application’s performance. This is because bandwidth used for other data transfer can now be utilized by applications.

#3 Make your remote backup process easier – With WAN optimization, administrators can have a more reliable data recovery process. It allows them to reduce ‘time to respond’, resolve application downtime issues faster, and help users reclaim access to business critical applications with ease.

#4 Increase your network’s efficiency – With reduced latency through WAN optimization, administrators can solve network bottlenecks and optimize bandwidth utilization.

#5 Increase data transfer speed – By allowing users to quickly access data in remote offices, administrators can eliminate network issues caused by latency. WAN optimization also helps in faster data transfer between branch offices in different time zones.

#6 Reduce costs on network bandwidth – With methods like caching, application acceleration, and compression, you can use WAN optimization to reduce expenditures on network devices, bandwidth usage, and third party software.

With all these benefits, you can rely on WAN optimization without affecting your current network setup and stop worrying about network performance.

Any organization that depends on IT to run its business knows that network management is a crucial part of IT administration. Continuous network availability is considered the most important aspect of a successful business operation, irrespective of the company’s size. It’s common practice to implement network monitoring tools to maintain the network and troubleshoot issues. When enterprises invest significant time, money, and effort in deploying network management tools, they expect to realize the ROI quickly. But the reality is, not all implementations work. IT pros often struggle with managing the tool, which in turn has a huge impact on network availability and performance.


This blog explores more about the common pitfalls in network availability and performance monitoring. It discusses why tools that are implemented fail to fulfill the needs of network administrators and their growing environment.


#1 Ambiguous Network Management Needs – When you buy a network monitoring tool, you are responsible for examining your enterprise’s needs and defining your requirements. Most times, implementations fail because administrators don’t fully understand the organization’s networking needs well enough to find a suitable solution that matches their requirements.


#2 Implementing Complex Network Management Tools – A network availability and performance monitoring solution must eradicate the existing tedious processes. If administrators implement a complex solution on top of their unstructured processes, it makes troubleshooting and managing their network more difficult. This is one of the most common pitfalls and enterprises often resort to an expensive, time consuming approach to troubleshooting rather than employing simple and easy-to-use tools.


#3 Fragmented Network Monitoring Solutions – Managing disparate systems in your IT is never the best approach. Fragmented solutions create the majority of the network performance problems that are usually resolved in a war room environment. This approach is avoidable. Choosing an integrated solution can be an optimal approach for large network environments.


#4 Too many resources required to manage the tool – If your network management tool requires too many IT pros to manage, they are likely performing the complex and frustrating task of being an interface between the disparate network monitoring systems as opposed to having a single tool that automates many of the processes.


#5 Unused features in your tool – Another common pitfall in network availability and performance monitoring is having unused features in your tool and paying a hefty maintenance fee every year. Most times, network engineers avoid using that kind of tool because it is complex or they have found another tool that gets the job done faster and easier.


#6 Scalability – When implementing a network monitoring tool, always consider the future growth of the business. Companies often implement solutions that don’t scale enough to support the company’s large number of transactions. Consequently, as your network grows, your administrators are forced to perform many tasks with the tool’s limited resources and capabilities.


#7 Unfocused IT processes can affect your core business – Implementing a network availability solution that doesn’t focus on business transactions will fail to provide business managers with insight into their operations. The best approach is to deploy the tool in a preproduction environment to validate the benefits realized from the tool.


You can learn more about the importance of network availability and performance monitoring here.

Microsoft Lync is a collaboration application commonly used by organizations that run a Windows environment. Employees in large organizations often use Lync to send chat messages, set up internal and external conference calls, share screens with remote users, and record calls. All this makes Lync a critical application that system administrators need to monitor continuously to prevent it from failing.


When Microsoft released Lync Server 2013, it shipped with built-in monitoring functionality. All you have to do is enable monitoring on the Front-End Server. If you’re using a previous version of Lync, then you probably have a dedicated monitoring server running separately; you don’t need one with Lync Server 2013. The built-in monitoring functionality in Lync 2013 gives you access to a variety of reports based on Lync’s call detail recording. Some of the key reports are:

  • User activity report: Get a summary of what users are sharing via the instant messaging window in Lync, including multimedia files and other file transfer sessions.
  • Conference summary report: Get a summary of conference calls involving more than 3 people. Look at conference call times and other conference activities.
  • IP phone inventory report: Generate a report that shows user logins and which IP phones they’re mapped to. Get a complete list that shows which users are active on Lync and which users haven’t logged on to Lync. Have an up to date phone inventory based on user login patterns and determine active and inactive users.
  • Call admission control report: This report shows all conference and user activities. Call admission control alerts the call leader whether video is permitted on a particular call. In some instances, if many users join a video call, a network bottleneck can result. Depending on the type of session, you can also estimate your network speed and availability.


These reports give you the overall system usage summary for all your users. As an IT admin, you can also answer key questions raised by senior IT managers about Lync performance, whether it’s the number of calls made in a given hour, whether call recordings are taking a long time to process, how many users are connecting from outside the office with or without a mobile device, or whether latency issues are present.


The built-in monitoring functionality only tells you about the performance of the Lync server itself. To drill down deeper and diagnose and troubleshoot when issues arise, you need application performance monitoring software that shows you where the bottlenecks and connection delays in Lync are, along with incoming and outgoing requests, messages, responses, and more that goes beyond basic Lync performance. All these components in Lync Server 2013 are critical to monitor. With them, you can assess the status and overall health of services as well as the performance of the Microsoft Lync Server 2013 Front End.

At some point, all networks deploy devices on a small or large scale. This could be a result of network expansions, device replacements due to end-of-life, vendor changes, hardware upgrades, and even cases of replacing faulty or failed devices. Below are a few tips that can help you simplify the process of configuring devices and getting them up and running.


#1 Create an inventory of existing and new devices: Prior to device deployment, it’s important to perform a detailed inventory of all devices in your network—including new devices awaiting induction into the network. This not only helps meet compliance requirements, but helps the network engineer stay in control by knowing what devices are in operation and those nearing end-of-life. Maintaining an up-to-date inventory helps you make informed decisions rather than taking actions based on speculations. Device serial numbers, interface and port details, IP addresses, ARP tables, installed software, and other details are recorded and updated.


#2 Backup and archive all device configurations: Ensure that all device configurations have been backed up. With device replacements, instead of creating the new configuration from scratch, the backed up configuration can be uploaded onto the new device. This reduces network downtime and speeds up the deployment process. Pushing configurations from an archive or backup is convenient and useful especially when the configuration involves many devices.


#3 Create baseline templates for new devices and enable SNMP and flow technologies: While preparing new devices for the network, create base configurations and prepare each device for deployment. In some cases, it might be a good idea to baseline the entire network. These baseline configurations may be archived for future use as well. Once you complete the initial configuration, you can make device-specific changes within the configuration to arrive at the running configuration. Once the base scripts are ready and uploaded onto the device, enable technologies like SNMP, NetFlow, J-Flow, or sFlow, whichever is appropriate for the type of device being used.
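A baseline-plus-overrides workflow like this can be approximated with simple templating. The commands, field names, and values below are illustrative assumptions, not device-specific guidance:

```python
# Hedged sketch: generating per-device configs from a shared baseline
# template, then substituting device-specific values. The template text
# and fields are invented for illustration.
from string import Template

BASELINE = Template(
    "hostname $hostname\n"
    "snmp-server community $community RO\n"
    "ip flow-export destination $collector 2055\n"
)

def render_config(hostname, community, collector):
    """Fill in the baseline template for one device."""
    return BASELINE.substitute(
        hostname=hostname, community=community, collector=collector
    )

cfg = render_config("branch-sw-01", "public", "10.0.0.5")
print(cfg)
```

Keeping the baseline in one template means every new device starts from the same known-good state, and only the substituted fields differ.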


#4 Mount devices and deploy device configurations: Once the devices are physically mounted and plugged-in to the network, you can begin to deploy device-specific configurations. You can apply these configurations to the required devices individually or in bulk to a set of devices. 


#5 Ensure devices run smoothly and revert changes in case of an issue: Once your configuration is up and running, check all operational aspects on the network to make sure that there are no hiccups. In the event of a setback or a problem in the network, revert to an earlier state and cancel the deployment changes. If your device management software gives you the option of rolling back your configuration changes, you can quickly fix issues due to a bad change. Additionally, check for any policy violations or non-compliances and fix them before more things go wrong.


These are a few important aspects that are must-haves on your device deployment checklist. Do share with us your views and comments on this topic.

If you're already a SolarWinds user, you're pretty well aware that we at SolarWinds are keenly interested in making your IT job easier. If you're not currently a SolarWinds user, well, what are you waiting for? Get on our demo or download some sweet free tools and check us out! We really do want to make your IT life easier. As further proof, check out our IT Management Software Solution Finder, too.


See? Lots of good stuff for making your IT management life easier.


Even if you've got great tools, though, refusing to use them is the absolute quickest, most sure-fire way to achieve that easiest of all IT management jobs: Unemployed Former IT Manager.


We want to make life easier for you, just not that much easier.


So, in that spirit, I want to share with you an article I've recently come across listing some truly great examples of tool refusal. It's hilarious in the always funny "I-am-so-very-glad-that-did-not-happen-to-me" sense. Call it Profiles in Really Poor IT Management.


For those of you who actually need to be working right now, though, here's the quick-and-dirty, but not nearly as much fun, tl;dr, with some links to SolarWinds products that can help you make good on the suggestions:

  1. Maintain backups of all your data.
  2. Don't abuse your access privileges.
  3. Don't lie about your screw-ups; own them honestly.
  4. NSFW is not merely a suggestion. (Network Performance Monitor and NetFlow Traffic Analyzer)
  5. Document. Document. Document. (IP Address Manager)
  6. Develop and test a disaster-recovery plan. (Failover Engine)
  7. Sometimes, in the end, getting fired is not such a bad deal after all...


And, since it's the end of the year, let me send you over to another source of IT schadenfreude. Remember, your life could always be worse; you could have been involved in one of these projects this past year.


Enjoy the end of 2013! I'll see you again in 2014...

We are pleased to announce that NetFlow Traffic Analyzer (NTA) version 4.0 is now available.


Using the flow technology built into most routers, you can see who and what is using your network traffic and bandwidth, and how.


The notable benefits of version 4.0 include:

  • Larger networks will benefit from dramatically improved flow processing (at least 5 times more flows than NTA 3.x)
  • All network sizes will benefit from more accurate network troubleshooting data, thanks to the ability to retain highly granular (1-minute) flow data for months instead of hours
  • Federal customers will benefit from FIPS compatibility


To check out SolarWinds NetFlow Traffic Analyzer version 4.0, download a free, fully functional 30-day trial.

It’s that time of the year when so many of your IT staff go on vacation. When you know you’ll be under-staffed to handle service requests, it’s wise to implement some best practices that help your IT teams avoid being overwhelmed by help desk tickets while still delivering timely, efficient IT support. You can apply the three-prong strategy of PEOPLE, PROCESS, and TECHNOLOGY to institute help desk best practices for the holiday season.




People

  • It is critical to set expectations for your available IT technicians to wear multiple hats in IT administration roles, especially when teams are short-staffed
  • Make sure your help desk teams know their SLA times (for ticket assignment, closure, and escalation) as they work on end-user service tickets
  • Also try to estimate the volume of service requests based on the number of employees and service request creation trends.



Process

  • Implement process automation to auto-assign trouble tickets to help desk technicians based on pre-defined business logic. Ticket assignment is typically a manual procedure that requires analysis and consumes time; automating this step makes the entire process more efficient.
  • Implement automated ticket escalation for service requests that aren’t closed by their SLA time.
  • Take advantage of reusability: automate service request creation for recurring help desk tasks (such as provisioning IT assets to a new employee) to improve overall help desk operational efficiency.
  • When change approvals are part of your change management workflow, institute an approval procedure and communication channel so that IT support doesn’t get delayed.
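The auto-assignment step above can be sketched as a small rule table with a round-robin fallback. The tech names, categories, and rules here are hypothetical, not any product's built-in logic:

```python
# Illustrative sketch of rule-based ticket auto-assignment: route by
# category when a rule matches, otherwise round-robin across a pool.
from itertools import cycle

RULES = {"network": "alice", "email": "bob"}   # category -> dedicated tech
fallback = cycle(["carol", "dave"])            # round-robin pool for the rest

def assign(ticket):
    """Pick a tech for one ticket using the rules, else the pool."""
    return RULES.get(ticket["category"]) or next(fallback)

tickets = [
    {"id": 1, "category": "network"},
    {"id": 2, "category": "printer"},
    {"id": 3, "category": "printer"},
]
assignments = [assign(t) for t in tickets]
print(assignments)  # ['alice', 'carol', 'dave']
```

Even this trivial policy removes the manual triage step that slows ticket handling down during the holidays.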



Technology

  • Asset inventory discovery and management is a useful task for the holidays, when you have some time on your schedule. Use the automated asset inventory tools in your help desk solution to discover assets and associate them with service requests, so they can be tracked easily and their IT support history identified.
  • Do your IT administration and support remotely as much as possible to avoid visiting workstations one by one. Try remote control tools that can initiate a remote connection with end-user systems right from your help desk interface.
  • A help desk knowledge base is another simple yet powerful technology that can save you a ton of time when looking for fixes and issue resolutions, especially when your IT admin peers are out on vacation. Add commonly encountered IT support issues to the knowledge base and use it whenever required.


Help desk and ticket management need not be complex, whether it’s the holiday season or any other time of the year. With the right people, process, and technology in place, you can rest assured your help desk services will be delivered more efficiently and with better customer satisfaction.

NetFlow has become an industry standard for traffic monitoring and is supported on various platforms. Enabling NetFlow on devices helps characterize flows and understand traffic behavior. You can export flow information to perform traffic analysis, bandwidth capacity planning and security analysis.

Benefits of using NetFlow

  • Understand the impact of network changes and services
  • Improve bandwidth usage and application performance
  • Reduce IP service and application costs
  • Detect and classify security incidents

How does NetFlow help with Security Analysis?

Delve into traffic information by analyzing NetFlow data to obtain information on:

  • Who is talking to whom - source and destination IP addresses
  • Over what protocols and ports - Layer 3 protocol type, source and destination ports
  • At what speed
  • For what duration
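A flow collector answers the "who is talking to whom" question by aggregating records on these fields. Here is a minimal Python sketch; the addresses and byte counts are invented for illustration:

```python
# Sketch of aggregating flow records into "who talks to whom" pairs
# and picking the top conversation by volume. All values are made up.
from collections import Counter

flows = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "bytes": 900},
    {"src": "10.0.0.2", "dst": "10.0.0.9", "bytes": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.9", "bytes": 700},
]

pairs = Counter()
for f in flows:
    pairs[(f["src"], f["dst"])] += f["bytes"]

# Top conversation by total bytes
top = pairs.most_common(1)[0]
print(top)  # (('10.0.0.1', '10.0.0.9'), 1600)
```

The same aggregation, grouped instead by port or protocol, answers the "over what protocols and ports" question.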

It’s important to identify any unusual traffic patterns compared to previously collected network data or baselines. These traffic patterns are commonly referred to as anomalies.

NetFlow helps identify anomalies by providing high level diagnostic information on traffic flows and changes in network behavior. You can classify attacks by noticing small size flows to the same destination, which may also be a sign of botnet communication and propagation. NetFlow data can be used to deduce information on what is being attacked, where the attack is coming from, how long the attack has occurred, the size of packets used, and much more.

Usually, DoS attacks flood the network with packets from an untrusted source toward a single destination. These packets are often of an unusual size. An attack can be detected by monitoring changes in flow counts on the edge routers. NetFlow data can be collected and correlated to identify a DoS attack in progress.

Delving a bit deeper, one way to use NetFlow to identify anomalous behavior is to establish a baseline that describes 'normal' network activity. This baseline can be set according to some historical traffic pattern. Then, all traffic that falls outside the scope of this baseline pattern will be identified as anomalous. Furthermore, flow data that has exceptionally high volume, especially those that are much higher than the established baseline, would demand attention as ‘unusual activity’.
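A baseline of 'normal' activity can be as simple as the mean and standard deviation of historical flow counts, with anything well above that band flagged as unusual. The history values and threshold multiplier below are illustrative assumptions:

```python
# Minimal sketch of baseline-based anomaly flagging: counts far above
# the historical mean are treated as 'unusual activity'.
import statistics

history = [100, 110, 95, 105, 120, 98]   # past flow counts per minute
baseline = statistics.mean(history)
spread = statistics.stdev(history)

def is_anomalous(count, k=3.0):
    """Flag counts more than k standard deviations above the baseline."""
    return count > baseline + k * spread

print(is_anomalous(112), is_anomalous(400))  # False True
```

Real deployments would refresh the baseline over time and track it per interface or per destination, but the core comparison is the same.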

In conclusion, NetFlow certainly provides another valuable layer of threat detection and insight, but it's not your primary threat detection and mitigation solution. Data collected from NetFlow helps you analyze, detect, and address security blind spots, and highlights DDoS attacks, botnets, top conversations, streaming, and other hidden anomalies.

How are you using NetFlow to aid in security analysis for your network? Tell us in your comments below.



Posted by cyrussw Dec 6, 2013

Security is always a good thing when it comes to websites and confidential information. A website that uses Hypertext Transfer Protocol (HTTP) can be more vulnerable to attacks than one using HTTPS, which is HTTP integrated with the Secure Sockets Layer (SSL). SSL encrypts the HTTP data before sending it to the destination, so one of the main advantages of HTTPS is that information travels encrypted from source to destination; plain HTTP offers no such protection. The SolarWinds EOC website allows users to use HTTPS, securing transmissions from the web server to the end user.


The following explains how to configure HTTPS on Windows Server 2003, 2008, and 2012 for the EOC website:


To enable SSL Connections to the SolarWinds EOC web site in Windows Server 2003:

  1. Log on as an administrator to your SolarWinds EOC server.
  2. Click Start > Control Panel > Administrative Tools > Computer Management.
  3. Expand Services and Applications > Internet Information Services (IIS) Manager > Web Sites.
  4. Click SolarWinds EOC and then click Action > Properties.
  5. Click the Web Site tab.
  6. Confirm that SSL port is set to 443.
  7. Click Advanced.
  8. If the Multiple SSL identities for this Web site field does not list the IP address for the SolarWinds Web Console with SSL port 443, complete the following steps:

a.   Click Add and then select the IP address of the SolarWinds Web Console.

    • As it was set initially in the Configuration Wizard, this option is usually set to (All Unassigned). If the IP address of the SolarWinds Web Console was not initially set to (All Unassigned), select the actual, configured IP address of the SolarWinds Web Console.

b.   Type 443 as the TCP port, and then click OK.

   9. If you want to accept only HTTPS connections, complete the following steps:

    1. Click the Directory Security tab.
    2. Click Edit in the Secure communications section.
    3. Select Require secure channel (SSL).
    4. Select Accept client certificates in the Client certificates area.
    5. Click OK on the Secure Communications window.

  10. Click Apply and then click OK to exit.


To enable SSL Connections to the SolarWinds EOC web site in Windows Server 2008:

  1. Log on as an administrator to your SolarWinds EOC server.
  2. Click Start > Administrative Tools > Internet Information Services (IIS) Manager.
  3. In the Connections pane, expand the name of your SolarWinds EOC server, and then expand Sites.
  4. Select your SolarWinds EOC web site, and then click Bindings in the Actions pane on the right.
  5. Click Add in the Site Bindings window.
  6. In the Type field, select https, and then confirm that the Port value is 443.
  7. In the SSL Certificate field, select a certificate, and then click OK.
  8. Click Close on the Site Bindings window.
  9. In the center pane, double-click SSL Settings in the IIS group.
  10. Select Require SSL, and then click Apply in the Actions pane on the right.
  11. In the Connections pane, select your SolarWinds EOC web site.
  12. Click Restart in the Manage Web Site group on the right.


To enable SSL Connections to the SolarWinds EOC web site in Windows Server 2012:

  1. Log on as an administrator to your SolarWinds EOC server.
  2. Click Settings > Control Panel > Administrative Tools > Internet Information Services (IIS) Manager.
  3. In the Connections pane, expand the name of your SolarWinds EOC server, and then expand Sites.
  4. Select your SolarWinds EOC web site, and then click Bindings in the Actions pane on the right.
  5. Click Add in the Site Bindings window.
  6. In the Type field, select https, and then confirm that the Port value is 443.
  7. In the SSL Certificate field, select a certificate, and then click OK.
  8. Click Close on the Site Bindings window.
  9. In the center pane, double-click SSL Settings in the IIS group.
  10. Select Require SSL, and then click Apply in the Actions pane on the right.
  11. In the Connections pane, select your SolarWinds EOC web site.
  12. Click Restart in the Manage Web Site group on the right.

A fiery debate arises when discussing flow technologies, i.e., NetFlow vs. sFlow. As we all know, existing flow technologies are a vital and viable way to troubleshoot network and bandwidth issues. While network engineers contemplate which one is better, NetFlow and sFlow serve as examples of how flow technologies are used in modern network devices.


NetFlow and sFlow – What are they?

NetFlow is a network protocol developed by Cisco Systems for collecting IP traffic information. It’s supported on most platforms and has become the universally accepted standard for traffic monitoring. NetFlow answers the questions of who (users), what (applications), and how network bandwidth is being used.  By understanding NetFlow on a deeper level, you can probe further into the insights and everyday uses that you haven’t thought about.

sFlow is a technology for monitoring network devices that uses sampling (to achieve scalability) and is applicable to high-speed networks. sFlow performs two types of sampling: random sampling of packets or application-layer operations, and time-based sampling of counters. These flow samples and counter samples are sent as sFlow datagrams to an sFlow collector, which analyzes and reports on network traffic.
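Because sFlow only samples 1 in N packets, a collector scales the sampled counts back up to estimate totals. A minimal sketch, with an assumed sampling rate and byte count:

```python
# Sketch of how 1-in-N packet sampling (as in sFlow) scales observed
# samples back up to estimate total traffic. Numbers are illustrative.

def estimate_total_bytes(sampled_bytes, sampling_rate):
    """With 1-in-N sampling, each sampled byte represents ~N bytes."""
    return sampled_bytes * sampling_rate

# 2,500 bytes observed in samples at a 1-in-400 sampling rate
print(estimate_total_bytes(2_500, 400))  # 1000000
```

This scaling is why sFlow stays cheap on high-speed links, and also why it produces estimates rather than the exact accounting NetFlow provides.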


NetFlow and sFlow – What are the differences?

Most network devices support either NetFlow or sFlow. Knowing what administrators want from network and bandwidth management helps when streamlining network traffic and troubleshooting. NetFlow helps you manage IP-based traffic information, whereas sFlow can capture non-IP traffic by working at Layer 2 and Layer 3 interfaces, sampling most of your network traffic.

For instance, if there’s a sudden increase in network traffic, NetFlow can keep the load minimal by summarizing massive packet volumes into relatively few flows, while sFlow’s 1:N sampling adds extra workload. On the other hand, sFlow may miss some traffic because of the sampling method it employs, while NetFlow has the advantage of accounting for all network traffic. NetFlow can also help with network forensic analysis and threat detection. sFlow proponents claim it is better because sampling is processed at the core hardware level.

Most available network management tools support both NetFlow and sFlow. If you’re choosing a tool, look for features that support flow technologies, such as NetFlow, sFlow, jFlow or IPFIX.


NetFlow and sFlow – What are their applications?

Both NetFlow and sFlow are used to generate more visibility into an organization’s network. Fundamentally, they eliminate guesswork when it comes to managing network services and making decisions. Typically, NetFlow and sFlow are used for:

  • Analyzing network and bandwidth usage by users, applications
  • Measuring WAN traffic and generating statistics for creating network policies
  • Detecting unauthorized network usage - Security and anomaly detection
  • Confirming appropriate bandwidth is allocated using QoS parameters
  • Diagnosing and troubleshooting network problems

With sFlow, packet forwarding information helps analyze the most active routes and specific flows carried by these routes in your network. Understanding these routes and flows creates a possibility for administrators to optimize routing and improve network performance.

It doesn’t matter which type of flow technology you are using. As long as you use a network monitoring tool that supports both, you can focus more on how to manage your resources and optimize your network. Utilizing the best tools available is the key to managing loads of traffic data.

Hackers Steal 2 Million Usernames & Passwords from Social Networking Sites including Google, Facebook, Twitter and LinkedIn

This is a whale of a heist. All the social network stalwarts have been outsmarted by hackers. Security experts at Trustwave’s SpiderLabs have discovered a trove of 2 million hacked social network user account credentials – usernames and passwords – during the investigation of a server in the Netherlands that cyber criminals use to control a massive network of hacked computers known as the ‘Pony botnet.’


Experts say this massive data breach was the result of key-logging software maliciously installed on a huge number of computers across the globe. The victims were mostly from the U.S., Germany, Singapore, Thailand, and the Netherlands.


Hack Analysis By The Numbers


What Was Stolen?

  • ~1,580,000 website login credentials stolen
  • ~320,000 email account credentials stolen
  • ~41,000 FTP account credentials stolen
  • ~3,000 Remote Desktop credentials stolen
  • ~3,000 Secure Shell account credentials stolen


From Where?

  • 318,000 Facebook accounts
  • 70,000 Gmail, Google+ and YouTube accounts
  • 60,000 Yahoo accounts
  • 22,000 Twitter accounts
  • 9,000 Odnoklassniki accounts (a Russian social network)
  • 2,400 to 8,000 ADP accounts
  • 8,000 LinkedIn accounts


Compromised Passwords: Chart Toppers

A SpiderLabs blog showed that the most-common password in the set was ‘123456,’ which was used in nearly 16,000 accounts. Other commonly used credentials included ‘password,’ ‘admin,’ ‘123’ and ‘1.’
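The kind of tally behind these "chart topper" figures is straightforward to reproduce; here is a sketch using a small made-up password list rather than the real dump:

```python
# Sketch: tallying the most common passwords in a leaked credential dump.
# The sample list below is invented for illustration.
from collections import Counter

def most_common_passwords(passwords, n=3):
    """Return the n most frequent passwords with their counts."""
    return Counter(passwords).most_common(n)

leaked = ["123456", "password", "123456", "admin", "123456", "password"]
print(most_common_passwords(leaked, 2))
# [('123456', 3), ('password', 2)]
```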


[Chart: Compromised Passwords]
Some Password Protection & Security Tips [1]

  • Use a mix of capital and lowercase letters, and make passwords at least 8 characters long
  • Use a combination of letters, numbers, and symbols such as the exclamation mark
  • Do not use words found in the dictionary
  • Avoid easy-to-guess words, even if they aren’t in the dictionary
  • Do not use your name, company name, or hometown, or pets’ and relatives’ names
  • Stay away from birth dates and zip codes that can be looked up
  • Use a password strength checker to get a sense of how strong your password is
  • Always log out of a site when you’re finished with it
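The first few tips above can be expressed as a simple check. This is a minimal sketch with an illustrative four-word blocklist, not a substitute for a real strength checker or a full dictionary:

```python
# Sketch: a minimal password check applying the tips above (length,
# mixed case, digits, symbols, no common dictionary words).
import string

# Tiny illustrative blocklist; a real checker would use a full dictionary.
COMMON_WORDS = {"password", "admin", "welcome", "letmein"}

def is_reasonable_password(pw):
    """True only if pw satisfies the basic rules listed above."""
    return (
        len(pw) >= 8
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
        and pw.lower() not in COMMON_WORDS
    )

print(is_reasonable_password("password"))      # False: no caps, digits, or symbols
print(is_reasonable_password("Tr!ck3t-Lane"))  # True
```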


Especially for enterprises and organizations that deal with secure data, it’s wise to invest in security solutions that monitor your entire IT landscape and provide real-time security intelligence.


Even Dilbert Can Guess Your Password!


Virtual storage I/O latency will impact VM performance because heavy read or write operations can cause performance issues for all the resources in the datastore. To resolve such issues, you need a common view into both the virtual and the storage environments to help you identify the root cause.


To effortlessly troubleshoot I/O latency issues in your virtual environment, your virtualization management software should have a datastore I/O latency widget which lists the top datastores with high read or write values. When you drill down to a latency issue within the datastore, you will be able to see several key indicators that show the current performance of the datastore: for example, IOPS, I/O latency, disk usage, capacity, disk provisioning information, and any alerts that have been raised for that datastore.


Monitoring the I/O latency metric is crucial. Generally, when you see a read or write value greater than 20 milliseconds, your datastores will more than likely experience latency issues. Drilling down into the datastore will show you a relationship or environment map informing you whether the datastore is affecting another resource’s performance. It’s possible that multiple VMs are hogging this storage resource, causing latency issues. This occurs when VMs contend with each other for storage resources, causing performance issues for other VMs and the storage system. When VMs have performance issues, the critical applications running inside them will also end up with a bottleneck. All of this ultimately affects end users.
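As a sketch of that ~20 ms threshold logic, here is a small function that flags the offending datastores. The metric and field names are illustrative, not any particular monitoring tool's API:

```python
# Sketch: flagging datastores whose read/write latency exceeds the
# ~20 ms guideline mentioned above. Field names are illustrative.
LATENCY_THRESHOLD_MS = 20

def flag_latency(datastores):
    """Return names of datastores whose read or write latency is too high."""
    return [
        ds["name"]
        for ds in datastores
        if ds["read_ms"] > LATENCY_THRESHOLD_MS
        or ds["write_ms"] > LATENCY_THRESHOLD_MS
    ]

stats = [
    {"name": "ds-prod-01", "read_ms": 35.2, "write_ms": 12.1},
    {"name": "ds-prod-02", "read_ms": 8.4, "write_ms": 9.7},
]
print(flag_latency(stats))  # ['ds-prod-01']
```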


You can take the following measures to regulate I/O latency issues so your VM performance is not affected:

  • Allocate I/O resources to VMs based on their workload.
  • If you have a critical application or a latency heavy application, ensure that’s the only application running in the VM. Associate a VM that has a critical application running in it to a single datastore to prevent high I/O latency.
  • If the VM is not utilizing the allocated storage resource, consider routing unused resources to other VMs which have I/O contention issues.


It should be noted that not all storage systems perform the same. That being said, set appropriate threshold values based on the performance of each storage device to avoid latency issues. This makes it easy to manage a VMware datastore with high I/O latency. In the video provided at the end of this blog, vExpert Scott Lowe shows you how to drill down from a datastore. You’ll see how the datastore is mapped to hosts, clusters, and VMs in your environment and look at their dependencies. Once you have this mapped, it’s easy to find related resources that are either causing the problem or being affected by it. Leverage a virtualization management tool to avoid storage I/O latency issues.


It’s already happening – the IPv4 apocalypse. There aren’t many IPv4 addresses left, even though the total IPv4 pool is supposed to be 4.3 billion addresses. It’s true: we are having an IP explosion. Let’s take a look at some stats and future predictions.

  • There are only 4.3 billion IPv4 addresses in the world. That may sound like an astronomical figure, but we have almost run out of IP rations.
  • There are 7.1+ billion people on earth. This matters because we are adding more IP endpoints each day.
  • By 2016, there will be around 20 billion devices online
  • Gartner predicts that the Bring Your Own Device trend will result in the doubling or tripling of the mobile workforce
  • Cisco network traffic forecast predicts 1.4 zettabytes of traffic by 2017


All of this points in the same direction: IPv4 exhaustion is happening faster than anticipated, and we have to start preparing for IPv6 with future-ready network strategies.


The Internet Assigned Numbers Authority (IANA) has distributed approximately 16.8 million IPv4 addresses each to all 5 regional Internet registries (RIRs) in the world. EMEA and APAC have already depleted their IPv4 addresses, and the other RIRs are getting closer to exhaustion.

[Image: IPv4 exhaustion status across the regional Internet registries]



IPv6 to the Rescue

IPv6, the new 128-bit variant of the IP address (IPv4 was just 32 bits), is the next resort for all companies, government agencies, and service providers. There are 340 trillion trillion trillion (3.4 × 10^38) IPv6 addresses in the world, so it’s virtually impossible to run out of them any time soon – at least for the next 20-odd years.
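The address-space arithmetic behind these figures is easy to verify:

```python
# Sketch: the IPv4 vs IPv6 address-space arithmetic from the text.
ipv4_total = 2 ** 32   # 4,294,967,296 -- the ~4.3 billion figure
ipv6_total = 2 ** 128  # ~3.4 x 10^38 -- "340 trillion trillion trillion"

print(f"IPv4: {ipv4_total:,}")                  # IPv4: 4,294,967,296
print(f"IPv6: {ipv6_total:.2e}")                # IPv6: 3.40e+38
# Even divided across 7.1 billion people, each person's share is enormous:
print(f"Per person: {ipv6_total // 7_100_000_000:.2e}")
```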


All this is said only from the perspective of IP space availability. IPv6 has more far-reaching benefits than IPv4 in terms of security, operational efficiency, and overall network management.

  • More efficient address space allocation
  • Direct and end-to-end addressing. No need for network address translation (NAT).
  • Fragmentation only by the source host
  • More efficient routing
  • More efficient packet processing
  • Multicasting made easier
  • Built-in security mechanisms (IPsec)
  • Single control protocol (ICMPv6)
  • Auto-configuration
  • Modular header structure


IPv6 is here to stay. And we’ll now have room for a unique IP address for each and every IP-enabled device in the world.


By now you should have a strong understanding of what IPv6 can do for you, and you now have to start looking at how to get IPv6 into your network – how to prepare your infrastructure and processes, how to migrate from IPv4 addressing to IPv6, and how to manage dual-stack networks.


Check out these blogs to understand more about IPv6 transition and IPv6 migration.

The licensed version of the software can handle around 2 million messages per hour and the Free version around 300,000 per hour. The licensed version has been regularly tested to handle 400-600 messages per second while logging to file.

The licensed version has a 20,000-message buffer, while the Free version has a 500-message buffer.

If you suspect that you may be losing messages, then have a look at the File > Debug options > View message buffer option to check that the "Message Queue overflow:" value is always 0. This value indicates the number of messages that have been dropped. If you are running the Service version, then this same information can be found from the Manage > Debug options menu.

To decrease the number of messages being displayed, you may want to modify your device configurations to only send messages that meet a set severity level.

If the volume of syslog messages you send to Kiwi Syslog exceeds the above recommendations, you may experience instability and you should consider distributing the load to another installation of Kiwi Syslog Server.

Load Balance Kiwi Syslog Server

Overloading in Kiwi Syslog Server manifests in a couple of ways. 

The first (and most obvious) way, is when there is a non-zero value in the "Message Queue overflow" section of the Kiwi Syslog Server diagnostic information.  A non-zero value indicates that messages are being lost (due to overloading the internal message buffers).  To view diagnostic information in Kiwi Syslog Server, go to the View Menu > Debug options > Get diagnostic information (File Menu > Debug options, if running the non-service version).

The second way is a little harder to discern, but it is most obvious when the "Messages per hour - Average" value in the Kiwi Syslog Server diagnostic information is above the recommended "maximum" syslog message throughput that Kiwi Syslog Server can nominally handle.  This value is around 1 - 2 million messages per hour (average), depending on the number and complexity of rules configured in Kiwi Syslog Server.
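The two overload checks above can be sketched as a single function. The dictionary keys and threshold constant here are illustrative, not Kiwi Syslog Server's own field names:

```python
# Sketch: applying the two overload checks described above to diagnostic
# values read from Kiwi Syslog Server. Field names are illustrative.
RECOMMENDED_MAX_PER_HOUR = 2_000_000  # upper end of the nominal range

def is_overloaded(diagnostics):
    """True if messages are being dropped or the average rate is too high."""
    return (
        diagnostics["queue_overflow"] > 0  # non-zero => messages lost
        or diagnostics["avg_messages_per_hour"] > RECOMMENDED_MAX_PER_HOUR
    )

print(is_overloaded({"queue_overflow": 0, "avg_messages_per_hour": 1_200_000}))   # False
print(is_overloaded({"queue_overflow": 37, "avg_messages_per_hour": 1_200_000}))  # True
```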

If either of these two scenarios is true for your current Kiwi Syslog Server instance, then load balancing your syslog message load can mitigate any overloading that may occur.

To load balance Kiwi Syslog Server, start by inspecting your Kiwi Syslog Server diagnostic information, specifically looking for the syslog hosts that account for around 50% of all syslog traffic.  These higher-utilization devices are candidates for load balancing through a second instance of Kiwi Syslog Server.

For example, consider the following "Breakdown of Syslog messages by sending host" from the diagnostics information.

Breakdown of Syslog messages by sending host  
 Top 20 Hosts
   ...   ...   ...

From these diagnostics, you can see which hosts account for ~50% of the syslog load.  We normally just start adding utilization figures from the top of the list until we get to about 50%.  Most of the time, 50% of all syslog events come from one or two devices, and that is indeed the case here.
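The "add from the top until ~50%" rule can be sketched as follows, with hypothetical host utilization figures standing in for the elided diagnostics:

```python
# Sketch: picking the top syslog senders until their combined share of
# traffic reaches ~50%. Host names and percentages are invented examples.

def hosts_to_offload(host_percentages, target=50.0):
    """Pick hosts from the top of a descending-sorted (host, pct) list
    until their combined share reaches the target percentage."""
    picked, total = [], 0.0
    for host, pct in host_percentages:
        picked.append(host)
        total += pct
        if total >= target:
            break
    return picked

top_hosts = [("10.1.1.10", 31.0), ("10.1.1.24", 22.0), ("10.1.1.31", 9.0)]
print(hosts_to_offload(top_hosts))  # ['10.1.1.10', '10.1.1.24']
```

These are the devices you would point at the second Kiwi Syslog Server instance.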

To enable a load balanced Kiwi Syslog Server configuration, perform the following actions:

  1. Install a second instance of Kiwi Syslog Server (on a second machine).

  2. Replicate the config from the first machine to the second:

    On the original instance – (File Menu > Export settings to INI file),
    and on the new instance – (File Menu > Import settings from INI file).

  3. Reconfigure the high-volume devices to send syslog events to the new instance.



For more information about Kiwi Syslog, see this link Syslog Server and CatTools Network Configuration Manager | Kiwi

Download the Free version here: Free Syslog Server | Kiwi Free Edition

An IP address conflict occurs when two computers connected to a Local Area Network (LAN) or the Internet are assigned the same IP address. When this happens, the network interface on systems becomes disabled, causing each system to lose connectivity until the conflict is resolved. Some of the reasons for this common network issue include the BYOD phenomenon, DHCP servers and their configuration, and human errors that occur during manual IP address management.

Resolving IP Conflicts in Your Network

The first and foremost step in resolving IP conflicts is to locate the systems that are in conflict. Administrators often locate these conflicting systems using ping/ARP commands or tracking tools. In networks where IP addresses are statically assigned, the system administrator has to manually locate and change the IP address of the system in conflict. However, in dynamic IP distribution environments, systems issue repeated requests for valid addresses, enabling IP conflicts to eventually resolve themselves. But, this might not always be the case.
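As an illustration of locating conflicts from ARP data, here is a sketch that spots one IP answering from two different MAC addresses. The ARP table below is hand-made; in practice you would parse the output of `arp -a` or use a tracking tool:

```python
# Sketch: spotting an IP conflict -- one IP address seen with more than
# one MAC address. The sample table is invented for illustration.
from collections import defaultdict

def find_ip_conflicts(arp_entries):
    """Return {ip: [macs]} for every IP seen with more than one MAC."""
    macs_by_ip = defaultdict(set)
    for ip, mac in arp_entries:
        macs_by_ip[ip].add(mac)
    return {ip: sorted(macs) for ip, macs in macs_by_ip.items() if len(macs) > 1}

arp_table = [
    ("192.168.1.20", "00:1a:2b:3c:4d:5e"),
    ("192.168.1.21", "00:1a:2b:3c:4d:5f"),
    ("192.168.1.20", "a4:5e:60:c1:22:33"),  # same IP, different MAC: conflict
]
print(find_ip_conflicts(arp_table))
# {'192.168.1.20': ['00:1a:2b:3c:4d:5e', 'a4:5e:60:c1:22:33']}
```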

  1. Static IP – If IP addresses are manually assigned to computers in your network, the following steps will help you change the IP address of the system.
    • Right-click "Network Neighborhood" on your desktop. If you have Windows Vista, 7, or 8, right-click your network card and choose “Open Network and Sharing Center”.
    • Select "Local Area Connection" Or “Ethernet” and then click on "Properties."
    • Click "TCP/IP" in the list of protocols. Click the "Properties" button underneath the list of protocols. Enter a new IP address in the opened window. Click "OK" to confirm your settings.


  2. Dynamic IP - If IP addresses are distributed dynamically by a DHCP server, one way to fix the conflict is by manually entering IPCONFIG/RELEASE and IPCONFIG/RENEW from the command prompt.
    • Open the command prompt by clicking "Start" and typing "cmd" into the search field.
    • Type "Ipconfig/release" at the "C:" prompt and then press "Enter." The results of the IP release process are displayed under "Windows IP Configuration." The prompt re-appears at the end of the results when the process is complete.
    • Type "ipconfig/renew" at the prompt and then press "Enter." This command automatically assigns your computer a new available IP address.



Tips to Avoid IP Conflicts

  1. In networks where IP addresses are statically assigned, be careful to ensure each device is configured with a unique IP address.
  2. Confirm that your DHCP servers are configured properly so that there is no duplication in assignment of IP addresses.
  3. To avoid DHCP malfunction, ensure that the firmware on the DHCP servers is up to date.
  4. Set up a mechanism to be notified of IP conflicts in the network so that you are aware of the issue and can take proactive action before any users call in to support.

There are also automated systems that make these steps much faster and easier. Learn more about automating your IP address management tasks with SolarWinds IP Address Manager.
