Remote Support


Remote support is more than just remote control.  At SolarWinds, when we talk about remote support, we’re really talking about all of the remote tasks that sys admins perform in a day.  If you’re a sys admin, you’re probably doing a lot more throughout the day than just remoting in to end-users’ computers to troubleshoot.  You’re probably also the person in charge of managing Active Directory objects like user accounts, organizational units, and security groups, as well as maintaining Exchange accounts.  Do you ever have to edit group policies?  I thought you might!  I’ll bet you even manage a few servers.


So, if remote control isn't the whole of remote support, then what is it?  Remote control is just one part of remote support.  Sometimes it's the right tool for the job, but sometimes it isn't necessary to initiate a full remote control session to accomplish a task on a remote computer.  The other big part of remote support is remote administration, and this is where LogMeIn and other web-based remote control tools fall short.


Performing remote administration tasks means supporting Windows users by restarting services, killing runaway processes, viewing event logs, managing disks and shares, editing registries, and managing local groups and users on computers without starting a remote control session.  It also means managing computers that have crashed or are in sleep mode.



How Many Remote Support Tools Do You Need?


When I think about remote support in that larger context, I often wonder why sys admins waste time and money using multiple tools to perform their day-to-day activities.  Having been a sys admin myself, I can tell you that it’s not uncommon to see the following tools open on a desktop:  an Active Directory MMC or two, RDP for Windows sessions, VNC for Mac and Linux sessions, Exchange System Manager, maybe a web-based remote control tool like LogMeIn, and the list goes on.


With DameWare Remote Support (DRS), you can perform nearly all of your daily sys admin tasks from just one console.  DRS lets you manage multiple Active Directory domains from the console, where you can add, delete, and edit users, OUs, security groups, and group policies.  DRS supports RDP and VNC sessions as well as DameWare's powerful Mini Remote Control Viewer, so you can connect to Windows, Mac OS X, and Linux computers from the same console.  DRS also lets you perform remote administration tasks on Windows computers without initiating a remote control session, cutting down on unnecessary interaction with end users.


That isn’t really even the best part.  DRS is significantly less expensive than its web-based competitors while offering superior functionality.   It’s also super easy to set up and use.  You can have it downloaded, installed, and discovering computers on your network in a matter of minutes.


The following table really tells the story.  I'm picking on LogMeIn here a bit because of their recent IP litigation issues, but the story is pretty much the same for other web-based remote control tools.


Remote Support Features (DameWare Remote Support vs. LogMeIn Rescue):

  • Desktop remote control
  • Take screenshots of the remote desktop
  • Chat with users during sessions
  • Upload files to remote computer
  • Supports concurrent sessions
  • Remote connection without requiring user interaction
  • Automatically deploy agent
  • Remote Windows administration
  • Start, stop, and restart services
  • Intel AMT support
  • View performance data
  • Manage Exchange accounts
  • View event console
  • Active Directory management

Pricing is compared per user, per year.
Don't waste any more time or money on web-based remote control tools like LogMeIn.  Download a fully functional, free 14-day trial of DameWare Remote Support and start performing all of your daily sys admin tasks from just one console.



For the purposes of this discussion of alerting, let’s assume a business with a very small IT staff consisting of two techs and the IT manager. Besides receiving alerts on network events—including device failures and excessive use of disk space or bandwidth—the team needs a way to take action as efficiently as possible should multiple concurrent events require simultaneous intervention.

IT systems provide state-related alerts in three standard ways: through SNMP polling initiated by a monitoring application; through SNMP traps that, when triggered, send an alert code to the monitoring console for parsing and user-friendly display; and through syslog message forwarding.
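Of the three, syslog forwarding is the easiest to prototype: a monitoring console is ultimately just a process that parses each message's priority header. As a minimal, simplified sketch (real syslog messages follow RFC 3164/5424, and this ignores most of that detail):

```python
import re

# RFC 3164 severity names, indexed 0-7.
SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def parse_syslog(message: str):
    """Extract facility and severity from a syslog message's <PRI> header."""
    match = re.match(r"<(\d{1,3})>", message)
    if not match:
        return None
    pri = int(match.group(1))
    # PRI encodes facility * 8 + severity.
    return {"facility": pri // 8, "severity": SEVERITIES[pri % 8]}

# Facility 3 (daemon) with severity 2 (crit) encodes as 3*8 + 2 = 26.
print(parse_syslog("<26>myhost sshd[42]: fatal: disk failure"))
```

A monitoring console applies exactly this kind of decoding before deciding whether a message warrants an alert.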

The IT manager’s challenge in this case is to define the best troubleshooting and resolution workflow.

Engineering the Right Process


Implicitly, the IT manager and team need a monitoring application that both provides timely information about problems and begins to organize and coordinate a response. At a minimum, in setting up network monitoring, the team depends on it to do these things:

  1. Immediately, as soon as the application recognizes an alert condition, the application generates and sends an email to the entire team, sends a page to the tech who is on call, and writes an entry into an event log.
  2. If the alert is not acknowledged within 20 minutes, the application fires a second alert, generating another team email and another page—this one targeted at both techs, and writes another entry into the event log.
  3. If the second alert is not acknowledged within 20 minutes, the application fires a third alert, generating a third team email and another page—this one targeted at both techs and the IT manager.
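The escalation policy above is easy to model: each unacknowledged interval widens the notification audience. A minimal sketch, using the 20-minute interval and the three audiences from the workflow (everything else here is illustrative):

```python
# Escalation tiers: who gets paged at 0, 20, and 40 minutes
# if the alert remains unacknowledged. Email always goes to the team.
TIERS = [
    {"email": "team", "page": ["on-call tech"]},
    {"email": "team", "page": ["tech 1", "tech 2"]},
    {"email": "team", "page": ["tech 1", "tech 2", "IT manager"]},
]
INTERVAL_MIN = 20

def escalation_tier(minutes_unacknowledged: int) -> dict:
    """Return the notification tier for an alert unacknowledged this long."""
    tier = min(minutes_unacknowledged // INTERVAL_MIN, len(TIERS) - 1)
    return TIERS[tier]

print(escalation_tier(5)["page"])    # still just the on-call tech
print(escalation_tier(45)["page"])   # both techs and the IT manager
```

An acknowledgment simply stops the clock, which is what keeps the rest of the team from being paged unnecessarily.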

With escalated alerts the team can efficiently triage concurrent events by not losing time on communication-related confusion. Acknowledgments let everyone know that a team member is fielding a specific issue. Assuming that everyone is available as expected, all three members of the small team would be simultaneously engaged in addressing three different issues within 60 minutes.

SolarWinds Network Configuration Manager supports the escalated alert features needed to implement this workflow.

Visual Basic 101 (part 1)

Posted by Bronx Oct 31, 2012

Programming is Fun

I'm sure many of you reading this have been exposed to at least a little bit of a programming language, be it HTML, PowerShell, VBScript, or whatever. Some of you may enjoy programming, some may not. (I know I do.)

Let me begin by saying that it is virtually impossible for any single programmer to know everything about a given language. Also, there are countless ways to program the same thing. Some solutions may be more elegant than others, but in general, there is no single right answer as long as the program works correctly. That said, you should feel a little less intimidated by the amount of reference material available on the inter-tubes concerning programming languages. This article will explore the tiniest fraction of Visual Basic. Hopefully it will encourage you to explore more on your own once you realize it's not as mysterious as you may have thought.

Here are links to the whole series of posts:

Part 1 - Getting the environment set-up

Part 2 - Creating the VB form

Part 3 - Writing the code

Part 4 - Compiling the code

The Gift

A while back, I took it upon myself to create an internal tool we needed here at SolarWinds. Hence, the Bandwidth Calculator was born. This calculator was designed specifically for SAM to estimate the recommended amount of bandwidth needed for a certain number of component monitors based on their protocol. As you can see below, 21 WMI component monitors would, on average, use the same amount of bandwidth as 10,000 SNMP component monitors. In this series I will explain step by step how I built this calculator, explain what everything means, and give you all the code needed along the way so you can build it yourself. This will be your Visual Basic classroom starting today. At the end, you get to keep the calculator you've re-created below (it even talks).
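The arithmetic behind such a calculator is simple: multiply the monitor count by an assumed per-protocol bandwidth cost. Here is a sketch in Python rather than VB, with hypothetical per-monitor figures chosen only to echo the WMI-to-SNMP ratio mentioned above (the real calculator's numbers may differ):

```python
# Hypothetical average bandwidth cost per component monitor, in KB/s.
# These figures are illustrative only, not SAM's actual estimates.
COST_KBPS = {"SNMP": 0.002, "WMI": 0.952, "RPC": 0.1}

def estimated_bandwidth(counts: dict) -> float:
    """Estimate total polling bandwidth (KB/s) for a mix of monitors."""
    return sum(COST_KBPS[proto] * n for proto, n in counts.items())

# With these made-up costs, ~21 WMI monitors cost about the same
# bandwidth as 10,000 SNMP monitors.
print(estimated_bandwidth({"WMI": 21}))       # ~20 KB/s
print(estimated_bandwidth({"SNMP": 10_000}))  # ~20 KB/s
```

The VB version you'll build in this series does the same multiplication; the work is mostly in wiring the inputs and outputs to form controls.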

The only way to get this calculator is to build it following the steps in this series. It will not be available for download, nor will SolarWinds offer support for this tool. The code for this tool is made available for educational purposes.


Lesson 1 - Installation

Before we begin, you'll need to install Visual Basic Express 2010, free courtesy of Microsoft. Click this link to begin downloading, followed by the installation.


Lesson 2 - The Environment

Now open Visual Basic Express and select New Project from the File menu. A new window will pop open. From there, select Windows Forms Application, then click OK. If you've done everything successfully, your screen should now look like this:


Now you have all you need to build the Bandwidth Calculator, minus the code. Let me explain what you're looking at above:

  • Highlighted in red is a Form. A form, in essence, is a window; hence the name, Windows. A form is an empty workspace where all of your buttons and controls will live, once you put them there. (Notice the form of the calculator above with all of its controls.)
  • Highlighted in green is the Toolbox. The toolbox contains all of the controls you will need to build almost anything, including the calculator. (In fact, you can even build your own controls, but I will not cover that in this series.) These controls may be placed on the form as needed. As you can see in the calculator, there are multiple text boxes, sliders, labels, a pie chart, and a button. These all came from the toolbox.
  • Highlighted in purple is the Properties window. Every control, or object (including the form itself) has certain properties. These properties can be set and changed both before running the program and while the program is running. Think about the properties of a television. One property is its color. Other properties include the TV's height, weight, picture resolution, and so on. The reason the button on the calculator says, "Reset" is because I changed the Text property of the button in the Properties window to read, "Reset."



Play around with this new environment and try to get comfortable. Explore the controls and the properties of the more common controls.
Tip: Once you place a control on the form and select it, the Properties window will show the properties of that control.

Got any zombies on your network?


That probably depends on how you define “zombie,” right?


Virtual Machine Zombies


A zombie could be a lost, lone virtual machine languishing on your network. Or a zombie might be a dummy machine, taken over by a spammer, sending out spam messages whenever it’s ordered to.


Whether you’re dealing with a virtual machine running amok (or not running at all), too many virtual machines to keep track of, or even PCs taken over by hackers, SolarWinds may have some answers. Are you working with Microsoft Hyper-V or VMware’s vSphere? Are you running Windows 7, CentOS, OS X, Windows Server 2003, or Linux? SolarWinds has options for getting your zombies, or any other virtual machines, under control. If you’ve got one of these kinds of zombies, get a virtualization overview by taking a peek at Hyper-V vs. vSphere: Understanding the Differences. Then learn more about what you can do to control your virtual environment with powerful configuration and management products like SolarWinds Virtualization Manager, and with free tools like SolarWinds VM Monitor and VM Console for monitoring, starting, and stopping virtual machines.


Flesh-eating Zombies


Or, since it’s Halloween, we can’t forget to recognize the original, flesh-eating kind of zombies from Shaun of the Dead. A quick search of the Internet reveals that that kind of zombie has been taking over Pittsburgh and Sioux Falls.


If you’ve got one of those zombies on your network, you’ve got trouble bigger than what we can help you with here. Better grab a baseball bat and RUN!

As a network engineer, you're no doubt intimately familiar with the various species of conventional network hardware and related equipment: servers, routers, cables, switches, and, of course, more cables. There's a lot of stuff that goes into a network, and eventually that stuff breaks, gets old, or just plain needs replacing. As such, you've probably got a list right now of stuff that needs to be fixed or replaced. And since, as a network engineer, you've either got a boss somewhere who signs your checks or you have the dubious privilege of paying the bills for your operation yourself, saving a few bucks is probably in line with your immediate interests.


That's also why you refer to your list as a wish list. Wishes can come true, but usually at a price. How do you optimize your network while maximizing the black numbers in your bottom line?


Cost-effective Network Optimization

You're here, so you probably already know that SolarWinds can help you stretch your network management dollar pretty far (i.e., SolarWinds gives you FREE STUFF). There are also a number of ways you can update your network equipment to make your network faster and greener, which will likely save you some money and maybe even a few headaches. This post is the first in a series that will explore a few of your possibly lesser-known options for making your network faster, greener, and cheaper.


First up: make it greener with EnergyWise.


Make it Greener with EnergyWise

EnergyWise is a technology Cisco introduced a few years ago on their Catalyst line of switches. You may already have it on some of your switches right now, just waiting for you to enable it. If you don't already have a few EnergyWise-capable devices in use, you should consider deploying some. So, what does it do?


Saving Energy and Money with EnergyWise

In short, EnergyWise helps you save energy on your network. With a little bit of effort, you can configure an EnergyWise-enabled switch to automatically power itself down or, similarly, to power down any connected Power-over-Ethernet device on a schedule of your choosing. For more information about the technology itself, see Cisco's EnergyWise Technology page. Quite simply, effectively managing energy in your enterprise, even if it is simply a matter of turning stuff off when you're not using it, saves you money, and saving money is good business.
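Even back-of-the-envelope numbers make the case for scheduled power-down. A sketch of the savings arithmetic, with purely illustrative wattage, device counts, and utility pricing (your devices and rates will differ):

```python
def annual_savings(watts: float, off_hours_per_day: float,
                   devices: int, dollars_per_kwh: float) -> float:
    """Dollars saved per year by powering devices down on a schedule."""
    # kWh saved = kW per device * hours off per day * days * device count.
    kwh_saved = watts / 1000 * off_hours_per_day * 365 * devices
    return kwh_saved * dollars_per_kwh

# Example: 200 PoE phones at ~8 W each, powered off 12 hours a day,
# at an assumed $0.10/kWh.
print(round(annual_savings(8, 12, 200, 0.10), 2))  # about $700 a year
```

The point isn't the exact dollar figure; it's that the savings scale linearly with device count and off-hours, so even modest per-device wattage adds up across a campus.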


Managing EnergyWise with SolarWinds

Both SolarWinds Network Performance Monitor and Network Configuration Manager provide EnergyWise-specific network management resources. For more information about monitoring EnergyWise-enabled devices, see "Monitoring EnergyWise Devices" in the SolarWinds Network Performance Monitor Administrator Guide.

Andy McBride

Making SQL Scream

Posted by Andy McBride Oct 29, 2012

If you are one of the thousands of users of our SolarWinds products, or one of the users of thousands of other products that use SQL Server for the back-end database, you already know that the importance of SQL performance cannot be overstated. Any application that interfaces with a database is only as fast as the database it uses. The good news for Microsoft SQL Server users is that there are a lot of things you can do to make sure your SQL Server is a screaming machine. Here are some of my recommendations:


  • Keep your primary files (yourdatabase.mdf and yourdatabase.ldf) and your temporary files (tempdev.mdf and templog.ldf) on separate arrays. Moving the I/O-intensive temp traffic off of your primary file drives will result in a nice performance boost.
  • Use RAID 10 for primary and temp files. RAID 5 or 6 will kill your array speed. RAID 01 is a poor choice as well, as many controllers do not implement it well.
  • Use 15,000 RPM drives or high-end SSDs if possible. SSD prices continue to fall, and longevity now rivals spindle storage.
  • Use 64-bit SQL Server and lots of RAM.
  • If SSDs are not an option, try a RAM disk for your primary .MDF files.
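The first two bullets come down to keeping I/O-heavy files on separate physical storage. A quick, hedged way to sanity-check your file layout (pure path logic in Python; it only compares drive letters, so mount points and SAN LUNs need a closer look):

```python
from pathlib import PureWindowsPath

def shares_drive(primary_mdf: str, temp_mdf: str) -> bool:
    """True if the primary and tempdb data files sit on the same drive letter."""
    return (PureWindowsPath(primary_mdf).drive.upper()
            == PureWindowsPath(temp_mdf).drive.upper())

# Same drive: temp traffic competes with primary file I/O.
print(shares_drive(r"D:\SQL\yourdatabase.mdf", r"D:\SQL\tempdev.mdf"))  # True
print(shares_drive(r"D:\SQL\yourdatabase.mdf", r"E:\SQL\tempdev.mdf"))  # False
```

You can get the actual file paths for a live server from SQL Server's system catalog views and feed them through a check like this.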

Just implementing three or four of these will have a huge impact on your SQL performance and on any application using SQL Server for data storage.


For more tips, take a look at this Technical Reference on Managing Orion Performance.

Automatic approval rules in WSUS are extremely useful, especially to admins in small shops that don't want to have to review and approve every single patch that lands on their WSUS server. A common example is to create an automatic approval rule that says something like, "Automatically approve all Critical and Security updates for the All Computers group." With a rule like this in place, WSUS will evaluate the Classification attribute of all updates, and then approve any updates that have a classification of either Critical or Security. This is great when all you're managing is updates from Microsoft, but what does a rule like this do in an environment that also supports third-party patch management?


Automatic Approvals and Third-party Updates

The important thing to note about this scenario with regard to third-party publishing is that third-party updates also have a Classification attribute. So, if you publish a Java patch to your WSUS server, for example, and that patch is classified as critical, WSUS would automatically approve that update, just like it would any critical Microsoft update. Where this causes issues is in environments that require more granular control over their third-party patches than the patches from Microsoft. If this sounds familiar, what you need is an automatic approval rule that addresses the Microsoft products in your environment, but leaves out any third-party products, which typically require more attention.


The Solution

The solution to the WSUS patching problem I just described is to create a more specific automatic approval rule. For example, you could create a rule that says something more like, "Automatically approve all Critical and Security updates in these Microsoft products for the All Computers group." That way, you can publish critical and security updates from third parties to your WSUS server, but still retain control over which updates you approve for which computer groups.
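The effect of narrowing the rule can be modeled in a few lines. A hedged sketch of the decision the rule encodes (not WSUS's actual implementation, and the product names here are hypothetical examples):

```python
APPROVED_CLASSIFICATIONS = {"Critical", "Security"}
# Only the Microsoft products the rule should cover; third-party
# products (Java, for example) are deliberately left off this list.
APPROVED_PRODUCTS = {"Windows Server 2008", "Office 2010", "Exchange 2010"}

def auto_approve(classification: str, product: str) -> bool:
    """Approve only critical/security updates for the listed products."""
    return (classification in APPROVED_CLASSIFICATIONS
            and product in APPROVED_PRODUCTS)

print(auto_approve("Critical", "Office 2010"))  # True: listed product, critical
print(auto_approve("Critical", "Java"))         # False: third-party, held for review
```

The original broad rule is the same function with the product check removed, which is exactly why critical third-party updates slipped through it.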


To create an automatic approval rule for specific Microsoft products in WSUS:

  1. In the WSUS console, create a new rule (or edit your existing rule) in Options > Automatic Approval Rules.
  2. In the rule, select the option, When an update is in a specific product.
  3. In the bottom pane, click any product. This should be a blue hyperlink.
  4. In the window that lists all of the Microsoft products, clear the check boxes next to any product that does not apply to your environment. The resulting list should contain all of the Microsoft products for which you want WSUS to automatically approve updates. If you leave all of the products selected, the rule will continue to apply to any product.
  5. Click OK.
  6. If you are creating this rule from scratch, select the appropriate options to define the classifications for which you wish to automatically approve updates and specify the applicable computer group(s).
  7. Click OK to save the rule.


After you have this rule in place, WSUS will only automatically approve the Microsoft updates that meet the specific criteria you defined in your rule. WSUS patch management simplified!


For those of you who don't know, an apologist is one who defends his position, not apologizes for it, as the word would imply. (I learned that ages ago from the king of reference books: the dictionary.)


Since you're reading this, I'm guessing most of you are like me. You're probably smart, love to tinker, and, most importantly, impatient. You're probably the guy who tries to assemble the toys on Christmas Eve, gets frustrated, then, in a last-ditch effort, reads the instruction manual at 2am. I'm that guy too. (I find that last thought rich with irony, since I'm the guy who writes the manual.)


The Point

Even though you probably know 90% of what's going on in all of your endeavors, you're still unaware of 10%. That additional 10% of information can mean the difference between success and failure. For example, when I was just a little tater tot learning to play chess, no one ever explained the en passant rule to me. I was caught off guard when an opponent removed one of my pawns from the board as though we were playing Checkers. I was also embarrassed by my ignorance. Needless to say, I learned the rule.




How much do you know about your software?

In my career, I have developed software and written manuals for software. To this day there are still functions that I have created, documented, and then forgotten about. The only thing that jogged my memory was the reference material. You would be amazed at how much detail developers put into their software, and it's not for their amusement. Their hard work is for your benefit. (Easter eggs are for their amusement.) The more you know about the software you use, the more productive you'll be. My suggestion to you would be to spend a few minutes going through the Administrator's Guide of your software and browse the topics. Are you really getting the most out of your network monitoring software? Do you know everything about application monitoring? How well can you create detailed alerts? Are you afraid of reports? Questions like these are covered and answered in SolarWinds' Administrator's Guides.

Any technical manual will be to your benefit.

I bet you avoid reading manuals like the plague, and here's how I know:

  • Did you know that in your car there's a little arrow next to the fuel gauge light? Do you know what that's for? I didn't either until I read the owner's manual. The arrow points to the side of the car where the gas cap lives.
  • I'm sure you all know how to copy and paste via the keyboard in Windows. But do you know all of the keyboard shortcuts? Do you know 90%? Do you even know how many there are? For a cheat sheet of some of these shortcuts, click here.
  • You use Google, don't you? Why search the entire world when the manual may be right in front of you?


See, you don't even know what you don't know. For instance, if you decided to learn all of the Windows shortcuts available, imagine how much more productive you would be.


Instruction manuals do have their place in the world.

The SAM Administrator's Guide is now tipping the Toledos at a whopping 1,100 pages, and that doesn't even account for the additional 1,000 pages of template reference! While that may seem rather large, you will certainly thank me later when you search it for something and find the solution. Remember, this is not a novel that you need to read cover to cover. Don't know how to do something? Just look at the Table of Contents and jump to that section for the answer. Read a few pages and you're done. Use it for reference. Don't be afraid. Ask those tough questions. The more questions you ask, the more answers you'll find...which will ultimately lead to fewer questions. In school, open-book tests were laughed at. This is the real world, and those books were written for a reason. Please use them. In fact, right now I'll provide you a list of our more popular ones:


General Networking and NPM


Server & Application Monitor (SAM)


But I read the Release Notes.

Good for you. The Release Notes are short and easy to follow. That's a start. Reading the Release Notes will inform you of new features and potential problems you may encounter in a particular release. This does not excuse you from at least perusing the Table of Contents of your SolarWinds Administrator's Guides. The monitoring software you use is there to help you. So is the documentation. Would you rather spend five minutes reading how to do something, or spend an hour on the phone with Customer Support? The choice is yours. For me, the choice is simple: RTFM. (If you don't know what RTFM means, let me Google that for you, just this once.)


Whatever the industry or discipline, big data is there to help glean insights about the future. And now big data pioneers can open their data and get even broader insights, sharper views, and a significant competitive edge. According to a recent Gartner report, open data is more important and more valuable than big data alone. The Gartner analysts go on to say that big data certainly makes organizations smarter, but open data does more to increase revenue and business value.


Open Data


According to Open Definition, a piece of content or data is open if anyone is free to use, reuse, and redistribute it — subject only, at most, to the requirement to attribute and/or share-alike. Across the U.S. and across the globe, open data initiatives are being introduced, and governments are leading the way.


Open Government


President Barack Obama introduced Digital Government with the challenge, "I want us to ask ourselves every day, how are we using technology to make a real difference in people’s lives?" And the White House has outlined a strategy focusing on open data to help drive the Digital Government agenda.

The three main objectives in the strategy are:

  1. Enable the American people and an increasingly mobile workforce to access high-quality digital government information and services anywhere, anytime, on any device.
  2. Ensure that as the government adjusts to this new digital world, we seize the opportunity to procure and manage devices, applications, and data in smart, secure, and affordable ways.
  3. Unlock the power of government data to spur innovation across our Nation and improve the quality of services for the American people.

The strategy is funded with an estimated $73 million annually for research grants.

Open data as a spectator sport


Governments and businesses are promoting innovation through competitions using open data. Illinois launched an Open Technology Challenge and is offering $75,000 in prize money. Kaggle hosts game-show-style competitions with up to 57,000 contestants to solve problems using open data. Europe's Open Data Challenge offered 20,000 Euros to the winners of its open data competitions.


Someone is always watching


You can do more than wonder whether that ticket you got was issued because a law enforcement officer was trying to make his quota. A student intern used open data to glean the patterns of traffic citations issued by police in the city of Baltimore. His home-grown study reveals a connection between the days at the beginning and end of the month and the number of tickets issued.


Anyone can access this data online. Developers are even offered an API to search government, federal, and science databases, and you can surf open data from the convenience of an Android app.

Need more data? There are sites offering open data sources, open data communities, and open data platforms. The resources for open data are vast and growing.


Matt Simmons


JK: What inspired you to start the Standalone Sysadmin blog?
MS: I had a LiveJournal where I used to write technical things that I had learned, mostly to document knowledge for myself.  A couple of my nontechnical friends would read it and wonder what I was writing about.  This was about 5 years ago, when I was probably around a Level 2 – Junior system administrator.  In looking at the LISA job description – where I wanted to go as a senior sysadmin – I knew I would need to advance from passively learning to actively producing knowledge.  At that time, I had never written any technical papers or blogs and so I set up Standalone Sysadmin as a way for me to start getting my thoughts out.

Initially, it was difficult to write because I had to question my own assumptions about things, but over time, the process of researching my own topics allowed me to grow my knowledge, and I think that people really responded to being able to grow along with me.


JK: Do you get a lot of questions or feedback on your blogs from your readers?
MS: Yes, surprisingly, I do.  I get about a half-dozen to a dozen questions per week.  Sometimes people write that they identify with an issue I wrote about on my blog; sometimes I get questions from sysadmins who are stuck, and I try to help out as many as I can.  Twitter (@standaloneSA) is an easy way to get my attention, although you can’t write much of a response with the character limit.  If I can’t answer a question, I can normally connect people with someone who can help them.


JK: How do you keep up with new technologies or techniques for solving problems?
MS: I join a lot of communities and am one of the moderators for one of them.  When looking around at other forums and blogs, if I see posts about topics I know nothing about, or that I wouldn't ordinarily be interested in, I read them, because those are the posts where I learn something new.

JK: Based off what you hear from the community, what are the most frustrating issues sysadmins face today?
MS: Bureaucracy is one.  The other is when people are placed in areas of responsibility where they have managers who don’t have the right technical knowledge.  It can go one of two ways: either the managers trust the technical people who work for them, or they don’t and their egos get in the way.  There is not an easy solution for the latter.  I have been fortunate never to have had to deal with that situation.


JK: For people contemplating a career in IT today, what advice do you have for them in terms of skills to learn or classes to take?
MS: I have some experience in this topic in that I have thought a lot about training for system administrators, and I am currently in academia at Northeastern University.  As a senior system administrator, I wish I had taken more statistics.  Also, the nature of being a sysadmin today is much different than it was just a few years ago, with cloud computing, virtualization, and the like.  Today, instead of thinking about setting up and managing physical servers and software, you need to think about these elements as abstract objects.  Now, we write software to build and provision servers and software together.  The biggest piece of advice I have for students entering IT, and also for working sysadmins, is to learn how to code: become a programmer.


JK: It’s hard doing a day job and learning something so new.  How should a sysadmin in the field today get started learning code?
MS: For someone not in the field, who just wants to learn, I would recommend JavaScript. Everyone has a web browser, and that browser can interpret JavaScript.  The language is not terrible to learn, and you can actually do quite a bit with it.  System administrators should start learning to program by writing scripts to automate the mundane tasks they perform again and again. For folks who deal with UNIX every day, I would suggest learning Python or Ruby.


JK: I bet sysadmins more familiar with Linux might have an easier time with learning code, do you agree?
MS: In the past, absolutely. Not anymore.  Microsoft has put a lot of work into PowerShell.  You can write and remotely execute PowerShell scripts to do almost anything. It comes built in to every Windows desktop and server.  And on the system administration side, a lot of Windows tools now come with a “show me the PowerShell” button that displays the PowerShell equivalent of what the GUI is doing, making it easier to automate what used to be a manual process.


There is still a lot of learning to do with automation, even after you have learned one or two programming languages.  Check out Matt’s “Appeal for advice: tying together Windows and Linux” on his blog, Standalone Sysadmin.


JK: What is your favorite SolarWinds product?

MS: I've played with several of them for very short periods of time - I've never really had a Windows infrastructure. The one that I've used and like the most is actually the free IP Address Tracker. It was really handy when I first came onboard here to scope out what was where. I'd recommend it to someone else who needed to do something similar.


Big jobs in Big Data

Posted by susan.cohen Oct 26, 2012

Big Data is giving us more than mountains of information, it's giving us Big Job opportunities.

Big Data is creating big jobs, and IT departments might have to scramble to fill the positions. It is estimated that big data will generate 4.4 million IT jobs by 2015, and 1.9 million of those jobs will be in the U.S. For each IT job, three jobs will be created outside IT. That's a total of 6 million new jobs generated in the U.S. But the IT industry has only an estimated one-third of the talent needed to fill these positions. Data experts such as data scientists, data analysts, business intelligence professionals, and data modelers will be in high demand. Math heads and computer scientists might be encouraged to re-purpose their skills to meet the growing demand. Gary King of Harvard deems Big Data a revolution that is only beginning to get underway.

Who qualifies?


ComputerWorld defines the standard qualifications for the jobs in Big Data.

  • Data Scientists are the "top dogs" of big data. They have the deep analytical talent, and they fill many top management positions. Many have backgrounds in math or statistics, and some come from artificial intelligence or data management.
  • Data Architects are generally programmers who excel at working with "messy data, disparate types of data, undefined data, and lots of ambiguity". Creative persistence is a key quality.
  • Data Visualizers can translate analytics into information that businesses can use. They bind big data with context and are able to communicate what the data means to the company.
  • Data Change Agents "drive changes in internal operations and processes based on analytics".  They are versed in the Six Sigma business management strategy and have stellar communication skills.
  • Data Engineers and Operators design, build, and manage the Big Data infrastructure.


Why data science?


Data scientists are all the rage and are culled from fields in computer science or mathematics. They are the New Rock Stars of the Tech World. Yes, math and science turn out to be ultra cool, but the rock star status refers to some specific characteristics of data scientists. Like rock stars of the music world, careers in data science are diverse, include artistic freedom, require adaptability, and generate a large and diverse following. Harvard Business Review declares data science the Sexiest Job of the 21st Century. Desperately Seeking Data Scientist proclaims the role of data scientist is "hotter than ever" and claims American Express, Facebook, Google, StumbleUpon, and PayPal are actively pursuing this hot new trend. In the Age of Big Data: That's Why Math Counts! cites the McKinsey Global Institute as saying the U.S. needs thousands more workers with deep analytical expertise and people with good math skills who can glean patterns from seemingly unrelated data.

There is no shortage of advice on training for the big data jobs. I'm starting by learning Hadoop for free.

Sync or Swim...

Posted by Bronx Oct 26, 2012

I just finished reading a story about three school kids who got suspended because they were using a school iPad in the classroom and came across some risqué pictures of their teacher. Apparently, the teacher's iPhone had accidentally synced with the school's iPad, causing the mishap. (Nevermind the politics of the punishment.)


Syncing makes us lazy thinkers

The above scenario is troubling for a number of reasons (aside from a certain implied moral turpitude):

  • Is technology too complex to control?
  • Are computers becoming too automated?
  • Are we becoming lazy in our ability to think when we have computers think for us?


I believe all three bullet points listed above are true to some degree, but none more important than the last. Every generation has their memories of yesteryear when they begin a sentence with, "When I was a kid,...." Let's take a look at how technology thinks for us: When I was a kid, the stuff listed below didn't exist yet:

    • The computer - I had to do math with pencil and paper.
    • Spell check - I had to buy a book called "The Dictionary," and then read it.
    • Grammar check - Can grammar you by help yourself write? (sic)
    • Autopilot - I hope the pilots still know how to fly if this feature ever gets the blue screen of death!
    • The microwave - I learned to cook by working in a restaurant as a kid. Do you cook, or just nuke what's in the box?
    • The cellphone - I remember pre-cellphone times and having to remember all of my friends' phone numbers. Yikes!
    • Syncing - Is this you? "I don't know where my stuff is, how to properly use a computer, and I will never read the manual, so I'll just let the computer sort it out and not think about it."


Syncing ≠ Copying

That's right, syncing does not equal copying. Here are the definitions of the two words:

  • Synchronize - To cause to agree in time of occurrence; assign to the same time or period, as in a history.
  • Copy - To make an exact duplicate.

What syncing actually does is compare the timestamps of the same file in two locations. Once compared, the software then chooses to move the file from here to there based on certain rules that may or may not be changed by the user. Let me say that again, the software chooses. The problem with this is that once the move is complete, a file you may not have wanted synced is now gone forever because the software did the thinking, not you.
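That compare-and-overwrite behavior is easy to see in miniature. Here's a deliberately naive one-way sync sketch in Python (illustrative only; real sync tools layer conflict rules on top, but the core point stands: the software chooses):

```python
import os
import shutil

def sync_newer(path_a, path_b):
    """Naive one-way sync: whichever copy has the newer modification
    time overwrites the other. Note that nothing here asks the user."""
    mtime_a = os.path.getmtime(path_a)
    mtime_b = os.path.getmtime(path_b)
    if mtime_a > mtime_b:
        shutil.copy2(path_a, path_b)   # the old contents of B are gone
        return path_b
    if mtime_b > mtime_a:
        shutil.copy2(path_b, path_a)   # the old contents of A are gone
        return path_a
    return None  # timestamps agree; nothing moves
```

The function returns the path that was overwritten, which is exactly the file whose contents you can no longer get back.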

Learn to think

Whenever I buy a new piece of technology, the first thing I do is go to the Settings menu. I make sure I read every available option and ensure I know what every option controls. If I must sync certain files, as I do here at SolarWinds, I always make a backup copy of my files. I can recall at least one time that our syncing software took a nosedive, causing me to lose a great deal of work. Fortunately, I took my own advice and made a copy prior to this syncing disaster. People think; computers do what they're told.

The moral

After reading this you're probably thinking that I hate technology. Quite the opposite! I am one with technology. I am simply highlighting a persistent problem of people relying too heavily on machines for simple tasks. Machines are here for one reason only: to help us. Not to replace us. Understand this and you will be better off. Remember, you're the boss of the machine. Don't let it do anything without your knowledge.


P.S. An interesting postscript - After publishing this article, the word "punishment" in the first paragraph was replaced by asterisks because it was on the banned list. Once again, taking my own advice, I took control and now that word is clearly visible. I rule.

For some of us, a trip to the doctor includes the doctor inputting our information into our medical record using their computer or tablet. The updated record saves to the medical office’s server. Your information can then go anywhere from a medical billing office, to your insurance company’s claims office, to your pharmacy, or back to your primary care doctor’s office.


What Does this Mean for EHR Security?


Safeguards for Keeping EHRs Available and Secure

Keeping data secure requires physical and online safeguards. For mobile devices, enforcing secure passwords and encryption, as well as rules on keeping mobile devices in close physical proximity, can help. And of course, storing the data from mobile devices on network servers or clouds can ensure data availability. Keeping networked data secure and available is what SolarWinds does, with network security tools like:

  • Log & Event Manager – Collects log and event data from devices and performs true real-time correlation, enabling you to automatically take action against threats.
  • Mobile Admin – Quickly deploys secure mobile access for mobile devices and IT management technologies with one integrated mobile application.
  • Firewall Security Manager – Automates security audits and simplifies firewall troubleshooting for multi-vendor, Layer-3 network devices.


Fun with SAM

Posted by Bronx Oct 25, 2012

Okay, let's face it: at some point we all take care of personal matters on company time, and it usually involves a computer. Here's how one of our senior Application Engineers (AE) used SolarWinds Server & Application Monitor (SAM) to get herself some primo concert tickets before everyone else.


The Setup

One day our heroine, Kate (who happens to be a Kung Fu expert) wanted to score some tickets to the Austin City Limits (ACL) music festival. The only way to get tickets was to go to their website and wait for them to become available. (The twist with their website was that they never mentioned when tickets were actually going to be available. There was no set time or date. You just had to keep checking the site until they were available for sale.) Who has time for that? Kate certainly did not. All of her time was spent helping valuable SolarWinds customers!


The Plan

Having an unparalleled knowledge of SAM, SolarWinds' server performance monitoring tool, Kate set it up to alert her the moment her prized tickets went on sale. Here's how she did it:

  1. First, Kate checked the source code of the concert webpage and found the words, "Coming soon." She figured that these are the words that would change in the webpage the moment the tickets went on sale.
    Note: You can find the source code of any webpage by right-clicking on it and selecting, View Source.
  2. Once she chose the words she thought likely to change, Kate went to the SAM web console and created an HTTP Monitor.
  3. She then took that string of text ("Coming soon") and put it in the Search field for the HTTP Monitor.
  4. Next, she set the polling interval to every 60 seconds (the lowest allowed).
  5. Then, she created an alert for this monitor that would send her a link to buy the tickets when her two keywords, "Coming soon" were not found. This link would be sent directly to her email inbox as well as her cell phone.
  6. Next, she configured the alert to check every 60 seconds.
  7. Finally, she received the alert, clicked the link, and bought the tickets!
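Outside of SAM, the same keyword-watch trick takes only a few lines. This Python sketch mimics the monitor's logic (the URL, keyword, and alert wording are illustrative, not SAM's internals):

```python
import time
import urllib.request

def keyword_gone(page_text, keyword):
    """True once the watched string has disappeared from the page."""
    return keyword not in page_text

def watch(url, keyword="Coming soon", interval=60, fetch=None):
    """Poll a page the way the HTTP monitor does, and return an alert
    message as soon as the keyword vanishes."""
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u).read().decode("utf-8", "replace")
    while True:
        if keyword_gone(fetch(url), keyword):
            return f"ALERT: '{keyword}' no longer on {url} -- time to buy!"
        time.sleep(interval)
```

The `fetch` parameter just makes the sketch testable without a live website; in practice you would let it default to a real HTTP request.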


The Wrap-up

Granted, this is not a typical use-case scenario for server and application monitoring. It does, however, highlight the ease of use and flexibility that SAM offers as a server monitoring tool. With SAM, you are only limited by your creativity. So what can SAM do for you? Among other things, it can:

  • Monitor virtually any application: Leverage out-of-the-box monitors for email, Active Directory®, Java® applications, databases, and more; select from an extensive library of community-contributed monitors; or easily create your own monitors for custom apps in a few simple steps.
  • Out-of-the-box hardware, Hyper-V, and VMware® host monitoring: Monitor the health of Dell®, HP®, and IBM® System X servers and the underlying hardware for your VMware hosts to get valuable insight into environmental data, hardware status, and more.
  • Built-in expert knowledge: Take advantage of built-in expert guidance on what to monitor, why to monitor it, and optimal thresholds for popular applications and servers.
  • Root-cause analysis: Use dynamic service groups to determine where your problem is — the application, database, hardware, O/S, or something else.
  • Auto-discovery: Have SAM scan your network and auto-discover applications and servers, making deployment a breeze.
  • Graphical, easy-to-use interface: View trends, capacity, and performance in easy-to-use dashboards and reports that can be customized to your needs.
  • Enterprise scalability: Consolidate application and server monitoring with one tool that scales to thousands of logical endpoints.
  • Start monitoring in minutes: Enjoy a product that is easy to download and deploy in less than an hour, easy to use, and easy on your budget.
  • Manage O/S performance out-of-the-box: Leverage built-in templates that support Windows® Server 2003 and 2008, UNIX, Linux®, IBM AIX®, Sun Solaris®, and more.
  • Monitor server processes in real time: Monitor all of the processes running on your servers in real time, including CPU, memory, virtual memory, and disk I/O.
  • Microsoft® Active Directory® support: Simplify creation of login credentials by leveraging Active Directory user accounts and groups.
  • And so much more!

What happens when a user locks out his user account? He calls you, right? Well, something else happens, too: your domain controller logs the event. The neat thing about that is it's not just good for compliance reporting. Logged events like this are also useful for log management systems that monitor your logs and notify you in real time when a problem occurs. In this example, you wouldn't have to wait for the locked-out user to call you; you could set your log management system to notify you when the lockout occurs. At that point, you could contact the user proactively instead of waiting for a call.


There are countless other times your logs can tell you something's wrong before your users do, or worse, before some sort of failure occurs. For example, Windows event logs can tell you when users access or modify sensitive files, when critical services stop, or even when a suspicious process starts. On the network side, device logs can warn you of abnormal traffic patterns or traffic from a malicious source. If you had a way to constantly monitor these logs and get notified when things like these occur, you could make your life a whole lot easier.
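A minimal version of that kind of log watcher might look like this in Python (the patterns are examples; 4740 is the standard Windows security event ID for an account lockout, and real SIEM tools do far richer correlation):

```python
import re

# Example patterns worth a proactive alert. Event ID 4740 is the
# standard Windows security-log ID for an account lockout.
WATCH = {
    r"\b4740\b": "Account locked out -- contact the user proactively",
    r"service entered the stopped state": "Critical service stopped",
}

def scan(lines):
    """Yield (alert, line) for every log line matching a watched
    pattern, instead of waiting for users to phone in."""
    for line in lines:
        for pattern, alert in WATCH.items():
            if re.search(pattern, line, re.IGNORECASE):
                yield alert, line
```

In a real deployment you would feed `scan` a continuous tail of the event log and wire each yielded alert to email or SMS.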


Log Management the SolarWinds Way

SolarWinds Log & Event Manager (LEM), our SIEM tool, not only provides the proactive problem analysis described here, it also provides functionality that helps resolve the problems remotely - even automatically. LEM can reset passwords, re-enable accounts, and even start and stop services. It goes far beyond what you would expect from a solution that monitors and aggregates log files.


Troubleshooting Complex IT Problems

If you're looking for a more complete remote support tool to troubleshoot and fix more complex problems than what you want to automate with LEM, check out DameWare Remote Support (DRS). DRS doesn't provide the alerting a log management system would provide, but it provides everything you need to solve the issues that come up:

  • Manage multiple Active Directory sites in a single interface
  • Edit Group Policy Objects from your workstation, not your domain controller
  • Start and stop desktop services on remote computers on demand
  • View registry information on a computer without remoting into it
  • Remotely control desktops and servers with integrated chat and screenshot functionality


Together, LEM and DRS create an IT troubleshooting environment that's not only compliant, but also proactive and flexible.



I believe that an IT department's best tool for keeping customers happy is communication. When I turn the monitors on, log in, and begin the day's first task, I expect to find what I need on the network quickly. I want to get my first task done and move on to the next. It's like flying: I want to get from point A to point B and then move on to the next trip. There is nothing worse than showing up for a flight to find it is delayed for some unknown amount of time and for some unknown reason. I traveled two to three times a week during most of the nineties. Unfortunately, this was probably the worst period for airline communications. This was the time of the big IT build-out, and IT was hearing the same thing the airlines were: "Tell me what the heck is going on!"


IT Service and Bragging Rights


IT services are much more reliable overall than when we were just figuring out how to deliver networks on a large scale. TCP/IP winning the protocol wars was a big step forward. Not that TCP/IP was the best Layer 3 and 4 solution, but it was the most widely accepted, and the deal was done when Microsoft and Novell announced native support for TCP/IP. IT is by no means an easy job; you have to know a lot and be able to connect the dots in pressure situations. Coordinating hundreds of flights every day can't be easy either. I don't know if the airlines are actually doing better than they were 15 years ago, but I feel more confident traveling now because they have improved so much in communicating with passengers.


Here are three steps I see IT following today to make their users more confident when an issue arises:

  1. Detecting the issue.
  2. Assessing the impact.
  3. Communicating the issue to the users.

Then IT does something the airlines probably will never do: they ask you how your experience was. Not only that, but they also communicate the issue status all the way through to resolution.


IT Service Management Automation

If you have read many of my blogs, you know I'm a huge fan of automation in IT. Without automation the above steps would take a long time. So, what are the gears that make this automation possible?

  1. Network Management System (NMS) fault and performance alerting.
  2. NMS alert to Help Desk ticketing.
  3. Help Desk task management, messaging, and surveying.
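The middle gear, turning an NMS alert into a help desk ticket, could be sketched like this (the endpoint and payload fields are made up for illustration; real products define their own APIs):

```python
import json
import urllib.request

def alert_to_ticket(alert, helpdesk_url):
    """Turn an NMS fault alert into a help desk ticket. The endpoint
    and payload shape are illustrative, not any product's real API."""
    ticket = {
        "summary": f"{alert['node']}: {alert['message']}",
        "priority": "high" if alert.get("critical") else "normal",
        "status": "open",
    }
    request = urllib.request.Request(
        helpdesk_url,
        data=json.dumps(ticket).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # The caller would submit it with urllib.request.urlopen(request).
    return ticket, request
```

Once the ticket exists, the help desk side owns the rest: assignment, status messaging to the affected users, and the closing survey.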

When the above are all tied together, IT's manual task and process overhead is greatly reduced, and the user knows that IT sees the issue and is on it. With SolarWinds Network Performance Monitor, Server & Application Monitor, and Web Help Desk, you will have this covered.


Now if I could just stop the airlines from charging me extra for some leg room...




The Importance of IOPS

Posted by docwhite Oct 24, 2012

This article has nothing to do with pancakes. But feel free to eat one while reading.


Writing data to disk is the most fundamental source of latency in an online application that performs a high volume of concurrent transactions. Assuming every part of the application stack is tuned as well as possible, the operation of writing data to disk in the storage array will always be the bottleneck in scaling the system.


This is why platform engineers will tell you that their ultimate challenge in architecting a system to handle peak loads of online transaction processing is to maximize the system’s input-output operations per second (IOPS). While the entire software stack—at web server entry points, application layers, the database itself and the storage subsystem—must consistently and coherently cooperate to produce and share data as quickly as possible, the database is where all transactions must get in line for the one moment—writing to disk—when data absolutely cannot be shared.


In fact, database tuning consultants often measure their success in terms of the IOPS their efforts enable at peak system workloads. All of these operations impact IOPS:

  • How applications make SQL queries
  • How server resources interact with database management processes
  • How storage arrays are set up to move bits to and from disk.

Demand from financial institutions in particular, for whom IOPS literally equal dollars, strongly drives technology innovation. Solid state drives are a current example of enterprise technology innovating to increase IOPS.
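If you want a rough feel for what synchronous small writes cost on a given disk, a crude measurement takes a few lines (a sketch only; purpose-built benchmarks such as fio measure this far more rigorously):

```python
import os
import time

def rough_write_iops(path, block_size=4096, blocks=2000):
    """Time a burst of small synchronous writes, fsyncing each one so
    the OS page cache can't hide the disk, and return writes/second."""
    buf = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return blocks / elapsed
```

Run it on a spinning disk and then on an SSD and the gap makes the point about why solid state drives are the current IOPS innovation.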


Watching a Database Write

Assuming your in-house or mercenary gurus have tuned your production platform, you need a way to monitor IOPS against some critical threshold. While every database and storage vendor sells monitoring tools optimized for their products, none offer a console for seeing performance across vendor products. Since enterprise planning teams tend to avoid being yoked to a single vendor for any piece of their production system, your IT team probably needs a third-party product with multi-vendor support to monitor the different database and storage components in your system. SolarWinds STM storage performance software, for example, monitors IOPS on all the storage devices in your system from a single web console, using the most efficient method available for a given array (for example, SMI-S Provider APIs, SNMP, or CLI). So go ahead and check out the ideal storage performance monitoring solution. Storage monitoring simplified!



Insourcing is supposed to be the next big thing in the IT world. At least that’s what the business guru sites have been saying. And it looks like some companies may really be following this trend. A few months ago, General Motors shocked the business world when it announced that it was bringing most of its IT functions back in house. In fact, last month, GM announced that it was hiring 10,000 IT pros, as explained in the Computerworld article, GM to hire 10,000 IT pros as it 'insources' work.


Is your company following this trend to bring IT services back in house? Of course, your company may very well not be outsourcing its IT functions at all. According to the accounting and consulting firm BDO USA, LLP's recent study of 100 Fortune 500 CFOs, “Outsourcing in the U.S. technology industry has declined for the third straight year…This marks a notable shift from 2009 when nearly twice as many companies (62 percent) were outsourcing.”


Perhaps the biggest issue a company needs to look at when considering insourcing or outsourcing is which option best supports its goals. What does the company want to achieve? If a small startup has limited financial resources, outsourcing can provide immediate financial savings, so the company can invest more in R&D. A more proven company with an established department may prefer insourcing so it can have greater direct control over its IT functions.


In the network management arena, you might also want to ask yourself which option – insourcing or outsourcing – would improve your network’s quality and availability. If you need some help figuring out what’s right for your company’s network management functions, SolarWinds may have some answers. With SolarWinds Network Management software, you might have more robust, cost-effective network management options than you thought, including:



SolarWinds’ easy-to-install, easy-to-use, enterprise-ready software does not require an enterprise to operate it. This gives companies the ability to control costs without sacrificing functionality. In the past, one of the biggest reasons to outsource was to control costs. Signing a single check every year meant that you knew exactly what you were getting, cost-wise. However, with the recent gains in automation and software technologies, outsourcing may make less sense – financially or otherwise – than it used to.


Trunks and Trunking

Posted by LokiR Oct 23, 2012

Continuing on with some telecom basics, here's a basic overview of trunking.




A trunk is a single channel that allows data to move between two points. If you have a network topology map handy, you could consider the lines between each switch to be a trunk, though it's slightly different in telephony.


Trunking is a way to send multiple conversations over a single line. This is highly related to my previous PBX article where an organization rents a couple of lines for use by all members.


An Abstract Example

On a very abstract level, every time you connect a device to your computer, like an external drive or mobile device, you could consider the cable to be a trunk. Point A is the external drive. Point B is the computer. The cable between the two is the trunk.


Now, if you add a USB hub to the mix, you have an example of trunking. Let's say you have an external drive and a mobile device. You could plug both into your computer, but maybe you only have one open port. So you plug in the USB hub and plug both the external drive and mobile device into the hub. They both send signals through the cable connecting the hub to the computer. The act of both devices sending signals through the last cable from the USB hub to the computer (and not getting the signals mixed up) is trunking.
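In code terms, the USB-hub example boils down to tagging each frame with its source so one shared channel can carry several conversations without mixing them up. A toy Python sketch:

```python
from itertools import zip_longest

def mux(conversations):
    """Interleave frames from several conversations onto one shared
    channel, tagging each frame with its source."""
    streams = list(conversations.items())
    trunk = []
    for frames in zip_longest(*(messages for _, messages in streams)):
        for (channel, _), frame in zip(streams, frames):
            if frame is not None:
                trunk.append((channel, frame))
    return trunk

def demux(trunk):
    """At the far end, the tags keep the signals from getting mixed up."""
    conversations = {}
    for channel, frame in trunk:
        conversations.setdefault(channel, []).append(frame)
    return conversations
```

Real trunking works at the signal level rather than with Python tuples, of course, but the tag-share-separate pattern is the same idea.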


Trunks in Telecom

More realistically, a trunk is the massive bundle of lines your telephone company controls. Organizations rent a few lines from the telephone company, generally called trunk lines. Rented lines are expensive, so organizations generally try to rent as few as possible. The trunk line goes to a PBX system (or an IP PBX system nowadays), and organizations use their PBX systems to coordinate which device gets what information and then decide which lines transmit the data.


Trunking comes in a few different formats as well, such as SIP and T1 PRI. This is controlled by the PBX system and your telecom provider. Stay tuned for more information on SIP and PRI trunking.



Several of the more technical folks among our customers have asked how SolarWinds Patch Manager communicates with Microsoft WSUS to publish update packages to the WSUS server. Since the process is similar regardless of what patch management tools you use to publish updates, I thought I'd write about it here.


The thing all the WSUS-based patch management products have in common is that they all leverage the WSUS API to publish updates. The following outlines how Patch Manager completes this process:

  1. Patch Manager initiates the publishing task:
    1. It loads the update definition into the WSUS database.
    2. It compiles the update installer(s) in a cabinet (CAB) file.
    3. It creates the CAB file on the WSUS server in the ~\UpdateServicesPackages file share.
  2. WSUS simulates the "File Download" task:
    1. It renames the CAB file with a 40-character hexadecimal representation of the original CAB file's SHA-1 hash.
    2. It makes the CAB file available for download by copying it to the appropriate folder in the ~\WSUSContent file share.
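Step 2.1, computing the 40-character hexadecimal SHA-1 that becomes the CAB file's new name, is straightforward to reproduce (a sketch of the same computation, reading in chunks so large installers don't land in memory all at once):

```python
import hashlib

def wsus_content_name(cab_path):
    """Compute the 40-character hexadecimal SHA-1 of a CAB file, the
    value WSUS uses when it renames the published file. Chunked read
    keeps large installers out of memory."""
    sha1 = hashlib.sha1()
    with open(cab_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            sha1.update(chunk)
    return sha1.hexdigest() + ".cab"
```

Because the name is derived from the file's contents, the same update always lands at the same path in ~\WSUSContent, and any corruption changes the hash.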


For additional information about how Patch Manager works, check out the Patch Manager Guided Tour.




What is the Cloud?

Posted by DanaeA Oct 22, 2012

You have heard it mentioned by your friends, by coworkers, and on TV (“oh, just save it to your cloud”, “is it backed up on the cloud?”), and you’ve been too embarrassed to ask: what is this infamous “cloud” everyone seems to know about?

Well, the cloud is really just the internet. If you have used Facebook, Yahoo, or Gmail, then you have used the cloud. The cloud, or cloud computing, is a collection of hardware, software, networks, storage, and services (Facebook, Gmail) kept in remote locations. Apple’s iCloud, for example, is a huge server farm in North Carolina. If you are used to running your software programs on your personal computer, well, that all changes with the cloud. When your programs, email, photos, and documents reside on remote servers (cloud storage), you can access them from anywhere.


Advantages of the Cloud

Besides being able to access your information from anywhere, you can access it on multiple devices. For example, I received a Kindle Fire for Christmas and to me, it is magic. I can download a book in seconds, start reading it and then turn it off. A few hours later, as I sit waiting in the doctor’s office, I remember that book and open my Kindle app on my phone and there is my book, open to the page I was on when I left. Now tell me that isn’t magic. Another advantage is that all your documents, music files, videos, and photos are all backed up on the cloud servers, so you don’t need to worry about losing them.


Concerns of the Cloud

Since these files are stored remotely, you must have an internet connection to access them.  Accessing your files, photos, and music takes bandwidth, but not very much. What you really have to watch out for are streaming movies and video games. Both of these use a lot of bandwidth, so you may go through your allocated amount of data very quickly and then incur extra charges.



Remote Troubleshooting


One of the major job functions for IT professionals is managing and troubleshooting Windows computers. System administrators use various tools and techniques to accomplish this, and most of the time it requires the administrator’s physical presence at each computer. Moving around from one system to another figuring out what went wrong makes it hard for administrators to be efficient. As organizations have expanded beyond national borders and practices like telecommuting have become more common, remote support tools have become staple items in the toolsets of almost all IT professionals.


Perhaps the most widely used tool for remote assistance on Windows is Microsoft’s Remote Desktop Protocol (RDP). RDP is native to all Windows operating systems since Windows XP (except XP Home), which means sys admins can provide remote assistance to Windows XP, Vista, and Windows 7 users alike. It allows IT Pros to remotely control Windows machines from anywhere in the world, as long as they can establish a connection and certain other conditions are met (e.g., port 3389 open on the receiving firewall).
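Before attempting an RDP session, a quick preflight check can tell you whether TCP 3389 is even reachable (a small sketch; an open port says nothing about credentials or whether Remote Desktop is actually enabled on the target):

```python
import socket

def rdp_reachable(host, port=3389, timeout=3.0):
    """Return True if a TCP connection to the RDP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A False here usually means the firewall condition above isn't met, which saves you from staring at a hung RDP client.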


Though RDP is easy to use, it cannot be considered a complete remote support tool for Windows computers because its functionality is limited to simple remote control sessions.  During an RDP session, users are unable to share screens, making some troubleshooting processes cumbersome.  RDP does not allow techs or system administrators to perform remote administration tasks such as restarting services, viewing event logs, or stopping processes without initiating a full remote control session.  This can be especially frustrating because there are times when simple administration tasks can resolve computer issues and initiating a full remote control session may lock an end-user out depending on which version of Windows is running on their computer.   Ideally, IT Pros would have access to tools that allow them to initiate remote control sessions and perform remote administration tasks from the same console.



DameWare Remote Support (DRS)


DameWare Remote Support is exactly that: a complete remote support tool that IT Pros can use to manage all of their Windows computers from one console. DRS allows admins to initiate remote control sessions through one of three methods:




• Microsoft’s Remote Desktop Protocol (RDP)

• VNC

• DameWare’s own Mini Remote Control Viewer (MRC)


This offers admins a great deal of flexibility when connecting to remote computers and it opens up the realm of Mac OS X and Linux to admins who need to support a mixed-OS environment.


DRS lets admins perform Windows administration tasks from its console without needing a full remote control session. Some of these tasks include:


• Viewing event logs

• Viewing and killing runaway processes

• Starting, stopping, or restarting services

• Accessing the Disk Manager to manage partitions

• Managing and creating local users and groups

• Managing attached printers and installing drivers remotely

• Scheduling tasks

• Editing the registry and uninstalling software



In addition to the time-saving remote support features included in the DRS console, other common IT tasks can be performed with DRS.  These include:


• Managing multiple Active Directory domains

• Adding, removing, or modifying AD users, security groups, and OUs

• Managing group policies

• Managing Exchange accounts

• Exporting Active Directory objects in bulk to CSV or Excel files



DameWare Remote Support makes the day-to-day tasks of IT Pros much easier to handle and increases productivity by allowing them to perform many common job functions from one MMC-style console.

A Geek's Tale

Posted by Bronx Oct 22, 2012

I am proud to be a geek, even though I don't look like one. I came up with the idea for this geeky analogy after an unpleasant experience of choosing my "free," upgraded phone. Naturally, it wasn't free; nothing ever is. (I will not discuss my carrier's questionable sales tactics, despite my frustration.) Anyway, I realized I had a choice to make when upgrading my phone: stick with the Android, or switch to an iPhone or a Windows phone. Decisions, decisions. What to do? Undoubtedly, the iPhone is the king of the touch-screen phone realm, but being a programmer and geek extraordinaire, I like to tinker.


iPhone: I didn't find the iPhone to be a suitable match because of the restrictions Apple imposed. There were certain features of the phone I simply could not change or access, lest I jailbreak the thing. If I have to hack a phone to get it to do what I want, then it's not for me. Mind you, I am capable of doing so, but seriously? Have we really come to this? Hacking phones to do something fairly routine? iPhone: cool, but not a good fit.


Windows phone: I love experimenting and I thought this puppy was gonna be great. Microsoft operating systems on both my PC and phone? Xanadu! Then I turned it on and spent two hours "tinkering." My new Windows phone had the exact same restrictions that the iPhone had! I was stupefied. Brought the clunker back to the store.


Android: Disappointed and frustrated, I exchanged my shiny new Windows phone for a shiny new Android. Look at that! All the restrictions the other two phones had were not present with my new Android. Needless to say, I was very pleased with my final decision. I can do everything I want with my phone.


Moral of the Story.

I almost find it odd that I didn't base my choice of phones on the quality of the calls. (Who talks on the phone anymore?) The features were there in all three types of phones. My choice was based almost solely on the lack of restrictions the phone had, in addition to the availability of settings. Customization was important to me, very important.


What's this got to do with network and application monitoring software?

Everything. No two network and application monitoring environments are equal. No two Admins are equal. I suspect, like me, you want to be able to tinker and customize your software to suit your needs. (Where's the fun in everyone driving a green 1994 Mazda 626?) You want your dashboard to highlight what you care about. Your reports and alerts need to provide you the information you want. You want to be able to use your cell phone to monitor your environment. With SAM, the options and settings are there.


Server & Application Monitor (SAM) has settings galore!

In fact, even its settings have settings! Customize to your heart's content with SAM. How you set up your monitoring environment is as unique as a snowflake. See for yourself!

Our digital universe is rapidly expanding, and we're collecting the data. We're gathering social media data, machine data, transactional data, and research data. Information from each tweet on Twitter, like on Facebook, and view on YouTube is recorded. Consumption patterns are tracked at superstores, corner stores, grocery stores, and liquor stores. From the quark to the universe, from the Genome Project to NASA's New Horizons mission, our researchers are recording and storing data. Data collection is everywhere, and it's big.


How big is big? Visualizing the volume of Big Data


Big Data is generally measured in the petabyte to zettabyte range.

Bytes by order of magnitude:

kilobyte (kB) = 10^3 bytes
megabyte (MB) = 10^6 bytes
gigabyte (GB) = 10^9 bytes
terabyte (TB) = 10^12 bytes
petabyte (PB) = 10^15 bytes
exabyte (EB) = 10^18 bytes
zettabyte (ZB) = 10^21 bytes
yottabyte (YB) = 10^24 bytes






According to the Digital Journal, the volume of global data is doubling every two years. As the volume of global data increases, the cost of storage decreases.
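The doubling claim is easy to turn into a quick projection. A minimal sketch in Python; the 2.8 ZB baseline for 2012 is an assumption for illustration (figures in that range were widely cited at the time):

```python
def projected_zb(year, baseline_zb=2.8, base_year=2012):
    """Global data volume projected for `year`, doubling every two years."""
    return baseline_zb * 2 ** ((year - base_year) / 2)

# Project a decade out from the assumed 2012 baseline.
for y in range(2012, 2023, 2):
    print(y, round(projected_zb(y), 1), "ZB")
```

At that rate, the assumed 2.8 ZB of 2012 grows past 80 ZB within a decade, which is why the petabyte-to-zettabyte vocabulary above matters.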






Who's using Big Data?

Forbes combined search analytics and the latest Gartner forecast on Big Data to identify the largest Big Data consumers.




How is Big Data used?


Big Data is seen as the next frontier for innovation, competition, and productivity. Businesses, governments, and research institutions are investing in ways to harness the full potential of the Big Data frontier. Next week I'll take a deeper dive into the Big Data consumers, and how they're using this vast expanse of data to solve real problems and (hopefully) improve our human experience.

Knowing the Vulnerabilities is Key.

From what I have seen in my years of network management, there is a good deal of misunderstanding surrounding SNMP implementation. The procedures for basic implementation are well understood, but the problems with using default settings and broadcasting SNMP are not well known. Considering that SNMP is used to manage almost every network, making sure that you secure access to SNMP is critical. Here are the areas I believe you should check in your network.


Proper Use of Community Strings.

Community strings are a type of password. They control access to Management Information Bases (MIBs) and define the level of access. Here is where one of the problems occurs. Now, I don't have a scientific poll, but I am willing to bet that if you asked a group of network engineers what the SNMP v2c community strings are, a good number of them would answer, "public and private." The correct answers are read-only and read/write.

This misunderstanding happens because the default settings for read-only and read/write are "public" and "private". When SNMP v2c is enabled, most devices will populate the read-only and read/write community string fields with these defaults. I have seen more than once that the Network Management System (NMS) SNMP strings were then set to public and private to allow the NMS to communicate with the devices. Here are a few things you can do to increase the level of security on SNMP v2c.

Best Practices to Avoid SNMP Security Issues.

  • Never use default community strings on devices or your NMS.
  • Use unique community strings by geography or by device function. For example, create unique community strings for WAN access devices, EMEA area devices, data center devices, etc. SNMP v2c community strings are passed in plain text, so this way, if one area or device type becomes compromised, the rest of the network is not.
  • Run a scheduled discovery for devices using the default community strings as well as the discoveries using valid strings. Once you have a discovery for devices answering to default strings, add an alert for that condition. This automates locating and taking action to correct these devices. You will want to give your network security a heads-up as a scan for default community strings may trigger alerts in security devices.
  • Use automated network configuration management. While default community strings are a security issue, the root cause lies in configuration management weaknesses.
  • Use an automated policy compliance reporting package to demonstrate compliance with internal and external policy requirements.
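The first best practice above is easy to automate as part of a configuration audit. Here is a minimal Python sketch that sweeps a device inventory for default community strings; the inventory structure and field names are hypothetical examples, not any particular NMS API:

```python
# Strings that should never appear on a production device or NMS.
DEFAULT_STRINGS = {"public", "private"}

def find_default_communities(devices):
    """Return the names of devices still using a default SNMP v2c community string."""
    flagged = []
    for dev in devices:
        strings = {dev.get("read_community", ""), dev.get("write_community", "")}
        if strings & DEFAULT_STRINGS:
            flagged.append(dev["name"])
    return flagged

# Hypothetical inventory records for illustration.
inventory = [
    {"name": "wan-edge-01", "read_community": "public", "write_community": "private"},
    {"name": "dc-core-01", "read_community": "emea-ro-7f2", "write_community": "emea-rw-9c4"},
]
print(find_default_communities(inventory))  # → ['wan-edge-01']
```

In practice you would feed this from your configuration management export and wire the result into an alert, as described above.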


A couple of great pieces of software to accomplish the above best practices are SolarWinds Network Performance Monitor (NPM) and Network Configuration Manager (NCM).


NPM network monitoring software discovers devices with default community strings and alerts on the issue. NCM offers extensive configuration management and compliance reporting.

If you are ready for the jump to SNMP v3, check out this technical reference.


Fair warning for telephony gurus - this is a basic overview of PBX.


What is a PBX?


A PBX connects internal telephone calls and connects outgoing calls to the public network.


PBX stands for Private Branch Exchange and harkens back to the days before cell phones and touch-tone phones - back to the days of real telephone operators. In those dark times, you would connect to a live operator and tell her who you wanted to be connected to. If you were placing a local call, she could connect you right away on her own switchboard. If you were placing a long distance call, she would connect you to a different operator close to the physical location of the place you were calling.

Telephone operators, 1952, from the Seattle Municipal Archives

Nowadays this process is taken care of with technology, but the part where the operator makes the decision to connect you locally or to a different operator is the basis of how PBX systems work today. Technically, a more modern example would probably be a subnet, but telephone operators were neat.


The private branch includes all those local numbers. The PBX works to connect those local, or internal, numbers to each other and also connects a local number to a number outside of the private branch.


Why have a PBX?


Organizations generally use a PBX to save money. A PBX system also includes modern bells and whistles that are a part of everyday life, such as call holding, forwarding, and transfers.


It costs a lot of money to have a dedicated telephone line. Organizations will decrease their cost by renting only a couple of telephone lines instead of direct lines for each employee.


For example, assume that your organization has 40 people. Renting 40 telephone lines costs an exorbitant amount of money. Instead, you take a calculated risk and decide that, out of those 40 people, only 10 at a time are going to be talking on an outside line. You then rent 10 lines from the telephone company and set up your PBX. Each person receives their own telephone with an extension. Those extensions are numbers local to the private branch. The PBX then handles connecting local numbers and connecting local numbers to telephones outside of the private branch.
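The trunk-sizing gamble above can be quantified with the classic Erlang B formula, which telephone engineers use to estimate the probability that a call arrives when all rented lines are busy. A rough Python sketch; the assumption that each of the 40 people is on an outside line about 10% of the time (so roughly 4 erlangs of offered traffic) is illustrative:

```python
def erlang_b(lines, traffic_erlangs):
    """Blocking probability for `lines` trunks offered `traffic_erlangs` of traffic,
    computed with the standard iterative Erlang B recurrence."""
    b = 1.0
    for n in range(1, lines + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

# 40 staff, each on an outside line ~10% of the time → ~4 erlangs offered to 10 lines.
blocking = erlang_b(10, 4.0)
print(f"Probability all 10 lines are busy: {blocking:.4f}")
```

With these assumed numbers the blocking probability comes out around half a percent, which is why renting 10 lines for 40 people is usually a safe bet.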


The PBX handles extensions, directing internal calls to the correct extension, and connecting internal numbers to numbers outside of the private branch on the rented public telephone lines. It manages all calls by establishing, maintaining, and disconnecting connections between numbers and records call metrics.


PBX systems can be rented or purchased, though you are more likely to use IP PBX systems now.

Contemporary wireless technology showcases an uneasy relationship between security and privacy concerns; the two sometimes appear to balance each other, yet each is always being leveraged to gain advantage in emerging commerce.


Knowing how always implies a readiness to do—which lacks only a motive and opportunity, as any sleuth will tell you.


For a time, the company Carrier IQ would happily tell you of their deal with the Nielsen Company to “deliver critical insights into the consumer experience of mobile phone and tablet users worldwide, which adhere to Nielsen’s measurement science and privacy standards.” On November 12th, thanks to Trevor Eckhart, we discovered that Carrier IQ technology embedded in millions of mobile devices could record and send as data to the relevant carrier (AT&T and Sprint, in the US) not just the usual clicks on webpages but also strokes made on the device’s keypad.


Following the Carrier IQ ‘rootkit’ disclosure, Carrier IQ, wireless device manufacturers, and wireless service providers attempted to reassure consumers that all data was being used to improve the consumer experience. Everyone denied that such data, though keystrokes could be recorded, was actually being collected; or, if it was being collected, they hedged, the data wasn't being used for commercial purposes.


Meanwhile, the FBI implicitly finds Carrier IQ-related data so valuable in their ongoing surveillance operations that it declined to comply with a Senate Judiciary Subcommittee’s request for the bureau’s written guidance on how to access and analyze Carrier IQ-gathered data.


Every Wireless Device is a Potential "Wire"


Telecommunications carriers have a legal right to ensure that their networks are working properly, which extends to listening to customer phone-calls as needed. PR from Carrier IQ and the carriers themselves emphasizes the diagnostic use of the wireless usage data they are collecting.


Under these circumstances, we should assume that any consumer wireless device can be a ‘wire’ in the surveillance use of the term. The company Fortinet in part builds its business on developing security products based on this assumption. The FortiGate thick wireless access point, for example, besides managing wireless networks and traffic, filters data in many different ways.


Let’s say your team uses the SolarWinds product Mobile Admin as part of its triage workflow. While the Mobile Admin server mediates access to a network device, Carrier IQ technology on an individual mobile device could capture configuration changes tapped into the keypad. Properly set up, the FortiGate access point would provide a layer of security that prevents that keypad data from being transmitted out of the network.


Ultimately, since technology can be used for good and bad ends, an indispensable part of taking care of your network is knowing what users are doing on the network, when they are doing it, and with which devices. For that, you need a monitoring product like the SolarWinds User Device Manager.

In a previous post, I answered the question, What is Patch Management? In that post, I noted that Windows environments generally use one of two Microsoft tools to handle their patching needs: Windows Server Update Services (WSUS) or System Center Configuration Manager (SCCM or ConfigMgr). In this post, I address the major differences between these two solutions.


The biggest difference (at least in my eyes) is that WSUS is free and ConfigMgr isn't. That raises the question of why any IT operation would choose ConfigMgr over WSUS. The short (and somewhat cheeky) answer is that there is nothing you can do with ConfigMgr that you can't also do with some combination of WSUS and other free tools. That said, the following highlights the top three use cases ConfigMgr supports natively that WSUS doesn't.


Operating System Deployment

ConfigMgr supports deploying a new or clean operating system to managed computers already running the ConfigMgr agent. This is great when you need to upgrade or re-image a managed computer, but it doesn't do much for fresh hard drives or VMs. The free tool for this use case is Windows Deployment Services (WDS), which Microsoft added with Windows Server 2008. The cool thing here is that WDS also supports OS deployment to computers not already running Windows.


Software Packaging

ConfigMgr supports software packaging, which includes packaging non-Microsoft updates to deploy to managed computers. To get support for third-party updates in WSUS, you need some kind of WSUS extension. However, the downside of ConfigMgr's software packaging is after you package the updates, you have to create a separate deployment package to get them to the appropriate computers. The deployment package is loosely comparable to a WSUS approval, except approvals are update-specific, while deployment packages can contain multiple updates.


Asset Management

ConfigMgr supports asset management - that is, collecting and reporting managed computers' hardware and software information - through automated discovery tasks. WSUS supports this too, but the functionality is only available in the WSUS API, so you need a WSUS add-on or some extensive developer skills to implement it. I discuss how SolarWinds Patch Manager can supplement WSUS reports in this way in another post, but when it comes to patch management,  Patch Manager's managed computer inventory task is more on-par with what ConfigMgr offers and is the right patch management solution.


For those of you interested in implementing ConfigMgr in your environment, I'll address some of these features, along with others, in more detail in another post. So stay tuned. Otherwise, between the patch management tools, you might also want to check out how Patch Manager supplements Microsoft SCCM patch management.


I'm new to IT storage management. Storage monitoring is brand new to me, and the first things I asked were: Who needs storage management and why do they need it? How important is it to have an efficient storage performance monitoring system? Here’s what I found:

  • The primary goal of storage management is the timely, free flow of information.
  • If your business is using storage on a network, you are a good candidate for storage management software.
  • Effective storage management improves reliability, scalability, security, and efficiency.
  • Effective storage management maximizes productivity, saves money, anticipates future needs, and mitigates risks by notifying teams when there is a problem.


Okay, this is good information, but how does storage management software meet these lofty objectives? Some examples of what a storage management solution can do:

  • Monitor data and balance loads on volumes, RAID groups, and storage pools
  • Monitor DAS, NAS, and SAN performance capabilities
  • Provide real-time information on the performance and status of your storage environment
  • Identify how storage resources are used and predict future usage patterns
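The last item, predicting future usage, can be sketched as a simple linear trend over recent usage samples. A minimal Python illustration; the sample numbers and the linear-growth assumption are hypothetical:

```python
def days_until_full(samples_gb, capacity_gb):
    """Fit a least-squares line to daily usage samples and estimate days until
    the volume hits capacity. Returns None if usage is flat or shrinking."""
    n = len(samples_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_gb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_gb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    return (capacity_gb - samples_gb[-1]) / slope

usage = [400, 410, 420, 430, 440]   # GB used on each of the last 5 days (made up)
print(days_until_full(usage, 500))  # → 6.0
```

Real storage management products use far richer models, but this captures the basic idea: watch the trend, not just the current number.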


Many businesses choose to use mixed-vendor storage solutions. Using storage devices from different vendors offers cost and feature flexibility. But these heterogeneous storage environments can lead to storage management nightmares. Storage Manager powered by Profiler is the leading third-party storage management solution for monitoring and managing multi-vendor devices. Examples of supported devices include NetApp®, EMC®, IBM®, HP®, Dell®, Sun®, and many other NAS, SAN, and Fibre Channel vendors.


I was surprised to learn that storage management and storage performance monitoring software are often an afterthought, coming into focus only after a problem is encountered. SolarWinds has a need-for-speed storage management solution: you can quickly download Storage Manager, install it, and learn to use it so you can find and fix storage fires.


I feel like a spelunker in a vast cave system. Storage management is complex, but I am determined. Stay tuned as I continue my adventure.



If you're still using a 3G mobile network, you may want to move up to 4G - and fast! According to the article 3G protocols come up short in privacy, say researchers, British and German researchers "...found that 3G telephony systems pose a security weakness that results in threats to user privacy. The weakness makes it possible for stalkers to trace and identify subscribers." The hole in 3G security, the article further explains, can also enable spy operations and commercial profiling.


Why is Security an Issue Now?


Given that 3G networks have been in use since 1999, it seems odd that 3G now has security problems. The paper resulting from the British and German study, New Privacy Issues in Mobile Telephony: Fix and Verification, explains why this problem exists today rather than a decade ago. A decade ago, the high cost of the equipment and the lack of open-source protocol stack implementations made a 3G system security breach extremely unlikely. Today, however, access to all kinds of cheap technology, including easily programmed Universal Software Radio Peripheral (USRP) boards and software emulation, makes it easy to breach a 3G network.


Proposed Fixes


Proposed fixes are based on public-key cryptography. This system requires two separate keys: one public key for encryption and one private key for decryption. The two keys can perform only the functions they’ve been created to perform; a public encryption key can only encrypt and a private decryption key can only decrypt. The two keys are mathematically linked and you can use the private key to generate a public key. The New Privacy Issues in Mobile Telephony: Fix and Verification paper provides details on this solution.
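As a toy illustration of the public/private key split described above, here is a minimal RSA-style sketch in Python with deliberately tiny primes. Real systems use primes hundreds of digits long; this only shows the mechanics of encrypting with a public key and decrypting with the private one:

```python
# Toy RSA key generation with tiny primes (never use such sizes in practice).
p, q = 61, 53
n = p * q                  # modulus, shared by both keys
phi = (p - 1) * (q - 1)    # derived from the private primes
e = 17                     # public exponent: anyone may encrypt with (e, n)
d = pow(e, -1, phi)        # private exponent: modular inverse of e, kept secret

message = 42
ciphertext = pow(message, e, n)     # encrypt with the public key
recovered = pow(ciphertext, d, n)   # decrypt with the private key
print(recovered)  # → 42
```

Note how the public key is generated from the private material (the primes), but not the other way around: that one-way derivation is what makes the scheme work.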



Have you heard about Huawei and ZTE? An October 8, 2012 Reuters article, U.S. lawmakers seek to block China Huawei, ZTE U.S. inroads, states that the U.S. House of Representatives Intelligence Committee has recommended “U.S. telecommunications operators…not do business with China's top network equipment makers because potential Chinese state influence on the companies poses a security threat.” The Intelligence Committee issued this statement in a report based on nearly a year’s worth of investigating U.S. complaints about Huawei and ZTE telecommunications equipment.


This recommendation applies to networking hardware and software used in large-scale networks, not the individual telephones that Huawei and ZTE also make. According to the Intelligence Committee’s report, companies that used Huawei equipment say Huawei routers performed a number of strange behaviors – such as sending large data packets to China late at night.


Huawei and ZTE are two of the largest network hardware companies in the world: Huawei is the second largest, and ZTE is the fifth. To get the full story on just how significant this possible economic espionage case is, read the original Reuters article.


This situation strikes me as incredibly important for a number of reasons. For starters, what does this mean for U.S.-China relations and existing agreements? How will these allegations change the global market, if at all? And what about U.S. companies already using Huawei and ZTE equipment? Do they ditch what they have and start over? Is there a penalty for not doing so? Do these companies even want to keep their equipment?


I’m really curious to get your opinions on this situation – so tell me folks, what do you think? And what do you think the end results will be? Based on the results (and even the allegations) of this case, do you think we’ll see any changes in the way we do networking?



If anyone can break your code, then potentially everyone can break it. The difference between anyone and everyone is just access to the right tools and the time to apply them.


You may remember that in 2004 a retired engineer named Mark Klein blew the whistle on a National Security Agency (NSA) electronic surveillance tap set up in Room 641A at the AT&T building in San Francisco, CA. Fiber optic lines in the trunks carrying internet backbone traffic through the building are beam-split to feed Narus STA 6400 deep packet inspection devices in this secure room.


The Narus devices are capable of looking at every packet of data passing through the internet backbone in real time (10 gigabits per second), filtering all the data into a pipeline for warehouse storage. William Binney, a former director of NSA operations, estimates that 10 to 20 similar rooms are set up in telecommunications facilities geographically distributed across the US.


It's all part of the formerly secret project called Stellar Wind; though illegal at the time, the project became retroactively legal with The FISA Amendment Act of 2008 and continues today.


Narus devices cannot inspect packets encrypted with AES; nor can any other data-mining system currently available. AES-encrypted data are like unbreakable nuts swallowed whole into the vast storage system. Even when the NSA's new 1 million square foot datacenter in Bluffdale, Utah comes online in 2013, the NSA will be no more capable of cracking AES-encrypted data. Crypto experts confidently dismiss brute force computer attacks on the Rijndael algorithm (underlying AES), saying that such an effort would take longer than the age of this universe.
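The "longer than the age of the universe" claim is easy to sanity-check with back-of-envelope arithmetic. In this Python sketch, the attacker's key-testing rate is an assumed, very generous figure:

```python
# Back-of-envelope: brute-forcing the AES-256 keyspace vs. the age of the universe.
keys = 2 ** 256                 # size of the AES-256 keyspace
rate = 10 ** 18                 # assumed attacker testing one quintillion keys/second
age_of_universe_s = 4.3e17      # roughly 13.7 billion years, in seconds

seconds_needed = keys / rate
years_needed = seconds_needed / (3600 * 24 * 365)
universes = seconds_needed / age_of_universe_s
print(f"{years_needed:.2e} years, or about {universes:.2e} ages of the universe")
```

Even at that absurd testing rate, exhausting the keyspace takes on the order of 10^41 universe lifetimes, which is why the experts dismiss brute force and why the NSA's hopes rest on pattern analysis at scale instead.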


Yet building out their big data Leviathan is the NSA's long-term gambit to succeed in both comprehensive surveillance and storage, and also at gaining cryptanalytic access to the most secure of data. Growing and housing a data set in the yottabytes (1 yottabyte = 1 quadrillion gigabytes) alongside petaflops of computing power are conjoined attempts to give their crypto software the leverage of scale and speed. As the spookiest of NSA cryptanalysts hypothesize, if their software establishes enough patterns among similar data, the code-breaking software may defeat the AES algorithm.


Should the NSA Leviathan break AES, and assuming no stronger encryption algorithm takes over, then everybody's data will be subject to whoever controls the system. No digital communication over the internet would be private.


For now, however, nobody and no system on Earth can decrypt AES when it is used with 192- or 256-bit key lengths. In fact, the US federal government requires AES with a 256-bit key length for encrypting digital documents classified as 'top secret.'


Encrypting your Data

So if you want the data flowing to and from your network to be truly secure, you should use IPsec tunnels on the WAN links between your LANs, use AES cipher suites for TLS when passing web and mail data, and use SNMPv3 for polling MIBs in all network monitoring.


As for network monitoring tools that support SNMPv3, SolarWinds offers Network Performance Monitor for monitoring nodes, Network Configuration Manager for downloading configurations and uploading configuration changes, and VoIP & Network Quality Manager for watching the quality of traffic flow on your VoIP-enabled network.



There is no doubt that we live in a high-tech world where everything is constantly getting faster. If you're over 40, like me (sigh), you may remember a time before emails. There was a time when people actually wrote by hand and mailed letters to one another to communicate. It was a romantic and personal way to communicate, but glacially slow.


Today we communicate via email and text messages. (Some of us still talk on the phone.) With this warp speed improvement in communications, our writing has become little more than acronyms and abbreviations. (Gr8.) Personally, I don't mind the abbreviations so much when being informal. However, it can be confusing at times. For instance, take the title of this post. In the not so distant past, a friend of mine handed me a magazine titled, WTF. I thought the same thing you just did. He explained to me that he just returned from Finland and the title was an acronym for, "Welcome to Finland." With that explanation, he handed me a can of reindeer meat. It was then I thought, "WTF?"


The Price.

Networking software and application monitoring software are replete with acronyms and abbreviations. It's a virtual language unto itself! Do you know this language well? If you know nothing about networking, SolarWinds software, or computers, the following exchange would be meaningless:


Hey Bill, can I FTP the RFP to the DoD, or should I just ZIP it and use the IPSec MPLS?

Well, it is up to you, but the MFIC has made it clear that SOP requires CESA approved methods, and by the way I think the ACLs are FUBAR anyway. LOL, TTYL.

The SolarWinds Acronym Dictionary:

Below is a list of the more common acronyms. Following this list is a link to a more complete acronym table.



Acronym: What it Means
APM: Application Performance Monitor. Name changed to SAM. (SolarWinds software)
BIOS: Basic Input Output System
BBS: Bulletin Board Service
CMYK: Cyan, Magenta, Yellow, Black
DBA: Database Administrator
DLL: Dynamic Link Library
EoC: Enterprise Operations Console. (SolarWinds software)
FoE: Failover Engine. (SolarWinds software)
FSM: Firewall Security Manager. (SolarWinds software)
FTP: File Transfer Protocol
GUI: Graphical User Interface
HTML: Hyper-Text Markup Language
HTTP: Hyper-Text Transfer Protocol
HTTPS: Hyper-Text Transfer Protocol Secure
IP: Internet Protocol
IPAM: Internet Protocol Address Manager. (SolarWinds software)
IPv4: Internet Protocol Version 4
IPv6: Internet Protocol Version 6
LEM: Log & Event Manager. (SolarWinds software)
MAPI: Messaging Application Programming Interface
MIME: Multipurpose Internet Mail Extensions
NTA: NetFlow Traffic Analyzer. (SolarWinds software)
NCM: Network Configuration Manager. (SolarWinds software)
NOC: Network Operations Center
NPM: Network Performance Monitor. (SolarWinds software)
RAID: Redundant Array of Independent Disks
RC: Release Candidate
RDP: Remote Desktop Protocol
RGB: Red, Green, Blue
RPC: Remote Procedure Call
SAM: Server & Application Monitor. Formerly APM. (SolarWinds software)
SeUM: Synthetic End-User Monitor. (SolarWinds software)
SNMP: Simple Network Management Protocol
SQL: Structured Query Language
SWIS: SolarWinds Information Service
SWQL: SolarWinds Query Language
TCP: Transmission Control Protocol
UI: User Interface
URL: Uniform Resource Locator
UDT: User Device Tracker. (SolarWinds software)
VMAN: Virtualization Manager. (SolarWinds software)
WHD: Web Help Desk. (SolarWinds software)
WMI: Windows Management Instrumentation
XML: Extensible Markup Language

The list is virtually endless. To see a more complete list of such acronyms, check out this Wikipedia article.

As you all know, part of what we value here at SolarWinds is the connection with IT users – all of you – who have a job to do every day and use our products to get that job done.  Now, normally we don’t announce new roles publicly, but since these individuals will be very visible on thwack and other forums, I wanted to take the time to introduce the role and the two people filling it.


As a techie at heart, it’s always challenging to find real knowledgeable help when you need it most. Normally it’s just when you are in the middle of doing something that you run into a brick wall.  That is when you go in search of those folks (the geeks) who’ve already walked the path you’re trying to walk so you can avoid the lengthy detours that they took to get to their destination.  Well the new geek role at SolarWinds is targeted squarely at this problem – our geeks are the real thing.


You’ll find the SolarWinds geeks on thwack, and you’ll find them on other forums too, answering questions and developing useful content (how-tos, etc.) that helps make your life a little bit easier.  They both know a lot about our products, and about systems and network management, so no question is too tough.


Patrick Hubbard is our new Networking Geek.  Patrick has been with SolarWinds for many years but has been involved in IT since the dawn of time.  Patrick is the guy at SolarWinds credited with putting together the live demos that many of you have used, and he knows a ton about networking.  Patrick’s thwack profile.


Lawrence Garvin is a Microsoft MVP and is well known in MS sysadmin circles. He’s a techie at heart and, like many of you, spends a fair bit of time learning new technologies in his spare time, so feel free to throw your systems questions Lawrence’s way. Lawrence’s thwack profile.


Both of these guys are an invaluable resource to SolarWinds and, we hope, going forward for all of you. If you have ideas for topics you really want them to cover, you can drop either of them a line.

Welcome to SolarWinds' inaugural IT Blog Spotlight column, where we will share some of our favorite IT blogs and the geeks behind them.


Tom Hollingsworth is a network engineer based in Oklahoma City who has a lot of letters after his name (CISSP, CCNA, CCDA, CCNP, CCDP, CCVP, CCSP, CQS: Unity Support & Design, CQS: Rich Media, CQS: UCCX, CDCNISS, MCSE 2000: Messaging & Security, Novell Master CNE, HP Procurve Master ASE: Convergence, A+, Network+, Security+, Linux+, Project+, and VMWare VCP). Tom started The Networking Nerd in 2010 because he was looking for an avenue to express his thoughts on all things IT in more than 140 characters.


Tom tries to balance his blog posts between expressing his own opinions and helping train and educate those interested in the IT space with overviews and technology deep dives. His most popular post is When Is A Trunk Not A Trunk?


When asked what the top 5 tools a network engineer needs in their toolbox are, he came up with this list:

  • Network discovery tool such as an IP network browser to footprint a network
  • Notetaking software to document everything
  • Terminal client to make serial connections, such as PuTTY, Tera Term, or iTerm2 and ZTerm for you Mac fans
  • Social Media – you can solve a lot of problems on Twitter
  • NetFlow monitor (of course we recommend SolarWinds Real-Time NetFlow Analyzer or NetFlow Traffic Analyzer)


We think this educational, opinionated, and humorous blog is worth checking out!

Tom Hollingsworth.png

Connect with Tom:


Twitter handle: @networkingnerd


If you've got an IT blog that you would like in the SolarWinds IT Blog Spotlight, send us the link and we'll check it out.


Bring Your Own Device

Posted by DanaeA Oct 16, 2012

You are browsing the apps on your smartphone and you see one that catches your attention. It is from a trusted source, so you press Download App without a second thought. What you don’t realize is that this simple app could pose a security risk to your corporation. IT departments are beginning to realize the dangers posed by bring-your-own-device (BYOD) practices involving personal cell phones and tablets, and they need to have security policies in place for these devices.


Mobile Device Management (MDM) applications can help impose password enforcement, remote locking and wiping, and app inventory control. It is also recommended that remote or BYOD users only access the corporate network using a virtual private network (VPN).

The National Institute of Standards and Technology recommends the following BYOD security measures for organizations:

  • Remotely wipe the device if it is lost or stolen
  • Require a password or other domain authentication
  • Have the device automatically lock after it is idle for a set amount of time
  • Restrict which applications can be installed through whitelisting
  • Restrict the use of synchronization services
  • Digitally sign applications
  • Distribute the organization’s applications from a dedicated mobile application store


Controlling accessible websites with a blacklist and a whitelist reduces the chances of a user reaching a malicious site. A blacklist is a list of sites that users are prevented from accessing through the corporate network. Whitelisting can be used to restrict which applications can be installed on corporate devices or on devices that use the corporate network.
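The blacklist/whitelist distinction can be sketched in a few lines of Python; the domain names below are invented for illustration:

```python
def is_allowed(domain, whitelist=None, blacklist=None):
    """Whitelist mode: only listed domains may be reached.
    Blacklist mode: anything may be reached except listed domains.
    If both are supplied, the stricter whitelist wins."""
    if whitelist is not None:
        return domain in whitelist
    if blacklist is not None:
        return domain not in blacklist
    return True  # no policy configured: allow everything
```

Notice the asymmetry: a whitelist blocks everything not explicitly approved, while a blacklist only blocks what is explicitly listed, which is why whitelisting is the stronger (and more administratively expensive) policy.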


The Changing Paradigm of IT

Posted by LokiR Oct 16, 2012

How do you view Information Technology? Do you think it's important? Or is it just another job, a few steps up from flipping burgers?


For those of you who don't think it's that important, let's imagine a time when IT just stops working.


If IT Stops


Suddenly your fancy gadgets and gizmos don't work - your coffee machine doesn't dispense anything; your microwave is dead; your MP3 player stops working. You can't purchase anything on your credit card. In fact, you may not be able to purchase anything after the cash drawer runs out. The ATM no longer works. You can't get money from the bank because the bank doesn't know who you are or how much money you have available. You can't make telephone calls or watch TV. You don't even have electricity. Your car either won't start or won't let you go anywhere. The stock market crashes. The hospitals can no longer provide care.


Not Just the Techie Anymore


IT has an incredible impact on the modern world - it is arguably the backbone of the modern world. Even as people in the industry, though, I don't think we quite grasp how important our industry is. Frankly, information technology is so ubiquitous that everyone takes it for granted.


There's been a shift in how technology is perceived, but have we also shifted our own opinions to match? We're not just the techie doing incomprehensible things to magically make the incomprehensible machine work better. We are in your centrifuges, watching your uranium. We are in your stock markets, watching your trading. We are in your cars, watching you drive.


The Cost of Errors


I think the first time the potential real-world impact of IT crossed the collective consciousness was the Stuxnet worm. For those of you who don't want to google that again, Stuxnet was designed to attack the Iranian nuclear program by sabotaging the centrifuges used in uranium enrichment. Prior to Stuxnet, most people thought of cyberwarfare as speculative fiction at best, if they thought of it at all. Now cyberattacks are mainstream news stories.


Some more recent examples of technology errors impacting the "real world" are the various stock market glitches. From the US to Japan to Spain and other countries, technical errors in either the software or the hardware running the stock markets have negatively impacted the exchanges. The recent Knight Capital software glitch on the NYSE cost an estimated $440 million in roughly 45 minutes of trading.


How about the automobile industry? Outside of the factories, it doesn't seem to be a tech-heavy industry: you stick a key in the ignition, the good, old combustion engine starts up, and you travel from point A to point B. However, GM had to stop selling several car models due to a software problem with OnStar. And when Renesas, a Japanese automotive chip manufacturer, suffered losses in the 2011 earthquake, automobile production slowed significantly in Japan and for some US manufacturers.


Changing Attitudes


IT is more than the little gadgets and gizmos that make life so fun. We affect the world in big ways that we're just now realizing. When a software error causes $440 million in damages in less than an hour, it's time to start thinking about our jobs differently.

Overloading your Kiwi Syslog Server can occur in many ways. 

The first and most obvious way is when there is a non-zero value in the "Message Queue overflow" section of the Kiwi Syslog Server diagnostic information.

A non-zero value indicates that messages are being lost due to overloading the internal message buffers. This can be verified by viewing your diagnostic information:

Go to the View Menu > Debug options > Get diagnostic information (File Menu > Debug options, if running the non-service version). If you see non-zero values then you know you have a problem.

The second way overloading occurs is when the "Messages per hour - Average" value in the Kiwi Syslog Server diagnostic information exceeds the recommended "maximum" syslog message output that Kiwi Syslog Server can nominally handle.  This value is around 1 - 2 million messages per hour average, depending on the number and complexity of rules configured in your Kiwi Syslog Server.

If either of these two scenarios is true for your current Kiwi Syslog Server instance, then load balancing your syslog message load can mitigate any overloading that may occur.

To load balance Kiwi Syslog Server, start by inspecting your Kiwi Syslog Server diagnostic information, specifically looking for syslog hosts that together account for around 50% of all syslog traffic.  These higher-utilization devices are candidates for load balancing, which is accomplished by implementing a second instance of Kiwi Syslog Server.

For example, consider the following "Breakdown of Syslog messages by sending host" from the diagnostics information.

Breakdown of Syslog messages by sending host (Top 20 Hosts)

From these diagnostics, you can see that the top one or two hosts account for >50% of the syslog load.  Most of the time, 50% of all syslog events come from one or two devices, and that is indeed the case here.
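As a rough sketch, the selection of which top talkers to offload can be expressed in Python; the host names and message counts below are invented for illustration:

```python
def pick_offload_hosts(counts, share=0.5):
    """Return the smallest set of top-talking hosts whose combined
    message count reaches the given share (default 50%) of all traffic."""
    total = sum(counts.values())
    offload, running = [], 0
    # Walk hosts from busiest to quietest until the share is reached
    for host, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        if running >= share * total:
            break
        offload.append(host)
        running += n
    return offload
```

The hosts a function like this returns are the ones you would point at the second Kiwi Syslog Server instance.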

To enable a load balanced Kiwi Syslog Server configuration, perform the following actions:

  1. Install a second instance of Kiwi Syslog Server on a second machine.
  2. Replicate the configuration from the first machine to the second.

    On the original instance – File Menu > Export settings to INI file.
    On the new instance – File Menu > Import settings from INI file.
  3. Reconfigure the high-volume devices to send their syslog events to the new instance.

What is Patch Management?

Posted by phil3 Oct 15, 2012

OK...I admit: I've done this a little backwards. I've been writing a lot about the benefits of patch management solutions (see Avoid Crashes and Outsider Intrusion by Patching 3rd Party Applications) and patch management best practices (see Third-party update approval best practices), but I haven't really discussed the basics of patch management. So, in this post, I'll outline a few key concepts of patch management, and define some basic terms.


Patch Management - The Basics

First, let's answer the main question posed by the title of this post. Patch management is the process by which SysAdmins keep the software on the servers and workstations they manage up to date. This involves applying updates - or patches - to managed systems in an effort to maintain the highest levels of security, functionality, and consistency. A lot of this can be done by the end-users; however, the patch management process ensures the updates that get applied within a managed environment actually work with the rest of the software on those systems, and that critical updates are applied in a timely fashion.


That's where the SysAdmin comes in. Admit it: As end-users, it's easy for us to ignore the little popups from Adobe or even Microsoft telling us our software's out of date. Even the polite restart-Chrome-and-we'll-update-it-for-you icon gets ignored by users that don't regularly restart their applications, much less their computers. SysAdmins - by way of a patch management process - reduce this unpredictability by automating the updates they want applied, and systematically disregarding the ones they don't.


To make such a process work, several components come into play. The following are the basic components you're likely to find in any Windows patching environment.


The Update Server

The update server serves two purposes:

  1. It collects updates from one or more software vendors. In a standard Windows patching environment, the update server collects the updates directly from Microsoft.
  2. It distributes updates to managed clients. This part of the process is scheduled on a per-client basis, and is facilitated in part by the Windows Update Agent on each client.


Generally, the update server is either a Windows Server Update Services (WSUS) server, or a System Center Configuration Manager (ConfigMgr) server. In a future post, I'll outline some of the differences between these two options.


The Update Agents

The update agents accept updates from the update server, and then install them on a pre-defined schedule. For Windows systems, this is the Windows Update Agent (WUAgent). If the system is not configured to participate in some kind of patch management process, the WUAgent collects updates directly from Microsoft Windows Update, and then notifies the end-user with that all-too-familiar popup in the task bar. With a patch management process in place, you can decide whether or not to require/allow this level of user intervention.


The Management Console

The management console is where the SysAdmin makes the rest of the process work. This is where you approve and decline updates, and define how you want the WUAgent to handle the updates you approve. Furthermore, the management console lets you define groups of servers and workstations to streamline the approval process and impose a high level of consistency across similar systems.


What About Third-party Patching?

Third-party patching is something that is notably missing from the standard Windows patching environment. Microsoft Windows Update helps you keep your operating systems and Microsoft applications up to date, but it doesn't help you when it comes to patching Firefox, Adobe products, or Sun Java. That said, the WUAgent is particularly well-suited to handle updates from these and other third-party vendors. So, if you publish updates from these vendors to your update server (WSUS or ConfigMgr), the WUAgent on your clients will "see" those updates and apply them when they're approved and applicable for the client.


This is not as simple as it might sound, however. In order to publish a third-party update to a Windows update server, you have to also publish some metadata along with it. This metadata includes rules that define the type of clients to test for applicability, and then specify what constitutes whether or not the update is actually applicable. Furthermore, the metadata defines how the WUAgent should handle the installer for the update: Should it reboot the client before or after installing the update? Does it need to stop any processes before it can start the installer? And so on. Configuring these update packages can equate to a lot of work for you, especially if you're working with several third-party updates, or even just one or two that require a lot of work from the WUAgent (patching Java is a good example).
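To make the idea of applicability rules concrete, here is a hedged Python sketch. The field names (product, version, installed_products) are invented for illustration and do not reflect the actual WSUS or WUAgent metadata schema:

```python
def is_applicable(client, update):
    """An update is applicable when the target product is installed on
    the client at a version older than the update's version."""
    installed = client.get("installed_products", {})
    product = update["product"]
    if product not in installed:
        return False  # product absent: nothing to patch
    return installed[product] < update["version"]
```

Real applicability metadata also encodes OS checks, prerequisite updates, and installer behavior, but the core question is always the same as above: is something present, and is it out of date?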


Bridging the Gaps

A good patch management tool can help bridge the gaps inherent in the standard Windows patching environment - some discussed here, others not. For example, SolarWinds Patch Manager can help bridge the third-party-update gap by providing a collection of pre-built, pre-tested packages from dozens of popular third-party vendors. Patch Manager also provides deployment flexibility beyond what WSUS or ConfigMgr provide natively. Namely, with Patch Manager, you can deploy updates on demand - you no longer have to wait for your clients to check in with the update server; you just tell Patch Manager to push the update. In a similar fashion, Patch Manager can even deploy full-installer software packages to clients that don't have the software yet.


To see how Patch Manager can help you implement a powerful yet flexible patch management process, check out our interactive demo or download a free trial.

It's that time again. Time to vote for your favorite politicians (or "the lesser of two evils" as we have all probably joked with a smirk). Like me, you probably feel like your vote doesn't really count. We have over 300 million people in the U.S. and one vote feels like a drop in the ocean. You spend months learning the issues and backing your guy, only to find that the day after the election, he lost. It can be frustrating. So what's this got to do with technology? Good question.


The Politics of Software Development.

I'm guessing many of you have never worked for a company that develops software. (It's wonderful, trust me.) One of the amazing things about creating software is how new features get implemented in future releases. Think about it. Where do these new feature ideas come from? The Product Manager doesn't just make them up willy-nilly. Like any company manager, his ultimate goal is to increase sales. How does he do that? Answer: Give the people what they're clamoring for. (At least that's how it works here.)


A perfect example of this is the host of new features SAM has incorporated within the last two releases. Within one year, SAM has added tons of new features. Here are just a few:

  • Hardware Health Monitoring
  • Native support for Hyper-V
  • Multi-Edit Templates
  • Support for Google's Chrome
  • A Real Time Process Explorer

All of the new feature ideas came from the users. The Product Managers just have to determine how much time we have and how much money we can spend on them. Other than that, the users are doing the driving.


The Vote that Really Counts

Out of the 300+ million people in this country, only a small percentage actually vote. How many people do you think vote for new features in software they use? The answer is fewer than you think. I cannot give you an exact number; however, I have counted aloud higher in the third grade. (I was actually amazed that a handful of users could help drive a product as enormous as SAM.)


So what's stopping you?

When you ask or vote for a new feature, your voice is heard. It really is! Most of the staff, including the Product Managers, hang out here on thwack. (Some of us even hang out here in our off time, just for fun.) This is the place that new ideas and features come from. Who better to improve the products than the people who use them on a daily basis? Think of your vote as a drop in a shot glass, as opposed to the ocean.


Cast your Vote! (No ID required)

Poke around this page, do a little reading and research, then think about features that would benefit you, and most likely others, the most. A good tip is to give a real world example as to how your feature request would benefit you and your monitoring environment. And hey, if your feature makes it in a future release, smile (sans smirk) knowing that you actually did make a difference.

Microsoft Windows Server Update Services (WSUS) provides a nominal amount of reporting information in its native console. This information is primarily related to the updates on the WSUS server and the clients that report to it. More precisely, the information you get from WSUS about its managed clients is limited to their patch compliance and specific data about their synchronization history and status. Basically, Microsoft WSUS provides three types of reports:

  • Update Reports
  • Computer Reports
  • Synchronization Reports


One thing users have noticed about these reports is that the Computer Reports are limited to update statuses for a specific computer or group of computers. That is, they lack any information about the computers' hardware or other attributes. This is not to say, however, that WSUS doesn't collect that information. The WSUS database (SUSDB) contains much more information about its managed clients than it actually displays. In order to access this information, you must either query the database directly (which Microsoft does not support), or purchase a product that uses the WSUS API to collect and parse the information.


Extending Microsoft WSUS for Detailed Reporting

There are currently two products on the market that extend the reporting functionality of Microsoft WSUS:


With these extensions of Microsoft WSUS, you get the following information in addition to the update-related information provided in the standard WSUS console:

  • Make
  • Model
  • Processor
  • BIOS version
  • Operating System (OS)
  • OS language
  • OS service pack
  • IP address
  • Monitor
  • Disks
  • Network adapters
  • Printers
  • Processors
  • Sound card
  • Video card


This significantly improves the reporting capabilities of your WSUS servers, and thus your WSUS patch management. Furthermore, with Patch Manager, you get the added capability to do the same update-level reporting on third-party updates. So you can check the update statuses on your managed clients even if you're patching Google Chrome. WSUS patching, simplified!


But Patch Manager doesn't just supplement your WSUS reporting with canned reports. Patch Manager displays the information it gets from WSUS in several customizable views, which can show you both the update statuses on individual computers and the computer statuses for individual updates. Add this to the extended reporting functionality discussed above, and you have a flexible, 360-degree view of everything you'll ever want to know about the computers in your publishing environment.


For additional information about how this patch management solution can extend your publishing environment, check out the Patch Manager product page.


VDI in a Nutshell

Posted by LokiR Oct 12, 2012

Have you ever felt how wasteful it is to spend money on 10 machines when you could spend the same money on one server and get the same capacity, or more?


Ever worked some place where you couldn't get a RAM upgrade, but the boss' system got retired every three years like clockwork? Or waited around for a couple of days for that new system you can see on the other guy's desk? Or worked on a machine that is five years past its prime while the new hire gets a shiny new machine with all the extras?


Have you misplaced your work laptop or phone with all sorts of proprietary information on it? Have you lost months of work because your computer is not part of the backup system?


Theoretically, VDI is here to solve these problems.


What is VDI


VDI, or virtual desktop infrastructure, allows desktop operating systems to be hosted on virtual machines on a centralized server. VDI is basically a return to the old mainframe and dumb terminal model. You use a dumb terminal or thin client to connect to a virtual desktop. The experience should essentially be the same as using a regular desktop computer. The virtual desktop is hosted on a server that has been carved into multiple desktops.


How does VDI Solve these Problems?


By hosting the desktop as a virtual machine, you reap all the benefits of virtualization. You can run more desktops on the server than you would be able to buy individually, let the virtual desktops share resources such as memory and hard disk space, reduce your energy consumption, and keep better backups.


Individual desktop virtual machines will last longer, are faster to deploy, and can stay more up-to-date. Increasing individual desktop capacity can be as easy as making a configuration change or upgrading the server. It can be a matter of minutes to deploy a new virtual desktop instead of waiting for a new system to ship to your organization and then waiting some more so that someone can set it up.


Your data is always as secure as the server hosting the virtual desktops. You no longer have to store data on unsecured devices. Moreover, because your data is on a centralized server, it is much easier to back up.


These are a few of the benefits of VDI. Now, what kind of problems will you be likely to experience with VDI?


Common Problems in VDI


You will have the same problems as you experience on your regular virtual infrastructure. Performance degradation will always be a trial, as well as the dreaded complaint, "my VM is too slow". Of course, in VDI a slow VM will have to be a higher priority since it could potentially affect multiple users on the same host.


Some applications will not work as well in a VDI environment. For example, image editing, animation, and video editing software won't run as well in VDI.


One important issue that you will be faced with is network connectivity. If the network goes down between the virtual desktop host and the rest of the company, or is otherwise "interrupted," no one with a virtual desktop will be able to work. If the outside connection goes down, off-site employees will no longer be able to work. This could have significant impact on your business, so if you were thinking of moving to VDI, or if you are already implementing VDI, you should probably invest in some network monitoring tools.


So those are the pros and cons of VDI in a nutshell. Do you have other VDI woes? Have you experienced different problems? Share them in the comments.

In part 1 we discussed the following points:

  • Identify server hardware failure – when, where and how
  • Determine if the applications running on the server are performing properly or creating issues
  • Ensure server security, high availability and performance to meet business service requirements

The other component to Server Monitoring is Hardware related.

Hard Disk Monitoring:

The hard disk is the device the server uses to store data.  The data stored is permanent (it survives a reboot, unlike RAM) and remains available until it is deliberately erased by the end user.

It’s important to monitor the hard disk for a couple of reasons: the operating system needs space on the disk for normal operating processes, including paging files and caches, and the applications running on the server also need space to write temporary data. Low free space on a drive is also one of the causes of file system fragmentation, which leads to severe performance issues.
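A minimal free-space check might look like the following Python sketch; the 10% threshold is an assumed rule of thumb, not a vendor recommendation:

```python
def low_disk_space(free_bytes, total_bytes, pct_threshold=10.0):
    """Flag a volume whose free space has fallen below the threshold
    percentage of its total capacity."""
    return 100.0 * free_bytes / total_bytes < pct_threshold
```

In practice you would run a check like this per volume, since the system drive and data drives often have very different headroom requirements.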

Server Hardware Monitoring:

A server has many hardware components that need monitoring. Server performance issues may be due to a malfunctioning or failing hardware component.

  • Monitoring CPU Fan

The fan draws heat away from the CPU by moving air across a heat sink to cool a particular component.  If the CPU fan fails, the server will eventually overheat, causing it to become unavailable. To prevent this from happening, you should monitor CPU fan speed. Monitoring historical fan RPM data is one way to keep watch for any sudden spikes in fan RPM.
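One way to express that spike check is a Python sketch like this; the 1.5x factor is an assumption chosen for illustration:

```python
def fan_rpm_spike(history, latest, factor=1.5):
    """Flag a reading well above the historical average fan speed.
    A sudden jump in RPM often means the fan is fighting rising heat."""
    baseline = sum(history) / len(history)
    return latest > factor * baseline
```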

  • Power Supply

A power supply unit (PSU) converts AC to low-voltage regulated DC power. To have visibility into its health, it’s important to monitor the amperage, voltage, and wattage of the power supply.

  • Temperature

This refers to the temperature of the system board or motherboard. Unusually high temperatures can cause permanent damage to the server and will adversely affect server performance. Safe working temperature limits can be obtained from the manufacturer; monitor the temperature to ensure it stays within that safe range.

  • Environmental Factors

Temperature, air flow, and humidity are important parameters to monitor. Problems with temperature may be a direct result of faulty A/C, improper air flow, or dangerous humidity levels. Other components to monitor include the CMOS battery, disk array health, chassis intrusion detection, and CPU hardware status.

Click here to learn more about the various server and application monitoring tools SolarWinds offers.

According to the Pew Research Center’s Project for Excellence in Journalism series, Future of Mobile News, half of all adults in the United States own either a smartphone or a tablet. And almost 70% of those people use their mobile devices to get the news.1 In another article in the same series, 56% of tablet users check the news on their tablets multiple times a day, mostly from 5:00 pm to 9:00 pm, and 56% of smartphone users get news on their phones multiple times a day, mostly from 8:00 am to 12:00 pm and 5:00 pm to 9:00 pm.2


Since August of this year, The YouTube Politics Channel has been streaming videos focusing on the 2012 US elections. Video sources include heavy-hitter news organizations such as:


  • The New York Times
  • Univision
  • BuzzFeed
  • The Wall Street Journal Live
  • ABC News
  • Aljazeera


Of course, you probably already know all about the numbers of mobile device users streaming news. Folks use the company network to stream video because it’s faster and cheaper than using their own data plans. And now that the U.S. elections are less than a month away, your bandwidth usage has probably increased even more.


Fortunately, the elections are only a few weeks away, so your bandwidth usage may lessen once those are over. But what do you do for bandwidth in the meantime?


Short of limiting access to news sites and YouTube, you could consider free and low-cost methods, such as the free tools embedded in the router OS to monitor and analyze network usage. For even more monitoring power, you could consider SolarWinds NetFlow Traffic Analyzer (NTA). This inexpensive, easy-to-use bandwidth monitor helps you determine which users and applications are using the most bandwidth so you can more effectively allocate resources. See the SolarWinds NetFlow Traffic Analyzer site for more information on the product and its capabilities, which include:


  • Monitoring network traffic, patterns and bandwidth down to the interface level
  • Identifying which users, applications, and protocols are consuming the most bandwidth
  • Highlighting the IP addresses of top talkers
  • Analyzing Cisco® NetFlow, Juniper® J-Flow, IPFIX, sFlow®, Huawei NetStream™ and other flow data
  • Typically deploying in less than an hour



1Future of Mobile News: The Explosion in Mobile Audiences and a Close Look at What it Means for News, Pew Research Center’s Project for Excellence in Journalism at

2Future of Mobile News: The Most Popular Times of Day Vary, Pew Research Center’s Project for Excellence in Journalism at

On 09 September 2012 Go Daddy and all the domains for which it provides DNS were down for five or more hours. Thousands of sites—some associated with businesses, and many the primary means of doing business—all went dark.


This protracted outage provides a good opportunity to discuss two related points of IT practice: risks of outsourcing DNS for your website(s) and the need for tight control on config updates for network devices.


What Happened?

In the name of the hacker group Anonymous, someone claimed responsibility for coordinating a distributed denial of service (DDOS) attack against Go Daddy. Since Go Daddy has been alone among ISPs in declaring support for internet-policing legislation (SOPA), one might think Anonymous would seek to make a political example of Go Daddy.


Indeed, if Anonymous insurgents or botnet drones launched High Orbit Ion Cannon software at Go Daddy’s DNS servers, it’s possible that the flood of traffic could overwhelm even an infrastructure that can handle 10 billion DNS queries a day.


Yet, before the outage on September 9th, Go Daddy already had reversed their position and opposed SOPA, seemingly removing the motive for hacktivist mischief. And in this case, review of relevant logs reflected no spikes.


Go Daddy itself claims that corruption in routing tables caused the outage. Assuming that’s true, let’s first look at the routing footprint to web servers. I’m located in the middle of the US. When I trace the route to I see it pass through datacenters in the DC metro, Atlanta, New Jersey, Dallas, Colorado, and Phoenix (where Go Daddy owns two large datacenters). The Godaddy DNS servers CN1.SECURESERVER.NET and CNS3.SECURESERVER.NET are located in New Jersey; and CN2.SECURESERVER.NET is located in Phoenix.


Some hops in the route to—probably those in the DC metro—involve ICANN root servers. But let’s say that at least 10 hops occur within Go Daddy’s network in the process of their name servers sorting the query and sending the user’s browser to the appropriate web server farm. And let’s keep in mind that, in terms of DNS, the same route pertains to any of the millions of websites that Go Daddy hosts on the more than 30 thousand servers in their datacenters, and the millions more for which it resolves DNS requests.


The point here is that even if the route to the web servers involved 10 different Go Daddy routing points, one expects that a routing table issue replicated on all affected devices could fairly easily be resolved within an hour - unless the IT team’s routing table repository is such a mess that it takes a protracted forensic analysis of recent config changes to figure out the source of the problem and how to fix it without causing other problems.


Covering Bases

Taking Go Daddy at their word, we confirm that an IT team must always know the history of the config changes made to network devices. And as part of creating a point of truth for that history, we must include an approval step in the process of changing network configurations.


We can also infer from the fact of Go Daddy’s extended downtime the importance of having a clear triage process for troubleshooting network events.


We are left with the question of the risks and benefits of outsourcing DNS to a big provider like Go Daddy. As I have discussed in a different article, architecting DNS for high availability requires multiple redundant name servers, preferably located in different datacenters. In that regard, Go Daddy in particular would seem to have that base covered.


Your alternative is to set up your own DNS and rely on Go Daddy or another registrar only to secure your use of the domain name. However, in this case, you must make sure that the registrar properly sets up the A record for your domain. In the case of the Go Daddy outage, at least one domain holder experienced downtime for his domain even though Go Daddy does not manage DNS for the domain. So the lesson in registering your domain is: always confirm that the registrar sets up DNS records for your domain the way you expect, and if possible devise tests to verify it.
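One simple such test is to resolve your domain and compare the answer against the address you expect. A hedged Python sketch; the `.invalid` hostname in the usage example is a reserved name guaranteed not to resolve, used only to show the failure path:

```python
import socket

def a_record_matches(domain, expected_ip):
    """Resolve the domain via the local resolver and compare the
    answer with the A record you expect the registrar to publish."""
    try:
        return socket.gethostbyname(domain) == expected_ip
    except socket.gaierror:
        return False  # domain does not resolve at all
```

Run a check like this from outside your own network as well, since a registrar misconfiguration may only be visible to resolvers that don't have your old records cached.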

What to look for in Server Monitoring

Let’s cover some basics of server monitoring: what you should look for to proactively monitor your servers, and the data you need to troubleshoot issues quickly.

Server performance monitoring is the process of automatically scanning servers on the network for irregularities or failures. These scans allow administrators to identify issues and fix unexpected problems before they impact end-user productivity.

Server and application monitoring will also help you determine if the issue that’s affecting your application is really a problem with the network or with a server, and can help identify a root cause.

Key Server Components to Monitor for Server Monitoring:

CPU Monitoring

  • CPU usage is critical for all the processes executed by the computer. There must always be some portion of CPU available to handle workload; CPU usage should never be 100%.

When CPU usage impacts server performance, you have to either upgrade the CPU hardware, add more CPUs, or shut down services that are hogging these critical resources. Charts and graphs can help you visualize CPU load over time to determine normal usage and match workloads to CPU capacity to save on power.
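As a sketch of one way to turn raw CPU samples into an alert, in Python; the 90% threshold and five-sample window are illustrative assumptions, not product defaults:

```python
def sustained_high_cpu(samples, threshold=90.0, window=5):
    """Alert only on sustained load: `window` consecutive samples above
    the threshold, so one momentary spike doesn't page anyone."""
    run = 0
    for pct in samples:
        run = run + 1 if pct > threshold else 0
        if run >= window:
            return True
    return False
```

Requiring consecutive samples is what distinguishes a genuinely pegged CPU from the brief spikes that any healthy server produces.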

RAM Monitoring

  • Random Access Memory (RAM) is a form of data storage. A server can load information required by certain applications into RAM for faster access, thereby improving the overall performance of the application.
  • RAM is solid-state memory with no moving parts and is dramatically faster than a hard disk, which has physical moving components.

If a server runs out of RAM, the operating system uses a portion of the hard drive as virtual memory, moving idle pages of memory out to that reserved space. This process is called swapping, and it degrades performance because the hard drive is much slower than RAM.

Swapping also contributes to file system fragmentation, which degrades overall server performance, so it’s important to have constant visibility into RAM usage. One way to keep ahead of rising RAM usage is to add more RAM, an economical way to boost server performance. For effective server monitoring, you should also track RAM utilization to determine which processes are causing spikes in usage.
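Finding which processes are behind a RAM spike boils down to ranking a snapshot of per-process memory. A toy Python sketch with made-up process names and figures; in a real deployment the numbers would come from your monitoring agent:

```python
def top_memory_consumers(process_mem_mb, n=3):
    """Return the n processes using the most memory, largest first.

    process_mem_mb maps process name to resident memory in MB.
    """
    return sorted(process_mem_mb.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical snapshot of a Windows server's processes.
snapshot = {"sqlservr": 4096, "w3wp": 1536, "explorer": 120, "notepad": 15}
print(top_memory_consumers(snapshot, n=2))
```

Comparing snapshots like this over time is what lets you tell a one-off spike from a process with a genuine leak.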


In Part 2 we'll discuss Hardware Aspects of Server performance monitoring.

When it comes to ice cream, some people like vanilla. Some people like chocolate. Some like strawberry. And in a pinch, some people will choose a Fig Newton. While figs are fine, they aren't ice cream. And if you want ice cream, a Fig Newton isn't going to fit the bill.

In the same way, if you're looking for a server and application management product, you should consider buying an actual server and application management product. Because if you're looking at WUG, you're looking at a "network management toolset" (their words, not mine). While they do provide some server and application monitoring capabilities, that is not the focus of their product. So if you're choosing between WUG and Server & Application Monitor (SAM), please take a look at the chart below, and then read up more on this page.



WUG even admits that they offer "entry level application monitoring" because that's not their focus with the product. It's not made for your use case, and ultimately, you're probably not going to find what you need in the application.


Here are our top five reasons to choose SAM, the right server monitoring tool:


1. Ease of set up and ongoing use.

2. Robust application support.

3. Expert Templates - know what counters are important, what thresholds are optimal, and how to fix issues in your environment.

4. Virtualization monitoring is included!

5. Enterprise scalability at all license levels.



Oh, and here's an extra bonus to consider: WUG's technical support hours are pretty thin. They are closed on US holidays, and you'll only find them in the office Monday and Wednesday through Friday, 9 a.m. to 6 p.m. EST. On Tuesday they sleep in a bit, as support doesn't open until 10:30 a.m. These hours can make it really rough on people on the West Coast, in Europe, or just about anywhere you could need help or something might break outside of banker's hours. In contrast, SolarWinds support is 24/7/365, because we know things tend to go wrong when it's most inconvenient, and we want to be there to help.


We invite you to compare and contrast, and do let us know your thoughts. Again, you can find out more and download SAM, the ideal server performance monitoring tool, on this page.

Being the consummate professionals we IT administrators are, we always want to do things the right way and the smart way, saving precious time and improving the efficiency of executing tasks. This is the reason why all of us (not just most of us) share our desktops remotely to simplify much of what we do at work. In this blog, we'll look at three of the most popular situations where we can leverage the utility of remote desktop sharing.


#1 Taking Control of the Remote Desktop for Remote Administration


Some of us use remote desktop to connect to end-users’ PCs and laptops for IT troubleshooting and support. Most of the IT administration tasks we do don’t really require our presence at an end-user’s desk. Remote desktop sharing provides a quick and easy way to perform the following IT administration tasks.


• Start, stop, and restart services, applications, and processes

• Log in to deploy or uninstall software

• Copy and delete files

• Add shares and reformat disk drives

• View and clear event logs

• Shut down or restart computers


Even when a system is unattended by the user or in power-saving/sleep mode, IT admins can use remote desktop connection to wake it and make it accessible within the network.
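The wake-up described above is typically implemented with Wake-on-LAN, which broadcasts a "magic packet" addressed to the sleeping machine's MAC address. Here's a minimal Python sketch, assuming the target's NIC and BIOS have Wake-on-LAN enabled; the MAC address shown is a placeholder:

```python
import socket

def build_magic_packet(mac):
    """A Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the local subnet (UDP port 9)."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Placeholder MAC for illustration only:
print(len(build_magic_packet("00:11:22:33:44:55")))  # 102
```

Tools like DameWare wrap this same mechanism in the console, but it's useful to know the packet itself is this simple.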



#2 Managing Role-based Access Privileges for Security


IT admins are also responsible for safeguarding user access to corporate computers. It’s cumbersome to log into each system individually and define access privileges. Why complicate things when you can just connect remotely to end-users’ systems to manage role-based access privileges, such as:


• Editing and modifying registry settings and system services

• Implementing access and logon policies

• Forcing encryption and policies

• Monitoring processes


Active Directory management and Group Policy configurations are also made simpler and quicker when done remotely.



#3 24x7 Remote Access to Data and Applications


Whether you are at the workplace, at home, or traveling, business must always go on. Computers and notebooks need to be accessed around the clock for sharing data and applications. When you are within the enterprise network (directly on the LAN or connecting via VPN), remote desktop sharing becomes a powerful means to share computer screens for running presentations, demos, and other business applications.


When you cannot email a huge file, you can just transfer it between the remote computers. Third-party remote desktop sharing tools such as DameWare Mini Remote Control allow you to take screenshots when working on a remote system. You can also chat with the end-user easily from within a centralized remote access console.


Remote desktop sharing enables us to quickly and easily address most of the common IT support tasks that would otherwise take additional time and effort that could be used for troubleshooting critical network and application problems.


Watch this 3-minute demo to see how DameWare Mini Remote Control, one of the most affordable and easiest to use remote assistance tools on the market, can help you to easily share desktops.

In a previous post I outlined the escalation process that your network monitoring system should be able to provide as part of efficient triage of multiple simultaneous events. With the right automated communications workflow, a small team of three could easily begin work on three different events within minutes.

Actually resolving the issues depends on their scope and the required fixes; reducing the time to resolution depends on what happens before problems occur.

Establishing a Point of Truth for Device Configurations


Let’s take one of the most common issues in IT management: an erroneous configuration is pushed to multiple devices, generating many connectivity and access alerts.

Let’s even assume that astute IT engineers quickly infer the cause of the problem. And better, the monitoring software automatically downloads the running configuration from each affected device as part of the alerting workflow.

Since the running config is presumably the one with a problem, downloading it is useful only if you also have a back-up of the previous configuration. If you scheduled nightly back-ups for all impacted devices, comparing the running config with the last nightly back-up should quickly tell you what changed, show you the problem within those changes, and tell you whether resolving it is as simple as pushing the backed-up configuration to the relevant devices or requires editing the back-up with selected non-erroneous lines from the running config.
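That compare step is essentially a diff between the two config files. A minimal Python sketch using the standard library's difflib, with invented config lines for illustration:

```python
import difflib

def config_diff(backup, running):
    """Unified diff between last night's backup and the running config."""
    return list(difflib.unified_diff(
        backup.splitlines(), running.splitlines(),
        fromfile="nightly-backup", tofile="running-config", lineterm=""))

# Hypothetical device configs; only the NTP server line differs.
backup = "hostname core-sw1\nntp server 10.0.0.1\nsnmp-server community public\n"
running = "hostname core-sw1\nntp server 10.9.9.9\nsnmp-server community public\n"
for line in config_diff(backup, running):
    print(line)
```

The `-`/`+` lines in the output isolate exactly what the erroneous push changed, which is the information you need to decide between rolling back wholesale and editing selectively.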

If you have the back-up and the running config downloaded at the time of alert, then the time to resolve the problem on all affected devices is dramatically reduced to some minutes (possibly as few as 10). Essentially, you have in the form of the two comparable configs a way of determining the point of truth for the config files in play.

If you do not have a recent back-up for each device, or you cannot compare the back-up to the erroneous config causing the current problem, then you are going to need considerably more time to verify what config will fully resolve the current crisis. As a start, you might reboot the devices and let them come up with their start-up configs; then work from the start-up configs to piece together appropriate additions that restore the devices to their functionality prior to the crisis.

SolarWinds Network Configuration Manager provides the alerting, scheduling, and config management tools needed to resolve this type of IT outage in the shortest possible time.


Components of VM Sprawl

Posted by LokiR Oct 10, 2012

I've talked about what VM sprawl is, but what about all the terms that were mentioned in the previous post, like zombies, orphans, rogues, and spawning?

Quick recap: VM sprawl is the proliferation of VMs in the virtual infrastructure that are unnecessary and frequently unauthorized or unused, thus consuming resources that are better used elsewhere.


Zombie VMs


The term "zombie VM" is inherently amusing and calls to mind images of a zombie VM chowing down on a healthy VM and infecting it with the zombie virus.




This is not far from what actually happens.


A zombie VM is basically a VM that has been left to rot. It is a VM that has been created but not deleted or removed when its purpose has ended. Zombie VMs take up resources that would otherwise go to VMs in use - such as CPU, memory, or storage - and can eventually slow other VMs on the same host. Since it is so easy to create a VM, people forget about the VMs they have made and - suddenly - zombies.
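Hunting zombies programmatically usually means flagging VMs with no recent activity. A simple Python sketch over hypothetical inventory data; in practice the last-activity timestamps would come from your hypervisor's metrics, and 30 idle days is an arbitrary cutoff:

```python
from datetime import datetime, timedelta

def find_zombie_candidates(vms, now, idle_days=30):
    """Flag VMs with no activity for `idle_days` as zombie candidates.

    vms maps VM name to its last-activity timestamp.
    """
    cutoff = now - timedelta(days=idle_days)
    return sorted(name for name, last_seen in vms.items() if last_seen < cutoff)

# Made-up inventory for illustration.
now = datetime(2012, 10, 10)
inventory = {
    "web-prod-01": datetime(2012, 10, 9),
    "test-sandbox": datetime(2012, 6, 1),   # untouched since June
    "old-build-vm": datetime(2012, 3, 15),  # untouched since March
}
print(find_zombie_candidates(inventory, now))
```

A report like this doesn't prove a VM is a zombie, but it gives you a short list to ask owners about before reclaiming resources.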




Orphaned VMs

If you search for "orphan VMs" online, you can find several definitions that can be boiled down to VMs whose data exists, but which are not found in the inventory. Orphaned VMs consume resources, usually just storage resources, and are not accounted for in the VM management system. This may occur if you have a linkage problem between the VM and host, or if you try to delete the VM manually instead of through your management console. If you delete the configuration file, for example, the management console may not know that the VM still exists.




Rogue VMs

Rogue VMs are unmanaged VMs on your network. They introduce a number of security concerns: they can be unpatched, run unauthorized software, or be riddled with malware. These VMs are generally deployed from the desktop instead of being a remnant of the VM creation/deletion process. Because they are often deployed from desktops, they either bypass VM policies or no VM creation policy exists (or is enforced) on the network.


For example, in an entirely hypothetical situation, if you needed to spin up a Gentoo VM, you might decide to use a downloaded image instead of spending time deploying and configuring the VM yourself. Because such images are unauthorized, they are more likely to be a security risk, either through passive vectors, like viruses, or through deliberate malice.



Spawning

Spawning is a term used when creating VMs, though in this context it usually refers to the unregulated creation of VMs. This is similar to other uses of the term "spawn," such as "spawn points" in gaming. Spawning may not be a problem in and of itself, but it is a problem when combined with zombies, orphans, and rogues. You will always create new VMs, but you should remember to remove them when they are no longer needed and to use authorized images.


So there you have a brief explanation of terms used in VM sprawl.

Most network monitoring solutions are going to use some combination of Simple Network Management Protocol (SNMP) and Internet Control Message Protocol (ICMP) to monitor your network. SNMP and ICMP are industry-standard technologies supported by pretty much anything you could care to monitor on a modern network. But if a significant portion of the devices on your network are running some variety of a Windows OS, you should consider implementing WMI for those Windows PCs and servers. This post will provide a brief overview of WMI and show how it can function as a complement to SNMP and ICMP polling.


What is WMI?

Windows Management Instrumentation (WMI) is a proprietary technology used to poll performance and management information from Windows‑based network devices, applications, and components. When used as an alternative to SNMP, WMI can provide much of the same monitoring and management data currently available with SNMP‑based polling with the addition of Windows‑specific communications and security features. Basically, if you've got any Windows PCs and servers on your network, WMI gives you the ability to manage them in accordance with industry standards, specifically the Common Information Model and Web-Based Enterprise Management.


So How Does WMI Help Me?

If you're managing any network equipment running a Windows-based OS, WMI gives you a Windows-specific, standardized framework for developing management scripts and applications. In other words, if you're comfortable working in the Windows domain, and the .NET Framework and COM/DCOM programming are old hat for you, you'll want to give WMI management a try. There is even a CLI available for the old-schoolers among you who prefer to manage equipment without all the visual bells and whistles your NMS gives you. Getting into implementation details would exceed the limits of this blog space, but you can find all the info you need by searching for WMI at Microsoft.
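To make WMI's query model concrete, here's a minimal Python sketch that composes WQL (the WMI Query Language) statements. Win32_Processor and Win32_Service are standard WMI classes; note the helper only builds the query strings, since actually executing them requires a Windows host, for example via PowerShell's Get-WmiObject or a WMI client library:

```python
def build_wql(wmi_class, properties=("*",), where=None):
    """Compose a WQL query string for the given WMI class.

    WQL is WMI's SQL-like query language; this helper only builds the
    string, so it can be used from any scripting environment that then
    hands the query to a WMI provider on a Windows host.
    """
    query = "SELECT {} FROM {}".format(", ".join(properties), wmi_class)
    if where:
        query += " WHERE {}".format(where)
    return query

print(build_wql("Win32_Processor", ("LoadPercentage",)))
print(build_wql("Win32_Service", ("Name", "State"), where="State = 'Stopped'"))
```

If you've written SQL, WQL should look immediately familiar, which is part of why WMI scripting has such a gentle learning curve.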


Does SolarWinds Work With WMI?

Yes, of course we do. If you've enabled WMI monitoring and management, SolarWinds Orion Network Performance Monitor (NPM), SolarWinds Server & Application Monitor (SAM), SolarWinds IP Address Manager (IPAM), and SolarWinds Orion Network Configuration Manager (NCM) are all capable of giving you insight into the Windows-based devices on your network, particularly if, for security reasons, you do not want or are unable to use SNMP for network monitoring.


When you install any SolarWinds Orion application, simply indicate that you want to monitor WMI-enabled devices on your network by providing appropriate credentials when prompted. For more information about configuring a SolarWinds Network Discovery to discover WMI-enabled devices, see "Network Discovery Using the Network Sonar Wizard" in the SolarWinds Orion NPM Administrator Guide.

Readers’ choice awards are the best kinds of awards to win. They take all of the suspicion out of the process, and just focus on the thing that matters the most: the users’ experience. That’s why we are so flattered that SolarWinds Log & Event Manager recently won the Readers’ Choice GOLD Award for the Best of SIEM 2012 from TechTarget’s SearchSecurity publication. We are humbled by our customers’ endorsement of this product, and excited that they loved it enough to tell the world about it.


Log & Event Manager is just another manifestation of our mission at SolarWinds to bring powerful IT management tools that solve specific problems at a fraction of the cost of competitive offerings. This log analysis and SIEM tool gives IT professionals the ability to find the proverbial needle in the haystack of logs and events to quickly – and even proactively – determine which issues are most pressing.


So, we’re excited that our users love SolarWinds Log & Event Manager, and hope that fact will encourage others to try it out as well! You can download our free 30-day trial today to see what all the buzz is about!

SNMP - It's not just your grandfather's protocol anymore!


Yes, SNMP has been around long enough for a few grandfathers to have used it. In version 1 there was not really a lot you could do: you could read the value of an OID, and set an OID if you really knew what you were doing. Version 2 added the ability to perform get-bulk requests rather than issuing strings of get-next requests. Using get-bulk was like being able to read a whole page of a book, where get-next was like having to ask for each word one at a time.


Version 3 added a very strong and flexible security mechanism, along with some other minor features. For a long time, version 3 seemed to scare people off because of the many options in the security models it uses. If you have been interested in SNMP v3 and want to know how it is implemented, see this Technical Reference.


If you want to know more about SNMP and how it works, see this educational paper.

The Affordable Care Act and Network Security – Are You Ready?


Worried about how the Affordable Care Act’s requirements are going to affect your network in 2014? January 2014 is when U.S. states are required to have Health Insurance Exchanges (HIEs) available for consumers to buy health insurance, and access to the HIEs is going to be online. Your networks will be supporting more confidential patient information than ever, and compliance will become even more important as thousands of new users become part of your network.


What About HIPAA?

And how does the Affordable Care Act affect the Health Insurance Portability and Accountability Act (HIPAA)? HIPAA governs the privacy of patient medical records, especially electronic records and communications. What does this all mean for you, the network administrator who works for a State Health and Human Services office, a health insurance company, or a medical office or hospital?


New Requirements Are Coming

According to the Report on Patient Privacy, an online journal for the healthcare industry, many new privacy and security requirements are likely to apply to the health plans, medical providers, and states involved in HIEs. HIPAA is unlikely to apply across organizations involved in HIEs, because they’re not necessarily sharing the same information HIPAA covers.


Policy-based Tools

How will you ensure your network meets these new requirements? What kind of monitoring and security will you use? Is what you already have in place good enough?

If you’re already working with policy-based network tools, you’re off to a great start. Policy-based network monitoring and security software is incredibly flexible, so you can customize what your network does or doesn’t allow. Policy-based tools also let you change your policies, so you can adapt to new requirements as they develop.


For a closer look at what you can do to enhance your network monitoring and security, take a look at what SolarWinds' IT Management software has to offer. There’s everything from log and event management to network monitoring and security, so you can be ready for whatever 2014 may bring.

September’s Information Security magazine has an article on mobile applications calling them a menace and a danger that needs to be watched. Mobile security was also a very hot topic at several technical conferences this summer. Why the hullabaloo? How does this affect your network?

Smartphone users download applications all the time without considering that there may be malware or other significant security risks. How many people in your company have a smartphone or tablet? I would estimate 85% or higher. Each of these users can create a security risk by downloading an application that is infected with malware, or one that quietly sends secure information back to the application's originator. This may seem James Bond-like, but it is the reality of today’s technology.

Deloitte came up with the Top 10 Mobile Threats:

  • Mobile device attack surface is narrow but deep
  • Mobile malware
  • Application (and subsequently data) proliferation
  • Device and data loss
  • Device and data ownership
  • Network communication channels
  • Immature security solutions
  • Less IT control
  • Exercising tight control has its downside
  • Lack of a formal strategy


Your IT department needs to be aware of these potential risks and plan for these potential threats to your corporate network.


Mobile Device Attack

Near Field Communication (NFC) allows smartphones or other devices (tablets, e-readers, etc.) to communicate with each other through radio waves, as long as they are within a close proximity. Say that you are walking through the mall with your work smartphone and you pass within range of someone else who has NFC enabled on their phone. They can force your phone to load malware or browse to a site that has malware or other security risks without any interaction on your part. You then come back to the office and sync up your phone with your laptop. If your IT department is not prepared for this issue, there is the small chance that you just infected your network without your knowledge.


Mobile Malware

Mobile malware is growing. According to Juniper Networks, between July and November 2011 there was a 472% increase in Android malware. Your IT department should provide a list of approved apps, or possibly prohibit the use of any third-party applications. Mobile Device Management (MDM) is also recommended; unfortunately, it adds cost to your IT overhead, but it can save you in the long run.
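Enforcing an approved-app list ultimately reduces to comparing a device's installed apps against an allowlist. A trivial Python sketch with invented app names; a real MDM product does this (and remediation) for you:

```python
def unapproved_apps(installed, approved):
    """Return the installed apps that are not on the approved list."""
    return sorted(set(installed) - set(approved))

# Hypothetical allowlist and device inventory.
approved = {"Mail", "Calendar", "VPN Client"}
installed = ["Mail", "FlashlightPro", "VPN Client", "FreeGameX"]
print(unapproved_apps(installed, approved))
```

Anything on the resulting list is a candidate for removal or at least a conversation with the device's owner.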

If all you manage are Microsoft updates in your patch management environment, your update approval procedure is pretty straightforward: identify the updates you want to approve, and then approve them for the appropriate target groups. However, if you manage third-party applications with an automated patch management tool, you're probably faced with a lot more choices. Why? Because Microsoft only releases updates for their software, while third-party vendors like Adobe, Mozilla, and Oracle release full installers. Furthermore, with an automated patch management solution, you probably have two options for each third-party update: full-install and update-only. This gives you the flexibility to approve updates for only the computers that already have the software, or approve the full installer for all computers in a group so you can install it if they don't. So, which do you approve: the full installer, just the update, or both?


Best Practices for Approving Full and Update-only Installer Packages

In most cases, when you're updating third-party software with an automated patch management solution, you'll run into at least two scenarios:


Scenario #1

You want to update only systems that already have the product installed. In this case, publish and approve the update-only package. You don't even have to publish the full-install package.


Scenario #2

You want to ensure ALL systems have the current version of the program installed. In this case, publish and approve the full-install package. You don't have to publish the update-only package, since the full-install package also updates systems that already have the software.


Depending on the solution you use, you might also run into a third scenario. This is where a patch management solution like SolarWinds Patch Manager can provide some additional flexibility beyond the standard scenarios:


Scenario #3

You want to update only systems that already have the product installed, but you also want to make the program available to other systems on demand. In this case:

  • Publish and approve the update-only package.
  • Publish, but do not approve the full-install package.
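The decision logic across all three scenarios can be sketched as a small helper. This is only an illustration; the action labels are my own shorthand, not Patch Manager terminology:

```python
def approval_plan(update_existing_only, allow_on_demand_install=False):
    """Map the three scenarios above to actions for each package type."""
    if update_existing_only and not allow_on_demand_install:
        # Scenario 1: only patch machines that already have the product.
        return {"update-only": "publish and approve", "full-install": "skip"}
    if update_existing_only and allow_on_demand_install:
        # Scenario 3: patch existing installs, keep installer on the
        # WSUS server for on-demand deployment.
        return {"update-only": "publish and approve", "full-install": "publish only"}
    # Scenario 2: make sure every system ends up with the current version.
    return {"update-only": "skip", "full-install": "publish and approve"}

print(approval_plan(update_existing_only=True, allow_on_demand_install=True))
```

Writing the rule down like this also makes it easy to document your approval policy for the rest of the team.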


SolarWinds Patch Manager allows you to deploy published updates and installers to managed clients on demand. In this scenario, you would have the full installer available on your WSUS server, and then use Patch Manager to deploy the software to specific computers when they need it. When you deploy the software, you can tell the Update Management Wizard to ignore the approval state on the target computer(s). That way, the Windows Update Agent installs the software even though it's not approved for those computers in WSUS.


For additional information about update-only and full-install packages, check out this Q & A article on the SolarWinds Knowledge Base: What's the difference between FULL and UPGRADE packages?


To learn more about Patch Manager,  the ideal patch management software, check out this video: Patch Manager Guided Tour - YouTube

You get an email from your boss. He's fresh from a conference, he's got all these new ideas, and now he wants to meet and talk about performance bottlenecks in the virtual infrastructure. He wants a report on all the bottlenecks and your plans to eliminate them in his inbox ASAP.


Now you have to explain in boss-speak what a performance bottleneck is, how "elimination" isn't exactly the term to use, and put together a report of your current bottlenecks and your plans to mitigate them.  Excellent!


What Is a Bottleneck?


Bottlenecks in this instance are resources that constrain the performance of your virtual infrastructure. For example, say you have 90% of your hard drive free and you're only running the OS, but the system is still slow. In this case, assuming no other problems with the VM, memory is the performance bottleneck.


In the VM world, we're mostly concerned with memory, disk, and CPU bottlenecks, which is why VMware performance monitoring is critical.


Symptoms of Bottlenecks


The first sign of a bottleneck usually comes from users, unless you spend your day monitoring your virtual infrastructure. Unfortunately, the symptom from a user's perspective is simply a slow VM, which could point to any number of issues.


If you take a look at the problematic VM, you should notice high activity in CPU, memory, or disk. If you use a management tool, look for high latency in those areas. High latency is probably the best indicator of a performance bottleneck from a VM management perspective.
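One way to operationalize "high latency is the best indicator" is to compare each resource's latency against a baseline threshold and pick the worst offender. An illustrative Python sketch; the numbers and thresholds are made up, and a real tool would derive baselines from historical data:

```python
def likely_bottleneck(latencies_ms, thresholds_ms):
    """Return the resource whose latency most exceeds its threshold,
    or None if everything is within bounds."""
    over = {r: latencies_ms[r] / thresholds_ms[r]
            for r in latencies_ms if latencies_ms[r] > thresholds_ms[r]}
    return max(over, key=over.get) if over else None

# Hypothetical per-resource latencies for one VM.
latencies = {"cpu": 5, "memory": 2, "disk": 48}
thresholds = {"cpu": 10, "memory": 5, "disk": 20}
print(likely_bottleneck(latencies, thresholds))  # disk
```

Ranking by how far each resource exceeds its own threshold, rather than by raw latency, keeps naturally slow resources like disk from always winning.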


Solving the Bottleneck Problem


In a long-term sense, you never really fix bottlenecks; you move them around. For example, let's say your current bottleneck is a memory issue. You add more memory and fix your performance bottleneck. But six months later, users have the same complaint again. This time when you investigate, it seems the disk is too slow for the demands of the newer OS and programs on the VMs.

Now you have a new performance bottleneck.


So, you can't "eliminate" bottlenecks, but you can prevent them.


If you use virtualization management software, like SolarWinds Virtualization Manager, for your VMware monitoring, you can watch your virtual infrastructure for performance bottlenecks and take preventive steps, or at least make a case for the extra resources. Most tools even come with reports that will show you, and your boss, what is likely to become a bottleneck or already is one. This makes everyone much happier in the long run.

In part 2 of this series, we discussed utilizing PowerShell with SAM templates and component monitors for monitoring your servers and applications. In this installment, we'll discuss the basics of PowerShell code and provide examples that you can use and modify for your monitoring environment.

PowerShell Code with SAM


Let's say we have a SAM template made up of various Exchange monitors to report email flow. The monitor we will use in this example is called, "Number of items received by specific user during last month." This monitor, when configured correctly, will simply report the number of emails a specific user received during the last month.


To edit the default script for this monitor, click the Edit button for that monitor within SAM. Below is the default PowerShell script for this component monitor. If you don't know anything about code, this will look a little scary. Don't panic. We'll cover what's going on here.


$ErrorActionPreference = "silentlycontinue";
add-pssnapin Microsoft.Exchange.Management.PowerShell.E2010;
add-pssnapin Microsoft.Exchange.Management.PowerShell.Admin;
$address = $args.get(0);
$server = $args.get(1);
if ( !$address ) {
    Write-Host "Message: Can't find 'user_mailbox' argument. Check documentation.";
    exit 1;
}
if ( !$server ) {
    Write-Host "Message: Can't find 'server' argument. Check documentation.";
    exit 1;
}
$t1 = Get-Date;
$t2 = $t1.AddMonths(-1);
$stat = (Get-MessageTrackingLog -Server $server -Recipients $address -EventID "Receive" -ResultSize "Unlimited" -Start $t2 -End $t1 | Measure-Object).Count;
if ($Error.Count -eq 0) {
    Write-Host "Message: User $address received: $stat items during last month";
    Write-Host "Statistic: $stat";
    Exit 0;
} else {
    Write-Host "Message: $($Error[0])";
    Exit 1;
}




Variables are used for storing information, much like the X in a simple algebraic equation. For example, X + 5 = 12. The variable X, in this case, represents, or stores, the number 7. As the name implies, variables change depending upon what they're asked to do. If the previous equation changes to X + 5 = 14, then the variable X becomes 9. In programming, variables represent more than simple numbers. They can store and represent just about anything, including text strings, times, email addresses, and so on. In SAM, variables are prefixed with "$", as highlighted below. The following code snippet from the above PowerShell code calculates a numerical value (number of emails received per month) and then stores it in the variable named $stat (short for statistic).


Using the code snippet above, the $stat variable's value is reported to SAM as 9356, as highlighted in the Statistic column's output below:


Let's look at the variables and see how they relate to the output SAM displays. The variables and the values they store are highlighted below in both the code and the output, respectively. Notice the variables change to show the actual values they store, be it a number or email address, when they are output to SAM's web console. The variable $stat reports 9356 in two places while the variable $address reports the email address.



Text and variables in code within quotes indicate information that is visible to the user. When made visible, the text will be displayed as it is written in the code. Variables will be replaced with the values they store, as demonstrated above. With these same lines of code, Message: and Statistic: refer to the columns where the information will be placed. So, the code highlighted below displays what you now see above.

Let's examine the first two lines of code above.

  • Message: is within the quotes, therefore it is displayed in the output as the column header.
  • User $address received: $stat items during the last month is also within quotes and to the right of Message:, so the text of the message and the values of the variables are displayed in the Message column, as shown above.


Scripts Must Report Status Through Exit Codes


Scripts must report their status by exiting with the appropriate exit code. The exit code is used to report the status of the monitor, which is seen by the user through the interface. The following table explains the exit codes and their values:



Exit Code: Monitor Status

0: Up

1: Down

2: Warning

3: Critical

Any other value: Unknown



The following code snippet highlights proper usage of exit codes.


The two exit codes in this example are conditional, meaning either one or the other will be triggered based on a certain outcome. When Exit 0; (status of Up) is reported, the message and statistic are displayed and the monitor shows a status of Up. When Exit 1; (status of Down) is reported, the message and statistic are not displayed and a status of Down is reported.


If you want to inform SolarWinds SAM that a PowerShell script reports an Up status, you would exit the script using Exit 0;

Scripts with Text Output


Scripts report additional details by sending text to the script’s standard output. SAM supports multiple values returned by a script using the following format. There is a limit of 10 Statistic and Message pairs for the script. These can be placed anywhere in the script output. The Statistic and Message names you give must contain valid letters and/or numbers.


Note: Each statistic and message output pair of your script requires a unique identifier. A maximum of 10 output pairs can be monitored per script.


Detail Type: Statistic

A numeric value used to determine how the monitor compares to its set thresholds. This must be an integer value (negative numbers are supported). For example:

  1. Statistic.Name1: 123
  2. Statistic.Name2: 456

Detail Type: Message

An error or information message to be displayed in the monitor status details. Note: Multi-line messages are supported. To use this functionality, print each line using a separate command. For example:

Message.Name1: abc

Message.Name2: def
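Putting the two detail types together, a script's output section might look like the following sketch (the identifiers 'Inbox' and 'Sent' and the values are illustrative; a real script would compute them):

```powershell
# Sketch: emit two statistic/message pairs in the format SAM parses.
# 'Inbox' and 'Sent' are hypothetical identifiers, not required names.
$inboxCount = 123
$sentCount  = 456

Write-Host "Statistic.Inbox: $inboxCount"
Write-Host "Message.Inbox: $inboxCount items in the inbox"
Write-Host "Statistic.Sent: $sentCount"
Write-Host "Message.Sent: $sentCount items sent"
Exit 0;   # report a status of Up
```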


For more information, refer to the following:

"Creating a Windows PowerShell Monitor".

For more information about Windows PowerShell, visit:

For a complete list of available PowerShell cmdlets, visit:

Most companies use contractors for strategic purposes within their development and support organizations. Even IT teams within companies of all sizes do so.

Part of getting a contractor integrated into the workflow of a team is making sure that all the usual communication channels are open. So the common practice is to issue the contractor a company laptop and set up a single sign-on account on the relevant domain controller.

Contractors tend to be highly skilled and versatile in what they can do within a LAN. And since many companies use a WAN to integrate LANs in geographically dispersed sites, the skilled contractor gains access to company resources at large, despite perhaps being specifically limited to project work at one site. And because even contracts lasting months or a year are temporary, the contractor by definition never has full allegiance to company interests.

In short, while contractors tend to be highly functional people with notable integrity, an IT team needs to address the structural risk presented by a non-company employee operating on the network with an employee’s ease of use.

Ways to Keep Track of Contractor Access

Since the contractor is a user within the domain, each login event is recorded. So viewing the event log provides a history of all systems on which the user logs in; generating a report that filters event log transactions would provide an overview for some specified duration (daily, weekly, monthly).
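As a sketch of what such a report could start from: successful logons are recorded in the Windows Security log as event ID 4624, so a query along these lines would pull a month of logon events for a contractor account (the account name 'jdoe' is a placeholder; reading the Security log requires administrative rights):

```powershell
# Hedged sketch: list the last month of logon events (event ID 4624) for a
# hypothetical contractor account 'jdoe' on the local system.
$since = (Get-Date).AddMonths(-1)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624; StartTime = $since } |
    Where-Object { $_.Message -match 'jdoe' } |
    Select-Object TimeCreated, MachineName |
    Format-Table -AutoSize
```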

Since the contractor always uses the same company laptop to access points within the domain, you can more carefully limit and watch user access to the network by creating a DHCP policy that requires the user’s laptop to re-lease an IP every 24 hours.

Often the contractor works out of a temporary workspace with the laptop connected through an IP phone. By monitoring the switch port passing through the IP phone you can see each instance that the laptop (MAC address) connects to the network.

All three of these ways of monitoring access are available in SolarWinds User Device Tracker (UDT). You can use UDT to create a watch list in which either the username or the laptop’s MAC address generates a notification upon login or connection.

Part one and part two of this series outline my take on the goals of today's NOC.

My top three NOC goals, in order of chronological importance, are:

  1. Network Management Visibility - You cannot manage what you cannot see.
  2. Network Fault Management - Dead devices trump slow devices.
  3. Today's topic!

Goal Three - Network Performance Management

How should we interpret this utterance: "Oh man! The network is so slow today!"? I think it roughly translates to "Something is making this really slow and I don't know what it is, and everyone always blames the network, so....". The truth is that the end user does not care at all what is causing the poor performance; they just want it fixed. The typical silo (AKA "cylinders of excellence") approach is to give the issue to the network department. If they cannot find an issue, it gets tossed over to the server group. If they come up empty, then the finger pointing begins. There are obvious problems with approaching network performance monitoring this way, but I think that all of the problems come from a systemic issue: the network is being viewed as a group of individual electron movers, when in fact the network is a system for completing business transactions.


Rethinking Network Systems

Network devices should not be thought of as machines, but rather as interconnected links in a business process chain. Now your network systems become hundreds of these chains, relying on their links and other process chains as well.


Here is my take on properly managing performance:

  1. Define your network enabled business processes and identify all the links.
  2. Prioritize each of the processes by the impact they have on your company's business line.
  3. Begin with the most critical process and create performance management groups containing the links. These groups should monitor machine level performance (CPU, memory, etc) and link utilization, synthetic traffic performance over involved links (IP SLA), and synthetic transactions mimicking a user experience.
  4. Set thresholds and alert triggers specific to the business process.

Now you will be managing the ability to operate your business. Instead of getting an alert that just says "SQL server 007 is at 100% CPU" you can add information about the actual impact of the business process (transaction time) and other issues in the chain.


While I have placed performance management as Goal 3 chronologically, I think it is the most important part of network management to show that you understand the business you are in and manage to business expectations.


For a quick look at the technologies I mentioned in action, see this site.

Everyone has dealt with some quality issues on calls. You connect and suddenly it sounds as if you're underwater or on an airfield. If you video conference a lot, you've probably seen other issues like combing, pixelation, or ghosting. Sometimes it can be pretty funny, but if you were on an important conference call, these kinds of issues could negatively affect the outcome.


For example, if one of your fellow employees was on a conference call that could make or break your organization, poor VoIP or video quality could negatively affect the outcome, especially if it was difficult to understand the other participants. Additionally, poor call quality negatively affects everyone who uses the VoIP network. Consistently not being able to understand what's happening on a call is frustrating and can reduce the desire to communicate via that vector.


What is QoS?


QoS stands for Quality of Service and refers to metrics that directly correlate to how well your network performs. In this context we're talking about the quality of your network and its translation to VoIP or video. QoS takes into account several factors such as packet loss, latency, jitter, and MOS.


Packet Loss

Packet Loss is a measure of information loss over your network connection. An example of packet loss is when words are garbled or missing from a conversation. Though packet loss is inevitable in any network, the goal is to identify where packets are lost in transmission so you can act to minimize information loss and maintain high QoS for your services.



Latency

In VoIP, latency is the difference in time between when one caller speaks and when the other caller hears what the first has said. An extreme example of latency is during calls to the space station. The long pause after someone asks a question is not the astronauts gathering their thoughts; it's a combination of waiting to hear the question, gathering their thoughts, and then waiting for their response to reach Earth. Excessive network latency can cause noticeable gaps and synchronization issues, particularly when VoIP is used with other types of data, as in a videoconference.



Jitter

Jitter is a measure of the variation in network latency that results in a loss of synchronization over time. In VoIP phone calls, users experience jitter as distracting noise, clicks, and pops.
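As a rough illustration of the idea (not any particular vendor's formula), jitter can be estimated as the average change in latency between consecutive packets; the sample data below is made up:

```powershell
# Sketch: estimate jitter as the mean absolute difference between
# consecutive one-way latency samples (milliseconds).
$latencies = 20, 22, 21, 35, 20   # hypothetical per-packet latency samples in ms
$diffs = for ($i = 1; $i -lt $latencies.Count; $i++) {
    [math]::Abs($latencies[$i] - $latencies[$i - 1])
}
$jitter = ($diffs | Measure-Object -Average).Average
"Estimated jitter: $jitter ms"
```

A steady stream (20, 20, 20, ...) would score near zero even if latency were high, which is why latency and jitter are tracked as separate metrics.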



MOS

MOS, or Mean Opinion Score, is an industry-standard measure of call quality expressed on a scale from 1 to 5, where 1 is extremely poor call quality and 5 is excellent call quality. MOS is primarily a user input, but some VoIP monitors can estimate MOS algorithmically. Most IT shops do not want their MOS to go below 3.5.


What do Monitors Do for Me?


Well, if you install a QoS monitor, like SolarWinds VoIP and Network Quality Manager, you can tell where the trouble is in the network when issues arise, and either troubleshoot the issue or make plans to modify or expand your network before disaster strikes. You could make recommendations to ameliorate network congestion based on what you know about the network and improve your QoS. Most QoS monitors come with alerts, so you can closely monitor your latency, packet loss, and jitter.

If you have not already started your IPv6 transition planning, now is a good time. Think of where your business will be in 3 years and how you'll interface with the rest of the world. This will provide a clearer picture of the action that is required. Once IPv4 addresses are depleted, you will still be able to communicate with all other IPv4 hosts out there, but you will not be able to reach IPv6-only hosts. This is what you want to avoid. Don't get too far behind in the planning phase of your inevitable v4-to-v6 transition.


The first thing you should do when conducting an IPv6 readiness audit is make an inventory of your network. Include all of your systems and document any mystery devices you find. Inventory all devices, including firewalls, routers, switches, and load balancers. Your systems team will need to audit server operating systems and key infrastructure applications like DNS and e-mail. Application owners will likely need to audit each of their applications, both the transport layers and any user interfaces where IP addresses are entered. For your DBAs, include any place where an IP address is referenced. This is by no means a complete list, but you get the picture.


If you are currently implementing Windows 7 or Server 2008/2008 R2, then remember you are already adding IPv6 into your IT environment by default. Consequently, a remediation plan is necessary to maintain control of your infrastructure until you are ready to fully enable IPv6.



SolarWinds IP Address Manager can help you coordinate your migration to IPv6 by providing the ability to add IPv6 sites and subnets for planning purposes. IPv6 addresses can then be grouped to assist with network organization. Use the discovery tool to start your inventory task. Know what is out there.

To obtain a better understanding of the current state of IPv6 readiness with SolarWinds products, see this product blog.

To test IPv6 support in your current browser, click here.

In a previous post, I outlined a couple of steps you should take to ensure that you are best protected against "apocalyptic disaster" on your network. In short:

  1. Get network management software (NMS) to keep track of network health. SolarWinds NPM network monitor is a great place to start.
  2. Since that NMS is probably feeding all your network performance data to a database of some sort, create backups of that NMS database as a regular practice. This is primarily what that previous post was about.
  3. Consider a failover and disaster recovery solution to ensure the highest possible availability of your NMS. What would such a consideration entail? Let's answer that here.


First, What is NMS High Availability?

When you want to monitor network performance or uptime on your network, the depth and quality of your analysis can never be any better than the continuity of your data. An NMS that isn't "always up" isn't going to provide you with anything approximating continuous data. A high-availability NMS provides the continuous data you really need. To put it in another context, security cameras are only of use if they're on and recording. Security guards are only good if they're present and awake. You don't even get the dummy effect working in your favor if your NMS isn't up and monitoring.


So you need to have your NMS up, polling, and writing to your database to get any real benefit out of it. The more it is up and the less it is down, the better.


So, How Do I Maintain High Availability? Failover Protection

It's inevitable that you're going to need to take your NMS server down for maintenance from time to time, and, as we've been discussing, it's even possible that it might "fall down" all on its own. These are cases when you really need a failover solution. A good failover solution effectively provides a backup NMS to fill in when your primary NMS can't get the job done. Ideally, the transition from primary NMS server to backup NMS server is seamless: your primary NMS server goes down and your backup NMS server picks up polling and writing. Hopefully, this happens instantaneously, so you don't miss anything.


Failover protection is, of course, a bit more complicated in the technical details, but you can find more detailed information about it in the section "Orion Failover Engine Concepts" of the SolarWinds Failover Engine Administrator Guide.


In a future post, I'll talk about what a good disaster recovery solution can do for you after an apocalyptic network disaster.

End-to-end performance monitoring sounds great!  You have a handle on all of your applications and supporting resources, so you can quickly figure out which resource or component is causing the application performance issue.  Now, let’s implement that vision.  This could be tricky if you are not a subject matter expert in all areas.  Sure, you have a handle on monitoring server performance – CPU, memory, I/O, and the like – but what does it take to monitor Exchange or DB2 or Java?  Lucky for you, SolarWinds has a new resource in the Server & Application Monitor online demo that defines, for each application, what you should monitor, why, and what the metric value should be for effective server monitoring.  For example, is fragmentation above 10% good or bad, and what causes fragmentation to begin with?  Check out this link for more details on this server monitor tool and how to use this new server performance monitoring resource!



WANs link our sites together and connect us to the internet. They’re usually the most expensive part of networking. So we’re always trying to increase their performance and make the most of our investment.


Bandwidth will always be limited, compared to what you need. Just managing data is not enough – you need to enhance your technology to get more from your bandwidth. Begin by looking at available free tools that can help you squeeze more value out of your WAN.


IP SLA Tools

Cisco and Juniper, for example, have free tools in their routers that can help you monitor the network. Cisco encodes an Internet Protocol Service Level Agreement (IP SLA) capability into its IOS. IP SLA enables you to review an SLA for an IP application or service. This means you can use IP SLA to validate service guarantees, monitor the network and any issues, and confirm network performance.

Connectivity Tools

By using another free tool like Wireshark, you can determine if your WAN is sending and receiving data packets. If your service is supposed to have 99 or 100% uptime, but you can only send and receive data packets 85% of the time, you may need to contact your service provider and find out what’s going on at their end.


Quality of Service Tools

Enabling Class-Based Quality of Service (CBQoS) on your router lets you check the quality of your data and determine how usable your network data really is. This is especially important for voice data, as in the cases of Voice over IP (VoIP) and video conferencing. High-quality data means clear voice and video transmissions.


If these tools aren’t enough for you, consider buying software designed especially to help you get the most out of your WAN. Software is almost always cheaper than hardware, and the market offers some really effective network software tools. For more information on getting the most out of your WAN, check out the SolarWinds WAN Analysis Best Practices webcast. This helpful video contains lots of useful information on how to keep costs down while keeping your network running at peak performance.


VoIP and SIP

Posted by LokiR Oct 2, 2012

This is a basic overview of SIP in a VoIP context, so it may be too generalized for VoIP experts. You have been warned.


What is SIP?


SIP stands for Session Initiation Protocol, but what does that mean?


SIP is used to establish a connection between two or more phones, or endpoints. The protocol is primarily used with voice calls, but it can also be used to set up multimedia sessions, such as video conferences, and it can run over both IPv4 and IPv6.


How SIP Works

To show how SIP works, let's back up and talk about VoIP networking. When you place a call, you are not immediately connected to the other person's phone, or endpoint. A SIP request is sent first to establish a connection to the other endpoint. A different protocol transmits your conversation over the wire.


When I hit send on my phone, there is a delay before the phone starts ringing. That delay is SIP establishing a connection to the other endpoint. This is similar to the way HTTP works. If you go to a website that you've never visited before, there's a slight delay before it starts loading. That's because the browser is trying to establish a connection with the server. If you've never seen that pause, try going to a website that is outside of your country or across an ocean.


Like HTTP, SIP sends a request, waits for a response, and then makes a decision based on the response. For example, SIP could say, "This endpoint is available; let the call ring through," or SIP could say, "This endpoint is disconnected; read the disconnect message," based on different responses it receives from the other phone. This is similar to HTTP requests and 404, 403, or 200 codes.
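As an illustration of that request/response exchange (addresses, tags, and identifiers below are hypothetical, and the headers are abbreviated), a successful call setup looks much like an HTTP request followed by status responses:

```
INVITE sip:bob@example.com SIP/2.0         <- caller asks to set up a session
From: <sip:alice@example.com>;tag=1928301774
To: <sip:bob@example.com>
Call-ID: a84b4c76e66710@pc33.example.com
CSeq: 314159 INVITE

SIP/2.0 180 Ringing                        <- the far endpoint is being alerted
SIP/2.0 200 OK                             <- call accepted; media can flow
```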


If you have established your call and then decide to conference in another person, you use SIP again to establish a connection to the new phone.


SIP and codecs

Because SIP establishes phone or endpoint connections, it makes sense that SIP would also determine the quality level each endpoint is capable of and then choose the appropriate level for the call.


You may have noticed this, but when you connect to another phone, the quality of your call is determined by the quality of the other phone. So if the other phone can't handle a lot of data at one time, then your data is compressed to the appropriate level and sent to the other phone. A common negotiation between two phones is agreeing on a codec to use. When transmitting data, the data is compressed/coded at one endpoint and decompressed/decoded on the other endpoint. Selecting a codec that both endpoints can use is key to actually understanding the conversations you have.


SIP Proxies

While phones can connect directly through SIP, the devices generally route traffic through a SIP proxy, and the proxies take care of the SIP traffic. Say you're at a large VoIP site and you want to call a coworker via their extension. The request is probably routed to one or more SIP proxies, assuming you're using SIP on that network. The proxy takes care of appropriately translating the extension to a SIP URI and connecting the two endpoints.


SIP Registration

Going hand-in-hand with SIP proxies is SIP registration. Similar to a DNS server, the SIP registrar acts as a location service for devices using SIP. Phones can register their URI to reduce the time it takes to establish a connection between two phones. The registration server is frequently co-located with SIP proxies.


As with most things, especially networking protocols, there is more to SIP than this, but that is for another post.

You have 100 routers in your network. For simplicity, let’s say the gear is all Cisco and running the same version of IOS. If each router serves as a gateway to the others, successfully ferrying packets among many edge nodes distributed throughout the various subnets, your routing table already covers all the relevant route permutations.


Today you need to turn up a new subnet on the network. Besides readying the new device(s), you need to modify the routing table on all the other routers to include a new route.


A TFTP server would help only if all devices targeted could download the same config file. In this case, each ip route statement will be a little different for each device.


Do you have the time to log in to each router to update the table? You might—if you devote the entire day to this numbing task.


Automating Trivial Configuration Changes

Essentially, you need a scripting tool that supports octet replacements based on a defined pattern, which is what version 7.1 of SolarWinds Network Configuration Manager offers with the new setoctet function, available when creating Config Change Templates.


With this function you can create a script to perform programmatic substitutions for a specific octet in an IP address, so that the value of the next-hop gateway in the new route statement—ip route {network} {mask} {gateway}—can be appropriately altered.
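The effect of that kind of substitution can be sketched outside NCM as well. Assuming, purely for illustration, that each device's next-hop gateway differs only in its third octet, generating the per-device CLI statements looks like this (all addresses here are made up):

```powershell
# Sketch: build a per-device 'ip route' statement by substituting the third
# octet of a gateway template. Network, mask, and template are illustrative.
$network  = '10.50.0.0'
$mask     = '255.255.0.0'
$template = '172.16.{0}.1'   # third octet varies per device

$routes = 1..3 | ForEach-Object {
    $gateway = $template -f $_
    "ip route $network $mask $gateway"
}
$routes
```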


A web-based template fronts this script and allows you or a member of your team to input the range of devices for which you want to target the change. For each targeted device the output is a specific set of CLI command statements. You confirm that the CLI statements include the ip route statement that will update the routing table for the appropriately targeted device. When you have verified the matches (CLI statements/device) you click a button to schedule all the necessary jobs that accomplish the routing table update across the devices targeted for the change.


For detailed information on creating and using Config Change Templates in NCM see this chapter of the NCM administrator guide. To explore the workflow of the Config Change Template feature see this live demo.

Network administrators are constantly faced with the challenge of ensuring network and infrastructure availability and performance at all times. Outage, downtime, latency, and faults on network devices will significantly affect business-critical applications and ultimately the bottom line. As network administrators, it is critical that you have the following information available at all times:


  1. Network Availability - by monitoring the up/down status of network nodes, and analyzing real-time and in-depth network availability statistics, you can quickly view the availability of your core IT services and data center
  2. Network Fault - identify network faults by monitoring statistics such as bandwidth utilization, packet loss, latency, errors, discards, quality of service, disk space, CPU load, and memory utilization
  3. Network Performance - identify and analyze performance bottlenecks by monitoring various performance metrics, counters, and statistics over time using device-critical information such as resource utilization, network traffic, throughput, etc.



Through the use of automated tools, network administrators can simplify the collection of this critical information for more effective network management. When assessing a network availability, fault, and performance monitoring tool, you will want to ensure that it provides the following:


  1. Intelligent Network Alerting – create and send alerts to respond to different network scenarios.  Look for the ability to define device dependencies; configure alerts for correlated events, sustained conditions, and complex combinations of device states; and escalate through a variety of delivery methods.
  2. Customizable Reporting – out-of-the-box and customizable reports that can be automated and exported.
  3. Intuitive Dashboards - view performance metrics in easy-to-understand charts and graphs, that allow you to drill-down and navigate to the root cause of the issue, and customize views to focus on highlighted issues that cross predefined thresholds.




SolarWinds Network Performance Monitor (NPM) is best-of-breed network monitoring software that integrates these three elements and offers a single, unified, intuitive web console from which you can monitor your network nodes, drill down to analyze issues, and be alerted if any performance metric is behaving differently than expected. With the ability to be alerted and notified of issues and warnings, you are in a better position to provide both strategic and tactical solutions to your network issues quickly and effectively.


Network Node Availability.png

               SolarWinds NPM showing Network Node Availability Stats

Network Interface Availability.png

                         SolarWinds NPM showing Network Interface Availability Stats for each Node         

Top 10 Nodes.png

                         SolarWinds NPM showing the Top 10 Nodes view

Node Details.png

        SolarWinds NPM showing performance metrics and charts on the performance of a router 

Download a free fully functional 30-day trial of SolarWinds Network Performance Monitor.             

I have recently written about two major reasons to keep your certificate stores clean:

  • Microsoft KB2661254 invalidates certificates with a key length of 1024 bits or smaller on all supported Windows systems.
  • Microsoft's algorithm for searching and scanning certificates in the Trusted Root Certification Authorities store fails if the store contains more than 200 certificates.


From an even broader perspective, you should keep your certificate stores clean in the same manner you limit the software installed on your systems. The same best practice applies to both scenarios: If you're not using it or don't know what it is, get rid of it!


That said, consider "clean" certificate stores as being those free of any outdated, unwanted, or unneeded certificates. Outdated or unnecessary certificates can cause a lot of problems for SysAdmins. And the maintenance needs to happen both on the CA and the application hosts. Both of the reasons mentioned above can cause your applications or websites to fail; and if your customers (be they external or internal) can't access the tools they need to do business, no one in the situation will be happy.


More About Clean Certificate Stores

The first reason I mentioned is the most pressing for the broadest audience. Currently, KB2661254 is only available for manual download in the Microsoft Download Center; but when Microsoft releases the update to Microsoft Update this month, things are liable to break if you don't plan ahead. The reason I say this is the update is classified as Critical, which means a vast majority of Windows systems out there will apply it automatically, and those systems will no longer be able to interface with websites or applications with weak certificates.


The other reason admittedly has a narrower scope, but it's important nevertheless. My colleagues and I have looked for documentation on why Microsoft's algorithm fails when certificate stores reach a certain capacity, but the best explanation we can come up with is they never expected the stores to get so big. Compare the certificate stores on your Windows 7 machines, for example, to those on XP systems, and you'll see the former are a lot bigger. That's because Microsoft (and others) are constantly issuing new certificates, but few organizations (much less the issuers themselves) have a solid plan for ongoing certificate management. So what you end up with is certificate stores full of expired, replaced, or mystery certificates.


Recommendations for Cleanup

The recommendation for addressing the first reason is pretty straightforward: replace any certificates with a key length of 1024 bits or less with a stronger certificate ASAP. If you can't do that this month, and you have the necessary level of control over the computers that rely on those certificates, make sure those computers are not configured to automatically deploy KB2661254 when it goes live.


As for the second reason, we recommend reducing your certificate stores to about 180 certificates or less, just to be on the safe side. As you consider what certificates to remove, think of the following as "safe to delete":

  • Expired certificates
  • Unknown foreign certificates
  • Certificates with a key length of 1024 bits or smaller
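As a starting point for that review on a single Windows machine, a sketch like the following lists Trusted Root certificates that match the first and third criteria; it is read-only, and the output will vary by system (run in an elevated PowerShell session):

```powershell
# Sketch: flag Trusted Root certificates that are expired or use a key of
# 1024 bits or less. Listing only; any deletion is left as a manual step.
Get-ChildItem Cert:\LocalMachine\Root | Where-Object {
    $_.NotAfter -lt (Get-Date) -or
    $_.PublicKey.Key.KeySize -le 1024
} | Select-Object Subject, NotAfter, @{ n = 'KeySize'; e = { $_.PublicKey.Key.KeySize } }
```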


Managing the Cleanup

One of our products, SolarWinds Patch Manager, can help with both of these tasks, albeit indirectly. First, if you need to postpone the enterprise-wide deployment of KB2661254, Patch Manager can help you do that as an extension of Microsoft WSUS. Second, with the WMI inventory and reporting functionality within Patch Manager, you can view the certificates on all of your managed systems, and then decide what certificates to delete at an enterprise-wide level.


Whether you use Patch Manager or not, keep those certificate stores clean. Your servers and applications depend on it.

Today it is not uncommon for organizations to deploy a variety of operating systems across departments.  Some of the most common reasons cited for this trend are that certain operating systems are better at performing specific tasks and that users more familiar with one type of OS should be allowed to use it at work for the sake of productivity.  The proliferation of Mac and Linux in the workplace coupled with the growing BYOD trend poses a unique challenge to IT administration teams.  When it comes to supporting a mixed-OS environment remotely, this challenge becomes even larger.


The 3 most widely used operating systems today are Windows, Mac and Linux.

• Windows, as we know, is present in almost every organization in all sectors and sizes

• Mac OS is becoming popular with creative and art departments

• Linux in its various flavors has become a staple in IT departments as a server OS


Here are some interesting OS platform statistics that show the market share of operating systems and the year-over-year growth of Linux and Mac in the industry.



Providing effective remote support to end-users means that help desk technicians must:

• Be able to remotely control desktops running any of the three most widely used operating systems

• Be able to remotely perform administration tasks in order to troubleshoot services, apps and processes on Windows (as this is still the most commonly used operating system in businesses and other organizations)



To accomplish this, it is important that IT departments provide their technicians and system administrators with easy-to-use and comprehensive remote support tools.



The most common remote control tool for Windows is the Remote Desktop Protocol (RDP).  Though RDP is easy to use and is included in all Windows operating systems since XP, it has limited functionality.  When delivering remote support via RDP, techs and end-users are unable to share a screen, making it difficult for techs to witness first-hand what end-users are experiencing.  Similarly, the most common remote control tool for Mac and Linux operating systems is VNC software, another free tool.  Though VNC for Linux and Mac gives end-users and techs the ability to share a screen, it lacks other important functions like file transfer, in-session chat, and one-click screenshots.  Neither of these tools gives help desk techs and system administrators the features needed to perform Windows administration tasks remotely without taking full control of an end-user’s desktop.


DameWare Remote Support from SolarWinds wraps all of these remote control functions into one easy-to-use console.  It allows techs and sys admins to remotely control computers with RDP, VNC, and DameWare’s own Mini Remote Control Viewer.


In addition to these features, DRS gives techs powerful tools needed to perform Windows administration tasks remotely without having to take full control of end-users computers.  Some of these Windows administration tasks include:

• Remotely rebooting computers

• Starting, stopping and restarting services

• Viewing and clearing event logs

• Adding shares, moving files and reformatting disk drives

• Changing configuration settings
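DRS exposes these tasks through its console, but the same kind of agentless administration can be illustrated with native Windows PowerShell. The sketch below restarts a service, inspects and clears an event log, and reboots a remote machine without any remote control session; the computer name HELPDESK-PC42 is hypothetical, and the commands assume Windows PowerShell 5.1 with admin rights on the target.

```powershell
# Sketch only: remote administration without a remote control session.
# 'HELPDESK-PC42' is a hypothetical computer name; admin rights assumed.
$computer = 'HELPDESK-PC42'

# Restart a stuck Print Spooler service on the remote machine
$svc = Get-Service -Name Spooler -ComputerName $computer
$svc.Stop()
$svc.WaitForStatus('Stopped', (New-TimeSpan -Seconds 30))
$svc.Start()

# View the ten newest Application event log entries, then clear the log
Get-EventLog -LogName Application -Newest 10 -ComputerName $computer
Clear-EventLog -LogName Application -ComputerName $computer

# Reboot the computer remotely
Restart-Computer -ComputerName $computer -Force
```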



DRS also includes tools that allow techs to manage multiple Active Directory domains and perform basic functions in Microsoft Exchange 2003-2010 from its intuitive console.



Watch this quick one-minute demo to see how DameWare Mini Remote Control, which is available out-of-the-box within DameWare Remote Support, lets you remotely control and share screens across Windows, Mac, and Linux operating systems.

In part one of this series, we discussed the basics of PowerShell. In part two, we'll discuss how you can incorporate PowerShell with SAM to build a more effective monitoring solution.


PowerShell Templates and Monitors

Many SAM templates contain component monitors that allow for the use of PowerShell scripts. An easy way to find a list of these templates is to navigate to the Manage Application Monitor Templates page and search for the word "PowerShell." This can be done from the SAM web console by navigating to Settings > SAM Settings > Manage Templates. The search text box is at the top-right of the screen.


Below is a sample list of the templates found when "PowerShell" is searched. To examine and edit a template, check the box next to the template name. Once a template is checked, the Edit button will become enabled. Click Edit at the top of the list to open the selected template.

PowerShell Template Search View

The PowerShell Template

In this example, the Exchange 2007-2010 Mailbox Send and Receive Statistics with PowerShell template is used. This template tracks Exchange Mailbox Send/Receive statistics of Exchange 2007-2010 servers with the Mailbox role using PowerShell scripts. The following screen appears once you have selected a template to edit, revealing the individual component monitors as well as details about the template:


The documentation accompanying this template lists the following requirements:



• PowerShell 2.0 and Exchange Management Tools 2007 or 2010 installed on the SAM server.

• The Exchange server must have an Exchange Mailbox role.

• The SAM server and the Exchange server must be in the same domain.


The credentials must be an Exchange Administrator (Organization Manager) account with at least view-only permissions.

Note: Before using this template, expand the Advanced tree [+] and set the correct platform, either 32-bit or 64-bit, from the dropdown menu. The default is set to 32-bit.

For all PowerShell component monitors: You must specify the correct name of your Exchange user and server in the Script Arguments field of the corresponding PowerShell monitor. If you fail to do this, the counter will return an "Undefined" status error.

For example: If the name of your Exchange server is server.domain.sw and the user you want to monitor is some.user@domain.sw, the value in the Script Arguments field should be: some.user@domain.sw,server.domain.sw

To see the names of your Exchange servers, run the following PowerShell command in the Exchange Management Shell: Get-ExchangeServer

To see the names of the users, run the following PowerShell command in Exchange Management Shell: Get-Mailbox
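Run together in the Exchange Management Shell, these two cmdlets give you both halves of the Script Arguments value; the Select-Object properties shown here are just a convenient way to narrow the output on Exchange 2007-2010 objects.

```powershell
# Run inside the Exchange Management Shell on the SAM server.

# Lists server names (the second part of the Script Arguments value)
Get-ExchangeServer | Select-Object Name, ServerRole

# Lists mailbox users (the first part of the Script Arguments value)
Get-Mailbox | Select-Object Name, PrimarySmtpAddress
```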

The Component Monitor

To examine and edit an individual PowerShell component monitor within the template, click the plus sign [+] to the left of the monitor.


For example: Number of items received by specific user during last month.

The following details about the selected component monitor are revealed:


Using a PowerShell script, the monitor in this example is designed to return the number of items received by a specific user during the last month. In order to use this monitor, you will need to change the Script Arguments field from the default example of user@domain.sw,server.domain.sw to values that suit your particular environment. You can do this by clicking the Edit button (highlighted above). You also have the ability to alter the pre-defined script that comes with PowerShell component monitors.
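The template ships with its own pre-defined script, but as a rough sketch of the kind of query such a monitor runs, the example below counts delivered items for one mailbox over the last month using Get-MessageTrackingLog. The argument values and the Statistic/Message output lines that SAM script monitors read are assumptions here, not the template's actual code.

```powershell
# Hypothetical sketch, NOT the template's pre-defined script.
# Arguments mirror the Script Arguments field: user, then server.
param(
    [string]$Mailbox = 'user@domain.sw',
    [string]$Server  = 'server.domain.sw'
)

$start = (Get-Date).AddMonths(-1)

# Count messages delivered to the mailbox during the last month
$count = (Get-MessageTrackingLog -Server $Server -EventId 'DELIVER' `
              -Recipients $Mailbox -Start $start -ResultSize Unlimited |
          Measure-Object).Count

# SAM script monitors read results from output lines in this form
Write-Host "Statistic: $count"
Write-Host "Message: Items received by $Mailbox since $start"
```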


Note: Unless otherwise directed by the documentation, you should not need to edit pre-defined scripts.


Once you have changed the Script Arguments field, click Submit to begin using the component monitor within the template. The output for this script using the SAM monitor, Number of items received by specific user during last month, should be similar to the following illustration:


The output for the script using only PowerShell should be similar to the following illustration:


In the next part of this series, we'll discuss PowerShell code with SAM.
