Get Off of Our Cloud?

Posted by docwhite Nov 30, 2012

In a previous post I discussed AES as the only encryption scheme that protects data from snoops. In this post I want to report on a possible security hole in the virtual machinery of cloud computing.


Let's first recall that scads of user data sit in the warehouses hooked up to those clouds. Personal information kept in files of the common media types—text, pictures, audio, video—that 10 years ago might have been sitting on desktops or in removable storage now forms the Big Data sitting out in expanding bit-cumuli.


Why that shift?


Online services that promote social interactions spur us to accumulate our data in their warehouses. Those photo streams on Flickr, videos on YouTube, random stuff dropped into Dropbox, tweets on Twitter, blogs on Blogger, and timelines of Facebook activity are all parts of a Social Web for which cloud computing became an infrastructural answer. And free hosting, content tagging, RSS feeds, and other technologies that remove the burden of cost and expertise from the media-making user help drive a spiral of online social activity.



Virtualization is the scaling technology that makes cloud computing cost-effective for businesses of the Social Web. With many instances of the same operating system software tied to the same hardware resources, businesses are able to contain the cost of indefinitely holding user data as advertising and other revenue grows.


While it’s great for scaling data-intensive online services, virtualization raises the stakes of securing different user data during runtime sharing of the same hardware. Users like the ease of online socializing as much as they dislike the possibility that their data is vulnerable to snooping, misuse, and theft.


And that is why the results of a recent RSA experiment—even if they fall short of proving a credible threat to the security of data within a cloud—could stoke public worry.


Using a side-channel attack—executing priority requests for processor time with obnoxious frequency—researchers were able to gain access to data shared in memory by applications working for users on another virtual machine, ultimately deciphering encryption keys by watching the calculations involved in handshakes. In short, software running under a user in one virtual machine was able to discover the secret key that protected the data of a user in a different virtual machine.


Yes, it’s possible that such an attack could be shut down with the Unix nice program or other operating system equivalents. However, proving the concept of a possible exploit casts a big shadow when data security is the issue. Hackers certainly will be interested in the results.
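If scheduler throttling is the stopgap, the standard Unix tools look something like the following sketch. The PID and job name here are hypothetical, and demoting a noisy neighbor's priority is no substitute for fixing the underlying isolation problem:

```shell
# Start a low-trust or batch workload at the lowest scheduling priority
# so it cannot flood the CPU with high-frequency timeslice requests.
nice -n 19 ./batch_job.sh

# Or demote an already-running process (PID 12345 is hypothetical);
# renicing other users' processes requires root.
renice -n 19 -p 12345
```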


In the meantime, while we wait for the next shoe to drop, you can look for signs of irregularities in resource use within your cloud with appropriate virtualization monitoring tools for your VMware environment.



For years I've been using Microsoft products. I thought that, as the maker of the best-known operating system, Microsoft would ultimately get it right. I thought wrong, and here we are. Before we go any further, let's start at the beginning. (I'll omit some OS versions I deem less noteworthy for this exercise.) Let's pretend that Microsoft is in charge of building cars, in addition to operating systems. Below are some very brief reviews of the Microsoft operating systems over the years from a user's point of view, as well as their automotive counterparts:


Windows 3.1

  • The Good: For those of you who don't remember, Windows 3.1 was the new thing, grouping files in these things called windows. The Apple Mac did the same thing really, but I won't argue the differences, technically or legally.
  • The Bad: Still in its infancy.
  • The Car: Basic 1966 VW Beetle. Does the job, but not too flashy or powerful.

Windows 95

  • The Good: This was the game changer. Some of the new features introduced were the Start button, the Taskbar, and desktop shortcuts. Simple to use and efficient.
  • The Bad: Stability was so-so. Needed more functionality.
  • The Car: 1966 Mustang. Flashy, cool, and powerful. Could use more oomph all around.


Windows ME

  • The Good: The only saving grace was that it still had the look and feel of Windows 95.
  • The Bad: Complete train wreck. This OS had more bugs than a NYC sewer. It got to the point where I actually began reading the text on the Blue Screen of Death because I saw it so often.
  • The Car: AMC Pacer. Functional, in between its frequent falling apart episodes. (Side note: Pop had this car and loved it. I called it the leper car because everything I touched fell off.)


Windows XP

  • The Good: At last, a stable version of Windows 95, Start button, Taskbar, and all!  I ran this OS as long as possible with nary an issue. Then, wanting to be on the bleeding edge of technology, I did the unthinkable. I bought Vista, and boy did I bleed.
  • The Bad: Nothing.
  • The Car: Corvette Stingray. Cooler, stable, and more powerful. More oomph found.


Windows Vista

  • The Good: Aero glass effect was introduced. The Start button, desktop shortcuts, and Taskbar, remained intact. USB RAM also introduced.
  • The Bad: After two long weeks of being stymied by countless security pop-ups and various other incompatibilities, I had had enough. Vista was garbage from Jump street. Out it came. This is what I should have done with Vista.
  • The Car: Undercover police cruiser, starting and stopping constantly. Not allowed to do anything. Had power though.


Windows 7

  • The Good: Aero glass effect still alive, as well as the Start button, shortcuts, the Taskbar, and USB RAM. The ad campaign focused on the user building Windows 7, and it worked. It looks like actual users had input into this version. Best OS to date.
  • The Bad: Nothing.
  • The Car: BMW 3 series. Luxurious, stable, and powerful.


Windows 8

Well, here we are. Look at this screenshot for just a moment and really soak it in.

  • The Good: Faster boot time. Incorporated touch screen technology for supported devices.
  • The Bad: No Start button. The Taskbar is "hidden." The tiles are intrusive and annoying.
  • The Car: The Partridge family's bus. Looks funny, not easy to drive, but has power.

Let's take the new OS apart piecemeal:

  • Where's the Start button and Taskbar? These were introduced in Windows 95 and caught on quickly. Everyone loved them, and every Windows OS since has had them. Removing the Start button and Taskbar from the home screen in Windows 8 is like building a new car and replacing the steering wheel with voice commands! I have to relearn steering? Really?
  • Tiles as the home screen? This looks like my computer threw up! Anyway, tiles are supposed to provide you real-time information on just about anything from news to weather to sports scores. Do I need all of this information at once and in my face? I think not. Brain still works, don't need a computer for the obvious every second of the day. And what's with the random colors of each tile? Do the colors mean anything? Can I change them?  At first glance it looks as though Picasso had a cubist moment.
  • Where are my desktop shortcuts? Better still, where is my desktop?! Right-clicking the home tile screen solves the shortcut mystery, to a degree. Nothing is obvious though.
  • Desktop found, sort of. There is actually a tile for the desktop. Opening the desktop brings you to a half-@$$ Windows 7 view where you can view these things called windows. The problem is, you can't access your programs from the desktop tile without going back to the home tile screen first. Useless waste IMHO.
  • For an OS called Windows, all I see are tiles. Minimize, maximize, and close buttons? Who needs them? Umm, everyone. Put them back please.
  • The touch screen interface is cool, but not very practical for home or work computers, especially since we still type. The mouse is just too darn easy and efficient. Granted, we'd all love to have that cool holographic computer in Minority Report, but we're just not there yet.
  • A computer is not a tablet! People have to be more than entertained by their computers. They need to be productive.
  • How do I close apps short of launching the task manager?
  • Even turning the OS off is perplexing. You now have to navigate to Settings > Power > Shut Down to accomplish this minor miracle.

Linear thinking meets a fork in the road.

Windows has improved over the years by adding new features and retaining the successful ones (i.e., the Start button, the Taskbar, desktop shortcuts, etc.). Microsoft has also learned from its mistakes (the more stable XP replaced ME, the less restrictive 7 replaced Vista). After looking at Windows 8, I think Microsoft will have a great deal to overcome. I suspect "Windows 9" will look quite a bit like Windows 7 after the Windows 8 dust has settled. Whoever took the lead on this project (probably the same joker who introduced the ribbon to MS Office) hit a brick wall (most likely tiled).


Don't force us to relearn everything again and again.

Remember the ribbon episode with MS Office? Well, in its infinite wisdom, Microsoft decided to put the ribbon in Windows Explorer for Windows 8. (I lost a lot of time and work because I couldn't find my file menu items under that enormous, idiotic ribbon - believe it or not, people can still read.) Build upon your successes, Microsoft; add to them. Don't change them. Here are some tips just for you, Microsoft:

  • Don't remove features people like and use by replacing them with things you think are better. They're not. Remember Vista? Didn't work out too well, did it? I'm sure you'll catch a lot of flak for killing the Start button and killing our desktop with tiles. Be ready with a serious update, and soon.
  • Talk to a wide variety of your customers. Find out what they want, like, and dislike.
  • Let the user decide what's best. Everyone is different. Provide options, not requirements.


All signs are pointing toward world domination.

Windows 8 is now on computers, tablets, and phones. And that's fine. I'm sure they all work well together. But does that benefit the consumer? For me, it's all about control. If I cannot get what I want, I will go somewhere else. This is precisely why I ditched my Windows phone for an Android. Hopefully Microsoft will learn from this and actually get to know its customers like we do here at SolarWinds. Try less domination and more innovation, Microsoft. But that's just me.


The Bottom Line

Microsoft took the least popular phone OS and threw it on a computer, with a Windows 7 knock-off lurking somewhere in a tile. The new OS upgrade costs $15-40. Take that money and grab a beer or 12. It would be better spent at a bar making jokes about Windows 8 with your geeky friends.

You're using SolarWinds Storage Manager for storage monitoring, and you're getting an error message: ERROR: Could Not Connect to the Agent! Please Make Sure That the Agent is Running. Here are some troubleshooting steps to help you resolve this message and continue your holistic storage performance monitoring.


About Storage Manager HTTP and Traps

The Storage Manager Collector service gathers data from the Storage Manager Agent using HTTP and Traps. The Storage Manager Agent sends a Trap to the Server when the data is ready. When the Server receives the Trap, it connects to the Agent using HTTP and submits a GET command to collect the data. By default, the Server receives a Trap from the Agent about once every hour. If communication between the Collector service and the Agent is blocked, the data is not collected. Below is a diagram to help visualize how Storage Manager uses HTTP and Traps:



NOTE: The Storage Manager Collector is also an agent.

NOTE: If the agent originated from Profiler, the service name is Profile Agent.


Checking the HTTP or Traps for issues

  • In the Storage Manager web console, navigate to the resource with issues.
    • If the Last Collect is updating, the HTTP is OK.
    • If the Last Trap is updating, the Traps are OK.


Finding the communication break

  • In the Storage Manager web console, navigate to the OS agent resource and click the traffic light.
    • If a list of modules and their status is returned, then the HTTP communication from the server to the agent is OK.
    • If an error message is returned, be sure the Storage Manager Agent service is started.

  Example error message: Could not connect to the agent. Please make sure the agent is running. 

  • If the Storage Manager Agent service is running and the error message is still returned, try the following:
    • Verify that the <Storage Manager Agent Install Directory>\core.xml file contains the correct IP address of the Storage Manager Server.
    • Verify that no firewall Access Control List (ACL) is blocking HTTP port 4319.
      • Telnet from the Storage Manager Server to the agent on port 4319 (telnet <agent address> 4319). A blank screen is returned if it can connect.
    • Restart the agent.
  • If the Last Trap is not updating, verify the following:
    • Confirm the Storage Manager Server Trap port in the Main Console > Settings > Server Setup All > Server > SNMP Trap Port.
    • Validate that the <Storage Manager Agent Install Directory>\core.xml file contains the correct IP address and Trap port for the Storage Manager Server.
    • Check that the Storage Manager Event Receiver service is started.
    • Check whether the connection from the Storage Manager Agent to the Storage Manager Server on the SNMP Trap port is being blocked by a firewall.
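If telnet isn't installed on the server, a few lines of script can perform the same reachability check against the agent's HTTP port. This is just a sketch (the hostname is a placeholder; 4319 is the default port noted above):

```python
import socket

def check_agent_port(host, port=4319, timeout=5):
    """Attempt a TCP connection to the Storage Manager Agent's HTTP port.

    Returns True if the port accepts connections, False if the connection
    is refused, blocked by a firewall, or times out.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical agent hostname):
#   check_agent_port("storage-agent-01")  # True if port 4319 is reachable
```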


Let me know if this helps resolve this error message on your storage performance monitoring software. Or let me know what you did to resolve it.

Occasionally we get a customer who needs to add a network to Network Performance Monitor that is not reachable from their main network. This type of connectivity is shown in the figure below.


Making this work is not very difficult as long as you follow a few rules.


  1. As always, the networks cannot share any IP address space.
  2. Don't count on the default gateway alone to define any networks except the ones directly attached to your server.
  3. Use the route add command in a Windows prompt to add all networks you need to manage.
  4. Do not connect the two networks. If you later add a router that routes between the networks, it will break this solution.
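For rule 3, the command takes the destination network, its mask, and the gateway on the second NIC's segment. A sketch from an elevated Windows command prompt (the addresses below are hypothetical; substitute your second managed network and its gateway):

```shell
:: Persistent (-p) route to the second managed network, via the
:: gateway reachable through the server's second NIC.
route -p add 10.20.0.0 mask 255.255.0.0 10.20.0.1

:: Confirm the route is in the table.
route print
```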


To see more unique Network Management ideas, visit our NPM thwack forum.





Modern PBX (IP PBX)

Posted by LokiR Nov 29, 2012

This continues on from the Evolution of Telephony post.


What is an IP PBX?


An IP PBX connects internal telephone calls and connects outgoing calls to the public switched telephone network over a data connection using the Internet protocol.


IP PBX is the next (and current) evolution of PBX. It stands for Internet Protocol Private Branch Exchange. Some folks call it IP-based PBX as well.


As we move further away from our humble, person-based switchboards, we move toward more automation and tighter integration with all communication channels. With IP PBX we no longer have two lines of communication entering the office - voice and data. We combine the two into one telecommunications line and run voice and data over that line (aka VoIP). The IP PBX takes the voice data and redistributes it to the appropriate phone, like traditional PBX systems.


To recap traditional PBX systems:  A PBX is a telephone switching system that connects numbers within the private branch and connects the private branch to the public phone system.


IP PBX switches voice information over the Internet instead of the traditional telephone system. People have moved to IP PBX for many of the same reasons as the move to PBX - it's cheaper and more efficient. With IP PBX you do not need to maintain separate data and voice lines. Startup and maintenance costs are lower; instead of specialized equipment, IP PBX systems can be as easy as a piece of software. Of course, the larger your organization, the more complex your IP PBX system is.


The systems can perform the same tasks as traditional PBX systems. You can probably get more functionality than traditional PBX systems for a comparable amount of money. Typical functionality includes extensions, voice mail, call forwarding, call holding, call parking, and conferencing.

I try to avoid writing blog posts that are strictly related to our products at SolarWinds, but as I was recently conceptualizing some content for our newest product, Firewall Security Manager, I learned something that's just too good not to share. FSM came to us when we acquired Athena Security a few months back. This product utilizes firewall configuration files to analyze and manage firewall rules and changes offline to eliminate any potential impact planning might have on the production network. For years, Athena Security advertised their ability to integrate with SolarWinds NCM, so they were a perfect candidate for joining our family. In this post, I'd like to illustrate just how these two products work so well together.


Integrating FSM and NCM for 360-degree Firewall Management

For those of you who don't know, SolarWinds NCM is a network configuration change management software that allows you to back up, analyze, and modify the configuration files on all of your network devices. Of course, this includes firewalls and firewall-capable routing devices, so it was natural for the team at Athena Security to leverage that functionality. With these two products, you can collect, analyze, and update your device configurations without ever having to go to the command line or manually access the device itself.


Collecting the Config Files

In FSM you have several options for collecting configurations files: you can connect directly to a Cisco or Juniper NetScreen device; you can connect to a Check Point management server; or you can import a single set of configuration files from your company's file system. You can also connect to your NCM server to import configs from several devices, regardless of vendor (assuming the devices are supported). This allows you to leverage what NCM has already done for you and streamline the initial import process in FSM.


Analyzing the Config Files

This step is where FSM really shines. After you have the config files in FSM, you can analyze your firewall rules in human-readable tables, compare different versions of configuration files, and even generate reports to tell you which rules aren't being used or which rules open your network to security risks. Using the various tools and reports in FSM, you can easily identify what needs to be changed on which devices, and then test those changes in an offline change-modeling environment to ensure your changes won't have any adverse effects.


Updating the Device Configurations

After you have identified what needs changing, FSM generates change scripts with the proposed changes. These scripts are fully editable, so it's easy to change only what you want and customize where necessary. When you've finalized the scripts, you can manually push them out to your devices using your preferred method, or you could use NCM to do that for you. NCM allows you to execute scripts on the devices it manages, so that closes the loop we started in step 1 when we used NCM to import the device configs into FSM.


I look forward to learning more about how to use FSM and NCM together as I continue working on this product, and I'll share tips as I learn them. If you like reading articles like this on Geek Speak, please let me know in the comments.

For over 10 years, DameWare has been providing remote support solutions to system administrators and other IT pros.  The latest product offerings from DameWare, part of the SolarWinds family, include the popular remote control software, Mini Remote Control, and the robust remote support software, DameWare Remote Support.  Today we’re happy to announce a new remote support tool that IT pros can add to their tool kits.  And best of all….it’s totally free!


The new free tool is called the DameWare SSH Client for Windows (catchy name, eh??).  For years, IT pros have had to choose between free tools like PuTTY with limited feature-sets that aren’t often updated and paid tools that provide a rich set of features, but that can cost well over $100 per license. The DameWare SSH Client is a completely free SSH client for Windows that bridges the gap between expensive paid software and limited free SSH clients.  The DameWare SSH client provides several features that are usually only seen in expensive alternatives.  Like other DameWare products, the features included in the SSH client are designed to help IT pros save time and money and are packaged in an easy-to-use interface.


Get Your Tabs On


You wouldn’t use an internet browser without tabs, would you?  Then why use an SSH client without them?  The DameWare SSH Client for Windows has a user-friendly tabbed interface that lets IT pros manage multiple SSH or Telnet sessions from one console.  Give it a try and see for yourself how easy it is to manage multiple sessions.  Just like tabbed browsers, we bet you won’t go back!


Quit Re-typing Your Credentials


If you’re anything like the busy system administrators I’ve worked with in the past, you’ve probably got several SSH and Telnet sessions going at one time.  Most likely you’re connecting to computers and devices that require different credentials.  The DameWare SSH Client for Windows includes a very handy feature that lets you save multiple sets of credentials for easy login access to all the devices on your network.


Saving Sessions = Saving Time


Like most network environments, yours probably has a handful of devices and computers that you access more frequently than others.  The DameWare SSH Client for Windows includes a feature that lets you save your favorite sessions.  Rather than retyping session information like machine name, IP address, and login credentials, simply choose from the list of your favorite saved sessions, click connect, and voilà!


The DameWare SSH Client for Windows – It’s Half the Price of PuTTY!


What is half of zero?  Zero!  That’s how much the DameWare SSH Client for Windows will cost you.  Download it for free and it’s yours to keep.  If you like it, and I’m sure you will, remember to click on the Google +1 button at the top right corner of the DameWare SSH Client for Windows homepage.

I had the pleasure of interviewing Sean Ackerman on why he decided to go with Server & Application Monitor.  Sean is an Infrastructure Engineer who works in the insurance industry.


JK: Tell me a little about your IT environment.

SA: Our infrastructure is 70% virtualized.  We have about 3,000 VMs, excluding our VDI environment.  We have a wide variety of hardware vendors, including HP, Cisco, and pSeries; storage-wise, NetApp and EMC DataDomain.  From an application standpoint, we are a big Microsoft shop with the normal Microsoft applications like Exchange, SharePoint, Active Directory, and Lync – but we also have some IBM applications like Cognos and WebSphere.


JK: What prompted you to search for a new server monitoring tool?

SA: Our company used HP OpenView for monitoring our application infrastructure, but there was a lot of complexity in using HP because it often broke, and from a maintenance standpoint it was not simple.   We even used System Center Operations Manager (SCOM) in the past, and though it provided agentless monitoring, SCOM required agents to monitor most applications with unique metrics.


With HP, we spent a lot of time managing agents and had little clarity on what was being monitored.  When alerts would fire, the question would come up – Why did we get the alert?  What is the problem?  When it came time to monitor a new application, it was difficult to figure out what should be monitored and what the thresholds should be since the HP product did not provide out-of-the-box best practices for monitoring applications.


JK: Why did you choose Server & Application Monitor?

SA: I was familiar with the SolarWinds brand, as our network team had used Network Performance Monitor and found the product easy to use and get going.  SAM provided good value for what we got; it works out of the box and is pretty straightforward.  It didn’t require three full-time engineers just to support the product (SAM), it can be set up relatively easily, and the support from the thwack forum was great.


The fact that Server & Application Monitor did not require a whole lot of admin overhead was a huge selling point for us. With the simplicity involved in using SAM, the workload of five roles has been reduced to one.


In evaluating a server monitoring solution we also looked at System Center Operations Manager but felt it was complex.  My team preferred SAM as the product was broad in its application coverage with new applications being supported with frequent releases.  We also chose SAM because it could be customized and used to monitor anything that needed to be monitored.


JK: What kind of value are you seeing from Server & Application Monitor now?

SA: We use SAM to run an application scan every 4 days to automatically discover and monitor new applications. The unlimited license of SAM is worth the money since it allows us to add monitoring for any application when a new scan is run.  With SAM, I feel it’s now easy to get any application monitored.   The out-of-the-box templates with pre-defined thresholds do about 90% of the legwork, and it is very easy to pinpoint issues quickly.


With SAM in place, my team now only spends 20% of our time setting up application and server performance monitoring company-wide and we now have time to work on projects to improve IT services.  We use the dynamic grouping feature of SAM to look at performance issues by application and location.  The Real-Time Process Monitor feature of SAM is also effectively used to identify causes of spiking performance.



In a previous post I discussed the importance of having an intelligent communication system automate the process of sending alerts and managing escalation in the event of a large-scale IT emergency.


Let’s assume a regional emergency takes down power for an indefinite period. You have your production systems replicated in two geographically distinct datacenters. Though your primary site is in the impacted region, the datacenter’s generators give you some time to perform a graceful switchover to the secondary site.


Telecommunications in the impacted region may be sporadically or unevenly available. For this reason—among others—you chose as your intelligent notification system a third-party service (SaaS) whose systems are appropriately redundant in regions different from yours.


As your NOC kicks off alerts to operations teams, your notification system looks up relevant on-call contacts in all the team calendars and sends out initial, secondary, and tertiary calls, pages, emails, and text messages as needed, automatically steering contacts into appropriate chat sessions and conference calls, and automatically escalating alerts up the chain within each team. Concurrently, higher-level operations managers are similarly alerted and join forums to make a go/no-go decision on the switchover every 15 minutes, based on estimates of lost data—currently in the replication pipeline—and lost revenue due to unavailability of services should the primary site fully go down.


More Channels are Better

Based on device and carrier types, members of your different NOC and operations teams may have different access to communications channels. As a result, you should leverage as many channels as possible.


In this sense, though it cannot substitute for a full-scale intelligent communication system, Twitter could effectively help you bridge gaps. Twitter offers a highly scalable system that supports any device with the software to send and receive tweets.


In this case, you need to restrict communications, which runs at cross purposes with the vast majority of tweeters, who broadcast their short messages to whoever will listen. You can set up a private Twitter group that only the NOC and operations teams follow and simply disapprove any other unwanted followers.  Posting all messages to the group in direct mode keeps all communications private.

The 140-character limit for each tweet will of course hinder the continuity of communication over this channel, but tweets would be perfect for short progress updates on specific tasks.

Passing tickets between the network team and the server team is very time consuming for both parties and wastes precious minutes and hours in getting the service back up and running.  Server teams and network teams often work in silos with different tools (sometimes manual ones) for managing each environment – each with a different UI and database.


In server monitoring, when a service goes down, any number of things can be causing the outage – from a network issue to a server hardware failure to virtual machine performance to a rogue process – which complicates the server performance monitoring aspect. When it comes to monitoring applications, if you get a complaint that an application is slow, it’s very difficult to ascertain the root cause when all you can see is that network performance is hunky-dory.  By adding a server and application monitoring tool, you can actually see that you have a disk failure, that you have a process that is out of control, or that one of the services for your application stopped working.


Check out this video to see how you can solve problems faster by adding server monitoring to your network monitoring environment.

Thwack! Now I get it.

Posted by Bronx Nov 27, 2012

Why did we call the site thwack and not something unimaginative like, The SolarWinds Community? Because the last thing we are is unimaginative.



If you've been on this site, even for a brief moment, you've noticed a bunch of cartoon characters lurking about. These characters have networking and application monitoring problems that need solving. On thwack, we do our best to solve those problems. Hence the name, thwack. Thwack is the sound a cartoon character would make if he were to slap his forehead as if to say, "Now I get it! Problem solved!" (Similar to the "Blamo!" that would splash on the TV when Batman hit The Joker on the original Batman TV series, circa 1970.)


Don't let the Cartoons fool ya!

Don't you dare. Everyone in this community and here at SolarWinds is at the top of their game, despite our playful appearance. I cannot imagine a collection of people who know more about networking and application monitoring than this bunch. Don't believe me? See for yourself. Check out other software communities and see how they stack up, then ask yourself this:

  • Was my question even noticed, let alone answered?
  • Was the answer correct?
  • Do they even care about me?

Yeah, we care. A lot of us hang out here on thwack in our off time, and not because we have to. It's just fun.


Cartoonify yourself

Want to know how to create those cool characters we come up with? Simple, there's an app for that. A lot of us around here follow South Park and The Simpsons so I've provided links to those two cartoonification sites below. Feel free to venture beyond the characters of these two shows though. Be yourself, within reason.


Get the most out of thwack.

Here are some tips to help you solve your networking and application monitoring issues faster using thwack:

  • Search - Most likely your question has come up before. Try the Search bar at the top-right of every page.
  • Explore - There are a ton of forums here. Poke around and see what you can find. Remember to ask your question in the right forum. Not all of our forums are dedicated to our products.  We have a lot of talk around general IT topics.
  • Follow - Follow a specific forum and receive email notifications. You'll be surprised at what you can learn and at what you don't know.
  • Participate - The more you do, the more points you earn. Check out the missions page and the point system page so you can learn to earn. More points equals more goodies for you from the thwack store.
  • Learn - Check out DocWhite's earlier post about the evolution of social networking and online communities, including thwack.

How do you manage your firewall rulesets? Over time, firewall configuration files become overrun with rules, growing large and complicated. It's not unusual for firewalls to have thousands of rules, and many of those become obsolete as network and security teams add new ones to meet business needs. Making things worse, as a firewall's configuration files grow, its performance decreases. If you want to keep your rulesets trim and your firewalls running optimally, you should perform regularly scheduled firewall audits and then clean up your rule base accordingly.


Look Before You Leap

Ideally, you should analyze your firewall configs for two types of unnecessary rules before cleaning them up:

  • Redundant rules
  • Unused rules


Identifying Redundant Rules

Redundant rules are rules that have the same purpose as some other rule in the ruleset. These redundancies exist because of the structural relationship between all the rules the firewall uses:

  • Firewalls evaluate their rules in a sequence defined in the rule base.
  • If a rule is covered by another rule that comes before it, it will never be triggered.
  • Such "coverage" is determined by the traffic that each rule allows or denies.
  • When you find redundant rules, it is generally safe to remove them because the firewall will never use them.

Cleaning up the redundant rules on your firewall simplifies the configuration, making it easier to manage and less prone to errors.
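The coverage check described above can be sketched in a few lines of Python. This is a simplified model, not FSM's actual algorithm: rules are reduced to source/destination/port tuples, and "*" stands in for "matches anything".

```python
# Simplified sketch: flag rules that are "shadowed" by an earlier rule.
# Rules are (source, destination, port, action); "*" matches anything.
def covers(earlier, later):
    """True if `earlier` matches every packet that `later` matches."""
    return all(a == "*" or a == b for a, b in zip(earlier[:3], later[:3]))

def shadowed_rules(ruleset):
    """Return (index, rule) pairs for rules that can never be triggered."""
    return [(i, rule) for i, rule in enumerate(ruleset)
            if any(covers(prev, rule) for prev in ruleset[:i])]

ruleset = [
    ("10.0.0.0/8", "*", "80", "allow"),
    ("*", "*", "443", "allow"),
    ("10.0.0.0/8", "192.168.1.5", "80", "allow"),  # covered by the first rule
]
```

Real firewall rules also involve protocols, address ranges, and object groups, which is exactly why this analysis gets painful to do by hand at scale.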


Identifying Unused Rules

Unused rules are another unnecessary burden to any firewall. They make the configuration complex, but have no real reason for existing. Oftentimes, these rules have become stale due to changing business needs. To identify these rules, use the logging features available on your firewalls and look for rules that never get used:

  • In most cases, logging is a feature you have to manually enable before the firewall will collect any usage logs.
  • After you enable logging, allow the firewall to collect data for a reasonable time period. This period will vary depending on the number of devices and users on your network, along with the general traffic volume.
  • Over time, the statistics generated from your log data will tell you what rules are never hit.
  • If a rule has a zero hit-count, disable it with the appropriate documentation, and then remove it after you are confident you will not get any complaints about service availability.

In this case, your firewall's logging feature adds a level of automation to this task. Nevertheless, cleaning up the rules, whether redundant or unused, can be an onerous task.
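The hit-count idea can be sketched like this (the log record format here is hypothetical; every firewall vendor logs differently):

```python
from collections import Counter

# Hypothetical sketch: tally per-rule hit counts from parsed log records
# and flag any rule that was never hit during the collection window.
def unused_rules(rule_ids, log_records):
    hits = Counter(rec["rule_id"] for rec in log_records)
    return [rid for rid in rule_ids if hits[rid] == 0]

logs = [{"rule_id": "r1"}, {"rule_id": "r1"}, {"rule_id": "r3"}]
```

A rule flagged here is only a candidate for removal; as noted above, disable it first and wait for complaints before deleting it.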


The Cost of Manual Rule Cleanup

There are several things to consider as you prepare to clean up your firewall rules:

  • If your rules use a lot of object groups, identifying redundant rules manually will be painful and time consuming. Object groups add complexity to this task because they introduce a high number of combinations you'll have to analyze to fully understand what each rule does.
  • If you have a large number of firewalls, cleaning them manually can add several months to your firewall management schedule, not to mention thousands of dollars in cost.
  • If you want to identify unused rules with log statistics, you'll have to put in a little time upfront to collect a sufficient amount of data.

Who has that kind of time? Clearly, an automated process would be a far better option for firewall cleanup.


SolarWinds Firewall Security Manager (FSM) provides an easy-to-use, automated solution to ensure your firewalls are free of any unnecessary rules and objects. Furthermore, FSM helps you test your cleaned up configs before you deploy them in your production environment to ensure your changes won't have an adverse effect on existing service availability, or expose the business to unauthorized traffic.

Even though fielding emergency calls on holidays is something system administrators have come to expect, resetting passwords while the family sits down to dinner can still put a damper on the holiday spirit. The workaholics in your office won't let a little something like Thanksgiving slow them down, and as long as they are working, you'll be expected to work too.


Whether it's simply rebooting an end-user's computer or troubleshooting a crashed production server, having the right tools can mean the difference between spending a holiday in a datacenter and spending it with family.  Remote support software is exactly that kind of tool!  DameWare Remote Support provides system administrators with all the features they need to support their office from one location.  With DameWare's remote control software, sys admins can remotely control Windows, Mac OS X, and Linux computers.  Performing Windows administration tasks like restarting services, viewing event logs, and managing peripherals is easy from the DameWare software console.  And when you're dealing with a workstation or server that has crashed or is in sleep mode, DameWare's integrated Intel vPro features allow you to troubleshoot remotely.


We here at SolarWinds know all too well about workaholics in the office (there are more than a few here too!), so we put together an eBook for sys admins that highlights several of the types of workaholics you're sure to encounter this holiday season.  So, download it here and enjoy this quick, entertaining read.  We wish you a happy holiday and hope that the workaholics in your office leave you alone just long enough to enjoy a little holiday cheer this season!

Don’t tell your manager about the SolarWinds SysMan Hero browser game


Blow stuff up, virtually

Got a sudden urge to shoot aliens but you’re stuck in a meeting and need to look like you’re working? SolarWinds has you covered with their new SysMan Hero! game.  That's right, SolarWinds is launching a game…  not another free tool or product, just something fun to help you get through the day, blow off some steam, etc. It’s a blast and plays in your browser right over your favorite web pages.  It’s easy to get started and the gameplay is classic and engrossing enough to keep your attention.



SysMan Hero is an overhead 2D shooter with combined elements from arcade vector classics, and it feels familiar right away. It's also web based; there's nothing to install, just click and go. You start by choosing your hero (and a spaceship), and then use rotation, thrusters, shields, and your blaster to zap space bugs before they eat you and your web page. The intro level is deceptively easy and provides a warm and, as it turns out, unwarranted sense of mastery. Make no mistake; SysMan Hero can light you up in the rounds beyond. I thought this was just for grins at the beginning, but ten minutes later the springs in my spacebar were begging for mercy.


What are you waiting for?

There’s nothing to download, so check out SysMan Hero now and let me know what you think. There is a space (SolarWinds SysMan Hero!) in thwack dedicated to the game (feedback, share some tips, give us suggestions for future enhancements). And keep an eye out for additional one-day bonuses and inter-community challenges.


No one will know you’re playing a game over the company website. Be warned: a couple of minutes later, your mad clicking and arm twitches may give you away. If your manager asks what you’re working on, just tell him the truth: you’re squashing bugs.

A successful business needs a robust, secure network that supports its business goals. How do you get all those things – successful business and a robust, secure network? By making sure the folks who handle the network are engaged in the company’s business goals discussion.


Network and Security Strategy and Business Goal Alignment


In many companies, the top brass sets the business goals and then communicates those goals to everyone else. When this type of planning occurs, all IT personnel can do is react to management's goals.


If, however, you are part of management's business planning process – well, that’s a different story. You can help determine what can or can’t be done with the current network resources and skillsets. You can also help decide on the most cost-effective (always important!) ways to help management accomplish its goals.


For example, if one of your company’s goals is to increase its online business by 25% over the next year, what does that mean for network management? And what about for network security? Will your company be expanding its business in your own country, or overseas, or both? Can your current network strategy support this kind of expansion? Do you need to upgrade your network software or hardware? Should you beef up your security strategy to accommodate all the new customers and the new lines of products they’ll be ordering? And what about your skills? Will you need to upgrade them? Or add to your staff?


SolarWinds can help you help those decision makers make better business decisions. Take a look at the paper, Network Management for the Mid-Market, for more information on what you can do to be more proactive in aligning network management and security strategies with your company’s business goals.

Here's how you set up SSL on your storage performance monitoring software, the Storage Manager Web Console, with a redirect from port 80. This enables users to access the web site on port 80 with HTTPS automatically used. Storage performance monitoring has never been this easy!


Use the keytool utility to create a self-signed certificate

  1. Be sure you have Administrator authority.
  2. Enter the following command in a Command prompt:
    C:\Program Files\SolarWinds\Storage Manager Server\jre\bin>keytool -genkey -alias tomcat -keyalg RSA
  3. When prompted, enter a password for the keystore. Remember this password. You will need it in later steps.
  4. You will be prompted for additional information such as your name, address, role, and company. Complete the prompt requests.
  5. The certificate file, named .keystore, is created in the Home directory of the user who creates it.
  6. Save this file to a location outside the Storage Manager installation directory.
    Example: C:\STM_Certificate

Edit file server.xml

  1. Open the server.xml file in a text editor.
    C:\Program Files\SolarWinds\Storage Manager Server\conf\server.xml
  2. Comment out the default HTTP Connector port as follows:


        <!--
        <Connector port="9000" maxHttpHeaderSize="8192"
        maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
        enableLookups="false" redirectPort="8443" acceptCount="100"
        connectionTimeout="20000" disableUploadTimeout="true" />
        -->


  3. Enter the following HTTP and HTTPS Connectors:

        <Connector port="80" protocol="HTTP/1.1" URIEncoding="UTF-8"
        disableUploadTimeout="true" connectionTimeout="20000"
        acceptCount="100" redirectPort="443" enableLookups="false"
        maxSpareThreads="75" minSpareThreads="25"
        maxThreads="150" maxHttpHeaderSize="8192"/>

        <Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
        URIEncoding="UTF-8" disableUploadTimeout="true" connectionTimeout="20000" acceptCount="100"
        redirectPort="443" enableLookups="false" maxSpareThreads="75" minSpareThreads="25"
        keystoreFile="C:\STM_Certificate\.keystore" keystorePass="MyPassword"
        SSLEnabled="true" maxThreads="150"
        scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" maxHttpHeaderSize="8192"/>

  4. You can modify the port numbers to be whatever ports you wish to use for HTTP and HTTPS communications. If you update the HTTPS port, make sure you also update 'redirectPort=' in both the HTTP and HTTPS connectors.
  5. Be sure 'keystoreFile=' in the HTTPS connector points to the location of the keystore file.
  6. Be sure 'keystorePass=' in the HTTPS connector matches the password you entered for the keystore file.
  7. Save the file.

Edit file web.xml

  1. Open the web.xml file in a text editor.
    C:\Program Files\SolarWinds\Storage Manager Server\conf\web.xml
  2. Add the following to the file just before the closing </web-app> tag:
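The snippet to add is the standard Tomcat security constraint that forces all requests onto HTTPS; a version like the following should work (the web-resource-name value is arbitrary and can be adjusted):

```xml
<!-- Standard Tomcat constraint: redirect every request to HTTPS. -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Entire Application</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>
```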










Restart SolarWinds Storage Manager Web Services service

  1. Restart the SolarWinds Storage Manager Web Services service. Storage monitoring simplified!
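Once the service is back up, you can sanity-check the redirect without opening a browser. Here is a small Python sketch; the host name in the usage comment is a placeholder for your own Storage Manager server:

```python
import http.client

def redirects_to_https(host, port=80):
    """Return True if the server answers plain HTTP with a redirect to HTTPS."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("GET", "/")
        resp = conn.getresponse()
        location = resp.getheader("Location", "")
        return resp.status in (301, 302) and location.startswith("https://")
    finally:
        conn.close()

# Example usage (placeholder host name):
# redirects_to_https("stm-server.example.com")
```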



In a previous article I discussed using an alert system that supports escalation as a way to ensure that a small team (1-6) efficiently handles multiple issues concurrently.


That model works well for each operations team on its own. However, if an entire datacenter is impacted, each IT team uses its own triage process but must also communicate with other teams to coordinate work on resources in common. Sequencing work between teams becomes the critical task.


Coordinating Communication


Your network operations center (NOC) plays a crucial role in triage when your deployment is large enough to require multiple IT teams, your systems must be highly available to support customer-facing services important to ongoing business, and power or another infrastructural resource goes down in your primary datacenter. The NOC staff tracks each alert that lights up their wall display of all systems in your site.


While each IT team might have its own tool-driven escalation process, the NOC watches all escalations, verifying that appropriate on-call staff respond within an expected interval, and leading a conference call to discuss progress on each task related to the current operations issues. Simultaneously, the NOC convenes a conference call with all IT team managers to discuss the unfolding crisis and decide at specific intervals whether or not to switch over support for specific services to a secondary datacenter.


Any business-critical production system ultimately reckons downtime in terms of dollars lost. To expedite triage and switchover decisions, your NOC needs an intelligent notification system that can automatically bring the right people into their conference calls. The sooner the NOC gets the IT teams talking to each other, the sooner they can make informed decisions about what actions need to be taken and in what order. One team completing a certain task before another team is ready can actually set back rather than advance resolution.


Important features of a scalable intelligent notification system are:


  • Calendar-awareness: Looks up the on-call schedule for each IT team to find the engineer for the current shift.
  • Escalation awareness: Looks up each team’s points of escalation to find the decision-making manager for switchover decisions.
  • Automatic notification on multiple devices: Targets a specific contact on phone(s), email, SMS in a specific order.
  • Automatic conferencing: Routes the engineer or manager into the relevant conference bridge in which NOC is already waiting.
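The first two features combine roughly like this sketch (the data model here is hypothetical, purely for illustration):

```python
# Hypothetical sketch: resolve who belongs on the NOC conference bridge from
# each team's on-call calendar and escalation chain.
def bridge_roster(teams, shift):
    roster = []
    for team in teams:
        roster.append(team["oncall"][shift])   # calendar-awareness
        roster.append(team["escalation"][0])   # first point of escalation
    return roster

teams = [
    {"name": "network",  "oncall": {"night": "alice"}, "escalation": ["nina"]},
    {"name": "database", "oncall": {"night": "bob"},   "escalation": ["omar"]},
]
```

In a real notification system, each name on the roster would then be dialed on multiple devices in order and routed into the waiting bridge.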



With Windows Server 2012 comes WSUS version 6. In the WSUS Deployment Guide for Windows Server 2012, attentive readers will notice a critical change to the historical WSUS deployment strategy: "By default, in Windows Server 2012, WSUS [v6] uses port 8530. However, WSUS 3.0 uses port 80, by default." This might seem like a trivial detail to some, but if you're not careful when you're configuring your WSUS server in Windows Server 2012, you're likely to configure a server that no clients can find.


Why does it matter what port WSUS uses?

When you configure computers in your network to get their updates from a WSUS server, you provide the clients the URL for the WSUS server in a local/group policy setting. In the past, it's been sufficient to use something like http://wsusServer, since HTTP URLs default to port 80 when none is specified. This worked just fine with WSUS v3 and earlier; but now, if you don't change the default port when you configure WSUS on Windows Server 2012, you have to specify port 8530 in the URL your clients use: http://wsusServer:8530
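You can see the implicit-port behavior directly in how such a URL parses; a quick Python illustration:

```python
from urllib.parse import urlsplit

# urlsplit reports None when the URL leaves the port implicit;
# for http:// that implicit port is 80.
old_style = urlsplit("http://wsusServer")       # clients assume port 80
new_style = urlsplit("http://wsusServer:8530")  # WSUS v6 default, stated explicitly
```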


What are the implications of not using the correct WSUS port?

If you neglect to change the WSUS port in either the server settings or the client URLs, your clients simply won't be able to find the new WSUS server. They'll get a 404 error, just like they would if they were trying to access a web page that's not available. What's more, if you use a third-party WSUS patching tool like SolarWinds Patch Manager, that tool won't be able to find the new server either. So, when you start testing Windows Server 2012 in your patching environment, be sure to configure the appropriate port to ensure everything plays nicely together. Patch management simplified!


For additional information about the implications of not specifying the correct WSUS port when using Patch Manager, see "WSUS v6 installs to port 8530 by default" on the SolarWinds knowledge base.


Tired of unresolved WSUS issues? Check out this white paper from Microsoft MVP Lawrence Garvin.

Black Eye Friday.

Posted by Bronx Nov 19, 2012

As we all know, the Friday after Thanksgiving has become known as Black Friday. This is the day when major stores offer incredible deals beginning at midnight.


Should you participate?

On the surface, getting an $800 computer for only $200 may sound like a great deal. But what's the real cost? I think it's a wise idea to examine these Black Friday deals just a little more closely. Here's what you're in for if you decide to participate in the frenzy:

  • You'll probably be standing in line for at least six hours, at night and in the cold, with no bathroom in sight. Are you physically ready for this?
  • Let's say you earn $20 an hour at your job. Standing in line for six hours is $120 worth of your time, before taxes, of course.
  • Have you ever been to Pamplona to join in the Running of the Bulls? If not, you'll have a pretty similar experience when you join a mob of hundreds, if not thousands, crashing through the store doors and running through the aisles, with not a soul caring about your safety. Have you seen the videos of people getting trampled? For what, to save a few bucks? (I thought we were living in a civilized society. Silly me.)
  • Is that computer really worth the $800 the store says it originally cost before this sale? Probably not. Consider the markup the store adds for making a profit, not to mention the cost of housing it in an actual store with employees who get paid. There's a bit of padding there.
  • Technology grows fast. By the time you get the computer out of the store and set up at home, it will already be outdated. Why? The items on sale are usually not the best sellers to begin with, and for a reason. Look before you leap.
  • Do you really want to give up your precious free time for this? I can think of 115,412 things I would rather do than stand in line for a sale.


You're not going to participate, but still want the deal?

Enter Cyber Monday. For those of you who won't tolerate the perils of Black Friday but want the deals, Cyber Monday was created just for you. Think of Cyber Monday as the same great deals as Black Friday, without the risk of death. Here are some of the benefits of Cyber Monday, as opposed to Black Friday.

  • Same great deals.
  • No lines, standing or otherwise. Shop from the comfort of work or home, sitting, and near a bathroom.
  • No risk of being trampled by an uncivilized stampede of people.
  • Technology is three days more advanced.


You missed both days and still want to save?

Then use your noodle. Here are some tips for saving money any day of the year:

  • Shop around. Here's a list of the top 15 price comparing websites.
  • Buy used. In economic uncertainty, people often sell items for quick cash, which means you save. Think ebay, Amazon, or Craigslist.
  • Haggle. You'd be surprised how many vendors are willing to negotiate the final price.
  • Get your coupon on. Sites like have coupons o'plenty.
  • Shop online. You may have noticed a growing trend where online shopping has increased, while shopping at actual retail stores has decreased. This is no accident. Virtual stores are much cheaper to maintain than real stores. Of course, that savings is passed along to you, the consumer. It even happened here at SolarWinds. SolarWinds used to sell software on CDs along with hard copies of their manuals. Now it just makes more fiscal sense to sell everything digitally and online.


Happy shopping, and be smart. Be safe.

Continuing in our series of How To's for SolarWinds IP Address Manager (IPAM), we will look at performing user delegation to streamline team access.


SolarWinds IPAM allows you to define and use role definitions to restrict user access to maintain security without limiting your ability to delegate required network management activities. With IPAM, you can designate various levels of access privileges and custom roles for each admin based on his/her operational limitations and access controls on IP address management within the organization.


In this How-To guide, you’ll learn how to define access roles per subnet, group, supernet, DHCP scope, or even individual IP addresses. You'll also get insight into the operational scope of the various access permissions offered by IPAM.


View the User Delegation How-To now and see how SolarWinds IP Address Manager makes team-based administration a breeze!

Network Management and the Management Information Base (MIB)

If you are working in IT, chances are good that you have been working with SNMP and MIBs. Almost every type of network management system uses MIBs. In my New to Networking series, I discuss MIBs at length in Volume 4, Introduction to SNMP. I think that paper gives a good overview of MIBs and how they function, so I won't discuss that here. What we will look at is the process of creating a MIB. The Internet Engineering Task Force (IETF) is the party responsible for overseeing the MIB development process and approving MIBs as standards. The IETF welcomes anyone with technical competence to submit work for development and eventual approval as a MIB. So, where does this process start?


Step One - Request For Comments (RFC) Submission

To get the MIB "ball" rolling, the submitter works with IETF members to create an RFC and have an RFC number assigned. The RFC is a working document for the submitter to communicate their need and intention for a new MIB. The IETF is so invested in the RFC process that they have an RFC that defines the IETF. IETF members review the RFC submission and comment back to the submitter and any IETF working committee assigned to the RFC. If the IETF decides to move forward with the MIB, they assign the RFC a MIB number on the experimental MIB branch and reserve a MIB number on either the standard or enterprise branch. Standard branch MIBs are vendor independent, whereas enterprise branch MIBs are vendor specific.


This diagram shows the structure of MIBs from the root to these branches.

[Diagram: top-level MIB structure]
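For reference, the branches discussed above sit under the internet node (1.3.6.1) at standard, well-known OID prefixes; a small illustrative sketch:

```python
# Standard top-level OID branches under the internet node (1.3.6.1).
BRANCHES = {
    "mgmt (standard MIBs)":              "1.3.6.1.2",
    "experimental":                      "1.3.6.1.3",
    "private.enterprises (vendor MIBs)": "1.3.6.1.4.1",
}

def branch_of(oid):
    """Classify an OID by the top-level branch it falls under."""
    for name, prefix in BRANCHES.items():
        if oid == prefix or oid.startswith(prefix + "."):
            return name
    return "other"
```

For example, Cisco's enterprise MIBs live under 1.3.6.1.4.1.9, while sysDescr (1.3.6.1.2.1.1.1) is on the vendor-independent mgmt branch.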


Step Two - RFC and MIB Approval

Now don't let this fool you into thinking that it's an easy process to get to this point. This RFC describes the process in full. IETF members tend to be very academic and process-oriented people, so this is not for those who are easily frustrated. The good news is that this process works and is the globally accepted method of creating and publishing a new MIB. If you want to take a deeper dive into MIBs, I recommend SNMP MIB Handbook by Larry Walsh.

Today we are pleased to announce that the latest version of SolarWinds Network Performance Monitor, version 10.4, is now available.  For existing customers, you will see this within your customer portal within the next few days.


NPM v10.4 is robust and effective network management software that packs a lot of great new features, many of which have been requested by the user community.  The most noticeable features include:


  • German language localization
  • Extended support for BigIP® F5® devices (including connections, throughput, nodes and virtual pools status polling)
  • Hardware health monitoring (temperature, power supply, fan speed, etc.) for major HW vendors (Cisco, Juniper, F5, Dell, HP)
  • Support for HP® MSM 760/765 wireless controllers
  • Auditing events for user actions such as deleting or un-managing nodes
  • Drag-and-Discover interactive charts
  • Universal Device Poller (UnDP) Improvements
    • Multiple UnDPs in a single chart
    • UnDP Parse Transform function
    • UnDP Polling and Retention settings
  • Web based custom property editor


If you want to check out all of the latest features of NPM network monitor v10.4, download a free fully functional 30-day trial and see just how easy network performance monitoring can be.

Ryan Adzima is an IT manager in the field of higher education. In his role as IT manager, Ryan oversees the day-to-day operations of a large, dynamic network and works on many network and server projects. With his extensive background and experience as a network engineer, he takes a very hands-on approach to IT management.


On any given day, the school’s network handles 3000+ devices, most of which are wireless. The very nature of higher education networks means IT organizations must have a solid “bring your own device” (BYOD) plan in place. With that said, Ryan stresses the importance of finding the right balance between providing secure, yet easily accessible network access for students and faculty.


Even with everything on Ryan’s plate, he still manages to maintain two blog sites. The first site is A Boring Look | My Life in Text (unlike its name, the site is definitely not boring). Ryan describes it as a brain dump of the many different things he’s learned throughout his career, as well as a venue to share his take on new products and technologies. His second blog site, Techvangelist, offers a more technical, unbiased review of many different products and technologies, including virtualization, routing & switching, security, servers, and wireless. There are also some great How-To’s!


Ryan holds many certifications, including his CCNA, various Microsoft certs, and he's a member of Mensa. Pretty impressive! He’s also working on getting the CWNP certifications and CCIE. Plus, he’s a SolarWinds Thwack Ambassador who is very well-versed in many SolarWinds products, including NPM, NCM, UDT, NetFlow, STM, and Virtualization Manager.


He uses quite a few SolarWinds products and says his favorite product out of the mix “has got to be UDT. It’s simple and easy to use but saves me a TON of time when I need to track down that printer that the helpdesk forgot to get a config page from before provisioning or when a client loses a device, or we’re tracking a repeat offender of a rogue wireless AP/server/etc. It also gives me a log of where and when people are logging in around campus. Like I said, it’s such a simple product but well worth every penny.”


Check out Ryan’s blogs. I guarantee you’ll learn something new and interesting!




Connect with Ryan:



Twitter handle: @radzima

In another post, I mentioned that one of the differences between Windows Server Update Services (WSUS) and System Center Configuration Manager (ConfigMgr) is that you have to build deployment packages in ConfigMgr before you can deploy the updates. One of the reasons for this is that ConfigMgr does not approve or disapprove updates the way WSUS does, so the Windows Update Agent doesn't know what updates to grab from the update server. In ConfigMgr environments, clients don't install updates until there is a deployment package for them.


In this post, I'd like to discuss patch management and a little-known feature of SolarWinds Patch Manager that allows you to deploy updates to ConfigMgr clients without having to build or test any deployment packages. This feature adds a level of flexibility to ConfigMgr environments by interacting directly with the update server and Windows Update Agent, essentially bypassing ConfigMgr altogether. With the right software, patch management never has to be complex.


Getting Updates from the Update Server to ConfigMgr Clients

After you have published your third-party updates to the update server, you have to get them to the ConfigMgr clients. Normally, you would do this by creating a deployment package for one or more ConfigMgr collections. In this procedure, however, we're skipping this step.


To instruct the ConfigMgr clients to grab updates from the update server without building a deployment package, use the Update Management Wizard in Patch Manager. On the second screen, in the Approval Options section, there is an option to Include only approved updates. Clear it. This allows the Windows Update agent on the selected clients to download the applicable updates even though they have never been approved on the update server. (Remember from my previous post that the ConfigMgr deployment packages take the place of standard WSUS approvals.)


After you select the updates you want to deploy and the system(s) to which you want to deploy them, Patch Manager instructs the clients to grab the updates and install them if applicable. The important note to make here is that, even though you haven't approved the updates or created a deployment package, the Windows Update Agent will still check the applicability and installed rules in the update package to determine whether or not it actually needs the update. That way, you can select a large number of updates and a large pool of clients in a single task, and each client will only download and install what they need.


For a step-by-step procedure to do this in your own environment, see How to deploy third-party updates to ConfigMgr clients without building deployment packages on the SolarWinds knowledge base and identify the ideal patch management solution.




In the first post of this series I discussed the importance of establishing recovery time objectives (RTOs) for each component in your production system. In the second post I covered the comparative demands of performing triage during outages of different magnitudes: one that impacts a set of network devices versus one that impacts systems in an entire datacenter.


In this post I want to discuss what was merely implied in the last one. When a disaster hits a datacenter, in order to meet RTOs, you might first need to switch over service from the primary to a secondary datacenter before coordinating any recoveries within the down datacenter.


Hot/Hot or Hot/Warm?


If you have a production system in which customers merely read static data when they access front-end interfaces, and do not write new data into the database, then you could potentially use a load-balancing technology (for example, F5) that routes traffic into web clusters located at two geographically distinct datacenters. In this case, you already have a highly available production system. Similarly, you could have database instances running in different datacenters as part of fully redundant service stacks. And as for feeding in new data, you may have jobs set up to update the database in each datacenter during staggered maintenance windows. In this case, we consider your overall deployment as hot/hot.


However, I’m assuming a system in which customers do create new data through their online sessions. In this case, you can have only one database at a time in an operational state, replicating changes to its warm back-up. DNS directs all traffic to the datacenter where the database is located. While a virtual IP device might sit in front of the cluster of web servers in this datacenter, to provide redundancy and balance load so that this part of the system is very fault tolerant, we still consider your deployment hot/warm.




In a hot/warm architecture, when the primary datacenter goes down, you first need to bring up your database instance in the secondary datacenter. Whatever database you use must have its own specific tools for draining queues of replicated data and opening up the stand-by database for reading and writing.


When all systems are ready in the secondary datacenter, you can edit the DNS zone file(s) so that the A records point your service domains at the appropriate web servers. As you make that change, you need to monitor the domains; during the black-out period, when requests for DNS cannot be answered, you should see a web page that informs the user that the services are being redirected. You can expect users to get the redirection notice for the duration of the time-to-live (TTL) associated with the last DNS answers; authoritative DNS servers and client browsers use cached answers until the TTL expires, and 5 minutes is a common TTL.
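As a rough back-of-the-envelope sketch (mine, not part of the original procedure), the TTL math can be expressed in a few lines of Python; the 5-minute TTL and the timestamps are illustrative assumptions:

```python
from datetime import datetime, timedelta

def stale_answers_gone_by(change_time: datetime, ttl_seconds: int) -> datetime:
    """Worst case: a resolver cached the old A record just before the zone
    change, so it keeps serving the stale answer for one full TTL."""
    return change_time + timedelta(seconds=ttl_seconds)

# With a common 5-minute TTL, a cutover published at 02:00 should be
# fully visible to clients by 02:05.
cutover = datetime(2012, 11, 30, 2, 0, 0)
print(stale_answers_gone_by(cutover, 300))  # 2012-11-30 02:05:00
```

In practice, resolvers that ignore TTLs can hold answers longer, which is why the post recommends watching the old datacenter's access logs for stragglers.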


As new traffic comes into the secondary datacenter a good DNS tool helps you verify that the DNS server is passing out the revised resolution for the front-end domains; and a user event simulation tool confirms that DNS is steering visitors accordingly. If the primary datacenter is not entirely down, you can monitor the access logs of your web servers in that site for a report on traffic that is still seeking access to your services through cached DNS answers. Concurrently, operations teams monitor their own pieces of the system—network, web and application, database and storage—for status and alerts.


When all systems are up, and all alerts are resolved, you have successfully switched traffic to the secondary datacenter. It’s now time for the operations teams to fix problems back in the primary site.



Visual Basic 101 (part 4)

Posted by Bronx Nov 14, 2012

If you've read Part 1, Part 2, and Part 3 of this series, then this is the one you've been waiting for. The code! Before you get too excited, let me clarify a few things, for the record:

  • The only way to get this calculator is to build it following the steps in this series. It will not be available for download nor will SolarWinds offer support for this tool. The code for this tool is made available for educational purposes only.
  • When built, the calculator will only provide recommendations based on a small amount of testing. Your environment will be different and you may want to modify the code to suit your needs. (Comments in the code indicate what to modify, if you so desire.)
  • This calculator has not been tested for accuracy and will not be supported in any way, shape, or form, by SolarWinds, or by the author of this article.
  • SolarWinds is not responsible for any errors or non-working code. Sorry, you'll have to troubleshoot yourselves. (Be sure to review all four of these posts.)
  • This is only one of an infinite number of ways to code this calculator. I'm sure there are more elegant designs. The bottom line is, it works.
  • If and when you get this to work where others cannot, please help them if you can.
  • Wireshark is a free tool that you can use to measure and filter your bandwidth traffic. It was used to gather the bandwidth averages for the four protocols in this calculator. You can use Wireshark to get your own averages and modify the corresponding figures in the code, which are commented.

Lesson 5 - Compiling the code

Below is the code you will need to add to your calculator project in Visual Basic. In your calculator project, go to the Code view and delete everything. Next, copy and paste the code below. When done, you should have something that looks like this:


If all is well, hit the Play button, highlighted above. The calculator should appear, working as planned. If everything works, compile your code into an executable.


Compiling your code into an executable:

  1. From the Build menu, select Build...
  2. If successful, your executable should reside in a path similar to this:


Copy and Paste Me

Imports System.Data.OleDb

Imports System.Drawing.Drawing2D


Public Class frmMain


Dim WMIMonitorNumbers, WMINeededBW, WMIStat, WMIConvert, WMITotal, FinalTotal, WMIConvert2 As Double

Dim SNMPMonitorNumbers, SNMPNeededBW, SNMPStat, SNMPConvert, SNMPTotal, SNMPConvert2 As Double

Dim RPCMonitorNumbers, RPCNeededBW, RPCStat, RPCConvert, RPCTotal, RPCConvert2 As Double

Dim ICMPMonitorNumbers, ICMPNeededBW, ICMPStat, ICMPConvert, ICMPTotal, Monitortotal, ICMPConvert2 As Double

Dim WMIsuffix, RPCsuffix, SNMPsuffix, ICMPsuffix, suffixtotal As String

Dim All$


Private Sub txtWMI_LostFocus(ByVal sender As Object, ByVal e As System.EventArgs) Handles txtWMI.LostFocus

        On Error Resume Next

        If IsNumeric(txtWMI.Text) = False Or Val(txtWMI.Text) > 10000 Then

            txtWMI.Text = "0"

            WMIMonitorNumbers = txtWMI.Text

            TBWMI.Value = 0


            Exit Sub

        End If


        WMIMonitorNumbers = txtWMI.Text

        txtWMI.Text = Format(WMIMonitorNumbers, "###,###")



        If txtWMI.Text <= 10000 Then

            TBWMI.Value = txtWMI.Text

        Else

            TBWMI.Value = 10000

        End If

        If WMIMonitorNumbers < 1 Then txtWMI.Text = "0" : WMICalc()

End Sub

Private Sub txtRPC_LostFocus(ByVal sender As Object, ByVal e As System.EventArgs) Handles txtRPC.LostFocus

        On Error Resume Next


        If IsNumeric(txtRPC.Text) = False Or txtRPC.Text > 10000 Then

            txtRPC.Text = "0"

            RPCMonitorNumbers = txtRPC.Text

            TBRPC.Value = 0


            Exit Sub

        End If


        RPCMonitorNumbers = txtRPC.Text

        txtRPC.Text = Format(RPCMonitorNumbers, "###,###")



        If txtRPC.Text <= 10000 Then

            TBRPC.Value = txtRPC.Text

        Else

            TBRPC.Value = 10000

        End If

        If RPCMonitorNumbers < 1 Then txtRPC.Text = "0" : RPCCalc()

End Sub

Private Sub txtSNMP_LostFocus(ByVal sender As Object, ByVal e As System.EventArgs) Handles txtSNMP.LostFocus

        On Error Resume Next


        If IsNumeric(txtSNMP.Text) = False Or txtSNMP.Text > 10000 Then

            txtSNMP.Text = "0"

            SNMPMonitorNumbers = txtSNMP.Text

            TBSNMP.Value = 0


            Exit Sub

        End If


        SNMPMonitorNumbers = txtSNMP.Text

        txtSNMP.Text = Format(SNMPMonitorNumbers, "###,###")



        If txtSNMP.Text <= 10000 Then

            TBSNMP.Value = txtSNMP.Text

        Else

            TBSNMP.Value = 10000

        End If

        If SNMPMonitorNumbers < 1 Then txtSNMP.Text = "0" : SNMPCalc()

End Sub

Private Sub txtICMP_LostFocus(ByVal sender As Object, ByVal e As System.EventArgs) Handles txtICMP.LostFocus

        On Error Resume Next


        If IsNumeric(txtICMP.Text) = False Or txtICMP.Text > 10000 Then

            txtICMP.Text = "0"

            ICMPMonitorNumbers = txtICMP.Text

            TBICMP.Value = 0


            Exit Sub

        End If


        ICMPMonitorNumbers = txtICMP.Text

        txtICMP.Text = Format(ICMPMonitorNumbers, "###,###")



        If txtICMP.Text <= 10000 Then

            TBICMP.Value = txtICMP.Text

        Else

            TBICMP.Value = 10000

        End If

        If ICMPMonitorNumbers < 1 Then txtICMP.Text = "0" : ICMPCalc()

End Sub

Private Sub txtWMI_KeyDown(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs) Handles txtWMI.KeyDown

        On Error Resume Next

        If e.KeyCode = Keys.Return Then


            If IsNumeric(txtWMI.Text) = False Or txtWMI.Text > 10000 Then

                txtWMI.Text = "0" : Exit Sub

            End If


            WMIMonitorNumbers = txtWMI.Text

            txtWMI.Text = Format(WMIMonitorNumbers, "###,###")



            If txtWMI.Text <= 10000 Then

                TBWMI.Value = txtWMI.Text

            Else

                TBWMI.Value = 10000

            End If

        End If

End Sub

    Private Sub WMICalc()

        On Error Resume Next

        WMIStat = 315 'This is the key figure. This number represents multiple tests and averages using Wireshark, filtering out data that is not pertinent. Changing this number will allow you to fine tune the amount of bandwidth used by this protocol.

        WMINeededBW = WMIStat * WMIMonitorNumbers

        WMIConvert = ((WMINeededBW / 1024) * 8) / 1024

        If WMIConvert = 0 Then lblWMI.Text = "00.00 Kbps" : DoTotal() : Exit Sub

        If WMIConvert < 1 Then


            WMIsuffix = " Kbps"

            WMIConvert2 = WMIConvert * 1024

            lblWMI.Text = Format(WMIConvert2, "###,###") & WMIsuffix

            Exit Sub

        Else

            WMIsuffix = " Mbps"

            lblWMI.Text = Format(WMIConvert, "standard") & WMIsuffix

            Exit Sub

        End If

  End Sub

Private Sub TBWMI_ValueChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles TBWMI.ValueChanged

        On Error Resume Next

        WMIMonitorNumbers = TBWMI.Value

        If txtWMI.Text <= 10000 Then

            txtWMI.Text = Format(TBWMI.Value, "###,###")

        End If

        If WMIMonitorNumbers < 1 Then txtWMI.Text = "0"


End Sub

Private Sub txtSNMP_KeyDown(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs) Handles txtSNMP.KeyDown

        On Error Resume Next

        If e.KeyCode = Keys.Return Then

            If IsNumeric(txtSNMP.Text) = False Or txtSNMP.Text > 10000 Then

                txtSNMP.Text = "0" : Exit Sub

            End If


            SNMPMonitorNumbers = txtSNMP.Text

            txtSNMP.Text = Format(SNMPMonitorNumbers, "###,###")



            If txtSNMP.Text <= 10000 Then

                TBSNMP.Value = txtSNMP.Text

            Else

                TBSNMP.Value = 10000

            End If

        End If

End Sub

Private Sub SNMPCalc()

        On Error Resume Next

        SNMPStat = 0.66 'This is the key figure. This number represents multiple tests and averages using Wireshark, filtering out data that is not pertinent. Changing this number will allow you to fine tune the amount of bandwidth used by this protocol.

        SNMPNeededBW = SNMPStat * SNMPMonitorNumbers

        SNMPConvert = ((SNMPNeededBW / 1024) * 8) / 1024

        If SNMPConvert = 0 Then lblSNMP.Text = "00.00 Kbps" : DoTotal() : Exit Sub

        If SNMPConvert < 1 Then


            SNMPsuffix = " Kbps"

            SNMPConvert2 = SNMPConvert * 1024

            If SNMPConvert2 < 1 Then SNMPConvert2 = 1

            lblSNMP.Text = Format(SNMPConvert2, "###,###") & SNMPsuffix

            Exit Sub

        Else

            SNMPsuffix = " Mbps"

            lblSNMP.Text = Format(SNMPConvert, "standard") & SNMPsuffix

            Exit Sub

        End If

End Sub

Private Sub TBSNMP_ValueChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles TBSNMP.ValueChanged

        On Error Resume Next

        SNMPMonitorNumbers = TBSNMP.Value

        If txtSNMP.Text <= 10000 Then

            txtSNMP.Text = Format(TBSNMP.Value, "###,###")

        End If

        If SNMPMonitorNumbers < 1 Then txtSNMP.Text = "0"


End Sub

Private Sub txtRPC_KeyDown(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs) Handles txtRPC.KeyDown

        On Error Resume Next

        If e.KeyCode = Keys.Return Then

            If IsNumeric(txtRPC.Text) = False Or txtRPC.Text > 10000 Then

                txtRPC.Text = "0" : Exit Sub

            End If

            RPCMonitorNumbers = txtRPC.Text

            txtRPC.Text = Format(RPCMonitorNumbers, "###,###")


            If txtRPC.Text <= 10000 Then

                TBRPC.Value = txtRPC.Text

            Else

                TBRPC.Value = 10000

            End If

        End If

End Sub

Private Sub RPCCalc()

        On Error Resume Next

        Dim Exponent As Double

        RPCStat = 2392 'This is the key figure. This number represents multiple tests and averages using Wireshark, filtering out data that is not pertinent. Changing this number will allow you to fine tune the amount of bandwidth used by this protocol.

        Exponent = 1

        RPCStat = RPCStat ^ Exponent

        RPCNeededBW = RPCStat * RPCMonitorNumbers

        RPCConvert = ((RPCNeededBW / 1024) * 8) / 1024

        If RPCConvert = 0 Then lblRPC.Text = "00.00 Kbps" : DoTotal() : Exit Sub

        If RPCConvert < 1 Then


            RPCsuffix = " Kbps"

            RPCConvert2 = RPCConvert * 1024

            lblRPC.Text = Format(RPCConvert2, "###,###") & RPCsuffix

            Exit Sub

        Else

            RPCsuffix = " Mbps"

            lblRPC.Text = Format(RPCConvert, "standard") & RPCsuffix

            Exit Sub

        End If

End Sub

Private Sub TBRPC_ValueChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles TBRPC.ValueChanged

        On Error Resume Next

        RPCMonitorNumbers = TBRPC.Value

        If txtRPC.Text <= 10000 Then

            txtRPC.Text = Format(TBRPC.Value, "###,###")

        End If

        If RPCMonitorNumbers < 1 Then txtRPC.Text = "0"


End Sub

Private Sub txtICMP_KeyDown(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs) Handles txtICMP.KeyDown

        On Error Resume Next

        If e.KeyCode = Keys.Return Then

            If IsNumeric(txtICMP.Text) = False Then

                txtICMP.Text = "0" : Exit Sub

            End If


            ICMPMonitorNumbers = txtICMP.Text

            txtICMP.Text = Format(ICMPMonitorNumbers, "###,###")



            If txtICMP.Text <= 10000 Then

                TBICMP.Value = txtICMP.Text

            Else

                TBICMP.Value = 10000

            End If

        End If

End Sub

Private Sub ICMPCalc()

        On Error Resume Next

        ICMPStat = 1.15 'This is the key figure. This number represents multiple tests and averages using Wireshark, filtering out data that is not pertinent. Changing this number will allow you to fine tune the amount of bandwidth used by this protocol.

        ICMPNeededBW = ICMPStat * ICMPMonitorNumbers

        ICMPConvert = ((ICMPNeededBW / 1024) * 8) / 1024

        If ICMPConvert = 0 Then lblICMP.Text = "00.00 Kbps" : DoTotal() : Exit Sub

        If ICMPConvert < 1 Then


            ICMPsuffix = " Kbps"

            ICMPConvert2 = ICMPConvert * 1024

            If ICMPConvert2 < 1 Then ICMPConvert2 = 1

            lblICMP.Text = Format(ICMPConvert2, "###,###") & ICMPsuffix

            Exit Sub

        Else

            ICMPsuffix = " Mbps"

            lblICMP.Text = Format(ICMPConvert, "standard") & ICMPsuffix

            Exit Sub

        End If

End Sub

Private Sub TBICMP_ValueChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles TBICMP.ValueChanged

        On Error Resume Next

        ICMPMonitorNumbers = TBICMP.Value

        If txtICMP.Text <= 10000 Then

            txtICMP.Text = Format(TBICMP.Value, "###,###")

        End If

        If ICMPMonitorNumbers < 1 Then txtICMP.Text = "0"


End Sub

Private Sub DoTotal()

        On Error Resume Next

        suffixtotal = " Mbps"

        FinalTotal = WMIConvert + SNMPConvert + RPCConvert + ICMPConvert


        If FinalTotal >= 1024 Then

            suffixtotal = " Gbps"

            FinalTotal = FinalTotal / 1024

        End If


        If FinalTotal < 1 Then

            suffixtotal = " Kbps"

            FinalTotal = FinalTotal * 1024

        End If

        Monitortotal = WMIMonitorNumbers + RPCMonitorNumbers + SNMPMonitorNumbers + ICMPMonitorNumbers

        lblMonitors.Text = Format(Monitortotal, "###,###")

        lblTotal.Text = Format(FinalTotal, "Standard") & suffixtotal

        If Monitortotal < 1 Then lblMonitors.Text = "0"

        FinalTotal = 0


End Sub

Private Sub cmdReset_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdReset.Click

        txtWMI.Text = "0"

        txtRPC.Text = "0"

        txtSNMP.Text = "0"

        txtICMP.Text = "0"

        TBWMI.Value = 0

        TBRPC.Value = 0

        TBSNMP.Value = 0

        TBICMP.Value = 0



        lblMonitors.Text = "0"


End Sub

Private Sub PieCalc()


        Dim ser1 As Windows.Forms.DataVisualization.Charting.Series

        ser1 = Chart1.Series.Add("Pie Chart")

        ser1.ChartType = DataVisualization.Charting.SeriesChartType.Pie

        ser1.Points(ser1.Points.AddY(WMIConvert)).AxisLabel = "WMI"

        ser1.Points(ser1.Points.AddY(SNMPConvert)).AxisLabel = "SNMP"

        ser1.Points(ser1.Points.AddY(RPCConvert)).AxisLabel = "RPC"

        ser1.Points(ser1.Points.AddY(ICMPConvert)).AxisLabel = "ICMP"

        If WMIConvert + SNMPConvert + RPCConvert + ICMPConvert = 0 Then PieReset()

End Sub

Private Sub PieReset()


        Dim ser1 As Windows.Forms.DataVisualization.Charting.Series

        ser1 = Chart1.Series.Add("Pie Chart")

        ser1.ChartType = DataVisualization.Charting.SeriesChartType.Pie

        ser1.Points(ser1.Points.AddY(100)).AxisLabel = "WMI"

ser1.Points(ser1.Points.AddY(100)).AxisLabel = "SNMP"

ser1.Points(ser1.Points.AddY(100)).AxisLabel = "RPC"

ser1.Points(ser1.Points.AddY(100)).AxisLabel = "ICMP"

End Sub

Private Sub frmMain_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load


End Sub

Private Sub speak()

        Dim SAPI

        Dim prefix$

        Dim length As Integer

        Dim CNumber As Double

        SAPI = CreateObject("SAPI.spvoice")

        length = Len(lblTotal.Text)

        If Mid(lblTotal.Text, length - 3, 4) = "Mbps" Then

            prefix$ = "mega bits per second"

        Else

            prefix$ = "kilobits per second"

        End If

        CNumber = Val(lblTotal.Text)

        If Val(lblTotal.Text) = 0 Then

            All$ = "I need numbers to do a calculation. Zero is not an option."

            SAPI.Speak(All$)

            Exit Sub

        Else

            All$ = "The total recommended bandwidth for your " & Monitortotal & " monitors is " & CNumber & " " & prefix$

            SAPI.Speak(All$)

            Exit Sub

        End If

End Sub

Private Sub picSpeak_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles picSpeak.Click

        'Speak the total when the speaker icon is clicked.
        speak()

End Sub

End Class
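For readers who want to sanity-check the arithmetic without building the form, here is the core conversion from the Calc subroutines re-expressed as a small Python sketch (mine, not part of the calculator); the per-monitor byte figures are the Wireshark averages quoted in the VB code above:

```python
# Per-monitor bandwidth averages in bytes, as measured with Wireshark
# (the WMIStat, SNMPStat, RPCStat, and ICMPStat figures above).
RATES = {"WMI": 315, "SNMP": 0.66, "RPC": 2392, "ICMP": 1.15}

def needed_bandwidth(protocol: str, monitors: int) -> str:
    """Mirror the VB conversion: bytes -> kilobytes -> kilobits -> megabits."""
    mbps = ((RATES[protocol] * monitors / 1024) * 8) / 1024
    if mbps < 1:
        # The VB code floors tiny SNMP/ICMP results at 1 Kbps.
        return f"{max(mbps * 1024, 1):,.0f} Kbps"
    return f"{mbps:,.2f} Mbps"

print(needed_bandwidth("WMI", 10000))  # 24.03 Mbps
```

To tune the calculator for your own environment, you would replace the values in `RATES` (or the corresponding `*Stat` constants in the VB code) with your own Wireshark averages.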

In larger organizations, it is very common for different people to be responsible for managing different blocks of subnet address space for their respective departments, divisions, or regions. SolarWinds IPAM lets you divide your IP address management tasks among different people and groups, such as functional groups, geographic regions, virtual server teams, and critical staff.


Perhaps you want to allow your desktop team visibility into the IP scopes for a particular office floor's VLANs, without exposing views into your web infrastructure networks. Beginning with version 3.0, IPAM enables the definition of user access roles on a subnet, group, or supernet basis.

Specify which users have what level of permissions (read/write) to certain address spaces (group, supernet, or subnet). It is important to note that if subnets are moved in a way that changes the hierarchy, roles are inherited from the new parent.

Any existing customized roles will not be changed or inherited.


When deciding which roles will work best in your environment, determine what the user really needs access to on a daily basis. The following IPAM user roles are available:


Administrators have read/write access, can initiate scans on all subnets, manage credentials, custom fields, and IPAM settings, and have full access to DHCP management and DNS monitoring.

Power Users can reorganize network components in the left pane of the Manage Subnets and IP Addresses view and have full access to DHCP management and DNS monitoring. This role also includes the ability to edit properties and custom fields on portions of the network made available by the site administrator.

The Operator role has read-only access to DHCP scopes, servers, and reservations, and to DNS servers, zones, and records.

These users can also add and delete IP address ranges on portions of the network made available by the site administrator, change the subnet status selection on the Manage Subnets and IP Addresses page, manage IP address property and custom fields, and edit IP address properties.


The Read Only role has read-only access to all subnets and to DHCP servers, scopes, leases, and reservations, and DNS servers, zones, and records.

The Custom role is defined on a per-subnet basis. DHCP and DNS access will depend upon the global account setting for those nodes.

In a nutshell: after selecting Custom, click Edit to define what the user can and cannot see.


Next, select the desired subnet and define which role this user will have.


Make note of the Inherited column on the far right to confirm that the correct inheritance is being applied.

The following is a good example of the differences in what a user with a custom role can and cannot see.



If you are interested in detailed steps for setting up IPAM user delegation see this post.

Below is an overview of all the role operations. The color-coded legend is as follows:


The following table details the operations available to each role.


Originally, this article was going to be called "WMI Best Practices." After doing a brief search, I stumbled upon Microsoft's version of this topic. (They are, after all, the creators of WMI.) Let's just say the material was...sparse.


I think helping troubleshoot a WMI node would be more beneficial to our community anyway, so let's go with that.


Troubleshooting a WMI Node.


The following conditions must be met before you proceed with troubleshooting WMI nodes:

  • The node has successfully been added via WMI.
  • WMI is working properly on the remote server.
  • The hardware monitoring software is installed and running on the remote server.

Using Wbemtest.exe to troubleshoot WMI:

  1. Open wbemtest.exe, usually located at C:\Windows\System32\wbem\wbemtest.exe.
  2. Connect from the problematic node (either the SAM server or the additional poller server) to the remote server using wbemtest.exe.
  3. Click Connect.
  4. In the Namespace field enter:

For IBM and HP enter: \\RemoteServerIpAddress\root
For Dell enter: \\RemoteServerIpAddress\root\cimv2


   5. Enter administrator credentials.


   6.  Click Connect.

   7.  Once connected, click Query… from the main screen. The Query dialog appears.

   8.  Enter: select * from __Namespace


Replace Namespace with the following:

  • For HP nodes, replace Namespace with HPQ
  • For Dell node replace Namespace with Dell
  • For IBM node replace Namespace with IBMSD

  9.   If the proper Namespace is found, connect to this Namespace.

  • \\RemoteServerIpAddress\root\IBMSD for IBM.
  • \\RemoteServerIpAddress\root\HPQ for HP.
  • \\RemoteServerIpAddress\root\cimv2\Dell for Dell.


  10.  Run a Query for specific information.

         Select Manufacturer, Model, SerialNumber from CIM_Chassis

  • If the test was not successful, re-install the latest release of the vendor-provided hardware agent software (hardware monitoring) on the remote server.
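The namespace strings in steps 4 and 9 follow a simple pattern. As an illustration (this helper is hypothetical, not part of the wbemtest procedure), they can be assembled programmatically:

```python
# Vendor-specific WMI namespaces from the procedure above.
VENDOR_NAMESPACES = {
    "HP": r"root\HPQ",
    "IBM": r"root\IBMSD",
    "Dell": r"root\cimv2\Dell",
}

def wbem_path(server_ip: str, vendor: str) -> str:
    """Build the string to paste into wbemtest's Namespace field."""
    return "\\\\{}\\{}".format(server_ip, VENDOR_NAMESPACES[vendor])

print(wbem_path("10.0.0.5", "HP"))  # \\10.0.0.5\root\HPQ
```

The IP address here is a placeholder; substitute the address of your remote server.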


In a previous post I discussed both the importance of IOPS in scaling a system that supports a high-volume online transaction processing (OLTP) workload, and why accurately monitoring storage arrays in real time is key to maintaining OLTP performance.


This post focuses strictly on the Network File System (NFS) as an especially efficient way to increase the IOPS in the work done between your database and storage array. Though I may seem implicitly to promote network-attached storage (NAS) over storage area networks (SANs), I don’t pretend to address all the factors that you would need to consider in deciding which is best for your system.


All enterprise databases support server clustering. IT professionals debate the relative benefits of file-based and block-based approaches to coordinating server access to disk arrays. Yet, despite challenges in properly setting up NFS on storage arrays, few would disagree that this file system is highly reliable. Being file-based, NFS is fundamentally less complex than block-based systems, all of which require a layer cake of software to coordinate writing from the server cluster to disks in the storage array.


Red Hat Enterprise Linux may be the best clustering platform in that its kernel can open files in Direct I/O mode. This direct access to disk blocks from within NFS gives the database software a simpler, and therefore more efficient, means to coordinate operations among the servers in the cluster as they all simultaneously attempt to write data resulting from application transactions.


Monitoring NFS and Other Storage Setups


SolarWinds STM, storage performance monitoring software, runs on Red Hat Enterprise Linux and can monitor storage arrays set up with NFS. Assuming you choose an NFS-configured array and your system is delivering the IOPS you need, take a look at STM as a solution for maintaining the internal service level agreement (SLA) around which your system has been architected.


However, depending on the purposes and size of your system, you may have different storage strategies for different parts. Keep in mind that SolarWinds STM is very versatile, supporting different array setups from many different vendors and rolling up monitoring data for all arrays into a single web console view.




It seems to me that any reference to honey pots requires a reference to Winnie the Pooh, so I'll get that out of the way, and we'll all have that song going through our heads all day. A network honey pot works just like the kitchen honey pot: you open one up and it attracts flies. In networking, the flies are hackers or malicious bots. Honey pots operate by appearing to be an attractive hacking target. The honey pot emulates a full production network or network device, so a hacker snoops around while the honey pot captures information about the attack and the attacker. Network security engineers then respond to the attack and change firewall rules to prevent further attacks.


Honey Pot Implementations


I have seen two implementations of honey pots: a DMZ implementation and a production server implementation. Here is an illustration of each type:








The DMZ implementation has an obvious advantage: the hacker can be tracked before they get into the production network. The server implementation also has an advantage: it detects and traps hackers that have made their way into the production network. So, which one should you use? Both. Implementing one type does not preclude implementing the other, and most companies have multiple honey pots in multiple locations. Once an intrusion has been detected, it has to be eliminated as fast as possible, and all of the firewalls should be updated to exclude that attack.


Firewall Security Management


Centralized firewall security management and device configuration management are crucial at this point. As your network grows, the number of firewalls grows. If you have to manually push new rules out to many firewalls, that can take a lot of time, and during this time the attack is probably still active. Bringing all of your firewalls into a multi-vendor management platform reduces this time, lessening the threat.



As part of our ongoing series of how-tos for SolarWinds IP Address Manager, we will do a quick walk-through of managing your DHCP servers in IPAM.


You'll learn how to do the following tasks directly from the IPAM console:

  • Add new or edit existing Microsoft DHCP servers and scopes
  • Manage DHCP scopes on your Microsoft DHCP server
  • Set, update or delete reservations, reservation status and DHCP properties, including IP ranges and exclusions


You'll also learn more about IPAM's customizable dashboard that provides at-a-glance visibility into your IP space usage, including scope utilization and proactive alerts.


SolarWinds IPAM provides powerful and centralized management of Microsoft DHCP servers, along with integrated monitoring of Cisco DHCP servers. And, with the next release of IPAM (see Beta 1 version of IPAM 3.1), you'll be able to manage Cisco DHCP servers right alongside your Microsoft DHCP servers.


View the DHCP Server Management How-To now and start harnessing the power of SolarWinds IP Address Manager!

I'm not a software developer, so I had to wait until Windows 8 RTM hit the virtual shelves two weeks ago. When I downloaded and installed the new Windows operating system on my home laptop, I had high expectations, and my experience since then has been a roller-coaster ride of disappointment and forgiveness. Unfortunately, Microsoft didn't meet my expectations for the new OS initially; but I'm happy to say it's not a total train wreck now that I've played with it for a bit.


What I Expected

I had heard quite a bit about Windows 8 and saw a few of the commercials about it before I upgraded, so I had a pretty good idea of what to expect. I knew they took away the Start menu; I knew they replaced the familiar desktop with a tablet-like interface; and I knew they added their own app store. I was hoping for a beautiful, intuitive desktop experience that would be as easy to use as my iPad, and maybe a little more customizable.


What I Got

As I said, I was initially disappointed when I first started playing with Windows 8. However, as I tinkered, the new OS redeemed itself little-by-little, but not enough so that I'd recommend it (at least not in its current state) to any of my friends. Here's a quick overview of my most lingering criticisms:

  • Some apps open in the Desktop app.
    This is something I certainly did not expect. I thought it was neat that the new Windows 8 apps opened in full screen, just like you would expect on a tablet OS. However, when I clicked on the Google Chrome tile on the Metro Start screen, my browser didn't open in full screen. Instead, another app, called "Desktop," opened, and Chrome started up as a window inside that. The same happened when I opened Word, iTunes, and even Control Panel.
    Incidentally, if you're a Chrome user you can get a full-fledged Google Chrome app for Windows 8. It's not in the Windows app store, but check out Well done, Google.
  • The true Windows 8 apps crash and lack functionality.
    The best example I can provide at this point is the Netflix app. When I first installed it, I was impressed because it provided a tablet-style app for browsing the Netflix library. It's like what I have on my iPad, but on a bigger screen - good enough for me. But after I ran it for about a day, it started crashing. Now it's totally useless, and no amount of rebooting or reinstalling has fixed it. So I went back to my browser. What did I learn? When you watch a series on Netflix in your browser, it provides an auto-play option. The Netflix app does not; you have to go back to the list of episodes if you want to watch the next one.
  • It's not as customizable as you might think.
    One of the first things you'll do after installing Windows 8 is personalize it. During the initial configuration, you're presented with a series of color palettes that you can apply to your Metro Start screen. This is nice if you don't like the default purple, but you can't take it much further. If you want a custom background with a photo or original illustration, you have to install a third-party app to get it.


But, like I said, it's growing on me.

Criticisms aside, the new OS is actually growing on me. I like several of the multi-tasking and navigation features, and surprisingly, I've even grown to like the Metro Start screen more than I ever liked the classic desktop.


Even my criticisms have started to wane. The Desktop app is actually really nice - it lends some familiarity to the new layout since it's basically the Windows 7 desktop without the shortcuts or the Start menu. I like it because it provides the Windows-style multi-tasking interface. Go figure. Furthermore, I know the apps will get better. And if I need third-party apps to make things work the way I want them to, that's not the end of the world. When all is said and done, it's nice to have something new and shiny to compete with the iPad my toddler has all but stolen.


My Recommendation

So, would I recommend this new OS to my friends? Not yet. Would I recommend it to my mom? No way. This version is still in its infancy, and I'm inclined to say it has a long way to go before it's widely adopted by the public. And it will take some users (like my mom) a long time to get used to some of the new navigation features.


I don't imagine many people reading this are jumping at the chance to upgrade their personal or work systems to Windows 8. As my coworker Bronx says, savvy users and enterprises are more likely to stick with Windows 7 (if not XP) for the foreseeable future (here's his post if you want to read more: Microsoft, have you lost your mind again?). It's a more solid OS, and it's something we've all grown to know and love. However, if you find yourself having to manage or troubleshoot Windows 8 systems anytime soon, rest assured that SolarWinds has you covered. DameWare Remote Support just added Windows 8 support, and recent changes to Microsoft WSUS allow you to patch Windows 8 and Windows Server 2012 systems with SolarWinds Patch Manager. For more information about the latter point, check out Patch Manager now patches Windows Server 2012 and Windows 8 systems.



Visual Basic 101 (part 3)

Posted by Bronx Nov 9, 2012

In part one of this series, I described installing the Visual Basic IDE. In part two, we created the foundation of the bandwidth calculator with all the necessary objects put into place. Now, in part three, we'll discuss the coding.


Lesson 4 - The Coding

The first question you may have is, "Where does the code go?" From the design screen, you can either double-click the form (or any object on the form) to be taken to the Code View, or, click the highlighted red icon above the Properties window in the Solution Explorer, as shown below on the right. All of the code for this project will fall between Public Class frmMain and End Class, also highlighted below:


What Does the Code Mean?

The Visual Basic code we will write is a set of specific instructions designed to do what we want within our program, in this case, calculate figures based on certain parameters. Below is a code snippet taken from the calculator with a detailed explanation of what each element is and does. Be aware that this is only a small bit of code. Explaining every line of the calculator in detail is simply not practical. The point of this exercise is to provide an introductory explanation, hoping you'll pursue more on your own. In the end, you will have a working bandwidth calculator for SAM's components.


Code Snippet Explained

The apostrophe (') is used in Visual Basic to denote a remark or comment. The original keyword in BASIC was REM (which still works), which is short for "Remark." Anything following either of these will not be executed when the program is run. I've added comments to each line. Comments appear as green text; VB keywords appear as blue. Note: Code is normally indented to make it easier to read.


Note: The only way to get this calculator is to build it following the steps in this series. It will not be available for download nor will SolarWinds offer support for this tool. The code for this tool is made available for educational purposes only.


Private Sub WMICalc() ' This is the beginning of a subroutine (aka: sub, a block of code named WMICalc). This subroutine ends with END SUB, at the bottom. PRIVATE means (in essence) that this code will not execute unless called upon.

On Error Resume Next ' ON ERROR is a keyword that tells the code to do something in the event of an error, in this case, RESUME NEXT (continue with the next line of code).

    WMIStat = 315 ' WMIStat is a variable we created that holds a number, which can change. In this case it is equal to 315.

    WMINeededBW = WMIStat * WMIMonitorNumbers 'This is a formula using three numeric variables to perform multiplication, as denoted by the asterisk (*).

    WMIConvert = ((WMINeededBW / 1024) * 8) / 1024 ' Another formula of division and multiplication with variables and numbers. Any formula within parentheses will execute first.


    'If...Then statement. This is a conditional meaning, if this happens, do this, if not, do that.

    If WMIConvert = 0 Then lblWMI.Text = "00.00 Kbps" : DoTotal() : Exit Sub

    ' In the line above, if a variable (WMIConvert) is equal to zero, then the code will set the label's text property to read, 00.00 Kbps. If not, the next line of code will be read.

    ' The colon (:) in the above line acts like a new line of code. If the IF statement is true, first change the text of lblWMI, then call the sub named DoTotal, then exit the sub.


    'The below If...Then statement incorporates the Else statement. So we have IF, THEN, ELSE.

       If WMIConvert < 1 Then ' If the value of the variable, WMIConvert, is less than 1, execute the code below until the ELSE statement is reached. If not, execute the code below the ELSE statement, ending on the End IF statement.

        DoTotal() ' Call the sub, DoTotal and execute the code of that sub.

        WMIsuffix = " Kbps" ' This string variable is now equal to the text within the quotes.

        WMIConvert2 = WMIConvert * 1024 ' The variable, WMIConvert2 is equal to the value of the variable, WMIConvert, multiplied by the number 1024.

        lblWMI.Text = Format(WMIConvert2, "###,###") & WMIsuffix ' The text property of the label, lblWMI, is equal to the number held in the variable WMIConvert2 AND (&) the variable WMIsuffix.

        ' The Format(WMIConvert2, "###,###") formats the value of the variable WMIConvert2 to be in proper numerical readable format (999,123, as opposed to, 999123). In this case the comma is added after every three numbers. The ampersand (&) does not do addition, rather, it concatenates, or joins the two variables (e.g. 5 & 4 would result in 54)

        Exit Sub ' Work is done in this sub, perform no more actions

    Else ' If WMIConvert is not less than 1, the code below executes instead, until End If is reached.

        DoTotal() ' Call the sub named, DoTotal, and execute the code of that sub.

        WMIsuffix = " Mbps" ' This string variable, WMISuffix, now stores the text within the quotes.

        lblWMI.Text = Format(WMIConvert, "standard") & WMIsuffix ' The text property of the label, lblWMI, is equal to the number held in the variable WMIConvert AND (&) the variable WMIsuffix. The value is formatted using 'Standard'.

        Exit Sub ' Work is done in this sub, perform no more actions

    End If ' The required statement to close a multi-line If...Then...(Else) statement.

End Sub ' This is the end of this subroutine.
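If you'd like to sanity-check the calculator's math outside the IDE, here is a rough Python translation of the subroutine above. This is an illustration only: the 315 figure and the conversion formula come straight from the snippet, but the VB Format() calls are approximated with Python format specs, and the label/total updates are replaced by a return value.

```python
def wmi_calc(wmi_monitor_numbers):
    """Rough Python equivalent of the WMICalc subroutine above."""
    WMI_STAT = 315  # per-monitor figure used by the VB code
    needed_bw = WMI_STAT * wmi_monitor_numbers
    # Same conversion formula as the VB code: ((x / 1024) * 8) / 1024
    convert = ((needed_bw / 1024) * 8) / 1024
    if convert == 0:
        return "00.00 Kbps"
    if convert < 1:
        # Mirrors Format(WMIConvert2, "###,###") & " Kbps"
        return f"{convert * 1024:,.0f} Kbps"
    # Mirrors Format(WMIConvert, "standard") & " Mbps"
    return f"{convert:,.2f} Mbps"

print(wmi_calc(10))   # 25 Kbps
print(wmi_calc(500))  # 1.20 Mbps
```

Tracing a couple of inputs like this is a quick way to confirm the If...Then...Else branches behave the way the comments describe.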

What's Next?

Next is the final installment. In part four, I will reveal all the code for this calculator, as well as how to compile it so you can use it and share it with your friends. You simply need to copy and paste the code and follow a few simple instructions.


Homework

None. You've been good thus far and I don't like homework either.

As some of you may have read recently, SolarWinds conducted a survey of 401 US-based system administrators. We asked a lot of questions about their lives inside and outside of the workplace, and, as you might expect, we specifically asked what their primary job function was. The top two answers were related to tasks involving installing, operating, maintaining, and supporting computer systems. You can review the full survey results here, and discuss the results in our forum.


Performing the various tasks involved with installing, operating, maintaining, and supporting computer systems requires the use of a disparate set of tools. For example, installing operating systems typically requires physical access to machines, but sometimes is done from server-based toolsets that are accessed remotely. Operating computers may seem very simple (you just sit at the keyboard), but sometimes the systems being operated are across the room, downstairs in the basement, or maybe even across town, so sitting at the keyboard is not physically possible. Maintaining systems is a never-ending, ongoing task that quite often requires working on multiple systems simultaneously. Supporting systems is even more complex, not just because of the physical location of the systems, but also because, quite often, other computer users are involved in the equation.


One of the things that can help significantly simplify this process of installing, operating, maintaining, and supporting computer systems is a consolidated toolset with a consistent user interface. It also improves productivity by reducing the amount of time a system administrator spends performing "context switches" from one application to another.


DameWare Remote Support provides a comprehensive suite of IT support software that allows a system administrator to manage files, services, and logs on a system, to reboot the system, to log on remotely and take full control of a system, or to perform remote desktop sharing with a logged-on user to assist them in resolving an issue. It also provides tools for Active Directory management, including a group policy editor. Being able to perform all of these tasks from a single, streamlined interface designed for the special needs of a system administrator can significantly increase the efficiency of performing the daily tasks of the job. A fully-functional 14-day evaluation copy of DameWare Remote Support is available for you right now.




Andy McBride

Working with DNS

Posted by Andy McBride Nov 8, 2012

DNS is probably the most broadly used network service today, yet to most DNS users the service is completely transparent. The most common use for DNS is navigating to a web site. Whether you go to a site by typing the site name in a browser or by clicking on a link, the first stop is a DNS server. Computers use numerical IP addresses, not canonical names, to find other computers. People are much better at remembering names than numbers. DNS servers translate the name used for a network device, such as a web server, to an IP address for that server. From there, IP routing and a few other technologies connect you to the server. It all sounds simple, but here is an analysis of the DNS resolution for Microsoft's web site from my desk, using the SolarWinds Engineer's Toolset DNS Analyzer tool.


As you can see, there is a lot going on. Microsoft wants to make sure that people can always access their site, so they have implemented multiple name servers with multiple paths to two authoritative addresses. Compare the DNS map above with the results of an nslookup below, done from the same PC.


...and the results of a trace route from my PC.


What we see is nslookup locating the two addresses for Microsoft and the traceroute finding a path through the Internet to Microsoft. Microsoft chooses to configure their web server to not respond to ping, so that is where it times out. This is not an unusual practice.


DNS has many other functions, including reverse DNS, where an IP address is translated to a name. As a former network engineer, I prefer to use IP addresses rather than DNS names where possible. The problem with continuing that practice is the rapid adoption of IPv6 addresses. Google's IPv6 DNS address is 2001:4860:4860::8888, and that is an abbreviated address!
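To see just how much typing that double-colon shorthand saves, Python's standard ipaddress module can expand an abbreviated IPv6 address. As a bonus, it can also show the specially formed pointer name that reverse DNS queries use:

```python
import ipaddress

# Expand the abbreviated IPv6 address to its full form
addr = ipaddress.ip_address("2001:4860:4860::8888")
print(addr.exploded)
# 2001:4860:4860:0000:0000:0000:0000:8888

# Reverse DNS translates an address back to a name by querying a
# pointer (PTR) record; this is the name that gets looked up:
print(ipaddress.ip_address("8.8.8.8").reverse_pointer)
# 8.8.8.8.in-addr.arpa
```

Try remembering the exploded form after a long shift, and the appeal of DNS names becomes obvious.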


The bottom line is the more you know about DNS, the more you understand how critical it is. Check out our other tools for DNS, including many free ones, at DNS Stuff.



In a previous post I discussed the idea that making your production system optimally available requires knowing the recovery time objective (RTO) of each component. Triage of any operational issue then becomes an exercise in meeting the RTO for each impacted component and for the system overall.


In this post I cover the recovery challenges for outages of different scale. I’m assuming that a significant part of your business is web-based; meaning that company employees, consultants, contractors as well as customers for the company’s products and services all access networked resources through web interfaces.


Multiple Network Devices Impacted


Let’s take the typical example of a bad configuration file pushed to multiple network devices. The misconfigurations potentially impact any computer on the network whose packets go through those devices.


A combination of SNMP-polled status and forwarded syslog alerts show up in your monitoring console. If users can report their specific problems, then you get clues for troubleshooting the larger issue and an evolving idea of its impact. For example, users unable to email but who can call you on their IP phone steer you toward a review of gateway devices for the SMTP server. If doing so is possible, correlating alerts with user complaints helps in creating a triage queue.


Assuming your monitoring system generates and escalates alerts, and your network configuration management system includes a trustworthy repository of configs and a history of config changes, then you are well on your way to solving a misconfiguration issue within a team of three during the course of an hour.


An Entire Datacenter Impacted


If the datacenter that hosts your equipment goes down, then the challenges in meeting your RTOs increase dramatically. You need more than just coordination within your network operations team. System administrators, network engineers, database administrators, and storage engineers are all scrambling simultaneously.


Ideally, your operations center serves as the point of integration for all concurrent troubleshooting, so that the layers of the production platform—network devices, database and storage, application servers, web servers—are brought up in the right order. Coordinating triage on this scale requires an operations console that rolls up status and alerts from all tools at once.


Of course, prior to solving problems in the down datacenter, you have already switched production services to systems in your back-up datacenter, right? In a follow-up post I’ll discuss some aspects of that switchover process.



So you're using SolarWinds Storage Manager (STM), our storage monitoring tool, and you get an OutOfMemory or PermGen memory error message. What do you do? The solution can be as simple as stopping the Storage Manager service, changing or adding an argument in an .ini or .sh file, and restarting the service. Here is what to do if the error occurs.


Stop the agent or server service:

  • Windows: Stop the STM Agent Service.
    Stop the Collector service if you are dealing with the STM server.
  • Linux: To stop the agent or server, use command
    /etc/init.d/storage_manager_agent stop.
    Stop the storage_manager_server if you are on the STM server.


Add or Change an argument:


For OutOfMemory error messages:

  • Increase the -Xmx value to the desired memory value.
    Example: -Xmx1024M

For PermGen error messages:

  • Add -XX:MaxPermSize=256M to the beginning of the EXT_ARGS list.
    Example: EXT_ARGS=-XX:MaxPermSize=256M -Xrs -Xms67108864 -Xmx

Finding the files you need to edit:

  • On Windows:
    STM Agent: <Storage Manager install directory>\SolarWinds.Storage.Agent.ini
    Collector service: <Storage Manager install directory>\webapps\ROOT\bin\SolarWinds.Storage.Collector.ini
  • On Linux:
    STM Agent: <Storage Manager install directory>/Storage_Manager_Agent/bin/
    Collector service: <Storage Manager install directory>/Storage_Manager_Server/bin/

Start the agent or server service:

  • Windows: Start the STM Agent Service.
    Start the Collector service if dealing with the STM server.
  • Linux: To start the agent or server, use command
    /etc/init.d/storage_manager_agent start
    Start the storage_manager_server if you are on the STM server.

And the memory problem should be resolved in STM, your storage performance monitoring software. Let me know if this solution helps (or not).
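If you maintain a number of agents, the same flag edit can be scripted rather than made by hand in each file. Here is a minimal Python sketch of the substitution; the helper name is my own invention, not part of Storage Manager, and it only transforms an EXT_ARGS-style string:

```python
import re

def bump_heap(ext_args, new_max="1024M"):
    """Replace (or append) the JVM max-heap flag, -Xmx, in an EXT_ARGS string."""
    if re.search(r"-Xmx\S*", ext_args):
        # Replace an existing -Xmx setting with the new value
        return re.sub(r"-Xmx\S*", "-Xmx" + new_max, ext_args)
    # No -Xmx present; append one
    return ext_args + " -Xmx" + new_max

print(bump_heap("-Xrs -Xms67108864 -Xmx512M"))
# -Xrs -Xms67108864 -Xmx1024M
```

You would still stop the service first, rewrite the file with the transformed string, and restart, exactly as in the steps above.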

This blog is reworded from the KnowledgeBase article, Resolving OutofMemory and PermGen Memory Issues.



The Microsoft Management Console (MMC) is built into Windows to display and group similar tasks in a single view. It comes in a variety of pre-built configurations, or you can create your own. If you find yourself using several Windows management tools to get your job done, creating a custom MMC can make your life a lot easier – putting everything you need right at your fingertips instead of sprawled across one or more desktops.


MMC Basics – Snap-ins and Computer Management

The MMC is basically a container for one or more MMC snap-ins. A snap-in is an administrative view or workspace in Windows that allows you to do a certain task. Think of Event Viewer or the Services pane – these, and similar workspaces, are snap-ins, and you can put as many of them together as you want in a single MMC.


The Services pane is one of the many MMC snap-ins you can add to a custom console to make your life easier.


A great example of what a custom MMC could look like is the built-in Computer Management console in Administrative Tools. This console contains the two snap-ins I just mentioned, along with several others. Use this console to start and stop services, view the Windows Event Log, and manage shares, local users and groups, and hardware devices. You can even use the Computer Management console as a starting place for a custom MMC – just open it in author mode, and then add or remove snap-ins according to what you need to get done.


Tip: To open a console in author mode, open a Command Prompt, and then enter the console's file name, followed by /a. For my Computer Management example: compmgmt.msc /a


An MMC for Every Role

Whether you manage users and groups, address allocation, or resource name mapping, there's a snap-in for you. The following is a list of some of the MMC snap-ins commonly used by SysAdmins:

  • Active Directory Users and Computers (ADUC)
  • DHCP
  • DNS Manager
  • Group Policy Object Editor
  • Remote Desktops


Note: Some of these options won't be available on all systems, depending on the roles they're configured to support. For example, you won't be able to add the ADUC snap-in to the MMC if you're running Windows 7 until you install Microsoft's Remote Server Administration Tools (RSAT).


To add one or more of these snap-ins to a custom console, open the console in author mode, and then select the snap-in from the list provided in the Add/Remove Snap-in dialog (in the File menu). You can even assign a custom parent node if you want to modify the hierarchy of how the console displays your snap-ins. To do that, click Advanced when you're adding snap-ins, and then select the option to allow changing the parent snap-in.


After you create or edit your custom console, be sure to save it for future use. You can either save the console as-is (File > Save), or save the console under a new name (File > Save As).


Connecting to Remote Computers

By default, the MMC connects to the local computer you use to launch it. However, if you want to manage services on another computer, for example, you'll have to specify that computer in the MMC first. To connect to a remote computer, click the Actions menu, and then select Connect to another computer. You can even do this at the command line when you open the console using the argument, /computer=computerName, where computerName is the name of the remote computer.


If you want to connect to several remote systems in a single console view, you'll probably want to check out a third-party remote administration tool. This might also be a good option for you if you don't have the time or technical confidence to create all your custom consoles yourself. DameWare Remote Support, for example, offers most of what you would add to a custom MMC without all the work it would take to make it. View all of your tools in a tabbed interface, and connect to multiple systems without leaving the window. It even includes DameWare Mini Remote Control so you can quickly connect to remote desktops to troubleshoot or administer them in a more hands-on fashion.


So, if you find yourself jumping in and out of several different consoles or remote computers in a given day, try creating a custom console to make your job (and life) a bit easier. Using the tips and tools in this post, you should be up and running in no time, and then you'll have more time to do other things, like reading more posts on Geek Speak.




You're using Storage Manager for your storage monitoring and you get an error message in your Web console or log files:

     java.sql.SQLException: Table '<table name>' is marked as crashed and last automatic(?) repair failed

What does it mean, and how do you fix it?
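Before fixing anything, it helps to know which tables are affected. If the error is buried in large log files, a short script can pull out every distinct table name; this is a hypothetical helper of my own, assuming only the message format shown above (the sample table name is made up):

```python
import re

# Matches the table name inside the crashed-table error message
CRASHED = re.compile(r"Table '([^']+)' is marked as crashed")

def crashed_tables(log_lines):
    """Return the distinct table names mentioned in crashed-table errors."""
    found = set()
    for line in log_lines:
        match = CRASHED.search(line)
        if match:
            found.add(match.group(1))
    return sorted(found)

log = [
    "java.sql.SQLException: Table 'storage.stats' is marked as crashed and last automatic(?) repair failed",
    "INFO: nightly rollup complete",
]
print(crashed_tables(log))  # ['storage.stats']
```

With the list in hand, you can run the repair steps below against just the tables that need it.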

Two possibilities for finding and fixing MySQL table crashes


MyISAM storage engine

The MySQL database uses MyISAM as its default storage engine, and MyISAM tables are easily corrupted.


But take heart, you can use the MyISAMCHK command to resolve crashed tables. The article, How to Run a myisamchk to Resolve Crashed mysql Tables, provides detailed instructions on using the MyISAMCHK commands for Windows and Linux to resolve crashed MySQL tables in your Storage Manager powered by Profiler product.

More MyISAMCHK options

MyISAMCHK provides other useful options, letting you:

  • Identify all corrupted tables
  • Repair corrupted tables
  • Perform check and repair together for the entire MySQL database
  • Allocate additional memory for a large MySQL database
  • Get information about tables


For more information on these and other MyISAMCHK options, see How to Repair Corrupted MySQL Tables Using MyISAMCHK.


Anti-virus, intrusion detection, or backup software is blocking MySQL

A common cause for crashed tables in MySQL is antivirus, intrusion detection, or backup software. This can happen when these programs lock files in the MySQL database while Storage Manager is trying to use them.


To prevent conflicts, add exceptions to these tools so they do not access the <STM Server Install Directory>\mysql folder and sub folders. Storage performance monitoring simplified!


Share your MySQL table crashes

I'd like to hear about your experience with MySQL table crashes, and how you fixed them with your storage performance monitoring software.




In a previous post, we started a series looking into ways to make your network faster, even greener, with the further possible upside of being cheaper to maintain. Last time, we looked at making your network greener with Cisco's EnergyWise technology. This time, let's look at Unified Computing System (UCS) servers, another Cisco technology that can make it faster and cheaper to run and maintain your network.


Network Optimization with Cisco UCS

Cisco's Unified Computing System provides a scalable framework for building out both the virtualized and the non-virtualized components of your network. As Cisco states:

Key [UCS] differentiators include:

    • Programmable infrastructure via service profiles that abstract the personality, configuration, and connectivity of server and I/O resources
    • Unified, model-based management applies personality and configures server and I/O resources to automate the administration process.
    • Cisco Port Extender technology simplifies the system by condensing three network layers into one, eliminating blade chassis and hypervisor-based switches and bringing all traffic to a single point where it is efficiently and consistently managed.


Make Your Network Faster and Cheaper with Cisco UCS

As you may be in the process of moving more and more of your network to increasingly virtualized configurations, the top-to-bottom integration of managed services, data storage, and network resources provided by UCS can ease your transition from more traditional hardware to bare-metal and more fully virtualized networking environments. You already know that virtualization can save you money. UCS gives you the opportunity both to reduce your hardware footprint, and its associated costs, with virtualization and to increase network efficiency and improve network management with a more tightly integrated architecture.


Managing UCS with SolarWinds

SolarWinds Network Performance Monitor (NPM) is an effective network monitor that offers predefined reports and web console views and resources specifically tailored to display performance data provided by Cisco UCS manager devices. For more technical information about SolarWinds NPM, see the SolarWinds Orion NPM Administrator Guide.


Portions of this document are excerpted from "Unified Computing Technology - Cisco Systems" at

Since Hurricane Sandy hit my home state of New Jersey, the state is now allowing displaced voters to vote via email. What are New Jersey officials thinking? Discovery News suggests that New Jersey's Email Vote May be a Disaster. I appreciate wanting to help people vote, I really do, but email is not exactly secure. Not to mention that many voters throughout the state still don't even have electricity.


Security Not a New Issue


In response to the November 2000 U.S. presidential election’s “hanging chad” issue, almost all states have adopted some form of electronic voting. Of these systems, Direct Recording Electronic (DRE) voting machines have become the most commonly used. Standards for voting machine software and their networks (in those states that require vote totals to be transmitted to a central office for verification and tallying), however, are inconsistent across the country and are often dictated by the voting machine software makers, rather than the states.

Developing Standards for the Future


Although New Jersey may be going about the 2012 election in a rather insecure way, they may also be blazing a path into the future of voting. At some point, I can see most, if not all, U.S. states making voting available through secure networks. Or maybe we’ll be voting through a single secure network, maybe something similar to what we use to file taxes online. Sophisticated network management, monitoring, and security applications, like the SolarWinds Orion suite of products and Security Information and Event Management software already exist and are in use at various local, state, and federal government institutions.


So surely, we should soon be able to take part in online voting that includes:

  • Verifying voter identity
  • Correctly counting and recording votes
  • Securely sending unadulterated results from precincts to the counties and on to the states



I have been working with our Web Help Desk team for the past couple of months, so I have learned a lot about Service Management. The first thing that strikes me is how much easier it is to implement and use compared to the NOC builds I used to do in the '90s. The typical ticketing system back then was a client/server application running on a proprietary database. Not only were these systems a nightmare to install, we would also have to train a DBA and the help desk team before anyone could use it. The installation, configuration, and testing was about a two-week process, performed by me, the NOC consultant.


Get Service Management Running Quickly!

What was my experience this time around? A lot of things have changed for the better. Here is the list:

  • Installation under 20 minutes.
  • Choice of industry standard databases.
  • Configuration and testing in one day.
  • A great deal of flexibility without feeling like I was lost in a spider web.

That's when it hit me! It really felt like a spider web trying to get those old systems humming. And it wasn't the nice, concentric type of web; it was more like the web of a spider on crack in some CIA experiment.


Join Us and See for Yourself

On November 15 at 11:00 AM CST, Manish Chacko and I will be hosting a live webinar featuring Web Help Desk Solutions to Difficult Service Management Issues. This will be a one-hour deep dive into the flexibility and ease-of-use built into Web Help Desk. Look for an invitation for this event soon!


Threat Management

Posted by DanaeA Nov 6, 2012

Keeping your IT infrastructure safe is a full-time job and requires a collaborative effort. Threat management can be defined as addressing “the potential for a threat source to exercise (accidentally trigger or intentionally exploit) a specific vulnerability.” This threat can come from within your organization (possibly a disgruntled employee) or from an external source (a hacker, for example). Monthly malware updates and device scans aren’t enough anymore. You need the Big Guns now. You need to be alerted when a specific type of event happens or when you have failed logins, and you need to perform log analysis around the time of the event.

SolarWinds Log & Event Manager collects, stores, and normalizes log data from a variety of sources and displays that data in an easy-to-use desktop or web console for monitoring, searching, and active response. Data is also available for scheduled and ad hoc reporting from both the LEM Console and standalone LEM Reports console. SolarWinds LEM responds effectively with focus and speed to a wide variety of threats, attacks, and other vulnerabilities.

Deploying LEM is just the first step in your network security management process.


JK:  How did you get started blogging?

SL:  I started writing on my blog in 2005.  At the time, I was learning a lot, and I wanted a way to capture the knowledge I was gaining.  I guess you could say that my blog started as more of a knowledge base than anything else.  It wasn’t until I liveblogged VMworld 2007 that the site really took off.

JK: Do you take your topics from things you are working on at your job or do you take comments and questions from readers as topics?

SL:  Most of my topics come from whatever I am working on; however, from time to time, someone will email me with a question or a comment, and that might turn into a post.  Sometimes the question is about a problem with which the reader is struggling, and sometimes the question is about one of the books that I’ve written or a presentation that I’ve given.  Pulling topics from whatever I’m working on professionally is pretty common among the other bloggers that I know.

JK: Virtualization technology is fairly mature now.  Are there any virtualization concepts that are still not widely understood?

SL:  I’d say the one thing I see a lot is that customers try to do things the same way in the virtual world as in the physical environment, and that is often not the best approach.  Because VMware and other leading virtualization players out there make it so easy and so seamless to run workloads in virtualized environments, administrators don’t take the time to optimize for a virtualized environment.  This is especially true for business critical workloads and, above all, for virtual desktop environments, where people just re-create what they are doing for physical desktops, and they don’t truly optimize for the virtual environment.

I think because vSphere and other virtualization solutions do such a great job of making everything seem the same as it was, people don’t even realize they could be doing more.  People port the application over, it runs, and they don’t understand that they could optimize it and make it run even better than it was running in the physical environment.

JK: Are there still challenges with or objections to moving mission critical applications to virtualized environments?

SL:  If I had to define only a single issue, it’s that organizations don’t realize that their virtualization platforms are capable of supporting mission critical applications because they are just looking at recreating what existed in the physical world.  I think it was Albert Einstein who said, “You cannot fix problems using the same thinking that you used to create them.”  The same applies to virtual environments – you can’t use the same thinking in running a mission critical workload in the physical environment as the virtual environment.  Customers will attempt to run mission critical workloads, but because they did not optimize it or the performance is different, they assume there is too much overhead, etc.  All of the virtualization platforms are very robust and capable of handling mission critical workloads.  Customers just have to go about designing the environment a little bit differently than perhaps they realize.

JK: In terms of looking at performance of the applications, would you say it is a must have to look at the application performance and virtualized elements at the same time?

SL:  You need a comprehensive view of all the different layers in your datacenter.  Now we have another layer – where before we had workloads sitting on bare metal, now we have an abstraction layer.  The abstraction is beneficial in that it gives us hardware independence, workload mobility, and easier disaster recovery.  On the other hand, that abstraction also introduces an inability to see what is happening on the other side of that layer.


Consider this: a VM sees only what the hypervisor wants it to see, but you need to see what the application is doing, what the OS is doing, and also what the host (or hypervisor) is doing. 
Building on the same theme we have been discussing, the problem is that customers look at things using the same monitoring solution they used in the physical environment – one that is not virtualization aware. Because the tool is not virtualization aware, it might not gather information from all the appropriate layers, and this results in incorrect information. This incorrect information prevents people from properly assessing the performance of the application, and whether SLAs are being satisfied.  It’s only through looking at all the different layers that you can get comprehensive information, in my opinion.
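As an illustration of why that cross-layer view matters, here is a minimal sketch that correlates a guest-reported CPU figure with a hypervisor-reported "CPU ready" figure. The metric names and thresholds are hypothetical placeholders, not any vendor's actual counters:

```python
def diagnose_cpu(guest_cpu_pct, vm_ready_pct, ready_threshold=10.0):
    """Correlate guest-level and hypervisor-level CPU metrics.

    A guest that looks idle can still be starved: high CPU-ready time at
    the hypervisor layer means the VM is waiting for a physical core, which
    a guest-only monitoring tool can never show you. Thresholds here are
    illustrative only.
    """
    if vm_ready_pct >= ready_threshold:
        return "contention: VM waiting on physical CPU"
    if guest_cpu_pct >= 90.0:
        return "guest-bound: workload saturating its vCPUs"
    return "healthy"
```

A virtualization-unaware tool sees only the first argument; the diagnosis changes completely once the hypervisor-side number is included.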


As we now move into environments where a single application could comprise multiple VMs and multiple hosts, it becomes necessary to correlate performance across hosts, operating systems and applications to get a holistic view of performance for customers. And I use the term “customers” to mean the consumers of the services IT provides, whether those consumers are internal, as in a business unit, or external (the end users).


JK: I noticed you’ve been writing quite a bit about libvirt recently.  How has libvirt matured over the last year?

SL:  My experience with libvirt has only been in the past few months with my new role.  There is a tremendous amount of promise in libvirt, as with many of the open source projects. Unfortunately, many open source projects still lack some of the enterprise support mechanisms necessary for enterprises to adopt.

Without commercial support mechanisms, you see the adoption of open source projects mainly in organizations that have the ability to look at the code and fix it themselves, like MSPs and telcos.  These types of companies are already writing solutions for their customers, and they need to keep their costs down, so they leverage their expertise to support these open source projects while also satisfying the needs of their customers.


When I talk about commercial support mechanisms, think about companies like Red Hat. Red Hat has made it possible for enterprises to use an open source project like Linux, in that they can get assistance from Red Hat if there is an issue with the code.


As I said, libvirt is a very promising project in my opinion. By the way, libvirt is a Red Hat-sponsored project, but it is not commercially backed as a product today.  Open vSwitch is in a similar position, although the inclusion of Open vSwitch in several commercial products might change that situation.  We also hear the same thing about OpenStack, which is promising technology but will require commercial backing for broad adoption.


JK: I noticed you participate in user groups around infrastructure coding, can you tell me a little more about this trend?


SL:  Organizations are pressing employees to work at greater scale with fewer resources and at greater speed.  The only way to do that is through automation and orchestration.  Because companies need to do things as inexpensively as possible, we don’t see organizations going out there and paying for these very expensive, highly complex automation/orchestration solutions, which then require professional services to implement.  Instead, organizations start writing shell scripts, or start looking at open source projects to help automate some of their tasks.  As organizations continue along this path, I see administrators needing to embrace automation and orchestration as a core part of their job or they won’t be able to scale effectively.


For that reason, I have been advising users in these user groups to take a look at Puppet, Chef, and others, and to look at the ideas and culture of the DevOps space.  Anywhere an organization can apply orchestration and automation, it will reap the benefits of responding more quickly and having more consistent configurations, which helps with troubleshooting and performance.  I personally am going down that route and am looking extensively at Puppet.


I don’t think necessarily that administrators need to be programmers or programmers need to be administrators, but administrators need to have some sort of idea about creating configuration files that might require some quasi-programming, like with the Puppet Domain Specific Language (DSL), which is similar to Ruby.
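The core idea behind tools like Puppet and Chef is the idempotent resource: describe a desired state, converge to it only when reality differs, and report whether a change was made. Here is a minimal sketch of that model in Python (illustrative only; it is not Puppet's actual implementation):

```python
def ensure_file(path, content):
    """Idempotently ensure `path` exists with exactly `content`.

    Returns True if a change was made, False if the system was already
    compliant -- the same converge-and-report behavior a Puppet `file`
    resource exhibits. Running it twice is safe; the second run is a no-op.
    """
    try:
        with open(path) as f:
            if f.read() == content:
                return False  # already in the desired state
    except FileNotFoundError:
        pass  # file absent: converge by creating it
    with open(path, "w") as f:
        f.write(content)
    return True
```

Because every run converges to the same state, scheduling it repeatedly eliminates configuration drift instead of accumulating side effects the way an ad-hoc shell script can.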


JK: What other IT trends should administrators pay attention to as they plan for next year?

SL:  I just gave a presentation at a user group meeting in Canada on this topic, and I listed three technologies to which users should pay attention:

1) Network virtualization or software defined networking (SDN).  This technology is about creating logical networks on top of physical networks, similar to what has been done on the server side.  VMware recently acquired Nicira for this technology, although there are other players in the market as well.
2) Open vSwitch is something I think administrators should really watch.  It is the basis of a number of network virtualization products.  Administrators should understand its role in network virtualization.
3) Automation & Orchestration – It’s important, in my opinion, for administrators to continue to try to bring greater levels of automation and configuration management into the environment.  This is important to deploy workloads more quickly, and have assurance these workloads will operate over time – eliminating configuration drift and similar operational challenges.

You can't usefully determine the necessary availability of a system until you know what level of service you expect from it. The meaning of "high availability" depends on what you need a system to do for you, how often, how quickly; and on the consequences of your expectations being unmet.


Business continuity planning is that part of operations engineering that implements safeguards within a system based on an understanding of acceptable risk.


During trading hours, for example, the New York Stock Exchange consistently requires very low transaction latency and 100% uptime. Visa requires 99.999% uptime from its core systems and network at all times. With very low acceptable risk, these systems warrant almost any effort that mitigates risk through fault tolerance: redundancy of components, real-time replication of critical data, and especially auto-failover strategies.
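To put those availability figures in concrete terms, here is a quick sketch of the downtime budget an uptime percentage implies (the 525,960 minutes-per-year constant assumes a 365.25-day year):

```python
def downtime_budget_minutes(availability_pct, minutes_per_year=525_960):
    """Minutes of downtime per year permitted by an availability percentage.

    525,960 minutes/year assumes a 365.25-day year.
    """
    return (1 - availability_pct / 100.0) * minutes_per_year

# "Five nines" (99.999%) allows only about 5.26 minutes of downtime a year,
# while 99.9% allows roughly 8.8 hours -- which is why each extra nine
# justifies so much additional fault-tolerance spending.
```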


In contrast, during a recent outage, customers of GoDaddy web hosting services were probably surprised to discover the costs of outsourcing their IT needs: they had accepted their host's marketing boasts about infrastructure in place of an actual service level agreement (SLA) with terms for reimbursement.


Defining a Recovery Time Objective (RTO)


Justifying the costs of uptime means first knowing the point at which the business the system supports becomes negatively impacted. While NYSE and Visa are immediately impacted, your business may be able to go 30 minutes to several hours before the inaccessibility of customer-facing systems results in one or more of the three biggest kinds of loss: reputation, data, and money.


If your impact analysis determines that your production system can comfortably withstand 30 minutes of downtime, then 30 minutes becomes your RTO. Everything you do by way of keeping your systems operational implicitly occurs against that RTO.


Since very few systems are capable of an entirely automatic response to an operational issue, recovery strategy most likely involves IT engineers performing triage based on alerts.


Ideally, your monitoring and alerting system gives you a nearly immediate indication of a system issue. You should be able to think of the alert about an issue as the T-Zero for meeting your RTO.
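That T-Zero framing lends itself to simple arithmetic: from the alert timestamp and your RTO you can always say how much response time remains. A minimal sketch (the 30-minute RTO is just the example figure from above):

```python
from datetime import datetime, timedelta

def minutes_until_rto_breach(alert_time, rto_minutes, now=None):
    """Minutes remaining before the RTO is breached, counted from the
    alert timestamp (the T-Zero). A negative result means the recovery
    objective has already been missed."""
    now = now or datetime.utcnow()
    deadline = alert_time + timedelta(minutes=rto_minutes)
    return (deadline - now).total_seconds() / 60.0
```

Ten minutes into an incident with a 30-minute RTO, the function reports 20 minutes left for triage and recovery, which is exactly the number an on-call engineer needs in front of them.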


Have a look at SolarWinds networking, storage, and systems monitoring products as providers of the critical T-Zero for meeting your different recovery time objectives.



Visual Basic 101 (part 2)

Posted by Bronx Nov 5, 2012

In part one of this series, I discussed installing and setting up your Visual Basic environment in preparation for creating a bandwidth calculator for use with SAM's component monitors.


In this installment, I'll explain how to create the visual elements of the calculator.


Lesson 3 - The Build

First, open Visual Basic and create a new project. (Use the pictures above as a visual aid.)

Get the Form in Shape:

  1. Stretch the Form to make it about twice as wide as it is high. (Select it and grab a corner node to do this.)
  2. With the Form still selected, go to the Properties window to the right of the Form and find the Text property.
  3. Change the default text from Form1 to Bandwidth Calculator.
  4. Also in the Properties window, change the Name property to read frmMain.

Make a Container:

  1. From the Toolbox on the left, select Groupbox and draw one on the left side of your Form. This Groupbox will house the majority of your controls. Ensure that it's wide enough to do so.
  2. With the Groupbox still selected, go to the Properties window to the right of the Form and find the Text property.
  3. Change the default text from Groupbox1 to Number of Monitors (1-10,000).

Label it:

  1. From the Toolbox on the left, select Label, then draw four labels on the Form inside the Groupbox.
  2. Change the text of these Labels to read, WMI, SNMP, RPC, and Nodes (ICMP).
  3. Arrange them in the Groupbox on the Form as illustrated above.

Get the Textboxes out:

  1. From the Toolbox on the left, select Textbox.
  2. Draw four Textboxes below the Labels you created in the previous step.
  3. Change the Text property of these four Textboxes to read nothing. (Delete the default text from the Properties window for each Textbox.)
  4. Change the name of each Textbox using the Properties window. Rename these four Textboxes to txtWMI, txtSNMP, txtRPC, and txtICMP.
  5. Arrange them under their respective labels (created in the previous steps).

Now we need Sliders:

  1. From the Toolbox on the left, select TrackBar (a slider).
  2. Place four TrackBars on the Form to the right of each Textbox you created in the previous section. (Note: These also go into the Groupbox.)
  3. Change the name of each TrackBar using the Properties window.
  4. Rename these four TrackBars  to TBWMI, TBSNMP, TBRPC, and TBICMP.
  5. Arrange them to the right of their respective Textbox while still keeping them in the Groupbox.

More labeling:

  1. From the Toolbox on the left, select Label.
  2. Draw four labels on the Form inside the Groupbox and to the right of each TrackBar.
  3. Change the Name property of these Labels to read, lblWMI, lblSNMP, lblRPC, and lblICMP.
  4. Change the Text property of these Labels to all read 00.00 Kbps.
  5. Arrange them in the Groupbox on the Form as illustrated above.
  6. Add an additional Label above these four and change the Text property to read Per Protocol.

Adding a Picturebox:

  1. From the Toolbox on the left, select PictureBox.
  2. Place it and size it in the GroupBox as illustrated above.
  3. To change the image of the PictureBox:
    1. Select the Image property for it from the Properties window.
    2. From there, click the ellipsis (...) button, then select Local Resource.
    3. Click Import, then select an image and click OK.
    4. Change the Name property of the Picturebox to picSpeak.

Add the Button:

  1. From the Toolbox on the left, select Button
  2. Place it on the Form and in the Groupbox next to the PictureBox.
  3. From the Properties window, change the Name property of the button to cmdReset.
  4. Change the Text property of the button to read Reset.

Still More Labeling:

  1. From the Toolbox on the left, select Label.
  2. Change the Text property of the Label to read Share of Bandwidth.
  3. Place it outside of the Groupbox and to the top-right of the Form, as illustrated above.

The Pie Chart:

  1. From the Toolbox on the left, under the collapsed heading, Data, select Chart.
  2. Draw a Chart on the right side of the Form and outside of the Groupbox, as illustrated above.
    1. From the Properties window, change the Series property by clicking the ellipsis (...) button.

    2. Select ChartType and then select Pie from the dropdown menu.
    3. Click OK.

More Groupboxes:

  1. Add two more Groupboxes to the Form.
  2. Change the Text property of the first to read Total Number of Monitors.
  3. Change the Text property of the second to read Recommended Bandwidth.
  4. Arrange them on the Form as shown above.
  5. Inside the Total Number of Monitors Groupbox, add a Label.
    1. Rename the Label in the Property window to lblMonitors.
    2. Change the Text property to read 0.
  6. Inside the Recommended Bandwidth Groupbox, add a Label.
    1. Rename it in the Property window to lblTotal.
    2. Change the Text property to read 00.00Kbps.



If you've done everything successfully, you should have a Form that looks something like this:


This probably looks a bit different from what you have, which leads to today's homework assignment.



Today, I've given you the minimum you need to design the calculator. That is, all the objects needed for the calculator to work are now on the Form. By now, I'm sure you've noticed there are many more properties that you can tweak, like the color and size of the fonts. Your assignment is to play with these properties and try and get the calculator to look as close in appearance to the one at the top of this page.


In the next installment, we'll begin coding.....oooooohhhhh.

We are pleased to release a series of new How-To resources for SolarWinds IP Address Manager (IPAM). These How-To guides are short, simple-to-understand documents that walk you through using some key features of IPAM, to help you better understand the product and get the most out of its functionality.


The first How-To document will cover Automatic Subnet Scanning in SolarWinds IPAM. You’ll learn how to configure and manage the automatic subnet scan settings, including how to automate IP address scanning for all your subnets, a group of subnets, or for an individual subnet. You'll also learn about IPAM's neighbor scanning feature so you can get IP status even if ICMP and SNMP are disabled. Last, but not least, you'll learn how to configure "transient" status settings to help avoid IP address conflicts.
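To make "automatic subnet scanning" concrete, here is a minimal sketch of the address enumeration such a scan performs, using only Python's standard library. This is illustrative of the idea only; IPAM's scanning engine does far more (ICMP, SNMP, and neighbor scanning, scheduling, conflict detection):

```python
import ipaddress

def hosts_to_scan(cidr):
    """Enumerate the usable host addresses in a subnet -- the address set
    an automated subnet scan would probe. The network and broadcast
    addresses are excluded automatically by `hosts()`."""
    net = ipaddress.ip_network(cidr, strict=False)
    return [str(host) for host in net.hosts()]
```

For example, a /30 yields exactly two probe-able addresses, which is why point-to-point links are carved that small.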


Learn more about Automatic Subnet Scanning and start unlocking the power of SolarWinds IP Address Manager!


View the Automatic Subnet Scanning How-To now.




Now that I’ve had a chance to consider Disney’s purchase of Lucasfilm, I’m excited about what this might do for my beloved franchise.  Disney is a franchise factory, but they’re good at keeping the machinery out of the way.  They strive for great stories with memorable characters.  Disney may well assimilate Lucasfilm, move the overhead out of the way, and let Star Wars run free.  Besides, they can’t mess it up more than it already is. So how, then, can they deliver more Star Wars but without more Jar-Jar?

If you have a post-Ewok mindset, you think Ewoks are adorable, the prequels weren’t that terrible, and your first baby toy probably had a Disney tag on it. For you this is a non-event.  If you’re pre-Ewok, you properly believe Ewoks are dumb, that there are Three True Films, and view Episode VI as the beginning of the end.  For us it’s easy to fear the Star Wars Toy Event Horizon.

I had the lunchbox, action figures and the knock-off Force Beam lightsabre. Yes, the fancy modern ones are cool but something is lost when you don’t make your own buuuuwaAAVVVOOSH effects.  But what I really loved was the great story told in an epic swashbuckling style, not an epic technical achievement told in a corporate style.  Nobody likes corporate style, least of all Geeks. 

I’m not saying I don’t like Lucas’s stories; I’m saying he’s not a storyteller.  Empire is great because it’s George’s vision, but told by Kershner, Brackett and Kasdan.  Imagine if Disney had some oversight on Episodes I-III.  Pirates of the Caribbean is a dumb franchise, but it’s a dumb, fun franchise.  In 1977 the actors said they were making a dumb, fun movie.  Then the money rushed in and created enormous gravity from which nothing, not even fun, could escape.  Disney manages to extend their franchises with a nod to what’s already in our brains, but without tweaks.  Han would still shoot Greedo, even on the Blu-ray.

Can they continue the swashbuckling spirit of the original trilogy?  I think so.  An adventure about twin kids with a dead mother, raised as orphans ultimately to confront their terrible father?  Classic Disney.  Midi-chlorians and Jar-Jar would not have made it past the screenwriters.  $4B may be steep for a new Princess, but will seriously boost their cred at CONs.

There are a few things that would be fun:


  • ½ of the shelf space at the Disney store being taken over by Star Wars.
  • Disney Death Match between R2D2 and WALL*E.
  • A new female Sith princess (misunderstood, of course)
  • Mickey meets Darth Maul, with spinning double-lightsabre attack and missing ears
  • Star Wars Muppets Holiday Special
  • Eventual reboot of Episodes I-III
  • Star Wars Episodes VII, VIII and IX


There are four reasons I’m optimistic.  First, it can’t possibly be any worse.  Second, there are many untold stories, already known to fans, to draw from.  Third, the creative team at Disney is full of Star Wars nerds.  Finally, Disney is playing the long game and thinking about what they’ll release on the Star Wars 50th anniversary.  And that requires listening to fans.


I do wonder if Lucas is OK. He’s pretty young to walk away just now. His vision brought me great memories, and soon I’ll introduce my younglings to Luke and the gang.  Hopefully he’s just tuckered out, ready to relax in his writer’s cabin, and crank out ideas on yellow notepads.


What do you think? Leave your comments below, and take the survey on thwack:

Since Windows offers a built-in RDP client, why would anyone want to use anything else for desktop sharing and remote support? Windows Remote Desktop Connection (RDC) is easy to use, reliable, free, and already there. But there are several features the built-in client lacks that third-party clients provide. The following are just a few of the features not supported by the built-in RDP client.


Non-preemptive Scheduling

Windows RDC is preemptive when you're connecting to desktop operating systems, which means an RDC session will disconnect, or preempt, any current local or remote session. A non-preemptive RDP client, on the other hand, prevents this. Once you are connected to a remote system with a non-preemptive client, no other sessions are allowed until the current one is finished.


Advanced Access Control

Local and domain policies can limit which users can remotely access a computer, but they can't impose limitations beyond requiring the appropriate permissions. For example, what if you wanted to provide access to an executive's system only when they're logged on and only with their permission? With a third-party RDP client you can configure settings for scenarios like this, and some even support server-generated invitations for added control.


Multi-platform Support

Windows RDC works great when all you have to do is connect to other Windows computers with RDP enabled, but what if you need to remotely control Linux or Mac systems? VNC viewers are great in this regard because they support all three platforms.


In-session Tools

With Windows RDC, once you're connected, you're connected. That's it. But with other RDP clients, you can enjoy additional in-session features like live chat (not to be confused with the old net send), screen capturing, smart sizing, and local keyboard/mouse restrictions.



The heat in the tablet battles got turned up to 11 in the past couple of weeks. With Apple announcing the iPad mini, the new Amazon Kindle Fire, and all the other competitors adding features and slashing prices, the consumer market is a good place to be. I find it interesting how quickly IT departments have adapted to the explosion of Bring Your Own Device (BYOD) connectivity demands. My tablet has VPN software, so I can access my development VMs from anywhere I get 3G or Wi-Fi. This raises the IT question, "What are all these BYOD users doing and how do we manage them?"

Managing a BYOD World

One thing that makes managing BYODs fairly easy is the fact that they almost all use Wi-Fi to connect to the company network. IT departments can use the tools they probably already have to manage BYODs just like they manage most endpoints. Let's start from the connectivity with wireless network management. Wireless access points grant access to BYODs just like any other wireless device. This means that the BYODs are subject to the same access and security restrictions. From there, the BYOD traffic usually accesses the Internet using the company IT infrastructure. So you manage whatever the BYODs are doing just as you would for any other device. BYOD traffic crossing NetFlow export interfaces makes those devices NetFlow endpoints, traceable by NetFlow Traffic Analyzer.
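To make that last point concrete, flow records can be rolled up per endpoint regardless of whether the device is company-issued or BYOD. A toy sketch (the flow dictionaries are a simplified stand-in for real NetFlow fields, which carry far more detail):

```python
from collections import defaultdict

def bytes_by_endpoint(flows):
    """Aggregate NetFlow-style records per source endpoint.

    A personal tablet on the corporate Wi-Fi shows up here exactly like a
    company laptop: it is just another source address in the flow export.
    Each flow is a dict with 'src' and 'bytes' keys (simplified)."""
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["src"]] += flow["bytes"]
    return dict(totals)
```

This is the basic rollup a flow analyzer performs before layering on application, conversation, and time-series views.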


A Word for the BYOD Users

BYOD does not equal bring your own rules. I have overheard a couple of BYOD users who were startled to find out that their device was denied access to a blacklisted site. They seemed to believe that because it was their own device, it was none of the company's business what they accessed. I'm not judging here, just reminding you that the rules apply to BYODs just like they do for your company laptop. That is not my opinion; that is just how the network handles endpoint traffic.



Another way of asking that question: What is Thwack?


An obvious answer: Thwack is the online IT community where you get information on and share your experience with the SolarWinds products you value. Of course, if you’ve been visiting Thwack for a while, you have your own specific answer; and if I don’t touch on yours in this post, I look forward to reading your comments.


You go to the NPM forum, as a user of that product, to get help from others who work in their company’s network operations center (which might be just you in your cube and them in theirs), fielding the alerts NPM sends about network devices on a specific LAN. In the SAM forum other Sys Admins answer your questions and share their experience with templates they are using to keep track of issues with key applications across their respective businesses. If you manage network devices, and NCM fails to back up a config for one of your devices, your forum’s MVPs confirm your problem or offer you a solution they found. In all cases, you’ll often hear from SolarWinds engineers, product managers, information developers and others with a stake in you getting the best information as quickly as possible.


With an eye on the future, as part of revealing the kernel of why we’re on Thwack, I want to cover a little history.


The Future is Always Getting Ready


William Gibson famously observed that, when it comes to technology, the future is already here but it’s unevenly distributed.


As Facebook surpasses a billion users worldwide, let’s consider why, especially since we all complain about Facebook not offering a very satisfying social experience.


As someone who worked there at the time, I remember that in 2004 Yahoo! had the most registered users internationally of all internet sites. While Yahoo! was serving up a portal experience, Zuckerberg had just launched Facebook and had staff working out of a small space in Palo Alto. Just 8 years later Facebook is synonymous with social networking and Yahoo! seems on the verge of being sold off as piecemeal services that someone might figure out how to grow as specific online communities. Even with the enormous leverage of users already participating daily within its dotcom footprint, and through a mutually reinforcing agglomeration of services, Yahoo! failed to remain a leader in the era of the social web.


Why is the social web so much more valuable than a place where you simply go to use a service or retrieve information?


While Yahoo! still offers an evolving set of services and experiences in which users rapidly publish and exchange an ongoing and cumulative stream of information—text, audio files, photographs, videos, links to other content—both asynchronously and in real-time, what it does not offer, or began to offer only belatedly, are exchanges within a community in which information comes with value already attached by the community itself.


Simply put, successful social web sites are those that use reputation and ratings systems to control access and value related to both interaction and information exchanged through interaction.


For example, think of the different editorial value implied in getting the same news story through a traditional outlet versus a crowdsourced one. Crowdsourcing, in a word, makes the creation of value within the community accessible to everyone in the community.


If Facebook declines as the leader of the social web it will be because it fails to safeguard the integrity of the social experiences it makes available. My colleague today told me that she doesn’t use Facebook anymore because her parents are on there. Users who stop trusting the interactions and information that the community offers them will migrate to another where the value is higher.


Online, Trust is Almost Everything


Like SolarWinds products themselves, Thwack is only as valuable as the trust you create while here; and that trust depends on how you and others interact and especially on what information you share and how you share it.


If you think of ubiquitous computing as an evolving inevitability, then you are already attuned to technology-mediated relationships becoming increasingly pervasive, so that the computing infrastructure itself tends to disappear into the environment much as urban architecture does.


As IT professionals, we are the ones for whom computing technology will never become invisible, no matter how much out of sight. Even so, perhaps more so than everyone else, we will depend on our technically savvy community to vet issues of mutual importance in figuring out how to make technology better serve its users.

Last week's storm, Sandy, left more than a million people without connectivity, or even electricity.


On September 11, 2001, the attack on the World Trade Center in New York put the largest stress ever on a telephone network. According to CNN Money’s David Goldman, in the article What O.J., Katrina, and 9/11 did to AT&T's network, “People from all over the country tried to contact relatives and friends, and placing calls to Washington and New York was a near-impossibility for much of the day. Virtually every point-to-point network connection in the country was overloaded.”


During Katrina in 2005, mobile networks were overwhelmed with phone calls in and out of New Orleans. Plus, the storm destroyed cell towers in and around New Orleans, greatly diminishing network availability. 


Prevention and Recovery Planning


So how do we avoid overloading, or worse yet, completely losing connectivity in emergencies? And what if the electricity goes out? Not surprisingly, creating a plan helps:

Network recovery requires well-defined plans for:


  • Preventing connectivity loss – System redundancy enables system recovery. Redundancy means bringing secondary resources into service on short notice when primary resources are unavailable. A redundant system that backs up data to an out-of-area data center or to the cloud can ensure valuable data stays safe and accessible.
  • Reestablishing connectivity, even on a temporary basis – Identifying and repairing connectivity to critical systems are the first steps to recovery. Piggybacking onto other technologies, even older ones like land line connections, can help maintain communications during this period.
  • Keeping or having access to a power generator – In many states, a company location cannot legally stay open without a generator to keep the lights or a heating/air conditioning system up and working.
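The redundancy principle in the first bullet can be sketched as a simple fallback chain: probe each resource in priority order and use the first one that answers. A minimal illustration (the health-check function is supplied by the caller, e.g. a ping or an HTTP probe):

```python
def first_available(endpoints, is_up):
    """Walk a redundancy chain in priority order and return the first
    endpoint that passes the supplied health check, or None if every
    resource -- primary and all backups -- is down."""
    for endpoint in endpoints:
        if is_up(endpoint):
            return endpoint
    return None
```

Real failover systems add timeouts, hysteresis, and automatic fail-back, but the ordered-fallback decision at the core looks like this.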

For more information on keeping the lines of communication open during a network outage, see the Federal Communications Commission website.
