BitCoins. Never heard of it? Me neither, until recently. What I did learn, though, is that it's a new private Internet currency based on peer-to-peer file-sharing technology, much like torrents.


Torrent Technology

I'm sure many of you are familiar with how torrents work. For those who aren't, let me give you a brief lesson.


A torrent literally means a flood. Torrent technology is a file-sharing technology. Rather than having one person on one computer send you a file, torrents break a file into countless pieces spread across countless users. These pieces are scattered all over the Internet and can be shared, collected, and reassembled on the computer receiving a particular file. Therefore, no one person can be seen sending or receiving an entire file; only the needed pieces, like pieces of a puzzle, travel across the Internet. The advantage is that no one person has total control. Anonymity, to a degree, is another advantage.
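To illustrate the idea, here's a rough Python sketch of collecting pieces from many peers and reassembling them. The piece data and hashing scheme here are made up for illustration; a real client follows the BitTorrent protocol and re-requests any piece that fails verification.

```python
import hashlib

def reassemble(pieces_from_peers, expected_hashes):
    """pieces_from_peers: dict {index: bytes} gathered from many peers.
    expected_hashes: dict {index: sha1 hex digest} from the file's metadata."""
    assembled = []
    for index in sorted(expected_hashes):
        piece = pieces_from_peers[index]
        # A real client rejects a piece whose hash doesn't match and
        # fetches it again from another peer.
        if hashlib.sha1(piece).hexdigest() != expected_hashes[index]:
            raise ValueError(f"piece {index} failed verification")
        assembled.append(piece)
    return b"".join(assembled)

# Pieces arrive out of order, from different peers:
pieces = {1: b"world", 0: b"hello "}
hashes = {i: hashlib.sha1(p).hexdigest() for i, p in pieces.items()}
print(reassemble(pieces, hashes))  # b'hello world'
```

No single peer held the whole file; the downloader only needed each piece once, from whoever had it.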


BitCoins - The private Internet currency

BitCoins is a new private Internet currency based on the peer-to-peer technology described above. Simply install the software and buy some coins. Now you can use your coins to buy and sell your wares, anonymously. Here's a simple example:


Let's say I have $100 worth of coins and I want to buy a widget. I go to a site that accepts my coins, make the purchase, then have the widget sent to any address I like. The seller now has coins totaling $100. The seller can choose to buy additional widgets with his coin money, or cash out. Cashing out means selling his coins on the open BitCoin market. Here, money and coins have their own exchange rate, much like stocks and precious metals do in the real world.
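The cash-out arithmetic in that example can be sketched in a few lines (the $20 and $25 rates below are hypothetical, just to show how the exchange rate moves the value):

```python
# Illustrative only: convert between dollars and coins at a floating
# exchange rate, the way "cashing out" works in the example above.
def dollars_to_coins(dollars, rate):  # rate = dollars per coin
    return dollars / rate

def coins_to_dollars(coins, rate):
    return coins * rate

rate = 20.0                            # hypothetical: $20 per coin
coins = dollars_to_coins(100, rate)
print(coins)                           # 5.0 coins
# If the rate later rises to $25/coin, the same coins cash out for more:
print(coins_to_dollars(coins, 25.0))   # 125.0
```

Just like stocks or precious metals, the dollar value of the seller's coins floats with the market between purchase and cash-out.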


Why use BitCoins?

Sure, you can buy products over the Internet and pay by credit card, debit card, or PayPal. Those work fine (although my PayPal account was recently compromised and I was robbed of hundreds of dollars). So why BitCoins?

  • Privacy - What you buy and sell with BitCoins is anonymous. Only two transactions involve real money, buying the digital coins and selling them. That's it. Anything you do in between those two transactions remains completely obscured. Essentially, when you buy or sell something using these coins, all that's happening is the movement of a digital number from one place to another, obscured and encrypted by the clouds of the torrent sky.
  • More Privacy - Governments do not like this open currency market because they:
    • Cannot regulate it
    • Cannot tax it
    • Cannot stop the sale of questionable items
    • Cannot control you


BitCoins vs. Real Money

I think it's only a matter of time before paper money goes the way of the dinosaur and something similar to BitCoins takes its place, if not BitCoins itself. We're close to that already. Most of my personal transactions involve the movement of debit and credit card numbers from one place to another. Rarely do I have "real" money in my possession. (Although, I would like to see my numbers represent something tangible, like gold or silver.) Think about it. When you buy software, like SAM, you are digitally sending numbers from one place to another, then downloading digital data in response. Has physical cash left your pocket?


Money is a representation of how much we work. If I earn $10/hr for ten hours, I earned $100 (sans tax). Now I spend my $100 (ten hours of work) for something I deem worthy of my ten hours of work. Money is just a keeper of records showing how much work you have done, in one way or another.


For example, you can buy a steak and cook it yourself for about $10 and 30 minutes of your time. (In this instance, earning $10/hr, you've spent an hour and a half of work on that cooked steak.) Or, you can go to a restaurant and buy the steak already cooked for $25. In this case, you've paid an additional hour of your work to have others do the cooking for you. The restaurant, in turn, collects that additional hour of your work as roughly $10 of profit.


I'm getting hungry...and I'm broke.

If you're a network engineer, chances are you use a secure shell (SSH) client anytime you need to remotely manage a network device or a server running a Linux OS. What's more, you probably have quite a few devices to manage, which means you could have more than one SSH client window open at any given time. After the third or fourth window, how do you keep track?


There are a lot of free SSH clients out there, but not even the ever-popular PuTTY offers everything you need to get your job done efficiently. Sure, PuTTY lets you save sessions and even use Telnet, but it only supports a single connection per instance. Enter DameWare SSH Client for Windows. Not only is this new SSH client free, it also sports a tabbed interface, allowing you to manage multiple SSH sessions in a single window. Sure, you have to install it; but after you do, you only have to open it once.


Have you ever left your wallet at home? Or left your credit card in the pocket of the jeans you were wearing last night, and not realized it until you tried to get gas? I recently had the card-in-the-pocket issue: my gas gauge was on empty and I didn't have any other form of payment on me. I ended up having to call my husband to come to my rescue! If I had Near Field Communication enabled on my phone, I would have been just fine. Near Field Communication (NFC) allows you to use your smartphone as your credit card, and I'm never without my phone; I always know where it is. Soon, leaving your wallet at home won't be such a problem, and you'll be able to get gas without your credit card.


How Does it Work?

NFC allows smartphones and other devices (tablets, e-readers, etc.) to communicate with each other through radio waves, as long as they are in close proximity. NFC devices have a chip that transmits a signal to other NFC-equipped devices when they are within a few inches of each other. To make a purchase using NFC, you tap your NFC-enabled phone to the NFC terminal, enter your passcode, and off you go.
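The tap-enter-go sequence can be sketched as a toy decision flow. Everything below (the range constant, the passcodes, the return strings) is illustrative; it is not a real NFC or wallet API:

```python
# Toy sketch of the tap-to-pay sequence described above.
NFC_RANGE_CM = 4  # NFC radios only pair within a few centimeters

def tap_to_pay(distance_cm, entered_passcode, stored_passcode):
    if distance_cm > NFC_RANGE_CM:
        return "no connection: devices not close enough"
    if entered_passcode != stored_passcode:
        return "declined: wrong passcode"
    # In a real wallet app, a one-time payment token would be sent here,
    # not the card number itself.
    return "approved"

print(tap_to_pay(30, "1234", "1234"))  # no connection: devices not close enough
print(tap_to_pay(2, "0000", "1234"))   # declined: wrong passcode
print(tap_to_pay(2, "1234", "1234"))   # approved
```

The short radio range is itself a security feature: a terminal across the room simply can't start the conversation.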


Where Can I Find it?     

Many smartphones and tablets now offer this feature. As for merchants offering this very fast way of paying, Old Navy, The Container Store, Toys"R"Us, CVS, and Macy's are a few that are working with Google Wallet to make it quick, easy, and safe for you to shop.

Is it Safe?

Your credit card numbers are stored on secure, encrypted servers. To access your payment information while making a transaction, you must enter your secure passcode on your phone. If your phone is stolen, your information is safe: without the passcode, there is no way your credit card can be used. This is arguably more secure than a traditional credit card, where nobody checks the signature!


Unfortunately, there is the risk of malware or other unauthorized access to your phone. This is a large concern for organizations that are dealing with Bring Your Own Device (BYOD) and network security issues in the workplace. 

The article "Need for increased network bandwidth is IT's biggest datacentre pain-point" cites a recent study of more than 1,500 North American and European IT professionals. Emulex Corporation, a storage networking company, conducted the study to find out where these folks were feeling the biggest pinch, bandwidth-wise. Participants named the following as their biggest data center network bandwidth eaters:

  • Server virtualization
  • Cloud computing
  • Big data
  • Storage and data networks integration


Bandwidth Issue Causes


Lots of things decrease your bandwidth: communications devices failing, or old and kludgey hardware and software. And of course, there are plenty of times when your bandwidth needs increase, like when you use the notorious bandwidth eaters noted above.


What You Can Do to Improve Bandwidth – For Free


And whose job is it to make sure the bandwidth’s there when needed? The network or sysadmin’s, of course. Let’s face it: it’s your job to wring every last drop of bandwidth out of the network so that everyone else can do their job.


So what can you do to make sure you’re able to make the most of your network – without breaking that ever-shrinking budget? Begin by looking at available free tools that can help you squeeze more value out of your network.


IP SLA Tools


Cisco and Juniper, for example, have free tools in their routers that can help you monitor the network. Cisco encodes Internet Protocol Service Level Agreement (IP SLA) functionality into its IOS. IP SLA enables you to measure performance against an SLA for an IP application or service. This means you can use IP SLA to validate service guarantees, monitor the network for issues, and confirm network performance.
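For instance, a minimal IP SLA probe in Cisco IOS looks something like the sketch below. The target address and operation number are placeholders, and exact syntax varies by IOS version, so check your platform's documentation:

```
! Define operation 10: ping 192.0.2.1 once a minute
ip sla 10
 icmp-echo 192.0.2.1
 frequency 60
! Start it now and run indefinitely
ip sla schedule 10 life forever start-time now
!
! Read the results back with:
!   show ip sla statistics 10
```

From there, your monitoring tool can poll the collected round-trip statistics and compare them against what the SLA promises.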


Connectivity Tools


By using another free tool like Wireshark, you can determine whether your network is sending and receiving data packets. If your service is supposed to have 99 or 100% uptime, but you can only send and receive data packets 85% of the time, you may need to contact your service provider and find out what’s going on on their end.
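To make that 85%-versus-99% comparison concrete, here's a trivial sketch (the packet counts are invented; in practice they'd come from your capture tool's statistics):

```python
# Compare the packet delivery rate a capture tool reports against the
# uptime/delivery target your service promises.
def delivery_rate(sent, received):
    return 100.0 * received / sent

SLA_TARGET = 99.0
rate = delivery_rate(sent=10_000, received=8_500)
print(f"{rate:.1f}% delivered")  # 85.0% delivered
if rate < SLA_TARGET:
    print("below SLA -- time to call the provider")
```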


Quality of Service Tools


Enabling Class-Based Quality of Service (CBQoS) on your router lets you check the quality of your data and determine how usable your network data really is. This is especially important for voice data, as in Voice over IP (VoIP) and video conferencing. High-quality data means clear voice and video transmissions.
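A minimal CBQoS sketch in Cisco IOS might look like the following. The class name, bandwidth percentage, and interface are placeholders; verify the commands against your IOS version before using them:

```
! Classify voice traffic by its DSCP marking
class-map match-any VOICE
 match dscp ef
!
! Give voice a low-latency priority queue; everything else is fair-queued
policy-map WAN-EDGE
 class VOICE
  priority percent 20
 class class-default
  fair-queue
!
! Apply the policy outbound on the WAN interface
interface GigabitEthernet0/0
 service-policy output WAN-EDGE
```

Once the policy is applied, `show policy-map interface` (and CBQoS-aware monitoring tools) can report per-class drops and queue depths, which is where the "how usable is my data" answer comes from.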


Network App Monitoring Tools


Most companies' business-critical applications are now on their networks. So having visibility into how your network-based apps are performing is key to using bandwidth wisely. Free tools such as SolarWinds Cloud Performance Monitor, VM Monitor, and WMI Monitor (for monitoring Windows apps and servers) can help you see what’s doing what on your network and plan appropriately.


What Else You Can Do to Improve Bandwidth – For Very Little Money


You may also want to consider buying software designed especially to monitor bandwidth usage across your network. And since software is almost always cheaper than hardware, purchasing a full-featured network monitoring software tool won’t break your budget.


SolarWinds Netflow Traffic Analyzer (NTA) is a tool you might want to try out. NTA provides insight across your network into:


  • Bandwidth and traffic patterns down to the interface level
  • Which users, applications, and protocols are consuming the most bandwidth
  • The IP addresses of top talkers
  • Cisco® NetFlow, Juniper® J-Flow, IPFIX, sFlow®, Huawei NetStream™ and other flow data


One of the ongoing challenges with the release of WSUS for Windows Server 2012 (Win2012) has been how to remotely administer the WSUS server. Currently, a WSUS server installed on Windows Server 2012 (also known as WSUS v6) can only be remotely administered from a Windows 8 or Win2012 system. This is a result of dependencies in the console infrastructure that cannot (or at least, will not) be back-ported to Windows 7 systems, and that introduces a very significant challenge for organizations that would like to migrate to WSUS v6: they also have to install Windows 8 or an additional Windows Server 2012 system, just to have a remote console.


With the release of SolarWinds Patch Manager (SPM) v1.85 on Jan 22, 2013, SPM now brings a unique capability to the WSUS environment: The ability to manage both WSUS v3 and WSUS v6 servers from a Patch Manager remote console installed on Windows 7.


To clarify: WSUS v6 is the version of WSUS that ships with Win2012, as compared to WSUS v3, which is the version of WSUS available for Windows Server 2008 R2, Windows Server 2008 SP2, and Windows Server 2003 SP2. Unless otherwise specified, this entire article refers exclusively to WSUS v6.


There are five scenarios in which SPM can be implemented to remotely administer a WSUS v6 server from Windows 7. I’m going to present them in what I believe is the optimal order of choice:

  • Install the primary SPM server on a Win2012 system.
  • Install a secondary Automation Role server on a Win2012 system.
  • Install a secondary Automation Role server on the WSUS system.
  • Install the primary SPM server on the WSUS system.
  • Install a secondary Automation Role server on a Windows 8 workstation.


Install the primary SPM server on a Win2012 system

If you’re installing a new instance of SPM, you should consider installing it on Win2012. When SPM is installed on Win2012, the installer will automatically install the console components of WSUS. This is functionally identical to how SPM has installed the WSUS v3 console on pre-Win2012 systems. Register the new WSUS server and you’re ready to go.


Install a secondary Automation Role server on a Win2012 system

If you already have SPM implemented in your environment, it may not be desirable to migrate your existing primary server (PAS) just to get WSUS v6 manageability. As an alternative, after upgrading your existing PAS to v1.85, you can install an Automation Role server on a Win2012 system. The installation of the Automation Role will also install the WSUS console components. Register the WSUS server after the installation is completed. One additional step is required within SPM: You will need to create an Automation Server Routing Rule for the WSUS server to ensure it is managed by the Automation Role installed on the Win2012 system. (For more information about Automation Role servers and Automation Server Routing Rules, also see Chapter 14 of the Patch Manager Administrator Guide.)


Install a secondary Automation Role server on the WSUS system

If you don’t have another available Win2012 instance, you can also install the Automation Role onto the WSUS system. Register the WSUS server after the installation is complete, and create the Automation Server Routing Rule for the WSUS system.


Install the primary SPM server on the WSUS v6 system

As a last resort – you can install SPM on the same system as WSUS v6. Ideally in this scenario, both WSUS and SPM will use a back-end SQL Server database server. However, the WSUS v6 scenario brings one additional complication to the table. While SPM v1.85 is supported with SQL Server 2012 (SQL2012), as of this moment, WSUS is not supported with SQL2012. If you choose to use a remote SQL Server for both WSUS and Patch Manager, you must use an instance of SQL Server 2008 R2 SP1.


Install a secondary Automation Role server on a Windows 8 workstation

If you don’t have an additional Win2012 system, and do not wish to install SPM on the WSUS system or already have a PAS deployed, the Automation Role server can also be installed onto a Windows 8 system. In this instance, the SPM installer will download and install the Remote Server Administration Tools (RSAT) for Win2012 in order to provide access to the WSUS console. As with the other secondary server options, you will also need to configure an Automation Server Routing Rule.


The WSUS v6 server will appear in every Patch Manager console along with any existing WSUS v3 servers in the Update Services node.


To download Patch Manager v1.85, existing maintenance customers will find it available in the Customer Portal. A free 30-day evaluation of Patch Manager v1.85 is also available from the SolarWinds website.


I recently interviewed Alex Hoang of Presidio.  He is a SolarWinds partner who resells SolarWinds monitoring products.  Presidio generally serves large Cisco shops who are implementing datacenter infrastructure projects.


JK: Why do your clients want to purchase SolarWinds products?
AH: While we are in the envisioning phase, we ask the client how they plan to monitor the infrastructure once it’s in production. Many customers don’t have any tools to monitor their infrastructure, and the only way they know the infrastructure is down is when the customer calls.


JK:  How often do your clients purchase SolarWinds with other infrastructure solutions you sell?
AH:  We recently started working with SolarWinds in the last year and a half or so.  Right now we engage our customers with SolarWinds about 30% of the time, but there are only a few folks in the organization selling SolarWinds.  I expect that to grow over time.


JK: SolarWinds is pretty economical. From a partner standpoint, you are not going to make a lot of margin on SolarWinds unless you drive a lot of volume. Why did you choose SolarWinds over another partner for infrastructure monitoring?

AH:  For one, it’s brand recognition.  SolarWinds is known by everyone.  The team you have, in particular Chris Lee (channel partner technical SE & trainer) and Andrea Wagner (Channel Account Manager, sales) – the support they provide is phenomenal.  If we have someone who is not as technically savvy, they can get on the phone and work through the issue.  From a support standpoint, SolarWinds is one of the better partners I have worked with.


Right now the SolarWinds sale is a value-add for our customer, and since we don’t make a lot of margin off selling the software, having the level of support we get from SolarWinds is imperative. 
In our last project we were working with a large school district to monitor their entire infrastructure.  Obviously, with a school, price is a consideration.  We ran a successful POC and they ended up purchasing Network Performance Monitor.




Your VPN access logs show only valid authentications. How would you know that one of the open connections is being used as a conduit between a development team in China and your employee—who has subcontracted his software development work at a fifth of his salary?


In the actual case, the employee at a US firm FedExed his RSA token to the team in China. The Chinese team logged in with the employee's credentials and security token, depositing completed work into directories on the employee's workstation. The arrangement continued for as long as two years before an IT audit found anomalies in the VPN logs and enlisted the help of the long-haul carrier to determine the details of the data pipeline to China.


Performing a Rolling Audit of Aberrant VPN Connections

Many companies that set up VPNs do so to support work-schedule flexibility among employees in multiple geographical regions. An employee can more easily participate in late-night collaboration with team members in other time zones if he can access company network resources from a home office.


Expecting a particular pattern of regional access through a VPN allows an IT team to set up monitoring that flags anomalies. An effective monitoring system would notice when access to the VPN occurs from a region outside expectations and send an appropriate alert.


The monitoring system needs flexible logic for defining what access violates expectations. SolarWinds Network Performance Monitor, for example, besides monitoring the VPN node and polling via SNMP for status, includes a separate Alert Manager application that accepts syslog data as input. It supports regular expression statements that let you define alerts that trigger if a VPN connection is opened from IP blocks different from those corresponding to the regions where your company has offices. Based on the alerts, you could use a registry lookup tool such as ARIN's Whois to investigate the source of suspect IP addresses. In some cases, the errant IPs might belong to diligent employees on vacation, in which case the connection activity would be short-lived. The point is to get the earliest possible warning of activity that turns out to pose a real threat to your network.
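The expected-region check at the heart of that alert logic can be sketched in a few lines of Python. The CIDR blocks below are documentation-range placeholders standing in for the address ranges your offices actually use:

```python
import ipaddress

# Hypothetical "expected" address blocks, one per office region.
EXPECTED_BLOCKS = [
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder: US office ISP
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder: EU office ISP
]

def is_expected(source_ip):
    addr = ipaddress.ip_address(source_ip)
    return any(addr in block for block in EXPECTED_BLOCKS)

# Scan connection sources and flag anything outside the expected regions.
for ip in ["198.51.100.42", "192.0.2.7"]:
    if not is_expected(ip):
        print(f"ALERT: VPN connection from unexpected address {ip}")
```

A production rule would feed the VPN concentrator's syslog stream through this kind of check and escalate repeated or long-lived anomalies.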


The Singularity is Coming

Posted by Bronx Jan 28, 2013

"The technological singularity is the theoretical emergence of superintelligence through technological means. Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the technological singularity is seen as an occurrence beyond which events cannot be predicted." - Wikipedia.

The above explanation of the Singularity is spooky. The Singularity will be the day when man and machine become one (if the machines allow it). This begs the question, "Who is in control?" The estimated arrival date of the Singularity is somewhere between 2030 and 2045, depending on which futurist you subscribe to, Vernor Vinge or Ray Kurzweil. I'm a fan of the latter, so I'll be eating healthier from now on.


Terminator type robots are not here yet, or are they?

Most people in the world cannot play a musical instrument well. Far fewer can build a machine that plays an instrument well. That said, watch the video below and really think about who is smarter: you, the machines, or the inventors?




Future Software

The software it takes to create a robot musician is complex, to say the least. It's only a matter of time before software "creates" software, not unlike man creating life in a test tube. (Not just a simple replicating virus.) When this happens, the Singularity has begun. Imagine a piece of software created by another program that is so elegant and so complex that man can no longer comprehend it. What will this mean for mankind? Will we become obsolete and subsequently deleted? Possibilities abound.


Current Software

Fortunately, the smart developers of today are creating software that puts the user in control. SolarWinds SAM and NPM are two such examples, allowing users to alert and report on what they deem important. Actions within the software do not occur without the user's knowledge. I wonder what the machines will do when they take over. Will they create a Human Performance Monitor (HPM)? Gulp.

As promised in a previous post, I'm back, bearing pretty colors and fancy lines. Specifically, I want to talk about generating informative maps that you can then include in your "ultimate network management dashboard".



Reviewing the Ultimate Network Management Dashboard, So Far

This is the fourth in a series of posts providing steps to create a network management dashboard using SolarWinds NPM:

  • Part I gave an overview and the source of the series:

I want to send you over to the SolarWinds Resource Center, and, since network monitoring is sort of my bailiwick, I'd like to direct your attention to one whitepaper in particular: "How to create the Ultimate Network Management Dashboard" (pdf, 971kb).

  • Part II provided information about restricting user access to your network management dashboard:

Look at your staff. If you want to do so, SolarWinds can help you define the view that each one of your people gets to see. Once you've determined who gets to look at your dashboard, you need to figure out how each user is going to log on and what they'll see when they get there. Our documentation can walk you through the process of defining user accounts, but we've also got a great video on the process, too, that I'd highly recommend.

Step 4

Create topology maps laid out with nodes for devices or groups of devices, arranged for viewing convenience. These maps present the high-level status of each node and provide the ability to drill down for additional details.


Creating Maps for your Ultimate Network Management Dashboard

Network Atlas is the network mapping utility we provide with all SolarWinds Orion products. If you aren't sure, you've got Network Atlas if you've got an Orion Web Console. Network Atlas was designed to be pretty straightforward: you create maps by placing (click-and-drag) node icons onto a map. You can use virtually any image--including live weather maps--as your custom map background. The map above, from our live demo, was, in fact, created using Network Atlas. Check out our Network Atlas documentation and the videos provided on our Using Network Atlas page for more specific details about creating and using network maps.


Customizing your Network Management Dashboard

Next time, I'll provide some more details about customizing the default resources in your dashboard. Lists and tables and charts, oh my!


George Crump is President and Founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments that has grown to employ six full-time writers. George has been in the IT game since 1985, where he started out working on Xenix systems for Tandy. This eventually led to gigs with Novell and Legato. Before founding Storage Switzerland, he was CTO at one the nation’s largest storage integrators where he was in charge of technology testing, integration and product selection. Having reached the pinnacle of what he wanted to accomplish in that role, he sought out ways he could leverage his expertise as a consultant and analyst of the storage industry.


He started his blogging career by writing articles for Information Week which eventually led to writing articles for and about companies in the market and their products. Thus Storage Switzerland was born and now averages about 700,000 page views per month. When asked how he came up with the name Storage Switzerland, he replies “When I was CTO at my old firm, we would design whatever storage systems were necessary without being tied to one company’s product line. I would tell end users that ‘We are like Switzerland, we don’t really follow any corporate agenda.’”


His number one goal in writing his blog is to educate on concepts and processes. He finds that the most well received articles are the “What Is?” or “How-To?” type of post. Specifically, he really enjoys writing about SSD and Flash and how storage is managed (or complicated) by the advent of virtualized environments. He has been surprised that some of the most popular posts lately have been about tape drives, what most in the industry would consider to be a dying technology. He theorizes the popularity is driven by a whole new generation of IT managers coming into the field who have never learned about tape, but are now forced to learn how to manage the legacy systems that are already in place.


When asked about what he says are the future trends in the industry, he believes that the IT infrastructure is nearing a time of convergence where data, applications, and storage will all be managed under a single virtualized software stack. This may lead to easier monitoring as these systems converge, but it will still require an experienced IT manager to provision, tune, and manage these systems.


George is also a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup. When not writing or researching the storage industry, George can be found training for the four or five half-Ironman races he participates in every year.


Follow George and Storage Switzerland:

Twitter @storageswiss


Other IT blogger profiles:

Ryan Adzima, The Techvangelist

Bill Brenner, Salted Hash

Tom Hollingsworth, The Networking Nerd

Scott Lowe,

David Marshall, vmblog

Ivan Pepelnjak, ipSpace

Matt Simmons, Standalone SysAdmin

Rod Trent, myITforum

How long does the average consumer wait for a webpage to load? 5-10 seconds? According to a recent Google study, the average user waits about 6 seconds before getting frustrated and taking their business elsewhere.

“Two hundred fifty milliseconds, either slower or faster, is close to the magic number now for competitive advantage on the Web,” said Harry Shum, a computer scientist and speed specialist at Microsoft.

If you own an online business or promote your business with an online presence, your website needs to load the necessary information rapidly and be able to process business transactions quickly and accurately. The Internet has changed how companies market, sell, and process orders. By some estimates, about 80% of business is completed online: banking, shopping, music, software downloads, it's all there. Healthy web performance is a must in today's competitive market. Consumers require reliable, fast service to keep them coming back to your business.

What is application performance testing? Application performance testing refers to tasks that aim to determine whether an application is performing as expected and, if not, how the problem can be corrected. It requires a utility to gather data about the application itself, as well as about the machine where the application is running and the network through which the application sends data. Application performance testing can be simplified by creating dashboard views of critical metrics.
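At its simplest, one such measurement is just "time an operation and compare it to a budget," like the six-second patience threshold cited earlier. Here's a minimal sketch; in a real test the stand-in workload would be an HTTP request to your site:

```python
import time

PATIENCE_SECONDS = 6.0  # the patience threshold discussed above

def timed(operation):
    """Run operation() and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result, elapsed

def verdict(elapsed):
    return "OK" if elapsed <= PATIENCE_SECONDS else "users are leaving"

# Stand-in workload; replace with a request to the page under test.
_, elapsed = timed(lambda: sum(range(1_000_000)))
print(f"{elapsed:.3f}s -> {verdict(elapsed)}")
```

Real testing tools repeat measurements like this continuously, from multiple locations, and chart the results, which is exactly the dashboard view mentioned above.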


SolarWinds offers two products that are useful for application performance testing: Server and Application Monitor and Web Performance Monitor.

This is the second post in a five-part series uncovering the mysteries of log management. In Part 1 - What Are Logs?, we discussed how logs can be useful for troubleshooting, compliance reporting, and even proactive problem remediation. In this post, we'll look at how utilizing logs can sometimes be challenging given their various types and formats.


The Challenge of Viewing Logs of Various Types

One of the challenges of making use of the valuable logs in your environment is that there are so many different types. To complicate matters further, each type has its own method for collecting and storing the log data. We'll look a little deeper into this question of log collection in Part 3 of this series. For now, let's just look at the four most common types of log files:

  • Windows Event Logs - The Windows Event Logs are what most IT professionals are familiar with from a troubleshooting perspective. After all, what breaks more than Windows? (Just kidding.) But this is not why Windows Event Logs warrant their own category in this discussion. Windows stores its logs in a proprietary format that is unique compared to each of the other log types. The most common way to access Windows Event Logs is the Event Viewer MMC snap-in, and Windows logs events in a variety of categories, including Application, Security, and System.

  • Text - Text logs are the most prevalent if only because there are so many ways to store and transmit text. Text logs include logs that are transmitted from network devices using the syslog protocol, logs that are stored in various text formats in the related application's installation directory, and logs for all Linux operating systems. It's important to note here that not all "text" logs are human-readable, much less accessible via a text editor.
  • SNMP - Network devices and computer systems use simple network management protocol (SNMP) to store and transmit their state and status information. SNMP logging is most common in network devices such as routers and switches, but it's also utilized by applications, such as McAfee EPO to manage network security.
  • Database - Logging directly to a SQL (or similar) database seems to be a new trend in the logging world. This gives the consumer of the logs a lot more flexibility when querying, viewing, and archiving logs, and it also adds a layer of security by restricting access to only those who can authenticate to that database.
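As a concrete example of the "text" type above, classic BSD-style syslog lines are regular enough to pick apart with a pattern. This is a rough parser, not a complete one; real-world syslog lines vary enough that you should treat the pattern as a starting point:

```python
import re

# Roughly matches the classic BSD syslog layout (RFC 3164 style):
#   "Jan 24 13:37:01 router1 sshd[4242]: Failed password for root"
SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s"
    r"(?P<host>\S+)\s"
    r"(?P<tag>[^:\[]+)(?:\[(?P<pid>\d+)\])?:\s"
    r"(?P<message>.*)$"
)

line = "Jan 24 13:37:01 router1 sshd[4242]: Failed password for root"
m = SYSLOG_RE.match(line)
print(m.group("host"), m.group("tag"), m.group("pid"))  # router1 sshd 4242
print(m.group("message"))                               # Failed password for root
```

Windows Event Logs, SNMP traps, and database logs each need an entirely different access method, which is exactly why one collector per type quickly becomes a burden.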


In Logging, Type ≠ Format

One footnote to this "type" discussion is that the log type does not necessarily equate to the log format. For example, text logs may be in syslog format to be transmitted using the syslog protocol, but they could also be in Snort, W3C, or some proprietary format. Similarly, SNMP logs depend on their related management information bases (MIBs), while database logs all have their own schema. I wouldn't count on all of the Windows Event Logs to be in the same format either.


Finally, as you consider implementing a log management procedure or solution in your environment, keep in mind that logs are not always human readable. Sure, with practice, you should be able to learn the different "languages" your systems and devices use for logging, but the scope of that complexity is pretty broad. But before you can read the logs, you have to get them. For more information about that, stay tuned for the next post in this series: Logs 101 Part 3 - Where Are My Logs?



LEM v. Splunk

Posted by katebrew Jan 24, 2013

I’ve been at SolarWinds almost 4 weeks now and I’ve been sitting in on a lot of prospect sales calls, to get a feel for SolarWinds Log & Event Manager (LEM) customers and their use cases for SIEM and Log Management.  A surprising number already have Splunk, but it does not appear to be satisfying them.

LEM, like most SIEMs, does not prevent someone from breaking into your IT house.  LEM will, however, bite intruders pretty hard if you tell it to...


Upon installation, Splunk is like starting with a blank spreadsheet

Splunk provides a 367-page search manual of syntax descriptions and usage examples.  Contrast this with LEM, which uses a drag-and-drop interface and is highly visual for administrators and security professionals.  It employs visual search tools such as word clouds, tree maps, bubble charts, and histograms, all available without additional work.


In addition, LEM comes with over 700 rules, filters and reports to provide security and compliance best practices.  While “security-in-a-box” might be the panacea that isn’t here yet, LEM is moving fast in that direction.


Splunk doesn’t do In-Memory Correlation 

With Splunk, you need to wait until data has been indexed and written to the database prior to any analysis.  LEM performs in-memory event correlation allowing you to analyze millions of events across your infrastructure in real-time.  This is important when you not only want to use log files for forensics and compliance, but you also want to provide automated responses to anomalous behavior the SIEM detects.


Splunk doesn’t provide automated responses

Splunk requires that the user manually respond to actions and incidents.  LEM includes a library of built-in active responses that allow it to automatically respond to anomalous behavior and security incidents.  For example, upon seeing multiple failed login attempts from multiple IP addresses, LEM can disable the account.


The capability to take proactive measures to improve security without human involvement is critical, as many customers do not have legions of security professionals on staff. If an incident occurs in the middle of the night, most customers would prefer the software to take immediate action. In addition, the definition of an incident is easily customized, as is the automated response to take with LEM.


Splunk doesn’t defend against USB abuse

LEM protects against endpoint data loss and the introduction of malware with built-in USB Defender technology that tracks unauthorized USB activity and can take immediate action.  A typical use case: if a USB device is inserted into one of a sensitive group of endpoints, LEM disables it, preventing both data loss and the introduction of malicious code.  Based upon my initial research, it appears that Splunk does not offer this feature.


Splunk may require additional installation assistance

Splunk offers “Splunk Professional Services” to deliver deployment and advisory services, which may be required based upon your configuration needs.  SolarWinds takes a different approach, allowing customers to be up and running quickly using a virtual appliance deployment model, an easy-to-use web-based console, and an intuitive interface.  Almost all LEM customers do a free 30-day trial prior to purchase and quickly find that it truly is easy to deploy themselves, rather than going back to management and asking for professional services dollars to get going.



Now, just to focus on cool LEM features

LEM provides log collection, storage, analysis, real-time correlation and automated responses.  LEM is not a spreadsheet approach to SIEM.

Key differentiators:

  • LEM automatically indexes data from security appliances, firewalls, intrusion detection systems, servers and apps and normalizes log data into common formats to help identify problems.
  • LEM also provides 300+ audit-proven report templates and a console that enables you to customize reports for your organization’s specific needs.  Great management reporting can make the difference between a successful implementation and one that is perceived as a failure.  If you happen to have a manager who loves status updates, you will appreciate the automated reporting capabilities in LEM.
  • LEM enables organizations to proactively defend against and mitigate security threats with continuous real-time intrusion detection from multiple domains and systems.  LEM enables you to analyze millions of events across your infrastructure with real-time, in-memory, non-linear, cross-domain and multi-dimensional correlation.
  • In terms of log file storage, LEM stores log data in a high-compression data store. The user is not troubled with maintenance and administration, and retention requirements are easy to specify.



More on LEM v. Splunk



I'm not a TV guy, but...

Posted by Bronx Jan 24, 2013

...I bought one over the weekend. For the first time in five years I have a TV. Mind you, I bought the thing because of the technology, not the content that's being delivered. Let's see what I learned:

  • With over 1,000 channels, there is still nothing on.
  • LED pictures are great.
  • Based on today's content, I could be a TV producer, and make millions!


What are you watching?

Recently, I was watching a show about the men who built America. Not to go into too much detail, but it covered Ford, Rockefeller, and other biggies. The series was great. I heard stories about how these visionary men did whatever it took to make America the great country that it is today. Then the show cut to a commercial about why Chumlee was late to the pawn shop. Huh? My eyes rolled as I sighed. Is this where we are?


Some popular shows:

I've glanced at these shows, then wanted to smash my head through the screen:

  • Pawn Stars - People selling items for quick cash, often getting far less than the item's worth, thanks in part to the negotiating skills of the employees.
  • Storage Wars - People buying lockers of junk and trying to make money off of it.
  • Parking Wars - City employees who love giving out parking tickets, towing cars, and otherwise making car owners miserable.
  • Honey Boo Boo - Where is this slice of Heaven located? "Sketti," butter, and ketchup as a meal? Words fail me for trying to describe this concept.

How these are on the air is beyond me. What's next? Metal Detector races? After going through all the channels, I added about 12 to my favorites list. Pathetic.


The Smart TV

I did not get one. I just got your basic, garden-variety, flat screen model. I don't know much about smart TVs, but there are security issues you should be aware of. For instance, they can spy on you. There's the next big show! A show about people watching their TVs! Gold, Jerry, gold!


Daily Use

I have a feeling I'm going to turn that TV into a huge monitor for my laptop at some point. I mean, even the remote control is complex. (If I'm not careful, I could launch an ICBM.) My co-worker had an interesting use case for his TV and Dameware. You may want to check it out here.

What is Kiwi Syslog?


Kiwi Syslog is a "syslog server" - a passive listening application. It does not actively poll your network devices.

When installed and started, Kiwi Syslog binds to the specified port(s) on your system and then listens for any syslog messages, SNMP traps (if enabled), and Windows Event Log messages (if forwarded as syslog messages by SolarWinds Log Forwarder). By default it will listen for syslog messages on UDP port 514.  It then logs, displays, alerts on, forwards, and performs many other actions on syslog messages and SNMP traps received from hosts such as firewalls, routers, switches, Unix hosts, and other syslog-enabled or SNMP-capable devices.
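To illustrate what "passive listening" means in practice, here is a minimal sketch of a UDP syslog receiver in Python. This is purely an illustration of the transport model, not how Kiwi Syslog is implemented; note that binding to port 514 usually requires administrative rights, so a high port is often used for experiments:

```python
import socket

def listen_for_syslog(port=514, count=1):
    """Passively receive `count` syslog datagrams on the given UDP port.

    The listener never polls devices; it simply waits for whatever the
    network sends it, which is exactly the syslog server model.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))                    # listen on all interfaces
    messages = []
    while len(messages) < count:
        data, addr = sock.recvfrom(4096)     # one syslog message per datagram
        messages.append((addr[0], data.decode("utf-8", "replace")))
    sock.close()
    return messages
```

Because the transport is plain UDP, this also shows why you must point each device at the collector's IP address: nothing arrives unless the devices are configured to send.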

You need to configure all your network devices to send their syslog information to the IP Address of the system that you have installed Kiwi Syslog on.

The Default Rule

The first time Kiwi Syslog is installed, it contains a single Rule.  This rule does not have any filters defined, which means that every message received by Kiwi Syslog will cause the actions defined within this rule to fire. There are two actions defined within this default rule:

  1. A Display action which sends all messages to "Display00" (the default display)
  2. A Log to file action which writes all messages to the specified file. The default filename is SyslogCatchAll.txt, located in the Logs directory

How the Rule engine works


When a message is received by Kiwi Syslog, it is tested against each Rule in turn from the top down until either all Rules have been tested or a Stop Processing action is encountered. The next message is then tested in turn, and so on.  For the actions within a rule to fire, all of that rule's filters must first be TRUE. When you have more than one filter specified within a rule, the filters are effectively ANDed together, not ORed.

In the following scenario we have created two filters:

  1. Simple IP address filter.
  2. Simple Message text filter.

The two defined actions, Display and Log to file, will only fire if the message that is currently being processed matches both of these filters:

For example, the message must come from the specified IP address AND contain the words "link down" OR "link up" within the message text part of the syslog message.

If the message does not meet these requirements, then not all of the filters will be TRUE, and therefore the actions will not fire.


Should I use Kiwi Syslog as a Service or as a Standard application?

If you only want to run Kiwi Syslog every now and then to see what is happening on your network or diagnose a fault with a network device, then installing it as a Standard (or "foreground") application would be best for your needs.

However, if you intend to run Kiwi Syslog 24/7, please run it as a service.  (You can switch between Standard and Service installations without losing any settings.)


Where Can I Learn More?


As mentioned in previous posts Part I and Part II, I want to continue going a little deeper into a few of the steps outlined in brad.hale's great white paper, "How to Create the Ultimate Network Management Dashboard," which we've got up on our SolarWinds Resource Center. As a review, the white paper gives a brief rundown of the steps required to get a network management dashboard up and running using SolarWinds NPM, and in this post I specifically want to go over the steps of discovering and logically organizing your network.


Network Discovery and Organization

You already know you've got a bunch of stuff out there. How do you get it into your web console, and how do you organize all that beautiful data?


Discovering your Network

The third step in brad.hale's white paper reads as follows:

Step 3
Name network devices logically; and then group and sort by custom properties such as location, business units, device manufacturer, etc.

It's not mentioned here that you need to discover your network first. Fortunately, network discovery in SolarWinds NPM is a largely automated process, so you've more than likely already completed it. If you haven't, we've got a video and documentation that guide you through the process.


Organizing your Network

As in any great endeavor--like this one--having a plan beforehand is a good idea, and having the tools to execute your plan is absolutely essential. In another previous post, I covered both planning and the tools SolarWinds gives you to execute your network organization plan, namely custom properties, dependencies, and groups. Take a look at "Network Organization: It's More than Racks and Stacks", and then look through the following documents for more detailed information:


Stay tuned. Next time I'll be back, bearing pretty colors and fancy lines, to go over ways you can visualize your network.

I recently came across Cameron Fuller’s latest post which provides a counter view to our post on why Sysadmins should find SolarWinds® Server & Application Monitor a refreshing alternative to Microsoft® SCOM.  Cameron is an Operations Manager MVP well versed with how Operations Manager works and I do agree with some of his comments.  System Center Operations Manager has its strengths and weaknesses, just as Server & Application Monitor has strengths and weaknesses.


1. One of SCOM’s strengths is that it is a framework that can be extended through the use of Management Packs.  Microsoft does provide its Management Packs with SCOM free of charge, but as Scott Hill points out in his blog, not all of Microsoft’s Management Packs provide in-depth product knowledge on how to troubleshoot a problem. I also disagree with Cameron’s assessment that “While SolarWinds provides monitoring for ‘virtually any application’ it does it with little knowledge of what the product actually does”.  SolarWinds Server & Application Monitor (SAM) does provide expert knowledge on what to monitor, why, and the optimal thresholds for many applications, and this is especially important for admins who have no idea what metrics should be monitored for a particular application.  Take, for example, the Idle Workers metric for Apache: the component settings describe why this metric might be off and what to do about it.

cameron fuller post pic.jpg

Component Settings - Idle Workers for Apache


2. As an enterprise framework, SCOM also makes it possible to extend monitoring to applications not covered by the Management Packs that come with Operations Manager, like Lotus, Oracle, or XenApp.  SolarWinds Server & Application Monitor’s strength is that it fully supports well over 100 applications and can be deployed quickly, which is great for departments needing quick application support that is not offered natively by Microsoft.  These departments can also feed alerts to SCOM via a Management Pack.  In this case, the two products are used for different reasons.  Take Scott, for example: he uses both SCOM and SolarWinds.  SCOM is used for high-level alerts, which he checks every day, and SolarWinds is used primarily because he wants the ability to easily modify alerts on specific metrics and send alerts to specific groups or people via SMS or email.  In this instance, SAM is a great complement for organizations that already have SCOM.


3. To Cameron’s point on agentless monitoring, I do agree that agents can be very scalable.  Cameron indicates agentless monitoring is not recommended for SCOM deployments because it does not scale well.  Agentless technology has come a long way, however, and SolarWinds Server & Application Monitor, a pure agentless product, is architected to scale to 10,000 servers.  In deciding between agentless and agent-based technology, you really need to understand the pros and cons from a business perspective.


Ultimately we do agree that one size does not fit all and that for some users, they will want to look at both products.



Storage Manager is able to monitor storage environments containing storage devices from different vendors. These devices often use different terms to describe the same technology. The table below lists many of the array manufacturers Storage Manager can monitor and the terms these manufacturers use to describe Collections, Arrays, Raid Groups, and LUNs.


| Manufacturer*            | Collection | Array             | Raid Group                           | LUN            |
|--------------------------|------------|-------------------|--------------------------------------|----------------|
| 3PAR                     | SMI-S      | Storage Server    | Storage Pool                         | Virtual Volume |
| Dell Equalogic           | SNMP       | Groups            | Pool                                 | Volume         |
| Dell MD3K (LSI)          | SMI-S      | Array             | Storage Group                        | LUN            |
| EMC Celerra              | Telnet     |                   |                                      |                |
| EMC VNX, CLARiiON        | SMI-S      | Array             | Storage Pool, Traditional Raid Group | LUN            |
| EMC VMAX, DMX, Symmetrix | SMI-S      | Array             | Storage Pool                         | Devices        |
| HDS USPV (HP XP, Sun 9K) | SMI-S      | Array             | Storage Pool                         | LUN            |
| HP Eva                   | SMI-S      | Storage Subsystem | Disk Group                           | Vdisk          |
| HP P4000 (LeftHand)      | SNMP       | Cluster           | Pool                                 | Volume         |
| IBM DS 3K/4K/5K (LSI)    | SMI-S      | Storage Subsystem | Volume Group                         | LUN            |
| IBM DS 6K/8K             | SMI-S      | Storage Subsystem | Array                                | LUN            |
| IBM SVC, V7000           | SMI-S      | Cluster           | Mdisk Group                          | Vdisk          |
| LSI                      | SMI-S      | Array             | Volume Group                         | LUN            |
| NetApp (IBM N-series)    | API        | Array             | Aggregate                            | LUN (Volume)   |
| Pillar                   | SMI-S      | Brick             | Storage Pool                         | LUN            |
| Sun 2K/7K (LSI)          | SMI-S      | Array             | Disk Group                           | Volume         |


*NOTE: This is not a comprehensive list of storage devices supported by Storage Manager. Visit Supported Devices for more information.


How do you protect your network bandwidth and security when people keep attaching their own mobile devices to your network? Do you know when they’re on your network? Do you know how much of your bandwidth they’re using? Do you know what they’re doing and how that affects your network security? And do you know when and how to plan for those devices? And what about those devices that aren’t even supposed to be on your network?


More Mobile Devices on Company Networks


The past decade has seen a tremendous rise in the use of mobile devices like notebook computers, smart phones, tablet computers, bar code readers, Personal Navigation Devices (PNDs) and Global Positioning Systems (GPSs), among others. Folks have their smart phones and even tablets with them all the time, no matter where they go. Which brings us to the Bring Your Own Device (BYOD) to work movement, in which employees not only bring and use their own devices at work, but often use those devices for work purposes – accessing what’s supposed to be secure organizational data.


Meeting the Challenges Ahead


BYOD can help employees be more productive, but it presents some real challenges for IT departments. Employee mobile devices on your network can, however, be a win-win situation for both your company and its employees, especially if you’re able to develop and implement plans for:


  • Protecting secure data
  • Enhancing IT infrastructure security
  • Regulating the use of increased IP-enabled devices
  • Monitoring Wi-Fi access points and user logons
  • Keeping a check on bandwidth consumption
  • Strengthening Wi-Fi connectivity and security
  • Revamping enterprise IT policy to make provisions for BYOD and mobile devices


For more information on BYOD and steps to help your company make the most of this trend, take a look at the SolarWinds whitepaper, Managing the BYOD Chaos. This whitepaper provides you with the facts, figures, and tools information you need to make your company’s BYOD plan a success!

SolarWinds Patch Manager provides an enhanced capability for creating Custom Update Views in the Patch Manager console.  We’re going to talk about three custom views that you may find helpful.


Third Party Updates

The first view is probably not new to you: Third Party Updates. KB3690 provides instructions on how to create this view. You may wish to refer to that knowledge base article for assistance in creating the next two views we will discuss.


Needed Updates

The second view that I’ve found useful is a dedicated view for Needed Updates. Using the guidance in KB3690:

  1. Select the “Updates have a specific approval and installation status” property.
  2. In the Updates View Filter dialog, set Approved State = “All”.
  3. Set Update Status = “Needed”.
  4. Assign the name “Needed Updates” to the view.

Now you can select “All Updates” or “Needed Updates” at will, with no additional query or refresh activity required to toggle between them.


Approved & Superseded Updates

The third view is a very important one. One of the critical WSUS administration tasks that should be performed on every WSUS server is update approval maintenance – specifically, removing approvals from superseded updates. I talked about why this is important in a PatchZone blog post series about WSUS Timeout Errors. In the WSUS console, finding superseded updates with approvals to be removed is somewhat of a tedious activity, but in the Patch Manager console, using the enhanced capabilities with custom update views, you can define a view that shows only the superseded updates that are approved and should have the approvals removed.


Here’s how to do it:

  1. Launch the task to create a new update view.
  2. Select the “Updates have a specific approval and installation status” property.
  3. Set Approved State = “Approved” and Update Status = “Installed/Not Applicable” and name the view “Approved Superseded Updates”, or something you like.
  4. Add the “Not Applicable Percentage” column to the view, and drag the “Not Applicable Percentage” and “Has Superseding Updates” columns to the left side of the view.
  5. Set the filter on “Has Superseding Updates” to only show updates with the value “Yes”.
  6. Set the filter on “Not Applicable Percentage” to only show updates with the value “100%”.
  7. Click on the “Save View Layout” to store this view state.

You now have a display showing the superseded updates that are no longer needed by any system and have approvals that should be removed. Select all of the updates and decline them.
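The logic of this view boils down to a simple three-part filter: approved, superseded, and needed by no one. As a sketch (the field names and update titles here are assumptions for illustration, not the actual Patch Manager or WSUS schema):

```python
# Illustrative update records; field names are assumed for this sketch.
updates = [
    {"title": "KB100 v1", "approved": True,  "has_superseding": True,  "not_applicable_pct": 100},
    {"title": "KB100 v2", "approved": True,  "has_superseding": False, "not_applicable_pct": 40},
    {"title": "KB200",    "approved": False, "has_superseding": True,  "not_applicable_pct": 100},
]

# Mirror the view's three conditions: Approved, Has Superseding Updates = Yes,
# and Not Applicable Percentage = 100% (no system still needs it).
to_decline = [u["title"] for u in updates
              if u["approved"]
              and u["has_superseding"]
              and u["not_applicable_pct"] == 100]

print(to_decline)   # only the first record meets all three conditions
```

Seeing the filter as one conjunction makes it clear why the custom view is safe: an update is only a decline candidate when every condition holds at once.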

Patch Manager Approved Superseded Updates.png

CAUTION: As discussed in the PatchZone blog post about WSUS timeout errors, if you have downstream servers you may need to be careful about the number of updates you process at one time. On a not-so-well-maintained server I recently worked on, there were 254 updates to be declined. That’s probably too many to process in a single task, because the downstream servers will need to replicate those changes. Keeping the limit to 100 per synchronization cycle should keep you out of trouble. You can also schedule the decline tasks to occur at a future time!


After removing the approvals you can run the Server Cleanup Wizard to remove the files associated with them. After declining the updates on the server with 254 updates, the Server Cleanup Wizard deleted almost 12GB of unneeded files.




There are many tools available to jailbreak Apple (iPods, iPads, iPhones) and root Android (Samsung phones, Amazon Kindles) products. As evidence of how easily a geek can perform these hacks, just look at the used electronics offered on Craigslist in a given week.


In this era of ‘bring your own device’ (BYOD), these liberated devices may make ‘guest’ appearances on your corporate network. The more you know about them, the better you will be able to control their access.


Most fundamentally, in the case of computers, the opportunity for control begins when the device’s bootloader—held in firmware—begins its routine. As first-stage security, depending on the device, the bootloader usually cannot initiate the boot process without an appropriate key. For example, my old video editing system will not begin booting up without first retrieving the encryption key held in a USB dongle. In contrast, though similarly, newer computers use a Trusted Platform Module (TPM) to manage BIOS-stage security checks before allowing the bootloader to start.


Seeking to circumvent boot-level cryptography, jailbreaking and rooting tools target the computer’s Trusted Platform Module with methods that evolve along with the safeguards to stop them.


Security Risks

A jailbroken or rooted device gives the owner root-level control over the software running on it and opens up access to a plethora of applications that are blocked by the manufacturer’s TPM. Many jailbreaking kits automatically install the Cydia application loader to bypass iTunes in installing unapproved software on Apple devices.


However, the same hack that provides access to alternative software also exposes the device to the security risks that the device’s factory TPM is efficient at managing. A jailbroken or rooted device infected with malicious software becomes a Trojan threat; any data on or passing through the device, including data generated through sensors, keypad, network protocols, or peripherals, could be secretly sent to a remote server.


Device Monitoring Safeguards

Mobile Device Management (MDM) systems control devices through installed clients. While you can install clients on all company-issued devices, and even devices to which the user provides such access, you cannot control a mobile device on which no MDM client software is installed. Since, by definition, a rogue device is one that operates on your network without your control or permission, MDM offers little help in tracking such devices.


Besides an MDM, you need a monitoring tool to discover and track unknown devices on your network. SolarWinds User Device Tracker, for example, lets you see devices connected through the SSIDs on your wireless controller. You can track suspicious devices by adding them to a watch list, allowing yourself the most timely information on which to undertake a deeper analysis if and when device access patterns warrant it.

The following procedures will help you troubleshoot MAPI issues relating to SAM. For example, you get the message: "MAPI profile not found."

Note: Do not use the same mailbox for multiple MAPI UX monitors. Doing so can cause the MAPI monitor to intermittently fail.

MAPI Probe Diagnostic Checklist

Install Collaboration Data Objects (CDO) or Outlook

  • CDO can be found here: If you would like to install CDO, uninstall your entire MS Office installation first. Uninstalling only Outlook is not sufficient.
  • The MAPI probe may be unstable when running with Outlook installed. If this is the case, uninstall Office, then download and install CDO.
  • The Orion Server is in the same domain as the Exchange server being monitored
  • The user account used to monitor the mailbox with SAM has permission to log in to the server console and has done so at least once.
  • The user account to monitor the mailbox with SAM is in the local administrator user group of the server where SAM is installed
  • MAPI component is using the FQDN for the domain account.
  • The MAPI profile does not need to exist. The probe should create it and also update the existing profile with the required settings. However, there may be issues with an existing or created profile. The default Outlook profile is called Outlook.
  • If this profile does not work, create a profile with the free MFCMapi tool, available at:
  • Review the Configuring and Integrating MAPI guide for any additional requirements and troubleshooting steps.

Check the MAPI profile

  1. In the MFCMapi tool, navigate to Profile > Advanced Profile > Launch Profile Wizard, keeping the defaults on the first dialog.
  2. Set the profile as default.
  3. Update the profile name of the newly created profile in the MAPI probe.
    • Use MFCMapi to find the profile name: Navigate to Profile > Show Profiles for verification.
    • Check that Send Email To: is correctly filled out in the component settings.
    • The Mapi Profile Name must match the actual profile name. Use the MFCMapi tool if you are not sure about the name.
    • Credentials used for the probe must be eligible to open the mailbox. It is required to add the user to the local Administrators group, otherwise the probe can fail with insufficient privileges.
    • Use a clean mailbox created for monitoring purposes. A mailbox full of email is problematic as it takes a lot more time for the probe to search through all of the emails. The MAPI probe deletes obsolete, undeleted messages sent by the probe in the past to keep the mailbox clean.




Occasionally I'll run across a customer who needs to quantify the traffic overhead associated with network management. Ten years ago we used to accomplish this using traffic stats from NMS interfaces, some assumptions, and some arithmetic. What we would do is take average daily traffic from the NMS and assume that all nodes impacted the traffic equally. From there we would calculate the impact on WAN connections using the number of nodes on the far side of the WAN. While this method probably does yield a good estimate of the traffic, these days the words "probably" and "estimate" are usually not sufficient.
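That old back-of-the-envelope method amounts to a couple of lines of arithmetic. All numbers in this sketch are illustrative, not measurements:

```python
# The old estimation method: assume every managed node contributes equally
# to the NMS's daily traffic, then scale by the node count behind a WAN link.

total_nms_bytes_per_day = 2_000_000_000   # avg daily traffic on the NMS interface (illustrative)
total_nodes = 400                         # nodes managed network-wide
nodes_behind_wan_link = 25                # nodes on the far side of one WAN link

per_node = total_nms_bytes_per_day / total_nodes
wan_estimate = per_node * nodes_behind_wan_link

print(f"Estimated management traffic over the link: {wan_estimate / 1e6:.0f} MB/day")
```

The weakness is the equal-contribution assumption: a chatty device skews the per-node average, which is exactly why flow-based measurement is preferable today.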


Today, chances are that you are already measuring this traffic; you just need to know where to look. If you are using NetFlow Traffic Analyzer (NTA), finding the data is easy. NTA has features that allow you to define traffic types by IP address (endpoint) or as an application (IP and ports). A simple way to separate network management traffic from production traffic is to use the IP Address Group feature. This option is found on the NTA settings page.




Add a group with the IP addresses of your NPM and SAM servers, and then disable the default groups.


ADD IP  ADD grp.png


Now, on to the NetFlow -> IP Groups page.


netflow total ip mgmt traffic.png

This graph shows the total amount of traffic from management servers network-wide. The question is how much of this traffic is going over any particular WAN link, and how is that affecting the WAN? This part is easy. Just navigate on the IP address group page to the NetFlow Sources and drill down to the interface you are interested in.


NTA management traffic per IF.png

Here we can see that the total bandwidth for management traffic on this interface peaks out at about 0.4%. This interface is Ethernet, so your WAN connections will probably peak a bit higher than this. If you want to add your LEM, SAM, FSM, or other management traffic, just add the IP addresses of those servers to the IP Address Group, and you are done.


The nice thing about this is that NTA keeps the IP address groups, so you never have to rebuild these views.


If you want to try this on your network but don't have NTA, try the 30 day evaluation and see what your overhead is.


Ease of use. Three very powerful words in the world of software development. We don’t like it when our customers stare at their screens and say, “Um, now what?” in a baffled tone. We have been working hard to make our log management software SolarWinds Log & Event Manager easier to use and that includes developing the new Add Node wizard. The new wizard walks you through, step by step, what used to be a confusing array of screens and information.

Adding a Syslog Device


To add a syslog node device:

  1. From the Op Center dashboard, click the Add Node button in the Node Health widget.
  2. Select Syslog node.
  3. Enter the IP Address of the node.
  4. Select the Node Vendor from the list.
  5. Configure the node so LEM can receive syslog messages. If you need help, click the links provided for enabling specific vendor devices.
  6. Select the I have configured this node so that LEM can receive its Syslog messages check box.
  7. Click Next and our log event analyzer LEM then scans for new devices.
  8. Click Finish.


Back on the Op Center, the Node Health widget has another new feature: Scan for New Nodes. The new Scan for New Nodes button scans syslog data that has already been sent to LEM. This is useful if many devices have been sending syslog data and you want to configure them all at once.

At the top of the Op Center screen, a message displays New Connectors found. If you click the View Now link, it shows the recommended devices for these connectors.


These procedures are further explained in an informational video from our Training department.


For more information on log management and what it can do for you, see SolarWinds Log & Event Manager.

Logs are a mystery. They come in a variety of formats and are available through several unique means. So in this post, along with the subsequent posts in this series, the SolarWinds geeks are aiming to demystify those little nuggets of IT gold.


So, What are Logs?

Logs are the means by which software keeps track of what's going on "behind the scenes." Everything from the operating systems running on your computers and devices to the databases that support your applications generate logs. Oftentimes, logs are very granular, logging every step the software takes, making them useful in many ways. Most IT professionals know at least this much about logs, but the segment of that population that knows what they're good for, much less how to read them, is significantly smaller.


Logs! What are They Good For? Absolutely...Wait.

Before you finish that statement with "absolutely nothin'!" consider the following scenarios for using the logs generated by your systems, applications, and devices.


Logs for Troubleshooting

Troubleshooting is arguably the most common use case for logs (the next section explains the emphasis here). When something breaks and the cause is not immediately apparent, it's likely the logs related to the broken device, system, etc. contain some kind of indication of what happened before it broke. This is how logs provide invaluable behind-the-scenes information to IT pros: they show you what the end-user doesn't see.


Logs for Compliance

Compliance reporting is probably the main reason most organizations collect logs. The reason troubleshooting is arguably the most common use case for logs, however, is because once these organizations collect their logs for compliance, they don't actually use them for anything. This is a huge gap in most log management strategies. The purpose of the compliance requirements, believe it or not, is to ensure the organization collecting logs actually does something with the information they contain. An example of how this applies to PCI compliance is that you need to actually modify your policies or coach offending users when your logs tell you your sensitive files have been accessed by unauthorized users.


Logs for Proactive Detection and Remediation

Examining historical logs once a month or once a quarter to address compliance issues is one thing. Reviewing the logs constantly to address issues more proactively is another. Large enterprises employ security specialists in their IT departments whose sole job is to monitor log files and make recommendations for remediation when necessary. Most other companies, unfortunately, don't enjoy that luxury. A good first step to this end would be to review the logs on your most critical and sensitive devices and systems on a daily or weekly basis to ensure you don't miss a catastrophic failure or breach you could have avoided otherwise. The best option is to implement an automated log management system to alert you when something has or is likely to go wrong.
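Even without a full log management system, the core idea of automated review can be sketched: scan incoming lines against a list of critical patterns and surface only the matches. The patterns and log lines below are hypothetical, not anything a particular product ships with:

```python
import re

# Illustrative watch list: patterns that should trigger a closer look.
CRITICAL = [r"unauthorized", r"failed login", r"disk.*full", r"service .* stopped"]

def review(lines):
    """Return only the lines that match a critical pattern."""
    alerts = []
    for line in lines:
        for pat in CRITICAL:
            if re.search(pat, line, re.IGNORECASE):
                alerts.append(line)
                break
    return alerts

log = [
    "Jan 10 03:12:44 fileserver Unauthorized access to \\finance\\payroll.xls",
    "Jan 10 03:13:01 fileserver User jdoe read \\public\\readme.txt",
]
print(review(log))  # only the unauthorized-access line is flagged
```

A real log management system does this continuously, with correlation and alert routing on top, but the filtering principle is the same.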


Now that we have an idea why logs can be so useful, we'll take a deeper look at some of the challenges with accessing and reading them. For that, stay tuned for Logs 101 Part 2 - Logs: So Many Different Types.


Little good comes from a Monday...unless it's another award!

Today we were thrilled to learn that DameWare Remote Support earned a 5-out-of-5 star rating from Soft82. The award comes just two weeks after DameWare was first listed on the site, and one month after DameWare was recognized as the top remote control application in another site's Reader's Choice Awards.

"As a system admin, I cannot imagine life without DameWare," wrote Dave from Kansas in a recent online review.  "I found it 10 years ago and it never leaves my side. I've bought a copy for every employer I've ever been with. For as much multi-tasking as I do, it really helps not having to get up and physically touch each machine I have to work on."

To report YOUR experiences with DameWare (or other awards we may have missed), please let us know in the comments below. 
Or, if you are new to our award-winning remote control software, download DameWare and start your free trial today.

I recently reached out to virtualization expert David Marshall in the latest of our series of blogger profiles.  In addition to being a really nice guy, David has developed a substantial following through his blog (VMBlog at and content on InfoWorld as well as his books on virtualization.  So let’s get right to the discussion with David.


MT: You probably get this a lot, but how did you get started in IT?

david_marshall_cp.jpg  DM: I'm very much into gadgets and technology, and I've been banging away on computers since around 1978. My first computer was a Radio Shack TRS-80 Model III... and I was hooked from that point on. Oddly enough, I graduated from college with an accounting and finance degree, and did number crunching and investments for many years before I finally woke up and realized that you could actually make money and do a job that you were passionate about. So I left the financial world to take part in a new startup, an ASP (application service provider -- what we called cloud computing back in 1999). While trying to make the ASP business more profitable, I began experimenting with a new product in 2000 that was in Alpha code called VMware ESX and another Alpha product called Connectix Virtual Server (later acquired by Microsoft). And that was it for me! I was bit by the virtualization bug and never looked back. I helped create the first cross platform (or multi-hypervisor) virtualization management product around 2001, and continued to come up with new and interesting ways to use virtualization technology by starting a few more startup ventures to make those products a reality. It's been a blast! And I'm still working in virtualization 13 years later.

MT: And what made you take the leap to blogging about virtualization?

DM: This is such a fun question. I've talked about this with many people over the years, but I don't know how much I've ever really written about it. The short answer, if there is ever one with me, is that I started out blogging about virtualization because there weren't many people doing it. I started VMblog back in 2004, but that was a different time for virtualization. The technology was really still trying to prove itself, so it didn't really have books or blogs dedicated to it yet, and the user following was NOTHING like it is today.

So why did I start? One of my early startups was focused on server virtualization technology, and I began creating an internal email newsletter of sorts to keep co-workers informed about the technologies, new ways of doing things, and any other updates to the platforms we were working on. Back then, we were focused on VMware ESX, VMware GSX and Microsoft Virtual Server (remember those two platforms?). One of my co-workers told me, "Why don't you start a blog instead, and share that information with other people outside of the company?" That made sense to me, and VMblog was created. During that same time, I also started writing my first published book on the subject, which was a nice tie in to the blog.

MT: Who are your typical readers? What are they looking for from your blog?

DM: My readers come from all walks of life, from around the world, with different titles and backgrounds. But obviously, since the blog site is a niche, dedicated site, they all come for one thing -- virtualization. They are looking for the latest information within cloud computing and virtualization, whether server virtualization, desktop virtualization, or application, storage or network virtualization... or some form of cloud computing that relates to the technology. It's about information, education, events, whitepapers, books, trends, etc. It's about giving access to information and getting people in front of what's important or interesting. While niche, it still cuts across quite a number of technologies, and not everything in there will be for everyone, but it's a good place to start.  And over the years, I've opened it up for others to use it as a platform as well, if they didn't have one for themselves.

MT: After years of writing posts, what kinds of posts tend to be the most popular?

DM: Everything varies if you go by page views alone, and I'm sure it has ups and downs based on timing of when something gets published. But I can say that people pick up on the Q&A articles, and I certainly enjoy them because it's a lot of fun to have an opportunity to speak to someone and ask them questions about their latest announcements, news, products, or whatever. And I've also had a lot of success with my prediction series that I do every year. It's also a lot of fun for me, but I think it gets a lot of reads and people enjoy it because you get to hear directly from various experts and executives from companies both large and small, sometimes stealth or newly launched companies who may not yet have a voice, rather than just hearing from the pundits and analysts of the world as to what the coming trends will be or where they see the market or some specific technology headed the following year. The most recent series has come to a conclusion, but you can check out the 2013 predictions here:

MT: I don’t want to steal too much thunder from that series but can you give us a high level view of what trends you see having the most impact on the virtualization space?

DM: That's a great question, and one that obviously can become a blog post all on its own. But a couple of quick hit things that come to mind include things like:

1. Platform Choice. It's fairly obvious to most people in the industry that VMware is the 800 pound gorilla -- the market leader. And with good reason, they have had the most stable and feature rich hypervisor platform for many years, and had virtually no real competition going back to its beginnings in 2000 when server virtualization was considered IT black magic! But things have changed with the introduction of the latest platform products from folks like Citrix, Red Hat and Oracle. And Microsoft's Hyper-V 3.0 probably is the biggest game changer.  Pundits have been talking about the hypervisor becoming commodity, and that trend into 2013 will become even more apparent. Imagine what the server virtualization market makeup would look like today if folks were just now getting involved with the technology. Hypervisors are closer than ever in feature and functionality, and price is becoming a big factor. Organizations have a real choice now. They don't have to choose between going with VMware or having to settle for something far inferior.

2. Virtualization Management is where it's at. We've seen it play out in 2012, and the trend of focusing on management will continue in 2013. As the hypervisor becomes more and more of a commodity and organizations continue to create hybrid environments, management software and management best practices will become more important to maintain a successful environment. With increased workloads being added into virtual platforms, cross vendor management tools and best practices will become critical to that success.

3. We're going to see the latest buzzword, the software-defined network or SDN, become an extremely important factor in the private and public cloud. As it continues to get defined, hardened, improved upon and accepted, we'll continue to see an emergence of new capabilities and a host of new startups playing in this market. And it will keep the server virtualization market on the cutting edge of things.

MT: What is your favorite SolarWinds product and why?


DM: This is a really funny question, and the folks in the industry who know me already understand why that is... my obvious answer would be SolarWinds Virtualization Manager. In large part because this product is my baby; it made its way into the SolarWinds family of products by way of acquisition from my startup company, Hyper9. 


But since the acquisition, SolarWinds has continued to expand on and improve the product, while still keeping and maintaining the product’s essence, which is why it has been so successful.  This virtualization management product really turned systems management upside down by throwing away the old, boring management paradigm of a tree view interface and moving into the 21st century with a modern, intuitive and scalable management interface that was designed on top of a search engine platform, making it perfect for a transient environment within a virtual datacenter.  And because the interface is widget based, it can change, grow and adapt to whatever the individual needs or wants rather than being forced to use an interface designed by a developer who has never managed a virtual datacenter.  And just like the management component, the built-in reporting and alerting also make use of the search engine design, so it's quick and easy to extend the product to perform custom and shared queries to build new reports and new alerts, without having to wait for a product update from the vendor.  And don't forget about the troubleshooting, change tracking, sprawl detection and capacity planning aspects of the product!  Very powerful, easy to use, and highly scalable.  And it just keeps getting better as SolarWinds continues to build out the product's feature set and update the interface.  I'm no longer associated with the product, but I still highly recommend that people at least try it out.  You guys offer a fully functional 30 day trial, and I think people will really enjoy the different experience that it offers.

MT: Given your experiences, do you recommend other people get into blogging?

DM: Yes, absolutely. IF you have spare time and a passion for whatever it is you want to blog about. To me, those are key. If you aren't passionate about it, it probably won't last very long. Blogging can be very time consuming, but rewarding as well. There are plenty of virtualization-focused blogs out there, but current bloggers are always welcoming new bloggers with open arms. It's a great community of people, and I've made a number of good friends because of it. Now, when you go to a tradeshow like VMworld, it's like a reunion of sorts, because we bloggers may chat, talk, or Tweet one another over the course of the year, but it takes an industry event like this to bring us together face-to-face, since we are all spread out across the globe... but held together in one common bond by a love of this technology that we all blog about and cover in some shape or form.


Other IT blogger profiles:

Ryan Adzima, The Techvangelist

Bill Brenner, Salted Hash

Tom Hollingsworth, The Networking Nerd

Scott Lowe,

Ivan Pepelnjak, ipSpace

Matt Simmons, Standalone SysAdmin

Server Monitoring
Effective server monitoring is key to any successful business. It's an essential task: even a small glitch in server performance can lead to complications that hinder business operations, employee productivity, and customer service.

To tackle all these problems, it’s necessary to monitor all aspects of server health and performance.  Some of the benefits of server monitoring are:
• Improved system performance
• Proactive identification and correction of server problems
• Simplified server management
• Reduced IT infrastructure maintenance costs
• Reduced time to troubleshoot server issues
• Higher web application uptime


Monitoring Key Services & Performance
The most common cause of unscheduled downtime is a critical service stopping or stalling. In a Windows environment, the three most common critical applications that should be monitored are SQL Server, IIS, and Exchange, all used for mission-critical services. Exchange in particular is susceptible to one of its services stopping or stalling. If this happens, the results can be catastrophic, and it happens far too often.

CPU and resource overload can have a serious impact on application efficiency, especially for mission-critical applications. Take SQL Server as an example: if SQL queries take increasingly longer to complete, the result is an irritated user. If there are 500 end users and the typical query takes 50% longer, that's a lot of calls to the help desk.

How can you keep end users from calling the help desk?
• Actively monitor services and performance and enable alerts when thresholds are breached
• Look at historical performance data to determine if a performance issue is a spike or a trend.
• Monitor logs & correlate logs with performance data to more quickly find the root cause of the problem.
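The spike-versus-trend check from the list above can be sketched in a few lines. This is an illustrative heuristic, not any product's actual logic; the three-sample window and the threshold values are made up:

```python
# Classify a performance issue as a one-off spike or a sustained trend by
# looking at the most recent samples against a threshold.
def classify(samples, threshold):
    recent = samples[-3:]                      # last three polling cycles
    breaches = [s for s in recent if s > threshold]
    if not breaches:
        return "normal"
    return "trend" if len(breaches) == len(recent) else "spike"

cpu_history = [38, 41, 40, 39, 95, 42, 40]     # one breach in the window
print(classify(cpu_history, 80))               # spike
print(classify([40, 42, 85, 88, 91], 80))      # trend
```

A spike might warrant a note in the ticket; a trend is what should page someone before the help desk phone starts ringing.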


Monitoring Server Busyness
Monitoring the busyness of a server is also a key element of effective monitoring, as a busy server might not respond quickly to a request. The simplest method of measuring this is keeping a tab on the processor's time, measuring the total utilization when all of its processes are running simultaneously. If your machine is running several applications, it may be handling several server roles on your network. An alternative method of measuring server busyness is measuring processor contention, which indicates how different threads are fighting for the attention of the processors on your machine. In the case of multiple threads contending for use of the same processor, the system processor queue length helps ascertain how many more threads are waiting for processor time.
Other performance counters worth checking include CPU utilization and system calls per second, which measures how frequently the processor has to switch from user to kernel mode in order to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be; however, over a long period of time the value of this counter should remain fairly constant.
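The queue-length heuristic can be expressed as a tiny function. The two-waiting-threads-per-core limit below is a commonly cited rule of thumb, and the sample numbers are hypothetical:

```python
# Flag processor contention when the run queue holds more than a given
# number of waiting threads per core (rule of thumb: ~2 per core).
def contention(queue_length, cores, per_core_limit=2):
    waiting_per_core = queue_length / cores
    return waiting_per_core > per_core_limit

print(contention(queue_length=10, cores=4))   # 2.5 per core -> True
print(contention(queue_length=6, cores=4))    # 1.5 per core -> False
```

As with the counters above, a single breach means little; a queue that stays deep across many polling cycles is the signal worth alerting on.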

Temperature Monitoring
Environmental monitoring is also quite essential for your server rooms and data centers. Environmental conditions have a huge impact on how reliable and long-lived your servers will be. Bad environmental conditions can reduce the life of components, decrease reliability, and cause problems resulting in increased expense.
Leading research groups have estimated that threats from IT environmental issues will cost business and industrial organizations somewhere between $50 and $100 billion in downtime and related costs. The primary environmental threats in the data center are temperature, power, humidity, flooding, smoke, airflow, and room entry.
If you are looking for a solution to monitor hundreds of applications and hardware types, a tool which delivers smart alerting and reporting, try Server & Application Monitor to ensure you are comprehensively monitoring servers.


With all the natural disasters that have been making headlines, the last thing you want to be worried about is your office's communication network. Many businesses promise 99.9% up-time and (hopefully) make plans to ensure this during disasters. However, while we make plans for off-site back up, data recovery, electric generators, and emergency cooling, how many businesses plan for the network connecting their data to the rest of the world to go down?


Depending on how important continuous network connectivity is to your office, you might want to (re)consider what you'd do if land lines are down. For example, maybe you want to consider inflatable satellite antennas?  (I never knew I wanted to say that line.)


How it works


The dish is actually a thin, flexible, polyimide material that is enclosed in a giant inflatable ball. They look like person-sized exercise balls, or like Rover from The Prisoner. When the ball is properly inflated, the air pressure holds the material in the proper satellite dish form. After inflation, it acts like any normal, rigid-dish satellite antenna.




The original research was conducted by NASA during the 1960s to produce inflatable space structures. In the 80s and 90s, NASA and SRS Technologies coordinated to develop inflatable solar concentrators, which were then used to develop the inflatable satellite dish. Read more about it from NASA's Spinoff (PDF).


Disaster Planning


Inflatable satellite antennas aren't for everyone, obviously. They carry a high price tag that puts them out of reach for many companies. However, if your company is dedicated to continuous network connectivity, or if portable satellite antennas are already part of your disaster plan, this might be the technology you never knew you were looking for.


Satellite communications can provide internet access, VoIP, and video conferencing access. The inflatable antennas can provide all of this during high winds (sadly not hurricane force winds) and extreme temperatures.


If you don't have the money to invest in inflatable satellite antennas, you can use SolarWinds NPM to get notified when your network connection goes down and our Failover Engine to ensure 24x7 availability of your Orion servers within your network.

As promised in a previous post, I want to go a little deeper into a few of the steps outlined in brad.hale's great white paper, "How to Create the Ultimate Network Management Dashboard", that we've got up on our SolarWinds Resource Center. The white paper gives a brief rundown of the steps required to get a network management dashboard using SolarWinds NPM up and running, and, in this post and those following, I want to go into each step and provide a bit more detail with some links to even more detailed treatments.


The first step to creating your Ultimate Network Management Dashboard is the determination of who gets to see what on your network. Ask yourself two questions:


  1. What kinds of users do I have for my dashboard? (i.e. Who gets to see this awesomeness that I am now creating?). It's likely that you've got multiple levels of responsibility in your IT staff, and SolarWinds lets you respect variable levels of authority with correspondingly variable levels of access.
  2. What parts of my dashboard do my different users get to see? (i.e. How much of this awesomeness do they get to see?) You may have your network monitoring staff and their responsibilities divided up by geographic area, or you may want to divide your management load by domain or by IP space. Whatever your setup, SolarWinds can respect it.


Look at your staff. If you want to do so, SolarWinds can help you define the view that each one of your people gets to see. Once you've determined who gets to look at your dashboard, you need to figure out how each user is going to log on and what they'll see when they get there. Our documentation can walk you through the process of defining user accounts, but we've also got a great video on the process, too, that I'd highly recommend.


Next time, I'll go into the process of logically dividing your network so you can make sense of all the different types of devices and their relations that you've got out there.


Last week I had the pleasure to interview Rod Trent (@rodtrent) of


JK: How did myITforum get started?

RT: It’s an interesting story of course.  I’ve lived long enough to understand that the best things in life are those that are unintended.  Back in the 90s, Microsoft SMS (Systems Management Server) 1.x was released.  At that time it was like shareware (remember that?); the product was not that great.  Back then I worked at one of the big 5 accounting firms.  In the 90s the economy was similar to what it is today, and the company I worked for was laying off people, especially IT support.  At that time, IT admins were considered glorified secretaries instead of the professional position it is today, and we were down to two people supporting nearly 500 folks.  So, we invested time to figure out how we were going to do more with less.  We figured out we could use this product (SMS) to solve a lot of problems - all with just two people.  At that point, I got systems management.  A light bulb went off.  SMS was a crappy product back then, and when I found a workaround to do something, I would post my tips and tricks on the web.  That proved to be valuable, and I found that other people were in the same situation I was.  Remember, this was at a time when AOL was the internet, so there were not a lot of resources out there.  The site became popular and grew.  At some point we decided to make it official and branded it myITforum in 2001.


Initially we just supported SMS, then Microsoft acquired a monitoring product from NetIQ and now there are a slew of other systems management products that are now offered as a suite in Systems Center.  Over the years, the product  has improved but it needs a lot of support because it is so feature rich.


Over the last 3 to 4 years Microsoft has seen significant growth in its Systems Center customer base.  As System Center has grown, myITforum has grown, as a supporting community.  In addition to supporting Systems Center, we see myITforum growing into a community that is focused on all things systems management (mobile devices, servers, workstations, etc.) – regardless of whether a company is using scripts, Microsoft tools or other tools like Altiris, now owned by Symantec.


JK:  At what point did you start managing myITforum full time? How do you stay in tune/keep your tech credibility?

RT: About 2001.  I am now focused on community organization and community management.  I don’t have as much hands-on with the product today.  However, I have a unique vantage point. We see issues come in, and we have some deep ties with Microsoft, so we really know what is going on in the market.  We can track issues from minor to major, even bugs - and this is fed back to Microsoft to improve their product.


I have a “command & control center” with 4 monitors, and at any time I am writing, researching, monitoring myITforum and other communities.  It’s important to folks that visit the community that they have up to date news and can be notified of serious systems management situations.


We also provide information on trends, like the cloud.  Most sysadmins cringe when they hear that word.  However, there is now value associated with it, something they can use.  So, myITforum’s job is to identify the value pieces of the cloud, translating concepts to actual things to pay attention to.


JK: With regard to the cloud - what should sysadmins be paying attention to?

RT:  The IT community needs to be wary - with reports in December about Netflix being down, any technology that can be taken down by weather conditions or user error is something you need to be realistic about.  A lot of folks are promoting the cloud as the future - and it’s really here, but it can go down at the worst time possible.  The cloud has not yet evolved to having the reliability of what we are used to with on-premise networks.  You really need to have things in both places.  Don’t just go with a public cloud - have something that provides redundancy.  Be realistic and investigate what makes sense for the business. The cloud can be used for backups or email, or for supporting people who are working remotely.  Microsoft created the concept of the hybrid cloud, which is a combination of on-premise stuff and apps or infrastructure hosted by the cloud provider.  This is a good concept because it allows choice for security, availability, and redundancy.  It allows IT folks to offload work that makes sense for their business.

JK: What are some of the trends or issues that you have seen with System Center 2012?

RT:  With any product, especially one as large as SCCM, you need to be prepared and plan.  System Center is a suite now with some integration, but not 100%.  Orchestrator is used to tie these products together.  These products are extremely useful and powerful.  Any product that touches endpoints and critical services can be dangerous if you don’t use it properly - not just System Center, but any product.  One example: an Australian bank last year created a task sequence incorrectly, and it reformatted all the hard drives in the organization.


There is a lot of learning that goes with any new product or any new version of a product.  SCCM 2012 is a completely new model of endpoint management.  So those familiar with SCCM 2007 need training, but also even those already familiar with SCCM 2012 have to learn about SP1 because there are so many additions in the service pack.  One way to get up to speed quickly is of course the Microsoft Management Summit which is  coming the first part of April.  This is one of the best run conferences because it is so community driven.  This conference was actually started by myITforum back in the day before Microsoft took it over.  Service Pack 1 is coming out soon, and one of the best ways to learn about it is to attend MMS 2013.  Training is a main focus of the conference this year.  You can spend thousands of dollars and weeks of classroom training for your organization or you can go to MMS for a single week for just $2-3,000.


Stop by myITforum’s booth at MMS, and get in on the twitter army and meet and geek fun.




Other recent blogger profiles:

Ryan Adzima, The Techvangelist

Bill Brenner, Salted Hash

Tom Hollingsworth, The Networking Nerd

Scott Lowe,

Ivan Pepelnjak, ipSpace

Matt Simmons, Standalone SysAdmin

SolarWinds provides a solution for managing storage performance within virtualized environments by using the  Storage Manager plugin enabled in Virtualization Manager's Configuration Wizard. Using these solutions can help you identify, diagnose, and correct issues in your virtual environment infrastructure.


Resource Contention in Virtualized Environments


A major resource drag in virtualized environments stems from storage contention. Yet locating the source of contention is a challenge especially as storage monitoring solutions tend to be vendor-specific and do not give the data necessary for finding and debugging storage contention problems within virtual environments. Often there is plenty of available storage capacity within the infrastructure, but the ability to dynamically allocate resources so Virtual Machines (VMs) don't interfere with each other is murky or unavailable. One solution is using Storage Manager and Virtualization Manager together as a dynamic tool that spans across vendors, platforms, and hardware types, and provides an end-to-end troubleshooting solution from the VM all the way down to the spindle. Once you are able to get a full picture of the infrastructure, you can quickly pinpoint where your bottlenecks are located, and you can identify VMs that are using more resources than expected, so you can take corrective action to reallocate your resources to meet demand.

The Virtualization Manager / Storage Manager Connection


Virtualization Manager provides tools to help manage your virtualized environment. Its high-level dashboard provides a quick view of what's happening across your virtual environment infrastructure. This example shows how Virtualization Manager and Storage Manager can be used together. Start from the high level map across the virtual environment infrastructure to see what's happening in each cluster.



From the Environment map, you can get a quick view of the VMs, hosts, datastores, clusters, and apps. By clicking the Clusters tab, you can get a quick view of the health of the clusters.



In this example, the Curitiba ESX cluster has a warning, so we'll drill down further to get a clearer picture of what's happening.




Focusing on the LUNs associated with this cluster, we can see the back end datastores associated with this cluster.




In this example, the shared LUNs have hyperlinks into Storage Manager so you can take a deep dive into the storage back-end. The link from Virtualization Manager drops you into the Storage Manager's target details view.




The information in the target details provides some of the information provided in Virtualization Manager as well as more detailed information including the Total IOs per second and the IO Response Time.



In this example, we see that one of the VMs is using significantly more IOs. From here, you could explore what application is running on that VM and see if it needs its own RAID group so it's not disrupting other VMs.
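Illustrative only: the drill-down described here amounts to ranking the VMs sharing a LUN by IOPS to find the noisy neighbor. The VM names and figures below are made up, not output from any SolarWinds product:

```python
# Hypothetical per-VM IOPS readings for one shared LUN.
vms_on_lun = {"web-01": 210, "web-02": 190, "db-batch": 1850, "test-vm": 45}

# The noisy neighbor is simply the VM with the highest IOPS.
noisiest = max(vms_on_lun, key=vms_on_lun.get)
print(noisiest, vms_on_lun[noisiest])         # db-batch 1850
```

Once the outlier is identified, the follow-up questions are the ones above: what application is driving the load, and does it belong on its own storage?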


So within a few clicks, you can see how resources are shared across your infrastructure, and you can proactively provision your storage so quality of service is met. See the YouTube presentation, Managing Storage Performance in Virtualized Environments, for the complete demo.


Setting up Storage Manager to work within Virtualization Manager


What you need before you begin:

Storage Manager powered by Profiler

Virtualization Manager

Network address for Storage Manager

Username and Password for Storage Manager


Enable the Storage Manager plugin from the Virtualization Manager's Configuration Wizard.


Configure Storage Manager to interact with Virtualization Manager.


See the Basic Setup page for more information about configuring Storage Manager within Virtualization Manager.



A few weeks ago, we surveyed the SolarWinds Community to find out what challenges they are facing with IT alert management. 156 people filled out the survey. I posted some of those stats, here. Since we have established that alert management is a pain in many organizations, let's talk about how that pain is felt. In that same survey, we asked: "What are the biggest challenges you face with alert management?" and the responses, in a word cloud, look like this.



The most common responses were:

  • False positives
  • Getting a timely response
  • Getting the alerts to the right person on the right team
  • Having enough resources to handle the alerts


At the same time, the consequences for missing an alert, or not addressing it in a timely manner, were pretty dire. These included:


  • Critical system events missed
  • Unscheduled downtime
  • Disk fills
  • Customer impact/staff impact
  • Delay in problem identification and resolution
  • Lost revenue
  • Lost data
  • Missed SLA targets


Some respondents identified very uncomfortable personal consequences including: "having to attend meetings with angry customers" and "customer complaints resulting in having my boss sitting in my office" and even, "it comes up during review time." Ouch.


So we asked, what features are missing from your alert management process? And this is what the respondents told us.



Do you agree? Are you feeling the pain of alert management in your organization? Where is that pain being felt, and what would make it easier in your organization?

IPv6 Subnet Masking


IPv6 subnet masking is similar to IPv4, with two key differences: the way IPv6 addresses appear, and what actually gets masked.

IPv6 uses 128 binary digits for each IP address, as opposed to IPv4, which uses 32. The 128 binary digits are divided into 16-bit words. Since representing 128 bits in IPv4's dotted-decimal octet notation would be unwieldy, we use a hexadecimal (base-16) numbering system instead.

Each 4-digit hex word represents 16 binary digits (four characters at 4 bits each), for example:

  • Bin 0000000000000000 = Hex 0000 (or just 0)
  • Bin 1111111111111111 = Hex FFFF
  • Bin 1101010011011011 = Hex D4DB


So, an IPv6 128-bit binary address is represented by 8 hex words separated by colons:
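The binary-to-hex mapping is mechanical, so you can verify it with a short script. This is an illustrative Python sketch (the helper name and the sample address are mine; the address comes from the 2001:db8::/32 documentation range):

```python
def word_to_hex(bits: str) -> str:
    """Convert a 16-bit binary string into a 4-digit hex word."""
    return format(int(bits, 2), "04X")

print(word_to_hex("1101010011011011"))  # D4DB, as in the list above

# Format a 128-bit value as 8 colon-separated hex words
addr = 0x20010DB8000000000000000000000001
digits = format(addr, "032X")
print(":".join(digits[i:i + 4] for i in range(0, 32, 4)))
# 2001:0DB8:0000:0000:0000:0000:0000:0001
```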



With IPv4, every IP address comes with a corresponding subnet mask. IPv6 also uses subnets, but the subnet ID is built into the address. The IPv6 equivalent of an IPv4 /24 subnet is a /64, which contains 64 network bits and 64 host bits. Regardless of the number of hosts on an individual LAN or WAN segment, every multi-access network requires at least one /64 prefix.

Each hex character represents 4 bits (a nibble). A nibble boundary is a network mask that aligns on a 4-bit boundary. Masks on nibble boundaries make the address sequence easier to read and follow, reducing misconfigurations.
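Python's standard ipaddress module is a handy way to see the /64 split and the nibble boundary in action. A brief sketch, using an example address from the 2001:db8::/32 documentation range:

```python
import ipaddress

net = ipaddress.IPv6Network("2001:db8:abcd:12::/64")

# The first 64 bits identify the network; the remaining 64 are host bits
print(net.network_address.exploded)  # 2001:0db8:abcd:0012:0000:0000:0000:0000
print(net.num_addresses == 2 ** 64)  # True: 64 host bits

# /64 falls on a nibble boundary because 64 is a multiple of 4 bits
print(net.prefixlen % 4 == 0)        # True
```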

SolarWinds IP Address Manager (IPAM) provides a Subnet Allocation Wizard to help you efficiently organize your managed IP address space into subnets that are sized appropriately for the extent and traffic of your network.

The Subnet Allocation Wizard displays a list of available subnets. You can quickly choose from this interactive list to allocate new subnets on your network.

Every time you install Hyper-V, you’re presented with a diverse landscape of platform options that, left untended, can overgrow even the most capable IT team.  Counting the legacy existence of Windows Server 2008 R2, there are now (theoretically, at least) a dozen different ways you could install an instance of Hyper-V on a system.  Hyper-V for Windows Server 2008 x64 SP2 is excluded from this discussion due to teething/setup issues.  There are notable feature enhancements in Windows Server 2012 (Hyper-V v3) that you may wish to consider in choosing a host OS. Making the best decision for a host operating system can save you a whirlwind of complications down the road as your environment expands. Expand your Hyper-V monitoring capabilities.

Windows Server (Full Installation) with Hyper-V Role

If you’re installing Hyper-V for the very first time, I would highly encourage you to start with the Full Installation of Windows Server. The primary advantage here is that you’ll have access to the Hyper-V console on the server, and not have to deal with the added complication of remote administration.


Another reason that you might choose the Full Installation over Server Core or the free Hyper-V Server is if you’re installing a non-production environment, and also need to run additional roles on the Server OS. Not all roles are available in Server Core, and managing a multi-role Server Core system can be a significant headache. For production systems, you should plan to make a Hyper-V server dedicated to that role.


One might be inclined to think that the presence of the GUI in the Full Installation has performance implications, but in practice the impact is negligible. More significant is that the additional baggage in the Full Installation means additional patch management overhead.

Windows Server (Server Core) with Hyper-V Role

If you’re installing Hyper-V in a production environment, and will be installing at least one instance of a Windows Server OS that is not yet licensed, then this is the best installation option. The Server Core installation removes a number of unnecessary OS components, which significantly reduces the patch management effort for your host system, while still providing the additional virtualization licensing.


Consider this: The only thing worse than having to reboot a production server … is having to reboot a production virtualization host running multiple production servers.


Note, however, that if you choose to install Windows Server 2012 (Server Core), this will require an installation of Windows 8, or another installation of Windows Server 2012, in order to access the Hyper-V console remotely. Windows Server 2012 cannot be administered from a Windows 7 or Windows Server 2008 R2 system.

Hyper-V Server

The free Hyper-V Server is the best fit when you’re looking to virtualize non-Windows installations (e.g. Linux, UNIX), or where you already have licenses for the existing Windows operating systems. If you’re focusing on server consolidation and primarily doing physical-to-virtual migrations of existing (licensed) Windows servers, then this is the best place to start; however, like the Server Core installation, if you opt for the Hyper-V v3 Server, it will require you to work from the command line, or to manage the server with Windows 8 or another Windows Server 2012 system. The Hyper-V v2 Server can be managed from Windows 7.


Also worth noting, both the Server Core and Hyper-V Server installations can also be managed from System Center Virtual Machine Manager.  Hyper-V monitoring simplified!







Ivan Pepelnjak brings many years of internetworking experience and broad technical knowledge to his blog.  He cut his teeth as a programmer before getting the network bug. 

He’s written several popular books for Cisco Press, and in addition to his day job as Chief Technical Advisor to NIL Data Communications, he scratches his CLI itch with ninja consulting gigs and hosts deep-dive technical workshops for IT.  I learned more about real-world IPv6 from his blog than any other single source, and his daily updates are part of my morning reading list.

Connect with Ivan:


Twitter handle: @ioshints
PH: What’s the address of your blog?


PH: What do you blog about?

IP: I’m focusing on advanced networking technologies, which currently means data center networking and large-scale virtual networks. I covered a lot of IPv6 topics in the last few years, but that protocol should be well understood by now (at least by those that are interested in the future of Internet), so the number of IPv6 posts on my blog has been steadily dropping.

PH: Why did you decide to start the blog?

IP: I always wanted to explain how interesting but rarely used features in Cisco IOS work, and blog seemed to be a perfect format. My readers slowly pulled me into other interesting directions, so the blog posts became truly multi-vendor and now focus more on emerging technologies and design issues than features of a particular networking software.

PH: What are some of your most popular posts?

IP: Introductory level essentials I wrote years ago are still getting the most page views. The two most-popular posts are default username on Cisco routers and BGP AS path prepending. However, I prefer to judge the quality of my posts by the interactions and responses they generate, not by the number of random page views (usually caused by Google’s search results). Blog posts from 2012 that generated copious reader responses described the new networking features of Hyper-V, virtual firewalls, and Juniper’s QFabric ... but the absolute winners were those that documented the perils of large layer-2 domains and spanning tree problems, obviously two of the most pressing real-life problems faced by data center networking engineers.

PH: What is your day job?

IP: It seems like blogging and webinars became a large portion of my day job in the recent years, but in real life I’m still Chief Technology Advisor @ NIL Data Communications, a European system integration, professional services and training company.

PH: What are your hobbies?

IP: Rock climbing, mountain biking and woodworking.

Other recent blogger profiles:

Ryan Adzima, The Techvangelist

Bill Brenner, Salted Hash

Tom Hollingsworth, The Networking Nerd

Scott Lowe,

Matt Simmons, Standalone SysAdmin

Meeting Security and Compliance Customer Needs



EasyStreet uses SolarWinds Log & Event Manager (LEM) to provide Security Information and Event Management (SIEM) to their private cloud customers. As a cloud services provider, EasyStreet offers a spectrum of services, with SIEM and Log Management as recent additions.


I spoke with Byron Anderson to find out the backstory.  Here’s Byron:


“It all started with a single healthcare customer with a private cloud and mandatory HIPAA regulatory requirements. The customer had one employee spending over a half-day per week manually reviewing log files. Needless to say, manually reviewing log files is yawn-provoking and generally not a good use of human time. So, EasyStreet came up with a new offering for this customer to provide log management using SolarWinds LEM.”

“After this initial implementation, more customers came to EasyStreet with compliance and SIEM needs. EasyStreet now has two distinct markets for their offering in their private cloud customer base: customers needing SIEM for security analysis and automated response, and customers needing to comply with standards such as HIPAA and PCI. At this point, EasyStreet has several customer deployments and several more in the pipeline.”

EasyStreet sets up and configures the LEM appliance for the customer; a dedicated LEM appliance is required for each customer. They also provide configuration services that apply LEM's built-in security and compliance best practices while tailoring LEM to each particular customer. EasyStreet creates Service Level Agreements (SLAs) and escalation policies for each of their customers.


Each customer has unique needs, including:

  • Whether EasyStreet or the customer does ongoing monitoring
  • Notifications required
  • Reporting required


Read Log & Event Management for Security & Compliance to learn more about working with EasyStreet to implement SolarWinds LEM at your company. Or read more about SolarWinds LEM to learn about its capabilities.

Everything seems to be going virtual these days, doesn’t it? This is not always a great thing (think of the virtual operators who answer your calls to everyone from your doctor’s office to the state motor vehicles agency). But sometimes, as in the case of a virtual machine, virtualization is a great thing, because it’s not only cheaper but also more environmentally friendly than a physical machine.


Cheaper and Greener


Cheaper and greener? That’s a big deal. Especially when you’re on a tight budget and you need a couple of new servers. If you can do it (see Hardware Requirements below), a virtual machine is the way to go. Not only are you likely paying only for software, which is always cheaper than paying for hardware, but you’re also making the environmentally conscious choice. You’re not buying hardware that will use electricity to run, require air conditioning in its own special room or building, and someday end up in a landfill.


Hardware Requirements


Virtual machines, however, may not be perfect for every situation. For example, a virtual machine runs best when it is on a physical machine with at least twice the resources required for the virtual machine. If the virtual machine you want to run is not especially resource intensive, this will probably not be an issue. And if you already have a very large physical server, you may be able to run a couple of virtual machines on it.


Managing Virtual Machines


Managing your multi-vendor VMware and Hyper-V virtual machines at the same time can present some special challenges, ranging from capacity planning to virtual machine operations management. SolarWinds can help, with a range of information and tools. You might want to take a quick look at SolarWinds' whitepaper, Top 5 Things You Need in a Virtualization Management Solution, for detailed information on what you’ll need to successfully go virtual.

In part 2, I explained how to create the form necessary for the Quad NOC browser for use with SolarWinds products like SAM and NPM. In this part I'll give you all the code you need to get this up and running. Check out parts 1 and 2 if you have not done so already.


This Browser is not Feature Rich.

Exactly. And there's a reason for that: I don't know what features you want. The point of this exercise is for you to continue to improve upon this bare bones browser to suit your needs. Once you see the code (below) and play around with it, you may be inspired to add some features of your own.


Feature Ideas

There are countless things you can have your new browser do. You are just limited by your time and imagination. Here are just a few ideas to get you started:

  • Add a Home and/or Refresh button
  • Have more than four windows
  • Add dynamic bookmarks
  • Add a sidebar that shows reports/alerts
  • Add search engines that you use
  • Add the ability to email a page
  • Make the browser always on top
  • Add a note pad


Compile the Code

Below is the code you will need to add to your Quad NOC project in Visual Basic. In your project, go to the Code view and delete everything. Next, copy and paste the code below. When done, you should have something that looks like this:


If all is well, hit the Play button, highlighted above. The browser should appear before you, working as planned. If everything works, compile your code into an executable.


Compiling your code into an executable:

  1. From the Build menu, select Build...
  2. If successful, your executable should reside in a path similar to this: C:\VS2010\QuodNOC\QuadNOC\bin\Release


Copy and Paste Me - Read the comments in green before compiling



Public Class Form1

'Two variables used throughout the form

  Dim Zoomtoggle As Boolean

  Dim maxsize As Integer


Private Sub Form1_Load(sender As System.Object, e As System.EventArgs) Handles Me.Load

      ' This section tells the browser what to do when it starts up

        SplitContainer1.Top = 25                  'The top of this control starts 25 pixels from the top of the form

        WebBrowser1.Top = 25                      'The top of this control starts 25 pixels from the top of the form

        WebBrowser2.Top = 25                      'The top of this control starts 25 pixels from the top of the form

        SplitContainer1.Height = Me.Height - 70  'The height of this control is the height of the form minus 70 pixels

        SplitContainer1.Width = Me.Width - 30    'The width of this control is the width of the form minus 30 pixels

        Zoomtoggle = False                        'Zooming starts out disabled

        maxsize = Me.Height * 2                  'MAXSIZE is set to twice the height of the main form

End Sub


Private Sub Form1_Resize(sender As Object, e As System.EventArgs) Handles Me.Resize

      ' This section tells the browser what to do when it is being resized

        SplitContainer1.Top = 25

        WebBrowser1.Top = 25

        WebBrowser2.Top = 25

        SplitContainer1.Height = Me.Height - 70

        SplitContainer1.Width = Me.Width - 30


      ' If chZoom is checked, then scale the view, otherwise leave the magnification at 100%

        If Zoomtoggle = True Then

            Dim Scale As Integer

            Scale = Int(Me.Height * 100 / maxsize) 'Scale is the form height as a percentage of Maxsize, truncated to a whole number

            ZoomPage(Me.WebBrowser1, Scale)        'Use the Zoompage function to scale this browser

            ZoomPage(Me.WebBrowser2, Scale)

            ZoomPage(Me.WebBrowser3, Scale)

            ZoomPage(Me.WebBrowser4, Scale)

        Else

            ZoomPage(Me.WebBrowser1, 100)          'Otherwise keep the browsers at 100% zoom

            ZoomPage(Me.WebBrowser2, 100)

            ZoomPage(Me.WebBrowser3, 100)

            ZoomPage(Me.WebBrowser4, 100)

        End If

End Sub



Private Class OleCmd

        'Constants for the browser's ExecWB optical zoom command

        Public Enum OLECMDEXECOPT As Integer

            OLECMDEXECOPT_DONTPROMPTUSER = 2

        End Enum


        Public Enum OLECMDID As Integer

            OLECMDID_OPTICAL_ZOOM = 63

        End Enum

End Class


Private Sub ZoomPage(ByVal wb As System.Windows.Forms.WebBrowser, ByVal Factor As Integer)

      'Scales the page in the given browser to Factor percent via the underlying ActiveX control

        Try

            Dim ActiveXInstance As Object = wb.ActiveXInstance()

            ActiveXInstance.ExecWB _
                ( _
                OleCmd.OLECMDID.OLECMDID_OPTICAL_ZOOM, _
                OleCmd.OLECMDEXECOPT.OLECMDEXECOPT_DONTPROMPTUSER, _
                DirectCast(Factor, Object), _
                IntPtr.Zero _
                )

        Catch ex As Exception
            'Ignore zoom errors (for example, before a page has finished loading)
        End Try

End Sub


Private Sub cmdGo_Click(sender As System.Object, e As System.EventArgs) Handles cmdGo.Click

      'When the Go button is clicked, decide which Quadrant Radio Button is checked and then navigate to the address typed in the text box

        If Q1.Checked Then WebBrowser1.Navigate(TextBox1.Text)

        If Q2.Checked Then WebBrowser2.Navigate(TextBox1.Text)

        If Q3.Checked Then WebBrowser3.Navigate(TextBox1.Text)

        If Q4.Checked Then WebBrowser4.Navigate(TextBox1.Text)

End Sub


Private Sub cmdBack_Click(sender As System.Object, e As System.EventArgs) Handles cmdBack.Click

      'When the Back button is clicked, decide which Quadrant Radio Button is checked and then navigate back to the previous page.

        If Q1.Checked Then WebBrowser1.GoBack()

        If Q2.Checked Then WebBrowser2.GoBack()

        If Q3.Checked Then WebBrowser3.GoBack()

        If Q4.Checked Then WebBrowser4.GoBack()

End Sub


Private Sub cmdForward_Click(sender As System.Object, e As System.EventArgs) Handles cmdForward.Click

      'When the Forward button is clicked, decide which Quadrant Radio Button is checked and then navigate forward.

        If Q1.Checked Then WebBrowser1.GoForward()

        If Q2.Checked Then WebBrowser2.GoForward()

        If Q3.Checked Then WebBrowser3.GoForward()

        If Q4.Checked Then WebBrowser4.GoForward()

End Sub


Private Sub chZoom_CheckedChanged(sender As System.Object, e As System.EventArgs) Handles chZoom.CheckedChanged

        On Error Resume Next

      ' If chZoom is checked, then scale the view, otherwise return the magnification to 100%

        Zoomtoggle = Not Zoomtoggle


        If Zoomtoggle = True Then

            Dim Scale As Integer

            Scale = Int(Me.Height * 100 / maxsize)

            ZoomPage(Me.WebBrowser1, Scale)

            ZoomPage(Me.WebBrowser2, Scale)

            ZoomPage(Me.WebBrowser3, Scale)

            ZoomPage(Me.WebBrowser4, Scale)

        Else

            ZoomPage(Me.WebBrowser1, 100)

            ZoomPage(Me.WebBrowser2, 100)

            ZoomPage(Me.WebBrowser3, 100)

            ZoomPage(Me.WebBrowser4, 100)

        End If

End Sub


Private Sub chRefresh_CheckedChanged(sender As System.Object, e As System.EventArgs) Handles chRefresh.CheckedChanged

      ' If chRefresh is checked, start the 60 second timer

        Timer1.Enabled = chRefresh.Checked    'Start or stop the 60-second refresh timer

End Sub


Private Sub Timer1_Tick(sender As System.Object, e As System.EventArgs) Handles Timer1.Tick

      'When enabled, the timer refreshes all four browsers every 60 seconds

        WebBrowser1.Refresh()

        WebBrowser2.Refresh()

        WebBrowser3.Refresh()

        WebBrowser4.Refresh()

End Sub


Private Sub TextBox1_KeyDown(sender As Object, e As System.Windows.Forms.KeyEventArgs) Handles TextBox1.KeyDown

        On Error Resume Next

      'Determine when the user presses Return (Enter). When Return is pressed, then decide which browser navigates to the address in the text box by checking which radio button is selected

        If e.KeyCode = Keys.Return Then

            If Q1.Checked Then WebBrowser1.Navigate(TextBox1.Text)

            If Q2.Checked Then WebBrowser2.Navigate(TextBox1.Text)

            If Q3.Checked Then WebBrowser3.Navigate(TextBox1.Text)

            If Q4.Checked Then WebBrowser4.Navigate(TextBox1.Text)

        End If

End Sub


Private Sub LinkLabel1_LinkClicked(sender As System.Object, e As System.Windows.Forms.LinkLabelLinkClickedEventArgs) Handles LinkLabel1.LinkClicked

      'When this link is clicked, decide which of the four browsers will navigate to the hard coded address

        If Q1.Checked Then WebBrowser1.Navigate("")

        If Q2.Checked Then WebBrowser2.Navigate("")

        If Q3.Checked Then WebBrowser3.Navigate("")

        If Q4.Checked Then WebBrowser4.Navigate("")

End Sub


Private Sub LinkLabel2_LinkClicked(sender As System.Object, e As System.Windows.Forms.LinkLabelLinkClickedEventArgs) Handles LinkLabel2.LinkClicked

      'When this link is clicked, decide which of the four browsers will navigate to the hard coded address

        If Q1.Checked Then WebBrowser1.Navigate("")

        If Q2.Checked Then WebBrowser2.Navigate("")

        If Q3.Checked Then WebBrowser3.Navigate("")

        If Q4.Checked Then WebBrowser4.Navigate("")

End Sub

End Class

If you guessed that, among US government agencies, the Central Intelligence Agency would be likeliest to employ the most advanced cryptography in its telecommunications, then the story of CIA Director David Petraeus’ lapses in security during his affair with his biographer, Paula Broadwell, will surely surprise you.


The two paramours exchanged clandestine email through a method that Al Qaida apparently used first: sharing access to a single email account and leaving messages for each other as drafts. In terms of protecting the privacy of messages, this method works only as long as third parties do not gain access to the account. Since messages held in draft form cannot be encrypted, a third party with account access would be able to read all drafts. For the same reason, even without accessing the account, a third party that knew to intercept the transactions saving a draft to the email provider’s server would see the message in clear text.


Why did the proverbial top spy choose not to encrypt his email messages? Certainly not because his Gmail account was personal; there is no line between professional and personal when it comes to the need for security, regardless of the communication channel. As a matter of course, with the top spy one would expect top security.


In fact, though, as insecure as the method is to anyone holding the account credentials, its use in this case was not what gave the pair away. The FBI discovered the affair, and the FBI would not have been looking had Paula Broadwell not sent aggressive email messages to a woman named Jill Kelley. And the FBI would not have been looking so carefully had Kelley not sought the help of her FBI agent friend, Friedrich Humphries (III), who was so dogged in pushing for an investigation of the harassment that he pursued another path when his own organization initially let the matter sit. Because Humphries contacted congressional politicians, who contacted the FBI Director, the agents on the case became keenly motivated.


Broadwell apparently used the same email account to harass Jill Kelley that she used to share draft messages with Petraeus. Once the FBI correlated email messages with IP addresses in a pattern of regions that matched Paula Broadwell’s travel schedule, they got access to the relevant Gmail account. They found the draft correspondence, inferred that Broadwell was having an affair with a high-ranking government official, and pursued the forensic IT trail until they found Petraeus at the other end.


Protecting Classified Information and Intellectual Property

Most government and business offices have an IT policy about the kinds of email you should send exclusively through the institution’s own email system. Usually an issued laptop or PC comes with an email client already configured with appropriate encryption settings.


Preserving information can be as important as hiding it from those who should not see it. In addition to using an appropriate AES cipher to secure message content, you should preempt and mitigate data loss by monitoring the email server for events and creating triggers that send real-time alerts. As an example, the SolarWinds Network Performance Monitor and Server and Application Monitor products together provide node and system/application early detection and advance warning features.


Is your company considering cloud computing? Cloud computing comes in many different flavors, but the following are some of the most popular forms:



These three basic forms of cloud computing deliver on-demand technology through the Internet.


Software as a Service (SaaS)


SaaS is the most commonly used type of cloud computing. The SaaS approach delivers fully functioning software through the Internet whenever the customer needs it. You log on to your application site and get to work. In the past, these types of applications would be installed on the customer’s desktop and would require licensing and activation. But with SaaS, the customer doesn’t need to purchase or update licenses, because the SaaS provider stores the application on its servers.


Platform as a Service (PaaS)


PaaS providers offer customers space to develop, publish, and store new web applications. In the PaaS model, users don’t need to install, operate, and maintain a platform on their own hardware. Instead, they simply access the platform as needed. A typical PaaS solution might be a development server, in which the customer uses the PaaS provider's Application Programming Interface (API) to create their applications.


Infrastructure as a Service (IaaS)


IaaS is the most basic cloud service, providing customers access to web storage space, servers, and Internet connections. The IaaS provider owns and maintains the hardware and customers rent space and basic services. These services can include email, file storage, and networking through the use of a third-party provider. In an IaaS environment, the customer doesn’t need to employ their own technical specialists.


Need More Info?


Sound good? Are you ready to migrate your e-mail, software, and app development over to the cloud? You’ve got a lot riding on your data and apps, so get some more information before you make your move to the cloud.


Consider the following:


  • How much does it cost you when your cloud provider has an outage or a security breach and you can’t access your data?
  • What about if you need additional services, like testing, monitoring, and extra security for your data? How do those extra services affect your cloud computing fees?


See what SolarWinds has to say before making that move. See the video, Cloud Computing 101 - the Need to Know Basics, and the whitepaper, Cloud Computing and its Effects on Network Management, for more info on what you need to know about cloud computing.

Helping you, a present or future SolarWinds user, to determine how best to manage your network is pretty much Job #1 for me and the rest of us posting on Geek Speak. A lot of the time, hopefully, that means we are providing some great new IT insights that will blow your mind. Sometimes we don't. And every now and then we point you to some other great information we've uncovered somewhere else that is pretty cool.


This is one of those times.


As it is, I want to send you over to the SolarWinds Resource Center, and, since network monitoring is sort of my bailiwick, I'd like to direct your attention to one whitepaper in particular: "How to create the Ultimate Network Management Dashboard" (pdf, 971kb).


Our very own brad.hale whipped this up for you. Here's a little taste:

Dashboards are the visual interfaces that present to us the status overview of all the critical elements that are monitored by the network management system (NMS). Without an effective and user-friendly dashboard to display monitoring results, an NMS will not fully deliver on its promise.


Here are some top reasons why effective dashboards are vital in an NMS:

  • A network admin needs to address issues immediately and a dashboard must be able to display actual data in real-time in easily conceivable formats.
  • Dashboards must be easily customizable to allow the network administrator to create tailor-made views to present data.
  • The extent of presenting data may vary between various user groups in an organization. Having a flexible filtering mechanism to choose display data for senior executives, business units or compliance will make reports more meaningful.
  • Allowing the network administrator to create different login accounts and issue tractable access rights will simplify management efforts.
  • To be able to provide adept alerting and detailed reporting capabilities, the dashboard must support a deep-dive approach to reach the very root of the network problem and show the cause of failure of an event.
  • Navigation will be a key aspect of having a dashboard put to detailed use. Navigation between screens, nodes, maps, views, etc. should be easy to execute.
  • Integration with other products, should a need arise to combine the monitoring view with compatible software or appliances, can be a sought-after feature for network and system administrators.
  • Enabling the possibility to interact with a networking community to quickly access discussions, ideas, suggestions and subject matter expertise will always be a valuable add-on.
  • Dashboards are the foundation on which the integral network monitoring features of alerts and reporting are built. To provide the ultimate network management dashboard is to complement the alerting and reporting features with high proficiency.


He follows this introduction with a few straightforward steps to set up your ultimate Network Management Dashboard. In future posts, I'll come back to expand a few of his steps. Of course, if you've already got a pretty sweet setup, you can always cruise around thwack to find other SolarWinds users with whom you can share (i.e. show off) your own NMS mods.

Walking the Line between being Interesting and being Mean



I just had the pleasure of speaking with Bill Brenner, author of the Salted Hash – IT security news blog on CSO.  It’s one of my favorite security blogs, because it has new security stuff (not a rehash of old boring topics, pardon the pun), it’s painless to read and consistently intriguing.


Bill’s been blogging on security since 2005. He is, in fact, not only a blogger but a journalist, reporter, columnist, and podcaster with over 20 years of journalism experience. His favorite blog subject is helping security professionals communicate more effectively – a problem exacerbated by the colorful personalities and rock-star egos abounding in the security profession.


Some of Bill’s recent posts are on IT security things you might expect, like “new security features in Firefox 18” and “Your January 2013 Patch Tuesday update,” but peppered in there are provocative topics like “When American drones kill American citizens.” The blog is a great way for security admins and engineers to keep up on current events.


Bill’s also a judge for the Security Blogger Awards, which holds a meetup at the yearly RSA show. In his spare time, Bill plays guitar, has a passion for heavy metal, and writes a non-security blog about breaking down the stigmas associated with mental illness, called THE OCD Diaries.


Other blogger profiles:

Ryan Adzima, The Techvangelist

Tom Hollingsworth , The Networking Nerd

Scott Lowe,

Matt Simmons, Standalone SysAdmin

Over the last several months I, along with Head Geek and Microsoft MVP LGarvin, have written several posts on thwack about a recent Microsoft patch for all supported Windows versions prior to Windows 8 and Windows Server 2012. This patch invalidates all certificates that use encryption keys of fewer than 1024 bits. Most (if not all) of our previous posts were geared toward Patch Manager users and the general Microsoft patching community, but it's recently become apparent that the patch is affecting the IT community at large. For example, if you manage a VMware environment, you might not be able to access vCenter in your web browser after applying the patch. Here's a link to the VMware article about the Microsoft patch.


About the Microsoft Patch

The patch, KB2661254, is a critical update for computers running the following operating systems:

  • Windows XP Service Pack 3
  • Windows Server 2003 Service Pack 2
  • Windows Vista Service Pack 2
  • Windows Server 2008 Service Pack 2
  • Windows Server 2008 R2
  • Windows Server 2008 R2 Service Pack 1
  • Windows 7
  • Windows 7 Service Pack 1


Microsoft released the patch to Windows Update on October 9, 2012. This means, if your environment has any automatic approval rules in place for critical updates from Microsoft, many of your systems already have the patch installed. In that case, none of your patched systems will be able to access secure web sites or allow other SSL connections if the certificate used for the secure connection has a key of fewer than 1024 bits.


What to do About the Microsoft Patch?

Many vendors, SolarWinds and otherwise, have already updated their products to use certificates that comply with the Microsoft Patch by default. For example, SolarWinds Patch Manager now installs with a 2048-bit certificate instead of the 512-bit certificate it used previously. The reason vendors have responded so quickly (Patch Manager responded back in August) is because the patch came about as a response to the so-called "Flame fiasco," which "exploits a defect in the Microsoft Terminal Server Licensing Services application that generates licensing certificates for Terminal Services clients," ultimately resulting in compromised Windows Update Agents.


That said, you should use caution before applying the patch to ensure you do not unwittingly break communications in your environment. The Microsoft article for KB2661254 provides a detailed section on how to discover RSA certificates with key lengths of less than 1024 bits. It would be wise to use one of the methods described therein if you plan to deploy the patch, especially if some systems are already patched.
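The heart of that discovery work is a simple key-length comparison. As a toy illustration (this is not Microsoft's detection tooling, just the arithmetic the patch applies), a Python sketch:

```python
# Toy check of the rule KB2661254 enforces: RSA keys must be >= 1024 bits.
MIN_KEY_BITS = 1024

def is_weak_rsa_key(modulus: int) -> bool:
    """Return True if an RSA modulus falls below the 1024-bit minimum."""
    return modulus.bit_length() < MIN_KEY_BITS

# A 512-bit modulus (like Patch Manager's old default certificate) fails;
# a 2048-bit modulus (like its replacement) passes. Values are illustrative.
print(is_weak_rsa_key(2 ** 511 + 1))   # True
print(is_weak_rsa_key(2 ** 2047 + 1))  # False
```

In practice you'd pull the modulus from each certificate with your PKI tooling; the comparison above is the whole policy.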


For additional information about KB2661254, check out the following resources on Microsoft TechNet:

Over the holiday break, ORYSP published an infographic on their blog that was quite humorous... and accurate. It brought back many memories of my time in the trenches. With their permission, we have included their infographic here.


We'd also like to remind you that we did some surveys of our own this year, and if you've not yet had a chance to find out what your colleagues around the world think of being an IT Professional, you should check these out too.


How many of the IT jobs listed here are part of your daily duties? Are there other IT jobs you regularly engage in that are not identified? How do you handle difficult users?


You've got your mandate from on high to switch to virtual desktop infrastructure (VDI) - who doesn't love decreased costs, lower power consumption, and reduced computer management overhead - and you're raring to go. You've figured out your vendors and you've got your images built, but have you considered your network requirements?


If you haven't given it much thought, you really ought to. Nothing says disaster on a VDI deployment quite like latency and contention issues.


Five tips to get your network ready for VDI

  1. Don't guesstimate bandwidth usage.
    While there's always a certain amount of guessing that happens when deploying new technology and determining bandwidth consumption, you should have hard data to begin the process.
    Calculate your current bandwidth needs first. If possible, you'll want to analyze your bandwidth usage for at least a full year. Most organizations have bandwidth spikes, especially during crunch times like the end or beginning of the quarter. If you only take into account your average bandwidth consumption, you'll miss the spikes and your users will undoubtedly complain about this weird new thing that IT is trying. If you have the data, you should also look at previous years to calculate your annual growth rate to factor into your plans.
  2. Talk to your network people.
    Always consult with your network team before doing anything. They'll be able to tell you if the network can handle the pressure from your VDI deployment or if you need to include some infrastructure changes in your deployment plan.
    If you have to make infrastructure changes, always design with expansion in mind. When your organization grows, you will need the extra capacity, especially with VDI.
  3. Implement a pilot program.
    A pilot program can be more important for VDI than you think. VDI can be uncomfortable for users, especially in an organization-wide rollout. You want to catch issues early and with few people impacted by them. You should probably choose a representative sample of people and, possibly, an entire department.
    Pilot programs can also increase user acceptance if people think they're helping and IT is responsive to the issues they experience. The pilot users can be great VDI advocates when it comes time to roll it out.
    The pilot will also help set your network expectations. Are the bandwidth usage numbers within the vendor's accepted range? Higher? Lower?
  4. Over-estimate your consumption.
    If possible, add some wiggle room to your estimated network usage - you'll end up using it sooner or later. As mentioned before, your network usage spikes during the year, but it also spikes during the day. With VDI, you'll have a spike in activity when everyone logs in or boots up in the morning or when they get back from lunch. You want to have enough headroom for these spikes and small expansions in your organization.
  5. Don't have a single point of failure.
    Do you have a single router that will connect your virtual desktops to the rest of the organization? Are there single points of failure connecting remote sites to your VDI servers?
    You need to build some redundancy into your network. If users can't connect to their virtual desktop, they're dead in the water (and you're in some hot water yourself). Nowadays, network outages can significantly impact a business' bottom line. If users can't get to their desktops to work (or stave off boredom in case of an Internet outage), your organization stands to lose a lot of business and employee satisfaction.

If you use SolarWinds Network Performance Monitor, you should be able to easily analyze your network usage over the years. You can also look at the bill from your ISP for a summary of your usage.
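Once you have those peak and growth numbers, the sizing arithmetic behind tips 1 and 4 can be sketched like this (the per-session figure and multipliers below are hypothetical, not vendor guidance):

```python
# Back-of-the-envelope VDI bandwidth sizing (illustrative numbers only).
def required_bandwidth_mbps(users: int, per_session_mbps: float,
                            peak_multiplier: float, annual_growth: float,
                            headroom: float = 0.2) -> float:
    """Estimate link capacity: peak per-session load, grown one year out,
    plus headroom for login storms and small expansions."""
    base = users * per_session_mbps * peak_multiplier
    return base * (1 + annual_growth) * (1 + headroom)

# 200 users at ~0.3 Mbps each, 2x quarter-end spikes, 10% yearly growth:
print(round(required_bandwidth_mbps(200, 0.3, 2.0, 0.10), 1))  # 158.4
```

The point of the headroom parameter is tip 4: size for the morning login spike, not the mid-afternoon average.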


My work as a technical writer for SolarWinds Storage Manager has led me to learn more about IT storage and storage environments. To understand Storage Manager, I must first understand storage. So, I'm reading an informative book, Virtualization Changes Everything, and I'm learning a good bit, including some basic information about storage and storage environments. Below is a synopsis of the first couple of chapters, followed by a list of terms culled from the book and elsewhere. If you're new to IT storage like I am, you might want to read through the list of terms before launching into the synopsis.


What are DAS, SAN, and NAS?

Before virtualization, most x86-based servers met their data storage requirements using Direct Attached Storage (DAS) and a local RAID controller. This produced a reliable, low-cost storage solution that met availability requirements for most workloads. But alas, it was unable to deliver the high performance and resiliency needed by mission-critical applications. And so the SAN and NAS arrays arrived on the scene.


SAN arrays, or SANs, provide LUNs to hosts over Fibre Channel. Over time, additional SCSI-based protocols such as iSCSI and FCoE emerged. SANs provide the highest I/O performance for applications. Below is an illustration from the book showing a SAN supporting a physical server infrastructure.


Another shared storage platform is the NAS array, which allows massively parallel access to file systems and unstructured datasets. The most common NAS protocols are Network File System (NFS) and Server Message Block (SMB, formerly known as the Common Internet File System, or CIFS).


Unlike DAS, SAN and NAS arrays provide capabilities such as:

  • Nonstop data services
  • High availability for applications
  • Storage of massive volumes of data
  • Fast and efficient hardware-based backups
  • Data replication capabilities required for disaster recovery
  • Direct mapping of storage resources to servers providing granular resource utilization and consumption


And then there was Virtualization and the Hypervisor


Server virtualization required a new way of implementing datacenters, with new ways of assigning and consuming resources. Shared resources needed to be logically represented, and so the Hypervisor was born.


Hypervisors are designed to be deployed in highly available cluster configurations requiring data to be served on shared, highly available (HA) storage platforms such as a SAN or a NAS array. This fault isolation means the HA mechanism can ensure the restoration of services in case a Hypervisor fails.


The Shared Storage Pool


In HA clusters, storage resources are pooled so storage capacity is available to all hosts in the cluster and can be provisioned on demand by VI administrators. Most Hypervisor users prefer storage pools for providing storage in virtual infrastructures.


Most virtual machines need storage to appear as though it were direct-attached storage. Direct attachment can be simulated using virtual disks. Virtual disks are files presented by the Hypervisor to the Virtual Machine (VM) as a virtual hard drive.


The shared pool model enables virtual infrastructure administrators to consume storage resources from the pool without having to interact with the storage management administrators.  Storage pools and virtual disks provide the ability to dynamically provision, clone, and migrate the data of a VM.
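To make the pooled-provisioning idea concrete, here's a deliberately simplified Python model (the class and method names are my own invention, not Storage Manager's):

```python
# Toy model of shared-pool provisioning: VI admins carve virtual disks
# out of pooled capacity without involving the storage team.
class StoragePool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0

    def provision_virtual_disk(self, size_gb: int) -> bool:
        """Allocate a virtual disk from the pool, on demand."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            return False  # pool exhausted; time to grow the SAN/NAS array
        self.allocated_gb += size_gb
        return True

pool = StoragePool(capacity_gb=1000)
print(pool.provision_virtual_disk(400))  # True
print(pool.provision_virtual_disk(700))  # False -- only 600 GB remain
```

A real hypervisor adds thin provisioning, cloning, and migration on top of this basic accounting, but the draw-from-the-pool model is the same.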


At this point, I had started to get a clear picture of how all the acronyms used in Storage Manager fit into the whole storage picture. So I'll stop here and list some of the terms used in this article.


List of Terms


Direct Attached Storage (DAS) with local RAID controller - met availability requirements for most x86-based workloads. Unable to meet the high-performance and resiliency demands of mission-critical applications.


Storage Area Network (SAN) - dedicated network providing access to consolidated, block-level data storage, used primarily to make storage devices such as disk arrays accessible to servers.


Network Attached Storage (NAS) - file-level computer data storage connected to a computer network. NAS is often manufactured as a specialized computer built for storing and serving files.


Logical Unit Number (LUN) - a number used to identify a logical unit in computer storage.


Fibre Channel (FC) - High-speed network technology used primarily for storage networking. Can run on twisted pair copper wire and fiber-optic cables.


Fibre Channel Protocol (FCP) - Transport Protocol used predominantly to transport SCSI commands over Fibre Channel networks.


Fibre Channel over Ethernet (FCoE) - an encapsulation of Fibre Channel frames over Ethernet networks allowing Fibre Channels to use 10 Gigabit Ethernet networks while preserving the Fibre Channel protocol.


Small Computer System Interface (SCSI) - Standards for physically connecting and transferring data between computers and peripheral devices.


Internet Small Computer System Interface (iSCSI) - IP based storage networking standard linking storage facilities. Carries SCSI commands over IP networks and is used for data transfers over intranets and for managing storage over long distances. Does not require special-purpose cabling like FC, but can run over long distances using existing network infrastructure.


High Availability Cluster (HA Cluster or Failover Clusters) - groups of computers that support server applications and can be reliably utilized with a minimum of down time.


VI Administrator - Virtual Infrastructure Administrator


In this installment, I'll explain how to create the visual elements of the Quad NOC Browser.


Lesson 3 - The Build

First, open Visual Basic and create a new project. (Use the picture above as a visual aid.)

Get the Form in Shape:

  1. Stretch the Form from the lower right-hand corner to make it as large as possible. (Select it and then grab a corner node to do this.)
  2. With the Form still selected, go to the Properties window to the right of the Form and find the Text property.
  3. Change the default text from Form1 to Quad NOC.

Add the Tools:

The table below lists the tools you need to put on the top of the form, as shown above, in left-to-right order.


Tool from the Toolbox | Change the Name property to | Change the Text property to
Button | cmdBack | <
Button | cmdForward | >
Text Box | Textbox1 | (leave blank)
Button | cmdGo | Go
Radio Button | Q1 | Quadrant 1
Radio Button | Q2 | Quadrant 2
Radio Button | Q3 | Quadrant 3
Radio Button | Q4 | Quadrant 4
Link Label | LinkLabel1 | Your link name here
Link Label | LinkLabel2 | Your link name here
Check box | chZoom | Auto-Zoom
Check box | chRefresh | Refresh views every minute.

Change the Anchor property of the two check boxes to Top-Right so they stay put when the form is resized.

Important: Be sure the two check boxes are not checked. If needed, change the Checked property of each to False. Also, ensure the CheckState property is set to Unchecked for both.

Add a Timer

  1. Add a Timer from the Toolbox to the form. (When placed on a form, the timer will move to a taskbar below the form automatically.)
  2. Use the default name of Timer1.
  3. Set the Interval property to 60000 (60,000 milliseconds = 60 seconds.)
  4. Set the Enabled property to False.

Now the tough stuff:

  1. From the Toolbox, add a SplitContainer to the center of the form. Use the default name of SplitContainer1.
  2. Change the Dock property from Fill to None. You should now have something that resembles the following:
  3. In panel 1, add another SplitContainer and leave the default name of SplitContainer2.
  4. Change the Orientation property from Vertical to Horizontal.
  5. In panel 2, add another SplitContainer and leave the default name of SplitContainer3.
  6. Change the Orientation property from Vertical to Horizontal.
  7. Change the SplitterWidth property of each SplitContainer to 10. You should now have something that looks like this:
  8. From the Toolbox, add a webbrowser control for each of the four quadrants. Leave the default names. Place them in the order shown below:
  9. For the URL property of each browser, set the default navigation page (for example, the Orion home page in your environment).
  10. Save everything.



Today, I've given you the minimum you need to design the Quad NOC browser. That is, all the objects needed for the browser to work are now on the Form. By now, I'm sure you've noticed there are many more properties that you can tweak, like the color and size of the fonts. Your assignment is to play with these properties and try to get the browser as close in appearance as possible to the one seen in part 1.


In the next installment, we'll begin coding.....oooooohhh.

I was chatting with Microsoft WSUS MVP and SolarWinds Head Geek, LGarvin recently about synchronizing WSUS servers, and I'd like to share his recommendations along those lines. Synchronizing WSUS servers ensures those update points have the most up-to-date update information from the update source -- in most cases, Microsoft Windows Update. With this information, the WSUS servers can appropriately address remote clients when they check in to determine if they need to be patched.


How Often Should I Synchronize a WSUS Server?

The bottom line is that you should synchronize your WSUS servers at least once daily. The following list, however, comprises the entirety of Lawrence's recommendation, in descending priority order.

  • Synchronize your WSUS server once every 24 hours at an off-peak time.
  • If you are automatically approving Definition Updates, synchronize the server at least 2-3 times daily.
  • If possible, schedule an additional synchronization to coincide with Patch Tuesday.


Microsoft WSUS is fully capable of addressing the first two recommendations with its native tools. However, the third recommendation requires extending WSUS with a third-party application like SolarWinds Patch Manager. On its own, WSUS only allows you to schedule synchronizations every so-many hours. So to address the first recommendation, schedule the synchronization for every 24 hours, starting at 3 AM, for example. To address the first and second recommendations together, schedule the synchronization for every 8 hours, say, starting at 6 AM. This is necessary to pick up the Definition Updates Microsoft publishes throughout the day for its anti-virus and anti-malware programs like Defender, Forefront, and Security Essentials.
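The interval-based schedules described above are easy to reason about. This small sketch (a hypothetical helper, not a WSUS API) lists the daily sync times each option produces:

```python
# List the daily synchronization times for a fixed-interval WSUS schedule.
def sync_times(start_hour: int, interval_hours: int) -> list:
    """Return the wall-clock times at which a sync fires each day."""
    return [f"{h % 24:02d}:00"
            for h in range(start_hour, start_hour + 24, interval_hours)]

print(sync_times(3, 24))  # ['03:00'] -- once daily, off-peak
print(sync_times(6, 8))   # ['06:00', '14:00', '22:00'] -- definition updates
```

What native WSUS can't express is the extra "every second Tuesday at noon" sync; that's where a third-party scheduler comes in.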


With a third-party patch management application, you can schedule the basic, once-daily synchronization using the typical WSUS method, but then schedule additional synchronizations whenever you want. In other words, you could have a monthly synchronization that happens every second Tuesday at noon to ensure you get your Patch Tuesday updates as soon as possible, instead of having to wait until your 3 AM synchronization the following Wednesday.


More granular scheduling like this can even help with more frequent synchronizations in that you wouldn't have to schedule them at specific intervals. So instead of synchronizing for your Definition Updates every 6 hours, you could be more intentional about trying to hit low-traffic times like 2 AM, 6 AM, noon, and 6 PM.


For more patch management tips like these from Lawrence and others in the patching community, check out the PatchZone space here on thwack.



What is IPv6?

Posted by phil3 Jan 4, 2013

I know, the answer to this question is probably old news to everyone here, but I found myself contemplating a similar question recently, so I did some digging. Here's a quick summary of my findings.


IPv6 - A Brief Overview

IPv6 stands for Internet Protocol version 6, and it's currently in place to eventually replace IPv4, with which we're all much more familiar. This impending switch was put in the works to address the inevitability of our running out of new IPv4 addresses. This concern grows more critical every day as we continue to add mobile devices worldwide - devices that all need IP addresses. Some of these devices are even being developed to support only IPv6, further validating the need for the IPv4-to-IPv6 switch. According to Wikipedia, "The American Registry for Internet Numbers (ARIN) suggested that all Internet servers be prepared to serve IPv6-only clients by January 2012."
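The scale of the fix is easy to see with Python's standard ipaddress module (the addresses below come from the reserved documentation ranges, not real hosts):

```python
import ipaddress

# IPv4 is a 32-bit address space; IPv6 is 128 bits.
print(2 ** 32)   # 4294967296 possible IPv4 addresses
print(2 ** 128)  # roughly 3.4 x 10^38 possible IPv6 addresses

# A dual-stack device holds both kinds of address at once:
v4 = ipaddress.ip_address("192.0.2.10")    # RFC 5737 documentation range
v6 = ipaddress.ip_address("2001:db8::10")  # RFC 3849 documentation range
print(v4.version, v6.version)  # 4 6
```

That 96-bit difference is the whole story: we aren't getting more IPv4 addresses, so the devices have to move.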


IPv6 Migration Trends

A few months ago, we had a post on our Whiteboard blog about IPv6 Adoption – What We Are Seeing. This post states that IPv6 adoption continues to move at a tentative speed, but highlights three prominent leaders in the effort: Asia Pacific countries, federal agencies worldwide, and Internet service providers. By and large, though, most of the pros we're talking to are still planning their migrations or implementing dual-stack devices (devices that support both IP versions). If you're still not even there, I'd recommend you check out New Year's Resolutions for Geeks Like Me. The resolutions in this post, though 2 years old now, are still quite relevant. Pay close attention to #9.


IPv6 Support for SolarWinds Products

Looking at a post from about the same time as the Whiteboard post I just mentioned, it's clear SolarWinds is committed to providing you the IPv6 support you need when you need it. That is not to say we support IPv6 in all of our products, but we're getting there. The important thing is that we're listening to your feedback and adding the feature to the appropriate products at the appropriate times. For more details, check out the original post: Current State of IPv6 at SolarWinds - Last Updated April 2015. This post is several months old, so check your products' release notes for the most up-to-date information.


If you are looking for assistance with planning your IPv6 migration, SolarWinds IP Address Manager (IPAM) can help:

  • Create and test multiple scenarios before implementing them
  • Add, edit, and delete IPv6 addresses and subnets
  • Search for IPv4 or IPv6 addresses and find the “other side” address on dual-stack devices


In any case, I particularly liked the recommendation from the New Year's Resolution blog post about deploying an IPv6 subnet, even if only to familiarize yourself with the unique concerns and nomenclature of the technology. Had I done this two years ago, I probably wouldn't have had to do all the digging that led up to writing this post.


In the not so distant past, I wrote a four-part series entitled Visual Basic 101 where I explained how to build a bandwidth calculator for SAM component monitors using nothing more than Visual Basic. That was a fairly popular series, so I thought, "Why not go a step further?"


The Step Further.

I have no idea what your environment looks like or what browser you use. However, I can make the assumption that you're always looking to improve it to suit your needs. For example, what browser are you using with SAM and/or NPM? Chrome, Firefox? Do they offer the plug-ins and flexibility you desire? If not, do what I did (in less than an hour mind you). Build your own browser. Below is a screenshot of the Quad browser I just built which enables you to view four screens at once.


Now this may not be very practical on a tablet or laptop. But toss it on a projector or an 80" LCD? Wow. Pretty slick! You can have SAM in one corner and NPM in another. You could have four different levels of SAM shown at one time if you like. The possibilities are limitless. The point is simple. This browser is not feature rich whatsoever. What it does do is demonstrate just how quick and easy it is to build your own environment to suit your needs.


Lesson 1 - Installation

Before we begin, you'll need to install Visual Basic Express 2010, free courtesy of Microsoft. Click this link to begin downloading, followed by the installation.


Lesson 2 - The Environment

Now open Visual Studio Express and select New Project from the File menu. A new window will pop open. From there, select Windows Form Application, then click OK. If you've done everything successfully, your screen should now look like this:


Now you have all you need to build the Quad Noc Browser, minus the code. Let me explain what you're looking at above:

  • Highlighted in red is a Form. A form, in essence, is a window; hence the name, Windows. A form is an empty workspace where all of your buttons and controls will live, once you put them there. (Notice the form of the browser above with all of its controls.)
  • Highlighted in green is the Toolbox. The toolbox contains all of the controls you will need to build almost anything, including the browser. These controls may be placed on the form as needed. As you can see in the browser, there are multiple web browsers, radio buttons, labels, and so on. These all came from the toolbox.
  • Highlighted in purple is the Properties window. Every control, or object (including the form itself), has certain properties. These properties can be set and changed both before running the program and while the program is running. Think about the properties of a television: one property is its color; others include the TV's height, weight, picture resolution, and so on.



Play around with this new environment and try to get comfortable. Explore the controls and the properties of the more common controls.
Tip: Once you place a control on the form and select it, the Properties window will show the properties of that control.


In part 2 we'll build the form with all of the controls.

Happy New Year everyone!


Let's usher in this brand new year with a concept so bleeding edge, it doesn't know about the edge yet - crystal refrigeration.


SCIENCE! or campy sci-fi?


We have a long history of science fiction leading the way to innovation in science. Jean-Luc Picard and his PADD popularized the concept of the tablet computer. Various robot legions bent on world domination? Not only are there armies of Tickle Me Elmo robots, but Human Rights Watch is already fighting against the production of autonomous robotic killing machines, aka "killer robots." Self-driving cars, the mainstay of any futuristic society, can now be found cruising along Nevada and California roads. Super-soldiers? The US Army has that covered. You can even buy a giant robot, my favorite science fiction mainstay, and mechanized exoskeletons.


So what science fiction trope will be developed next?


Does anyone admit to watching the Stargate franchise? Remember the crystals used in the computers? Well, that may be an up-and-coming technology: there is some very cool research going on about using crystals to refrigerate things, such as computers.


Ferroelectric crystals


Researchers at the Carnegie Institution have discovered a way to pump heat using ferroelectric crystals. Certain crystals, when introduced to an electric field, change temperature dramatically, thus allowing heat to be exchanged efficiently.


How does it work?


Let's establish some base concepts first.


Ferroelectricity - a property of certain crystals that exhibit spontaneous polarization below the transition temperature. Each crystal unit has two poles - one with a negative charge and one with a positive charge. These dipoles can be aligned in the same direction with an electric field. If the field is reversed, the crystals switch polarization.


Transition Temperature - (also known as the Curie temperature) the temperature above which the material loses its ability to spontaneously polarize.


Electrocaloric Effect - a reversible temperature change when an electric field is applied. This effect has been known since the 1930s - and was studied again during the 1960s and 70s - but the cooling effects were too small to be useful. However, in 2006, new research produced a 12-degree (Celsius) temperature change. The current research shows that the electrocaloric effect is heightened with large differences between the ambient temperature and the transition temperature of the material.


Ferroelectric Crystal Refrigeration


Basically, within a certain temperature range, you can apply an electric field to certain ferroelectric crystals and they pump heat away using the electrocaloric effect until the temperature drops below the crystal's transition temperature or the field is disrupted. The research done at the Carnegie Institution indicates that heat can be pumped on the nano-scale as well. I imagine that it will work like an air conditioner in practical applications. Read the original article here.


What can this mean to me?


Needless to say, this may solve some of the heat problems we experience in computing, and could lead to faster computers. One of the biggest hurdles in making computers faster is the amount of heat they generate. If we can control the heat better, then we can continue using the same materials until we run up against another barrier.


The heat pumping ability may also allow us to save on energy costs. Server rooms can get pretty sweltering without aggressive air conditioning. If the crystals can keep the computers cooler, or be used to pump heat out of the building entirely, they could dramatically decrease the amount of power used by companies.

How do you test firewall changes? Firewalls help us keep intruders out of our networks and precious data in them, and changes to those safeguards can be precarious to say the least. To minimize the likelihood that a change will cause some sort of problem -- such as creating a hole in our defenses or blocking a critical service -- it's important to have a firewall management process in place that includes a procedure for testing and implementing changes in a way that minimizes the chance of any negative impact.


When I considered the myriad approaches organizations could take to this end, I came up with four general categories:

  • Implementing changes on the fly and then testing them
  • Implementing changes during maintenance periods and then testing them
  • Testing changes in a lab and then implementing them
  • Testing changes with change modeling software and then implementing them


Changing Firewalls on the Fly

You have to be pretty brave to implement firewall changes in this fashion. Doing so successfully assumes you have a deep knowledge of your network infrastructure and how firewalls and routing work. If you don't have this requisite knowledge, chances are you're using this approach because you don't know any better. The risks inherent in this approach include implementing a change that breaks something you won't know is broken until something catastrophic happens, or testing a change while some vulnerability is exploited before you have the opportunity to find it (and address it) for yourself. Of the four approaches I listed, this is by far the least desirable.


Changing Firewalls and Testing During Maintenance Periods

This is similar to the first approach, but it's considerably more conservative. This assumes your organization has a specific schedule in place that defines both production windows and maintenance windows. Such a schedule usually dictates that nothing but critical changes are made to any business-critical network systems within a production window, reserving all routine changes for the maintenance windows. The maintenance windows usually fall during off-peak times, which significantly reduces the risk of a service being blocked by a faulty change when someone needs it. However, the risk of exposure still exists with this approach, so it's only moderately more desirable than the on-the-fly option.


Testing Firewall Changes in a Lab First

This approach is at the top of the list for DIY testing options. It assumes you have a non-production lab environment in which to test your firewall changes before you implement them. In a perfect world, the infrastructure in your lab environment would emulate your production environment exactly, so you can be absolutely sure the changes you test in the lab will behave just as reliably (or unreliably) in production. Presuming your testing goes 100% according to plan, you can use the same change scripts in your production environment that you used to implement the changes in the lab. The main drawback here is scalability.


Testing Firewall Changes with a Change Modeling Tool

As your network grows, you'll feel increasing pressure to upgrade your testing effort to something more automated. A comprehensive firewall management tool not only offers a central repository for your firewall configs, it also offers change modeling functionality. Change modeling allows you to test your changes without having to create and maintain a test environment, or manually test every single scenario. Ideally, the tool would also allow change modeling to be collaborative so you can distribute the testing load between security specialists, application engineers, and firewall administrators. For organizations of any size, automation is the key to efficiency and accuracy.
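To make the idea concrete, here's a minimal sketch (in Python, using an entirely hypothetical rule format and made-up networks) of what change modeling does at its core: evaluate a proposed rule set against the flows you care about before anything touches a real firewall. A real tool does far more, but the principle is the same.

```python
from ipaddress import ip_address, ip_network

# Hypothetical first-match rule set: (action, source net, dest net, port).
# The first rule is the candidate change under test; everything here is
# illustrative, not a real policy.
PROPOSED_RULES = [
    ("deny",  "10.0.5.0/24", "0.0.0.0/0",   25),    # candidate change
    ("allow", "10.0.0.0/8",  "10.1.0.0/16", 443),
    ("allow", "10.0.0.0/8",  "10.1.0.0/16", 25),
    ("deny",  "0.0.0.0/0",   "0.0.0.0/0",   None),  # default deny
]

def evaluate(rules, src, dst, port):
    """Return the action of the first rule matching the flow."""
    for action, src_net, dst_net, rule_port in rules:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and rule_port in (None, port)):
            return action
    return "deny"
```

Running your critical flows through `evaluate()` both with and without the candidate rule shows immediately which services the change would break, with no test hardware involved.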


A General Best Practice

Regardless of the method you use to test and roll out your firewall changes, a general best practice is to expand your testing effort beyond the change at hand. An important (and, sadly, often-missed) aspect of firewall configuration change management is testing to ensure changes to accommodate or block one service don't have some other negative effect on the network landscape at large. Document and test all business-critical services with every change, and make sure no known vulnerabilities are exposed when you implement any sort of change. Again, automation helps make this a lot easier than it would be otherwise; but if you're married to a more manual process, documentation and due diligence will work just as well (though probably more slowly).
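As a deliberately simple example of that kind of regression check, the sketch below (Python) probes a list of business-critical TCP services after a change. The host names and ports are placeholders for your own service inventory, not real endpoints.

```python
import socket

def check_service(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical inventory of business-critical services to verify after
# every firewall change -- replace with your own list.
CRITICAL_SERVICES = [
    ("mail.example.com", 25),
    ("www.example.com", 443),
    ("dns.example.com", 53),
]

def verify_all(services=CRITICAL_SERVICES):
    """Report which critical services remain reachable."""
    return {f"{host}:{port}": check_service(host, port)
            for host, port in services}
```

A reachability probe like this only proves a port answers, not that the application behind it works, so treat it as a floor for your testing, not a ceiling.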


So how do you roll (out firewall changes)? If I've missed anything in this post, let me know in the comments section. Or, if you'd just like to share your process or experiences, feel free to chime in.

I have been in IT since it was called Management Information Systems (MIS), and I still have no idea what that meant. I think IBM had a contest to see who could come up with the most obscure names for various technologies. At one point I was working for a law firm keeping their NetFrame 450s running NetWare 3.2 up and healthy. If you don't know what a NetFrame was, it looked a lot like the monolith the primates were beating with bones at the beginning of 2001: A Space Odyssey. I'm pretty sure it came with the same toolset from the movie as well. Between wrestling those beasts and helping users who had never seen a PC before, I learned a lot of things that were not covered in any of the training books or courses I attended. Here are a few things I learned the hard way:


  1. Entering the name of a NetWare loadable module directly rather than entering "load" first abends the server, resulting in a call to "the office".
  2. Novell went away for a reason.
  3. Spare parts are of no use when the key for the storage room is lost.
  4. The NetFrame we recommended to a customer was last seen with a jack-in-the-box head and arms taped to it.
  5. Despite many predictions, Ethernet does scale.
  6. Users do not appreciate having the issue diagnosed as a "loose nut on KB".

The good part of all of this is that the problem technologies (and weird names) have mostly gone the way of the Dodo. I am constantly amazed at the reliability of networks today and at the very short Mean Time to Restore (MTTR) we are seeing.


We have done a lot of growing this year at SolarWinds. Chances are, whatever IT management issue you're facing, we have a great solution for it. If it has been a while, check out the depth of our product lines and what our customers have to say.


Security is a major concern of every organization - we deploy firewalls, virus scanners, and patches for our computers and networks as a standard operating procedure. We lock up server rooms, try to keep our mobile devices secure, and mount security cameras. But what do we do about embedded systems on our network, like IP phones or printers?


If you use a Cisco IP phone, you might want to consider doing something extra, like applying a patch or software update.


Columbia University computer scientists have discovered a way to both remotely and physically hack the embedded system on a Cisco IP phone, one of the most ubiquitous phones in telecom. Using vulnerabilities in the OS kernel, a hacker can gain complete control over the phone, including listening to conversations when the phone is not in use. After a phone is compromised, it can then spread its malware to other devices on the network including other phones, computers, and printers.



Note: The video only shows a physical hack. Attackers can also remotely hack the phone over the Internet.


Cisco has already developed a patch for this vulnerability. As of Dec 18, you must specifically request the patch from Cisco, though there are plans to include the patch in Cisco's next major update.


Of course, any pre-existing malware can potentially override the patch and reintroduce the vulnerability.


For more information, you can view the article on IEEE Spectrum.

The disk drive. That impenetrable "black box" that stores our life's work ... documents, photographs, music, videos, and almost everything we hold dear. Have you ever wondered how it really works? Have you ever longed to open the case and look inside? Maybe you'd just like to know what clothes ITPros wore to work before the turn of the century?


Recently I had the opportunity to view an old videotape, compliments of the Security Now podcast, episode #384 (Dec 26, 2012), on the This Week in Tech network, that discusses those very questions. In 1990, when Steve Gibson's SpinRite was just a fledgling product, he went on the road and did a series of educational presentations to retailers, sponsored by SoftSell (a major distributor of computer software in 1990), describing everything you'd ever want to know about how disk drives work, and why they don't.


Have you ever wondered about the internals of a disk drive ... heads, cylinders, sectors? Or maybe about the bygone question of interleaving? Why did IBM choose a grossly inefficient interleave of 6:1? Why did Western Digital overcompensate by making it 3:1, trying to produce a 2x faster system, but actually producing a 3x slower one!? Or the myriad acronyms that befell ITPros back then: MFM, RLL, ERLL, to name a few. What impact does heat really have on disk drives, and what was the real reason the common wisdom was to leave your PC turned on all the time?
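For the curious, the interleave puzzle can be sketched with a grossly simplified model (Python; the 17 sectors per track and the controller delay of 4 sector-times are purely illustrative numbers, not taken from the video): if the gap the interleave provides covers the controller's processing time, a track reads in about `interleave` revolutions; if not, every sector is missed and costs close to a full revolution.

```python
def track_read_revolutions(sectors_per_track, interleave, controller_delay):
    """Rough model of reading one full track.

    controller_delay: sector-times the controller needs between reads.
    If the interleave gap (interleave - 1 slots) covers that delay, the
    track reads in roughly `interleave` revolutions; otherwise every
    sector is missed and costs close to a full revolution.
    """
    if interleave - 1 >= controller_delay:
        return interleave
    return sectors_per_track

# Illustrative numbers: 17 sectors per track, a controller needing 4
# sector-times between reads. A 6:1 interleave keeps up (~6 revolutions
# per track); a 3:1 interleave misses every sector (~17 revolutions),
# which is roughly 3x slower -- the shape of the story told above.
```

This ignores seek time, head switching, and gap timing entirely; it only captures why a "faster" interleave can be slower when the controller can't keep up.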


Most significantly: Why do disk drives fail? What can you do to reduce, maybe even eliminate, disk drive failures – even today!


In addition to being informative, the video is also exceptionally entertaining. Steve has an incredible sense of humor, whether picking on his mother-in-law, "Big Blue", Seagate Technologies, or Western Digital. I promise that even if you personally owned a PC in the late 1980s and lived through this era, you'll still learn something new from this video. I did (and yes, I bought my first IBM PC-compatible in June 1989 .. 23 1/2 years ago).


The full episode is 65 minutes, but minus the 10-minute intro and ads and the 10-minute wrap-up, the legacy video itself is about 45 minutes long.


Happy New Year!


