
For the last several years a lot of the talk within network engineering has centered around layer 3 technologies. The subjects of WAN optimization, Quality of Service (QoS), WAN acceleration, and routing have consumed our thoughts and dreams, blog posts and podcasts. However, now more than ever, having a strong understanding of layer 2 networking technologies is critical to being successful in this industry.

To the uninitiated, layer 2 networking probably seems simple and straightforward. Take your PCs and servers, plug them into a switch, and it all just works - right? Well, yeah hoss, that's right - so long as you don't run into spanning tree issues, so long as all of your trunks are configured correctly, and so long as consolidating your data centers and virtualizing your servers hasn't saturated those 1 Gbps and 10 Gbps connections that you thought were overkill when you put them in.

Unfortunately, the best way to learn these technologies is by living through a serious outage or performance issue caused by them. Hopefully you'll never have to learn about Spanning Tree Protocol (STP), VLAN Trunking Protocol (VTP), or Ethernet congestion this way, but chances are that if you stay in this industry long enough you will. Between now and then, do what you can to start coming up to speed, and most importantly, be sure that your management tools make it easy to identify, troubleshoot, and resolve these types of issues. Here are some tips I've picked up over the years to help:

  • Remember that spanning tree is logical and topology based. Keep up-to-date, detailed diagrams and documentation, and make sure they're easy to get to
  • Customize your NMS so that your trunk groups are logically grouped together and so that you get events when trunk port status changes occur
  • Be sure that your monitoring tools are using 64-bit counters (ifHCInOctets and ifHCOutOctets) and SNMPv2c or v3
  • Don't assume that just because 10 Gbps is a lot of bandwidth that it's enough bandwidth
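That 64-bit counter bullet is worth a back-of-the-napkin check: a 32-bit ifInOctets counter on a fully loaded 1 Gbps link wraps in about 34 seconds, so a typical 5-minute polling interval silently loses data. Here's a minimal sketch of the arithmetic in plain Python - no SNMP library involved, the counter samples are assumed to come from whatever poller you use:

```python
def delta_octets(prev, curr, counter_bits=64):
    """Octets transferred between two counter samples, handling wrap.

    Modular arithmetic gives the right answer even when the counter
    rolled over exactly once between polls (more than once and the
    data is gone - that's the 32-bit problem on fast links).
    """
    return (curr - prev) % (2 ** counter_bits)

def utilization_pct(prev, curr, interval_s, link_bps, counter_bits=64):
    """Percent utilization from two ifHCInOctets/ifHCOutOctets samples."""
    bits = delta_octets(prev, curr, counter_bits) * 8
    return 100.0 * bits / (interval_s * link_bps)

if __name__ == "__main__":
    # Time for a 32-bit counter to wrap on a saturated 1 Gbps link:
    # 2**32 octets * 8 bits / 1e9 bps ~= 34.4 seconds.
    print(f"32-bit wrap time at 1 Gbps: {2**32 * 8 / 1e9:.1f} s")
    # A link that moved 125,000,000 octets in one second is 100% busy.
    print(f"{utilization_pct(0, 125_000_000, 1, 1_000_000_000):.0f}% utilized")
```

Counter discontinuities (device reboots, interface resets) still need the usual sysUpTime sanity check; this only covers the wrap case.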

Flame on...
Josh
Follow me on Twitter

I spent last week out in New Mexico, hanging out at a friend's ranch and watching elk and antelope and looking up at those beautiful blue skies. If you've never spent any time in the high country, the sky and the clouds look quite a bit different from that elevation. I don't know if it's because the air is thinner or if it's just that you're 10,000 feet higher but it's breathtaking.

Getting to and from there, I spent about 26 hours in my truck and my thoughts turned to the clouds. No, not the same clouds I was looking at out in New Mexico but the clouds that we deal with as IT professionals. I've spent a lot of time talking and writing about clouds lately including a blog post over at LoveMyTool.com but it was nice to just stop and think about them for several hours.

When you think about it, we've all been using "clouds" for years. If you define a cloud as an environment where shared computing resources are made available to consumers through a self-provisioning process, then there are loads of examples at home and at work. Take renting movies, for instance. Years ago, we all bought our own VCR/DVD/Blu-ray players, drove to Blockbuster to rent movies, watched them at home, and then drove them back. Nowadays, we just hit a button on our TV remote and the movie we want to watch is streamed into our home via broadband in real time. The storage and computing systems are located in the "clouds" provided by Netflix, Apple, Amazon, or whomever you like to use for these services, and we're simply consuming the resources on demand.

There are several advances in technology that have helped enable these changes. Most people consider the catalyst behind cloud computing to be the abundance of available computing capacity, but for me it's really about the availability of inexpensive, high-speed, reliable bandwidth. When I started working at SolarWinds we had a single T-1 connecting the company and all of our systems to the internet - now I have almost 30 times that much bandwidth from my house...

As we begin thinking about how to leverage cloud computing as a part of our IT strategies, it's easy to get a little intimidated by all the talk. Is cloud the new "big thing"? Will it make my job in IT obsolete? Will IT departments cease to exist in organizations that rely heavily on cloud resources? Ease up a little bit there pardner, take a breath, and slow down. Yes, there are some great areas where we can begin leveraging cloud, but there are also new technologies being developed that require localized resources and skills. My best advice - just like with any new technology - get some firsthand experience and never, ever let it intimidate you...


Flame on...
Josh
Follow me on Twitter

About 16 years ago I built my first DNS server. Actually, it was 2 servers, as DNS is one of those things that really needs to be working all the time. Up until that time I'd been using an old-school true-Unix machine that someone else had set up and I merely maintained. So, I took 2 old Intel-based boxes (Pentium I's, if I remember correctly), loaded copies of Red Hat Linux on them, and then started reading Cricket's book on DNS and BIND.

If you've never had a reason to get up close and personal with DNS, I highly recommend that you do. Buy Cricket Liu's book mentioned above, read it, and then set up a server or two in your lab or at home. A comprehensive understanding of DNS is a highly valuable skill for anyone on the IT infrastructure side of things, and especially for anyone in network engineering or system administration. As you all know, application performance is sort of the "dead cat" of the IT management industry. The app guys blame the systems guys, the systems guys blame the network guys, and the network guys blame the app guys - and round and round we go. While all of this is happening and we're looking at bandwidth congestion, system bottlenecks, and application settings - it could just be that DNS is the culprit.

If you've never traced an application performance problem back to DNS, then you've either been really lucky or you've been treating the symptoms and not the illness. Even in small organizations, DNS can sometimes get really complicated. There's a great tool in our Engineer's Toolset called the DNS Analyzer that will give you a graphical representation of an organization's DNS layout. Download it and run your domain through there. Then run a few of the larger ones and compare the results from Google.com with Suzuki.com and you'll see what I mean.
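If you want a quick way to see whether name resolution is part of your latency story, you can time a lookup yourself. Here's a rough sketch using only Python's standard library - it measures whatever resolver your OS is configured to use, OS caching included, so repeat runs will differ from cold ones (the hostnames in the demo are just examples):

```python
import socket
import time

def time_dns_lookup(hostname):
    """Time a forward DNS lookup via the OS resolver.

    Returns (addresses, elapsed_ms); addresses is empty if the
    name doesn't resolve.
    """
    start = time.perf_counter()
    try:
        infos = socket.getaddrinfo(hostname, None)
        addresses = sorted({info[4][0] for info in infos})
    except socket.gaierror:
        addresses = []
    elapsed_ms = (time.perf_counter() - start) * 1000
    return addresses, elapsed_ms

if __name__ == "__main__":
    for name in ("localhost", "www.google.com"):
        addrs, ms = time_dns_lookup(name)
        print(f"{name}: {len(addrs)} address(es) in {ms:.1f} ms")
```

Keep in mind that an application opening many short-lived connections pays that lookup cost (or at least a cache hit) on every one of them, which is exactly how a slow or misconfigured resolver ends up reported as "the application is slow."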

Long story short, this is one of those technologies that you need to be intimately familiar with to be considered a senior-level technologist in this space. No, it may not always be top of mind, but if you've got the knowledge in your head somewhere you can usually get to it when you need it...


Flame on...
Josh
Follow me on Twitter
