
As storage costs keep climbing, there is only so much a storage admin can do to manage data across an organization's various storage devices and media. Stored data has to be prioritized, and someone has to decide where to store what. Critical data requiring frequent access needs a high-performance (and expensive) storage array, while less important data and backups can live on slower, cheaper disks. This process of placing and moving data across different storage types is known as storage tiering.

 

Examples of Storage Tiering

  • Tier 1 – Mission-critical and recently accessed files, stored on expensive, high-quality media such as double-parity RAID arrays.
  • Tier 2 – Seldom-used files, classified files, and backups, stored on less expensive media in a conventional SAN.
  • Tier 3 – Event-driven, rarely used, unclassified files, stored on recordable compact discs or tapes.

 

These are just a few examples to illustrate the concept. Actual storage tiers vary depending on your organization's access patterns, requirements, and hardware availability.

 

Automated Storage Tiering

To eliminate the manual effort involved in moving data across tiers, storage vendors came up with the concept of automated storage tiering – a storage management software feature that dynamically moves information between different disk types and RAID levels to meet space, performance, and cost requirements. [1]

  

Storage admins can dynamically move storage blocks or LUNs across different storage types for different reasons.

  • Promotion moves highly accessed data and critical files from cheaper, lower-performance media to high-performance SAS drives or SSDs.
  • Demotion automatically moves infrequently used data to slower, less expensive SATA storage.

 

The movement of LUNs across storage tiers is managed by auto-tiering algorithms in the storage management software. Some software uses a heat-map approach, tracking storage access and usage over a particular time period to dynamically move data from one tier to another (a toy sketch of the idea follows the vendor list below). Many leading vendors in the market support this functionality, including:

  • Dell® Compellent – Data Progression
  • EMC® – Fully Automated Storage Tiering (FAST)
  • HP® 3PAR – Adaptive Optimization
  • IBM® – Easy Tier
  • Hitachi Data Systems® – Dynamic Tiering
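
For a sense of how a heat-map approach might work, here's a minimal Python sketch (purely illustrative, not any vendor's actual algorithm): each block's access count over a time window serves as its "heat," and blocks crossing thresholds are promoted or demoted.

    # Toy heat-map tiering: promote hot blocks, demote cold ones.
    # Thresholds and tier names are made up for illustration.
    def retier(access_counts, current_tier, hot=1000, cold=10):
        """access_counts/current_tier map block_id -> count / tier name."""
        moves = {}
        for block, count in access_counts.items():
            if count >= hot and current_tier[block] != "ssd":
                moves[block] = "ssd"      # promotion
            elif count <= cold and current_tier[block] != "sata":
                moves[block] = "sata"     # demotion
        return moves

    counts = {"blk1": 5400, "blk2": 3}
    tiers = {"blk1": "sas", "blk2": "sas"}
    print(retier(counts, tiers))   # {'blk1': 'ssd', 'blk2': 'sata'}

Production auto-tiering engines work on sub-LUN extents and factor in recency as well as raw counts, but the promote/demote decision is the same basic idea.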

 

Some vendors claim it is possible to perform real-time automated tiering on storage blocks of small sizes – ranging from a few KB to a few MB.

 

Third-party storage management software can also provide information on disk storage for LUNs based on storage tier, the tiering policy applied, and whether auto-tiering is enabled. You can check out SolarWinds Storage Manager ver 5.7, which extends support for EMC FAST VP implementations on VMAX/Symmetrix and VNX arrays. Learn more about the Beta trial of Storage Manager ver 5.7.

VBA - A quick lesson

Posted by Bronx Mar 27, 2014

Okay, so by now you should know that I like tinkering...and showing you how I do it. See the following for two examples:

Today's lesson will be in VBA for Outlook. The challenge? Schedule a meeting/appointment on a public calendar for all to see while simultaneously sending a specific person a one day reminder to take care of the newly scheduled event. Simply put, control and manipulate both public and private calendars.

Time for the pictures! Here's what I came up with using the Outlook VBE, referencing the Calendar control:

[Screenshot: the Article Scheduler form]

Simple right? Fill in the fields, pick a day, then Submit. Here's the result of hitting that li'l Submit button:

My personal calendar gets a 24 hour reminder scheduled at the right time.
[Screenshots: the reminder on my personal calendar]

Public calendar also gets updated so others can schedule around what is going on.

[Screenshot: the updated public calendar]

Before I show you the code

If you do not know what VBA is or how to access it in Outlook, go figure that out first. The form (Article Scheduler) at the top of this page lives here in the Outlook VBE:

[Screenshot: the form's location in the Outlook VBE project tree]

You'll need to create the form with the control names I have in the code below. Also, to run this from your Outlook toolbar, create a new Module (I have two above). In the new module, enter these three lines of code:


Sub RunScheduler()

    Scheduler.Show

End Sub

 

Once complete, you can drag the macro button to your toolbar.

[Screenshot: the macro button on the toolbar]

This is not a tutorial. Rather, it is an example you can tailor to your own needs by examining the code and changing what you want to get the desired effect. A little VBA research on your part may be in order.

 

The Code (Put this in the code section for the Scheduler form):

 

    Dim ola As Outlook.AddressList
    Dim ole As Outlook.AddressEntry
    Dim WriteDate As Date
    Dim EmailAddy As String

    Private Sub Calendar1_Click()
        txtMsg.Text = ""
    End Sub

    Private Sub CheckBox1_Click()
        CheckBox1.Value = Not CheckBox1.Value
    End Sub

    Private Sub ComboBox1_Change()
        txtMsg.Text = ""
    End Sub

    Private Sub CommandButton1_Click()
        Dim myItem As Outlook.AppointmentItem
        Dim myRequiredAttendee As Outlook.Recipient

        If ComboBox1.Text = "" Then
            MsgBox "Really? Step 1 is entering an author's name."
            Exit Sub
        End If

        If CheckBox1.Value = True Then
            EmailAddy = ComboBox1.Value
            WriteDate = CDate(Calendar1.Value & " 8:00 AM")

            ' Create the private reminder appointment.
            Set myItem = Application.CreateItem(olAppointmentItem)
            With myItem
                ' Add the To recipient(s) to the message.
                Set myRequiredAttendee = .Recipients.Add(EmailAddy)
                myRequiredAttendee.Type = olTo
                ' Resolve each Recipient's name.
                For Each myRequiredAttendee In .Recipients
                    myRequiredAttendee.Resolve
                Next
            End With

            myItem.MeetingStatus = olMeeting
            myItem.Subject = "Write an article for tomorrow, due at 8am."

            If txtTitle.Text <> "" Then
                myItem.Body = txtTitle.Text & " for " & txtForum.Text & "."
            Else
                myItem.Body = "Write an article for tomorrow, due at 8am."
            End If

            myItem.Location = "Your Desk."
            myItem.Start = WriteDate
            myItem.Duration = 90
            myItem.ReminderMinutesBeforeStart = 1440   ' the 24-hour reminder
            myItem.ReminderSet = True

            Set myRequiredAttendee = myItem.Recipients.Add(EmailAddy)
            myRequiredAttendee.Type = olRequired
            myItem.Send
            ComboBox1.Value = ""
            txtMsg.Text = "Reminder sent to " & EmailAddy & "."

            ' Now add a matching entry to the public calendar.
            Dim myNameSpace As Outlook.NameSpace
            Dim myFolder As Outlook.Folder
            Dim SubFolder As Outlook.AppointmentItem

            Set myNameSpace = Application.GetNamespace("MAPI")
            ' Item(3) assumes the Public Folders store is the third store
            ' in your profile; adjust the index for your own setup.
            Set myFolder = myNameSpace.Folders.Item(3)
            Set SubFolder = myFolder.Folders("All Public Folders").Folders("Your Public Sub Calendar").Items.Add(olAppointmentItem)

            With SubFolder
                .Subject = EmailAddy
                .Start = WriteDate
                .Save
            End With

        End If

    End Sub

    Private Sub UserForm_Initialize()
        Calendar1.Value = Now

        ' Populate the author drop-down from the Global Address List.
        Set ola = Application.Session.AddressLists("Global Address List")
        For Each ole In ola.AddressEntries
            ComboBox1.AddItem ole.Name
        Next
        Set ola = Nothing
        Set ole = Nothing

    End Sub


Welcome to the SolarWinds Blog Series, ‘Basics of Routing Protocols’. This is the last of a four part series where you can learn the fundamentals of routing protocols, types, and their everyday applications in network troubleshooting.

In the previous blog, we discussed Open Shortest Path First (OSPF), OSPF message types, and the protocol’s advantages and disadvantages. In this blog, we’ll shed some light on another popular routing protocol: EIGRP (Enhanced Interior Gateway Routing Protocol).

 

What is EIGRP (Enhanced Interior Gateway Routing Protocol)?

EIGRP, a distance vector routing protocol, exchanges routing table information with neighboring routers in an autonomous system. Unlike RIP, EIGRP shares only the routing information a neighboring router doesn't already have, thereby reducing unwanted traffic transmitted through routers. EIGRP is an enhanced version of IGRP and uses the Diffusing Update Algorithm (DUAL), which reduces the time taken for network convergence and improves operational efficiency. EIGRP was a proprietary Cisco® protocol until it was published as an open standard in 2013.
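
To give a flavor of DUAL, here's a toy Python sketch (illustrative only, not Cisco's implementation) of its feasibility condition: a neighbor qualifies as a feasible successor, i.e. a loop-free backup, only if its reported distance to the destination is lower than our current feasible distance.

    def feasible_successors(feasible_distance, neighbors):
        """neighbors maps name -> (reported_distance, cost_to_neighbor)."""
        return {
            name: rd + cost
            for name, (rd, cost) in neighbors.items()
            if rd < feasible_distance   # the DUAL feasibility condition
        }

    # Hypothetical topology: our current best path to a prefix costs 110.
    print(feasible_successors(110, {"R2": (90, 30), "R3": (120, 10)}))
    # {'R2': 120} -- R3 is rejected; its reported distance exceeds our FD.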

 

EIGRP Packet Types

Different message types in EIGRP include:

  • Hello Packet – The first message type sent when the EIGRP process starts on a router. Hello packets identify neighbors and form adjacencies, and are multicast every 5 seconds by default (60 seconds on low-bandwidth networks).
  • Update Packet – Contains route information and is only sent when there is a change, and only to the routers that need the update (partial updates). When a new neighbor is discovered, updates are sent to that router as unicast.
  • Acknowledgement – Unicast in response to an Update packet to confirm that the update was received.
  • Query – Sent to query neighbors for routes. When a router loses a route and has no backup path, it multicasts Query packets to all neighboring routers to find alternate paths.
  • Reply – Unicast by routers that know an alternate route for the destination queried on the network.


EIGRP – Pros and Cons

Speedy network convergence, low CPU utilization, and ease of configuration are some of the advantages of EIGRP. EIGRP routers store their neighbors' routing information, so they can quickly adapt to alternate routes. Variable-length subnet mask support reduces time to network convergence and increases scalability. EIGRP also includes MD5 route authentication. Compared to RIP and OSPF, EIGRP has more adaptability and versatility in complex networks, combining many features of both link-state and distance-vector protocols. On the downside, since EIGRP is mostly deployed in large networks, routers can delay sending information during the allotted time, which causes neighboring routers to query for that information again, thus increasing traffic.


Monitor Routers Using EIGRP in Your Network

Advanced network monitoring tools have the ability to monitor network route information and provide real-time views on issues that might affect the network. Using monitoring tools in small networks, you can view router topology, routing tables, and changes in default routes. You can also check out overview blogs on RIP and OSPF routing protocols.

[Photo: John Herbert]

 

We’re getting close to the end of the month, so that must mean it’s time for another installment of our ever-popular IT Blogger Spotlight series.

 

I recently caught up with John Herbert of LameJournal fame, who was kind enough to answer a few questions. In addition to following John’s exploits on LameJournal, you can keep up with him on Twitter, where he’s affectionately known as @mrtugs.

 

SWI: Tell me about LameJournal and how you got started with it.

 

JH: A while back I purchased LameJournal.com with the intent of grabbing the freely available LiveJournal source code and running a spoof site, as if the site itself weren’t sufficiently self-derivative. I sat on the domain for at least five years and failed to do anything with it, mainly because it sounded like an awful lot of work just for a joke.

 

Then in April 2011, I went to a flash photography seminar and was so buzzed about the event that I felt I just had to share my enthusiasm, so I dusted off the domain, installed WordPress, and created my first post beyond the default "Hello World."

 

That post was looking a bit lonely on its own, and somebody had been asking me to explain Cisco's Virtual Port Channel technology to them, so I put out a post on VPC the next day. Like somebody with a new toy, I then started taking things that were on my mind and turning them into posts, because hey, somebody might be interested, right? Cisco Live, Visio, TRILL, some training I went on, and so forth. While the blog subtitle is “Networking, Photography and Technology,” it became evident very quickly that the content was going to be primarily about networking, with an occasional glance at photography and other technology.

 

SWI: And as they say, the rest is history, right?

 

JH: Yep. Really, it’s ended up being an outlet for anything I think is interesting. One of the things I found hardest when I started blogging was to get over the feeling that the information I wanted to share might not be noteworthy, or is already covered elsewhere. My attitude now—and the one I share with others to encourage them to blog, too—is to say, “OK, was this new or interesting to me? Then blog about it.” After all, if it’s new to me, then it’ll be new to somebody else out there, too, which means I should write the post!

 

With that said, I still get the most pleasure from writing about something that will help other people in some way, especially if I can fill a gap in the information already out there and provide a unique resource. I don’t actively look for those topics, but they’re great when they crop up. Beyond that, I usually write about real situations—either current or past—that were interesting or challenging so that (a) I have a record of it, and (b) it might save somebody else some trouble later.

 

SWI: I like the way you think! Do you find you get more interest in certain topics than others?

 

JH: Experience has shown there’s not really a good predictor as to whether a particular post will generate interest, but I find there are two general types of posts that have done better. The first are posts describing problems I’ve had and, if possible, how I fixed them. They’re successful because when somebody else experiences the same issue, they search the Web and my post shows up in the results. Even if there’s a frustrating lack of solution, I personally find great solace in knowing that I’m not the only idiot with a particular problem. For example, I’ve written posts about Cisco AnyConnect ActiveX, Office 2013 and iTunes Match that were very popular over time; they seem to have lasting appeal. The other category of posts that do better are those covering new technologies, where information out there is a bit patchy. Examples include TRILL and Cisco's VPC, and more recently discussions about software defined networking. Posts that are topical may be successful short term, but they tend to have less long term interest, which makes sense when you think about it.

 

SWI: Definitely. So, what do you do professionally?

 

JH: I’m a consultant for a professional services company. So, to put it simply, I move packets for other people. Consulting is interesting in part because I get to see so many different networks, teams and company structures, rules, procedures and architectures. I like the insight this gives me, and I find it fascinating to see what each client determines is most important for their network.

 

SWI: Very interesting. How did you get into IT in the first place?

 

JH: I kind of fell into it, really. I've always enjoyed working with computers and was programming SAS (database/statistics software) when a friend suggested I should join the company he worked for and do networking. I really didn't get what it was that he did, despite him trying to explain it, but the pay sounded good so I made the leap and haven't looked back.

 

SWI: What are some of your favorite tools as an IT pro?

 

JH: From the SolarWinds portfolio, Engineer's Toolset has been on my work laptop builds almost continually since the year 2000. Fun fact: I actually joined International Network Services in 1999, and that’s where Don Yonce (one of SolarWinds’ co-founders) was also working. So, I have always felt like I have a special relationship with the SolarWinds products. I also typically have SolarWinds free tools installed on my own machines (the Advanced Subnet Calculator is a very old friend of mine!). I’m using a MacBook right now, so I’m feeling a little lonely, but since my other favorite networking tool is Perl, I’m all set for that at least. The ability to program in one scripting language or another is a huge benefit to any network engineer in my opinion, and was so even before SDN reared its head.


SWI: And what are you up to these days when you’re not working or blogging?

 

JH: I have a wife and three school-aged children, a home network to perfect and meals to cook. So, beyond working and blogging I mainly eat and sleep. Occasionally, I play some piano, which I find very cathartic, and I’m also on the board of directors for my home owners’ association, which eats up some more time. As my blog title suggests I also enjoy photography, and I really should get out and do more of it.

 

SWI: Well, I hope you’re able to. Switching directions a bit, what are some of the most significant trends you’re seeing in the IT industry right now, and what do you think is to come?

 

JH: In the networking world, the buzzword-du-jour is SDN. One way or another, there’s a huge paradigm shift occurring where pretty much every vendor is opening up access to their devices via some form of API, and there’s a growing new market for controllers and orchestrators that will utilize those APIs to automate tasks. Those tasks can be anything from configuring a switch port or VLAN as part of a service chain that instantiates a new service, to programming microflows on a switch. I said “devices,” but lest it sound like this just means hardware—the network “underlay”—SDN also extends to the software side of things, both in terms of encapsulations like VXLAN, an overlay, and features like network function virtualization, which offers some exciting possibilities.

 

My one fear is that SDN encompasses so much, it’s in danger of becoming another meaningless marketing term like “cloud,” and I'm waiting to see the first “SDN Compliant” sticker on a box. That aside, the innovation in the SDN space, both proprietary and open source, is redefining the way networks can be built and operated, and it’s a very exciting time to be in this industry. The downside is that there’s so much going on, there aren’t enough hours in the day to keep up with all the blog posts that could be written!

 

SWI: Well, that’s all of my questions. Is there anything else you’d like to add?

 

JH: If I may, I’d like to give a shout out and a thank you to all the networking and technology bloggers out there. In many IT and networking teams, there’s that one person who hoards information and believes they’re creating job security by being the only one to understand something, and thus they resist sharing that knowledge with others. Blogging is the polar opposite of that; bloggers take the opportunity to share information that may improve somebody else’s ability to do their job, help them avoid a problem before it happens or just make you smile because somebody else is experiencing the same challenges as you. I stand in awe at the quantity and quality of posts that some people manage to create. I use an RSS reader so that I can follow a large number of those blogs in a manageable way, and I strongly recommend RSS.

 

I would also encourage anyone who reads this to consider whether or not they have something they could share with others via a blog. I look at it this way: if I learned something new today, maybe I could help somebody else learn that thing tomorrow. And to paraphrase "Wayne’s World," I only hope you don’t think it sucks!

As much as we try to understand the importance of password security – whether it’s for a computer login, email account, network device, Wi-Fi, or domain access – we don’t seem to implement it meticulously every time we set up or change a password. Password security is a popular topic for IT pros and end-users alike, but it has remained a "good to know" topic rather than an "I’ll do it right away" practice.

 

There’s yet another example of a password leakage debacle that reinforces the need to enforce stricter password security measures. During pre-game coverage of NFL Super Bowl XLVIII, the stadium’s internal Wi-Fi login credentials were displayed on a big screen in the network control center, and a televised broadcast of video footage revealed the big screen and the password – unencrypted, in full visibility! It could, of course, be called an oversight; but when it comes to protecting IT assets and securing data, this is a lack of due diligence on the part of the stadium’s IT security team. Nor did they review the footage well enough before the telecast to nip the issue in the bud.

 

Speaking of password sharing, let’s discuss some best practices to ensure, one, that you build a strong password that is hard to guess, and two, that you remember the risks of leaving your passwords accessible to others.

 

Best Practices to Protect & Strengthen Your Passwords

Password Sharing Doesn’t Make You Noble or Kind: Never share your passwords with anyone unless you are absolutely certain there won’t be regrettable ramifications. You never know whether their system is compromised, whether they leave passwords written in the open, or whether they are a gullible social engineering target. Even if you have to share a password for some reason, change it immediately after they’ve finished using your login.

  

Make Them Long, Make Them Strong: Longer passwords are difficult to guess, especially if they are alphanumeric, include special characters, and mix lowercase and uppercase characters (see the sketch after this list).

  • Use at least 8 characters in your password. The longer, the stronger.
  • Make passwords complex and difficult to guess.
  • You can even use password-generating software available online to spin up a strong string for your password.
  • Don’t use biographical details such as your name, date of birth, or city in your password, as these can be easily guessed.
  • Try to ensure your passwords don’t contain common dictionary words.
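
As a toy illustration of the rules above (hypothetical, and no substitute for a vetted password policy library), a checker might look like this:

    import re

    COMMON_WORDS = {"password", "letmein", "monkey", "princess"}  # sample only

    def is_strong(pw: str) -> bool:
        """Apply the length, character-mix, and common-word rules above."""
        return (
            len(pw) >= 8
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None
            and pw.lower() not in COMMON_WORDS
        )

    print(is_strong("monkey"))        # False: short, common, no character mix
    print(is_strong("T4il!wind_42"))  # True: long, all four character classes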

  

Strict No-No for Common/Repeated Passwords: A hacker has many devious ways, such as brute force attacks, to get into your system. Using common passwords, or the same password across different sites and purposes, only makes that job easier.

 

Not All Computers Are Your Friends: Keystroke logging (aka keyboard capturing) has become a common type of malware that finds its way into unprotected systems quite easily. You may never know it, but your keystrokes could be captured as you type out your passwords. There are various types of keystroke-capturing software that could swipe your passwords: hypervisor-based malware, API-based, kernel-based, form grabbing-based, memory injection-based, and packet analyzers. Always remember to log out of your personal accounts when you are using someone else’s system.

 

Beware of the Eye of Sauron: Be watchful of your immediate vicinity when you enter your password into a system holding financial or other personally identifiable information.

 

As Much As You Do, Your Passwords Too Need Change: It’s best to change your password every once in a while, and not to reuse a retired password for at least a year. Whether your system prompts you to or not, make it a point to change your password periodically.

 

Don’t Make it to The Hackers’ Hall of Fame

SplashData, a password management company, has released its list of the "25 worst passwords of the year" for 2013, compiled using data that hackers posted online (believed to be stolen passwords).

 

1) 123456
2) password
3) 12345678
4) qwerty
5) abc123
6) 123456789
7) 111111
8) 1234567
9) iloveyou
10) adobe123
11) 123123
12) admin
13) 1234567890
14) letmein
15) photoshop
16) 1234
17) monkey
18) shadow
19) sunshine
20) 12345
21) password1
22) princess
23) azerty
24) trustno1
25) 000000

 

Top 10 Password Preferences: The Weak & Common Themes

Google has released a list of the password selection themes that were most popular, based on a survey of 2,000 people, to understand how people create passwords. Here are the 10 most common and easiest-to-guess themes.

 

1) Pet’s name
2) Significant dates (like a wedding anniversary)
3) Date of birth of a close relation
4) Child’s name
5) Other family member’s name
6) Place of birth
7) Favorite holiday
8) Something related to a favorite football team
9) Current partner’s name
10) The word "password"

  

Yes, I agree periodic password change is a grind. On top of that, you have to remember what you used earlier so you don’t repeat it. But it’s all worth the effort to manage and secure passwords, rather than face the consequences of account breaches, data theft, and all the other fallout. And do protect your passwords from those prying hacker eyes!


Supporting the Vendor

Posted by dlink7 Mar 24, 2014

Over the last three weeks my posts have focused on end users and remote support tools. This time I want to focus on vendors. In theory, you should love the vendors you work with; they are an extension of your IT team. I know there are new vendors trying to get new business, and most people try to avoid them like the plague, but there is an exception to every rule. Most vendors have a genuine interest, or at least in my mind a stake, in seeing their customers succeed.

 

One thing that can make or break a vendor relationship is remote support. Some vendors have remote support enabled on their systems so they can go directly to the box in question. I am a big fan of this, but the paranoid folks worry that security may be at risk and usually don’t allow it. I know that for Nutanix gear you can set a timer on how long you want the remote tunnel to last. It’s a good option if you’re worried about a vendor using their equipment as a jump box. If you’re dealing with a global company, this option is great for removing some of the language barriers that may exist with follow-the-sun support.

 

If you don’t have the option above, the next step is the dreaded WebEx/GoToMeeting. For whatever reason, in a time of crisis you can rest assured you will be downloading the newest client and maybe even playing with a Java update. That’s usually OK, but console work tends to be problematic for the person trying to give support. My big beef is that it gets people used to accepting remote connections. Humans are easily fooled, myself included, so if possible I think it best to control access on your own terms. If you can extend your current remote tools to share your screen with the vendor, I think that is ideal.

 

What do you do for your vendors so they can support your gear on site? Give them a virtual desktop and only give access to their system? VPN with full access? Let them use their own tools?

 

Curious to hear people’s thoughts, and whether you consider this a security threat.

This is my last post as an ambassador. I've had a ton of fun and appreciate everyone’s feedback and opinion. Now let’s talk about some network diagrams!

No network should be without a good visual representation of its overall design and layout. When push comes to shove, any documentation is better than no documentation. For me, an awesome network topology is worthy of printing out on a plotter and hanging on my cube walls. But what techniques can you use to build great visual representations of your network that are both clean and adequately detailed?

Here are several of the techniques I use:

  1. Basic shapes for equipment. To me, stencils can get messy, whereas a plain square is simple for organizing the layout of connections. I avoid rack-view stencils and real images unless I'm doing an elevation diagram.
  2. Separate the network into multiple pages. I usually build separate layer-two and layer-three diagrams, and I will also usually keep network and servers/services in different workbooks altogether. Depending on the complexity, I will also separate WAN, VPN services, etc. onto their own pages. This keeps each page clean and simple.

          If you get annoyed with jumping back and forth between diagrams or tabs, you can use layers. To me this adds more complexity and doesn’t always work well. That could just be me, though.

  3. Organize the layout in a way that quickly represents the flow of traffic, but always try to avoid crossing connections. For physical cabling and connections, I like to keep lines running horizontal and vertical, never at an angle. Angled and curved lines, for me, represent logical connections.

 

So, what are the techniques you have used?

What has worked for you in the past and what do you try to avoid?

‘Tis the season of madness – March Madness! It goes without saying that all of us (the basketball fans, of course) are wired into the games: watching matches, following scores, rambling on at length, blogging, tweeting, and whatnot. That’s totally alright. Every fan would love to get caught up in the heat and show some NCAA love. But what happens when your employees and network users start streaming the games online in the office, consuming corporate bandwidth? This is when madness becomes insanity for the network admins.

 

  • More user complaints about the network being unavailable or awfully slow
  • Latency and network traffic delay
  • Adverse impact on network traffic quality levels
  • No clue whether to just disable the Internet for all users (height of frustration) or buy more bandwidth (height of desperation)

 

Bottom line: mission-critical network users and apps are affected by increased bandwidth utilization from online streamers and video watchers.

 

So, what is left for the helpless network admin to do? Cry for help, or implement proper bandwidth monitoring utilities to track when, where, and by whom bandwidth is being consumed for non-official purposes?

 

Instead of pulling the plug on the Internet, or shelling out your IT budget for more bandwidth, you should be able to optimize network bandwidth usage by pinpointing incidents of bandwidth spikes and administering measures to prevent network policy offenders from hogging your precious bandwidth with online video streaming – during March Madness or whenever.

 

We still have a couple of weeks to go with the games, and March Madness is truly a great time for all of us fans. But that doesn’t mean network teams should be battling for bandwidth and risking network performance issues. Proactively monitor bandwidth utilization and traffic quality levels, and slam-dunk those bandwidth bottlenecks!

 

Check out this cool infographic from our creative team about bandwidth risk assessment during March Madness!

[Infographic: bandwidth risk assessment during March Madness]

UPDATED WITH BRACKET LINK BELOW:


We just couldn’t help ourselves. Reaction has been so good and we are just a weekend away from this year’s SolarWinds Bracket Battle 2: Gamer Edition… Before we “Falcon Punch!” this competition into high gear, let’s just review the gameplay today!

 

We’ve put our quarter down to get dibs on next, and we have enough batteries to keep these controllers running for months.  We’ve blown on our cartridges and bumped up our bandwidth…  We are ready to go.

 

We have selected 33 video games from across platforms and genres to battle it out head to head for supremacy. Each pairing has been debated and set -- over Mountain Dew and Cheetos -- and is based on some shared theme or principle (and, it is not always the most obvious basis of comparison).

 

MATCH UP ANALYSIS

  • For each combatant, we offer a link to the best Wikipedia reference page; click the NAME link in the bracket.
  • A breakdown of each match-up is available by clicking on the VOTE link.
  • Anyone can view the bracket and the match-up descriptions, but to comment and VOTE you must be a thwack member (and logged IN).

 

VOTING

  • Again, you have to be logged in to vote and debate…
  • You may only vote ONCE for each match up
  • Once you vote on a match, click the link to return to the bracket and vote on the next match up in the series.
  • Each vote gets you 50 thwack points!  So, over the course of the entire battle you have the opportunity to rack up 1550 points. Not too shabby…

 

CAMPAIGNING

  • Please feel free to campaign for your favorites and debate the merits of our match-ups to your heart’s content in the comments section and via Twitter/Facebook/Google+ etc.
  • We even have hashtags… #swibracketbattle and #levelup… to make it a little bit easier.
  • There will be a PDF version of the bracket available to facilitate debate with your office mates or WoW Raid.
  • And, if you want to post pics of your bracket predictions, we would love to see them on our Facebook page!

 

SCHEDULE

 

  • Bracket Release and Prequel Battle OPENS March 24 at MIDNIGHT
  • 8-bit Battles OPEN March 26
  • 16-bit Battles OPEN March 31
  • 32-bit Battles OPEN April 3
  • 64-bit Battles OPEN April 7
  • Game Over Battle opens April 10
  • Champion of the Arcade will be announced on April 14

 

If you have other questions… feel free to drop them below and we will get right back with you!

 

Otherwise, keep your eyes on this space.  And, here is the link to the Bracket!  FINALLY...

 

Ready Player One!

UPDATE: The Gameplay Rules Post is now UP!

UPDATE v2: Here is the BRACKET!  Go vote!

 

The time has come for SolarWinds’ second annual, old-fashioned (hypothetical) grudge match.  Last year, Spock triumphed over a handpicked stable of Sci-Fi icons. This year, we’re ready to flaunt our gaming knowledge!

 

Welcome one and all to the…

 

SolarWinds Bracket Battle 2 – Gamer Edition!

On March 24th, here at thwack.com, we are once again going to let our community decide who shall stand victorious and whose plug should be pulled, relegated to the stack of cartridges in the closet. The bracket-based, “March Mayhem”-style competition will feature 33 video game titans from various platforms (PC, arcade, consoles of all sorts) and genres (fantasy, first-person shooter, MMOG… the list goes on) battling it out for the last level. Similar games are matched against each other in the first round, but then mayhem will ensue as we vote to determine each round’s winner and allow the bracket to develop over the two weeks. No joystick is required to guide your favorites away from the ultimate “game over.” So, gamers… start mobilizing troops and conjuring up Intimidating Shouts to ensure that the game that engulfs your nights and weekends is crowned champion.

 

Do NOT miss the chance to decide …

 

Do you want Tommy Vercetti by your side?

Can the Umbrella Corporation survive in Rapture?

Which Princess is worth saving?

 

The official bracket and rules of engagement will be released on March 24, 2014, including the “play-in” match up.

 

Trust us, you do not want to Leeroy Jenkins this… March 24 release date people, get in line now and start sharing the news with #swibracketbattle #levelup .

The File Transfer Protocol (FTP) client is an application, running on a user’s system, that initiates file transfer requests—either sending or receiving files via the FTP server. FTP works on a client-server architecture, and the FTP client is the entity that creates a Transmission Control Protocol (TCP) control connection with the FTP server. Just a few decades ago, FTP clients were command line interface (CLI) applications. They now come in all sorts of easy-to-use, intuitive interfaces that facilitate and simplify file transfers. FTP clients exist for desktops, servers, and mobile devices, and are available as standalone apps, Web clients, and even extensions to Web browsers.

 

The FTP server can support both active and passive connections with the FTP client. In an active FTP connection, the client opens a port and listens while the server actively connects to it. Whereas, in a passive connection, the server opens a port and listens passively, which allows clients to connect to it.

 

A passive connection is more secure and preferred by IT admins because data connections are made from the FTP client to the FTP server. This is a more reliable method, and it avoids inbound connections from the Internet back into individual clients. In firewalled deployments, all connections are made from the client to the server—not from the server back to the client. Passive mode is also known as "firewall-friendly" mode. And the more secure file transfer protocols (such as SFTP and FTPS) an FTP client supports, the more secure your transfers become.
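
To see the two modes from a client’s point of view, here’s a minimal sketch using Python’s standard ftplib (host, credentials, and file name are placeholders):

    from ftplib import FTP

    ftp = FTP("ftp.example.com")            # control connection on port 21
    ftp.login(user="demo", passwd="secret")

    ftp.set_pasv(True)    # passive mode: client opens the data connection (default)
    # ftp.set_pasv(False) # active mode: server connects back to the client

    with open("report.pdf", "wb") as f:
        ftp.retrbinary("RETR report.pdf", f.write)  # download over the data channel

    ftp.quit()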

[Diagram: active vs. passive FTP connections]

This is what FTP clients essentially do. So, what else can they do? Well, a lot more in terms of making your entire file transfer process simpler and more user-friendly. FTP clients offer many features that make your FTP user experience, customization, and management options simpler and more convenient.

 

Here are some FTP client features:

  • File transfer synchronization: FTP clients synchronize files and folders between local and remote folders. FTP clients can compare folders and display any missing files between the two in different colors for quick identification.
  • Scheduled file transfer: FTP clients can automate file transfers by allowing you to schedule tasks based on who and when the files need to be sent.
  • Post-transfer actions: FTP clients allow you to run scripts, launch applications, send email, and delete original files after completion of a file transfer.
  • Facilitate bulk transfer: FTP clients enable you to upload or download multiple files and entire folder trees.
  • Transfer queue: FTP clients display the file transfer process in queue fashion with progress bars, pause, and resume options for quick controls.
  • Intuitive Navigation: Familiar “side-by-side” transfer window, drag and drop files from your desktop, thumbnail view of files, preview panes and detailed list views.
  • Mobile Views: There are also mobile clients available for FTP server software that allow you direct dashboard visibility and management access from your mobile/smart phone and tablet consoles.

 

Third-party FTP server software such as Serv-U® Managed File Transfer Server provides an out-of-the-box FTP client for simpler, more secure managed file transfers, including options to transfer multiple files at once and upload very large files (>2GB).

[Screenshot: the FTP Voyager client]

Welcome to the SolarWinds Blog series, ‘Basics of Routing Protocols’. This is the third of a four part series where you can learn more about the fundamentals of routing protocols, types, and their everyday applications in network troubleshooting.

In the previous blog, we discussed Routing Information Protocol (RIP), its advantages, disadvantages, and how to monitor routers that use RIP in the network. In this blog, we’ll look more closely at OSPF (Open Shortest Path First) and its applications in large networks.


What is OSPF (Open Shortest Path First)?

OSPF, a link state routing protocol, is used in large organizations for their autonomous system networks. OSPF gathers link state information from available routers and determines the routing table information used to forward packets, based on the destination IP address. It does this by building a topology map of the network. Any change in a link is immediately detected and flooded to all other routers, so they all share the same topology information. Unlike RIP, OSPF only multicasts routing information when there’s a change in the network. OSPF is used in complex networks that are subdivided to ease network administration and optimize traffic. It quickly recalculates the shortest path when the topology changes, using minimal network traffic.

OSPF allows network admins to assign cost metrics for a particular router so that some paths are given higher preference. OSPF also provides an additional level of routing protection capability and ensures that all routing protocol exchanges are authenticated.
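
Under the hood, each OSPF router runs Dijkstra’s shortest-path-first algorithm over its link-state database, using those assigned costs as edge weights. Here’s a minimal Python sketch with a hypothetical four-router topology:

    import heapq

    def spf(graph, source):
        """Dijkstra's SPF: graph maps router -> {neighbor: cost}."""
        dist = {source: 0}
        pq = [(0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue            # stale queue entry
            for v, cost in graph[u].items():
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist

    # Hypothetical four-router area; costs are per-interface OSPF metrics.
    lsdb = {
        "R1": {"R2": 10, "R3": 1},
        "R2": {"R1": 10, "R4": 1},
        "R3": {"R1": 1, "R4": 100},
        "R4": {"R2": 1, "R3": 100},
    }
    print(spf(lsdb, "R1"))   # {'R1': 0, 'R3': 1, 'R2': 10, 'R4': 11}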


OSPF Message Types

OSPF doesn’t send its messages over UDP. Instead, it builds IP datagrams directly, using protocol number 89 in the IP protocol field. The different OSPF message types include:

  • Hello Packet – Sent by routers to set up relationships with neighbors and communicate frequently to keep the connection alive. Hello Packet shares key parameters on how OSPF is to be used within the network.
  • Database Description – The description of the link state database for the autonomous system is transmitted from one router to another.
  • Link State Request – This is requested when a portion of the network needs to be updated with current information. The message specifies exactly which links are requested by the device that wants more current information.
  • Link State Update – This contains the updated information for the requested links. It’s sent in response to the LS request.
  • Link State Acknowledgement – This acknowledges the link-state exchange process for link state update message.


OSPF – Pros and Cons

One advantage of the OSPF routing protocol is that it has complete knowledge of the network topology, allowing routers to calculate routes as requests come in. In addition, OSPF has no hop count limitation, converges faster than RIP, and has better load balancing. The disadvantage is that OSPF doesn’t scale well as more routers are added to the network, because it maintains multiple copies of routing information. OSPF networks with intermittent links can see increased traffic every time a router floods the information. This lack of scalability means that a link state routing protocol like OSPF is not suitable for routing across the entire Internet.


Monitor Routers Using OSPF in Your Network

Advanced network monitoring tools have the ability to monitor network route information and provide real-time views on issues that might affect the network. By using monitoring tools in small networks, you’ll be able to view router topology, routing tables, and changes in default routes.

 

Learn more about other popular routing protocols like EIGRP in the blog series.

IP addresses help devices stay connected to the network. Non-availability of an IP address means that a device loses some or all of its ability to access or use the network. The severity of the issue depends on what that device is, i.e. a laptop, PC, or a critical server. DHCP servers play a vital role because they automatically assign IP addresses to devices whenever they enter a network. DNS servers on the other hand provide network clients with the service of domain name resolution.

 

Any problem hindering operation of these servers, be it performance, availability or scalability, impacts productivity of users. So, what are the factors you should consider when selecting DHCP/DNS services for your network?


  • As an organization grows, so does the number of devices and the need for IP addresses. High-performing DHCP and DNS servers are required to meet the network's growing IP requests. Sometimes a single DHCP server is no longer sufficient, and more servers are required to service IP requests. This means more administration and configuration tasks for the administrator.
  • Given the importance of the DHCP server being available at all times, it’s advisable to run the DHCP service on a dedicated server that is unlikely to be affected by other services that consume hardware resources and impact server performance.
  • Most importantly, ensure high availability and increase fault tolerance with redundancy options. This means additional investment, and we all know that services from Microsoft® and Cisco® are expensive. Moreover, investments in additional licenses for redundancy fall heavy on allocated budgets.
  • At any given time, the administrator needs to quickly know the availability of an IP address, the utilization status of all DHCP scopes, and the subnets in the entire IP space (a toy utilization check follows this list).
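
As a toy example of that last point, a scope-utilization check boils down to a leased-to-total ratio per scope (the numbers below are made up):

    def scope_utilization(scopes, threshold=0.9):
        """scopes maps name -> (leased, total). Returns scopes at/over threshold."""
        return {
            name: leased / total
            for name, (leased, total) in scopes.items()
            if leased / total >= threshold
        }

    print(scope_utilization({"10.0.1.0/24": (241, 254), "10.0.2.0/24": (60, 254)}))
    # {'10.0.1.0/24': 0.948...}  -- only the nearly exhausted scope is flagged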


Microsoft and Cisco are the common DHCP/DNS vendors servicing IP distribution needs in an organization. But there are also open source solutions, like those from ISC (Internet Systems Consortium) Inc., that hold a fair share of the market. These services are suitable for use in high-volume and high-reliability environments. In fact, ISC’s BIND is widely regarded as the reference implementation of DNS and is used by many enterprises. DHCP/DNS solutions from ISC can be downloaded and installed free of cost, as compared to other vendors, especially Microsoft, Cisco, and other appliance-based solutions.


Cost-effective solutions easily stand atop other vendors on license and maintenance costs, community support, and global audience. ISC DHCP and BIND are built around standards, hence they are used by most admins, ship with most operating system distributions, and are integrated into solutions by vendors. Even if you’re currently running your DHCP services on Microsoft or Cisco, you can consider ISC as an affordable failover option that gives you precious time to get the partner server back up (a sketch of such a configuration follows below). An important point to note, however, is that it’s all CLI (Command Line Interface) based, meaning you’d have to be quite handy with command line configurations. This problem can be overcome with tools that provide convenient integration, including support for ISC DHCP. That way you can eliminate frequent use of the CLI and don’t have to worry about finding skilled resources to handle complicated configurations.
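
For a sense of what that CLI-driven configuration looks like, here’s a hypothetical ISC dhcpd.conf fragment pairing a primary with a failover peer (all names and addresses are placeholders; consult the ISC documentation for the full set of tuning parameters):

    failover peer "dhcp-failover" {
      primary;                    # the peer's config says "secondary"
      address 10.0.0.1;
      port 647;
      peer address 10.0.0.2;
      peer port 647;
      max-response-delay 60;
      max-unacked-updates 10;
      mclt 3600;                  # primary-only settings
      split 128;                  # share the pool roughly 50/50
    }

    subnet 10.0.1.0 netmask 255.255.255.0 {
      option routers 10.0.1.1;
      pool {
        failover peer "dhcp-failover";
        range 10.0.1.100 10.0.1.200;
      }
    }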


Every DHCP and/or DNS outage costs in terms of decreased productivity, increased expenses, and lost revenue. The risk caused by not having a reliable DHCP and DNS deployment seriously impacts your business and reputation. Administrators nowadays are looking at options that help consolidate management of multiple vendors under one platform. The addition of any number of DHCP/DNS servers from different vendors should not disrupt existing management processes.


As we can now see, ISC is a viable solution for DHCP and DNS because it gives the user highly available, reliable, high performing, and scalable services without massive upfront hardware appliance or license costs and recurring maintenance charges.

Last week I was asked about offline VDI. I was taken aback a bit because that question hasn’t come up in a really long time. My opinion has always been that if you don’t have the Internet or a link to the database, installing the application locally isn’t really going to do anything for you. Maybe I’m just taking for granted that everyone has some form of high-speed access today, and that if you don’t, you’re in a place where you don’t want to be bugged anyway. The classic is, “What happens when I am traveling on the plane?” Open Microsoft Office like the rest of us, or watch a movie.

 

Joking aside, we’ve become really dependent on the network for work and for delivering support. I personally wouldn’t devote a lot of effort to unconnected users. Maybe I’m living off in Never Never Land, but I think too much time is wasted on the last 20% of use cases. I would focus on getting my remote users the tools they need, ensuring the network is rock solid, or at least that they can connect easily.

 

For road warriors or the office user, I really like RAP-3s from Aruba. I was just leaving a place as they were getting implemented. The fact that they can set up the VPN, I thought, was great. Standard enterprise tools could then be used for supporting the users. The best part was not worrying about a flaky wireless connection at the other end. With the RAPs you could use 3G/4G to get them connected to the Internet, and you’re all done. I know Meraki has something similar, but I don’t have any experience with them.

 

 

What tools do you provide for your remote users? Just VPN, VDI, nothing? Do people still have to worry about unconnected users?

We know that VoIP & Network Quality Manager (VNQM) is a proactive VoIP monitoring tool that monitors real-time QoS metrics such as jitter, latency, packet loss, MOS and more. But, in this blog we are going to dive deeper and understand what VNQM can do with Cisco® IP SLA operations and how it can help you monitor site-to-site WAN performance.

 

SolarWinds VNQM allows you to automatically discover and add Cisco IP SLA devices to its console, designate monitoring paths (source and destination devices) and configure polling options (frequency). While the challenge has always been to design a tailored set of IP SLA operations for monitoring your devices, VNQM helps you easily build custom IP SLA operations. You’ll be able to add, edit, and remove IP SLA operations in just a few clicks. Let’s look at the various IP SLA Operations you can monitor using VNQM.

 

Cisco IP SLA Operations in VNQM

WAN Connectivity Operations

  • TCP Connect – Measure WAN quality by testing connection times between two devices using a specific port
  • ICMP Echo – Measure round-trip time between nodes on your network
  • ICMP Path Echo – Measure round-trip time hop-by-hop between nodes on your network
  • UDP Echo – Measure round-trip time between nodes on your network

WAN Quality Operations

  • UDP Jitter – Measure WAN quality by testing connection times between two devices using a specific port number
  • ICMP Path Jitter – Measure WAN quality by testing connection times hop-by-hop between two devices
  • VoIP UDP Jitter – Measure call path metrics on your VoIP network

Infrastructure Operations

  • DNS – Measure the time between when a DNS request is sent and when the reply is received
  • DHCP – Measure the response time to discover a DHCP server and obtain a leased IP address from it

Application Quality Operations

  • FTP – Measure the response time between a Cisco device and an FTP server to retrieve a file
  • HTTP – Measure distributed web services response times
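
For context, the IP SLA operations themselves are configured on the Cisco device; a minimal ICMP echo operation in IOS looks something like this (the target address and interface are placeholders):

    ip sla 10
     icmp-echo 10.1.1.1 source-interface GigabitEthernet0/1
     frequency 60
    ip sla schedule 10 life forever start-time now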

 

Designating Paths for IP SLA Operations

For some IP SLA operations, VNQM collects performance statistics by sending traffic over paths between sites you have defined. You can choose one of the following path types based on your IP routing protocol.

 

  • Simple – Contains one source node and only one destination node. This path can be tested bidirectionally.
  • Fully Meshed – Connects every node you define to every other selected node over distinct call paths.
  • Hub & Spoke – Allows you to designate specific nodes as hubs. Each hub is then connected to all other nodes, with the paths representing spokes.
  • Custom – Allows you to define your own paths. All defined nodes are listed under this option, and expanding each node displays a list of all other nodes.

 

Benefits of Cisco IP SLA Operations

  • Monitor WAN performance between different sites/network locations
  • Test effectiveness of network readiness and performance for remote sites
  • Monitor device performance at different sites from a single location
  • Optimize WAN based on IP SLA operations and results
  • Quicker troubleshooting and time-to-resolution based on real-time WAN performance data

 

VNQM Summary View of Top IP SLA Operations

IP SLA Operations Expanded View

(Sample dashboard VoIP UDP Jitter Operation)


Welcome to SolarWinds Blog Series, ‘Basics of Routing Protocols’. This is the second of a four part series where you can learn more about the fundamentals of routing protocols, types, and their everyday applications in network troubleshooting.

 

In the previous blog, we discussed the two classes of routing protocols, distance-vector and link-state, and how you can choose which is best for your network. In this blog we’ll shed some light on another popular routing protocol: Routing Information Protocol (RIP).

 

What is Routing Information Protocol?

Routing Information Protocol, or RIP, is one of the most commonly used routing protocols for small, homogeneous networks. As a distance-vector routing protocol, RIP is used by routers to exchange topology information periodically, sending out routing table details to neighboring routers every 30 seconds. These neighboring routers in turn forward the information to other routers until the network converges. RIP uses the hop count metric with a maximum of 15 hops; anything beyond that is considered unreachable. Because of this, RIP is not suitable for large, complex networks.
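
To make the distance-vector exchange concrete, here’s a toy Python sketch (not RIP’s wire format) of a router merging a neighbor’s advertised table, with 16 hops treated as infinity:

    INFINITY = 16  # RIP treats 16 hops as unreachable

    def merge_neighbor_update(own_table, neighbor, neighbor_table):
        """Merge a neighbor's distance vector into our routing table.

        Tables map destination -> (hop_count, next_hop).
        """
        changed = False
        for dest, (hops, _) in neighbor_table.items():
            new_hops = min(hops + 1, INFINITY)
            cur_hops, _ = own_table.get(dest, (INFINITY, None))
            if new_hops < cur_hops:
                own_table[dest] = (new_hops, neighbor)
                changed = True
        return changed

    table = {"10.0.0.0/24": (0, None)}   # directly connected
    update = {"10.0.1.0/24": (1, "R2"), "10.0.2.0/24": (15, "R3")}
    merge_neighbor_update(table, "R2", update)
    print(table)  # the 15-hop route would be 16 via us (unreachable), so it's skipped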

 

RIPv1 vs. RIPv2

There are two versions of RIP. RIP version 1 uses classful routing and does not include subnet information in its periodic routing table updates. RIP version 2 is classless and includes subnet information, supporting Classless Inter-Domain Routing (CIDR). Unlike RIP version 1, version 2 multicasts the routing updates to adjacent routers using the address 224.0.0.9. Network convergence happens much faster in RIPv2.

 

RIP - Pros and Cons

Routing Information Protocol has its advantages in small networks. For one, it’s easy to understand and configure, and it’s widely supported by almost all routers. But since it’s limited to 15 hops, any router beyond that distance is considered at infinity, and hence unreachable. If implemented in a large network, RIP can create a traffic bottleneck by sending out full routing tables every 30 seconds, which is bandwidth intensive. RIP converges very slowly in large networks. The routing updates take up significant bandwidth, leaving limited resources for critical IT processes. RIP doesn’t support multiple paths on the same route and is prone to routing loops, which can result in a loss of transferred data.

 

RIP uses fixed hop count metrics to compare available routes, which means route selection cannot take real-time conditions into account. This results in increased delay in delivering packets and overloads network operations with repeated processes.

 

Monitor Routers Using RIP in Your Network

Advanced network monitoring tools have the ability to monitor network route information and provide real-time views on issues that might affect the network. Using monitoring tools in small networks you can view router topology, routing tables, and changes in default routes.

 

Learn more about other popular routing protocols like OSPF and EIGRP in the blog series.

This topic doesn't fall into the realm of geeky networking, but it can be used to simplify your work life and, ultimately, the management and day-to-day operations of your team.

I am a firm believer that individuals grow stronger and more knowledgeable as a team than on their own. Sure, there's a fair amount that has to be done on your own, but having a teammate or group to push you helps a ton.

From a business standpoint, you wouldn't want critical services and applications to have a single point of failure. So why would you want a single source of knowledge on your team? If most companies struggle just to get a single experienced, knowledgeable person into vacant positions, how are they going to have multiple people on a team share that load? To me, it's all about building up everyone on the team. You don't have to rely on hiring from outside if you have been ramping up entry-level and junior admins from within.

I try to approach my job and projects with the mindset that I will not be around in three to six months. This mindset forces me to document as much as possible and to work with those around me, so they're up to speed for when they have to take over my responsibilities.

Spending your time on knowledge transfer and helping others grow also benefits you. If you're the only one who knows how to do your job, then taking vacation is always painful or non-existent. If there is no one else around to take over your role, how are you going to set it aside and take on that big project you want? Even better, how will you get promoted if you can't be replaced?

So how do you guys approach this and how do you build your teams knowledge?

How do you approach documentation of knowledge? ( ex. Wiki, SharePoint, internal blog)

What is your stance on certifications?

What type of mentorship and training does your team have in place? 

The Mailbox server role in Exchange 2013 is actually quite simple: it hosts the mailbox and public folder databases and provides email storage. The role covers the mail database, replication, storage, the information store, RPC requests, and calendaring and resource booking. Although it's crucial to monitor the Mailbox server role, it's equally important to look at other areas within Exchange.


Monitor ActiveSync Connectivity: ActiveSync, Microsoft's Exchange synchronization protocol, is optimized to work over networks with latency and bandwidth issues. Because ActiveSync runs over HTTP, you can monitor it to find out whether mobile devices are having issues connecting to the Exchange server, and whether or not users can still access email, folders, contacts, and calendar information offline.
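As a starting point for such a check, here is a hedged Python sketch: ActiveSync is exposed over HTTPS at /Microsoft-Server-ActiveSync, so an authenticated OPTIONS request makes a cheap connectivity probe. The host name and credentials below are made up, and the third-party requests library is assumed.

import requests

# OPTIONS against the ActiveSync virtual directory; 200 means it answered.
url = "https://mail.example.com/Microsoft-Server-ActiveSync"
resp = requests.options(url, auth=("user@example.com", "password"), timeout=10)
print(resp.status_code)
print(resp.headers.get("MS-ASProtocolVersions"))  # advertised ActiveSync versions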


Active Directory Driver: The Active Directory (AD) driver in Exchange server allows Exchange services to create, modify, delete, and query AD domain service data. The AD driver uses Exchange AD topology information to access directory services. If there is an issue with this service, it affects Exchange services and causes bottlenecks.


Client Access Role: To keep your email up and running, you should monitor the Client Access server role. The Client Access server accepts client requests and routes them to the appropriate mailbox database. Monitoring client role components such as Exchange POP3, IMAP, unified messaging, and so on will tell you whether end-user performance is being impacted.

 

Server & Application Monitor has various component monitors covering your entire Exchange environment. All you have to do is select the monitor that best addresses your pain area, assign it, and get notified whenever there is a warning or critical alert.

 

Check out the SAM templates & AppInsight for Exchange here.

Ho ho, what's this?

Posted by Bronx Mar 13, 2014

Yup, the new SolarWinds help system has finally been launched, thank God. You can check out SAM's new system here. So, what does this mean for you? Not much really, but there are a few things I'd like to bring to your attention.

 

Remember when Help had every product? (See below.) The tech writers here called that MegaHelp, and with good reason. It took hours just to build that monster. (I think the marketing folks back then concluded that if a user saw every product, they might want to buy something else.) That's like me looking for motorcycle instructions and stumbling across the "How to land a 747" page. Not so good. Now each product has its own separate Help system. See?

old.pngnew.png

 

Note the differences. A few books for each product rather than 50 books!

 

How did we make the transition?

The old-fashioned way...we worked harder, and a li'l smarter. In addition to our regular workload, each tech writer was responsible for making sure every link in their product's UI was inserted into the new system, in addition to the cross-references within each document. Naturally, there's no tool for doing this, and we were facing thousands of copy & paste operations, especially me! (SAM is rocking about 2,300 pages when all is said and done.)

 

Being a programmer, my first thought was to build a tool to automate this. The tool I built worked, for the most part. It extracted all the links and cross-references from our original documents and "pasted" them into the new ones. (When I say, "...for the most part," I mean there were some square peg/round hole moments that we all had to sort out.) The end result though was a much faster conversion with hands and fingers saved from not copying & pasting 20,000 times.
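The real tool stays in-house, but the core idea looks something like this toy Python sketch: walk the old help files, pull out every link and its text, and record which page each one belongs to so it can be re-inserted into the new system (the directory name is hypothetical).

import re
from pathlib import Path

LINK_RE = re.compile(r'<a\s+href="([^"]+)"[^>]*>(.*?)</a>', re.IGNORECASE | re.DOTALL)

links = {}
for page in Path("old_help").glob("**/*.htm*"):
    text = page.read_text(encoding="utf-8", errors="ignore")
    links[page.name] = LINK_RE.findall(text)  # list of (target, link text) pairs

total = sum(len(found) for found in links.values())
print(f"Extracted {total} links from {len(links)} pages")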

 

Is it perfect?

I think we did our best to make the transition as easy as possible for you, the user. That said, there may be broken links and/or delays; if there are, I apologize in advance. It took several months to convert our entire help system to a new platform, and working the conversion while adding new content simultaneously pushed all of the tech writers to the edge. If you spot a problem, please post it here and we'll fix it as soon as possible. If you don't experience any problems, great! That's our goal. We are, after all, Technical Writers - The Unsung Heroes.

 

What's in it for you?

First off, many fewer books to sift through. Excellent improvement IMHO. Next, we have a more robust search engine that will only search within your product's docs.

ps2.png

The New Books.

You would be surprised how much info is included just in SAM's AppInsight for Exchange section. Why not show you? Okay.

AIE.png

KB Articles

Another tip: Search our KB system. We already have a great many articles in place for AppInsight for Exchange. And look, I've already done the search for you!

SolarWinds Knowledge Base :: Search :: SolarWinds Knowledge Base :: AppInsight for Exchange

 

As you can see, there's a ton of information on everything AppInsight for Exchange. Why? Well Kate from SAM Tech Support, um, well, er, uh, her kung fu is stronger than my kung fu. What Kate wants, Kate gets. (She quite literally is a black belt in kung fu and I have the picture of her black eye to prove it.) I do whatever I can to make her life easier, so before you go calling support, take a peek at the brandy new help files. (Plus the darned thing just looks cooler.)

 

In the End.

I'd like to take this opportunity to thank all of the tech writers here at SolarWinds. We were under a great deal of stress creating new content while making this transition work. I know some even took it on the chin. Hope you thwackers take advantage of our hard work. My much needed vacation starts next week.

 

Goodnight Doc Torok, wherever you are!

DHCP services, be they from Microsoft®, Cisco®, or open source, save administrators from routine problems that can occur during manual IP address assignment. They offer services that aid in centralized management of all TCP/IP client configurations, including IP and gateway addresses. They also enable DNS records to be updated on the client, while DNS servers maintain the mapping of client names to IP addresses and vice versa.
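Those two mappings are easy to see from any client. A quick, hedged Python illustration (the host name is just an example, and the reverse lookup succeeds only if a PTR record exists):

import socket

hostname = "example.com"
ip = socket.gethostbyname(hostname)  # forward: name -> IP
print(f"{hostname} -> {ip}")
try:
    name, _aliases, _addrs = socket.gethostbyaddr(ip)  # reverse: IP -> name
    print(f"{ip} -> {name}")
except socket.herror:
    print(f"{ip} has no reverse (PTR) record")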

 

However, one of the real problems that administrators face with DHCP/DNS management is keeping track of multiple DHCP/DNS servers operating in the same network. Some common pain points in everyday DHCP/DNS administration are:

 

Blog-Image.jpg

Missing Link between DHCP/DNS Setups & Efficient IP Address Distribution

 

Network issues and user downtime often occur due to inefficient IP address management. To curb this problem, there needs to be a medium that can consolidate relevant IP address data and also provide options for DHCP/DNS administrative and configuration tasks – all within a single interface.

 

A recent survey on thwack® indicated the top IP address management capabilities that administrators deemed as “Must Haves” were:

•    Data to accurately provision for IP addresses

•    Automation of routine DHCP/DNS configuration tasks

•    Real-time monitoring of IP address usage and IP resource utilization

•    Reduced manual effort in managing and maintaining IP address documentation

 

What should you look for?

 

It’s high time that administrators consider their options for flexible IP address management solutions. Ideal solutions will not only automate, manage, and track your IP space, but also provide DHCP/DNS management within the same tool.

 

Specific features that a solution or tool should have in order to ease the hassles of DHCP/DNS administration are:

 

  • Unified management from a single interface, irrespective of the underlying vendor
  • A consolidated view, with manageability options, for both your DHCP scopes/subnets and DNS zones/records
  • Automatic syncing of changes with the associated servers, eliminating the need for manual updates
  • Support for open source DHCP/DNS services, with a friendly UI that reduces the use of complex CLI commands
  • Active alerts/notifications on IP conflicts, full subnets/scopes, DHCP/DNS server status, etc.

 

With increased IP address demands and complexity in networks, spreadsheets and other in-house developed tools fall short of meeting the administrative requirements of IP space management. Additionally, Bring Your Own Device (BYOD) and the Internet of Things (IoT) mean more devices and IP requests in the network, which in turn means more DHCP servers will be needed to handle IP address distribution. The choice of DHCP/DNS services depends on the organization, and most organizations tend to have a mix of vendors for their own reasons.

So, rather than making more investments in resources or appliances to manage your DHCP/DNS infrastructure, it would be sensible to invest in a solution that provides a common management console irrespective of the underlying vendor. Consolidate your multi-vendor DHCP, DNS server management to improve IP space administration while being able to easily scale without disrupting current management processes.

Privacy issues/regulations are the worst thing that has happened to corporate IT. This may be a Canadian thing, or related to working with unionized employees, but those are the two issues that have shaped my opinion anyway. I'm not sure how this plays out in other geos around the world, but it seems like we have bent over backwards for the employee to the point of craziness.

 

I've always been more about getting the job done than worrying about privacy in the workplace. I am of the opinion that if you don't want people knowing what you're doing, don't do it at the workplace. I've used at least three pieces of software that required permission from the user before starting a remote control session. Most times it ended up with me using the local administrative account to bypass it. After three or four times of missing the user at their desk, or the person who needs help going on "break," you just want to get the work done and move on.

 

From a virtual desktop perspective, I would always make sure I could use the vSphere console to mirror a remote session. By default, if you're using PCoIP, the vSphere console shows a black screen, and some more hoops have to be jumped through before you can help the user. To change this behavior, you can set the following registry key:

 

HKLM\SOFTWARE\VMware, Inc.\VMware SVGA DevTap\NoBlankOnAttach : DWORD: 1
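If you would rather script that than edit the registry by hand, a minimal sketch with Python's built-in winreg module (run on the virtual desktop as an administrator; the session may need to reconnect before the change takes effect) could look like this:

import winreg

# Create the key if missing and set NoBlankOnAttach to 1 (DWORD).
key_path = r"SOFTWARE\VMware, Inc.\VMware SVGA DevTap"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    winreg.SetValueEx(key, "NoBlankOnAttach", 0, winreg.REG_DWORD, 1)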

 

I never had to deal with too many corporate privacy policies, but I do know lots of people with a sense of entitlement. The reality is that the employer has a right to track anything you do with a corporate device.

 

Has privacy gotten in your way? Is privacy really strong in your workplace?

This post will serve as a ready reckoner for those who are looking to understand and implement Microsoft® Exchange Server® roles. Exchange Server 2007, 2010, and 2013 have different architecture models compared to the new 2016.

 

Server Roles in Exchange 2007 & 2010

There are five server roles in Exchange Server 2007 & 2010 as follows:

  1. Mailbox Server: Hosts the mailbox and public folder databases and also provides MAPI access to Outlook clients
  2. Client Access Server (CAS): Hosts the client protocols, such as POP3, IMAP4, HTTPS, Outlook Anywhere, Availability service, and Autodiscover service. CAS also hosts Web services.
  3. Hub Transport Server: Responsible for all email flow in the organization, internal routing, and policy enforcement
  4. Edge Transport Server: A special transport server intended for installation in DMZ networks to provide secure inbound/outbound email flow for the organization
  5. Unified Messaging Server: Provides VoIP capabilities to an Exchange Server in order to integrate e-mail, voicemail and incoming faxes as part of an inbox.

 

The Mailbox Role is generally installed along with the CAS, Hub Transport Server and Unified Messaging Server roles on a single server. The Edge Transport Server Role sits on the perimeter and is not part of AD.

 

Server Roles in Exchange 2013

Exchange Server 2013 consolidated some of the Exchange Server roles from 2007/2010. With the Hub Transport server and Unified Messaging server roles discontinued as dedicated roles in 2013, only the following server roles remain for installation:

  • Mailbox Server: This server role hosts both mailbox and public folder databases and also provides email message storage. The Mailbox server role has two transport services:
    • Hub Transport Service: Similar to the Exchange 2007/2010 Hub Transport server role, this service provides email routing within the organization, and connectivity between the Front End transport service and the Mailbox Transport service
    • Mailbox Transport Service: This service passes email messages between the Hub Transport service and the mailbox database

Mailbox servers can be added to a Database Availability Group (DAG), thereby forming a highly available unit that can be deployed in one or more datacenters. The DAG is the base component of the high availability and site resilience framework built into Microsoft Exchange Server 2013. A DAG is a group of up to 16 Mailbox servers that hosts a set of databases and provides automatic database-level recovery from failures that affect individual servers or databases.

  • Client Access Server: Exchange Server clients such as Outlook, Outlook Web App, and ActiveSync connect to the CAS for mailbox access. The CAS authenticates, then redirects or proxies those requests to the appropriate Mailbox server. The CAS has two main components:
    • Client Access Service: This handles the client connections to mailboxes
    • Front-end Transport Service: This handles all inbound and outbound external SMTP traffic for the Exchange organization, and can act as a client endpoint for SMTP traffic

The CAS role does not perform any data rendering functionality in 2013 and only provides authentication and proxy/redirection logic, supporting the client Internet protocols, transport, and Unified Messaging.

  • Edge Transport Server: Though this role was discontinued with Exchange Server 2013, SP1 for 2013 reintroduced it. Edge Transport servers minimize the attack surface by handling all Internet-facing mail flow, providing SMTP relay and smart host services for your Exchange organization, including connection filtering, attachment filtering, and address rewriting.

 

Server Roles in Exchange 2016

Exchange Server 2016 consolidated some of the Exchange Server roles from 2013. The Client Access Server role was consolidated with the Mailbox Server role, and the Edge Transport Server role is retained. The reason for consolidating the Client Access Server with the Mailbox Server is to reduce the number of servers and the cost of hardware: separate server roles can result in long-term cost disadvantages, as you may purchase more CPU, disk, and memory resources than you will actually use.

  • Mailbox Server: The Mailbox server role in Exchange Server 2016 is the only mandatory server role, and the consolidation reinforces the recommended practice since Exchange Server 2010 to deploy Exchange as a multi-role server instead of deploying individual roles to separate servers.
  • Edge Transport Server: This role will be much the same as Edge Transport in previous versions of Exchange, designed to sit in perimeter networks and provide secure inbound and outbound mail flow for the organization. Note: Edge Transport servers are not mandatory.

 

| Exchange Server 2007 Roles | Exchange Server 2010 Roles | Exchange Server 2013 Roles | Exchange Server 2016 Roles |
|---|---|---|---|
| Mailbox Server | Mailbox Server | Mailbox Server | Mailbox Server |
| Client Access Server | Client Access Server | Client Access Server | Consolidated with the Mailbox Server role |
| Hub Transport Server | Hub Transport Server | Functionality divided between the Client Access and Mailbox servers; no longer a dedicated server role | Consolidated with the Mailbox Server role |
| Unified Messaging Server | Unified Messaging Server | Functionality divided between the Client Access and Mailbox servers; no longer a dedicated server role | Consolidated with the Mailbox Server role |
| Edge Transport Server | Edge Transport Server | Not included at release (reintroduced in 2013 SP1) | Edge Transport Server |

 

Learn More:

Find out how to discover, configure and monitor Exchange Mailbox Server using SolarWinds Server & Application Monitor: http://thwack.solarwinds.com/community/solarwinds-community/product-blog/blog/2013/12/18/appinsight-for-exchange

 

References:

  1. Microsoft Exchange Server News - Tips - Tutorials
  2. Resources and Tools for IT Professionals | TechNet
  3. Exchange Server 2016 Architecture | TechNet

automate.png

Eric Wright (DiscoPosse) talked about this topic from a server/orchestration perspective in his February post, Server Build Processes, Monitoring, Orchestration and Server Lifecycle Management, but I wanted to circle back around and add networking into the mix.

There is a ton of information and a range of views on the topic of network automation, and in order to keep this post short and open, I'm only going to graze the surface. I'm also going to speak mostly to the data center space, but the topic pertains to the enterprise network as well.

 

Centralized Controllers and Network Automation

The big challenge with modern data centers is how to manage and deploy these massive infrastructures with their growing complexity. Imagine managing hundreds of switches and thousands of VLANs, possibly spanning multiple data centers and servicing hundreds of customers. Each customer needs their own space and must be isolated from everyone else. These customers also expect turn-up as fast as possible.

Network vendors have addressed these challenges and design shifts in multiple ways. Both Cisco and Juniper have shipped controller-based solutions where access and top-of-rack (ToR) switches are controlled and managed by a central controller or master switch. This design reduces the number of devices administrators need to manage and also simplifies the deployment of changes and adds/removes. Multiple vendors are also opening up their switches, allowing third-party software or controllers to manage them.

Controllers, to me, are the network industry dipping its toes into orchestration and automation. As I'm sure most of you know, there is a lot of activity right now in network automation.

Several open source projects are out leading the charge on network automation, such as OpenStack and OpenDaylight, and I'm sure there are others. Vendors are also making moves in this space, such as Cisco's ACI. You are also starting to see new companies pop up with some really cool products, such as Tail-f.

Everyone sees this topic differently, so I want to hear what you think. To me, automation will remove the redundant manual configuration of network equipment and speed up and simplify network deployments even further. It will also give us better control and allow us to quickly adapt and alter networks as traffic patterns change.
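For a taste of what that looks like in practice, here is a hedged sketch that pushes the same VLAN to a list of switches instead of logging into each one by hand. It assumes the third-party netmiko library; the hostnames and credentials are made up.

from netmiko import ConnectHandler

switches = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
vlan_config = ["vlan 42", "name tenant-blue"]

for host in switches:
    device = {
        "device_type": "cisco_ios",
        "host": host,
        "username": "admin",
        "password": "secret",
    }
    # One SSH session per switch; send_config_set enters config mode for us.
    with ConnectHandler(**device) as conn:
        output = conn.send_config_set(vlan_config)
        print(f"{host}:\n{output}")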

Anyone here been playing with these tools?

How do you see SDN and Network automation changing our day-to-day roles?

Deep packet inspection (DPI) is a technology used to capture network packets as they pass through routers and other network devices, and to perform packet filtering that examines the payload for deeper information about the data the packets carry.

 

Unlike stateful packet inspection (SPI, aka shallow packet inspection), which only looks at a packet's header and footer, DPI examines the header, footer, source and destination of incoming packets, and the data portion of the packet, searching for disallowed content and matches against pre-defined criteria, allowing you to decide whether to let the traffic pass through your network. DPI makes it possible to find, identify, classify, reroute, or block packets, and helps you determine, based on the content contained in the data packets, whether the traffic is secure, compliant, permitted, and genuinely required by the end-user/endpoint application.
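As a toy illustration of the difference, here is a hedged sketch using the third-party scapy library: it looks past the TCP header into the payload and flags packets whose content matches a pattern, which is exactly the part SPI never sees (capturing requires root/administrator privileges):

from scapy.all import sniff, IP, TCP, Raw

def inspect(packet):
    if packet.haslayer(IP) and packet.haslayer(TCP) and packet.haslayer(Raw):
        payload = bytes(packet[Raw].load)
        if b"password=" in payload:  # a content-based match, not a header match
            print(f"Flagged {packet[IP].src} -> {packet[IP].dst}")

sniff(filter="tcp port 80", prn=inspect, count=100)  # inspect 100 packets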

dpi 2.pngDPI.png

 

SOME MAJOR APPLICATIONS OF DPI

  • Deeper network traffic forensics to aid flow-based analysis
  • Application-aware network performance monitoring
  • Network-based application recognition
  • Network traffic regulation and control
  • Network security (to identify viruses, spam, and intrusions)

 

DIFFERENCE BETWEEN FLOW ANALYSIS & DPI

Flow-based network traffic analysis allows you to intercept the network traffic flow as it passes through flow-enabled network devices (routers and switches). Flow analysis provides comprehensive data to validate quality of service, type of service and class of service of the network packet, its source and destination IP address, etc.

 

DPI performs deeper packet filtering and forensics and examines every byte of every packet that passes through the DPI probe. DPI has the ability to inspect traffic at layers 2 through 7 which allows you to get detailed information of what content (not just the type of content, but the content itself) is passing through your network.

 

DPI BROKERS THE MARRIAGE OF NPM & APM

While network performance monitoring (NPM) and application performance monitoring (APM) have been important to monitor the health of the IT infrastructure, they always operated in two different silos – network side and systems side. The challenge had always been to get insight into how applications are being impacted as you study the network performance.

 

With DPI, you will be able to analyze the packets in detail and determine if they contain any insecure or unrelated content that could affect the performance of the end-user application receiving the packet. Alongside application availability and performance (via APM), you can also monitor the root cause of potential application issues using information contained in network packets. This is called APPLICATION-AWARE NETWORK PERFORMANCE MONITORING (AA-NPM).

 

DPI enables AA-NPM by allowing you to get deeper metrics such as network response time (NRT) and application response time (ART).

  • NRT tells you how long network devices took to respond when they received a packet transmission request.
  • ART tells you how long the application receiving the packets took to respond.

 

Both these metrics cover the network-side and application-side of the issue and clearly shed light on where the issue exists.

 

DPI is an important aspect of network monitoring and we’ll soon start seeing DPI probes being offered along with network monitoring and application monitoring tools for deeper packet forensics and network security.

Quality of Service (QoS) is used in enterprise networks to ensure that business-critical applications have the required priority and are not bogged down by non-business traffic when passing through the enterprise WAN link or even when traversing the Internet.

 

Cisco devices support a QoS model where packets can be treated with priority, even by intermediate systems, depending on their DSCP value. Based on a packet's DSCP value, the traffic is put into a specific service class, and traffic conditioning functions such as marking, shaping, and policing are applied to it. To ensure priority for preferred packets even after they leave the network, the DSCP markings are applied to the outbound traffic at the edge.
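To see what a DSCP marking is from an application's point of view, here is a small, hedged Python illustration. DSCP occupies the top six bits of the IP TOS byte, so EF (DSCP 46) is written as 46 << 2 = 0xB8; the destination address is only an example, and routers along the path may still re-mark the packet.

import socket

EF = 46 << 2  # DSCP 46 (EF) shifted into the TOS byte = 0xB8
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)
sock.sendto(b"probe", ("192.0.2.10", 9999))  # marked EF as it leaves this host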

 

Take a traffic conversation moving from the LAN to the WAN with the default priority:

 

 

| Source IP | Source Interface | Destination IP | Destination Interface | Port / Protocol | DSCP Value |
|---|---|---|---|---|---|
| 192.168.1.10 | FastEthernet 0/1 | 74.125.224.68 | Serial 1/1 | 2654 TCP | Default |

 

To achieve service delivery when this conversation moves over the WAN, a DSCP-based QoS policy that changes the packet's DSCP marking from 'default' to the high-priority 'EF' is applied outbound on the serial interface.

 

 

Most enterprises use NetFlow for traffic analytics because NetFlow can provide details without being resource intensive on the device or on the bandwidth. When enabling NetFlow on a Cisco device, the options available are Ingress NetFlow or Egress NetFlow, and a majority of network admins use Ingress NetFlow. With Ingress NetFlow, the IN traffic across an interface is captured. Because NetFlow data also has information about the interface through which the IP conversation exited the device, the same conversation can be attributed as the OUT traffic for the exit interface. So all NetFlow reporting tools can construct the OUT traffic for an interface from the information captured by Ingress NetFlow.

 

For the TCP conversation we discussed, Ingress NetFlow captures IN traffic at the Fa 0/1 interface where no QoS policy was applied and the DSCP marking was “default”. This conversation exits the router through Serial 1/1 and so the same conversation is attributed as the OUT traffic for Serial 1/1.

 

And that is the downside. Since the traffic was captured by NetFlow on the inbound of Fa 0/1, where there was no QoS policy, the conversation was captured while its DSCP marking was still 'default'. When the same conversation is attributed as the outbound of Serial 1/1, it will still be shown with a 'default' DSCP marking, though in reality the packets were altered to an 'EF' marking as they exited Se 1/1. This is the behavior with any NetFlow reporting tool.

 

Then there is Egress NetFlow. Egress NetFlow captures the OUT traffic from an interface and from this OUT traffic, the IN traffic for the entry interface is constructed.

 

 

| Source IP | Source Interface | Destination IP | Destination Interface | Port / Protocol | DSCP Value |
|---|---|---|---|---|---|
| 192.168.1.10 | FastEthernet 0/1 | 74.125.224.68 | Serial 1/1 | 2654 TCP | EF |

 

In our example, Egress NetFlow captures traffic when it exits the Serial 1/1 interface but with the correct outbound DSCP marking of EF. This way, your NetFlow reporting tool can report on IP conversations with the modified DSCP marking rather than the pre-QoS policy DSCP marking.

 

There are other advantages to Egress NetFlow too. For example, where you use WAN compression, Egress NetFlow captures traffic after the compression rather than at the original level. This way, you see the actual volume of traffic that exited your device, not pre-compression traffic volumes.

 

To apply Egress NetFlow on your interfaces, use the command “ip flow egress” (traditional NetFlow) or “ip flow monitor monitor_name output” (Flexible NetFlow) and that should get you ready for traffic capture with the correct DSCP values. And if you have not yet used NetFlow, try it with a network traffic monitor to monitor traffic as well as to validate your QoS policy performance.

 

 

30 Day Full Feature Trial | Live Product Demo | Product Overview Video | Twitter

As your network evolves dynamically, routing protocols such as BGP and OSPF must continually recompute the optimal traffic path based on the state of network devices and routing configuration. Problems such as hardware failures and misconfigurations can be increasingly difficult for network administrators to manage. Issues like loss of connectivity or a slow network can be avoided if admins can find root causes like route flapping, configuration errors, or missing routes in the routing table before they affect users.

 

Manual troubleshooting can be tough!

Experienced network professionals will vouch that the best way to troubleshoot a network routing issue is to use the OSI model and go layer by layer. You can refer to this detailed whitepaper to read more about the difference between manually troubleshooting routing issues and automating route monitoring. More importantly, you need a tool with the intelligence to analyze and monitor network routes and changes to those routes.

 

How-to troubleshoot network routing issues

Do you want your network to function properly? Then routing should be a key focus of efficient network management. It is very important that you find the root cause of a routing issue and bring your network back on track.

Some of the common routing issues faced by network admins are:

  • Rule-based routing configuration errors
  • Disparity in routing neighbor parameters
  • Routing loops
  • Irregular routing and rogue network technologies

The following SlideShare will help you understand network route troubleshooting in seven easy steps.


As you'll see, the resources available in a network management system will provide you with all the routing details. With a quick glance, you will know everything about your routing data. If you combine real-time network routing information with network performance data and statistics, you can easily monitor all the changes in default routes, routing tables, BGP transitions, and flapping routes. This will make your network troubleshooting much easier than manual processes.


View More

Whitepaper – Network Route Monitoring

Video – Monitoring Network Routes using NMS


What Is Latency?

Posted by sqlrockstar Employee Mar 4, 2014

I brought my daughter to an event last weekend. At one point during my presentation she looked up at a word on the screen and asked:

 

"What is latency?"

 

And she did it with a smug look on her face...the look that said she was challenging me to explain something in a simple enough way for her to understand.

 

So, I answered her...

 

"Latency is a measure of how long something takes. For example, when we call you down for breakfast and it takes you ten minutes, that's a measure of your latency."

As the attendees started laughing, the look on her face went from smug to slightly annoyed.

I think I won that round.

Welcome to the SolarWinds blog series 'Basics of Routing Protocols'. This is the first of a four-part series where you can learn more about the fundamentals of routing protocols, their types, and their everyday applications in network troubleshooting.

 

What is a Routing Protocol?

Routing protocols are used to determine the optimal path for data communication between nodes in a network. Routers use them to share routing information with other routers and to build global routing tables dynamically. Routing protocols are implemented when your network grows to the point where static routes are unmanageable. Dynamic routing table management adjusts automatically to topology and traffic changes.

Routed protocols, on the other hand, carry the data required for a packet to be sent outside its host network. They are the traffic that routers direct from source to destination. IP is the dominant example; application traffic such as HTTP, SSH, and SIP rides on top of it.

 

Different types of Routing Protocols

There are two major classes of routing protocols: Exterior Gateway Protocols (EGP) and Interior Gateway Protocols (IGP). An EGP is a routing protocol used to exchange routing information between autonomous systems. For instance, an EGP is used for data transfer between ISPs (Internet Service Providers), or between an organization's autonomous system and its ISP.

IGP (Interior Gateway Protocol) is used for exchanging routing information between routers within an autonomous system. For instance, IGP can be used in data transfer within your organization’s local area network. IGP can be further classified into two categories – Distance Vector and Link State Routing Protocols.

 

Distance-Vector vs. Link-State Routing Protocols

In distance-vector routing protocols, routers periodically inform neighboring routers about network topology changes. In link-state routing protocols, by contrast, each router builds a map of how routers are connected in the network; by calculating the best path from itself to every possible destination, each router forms its routing table.

RIP (Routing Information Protocol), RIPv2, IGRP (Interior Gateway Routing Protocol), EIGRP (Enhanced IGRP) are part of Distance-Vector Routing Protocols. OSPF (Open Shortest Path First), IS-IS (Intermediate System to Intermediate System) are part of Link-State Routing Protocols.
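Here is a compact, illustrative Python sketch of the link-state idea: given a shared map of link costs, each router runs a shortest-path computation (Dijkstra's algorithm) from itself. The topology and costs below are made up.

import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbor: cost}}; returns {node: cost from source}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

topology = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3}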

 

How to find the best routing protocol for your network?

Distance-vector routing protocols like RIP and EIGRP are ideal for small networks that are simple and non-hierarchical. Enterprises use link-state routing protocols like OSPF and IS-IS for their large, hierarchical networks, while distributed networks use BGP to establish routing information between autonomous systems. Network administrators using OSPF, for instance, need advanced knowledge of complex networks, which helps them troubleshoot routing-related issues.

Additionally, network admins choose routing protocols based on convergence time: the time taken for all the routers to acquire the current topological state of the network. If you have three routers in the network and one of the links connecting them fails, information about that change should quickly propagate to all the routers through the process of convergence. The slower protocols converge, the harder they become for network admins to troubleshoot. Easing network route management, configuration, and troubleshooting is important when admins manage large enterprise networks.

 

How can you monitor and troubleshoot route related network performance issues?

Advanced network monitoring tools have the ability to monitor network route information and provide real-time views on issues that might affect the network. Also, you can now view router topology, routing tables, changes in default routes, BGP transitions, and flapping routes.

 

You can learn more about popular routing protocols like RIP, OSPF and EIGRP in the blog series.

I don't deny that great remote support tools are needed. In my IT career, the best I've been able to muster most times is RDP and the joy of combing through the event logs for server support. On the desktop side of the house, I've usually had the pleasure of some outdated MS product to work with because it was "free." This is really sad, since I've worked in healthcare and in oil & gas with healthy budgets. I am not sure why or how it became acceptable to have MacGyver-style support tools or procedures when we would have just spent a healthy part of the budget on a product we now have to support. I guess most times it has to do with new projects being sexy and exciting while support is not, so it gets relegated to the background. How does one go about changing the paradigm?

 

I think the first step is to figure out how much time you're spending on support. Chances are it's a lot more than you think. Studies find 70% - 80% of time in IT is spent keeping the lights on instead of moving the needle. Even if you speed up support calls by 5%, it could represent a fairly good chunk of money.

 

BYOD is another chance to reinvent the support equation. Most support products are built for the land behind the corporate firewall, and having both support and the end user connect to a VPN is problematic. A support tool that can integrate with the business applications, traditional or SaaS, and connect directly to the user would be huge.

 

Some other things that would be core to helping out support would be:

 

  1. Event logging/correlation tools that present current service levels on a web page, to prevent multiple phone calls from hitting the helpdesk
  2. If the end user is submitting a ticket online, a downloadable tool that automatically runs self-diagnostics and submits the results with the ticket; part of this would be a network assessment
  3. Making departments fight for the after-hours service level agreements
  4. Location-based printing, so if the first printer is down, the job redirects to the next closest one
  5. Arming support with the same tools that the clients have; seems basic, but I've seen places where the executives get Macs and the support team is on Windows, which just makes life a little harder

 

Is support the ugly duckling in your organization? If you could help support, what would you do first? What tool would you buy to help the cause?

Yes, we made it to RSA! Our product experts went to the RSA Conference held this week at the Moscone Center, San Francisco. Our Head Geeks Patrick Hubbard and Lawrence Garvin were at the booth (#2507, South Expo) talking to security practitioners from across the world. We had a great show at RSA. People stormed our booth and were excited to talk to us. There were a lot of happy customers singing the praises of the SolarWinds products they were already using and showing enthusiasm for the security products we were featuring at RSA. We had a diverse audience visiting our booth, including security admins, security analysts, network admins, IT admins, CISOs, and CEOs, from a wide range of industry sectors and geographies.

  

IT Security Solutions from SolarWinds

We featured an array of our IT security software at RSA and gave demos to tons of people. Some of our security solutions featured in RSA were:

  • SolarWinds Log & Event Manager (LEM) – full-functioned SIEM software for centralized log management, threat detection, automated remediation, and compliance
  • SolarWinds Firewall Security Manager (FSM) – multi-vendor firewall security management software for firewall rule analysis & clean-up, change management & impact analysis, and security auditing
  • SolarWinds Patch Manager – centralized and automated patch management, software asset inventory, patch vulnerability assessment and patch reporting
  • SolarWinds Serv-U Managed File Transfer Server – managed file transfer and secure file sharing software
  • SolarWinds Network Configuration Manager (NCM) – network configuration, change, and compliance management software, which was selected as the winner of the SC Magazine Award in the Best Risk/Policy Management Solution category. This was the icing on the cake at RSA, as we got the news while we were featuring NCM at the conference.
  • SolarWinds User Device Tracker (UDT) – automated network device tracking, switch port mapping, and monitoring rogue device connections

 

It was a successful show for us overall, reinforcing our footing in the IT security and SIEM technology landscape, where we offer a range of unbelievably affordable and effective security products to IT teams far and wide.

 

One happy customer said that SolarWinds products are like the tires of a car: so vital and essential that the network and IT infrastructure cannot work the way one wants unless one has SolarWinds.

 

I was there as one of SolarWinds’ Beast Masters (booth staff) @ RSA 2014, and I loved the show!

 

PS: If you visited our booth at RSA, you are now the proud owner of our "Manage the IT Beast" t-shirt, which was all the rage at the conference; we had people queuing up to grab one.

 

Manage the IT Beast.png

Hey Everyone!

My name is Ryan Booth, and I've been given the opportunity to post as an Ambassador focusing on the Network Management section of Thwack. I'm excited to bounce my thoughts and ideas off everyone, and I look forward to the conversations.

 

The overall theme for my posts will be the growing complexity of networks and how to still keep things simple (aka KISS).

 

Everyone knows K.I.S.S. as "Keep It Simple, Stupid," and it's something I push with every change, design, and project. Networks are getting increasingly complex, with everything from converged storage/data networking to automated workload migrations in virtualized environments. BYOD, mobile devices, and virtual desktop infrastructure (VDI) are also making the enterprise environment super crazy.

 

So my question is: How do you balance increasing complexity in your network while still maintaining simplicity?

 

Here are several open-ended questions to get the conversation going:

  • What makes more sense, five 9’s and High Availability (HA) or being able to quickly recover?
    • HA = complexity. Do you really need three, four or five 9’s?
  • Do you design a perfect fit each time or stay consistent across the network?
    • Ex. Deploy latest switch/server model or stick with the same model used in every closet?

 

A little about me:

 

I've been in the IT game for about 10 years now, with the majority of my experience focused on routing and switching (8+ years). I currently hold three CCNP certifications (R&S, SP, and Design) and am working towards my CCIE. I also have experience with servers, both Windows and Linux, along with some virtualization experience, but my passion is routing and switching.

 

I blog at my own site blog.movingonesandzeros.net and can be found on Twitter @That1guy_15. You can also find me on various forums and hangouts under That1Guy15.

So enough about me, let’s get this ball rolling!
