
Geek Speak



The DevOps Days - Washington, DC event is right around the corner and it turns out I happen to be sitting on 3 tickets. You know what that means!

 

As I did for the Columbus DevOps Days, I'm going to be giving these babies away 100% free to the first people who post a selfie.

 

A selfie that is worthy of its own $55,000 Kickstarter. A selfie that shows your creativity, humor, patriotism, and flair.

 

That's right, I want a

SUPER MEGA GEEKY WARHEAD MEMORIAL DAY SELFIE

 

Now, I'm not here to tell you what that means (heck, I just made up that phrase 10 seconds ago). Even if you don't celebrate Memorial Day (looking at you, jbiggley), if you can get your geeky cheeks down to DC next week, those tickets could be yours.

 

All I'm saying is that if you are able to attend the DevOps Days event in DC on June 8 & 9, and you post a selfie in the comments below, then one of those golden tickets is yours yours YOURS!!

 

So, have a safe, responsible, geeky, fun, relaxing, non-stressful Memorial Day (or as you call it in other countries, "Monday") and start selfie-ing!

An all-flash storage array (AFA) provides two major benefits for the data center. First, an AFA delivers capacity efficiency with consistent performance and a reduced storage footprint. Second, an AFA usually includes a software overlay that abstracts storage hardware functions into software. Think software-defined storage. These features include deduplication (the elimination of duplicate copies of data), data compression, and thin provisioning.
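To make the capacity-efficiency point concrete, here is a minimal sketch of how deduplication, compression, and thin provisioning multiply usable capacity. The ratios below are illustrative assumptions, not vendor numbers:

```python
# Illustrative only: the ratios below are assumptions, not measurements.

def effective_capacity_tb(raw_tb: float, dedupe_ratio: float, compression_ratio: float) -> float:
    """Logical capacity an AFA can present after data reduction."""
    return raw_tb * dedupe_ratio * compression_ratio

def thin_provisioned_tb(effective_tb: float, oversubscription: float) -> float:
    """Capacity you can allocate up front when volumes are thin provisioned."""
    return effective_tb * oversubscription

raw = 50.0                      # raw flash in the array (TB)
eff = effective_capacity_tb(raw, dedupe_ratio=2.0, compression_ratio=1.5)
alloc = thin_provisioned_tb(eff, oversubscription=2.0)
print(f"{raw} TB raw -> ~{eff:.0f} TB effective -> ~{alloc:.0f} TB allocatable (thin)")
```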

 

These two qualities combine to form a dynamic duo of awesomeness for infrastructure teams looking to optimize their applications and maximize the utility of their storage arrays. All-flash storage also improves CPU utilization on the hosts: each host can drive more I/Os per second (IOPS), which reduces the number of host servers needed to service the same I/O load.

 

So all that glitters is gold, right? Not so fast. The figure below shows how AFA affects the data ecosystem. In the past, traditional storage performance was measured in terms of the number of IOPS, and the most influential variable was the number of spindles. With an AFA, spindle count doesn't matter, so performance centers on average latency, and that latency is influenced by the number of applications piled onto the array. This means the bottleneck moves from spindle count to hitting the storage capacity limit, as well as to running hot in the other subsystems of the overall application stack.

trad vs afa.png
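One hedged, back-of-the-envelope way to see that shift (the drive and latency numbers below are assumptions, not benchmarks): with spinning disk, array IOPS scale with spindle count; with flash, sustainable IOPS follow from outstanding I/O and average latency (Little's Law), so piling on applications shows up as latency first.

```python
# Assumed, illustrative numbers -- not benchmarks.

def spindle_iops(avg_seek_ms: float, rotational_latency_ms: float, spindles: int) -> float:
    """Traditional model: per-disk IOPS ~ 1 / (seek + rotational latency), times spindle count."""
    per_disk = 1000.0 / (avg_seek_ms + rotational_latency_ms)
    return per_disk * spindles

def flash_iops(outstanding_ios: int, avg_latency_ms: float) -> float:
    """AFA model (Little's Law): throughput ~ concurrency / latency."""
    return outstanding_ios / (avg_latency_ms / 1000.0)

# 24 x 15k-RPM drives (~3.5 ms seek, ~2 ms rotational latency)
print(f"Spinning array: ~{spindle_iops(3.5, 2.0, 24):,.0f} IOPS")

# AFA with 64 outstanding I/Os at 0.5 ms average latency
print(f"All-flash:      ~{flash_iops(64, 0.5):,.0f} IOPS")
```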

Are you considering AFA in your data center environment? Have you already implemented AFA? What issues have you run into, if any? Let me know in the comments below.

 

And don't forget to join SolarWinds and Pure Storage as we examine AFA beyond the IOPs to highlight performance essentials and uptime during a live webcast on June 8th at 2PM EST.

sqlrockstar

The Actuator - May 25th

Posted by sqlrockstar Employee May 25, 2016

This week's edition comes to you on the day of my 19th wedding anniversary, which apparently is the bronze anniversary, so I'm going to arrange for my wife and me to visit the Basketball Hall of Fame and look at the various bronzed pairs of sneakers therein. OK, I'm kidding. We are actually going to spend our anniversary at the middle school band concert, which is as romantic as it sounds.

 

Anyway, here is this week's list of things I find amusing from around the internet...

 

Google Patented a Sticky Car Hood That Traps Pedestrians Like Flies

Seriously. Human flypaper. Every car will be tinted yellow during pollen season. The future is dumb.

 

For Microsoft, Its Achilles’ Heel Is Excel

Excel is the dirty secret that powers millions of businesses worldwide every day. Achilles' heel? Sure. But Microsoft keeps pushing it forward with things like Power BI. Face it, Excel runs the world, and Microsoft runs Excel. Neither is going away anytime soon. Personally, I'm looking forward to seeing Excel on Linux just to see the look on adatole's face.

 

Ditch the data dump

At first I thought they were talking about backups, but the article is about analytics and how business end users need data (lots of it, fast and accurate) more than ever before. This is a trend that is not going away, and if you ignore your business end users' need for data analytics tools, you will find yourself with a growing shadow IT problem.

 

Fast.com Shows You How Much Your ISP is Screwing You

I think I'd like this site better if it wasn't backed by Netflix, who has an axe (or two) to grind with ISPs. If only there was a tool that would help you get similar details...

 

D.C.’s Metro Catches Fire More Than Four Times A Week

That's a bad thing, right? I have to admit I am a bit surprised this article didn't end with a #ThanksObama.

 

The Changing American Diet

Nice visualization of how Americans have changed their diets over the past 30+ years. Sadly, bacon isn't at the top of the list.

 

The Electric Car Revolution Is Finally Starting

I'm filing this under "But this time we REALLY mean it!"

 

Spring is finally here, which means we can get started on training the next generation of systems administrators:

 

FullSizeRender.jpg

dreamstime_xl_3788403.jpg

 

Despite my protestations to the contrary, and my sincere belief that everything was better when we all used command lines, time marches on, and my shouting-at-kids-to-get-off-my-lawn routine just isn't carrying much weight anymore. We humans are visual creatures, and visualizing the good and the bad helps us make sense of our environment. It is one reason among many why video conferencing has taken off over the last few years; we get so much more out of a conversation when we can see the other person's facial expressions along with hearing their voice.

 

So it goes in the world of network troubleshooting as well. For years we stared at text data streaming across our screens, whether from real-time monitoring applications (home grown or otherwise) or the output of tools we ran manually, like ICMP echoes, path traces, etc. But we were limited in what we could see in the proverbial code. We looked at the matrix, but never experienced it.

 

Eventually we started to take what we already had, our formerly static network maps, and automate them somewhat to show real-time data. Now our NOC operators, or anyone else who regularly looked at the network, could “see” problems as they were happening. And if a picture is worth a thousand words, then a network map is worth a hell of a lot of static textual printouts. But even though we could now see trouble spots in the network, and bathe the boss in the radiating confidence from a big green button, we still had our feet firmly planted in the old ways.

 

Half-in, half-out might be the best way to describe where we had gotten to by the time our green buttons and network maps became firmly lodged in the standard operating practices of NOC operators everywhere. We had graphical maps with flashing buttons, but all the real meat still came in the form of text boxes filled with data. Even more hampering was the fact that a lot of our visibility came from legacy tools that had not kept up with the ever-changing reality of our networks: networks that used to be somewhat flat, open, and largely not prone to asymmetrical routing or very complex application stacks. And god knows the cloud was not even a buzzword at that point, let alone a concept with real solutions behind it.

 

Fast-forward to today, where we do have the application stacks, mass virtualization, the cloud, containers, virtualized networks, and a roughly eighty-percent chance of asymmetrical routing both inside and outside of our networks. It is pretty clear that the legacy tools we have been using are just not cut out for today's reality. What is really needed is a new set of tools, a new way of looking at our networks and our traffic patterns, which takes into account all of the challenges we face in modern networks. We need something that can keep up with the speed and agility that our networks, our response times, and the business demand.

 

Our tool sets going forward are certainly going to have to cope with a much more complex and ever-changing landscape than at any prior time in our history. We'll need to be able not just to monitor network paths and look for latency that may or may not be artificial, but also to look much more accurately into the performance of our businesses' applications quickly, visually, and in a way that allows us to home in on a particular area or device in our networks. A tool to solve all of these complex problems—to help those of us in the network world look like heroes to our bosses—would be a tool in high demand in the marketplace. One step closer to hearing and seeing no evil, and to not having to speak it as well. Of course, we will still give our boss that shiny green button.

If you've ever read the “Adventures of Sherlock Holmes” by Sir Arthur Conan Doyle, you're probably familiar with some of the plot contrivances. They usually entail a highly complex scheme that involves different machinations, takes twists and turns, and requires the skills of none other than The World's Greatest Detective to solve.

 

Today's government networks are a bit like a Holmes story. They involve many moving parts, sometimes comprising new and old elements working together. And they are the central nervous system of any IT application or data center infrastructure environment – on-premises, hosted, or in the cloud.

 

That's why it's so important for IT pros to be able to quickly identify and resolve problems. But the very complexity of these networks can often make that task a significant challenge.

 

When that challenge arises, it requires skills of a Sherlockian nature to unravel the diabolical mystery surrounding the issue. And, as we know, there's only one Sherlock Holmes, just as there's only one person with the skills to uncover where the network problems lie.

 

That would be you, my dear federal IT professional.

 

Your job has changed significantly over the past couple of years. Yes, you still have to "keep the lights on," as it were, but now you have even greater responsibilities. You've become a more integral, strategic member of your agency, and your skills have become even more highly valued. You're in charge of the network, the foundation for just about everything that takes place within your organization.

 

To keep things flowing, you need to get a handle on everything taking place within your network, and the best way is through a holistic network monitoring approach.

 

Holistic network monitoring requires that all components of the network puzzle – including response time, availability, performance and devices -- are analyzed and accounted for. These days, it also means taking into consideration the many applications that are tied together across wireless, LAN, WAN, and cloud networks, not to mention the resources (such as databases, servers, virtualization, storage) they use to function properly.

 

Network monitoring and performance optimization solutions help solve the mystery entwined within this diabolical complexity. They can help you identify and pinpoint issues before they become real issues – security threats, such as detection of malware and rogue devices, but also productivity threats, including hiccups that can cause outages and downtime.

 

And, let's not forget a key perpetrator to poor application performance: network latency. Network monitoring tools allow you to automatically and continuously monitor packets, application traffic, response times and more. Further, they provide you with the ability to respond quickly to potential issues, and the ability to do this is absolutely critical.

 

As Sherlock said in “A Study in Scarlet,” "there is nothing like first-hand evidence." Network monitoring solutions provide just that – first-hand evidence of issues as they arise, wherever they may take place within the network. As such, implementing a holistic approach to network management can make solving even the biggest IT mysteries elementary.

 

Find the full article on Defense Systems.

Patrick Hubbard

DevOpsDays Daze

Posted by Patrick Hubbard Administrator May 23, 2016

I attended DevOpsDays last week, and have had time to get my head around what’s going on with SolarWinds customers. And this being thwack (my safe place), I want to brag on you all a bit. Something amazing is happening, and you, the members of thwack, are at the center of it. Sharp IT admins, from very small companies all the way to our largest multinational corporate users, are actually making the move to DevOps.

DevOpsDaysTalk.jpg

 

You’re not doing it because someone like me told you it can help. You’re doing it because it’s helping you get a handle on production and the crazy complexity you didn’t ask for, and it's reducing breakage. It’s letting you test (scratch that: QA) changes before you make them. For some of you, it’s even enabling the holy grail of IT we all dream about: business-hours production changes (without the need for maintenance windows) using continuous delivery processes.

 

Speaking to you on the phone, or meeting with you at tradeshows, gives me hints that the movement is growing. But DevOpsDays Austin gave me a chance to experience a full immersion class in what you’re actually doing in the field. First, and this seems to be the same everywhere, more than half of the attendees were current customers or former users working to bring SolarWinds to their new gig. And while that was a little surprising at the cloud mothership show, (AWS Re:Invent), DevOpsDays engineers didn’t materialize out of thin air into Linux and Scrum. They got the DevOps religion while keeping the lights blinking on all the same gorp that everyone else deals with.

 

Tuesday I gave a presentation about using our SWIS API to turn Orion into an IT automation platform, which is not something we normally talk about from the stage. SolarWinds’ primary and eternal internal design requirement is to be easy to use, right out of the box. It’s a secret – okay, an open secret – that  Orion is also hugely customizable, and you regularly stun us with how powerfully you’re integrating it into your operations. I concluded my remarks by inviting customers interested in integrating SolarWinds into their DevOps processes to visit our table, and – wow – did you take me up on it.

 

DevOpsDaysConnie.jpg

You often start with inventory management-driven discovery and automated monitoring, followed by network config change automation using NCM. The third step seems to be split between integration with your helpdesk systems, and sophisticated alert suppression and report customization based on custom properties updated via the API. Most amazing is how many of you are doing this integration and customization solely from thwack threads and a little bit of Tim Danner magic.
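For readers curious what that API-driven customization can look like, here is a minimal, hedged sketch using the open-source orionsdk Python client. The server name, credentials, and the "Environment" custom property are placeholders I've invented; adapt them to your own Orion install:

```python
import requests
from orionsdk import SwisClient

requests.packages.urllib3.disable_warnings()  # lab-only: Orion often uses a self-signed cert

# Placeholder connection details -- replace with your own Orion server and credentials.
swis = SwisClient("orion.example.local", "admin", "password")

# Query nodes via SWQL, the SolarWinds Information Service query language.
results = swis.query(
    "SELECT TOP 5 NodeID, Caption, Uri FROM Orion.Nodes WHERE Vendor = @vendor",
    vendor="Cisco",
)

# Update a custom property on each node, e.g., tagging environment for alert scoping.
# 'Environment' is an example custom property; it must already exist in your Orion install.
for node in results["results"]:
    swis.update(node["Uri"] + "/CustomProperties", Environment="Production")
    print(f"Tagged {node['Caption']} (NodeID {node['NodeID']})")
```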

 

My takeaway is to ask the community if this is something we should talk about more. Should we surface more of the amazing customizations you advanced users are doing, or will that confuse new users who, like you once did, are starting out of the box and just want to get going quickly?  Let me know what you think. Keep your comments coming in the SolarWinds Lab live chat, or say hello at a live event like SWUG, Cisco Live, VMworld, or Microsoft Ignite. You are advancing the future of IT by embracing your internal programmers. Let me know how I can help.

IT is changing at an accelerating rate with plenty of IT jobs at stake. And yet, doing a job may not be enough in this IT-as-a-Service paradigm that hybrid IT is ushering in. With IT jobs evolving and forking into multiple paths, deciding which path to take and when becomes integral to continuing one's prosperous IT career. Too bad there's not a tool like SolarWinds NPM 12 with NetPath for the IT career path, because it would be cool to visualize one's IT career on a hop-by-hop basis.

760x410_netpath_screen_closeup.png               

 

 

The IT career path is one of the hottest topics that I regularly discuss with my industry friends and peers, when we aren't talking about current IT trends. This brings up an important task that many IT professionals often overlook, which is building a competent, trusted network of techie friends, peers, colleagues, and resources. This is one of my golden rules that has served me incredibly well in my career. I make a concerted effort to continually do what I can to earn -- and return -- mutual trust.

 

Throughout my career, I have formed and leveraged a network of trusted IT advisors who have helped me progress along my career path. From the Office of the CTO, working on virtualization, to Global Solutions Engineering, working on the first instances of converged infrastructure, every opportunity was presented and taken largely thanks to my trusted network of IT advisors. This extends to my time at a cloud start-up, and even my decision to accept the role of SolarWinds Head Geek. My circle of trusted advisors continues to play a major role in my life and my career, especially with so many opportunities presenting themselves in this hybrid IT world.

 

So I ask you again: do you have a network of trusted IT advisors helping you advance your career? If not, what are you waiting for? Let me know below in the comment section.

convention2.png

As I mentioned a while ago, I've returned to the world of the convention circuit after a decades-long hiatus. As such, I find I'm able to approach events like Cisco Live and Interop with eyes that are both experienced ("I installed Slack from 5.25 floppies, kid. Your Linux distro isn't going to blow my socks off,") and new (“The last time I was in Las Vegas, The Luxor was the hot new property”).

 

This means I'm still coming up to speed on tricks of the trade show circuit. Last week I talked about the technology and ideas I learned. But here are some of the things I learned while attending Interop 2016. Feel free to add your own lessons in the comments below.

 

  • A lot of shows hand you a backpack when you register. While this bag will probably not replace your $40 ThinkGeek Bag of Holding, it is sufficient to carry around a day's worth of snacks, plus the swag you pick up at vendor booths. But some shows don’t offer a bag. After a 20-minute walk from my hotel to the conference, I discovered Interop was the latter kind.
    LESSON: Bring your own bag. Even if you're wrong, you'll have a bag to carry your bag in.
  • What happens in Vegas – especially when it comes to your money – is intended to stay in Vegas. I'm not saying don't have a good time (within the limits of the law and your own moral compass), but remember that everything about Las Vegas is designed to separate you from your hard-earned cash. This is where your hard-won IT pro skepticism can be your superpower. Be smart about your spending. Take Uber instead of cabs. Bulk up on the conference-provided lunch, etc.
    LESSON: As one Uber driver told me, "IT guys come to Vegas with one shirt and a $20 bill and don't change either one all week."
  • Stay hydrated. Between the elevation, the desert, the air conditioning, and the back-to-back schedule, it's easy to forget your basic I/O subroutines. This can lead to headaches, burnout, and fatigue that you don't otherwise need to suffer.
    LESSON: Make sure your bag (see above) always has a bottle of water in it, and take advantage of every break in your schedule to keep it topped off.
  • Be flexible, Part 1. No, I'm not talking about the 8am Yoga & SDN Session. I mean that things happen. Sessions are overbooked, speakers cancel at the last minute, or a topic just isn't as engaging as you thought it would be.
    LESSON: Make sure every scheduled block on your calendar has a Plan B option that will allow you to switch quickly with minimal churn.
  • Be flexible, Part 2. As I said, things happen. While it's easy in hindsight (and sometimes in real time) to see the mistake, planning one of these events is a herculean task with thousands of moving parts (you being one of them). Remember that the convention organizers are truly doing their best. Of course, you should let staff know about any issues you are having, and be clear, direct, and honest. But griping, bullying, or making your frustration ABUNDANTLY CLEAR is likely not going to help the organizers regroup and find a solution.
    LESSON: Instead of complaining, offer suggestions. In fact, offer to help! That could be as simple as saying, "I see your room is full. If you let me in, I'll Periscope it from the back and people in the hall can watch remotely." They might not take you up on your offer, but your suggestion could give them the idea to run a live video feed to a room next door. (True story.)
  • VPN or bust. I used to be able to say, "You’re going to a tech conference and some savvy person might..." That's no longer the case. Now it is, "You are leaving your home/office network. Anybody could..." You want to make sure you are being smart about your technology.
    LESSON: Make sure every connected device uses a VPN 100% of the time. Keep track of your devices. Don't turn on radios (Bluetooth, Wi-Fi, etc.) that you don't need and/or can't protect.
  • Don't bail. You are already in the room, in a comfortable seat, ready to take notes. Just because every other sentence isn't a tweetable gem, or because you feel a little out of your depth (or above it), doesn't mean the session will have nothing to offer. Your best interaction may come from a question you (or one of the other attendees) ask, or a side conversation you strike up with people in your area.
    LESSON: Sticking out a session is almost always a better choice than bailing early.
  • Tune in. Many of us get caught up in the social media frenzy surrounding the conference and have the urge to tweet out every idea as it occurs to us. Resist that urge. Take notes now – maybe even with pen and paper – and tweet later. A thoughtfully crafted post on social media later is worth 10 half-baked live tweets now.
    LESSON: You aren't working for the Daily Planet. You don't have to scoop the competition.
  • Pre-game. No, I'm not talking about the after-party. I mean make sure you are ready for each session before it starts. Have your note-taking system (whether that's paper and pen, Evernote, or email) preloaded with the session title, the speaker's name, and related info (Twitter handle, etc.), and even a list of potential going-in questions (if you have them). It will save you from scrambling to capture things as they slide off the screen later.
    LESSON: Ten minutes prepping the night before is worth the carpal tunnel you avoid the following day.
  • Yes, you have time for a survey. After a session, you may receive either an electronic or hard copy survey. Trust me, you aren't too busy to fill it out. Without this feedback, organizers and speakers have no way of improving and providing you with a better experience next time.
    LESSON: Take a minute, be thoughtful, be honest, and remember to thank people for their effort, in addition to offering constructive criticism.

 

Do you have any words of advice for future conference attendees? Do you take issue with anything I’ve said above? I’d love to hear your thoughts! Leave a note in the comments below and let’s talk about it!

Tom Hollingsworth

GPS For Your Network

Posted by Tom Hollingsworth May 18, 2016

I remember a dark time in my life when I didn't know where I was going. I scrambled to find direction but I couldn't understand the way forward. It was like I was lost. Then, that magic moment came. I found the path to my destination. All thanks to GPS.

It's hard to imagine the time before we had satellite navigation systems and very accurate maps that could pinpoint our location. We've come to rely on GPS and the apps that use it quite a bit to find out where we need to go. Gone are the huge road atlases. Replacing them are smartphones and GPS receivers that are worlds better than the paper maps of yesteryear.

But even GPS has limitations. It can tell you where you are and where you need to be. It can even tell you the best way to get there based on algorithms that find the fastest route. But what if that fastest route isn't so fast any more? Things like road construction and traffic conditions can make the eight-lane super highway slower than a one-lane country road. GPS is infinitely more useful when it is updated with fresh information about the best route to a destination for a given point in time.

Let's use GPS as a metaphor for your network. You likely have very accurate maps of traffic flows inside your network. You can tell which path traffic is going to take at a given time. You can even plan for failure of a primary link. But how do you know that something like this occurred? Can you tell at a moment's notice that something isn't right and you need to take action? Can you figure it out before your users come calling to find out why everything is running slow?
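To make the metaphor concrete, here's a minimal sketch (the topology and latency figures are invented for illustration) of how the "fastest route" changes when fresh latency data arrives, using plain Dijkstra over per-link latencies:

```python
import heapq

def fastest_path(graph, src, dst):
    """Dijkstra over per-link latency (ms); returns (total_latency, path)."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        latency, node, path = heapq.heappop(queue)
        if node == dst:
            return latency, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_ms in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (latency + link_ms, neighbor, path + [neighbor]))
    return float("inf"), []

# Invented topology: latencies in milliseconds.
graph = {
    "branch": {"mpls": 5, "vpn": 20},
    "mpls": {"datacenter": 5},
    "vpn": {"datacenter": 15},
}
print(fastest_path(graph, "branch", "datacenter"))   # MPLS path wins: (10.0, [...])

# Fresh measurements show the MPLS link is congested ("road construction").
graph["branch"]["mpls"] = 80
print(fastest_path(graph, "branch", "datacenter"))   # now the VPN path wins: (35.0, [...])
```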

How about the traffic conditions outside your local or data center network? What happens when the links to your branch offices are running suboptimally? Would you know what to say to your provider to get that link running again? Could you draw a bullseye on a map to say this particular node is the problem? That's the kind of information that service providers will bend over backwards to get from you to help meet their SLAs.

This is the kind of solution that we need. We need visibility into the network and how it's behaving instantly. We need to know where the issues are before they become real problems. We need to know how to keep things running smoothly for everyone so every trip down the network is as pleasant as an afternoon trip down the highway.

If you read through this entire post nodding your head and wanting a solution just like this, stay tuned. My GPS tells me your destination is right around the corner.

sqlrockstar

The Actuator - May 18th

Posted by sqlrockstar Employee May 18, 2016

I am in Redmond this week to take part in the SQL Server® 2016 Reviewer's Workshop. Microsoft® gathers a handful of folks into a room and we review details of the upcoming release of SQL Server (available June 1st). I'm fortunate to be on the list so I make a point of attending when asked. I'll have more details to share later, but for now let's focus on the things I find amusing from around the internet...

 

How do you dispose of three Petabytes of disk?

And I thought having to do a few of these for friends and family was a pain; I can't imagine having to destroy this many disks. BTW, this might be a good time to remind everyone that data can never be created or destroyed, but it most certainly can be lost or stolen.

 

The Top Five Reasons Your Application Will Fail

Not a bad list, but the author forgot to list "crappy code, pushed out in a hurry, because agile is an excuse to be sloppy". No, I'm not bitter.

 

Audit: IT problems with TSA airport screening equipment persist

"The TSA's lack of server updates and poor oversight caused a plethora of IT security problems". Fortunately no one has any idea how many problems are in a plethora. Also? I know a company that makes tools to help fix such issues.

 

AWS Discovery Service Aims To Ease Legacy Migration Pain

Something tells me this tool is going to cause more pain when companies start to see just how much work needs to be done to migrate anything.

 

Bill Gates’ open letter

Wonderful article on how much the software industry has changed over the past 40 years. It will keep changing, too. I see the Cloud as a way for the software industry to change their licensing model from feature driven (Enterprise, Standard, etc.) to one driven by scalability and performance.

 

How to Reuse Waste Heat from Data Centers Intelligently

While this might sound good to someone, the reality is the majority of companies in the world do not have the luxury of building a data center from scratch, or even renovating existing ones. Still, it's interesting to understand just how much electricity data centers consume, and understand that the power has to come from somewhere.

 

How Much Does the Xbox One’s “Energy Saving” Mode Really Save?

Since we're talking about power usage, here's a nice example to help us understand how much extra it costs us to keep our Xbox always on. If it seems cheap to you then you'll understand how the cost of a data center may seem cheap to a company.

 

This week marks the fifth anniversary of my seeing the final launch of Endeavour, so I wanted to share something related to STS-134:

LaRockLaunch.jpg

 

Lastly, if you've been enjoying The Actuator please like, share, and/or comment. Thanks!

In the past few years, there has been a lot of conversation around the "hypervisor becoming a commodity." It has been said that the underlying virtualization engines, whether ESXi, Hyper-V, KVM, or something else, are essentially insignificant, stressing the importance of the management and automation tools that sit on top of them.

 

These statements do hold some truth: in its most basic form, the hypervisor simply runs a virtual machine. As long as end users get the performance they need, there's nothing else to worry about. In truth, the three major hypervisors on the market today (ESXi, Hyper-V, KVM) do this, and they do it well, so I can see how the "hypervisor becoming a commodity" argument works in these cases. But SysAdmins, the people managing everything behind the VM, don't buy the commoditized hypervisor theory quite so easily.

 

When we think about the word commodity in terms of IT, it's usually defined as a product or service that is indistinguishable from its competitors, except perhaps on price. With that said, if hypervisors were a commodity, we shouldn't care which hypervisor our applications run on. We should see no difference between VMs sitting inside an ESXi cluster or a Hyper-V cluster. In fact, to be a commodity, these VMs should be able to migrate between hypervisors. The fact is that VMs today are not interchangeable between hypervisors, at least not without changing their underlying anatomy. While it is possible to migrate between hypervisors, there is a process we have to follow, covering configurations, disks, and so on. The files that make up a VM are proprietary to the hypervisor they run on and cannot simply be migrated and run by another hypervisor in their native forms.
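As one hedged illustration of that "underlying anatomy" problem: even just the disk file typically has to be converted between formats (VMDK for ESXi, VHDX for Hyper-V, qcow2/raw for KVM), and that still leaves the VM's configuration to rebuild on the target. The sketch below shells out to the qemu-img tool; it assumes qemu-img is installed, and the paths are invented placeholders:

```python
import subprocess

# Placeholder paths -- illustration only. Converting the disk is just one step;
# the VM definition (vCPUs, NICs, firmware) still has to be recreated on the target hypervisor.
src_vmdk = "/exports/app01/app01-flat.vmdk"          # ESXi-format disk
dst_qcow2 = "/var/lib/libvirt/images/app01.qcow2"    # KVM-native format

subprocess.run(
    ["qemu-img", "convert", "-p", "-f", "vmdk", "-O", "qcow2", src_vmdk, dst_qcow2],
    check=True,
)
```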

 

Also, we stressed earlier the importance of the management tools that sit above the hypervisor, and how the hypervisor doesn't matter as much as the management tools do. This is partly true. The management and automation tools we put in place are the heart of our virtual infrastructures, but the problem is that these management tools often create a divide in the features they support on different hypervisors. Take, for instance, a storage array providing support for VVols, VMware's answer to per-VM, policy-based storage provisioning. This is a standard that allows us to completely change the way we deploy storage, eliminating LUNs and making VMs and their disks first-class citizens on their storage arrays. That said, these are storage arrays connected to ESXi hosts, not Hyper-V hosts. Another example, this time in favor of Microsoft, is in the hybrid cloud space. With Azure Stack coming down the pipe, organizations will be able to easily deploy and deliver services from their own data centers, but with Azure-like agility. The comparable VMware solution, involving vCloud Air and vCloud Connector, is simply not at the same level as Azure when it comes to simplicity, in my opinion. They are two very different feature sets that are only available on their respective hypervisors.

 

So with all that, is the hypervisor a commodity? My take: no! While all the major hypervisors on the market today do one thing (virtualize x86 instructions and provide abstraction to the VMs running on top of them), there are simply too many discrepancies between the compatible third-party tools, features, and products that manage these hypervisors for me to call them commoditized. So I'll leave you with a few questions. Do you think the hypervisor is a commodity? When, or if, the hypervisor fully becomes a commodity, what do you foresee our virtual environments looking like? Single or multi-hypervisor? Looking forward to your comments.

The other day we were discussing the fine points of running an IT organization and the influence of People, Process, and Technology on systems management and administration, and someone brought up one of their experiences. Management was frustrated at how it would take days to get snapshots from their storage and virtualization platform, and was looking to replace the storage platform to solve the problem. Since this was clearly a technology problem, they sought out a solution that would tackle it and address the technology needs of the organization! Chances are one or more of us have been in this situation before, and they did the proper thing: they looked at the solutions. Vendors were brought in, solutions were spec'd, technical requirements were established, and features were vetted. Every vendor was given the hard-and-fast requirement of "must be able to take snapshots in seconds and present them to the operating system in a writable fashion." Once all of the options were reviewed, confirmed, demo'd, and validated, they had chosen a solid solution!

 

Months followed as they migrated off of their existing storage platform onto the new one; the light at the end of the tunnel was there, and the panacea to all of their problems was in sight! Finally, they were done. The old storage system was decommissioned and the new storage system was put in place. Management patted themselves on the back and went about dealing with their next project. First and foremost on that list was standing up a new dev environment based off of their production SAP data. This being a pretty reasonable request, they followed their standard protocol to get it stood up, with snapshots taken and presented. Several days later, the snapshot was presented as requested to the SAP team so they could stand up the dev landscape. And management was up in arms!

 

What exactly went wrong here? Clearly a technology problem had existed for the organization, and a technology solution was delivered against those requirements. Yet had they taken a step back for a moment and looked at the problem for its cause rather than its symptoms, they would have noticed that their internal SLAs and processes were really what was at fault, not the choice of technology. Don't get me wrong: some problems truly are technology problems, and a new technology can solve them, but to say that is the answer to every problem would be untrue, and some issues need to be looked at in the big picture. The true cause of their problem (the original storage platform COULD have met the requirements) was that their ticketing process required multiple sign-offs for Change Advisory Board management, approval, and authorization, and the SLA given to the storage team allowed a 48-hour response time. In this particular scenario the storage admins were actually pretty excited to present the snapshot, so instead of waiting until the 48th hour to deliver, they provided it within seconds of the ticket making it into their queue.

 

Does this story sound familiar to you or your organization? Feel free to share some of your own personal experiences where one aspect of People, Process, or Technology was blamed for a lack of agility in an organization, and how you (hopefully) were able to overcome it. I'll do my best to share some other examples, stories, and morals over the coming weeks!

 

I look forward to hearing your stories!

It was all about the network

 

In the past, when we thought about IT, we primarily thought about the network. When we couldn’t get email or access the Internet, we’d blame the network. We would talk about network complexity and look at influencers such as the number of devices, the number of routes data could take, or the available bandwidth.

 

As a result of this thinking, a myriad of monitoring tools was developed to help network engineers keep an eye on the availability and performance of their networks; these tools provided basic network monitoring.

 

It’s now all about the service

 

Today, federal agencies cannot function without their IT systems being operational. It’s about providing critical services that will improve productivity, efficiency, and accuracy in decision making and mission execution. IT needs to ensure the performance and delivery of the application or service, and understand the application delivery chain.

 

Advanced monitoring tools for servers, storage, databases, applications, and virtualization are widely available to help diagnose and troubleshoot the performance of these services, but one fact remains: the delivery of these services relies on the performance and availability of the network. And without these critical IT services, the agency’s mission is at risk.

 

Essential monitoring for today’s complex IT infrastructure

 

Users expect to be able to connect anywhere and from anything. Add to that, IT needs to manage legacy physical servers, new virtual servers, and cloud infrastructure as well as cloud-based applications and services, and it is easy to see why basic monitoring simply isn’t enough. This growing complexity requires advanced monitoring capabilities that every IT organization should invest in.

 

Application-aware network performance monitoring provides visibility into how network performance affects the performance of applications and services by tapping into the data provided by deep packet inspection and analysis.

 

With proactive capacity forecasting, alerting, and reporting, IT pros can easily plan for future needs, making sure that forecasting is based on dynamic baselines and actual usage instead of guesses.
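As a hedged illustration of forecasting from actual usage rather than guesses, the sketch below fits a simple linear trend to recent utilization samples and estimates when a link crosses a capacity threshold. This is not how any particular product computes dynamic baselines; the sample data and the 80% threshold are invented:

```python
# Minimal sketch: linear-trend capacity forecast from observed utilization.
# The sample data and 80% threshold are invented for illustration.

def forecast_days_to_threshold(daily_utilization_pct, threshold_pct=80.0):
    """Least-squares linear fit; returns days until the trend crosses the threshold."""
    n = len(daily_utilization_pct)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_utilization_pct) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_utilization_pct)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # flat or declining trend: no projected crossing
    return (threshold_pct - intercept) / slope - (n - 1)

# 14 days of WAN link utilization (%), trending upward.
samples = [52, 53, 55, 54, 57, 58, 60, 61, 63, 62, 65, 66, 68, 69]
days = forecast_days_to_threshold(samples)
print(f"Projected to hit 80% in about {days:.0f} days")
```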

 

Intelligent topology-aware alerts with downstream alert suppression will dramatically reduce the noise and accelerate troubleshooting.
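Here is a hedged sketch of the idea behind downstream suppression (not any vendor's implementation): given a dependency map of which device sits behind which, an alert for a device whose upstream parent is already down is noise and can be suppressed. The topology and device names are invented:

```python
# Minimal sketch of topology-aware (downstream) alert suppression.
# The dependency map and down-device list are invented for illustration.

parents = {            # child -> upstream parent
    "core-sw1": None,
    "dist-sw1": "core-sw1",
    "access-sw7": "dist-sw1",
    "ap-3rd-floor": "access-sw7",
}

def upstream_is_down(device, down_devices):
    """Walk up the dependency chain; True if any ancestor is already down."""
    node = parents.get(device)
    while node is not None:
        if node in down_devices:
            return True
        node = parents.get(node)
    return False

down = {"dist-sw1", "access-sw7", "ap-3rd-floor"}
for device in down:
    if upstream_is_down(device, down):
        print(f"SUPPRESS  {device} (upstream failure)")
    else:
        print(f"ALERT     {device} (root cause candidate)")
```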

 

Dynamic real-time maps provide a visual representation of a network with performance metrics and link utilization. And with the prevalence of wireless networks, adding wireless network heat maps is an absolute must to understand wireless coverage and ensure that employees can reach critical information wherever they are.

 

Current and detailed information about the network’s availability and performance should be a top priority for IT pros across the government. However, federal IT pros and the networks that they manage are responsible for delivering services and data that ensure that critical missions around the world are successful and that services are available to all citizens whenever they need them. This is no small task. Each network monitoring technique I discussed provides a wealth of data that federal IT pros can use to detect, diagnose, and resolve network performance problems and outages before they impact missions and services that are vital to the country.

 

Find the full article on our partner DLT’s blog, TechnicallySpeaking.

The increasing rate and expanding footprint of change in applications are causing a lot of consternation within IT organizations. It's no coincidence, either, since everything revolves around the application, which is innovation personified. It's the revenue-generating, value-added differentiation, and it's potentially an industry game changer. Think Uber, Facebook, Netflix, Airbnb, Amazon, and Alibaba.

 

Accordingly, the rate and scale of change are products of the application lifecycle. For instance, applications deployed in a virtualization stack will live for months or years, while applications deployed in a cloud stack will live for hours or weeks. Applications deployed in containers or with microservices will live for microseconds or milliseconds.

AppLifeCycle.png

From my Interop 2016 DART Framework presentation.

 

For IT professionals, it's good to know where job security is. As such, I've been keeping monthly tabs on the number of jobs with the keywords virtualization, cloud, or (containers AND microservices) on dice.com. In the past year, since June 2015, the number of jobs with the keyword "virtualization" has remained flat at around 2,600 openings. In that same time frame, the number of cloud jobs has increased by over 30% to 8,900 openings, while the number of container/microservices jobs has more than doubled, to almost 600 openings.

 

These trends reaffirm the hybrid IT paradigm and the need for IT organizations to deal efficiently and effectively with change in their application ecosystems. Let me know what you think in the comment section below.

The vast majority of my customers are highly virtualized, and quite possibly using Amazon or Azure in a shadow IT kind of approach. Some groups within the organization have deployed workloads into these large public provider spaces, simply because those groups need to gain access to resources and deploy them as rapidly as possible.

 

Certainly, development and testing groups have been building systems and destroying them as testing moves forward toward production. But marketing and other groups may also find that the IT team is less than agile in providing these services on a timely basis. Thus, a credit card is swiped, and development occurs. The first indication that these things are taking place is when the bills come.

 

Often, the best solution is a shared environment in which certain workloads are deployed into AWS, Azure, or even SoftLayer, and others into peer data centers as shared but less public workloads, providing ideal circumstances for the organization.

 

Certainly these services are quite valuable to organizations. But are they secure, or do they potentially expose the company to data vulnerabilities and/or an entrée into the corporate network? Are there compliance issues? How about the costs? If your organization could provide these services in a way that would satisfy the user community, would that be a more efficient, cost-effective, compliant, and consistent platform?

 

These are really significant questions, and the answers are rarely simple. Today there are applications, such as CloudGenera, that will analyze a new workload and advise the analyst as to whether any of these issues are significant. They will also advise on current cost models to prove out the costs over time. Having that knowledge prior to deployment could be the difference between agility and vulnerability.
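As a purely illustrative sketch of "proving out the costs over time" (this is not CloudGenera's methodology, and every number below is an invented assumption), here's the kind of cumulative comparison such an analysis boils down to:

```python
# Invented, illustrative numbers only -- replace with your own quotes and amortization.

def cumulative_cost(upfront, monthly, months):
    return [upfront + monthly * m for m in range(1, months + 1)]

months = 36
on_prem = cumulative_cost(upfront=60000, monthly=1200, months=months)   # hardware + ops
public_cloud = cumulative_cost(upfront=0, monthly=2900, months=months)  # instance + storage + egress

crossover = next((m + 1 for m in range(months) if on_prem[m] < public_cloud[m]), None)
print(f"36-month total: on-prem ${on_prem[-1]:,}, cloud ${public_cloud[-1]:,}")
print(f"On-prem becomes cheaper at month {crossover}" if crossover else "Cloud stays cheaper")
```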

 

Another issue to be addressed when opening your environment up to a hybrid or public workload is the learning curve of adopting a new paradigm within your IT group. This can be daunting. To address these kinds of shifts in approach, a new world of public ecosystem partners has emerged. These tools create workload deployment methodologies that bridge the gap with your internal virtual environment and ease, or even facilitate, the transition. Platform9, for example, provides what is essentially a software tool that lets the administrator decide, from within a Platform9 panel in vCenter, where to deploy a workload. Deploying the tool is as simple as downloading an OVF and deploying it into your vCenter. Platform9 leverages the VMware APIs and the AWS APIs to integrate seamlessly into both worlds. Simple, elegant, and the learning curve is minimal.

 

There are other avenues to be addressed, of course. For example, what about latencies to the community? Are there storage latencies? Network latencies? How about security concerns?

 

Analytics against these workloads, as well as those within your virtual environment, is no longer a nice-to-have, but a must-have.

 

Lately, I’ve become particularly enthralled with the sheer level of log detail provided by Splunk. There are many SIEM (Security Information and Event Management) tools out there, but in my experience, no other tool is as functional as Splunk. To be sure, other tools, like SolarWinds, provide this level of analytics as well, and do so with aplomb. Splunk, as a data collector, is unparalleled, but beyond that, it gives you the ability to tailor your dashboards to show trends, analytics, and pertinent data across all of that volume of data in a functional, at-a-glance way. The tool can stretch itself across all your workloads, security, thresholds, and so on, and present it so that a monitor panel or dashboard shows you quite simply where your issues and anomalies lie.
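To give a hedged, vendor-neutral taste of that "at-a-glance anomalies" idea (this is not Splunk's query language, just a minimal sketch over invented syslog-style lines), here's the kind of per-host outlier check a SIEM dashboard panel boils down to:

```python
# Minimal, vendor-neutral sketch: surface hosts whose failed-login count is an outlier.
# The log lines are invented; a real SIEM ingests these at scale and charts them.
import statistics
from collections import Counter

log_lines = [
    "web01 sshd[210]: Failed password for invalid user admin",
    "web01 nginx: GET /index.html 200",
    "web02 nginx: GET /login 200",
    "db01 sshd[88]: Failed password for root",
    "db01 sshd[89]: Failed password for root",
    "db01 sshd[90]: Failed password for root",
]

failures = Counter(line.split()[0] for line in log_lines if "Failed password" in line)
hosts = {line.split()[0] for line in log_lines}
counts = [failures.get(h, 0) for h in hosts]
mean, stdev = statistics.mean(counts), statistics.pstdev(counts) or 1.0

for host, count in failures.items():
    if count > mean + stdev:  # simple one-sigma rule, just for the demo
        print(f"ANOMALY: {host} has {count} failed logins (mean {mean:.1f})")
```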

 

There is a large OpenSource community of SIEM software as well. Tools such as OSSIM, Snort, OpenVAS and BackTrack are all viable options, but remember, as OpenSource, they rarely provide the robust dashboards that SolarWinds or Splunk do. They will, as OpenSource, cost far less, but may require much more hand-holding, and support will likely be far less functional.

 

When I was starting out in the pre-sales world, we began talking of the Journey to the Cloud. It became a trope.  We’re still on that journey. The thing is, the ecosystem that surrounds the public cloud is becoming as robust as the ecosystem that exists surrounding standard, on-prem workloads.
