
Whiteboard

10 Posts authored by: kvanzant

[Ed. NOTE:  Today's Whiteboard post comes from Sanjay Castelino, SolarWinds' VP of product management and product marketing.]

Some time ago, Kenny VanZant wrote a series of posts on the Consumerization of IT, and lately I’ve been thinking quite a bit about one aspect of this trend – being customer-driven.  What do I mean by being customer-driven, and aren’t most successful IT management vendors customer-driven already?

Well, the short answer is that customer-driven companies view the market through the lens of a deep understanding of the problems and challenges faced by the majority of their customers.  Of course this sounds obvious, and you’d expect that any company would want to understand and solve for the 95% rather than the 5%. That said, it probably doesn’t surprise you to learn that many companies don’t (or can’t) do this. Having managed products and product marketing for a while now, I realize that there are some very clever ways of faking being customer-driven…


Do what your biggest customers ask you to do. 
This is likely the most common of the approaches employed by many of the large IT vendors -- do what their biggest customers want, with the assumption that what the biggest customers want will ‘trickle down’ to everyone else.  Many of these companies truly believe they’re customer-driven because they’re listening to customers, and they are bolstered by analyst firms who also spend all of their time with the biggest of the Fortune 500. This is great for the biggest customers, but typically your biggest customers have the most niche needs, so if you’re one of the rest of the customers, you inevitably get little value out of this strategy.

Copy the market leader.  Another approach, more difficult to spot, is employed by the smaller point-solution guys trying to get a foothold and promising to do everything that you need for less – essentially a “copy-cat” strategy.  And while it may look customer-centric, it really isn’t, and it could come back to bite you. This strategy normally comes home to roost when you actually have a new problem to solve – these vendors don’t know how to deal with it. What’s wrong with the approach, you might ask?  Well, you can reverse engineer anyone’s product, but without the benefit of the customer conversations and background that went into its development, it’s hard to know what’s really important to the customer.  Recently, I saw one vendor introduce a plug-in that had a better and cheaper open-source alternative.  The vendor wants their customers to pay thousands of dollars for something that is available for $100!  A customer-driven company would just point their customers to the best solution to their problem (and it doesn’t hurt that it’s cheaper).  I don’t know about you, but I firmly believe you can’t just check the feature list – you have to understand the problems customers are trying to solve and build a product that best solves those problems AND meets the affordability criteria.


Gaze deeply into the Crystal Ball. Finally, there are the new management vendors solving some new problem that no one else solves, whether it’s cloud computing or virtualization.  These companies speculate about what problems their future customers might run into and then build technology to solve those problems. But none of these technologies develops in a vacuum… they are interdependent with other aspects of the IT infrastructure.  When you fail to consider the ecosystem and try to game the market by projecting more than 18 months out, you can miss critical challenges, or you begin solving problems that can be addressed more effectively by integration with existing solutions.  Of course, if you’ve ever looked at these products, you find that maybe 1 in 10 is truly useful – not companies you want to take a big bet on.


If you have read this far, the picture is not all that encouraging… so, how do you tell the real thing?   You start with the customers – how active is the community around the vendor you’re considering?


Community involvement means that customers care and are invested in the product.  For any business that’s been around a while, customers only stay involved because they feel that they’re being heard.  If you can find the most dedicated customers – the MVPs – they’ll be able to tell you how the company has worked out for them in the good times and the bad, when they really needed help.  There are many more indicators, but I believe community health is what’s going to give you the clearest picture of whether you’re working with a customer-driven company or not.


Customer-driven companies are the ones that ultimately benefit you the most, and if you’re thinking that your vendor is running one of the above plays, maybe it’s time to switch?

I travel a lot.  This week, it's with the family for Spring Break.  Last week, it was a trip to NYC.  Every time I pack my bags to hit the road, I can't help thinking about the unfulfilled promise of free wifi.

Does anyone else remember the days when free wifi was going to be available everywhere folks gathered? Cities across the country were promoting plans to roll out downtown networks. McD*nalds was the first of dozens of retailers who would make it free for customers to use wifi in their stores. Hotel lobbies and airports were right behind. Everyone was going to be connected to 10Mbps (or better) wireless internet goodness all the time. I even bought one of those cool “wifi detectors” knowing that if I found it, it would probably be free & easy to connect.

OK – so it wasn’t cool…whatever.

The point is that somewhere it all went horribly wrong – and I’m not entirely sure why. Cities pulled the plug, most hotels charge for it, 1 out of 5 airports gives it away, and you can almost guarantee that the larger the retailer, the more likely their wifi isn’t free (without some other string attached, at least).

I know one thing that happened: service providers – T-m*bile, Wayp*rt, Boing*, and others – sprang up to monetize the early market by wholesaling access networks to companies that didn’t want to foot the bill to build one of their own. They had a “pay for access” model, and a revenue share for the company, to boot. Those of us who were traveling for work, and needed to connect at any price, made it acceptable to pay $10 or $15 for an hour of access (we’re just going to expense it anyway, right?) – and this model seems to have stuck. Have companies and their traveling workforce enabled this model to survive, and kept these services from being free everywhere?

It seems like today, we are in a weird place with public wifi. It’s kind of available if you are willing to look for it – intentionally free some places, accidentally free some places, and (mostly) for a fee at some others. I intentionally choose to do business at the places where it’s free.

That's just me trying to do my part to make that wireless network future a reality.

Ahh, the predictably irrational nature of technology hype cycles.  We fall in love with a concept, bludgeon each other & the IT community to death with it, then wait – panting from our marketing sprint – for the sounds of the coming revolution.  Invariably, we are greeted not by a roar, but by a slower, steadily-building drumbeat.  Often, the reality eventually becomes deafening, but usually not until the marketers, pundits, and vendors have moved on to the next shiny thing. 

Today, it’s all “cloud” all the time.  The hyperbole over the impact of cloud computing has reached a fever pitch.  I recently fielded a question from an analyst as to what would happen “when all employees worked from home or coffee shops using public Internet access to reach all their cloud-based business apps.”  My answer was, “when that happens, forget cloud companies…go long on Starbucks.”  Seriously, people – can we have some reality mixed with our excitement?

Thankfully, this topic has lately adopted some common terminology that allows two people to have a reasonably coherent conversation – using Software, Platforms, or Infrastructure as a Service (SaaS, PaaS, and IaaS, respectively) to delineate what in the world one is talking about.  It doesn’t take much to see the success of SaaS solutions in the market.  While we don’t have any pure SaaS offerings in our product portfolio yet, we are huge consumers of SaaS ourselves.  Sales, HR, tech support, legal, marketing operations, product management, and other key business functions all utilize SaaS tools instead of home-grown or packaged offerings.  This model of buying software, like the open-source model or the packaged software model, is here to stay.

It also doesn’t take much squinting to see the huge potential in IaaS solutions.  Reducing the hurdle to creating new products and applications, expanding server capacity, lowering operating costs, etc. are all clear and real benefits.  It’s only a matter of time before this is commonplace, even if, as I expect, there will be a massive over-build of capacity and the obvious market correction (think fiber networks of the early 2000s…).

Platform as a Service offerings might actually be the most intriguing to me at the moment.  SaaS products expand markets to customers previously not able to be reached by traditional software solutions.  IaaS lowers the cost of computing to the point of enabling new players and products to emerge.  But PaaS could enable entirely new markets and business models, and create more disruption than either SaaS or IaaS.  Think of the explosion of quality blogs and bloggers, and the disruption that has had on the traditional, profit-minded, media companies – enabled by point-and-click blog platforms (available as a service), and a rich array of pluggable blogging widgets.  Substitute “application components” for “blog widgets”, and “PaaS platforms” for “blog platforms”, and you get the point.  Now, turbo-boost the entire analogy with the fact that bloggers, by and large, had no real profit motive.  Folks using PaaS in the future are going to be companies who are “solely” profit-motivated, trying to disrupt existing solutions and markets.  I can’t wait to see what cool stuff comes out of that wave, however distant it might be…

In any event, I am fully on board with the fact that cloud-based computing will become a very large and vibrant piece of the technology and IT landscape.  I don’t think it will arrive as quickly as some think or want – but the drums are definitely beating.

Today, we announced the acquisition of the assets of Tek-Tools, a leader in the Storage Resource Management (SRM) market.  Obviously, this signals our belief that the same approach we have used in the network management space will also work in another area of IT infrastructure.  Namely, to provide powerful, affordable, easy-to-use solutions to the IT pros who deal with the daily pain of keeping the business going.  Just like the network management space, the storage management market has existing solutions from vendors not named SolarWinds, but many of them are expensive, hard to use, and not as multi-vendor friendly as they should be.  We think we can make that picture better for thousands of storage management teams worldwide, and do it quickly.  

But the part that gets us the most excited about Tek-Tools is the growing convergence of storage with the network, and with virtualized computing infrastructures.  We've heard loud and clear that it's not good enough just to look at one or two pieces of the picture in these virtualized environments.  You need the entire picture of performance.  Look at Cisco's UCS platform - that's multiple types of infrastructure in a single system.  Who wants to use different tools to manage the performance of that?  Now, I know that's just one example, and it's certainly not a mainstream solution yet - but it's an example of where virtualization will take IT environments.  Technology interdependence, higher management complexity, IT silos broken down.

We see that future, and hear the customer pain points, and we believe this acquisition gives us the technology to respond.  Let us know what you think!

For what seems like 20 years, IT industry analysts, pundits, vendors, and CIOs have been advertising the coming “unification of IT” – breaking down the silos of traditional enterprise IT infrastructure and teams: systems, apps, networks, and storage. Despite all this talk and all the supposed agents of change, from processes (see ITIL), to centralized technology (see CMDBs), to philosophies (see “Business-aligned IT”), nothing’s really driven a truck through those silo walls. But virtualization may finally change that.

As virtualization moves from the test lab to the data-center or server room of almost every sized business, IT teams can’t help but first decry -- then deal with -- the fact that lines of responsibility, visibility, and control are blurred by this technology. Servers can move from one host to another, from one network port or device to another, taking their traffic and security needs with them. Server teams now depend on the storage team to guarantee I/O and space requirements from newly networked storage. The storage team may require security & bandwidth policies from the network team. It’s all inter-related – really (not virtually).

This confusion forces those folks to talk – to determine how they are going to plan for and execute changes (ITIL), and how they are going to track all these pieces (CMDB), so that they can collectively deliver the great value to the business that virtualization promises (Business-aligned IT). Those parenthetical themes may owe their growth to virtualization, when it’s all said and done.

For sure, it’s been hyped. “Virtualization changes everything!” they say at conference after conference. Lots of agendas are attached to those claims, but of all the hyped facets of this technology wave, from rapid development and server cost savings, to energy-reductions and cloud computing, virtualization’s lasting legacy could be the role it plays in making network, server and storage teams talk to each other without a pointed finger between them.

So, we defined the trend in Part 1 -- the power end users have obtained by searching for solutions and researching their options. Then, in Part 2, I set up how this trend is changing the expectations and relationship between vendors and buyers.  All of this leads up to a consideration of how businesses adapt and embrace consumerization...

In my view, it comes down to "All or Nothing" in the operating model.

The thing about a company that embraces the idea of the consumerization of IT – the movement that gives “power to the people” within IT organizations – is that it will look essentially nothing like a “traditional” enterprise software company. If you take all the attributes of that traditional enterprise software company (call it 1.0, to borrow the overused mnemonic device) and stack them on one side of the whiteboard, then take all the attributes of an “enterprise software 2.0” company and put them on the other side (as I’ve done below), you will see that they share no common elements in their operating model.

 

That’s because the operating model is exactly that – a model of how the various parts of the company fit together to deliver something whole to the customer. You can’t take an Enterprise 1.0 company, layer an Enterprise 2.0 component on top, and make it work – the system isn’t set up to reward the right behaviors for that 2.0 process, making it doomed from the start. For example, if an enterprise 1.0 company tries to be cool and launches an interactive user community, it doesn’t change whom they take their product priorities from. They have to prioritize feedback that matters to the executives at their largest customers and prospects, not the feedback of the community. So much for the community, which ends up wilting on the vine – and, if history is any guide, probably gets spun out to a 3rd party. The same argument can be made for why they keep their prices high, their products complicated, their contracts long, etc. – they are rewarded for these elements because they “fit” with the rest of their model.

Today, the most popular element of the enterprise software 2.0 model to “import” into a 1.0 company is the “try before you buy” product, marketed over the web. I see lots of companies that suddenly have a free version, or a trial version of their tools available on a slick-looking landing page. The problem is that most of those companies are just using that as a Trojan horse for a 6 or 7 figure sales cycle for some “other” product. The one that gets all the developers and all the new features.

So, if you want to know if an enterprise software company is 1.0 or 2.0, you need to look at the whole picture. It’s all or nothing when it comes to the way you operate your company.

Let's take a short break from the Consumerization series... and talk a little about power. Specifically, managing power consumption.  Early last week, Jennifer Geisler highlighted Energy Management Without Borders on the Cisco Innovation blog, covering milestones for Cisco's EnergyWise technology. That got me thinking about their approach to this whole energy management space.

When I worked for Cisco in the late nineties, they released their first VoIP handset, along with VoIP support in multi-service edge devices. I recall our collective, initial intrigue as we picked up a $1000 VoIP handset and talked to someone on the other end of the demonstration set-up. It sounded just like a normal call. Expensive? Yes, but kind of cool nonetheless. I don’t recall thinking “wow, in a decade or so, Cisco is going to mow down a bunch of incumbent vendors with this technology and usurp the enterprise telephony budget”. But that’s exactly what happened.

I see similar beginnings in what Cisco is doing with its EnergyWise technology. EnergyWise, which is built into a host of Cisco switches and other devices, allows for centralized, policy-based control of power consumption for those devices, and some of the devices they connect. Think about the ability to manage when wireless APs, VoIP phones, POS terminals, and eventually PCs and Servers are in full power mode and when they are not.
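To make the idea concrete, here is a purely hypothetical sketch of what centralized, time-based power policy looks like as logic. The class of devices, schedule values, and function names below are all invented for illustration – this is not Cisco's actual EnergyWise interface:

```python
from datetime import time

# Hypothetical time-based power policies: full power during business
# hours, reduced power otherwise. Device names and hours are invented.
POLICIES = {
    "wireless-ap":  (time(7, 0),  time(19, 0)),
    "voip-phone":   (time(6, 0),  time(20, 0)),
    "pos-terminal": (time(8, 0),  time(22, 0)),
}

def power_level(device_type, now):
    """Return 'full' if `now` falls inside the device's active window."""
    start, end = POLICIES[device_type]
    return "full" if start <= now <= end else "reduced"

print(power_level("wireless-ap", time(12, 0)))   # midday -> full power
print(power_level("voip-phone", time(23, 30)))   # overnight -> reduced
```

The point of the sketch is that the hard part isn't the rule itself – it's applying rules like this centrally, across thousands of connected endpoints, and reacting quickly when the business changes them. That's exactly the kind of work IT teams already do.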

Again, sounds cool – but isn’t there a facilities department that already handles that? Sure. And they have existing vendors, and very capable software to do some of the very same fancy things. But I’m not sure that matters. This problem is one of setting policy, driving automation, monitoring and reacting to changes quickly. IT has been doing those things forever & most facilities teams just haven’t needed to be as reactive to the business.

It may be a while away, and I can certainly see significant hurdles that Cisco will have to overcome – but I can squint and see a day when power management will be a standard service provided by the IT team, just as voice is today through VoIP. Belief in that vision is why we’ve invested to bring EnergyWise monitoring and policy configuration to our Orion NPM and NCM products this year. Download them & let us know what you think on the EnergyWise forum on thwack.



In part one of this series, I wrote about the changing landscape for how vendors and enterprise buyers (IT folks, in our case) connect – through the consumer-learned behaviors of search and research. This means that vendors must endorse “needs-based” marketing, not “transformational” marketing. Showing up where someone is searching for a solution is an entirely different skill set than advertising a solution to a problem people aren’t entirely aware they have. I like to compare it to selling aspirin versus vitamins. Or SolarWinds versus HP. Big difference.

But the impact of the Consumerization of IT goes much further than just defining how IT users find a vendor or solution. It has fundamentally changed what those users expect from their vendor – including changing the products themselves. We are used to thinking that anything worth paying for in IT must be complex stuff, so complex implementations and complex products are tolerated and expected. Not anymore.  Shelf-ware is totally taboo, ease of use is a critical feature, and immediately demonstrable value is the expectation. The trade-off might be fewer features, or starting smaller, but the productivity gains of a rapidly deployed solution have been proven. The try-before-you-buy approach was novel to IT solutions a few years ago, but is now the expectation. The immediate gratification of our consumer lives is now firmly integrated into the workplace.

Beyond the product itself, other impacts of the Consumerization shift are just as palpable. Potential buyers want to know what users think, users want to talk with other users, everyone wants to hear from the company. Communities, ratings & reviews, user-generated content. These are all examples of the higher level of user engagement and interaction that characterize the “consumerized” enterprise solutions today.

How a company thinks about their products, and how it communicates with its users – essentially the entire customer relationship – has changed dramatically through this evolution. In the last part of this series, I’ll talk about how to tell the companies that “get it” from those who don’t.

Any blog worth its salt has to have a “3-part series” at some point.  So here we go.   This is the first post of three on the topic of the Consumerization of IT – a macro-level business trend that includes many topics. So, let’s start with a definition:  The Consumerization of IT refers to the introduction of consumer-oriented technology and behaviors into the realm of Enterprise IT. The first part of that definition refers to workers integrating technologies like iPhones, Flip Video cameras, Skype, Facebook, Twitter, and other originally consumer-targeted technologies into their workplace and workflow.  While that’s certainly a major trend (and the one everyone is talking about), it’s the latter part of the definition – consumer behavior in the enterprise - that might influence business the most, and in my opinion, will redefine the way technology is built, deployed and used within IT. 

What consumer behaviors have changed in the past 15 years that have the power to reshape the way enterprise IT works?  I say it all starts with the action verb that has most defined the internet era: search (and its sibling word, research). 

The power to index and search web sites has driven a virtuous cycle of user search and vendor publishing that has totally changed the way people find solutions to problems.  Today, you are just as likely to search for a solution to a problem at your office as you are to search for the digital camera that best fits your needs.  Seeking out solutions and researching purchases is now a basic skill that all – well, most – technology buyers possess. But 20 years ago, the flow of information about work-related technology was controlled by the individuals who sold the technology – and if those salespeople only engaged the CIO and his direct reports, then he’s the one who got educated.

Today, most tech vendors are pretty transparent with product info, either by choice or by force.  There are pictures, community sites, and user ratings and review sites for almost all consumer products – and an increasing number of enterprise products.  Some vendors (SolarWinds, to be sure) have even redesigned their selling and marketing model to focus on users who are searching out solutions, instead of trying to convince them of a problem they didn’t know they had.  You can tell the vendors who don’t like this change – just try to get a screenshot, feature info, or pricing for a software product from one of the traditional enterprise software vendors.  Good luck.

So, the foundation of the Consumerization of IT is the power end users have obtained by searching for solutions and researching their options.  That‘s a tectonic shift that changes the relationship between vendor and consumer, no matter whether it’s at home or work.  I’ll cover that relationship next.


Hasn’t every technology sector explained how the inimitable Moore’s Law has impacted it?  Well, I don’t recall ever seeing Moore’s Law referenced in Network Management, but here goes...

Over the years, there have been numerous technologies implemented in network devices - layer 3 devices in particular, like routers – built expressly for the purpose of improving visibility and monitoring of the network.  The two most obvious examples are NetFlow and IP SLA (formerly SAA), both Cisco technologies, introduced more than a decade ago.   Why are these technologies just now becoming table stakes for network management?  It’s because when they were introduced, they took such a toll on the router processor that they negatively impacted routing performance, and that was a non-starter for most network engineers.

When I spent time at Cisco in the 1990s, router configurations that disabled certain features, reduced conflicting or redundant parameters, and thus resulted in the fastest router performance were hotly traded commodities – passed from engineer to engineer.  And management “features” that hurt performance were the first to go.  No NetFlow, no IP SLA.  Those were the rules. 

Today, router technology has progressed so far that you could credibly argue they shouldn’t even be called routers anymore.  It’s all about the extra features.  Why is that?  Because router processors have continued to improve according to Moore’s Law – doubling in performance roughly every 2 years.  Today, you can turn on NetFlow exporting and create a number of IP SLA tests in an edge router with essentially negligible impact on routing performance (of course, there are still exceptions, but I’m generalizing).
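The back-of-the-envelope arithmetic behind that claim is easy to sketch. This is just an illustration of the doubling rule, not a benchmark of any actual router:

```python
# Rough Moore's Law arithmetic: performance doubles every ~2 years,
# so relative performance after t years is 2^(t / 2).
def relative_performance(years, doubling_period=2.0):
    """Performance multiple after `years`, relative to the start."""
    return 2 ** (years / doubling_period)

# A router CPU from roughly a decade ago vs. today: about a 32x gap --
# plenty of headroom to absorb features like NetFlow and IP SLA.
print(relative_performance(10))  # 32.0
```

That 32x of spare horsepower is why monitoring features that once crippled a router now barely register.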

Take a look at Cisco’s new ISR G2, which launched yesterday.  The new devices make significant improvements on performance and operational efficiency, by dividing key functions like routing onto one core of the CPU and ancillary functions, like IP SLA and NetFlow, onto the other core (or in some cases onto a separate physical processor altogether). 
Our Head Geek, Josh Stephens, recently did a great Q&A with Brad Reese over at Network World on IP SLA adoption and addressed the concerns around performance impact … more here.

So, there’s really no reason not to leverage these technologies in network management.  Let’s all take a moment to thank good ole’ Moore’s Law for taking a turn with Network Management.

Related SolarWinds Products:
Orion NetFlow Traffic Analyzer (NTA)
Orion IP SLA Manager

And, if you want to read another perspective on Moore's Law and Cisco IOS, check this out.
