
Over the last few weeks, those of you who are members of the SolarWinds community, thwack, have seen a number of changes that culminated in the switchover to the new platform that will carry our community efforts forward.  The technology changes are only the start of where we’re headed, and we hope to make the community a much bigger part of your relationship with us, with a whole set of new content and tools.  But before I go there, let’s talk about the vision we have for thwack.

 

For a number of years now, thousands of you have interacted with the SolarWinds product managers and support team to get the most out of our products.  You’ve given us a ton of valuable feedback, celebrated our successes with great releases, and beaten us up when we haven’t lived up to your expectations. Frankly, you’ve been integral to our ability to build great products.  But for some time, we have wanted to expand the dialogue beyond our products into more thought-provoking content, educational content, useful tools, etc.  You know, look beyond our backyard…

 

Some of this has already started. For those of you who don’t know, we now have 3 blogs at SolarWinds:

 

  • Product blog: You guessed it; we talk about products here – what’s new, how to do interesting new things, and in general how to get the most out of the products.  If you’re an NPM (Network Performance Monitor) or SAM (formerly APM) customer, the product blog feed is available directly in the product.
  • Geek Speak: This is the place to learn about all the ins and outs of technology. We try to make sure the content here is all about technology and technology events, tips and tricks… really the nuts-and-bolts content.
  • Whiteboard: Our corporate blog (it’s not as stuffy as a corporate blog might sound) – this is our perspective on the IT market: the winners, the losers, who’s dressed well on the red carpet, etc. (But you have probably already figured this one out...  since you are here, reading this post.)

 

But the blogs are just the start of what we see ahead. We’ve got exciting free tools in the works, posts from SolarWinds ambassadors (folks outside of SolarWinds), and new incentives to get you to explore some of this stuff.  And that is only the beginning; the community team has some other “fan-favorite” ideas they are putting into motion. The idea is to make thwack a place where you can come to learn and interact with other IT users – it’s bigger than just SolarWinds and our products; it’s about IT, and it’s about you.

 

So we’re at the beginning, but I encourage you to take a look around, tell us what you like, and tell us what you want to see, and know that we’re working to take community to a whole new level at SolarWinds. So if you haven’t jumped in, now’s the time.

If you read the SolarWinds blog and have seen my recent posts, it won’t be any surprise to you that we believe the virtualization market is going through a period of transformation. We believe this because our customers are telling us that, while VMware is still a significant portion of their environment, they’re beginning to consider alternatives. We could spend (and have spent) days hypothesizing about the reasons for this shift, but I’d rather just talk about what SolarWinds is doing to accommodate it.

 

I’m excited to announce that SolarWinds Virtualization Manager now supports Microsoft Hyper-V! There will still be a few folks out there who wonder why we decided to do this, so here is a little bit of our thought process:

  • Microsoft Hyper-V adoption is growing faster than any other hypervisor in the market right now. This one is pretty obvious.
  • We believe there is a contingent of organizations out there who remember, or are still experiencing, the many pains of vendor lock-in and want a dual-hypervisor strategy in order to avoid it going forward.
  • There are companies that don’t trust their infrastructure vendor to tell them how much new infrastructure they need to buy. Many think that VMware vCenter Operations telling them that they need to buy more VMware vSphere licensing is kind of like having a fox guarding the hen house. So, there is a viable market for a third-party, unbiased virtualization management tool that doesn’t have a horse in this race.
  • There is a huge base of Microsoft administrators out there that will help accelerate adoption.
  • Microsoft’s planned enhancements for Hyper-V version 3 this year have more and more customers planning for a dual-hypervisor environment.
  • Multiple management consoles are hard to use, clunky, and don’t offer you the flexibility you need in measuring and managing your virtual infrastructure. Virtualization Manager now allows you to see both VMware and Hyper-V environments in a single pane of glass.

 

Since we already know that Hyper-V penetration is accelerating, SolarWinds has a unique ability to bridge a major Hyper-V functionality gap in storage visibility. Because we’ve created integration capabilities for SolarWinds Storage Manager within SolarWinds Virtualization Manager, you can now see what was previously invisible in your Hyper-V environment. When using these two products together, you can drill down from the Hyper-V (or VMware) VM to the storage LUN servicing that VM, see all of the other VMs serviced by that LUN (to help you pinpoint whether a particular VM is hogging storage I/O), and even see the physical disks associated with the LUN.

 

So, if you think any of these points are valid, you should give the new SolarWinds Virtualization Manager 5.0 a try with our free 30-day trial today!

Andrew Hay recently did a great post on Dark Reading on whether big data was really just a buzzword being thrown around by the SIEM vendors to get attention, and I have to say I tend to agree with the essence of his conclusion.  SIEM (and log management) vendors with 10-year-old architectures claiming big data is what they do probably can’t deliver unless they’ve embraced new big data technology and concepts.  But I don’t think that’s the end of the story.

 

While the concepts of Big Data can be used to analyze everything from finance to the weather, most of you are probably more interested in what it means to you when your boss asks you what you’re doing about big data (probably right after they sign off on buying that new array).  So let’s talk about it.  Big Data for IT can be used to manage security threats, tackle application performance issues, identify business insights that go beyond basic trends, and spot operational issues before they become operational problems.  All of these are worthy uses of technology, and probably your time, but getting these insights from big data ain’t so simple (“ain’t” is the little bit of Texas coming out in me).

 

There are 2 fundamental problems in getting value from big data:

  1. You need to collect it all – as Andrew pointed out in his Dark Reading post, there are several issues with systems architected around decade-old technology.  The problems lie both in the storage systems, which Andrew pointed out (i.e., the database), and in the organizational structure used to keep and analyze the data (i.e., transactional vs. OLAP schemas).
  2. You need to analyze the data, identify problems, and then automate the identification for the future. This is more challenging because big data doesn’t come with a “walk-through guide”; you have to figure it out. It requires someone with real computational science expertise to find the needle in the haystack and know that it’s meaningful to your business.  Also, once a problem is identified, it’s not clear that the same system used for analysis would be any good for automation and real-time detection.  In fact, it’s likely that they are, at a minimum, different applications.

 

For many of you, these 2 things may make big data an impractical luxury, but if you’re in the business of securing your network or really trying to deliver high availability, you may want to read on.  You have a few options:

  • Jump in with both feet: Build a system based on big data technologies like Hadoop and Google MapReduce, or even more platform-oriented products like Splunk; hire your computational scientist and go at it.  It’s not impossibly hard – check out this case study about Zions Bank.  But set your expectations right, because this isn’t going to be a deploy-it, gather-data, and voila-results kind of project.
  • Or you could look for a product that can give you a few more practical tools.  First, make sure you can collect machine data and act on it in real time; if you are writing the data to a database and then making it available to an analysis engine, it’s not going to help you react fast enough to a security threat. Second, get a product that gives you some visualization tools for the data; things like word clouds, treemaps, bubble charts, and histograms will help you begin your data exploration exercise and figure out what to search for – IT search by itself isn’t going to cut it.  Third, make sure it’s easy to build rules – no handwritten query languages, please; we are in the age of drag-and-drop.  Lastly, make sure your system can take actions (see the sketch after this list): if all it can do is alert you, then it’s not really helping to stop the problem, it’s just helping you know that there’s a problem – big difference.
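To make that last point about rules and actions concrete, here is a minimal Python sketch of a real-time rule evaluated as log lines stream in: repeated failed logins from one source inside a time window trigger an action rather than just an alert. The log format, the threshold, and the block_ip action are hypothetical placeholders for illustration, not any particular product’s API.

import re
import time
from collections import defaultdict, deque

# Hypothetical rule: THRESHOLD failed logins from one IP inside WINDOW seconds.
FAILED_LOGIN = re.compile(r"Failed password .* from (?P<ip>\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5
WINDOW = 60  # seconds

recent = defaultdict(deque)  # ip -> timestamps of recent failures

def block_ip(ip):
    # Placeholder action; a real system might push a firewall rule instead.
    print("ACTION: blocking " + ip)

def handle_line(line, now=None):
    """Evaluate one log line against the rule as it streams in."""
    match = FAILED_LOGIN.search(line)
    if not match:
        return
    now = time.time() if now is None else now
    hits = recent[match.group("ip")]
    hits.append(now)
    while hits and now - hits[0] > WINDOW:  # slide the window forward
        hits.popleft()
    if len(hits) >= THRESHOLD:
        block_ip(match.group("ip"))  # act, don't just alert
        hits.clear()

# Feed lines as they arrive, e.g. from a syslog socket or a tailed file.
for second in range(6):
    handle_line("sshd: Failed password for root from 203.0.113.7 port 22", now=second)

The point of the toy is its shape rather than the specific rule: detection state lives in memory and the action fires inline as events arrive, instead of a query running later over rows already written to a database.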

 

Big Data is here to stay, but my advice is to get practical: if you don’t know what you want to do with the data, stop.  If you do know what you want to do with it, then find a solution that gets you the key parts of the value without having to hire your very own computational scientist.

So I’ve been reading a lot of commentary on the impact of BYOD and even more opinions on what should be done: stop it, help it, control it, manage it, ignore it.  In fact, we recently worked with Network World on a survey around this topic.  While I understand the perspectives on what to do, in all that I’ve read I have yet to see someone approach this from the customers’ point of view.  I have been thinking about the reason for this movement, and I’ve bounced around a few possibilities.

 

  1. It’s just a continuation of the consumerization of IT – we saw it with the adoption of SaaS and this is just a sequel.  Users saw that they could push new technology on the business without the cooperation or consent of IT.
  2. It’s cool – let’s face it, many of these devices started out as a new toy and now we just want to use the new toy at work.
  3. It’s about easy – like the Staples Easy Button. Knowledge workers are busy folks with a million things going on, and any way to help make life easier is welcome.  It turns out my tablet made my personal life easier, so why don’t I see if it can make my corporate life easier?

 

I think the real answer is that there’s a little truth to each of these reasons.  I’m going to start with the last one first because I think it’s the most relevant.  Work has blended into our personal lives in ways that didn’t exist 30 years ago.  The generations in the workplace today walk around never unplugged from the office.  So after pushing us to stay connected, it’s not surprising that when I find an easy way to solve a data management (it’s all data) problem in my personal life, I want to bring it to work.  And saying no is not an option for IT, mostly because the biggest demand for a solution is coming from the executives.  A recent survey we conducted at SolarWinds illustrates this: when asked about the results of allowing personal mobile devices on the network, 51% of respondents said it increased productivity and about 50% said it increased the ability to work from home.

If you believe my premise that the users have found a better way, then the question becomes what we as an IT organization need to do to enable the users while still hitting all the rules and requirements that we’re required to enforce (compliance, security, etc.).  So far, for IT organizations, allowing personal devices on the network has meant more traffic (40% said this in our survey) and more helpdesk requests (44%).  So let’s peel back the onion a bit on what might make life easier.

 

The first thing to realize is that users don’t need everything (they might want it, though).  What users need is access to their primary sets of data.  What do I mean by this?  Well, not too many folks want to open massive spreadsheets, build ppt decks from scratch, or do compute-intensive tasks on their tablet – there are a few, and your requirements may vary, but I’m going to propose that a good 80% of users are knowledge workers who don’t want to do the things I mentioned above. What they do want is access to email, key corporate apps (many of which are already accessible through a browser with a VPN), and maybe a few special data sets (operational reports).  So if we scale the requirements back to this, can you build a better mousetrap?  I think so.

 

I won’t go through all the options in this post because most of them aren’t new, but if you’re thinking about BYOD, before you think about the answer, see if you can put a finger on the problem. As a side note, here are a couple of good pieces I read on BYOD that talk about some of the challenges and the ‘what’:

 

The new iPad has CIOs quaking in their cubicles – GigaOm

Bring your own device debate – ZDNet

3 Predictions on the Future of Enterprise Software – TechCrunch

We’ve all heard the story: organizations are standardized on VMware, and they’re not going to change. IT departments live and die by the scale they build into their environment. They daily suffer the consequences of previous regimes that left them managing a complex ecosystem of hardware, OSes, applications, etc. Based on that, we KNOW that no organization will want to add the complexity of multiple hypervisors to their virtual environments…OR DO WE???

 

A Q3 2011 survey by independent market research company Vanson Bourne indicated that 38 percent of businesses are planning to switch hypervisor vendors within the next year due to licensing models and the robustness of competing hypervisors. We’d be naïve to think that means organizations are totally removing VMware from their environments. The fact is that VMware just works, and it works very well. That’s why they are almost always at the center of these discussions. They have owned the market and are the incumbent in most situations. We’re really pretty comfortable with, and maybe even accustomed to, deploying mission-critical applications on VMware vSphere virtual infrastructure. However, there is growing noise in the virtualization marketplace. Companies, large and small, are talking about Microsoft Hyper-V. But why? I believe one major reason is the hype surrounding Hyper-V 3.0.

 

But first, let’s step back for a second and look at why companies are looking at Hyper-V, even today, presumably months before the 3.0 release. I see two primary drivers for this consideration:

 

  1. Organizations realize that VMware is getting a lock on their business, and that monopoly power makes people nervous – they want to keep a second option open to improve their negotiating position for their next contract with VMware or in case VMware continues to raise their prices even more.
  2. Companies want a hypervisor vendor and a roadmap that will not make them regret their investment – neither RedHat nor OpenStack gives them that confidence yet, because they’re relative unknowns in the virtualization landscape. Only VMware and Microsoft have an established track record here.

 

To date, Hyper-V has not been able to compete head-to-head with VMware from a technical standpoint. VMware had a big jump on Microsoft, but Microsoft has been investing heavily in the last few years to catch up. While Microsoft hasn’t bridged the technical gap yet, there is lots of buzz about upcoming functionality in Hyper-V 3.0 that will help them cross the technical chasm. The major features in Hyper-V 3.0, expected out later this year, are as follows:

 

  1. Improved Scalability – larger cluster support, native NIC teaming, and a new VHDX file format that will support virtual disks up to 16 TB. This enables Hyper-V to handle more mission-critical applications and workloads – particularly Microsoft applications like SQL Server and Exchange.
  2. Extensible Virtual Switch – provides partners the capability to build in advanced networking features; capture extensions, for example, let them build tools to monitor and sample traffic. This also creates opportunities for networking vendors to create third-party virtual switches, like the Cisco Nexus 1000V for VMware.
  3. Live Migration Enhancements – Live Migration will perform much more like vMotion, with live storage migration (previously, this required downtime) and concurrent live migrations of virtual machines. Microsoft may gain an advantage over VMware in this area because shared storage is not required for Live Migration. This could give Hyper-V access to environments with no shared storage, such as small businesses and large public clouds. VMware and Xen require shared storage; RedHat’s RHEV is the only other hypervisor that does not require shared storage for live migration.
  4. Replication - embedded host-based replication will help Microsoft make Hyper-V a better fit for mission-critical applications and enables organizations to use branch offices as failover targets.

 

So, it’s obvious that Microsoft is extremely focused on competing with VMware technically. Hyper-V will still need some capabilities strengthened to be a viable competitor to VMware’s DRS and other advanced functionality in more sophisticated environments. However, most moderately sized and smaller organizations aren’t ready to actually implement this technology yet. So, Hyper-V still has some time to catch up. We’ll just have to keep an eye on their roadmap to see what features they’re planning to introduce next!

 

In the meantime, we’ve just released a new free tool called VM Monitor for Hyper-V that will allow you to start gaining some visibility into a Hyper-V server (we also have a VMware version of the same tool called, you guessed it, VM Monitor for VMware). Stay tuned next week for more exciting announcements around support for Hyper-V in SolarWinds products (wink, wink)!

A popular trope in science fiction involves an individual who’s been time-shifted into a future culture (frozen in ice, cryogenic sleep, time machine, what have you), and the reader or viewer then learns about the new culture through what the protagonist experiences or reads.  I’ve been thinking that if our time traveler were a sysadmin who arrived in 2012 from 5 or 10 years in the past, he or she would initially read the tech blogs and online press and think that cloud was the reigning technology and that servers in data centers, with actual applications running on them, had disappeared like so many 8-track tape players.

 

Of course, if the story were allowed to continue, at some point our time-traveler would get a job, maybe as a sysadmin, and then shock would set in when the cloud-enabled world was hardly to be found.  The server room would still be full of racks of physical servers.  Sure, lots of them would be running a hypervisor with virtual machines, but it’s still a far cry from the cloud-filled IT world that the hype seems to imply.

 

Am I saying the cloud is all hype?  Definitely not.  But like most technologies, there’s what some have termed macro-myopia, which is the tendency to overestimate the short-term impact of a technology and dramatically underestimate its long-term impact.  It’s clear that cloud will impact IT in a huge and probably deeply transformational way.  Eventually.   But today, most computing resources are still on solid ground.

 

How do we know?  First off, SolarWinds sells enterprise software at a really low price, which results in literally thousands of transactions every quarter.  On top of that, our product management team is reaching out to customers and non-customers alike on a daily basis, asking for what they need.  In all of these conversations, we’re focused on current pain points.  We discuss the problems that need immediate attention.  While we now have a large portfolio that can solve a lot of pains, we still get asked for things we don’t do (yet).  So how often do we get asked for help with managing cloud infrastructure?  I won’t say never, but if I did, it wouldn’t be far off.

 

We’ve also done more formal data collection.  When we surveyed 90 customers, we found that 56% of customers said they weren’t running anything in public or private clouds.  About 29% said they aren’t even thinking about cloud.  Only about 5% were running critical applications in a public cloud.  That goes up to 9% if you throw in non-critical apps.  Private clouds are more in use, with over 40% of users running something in a private cloud, but when we’ve drilled in with end users, private cloud is often just a virtualized environment that’s been “rounded up” to a cloud: there’s no self-service, no abstraction of the server from the end user.

 

We did a separate survey about cloud plans, and with 88 respondents, it told a similar story.  Roughly 70% of respondents had no plans to do anything with cloud in the next year.  Only 16% were planning a cloud initiative in the next 6 months.

 

BTW, if you’re tempted to dismiss these results because SolarWinds is “an SMB player”, let me set the record straight:  Just because we’ve figured out how to sell to customers with only a few hundred employees does not mean that we only sell to that segment.  Our customers—including those in this survey—range in size from hundreds to tens of thousands of employees.  We cover a huge swath of the market.  We just do it without talking to CxOs, who are, perhaps, more susceptible to vendors who “cloudwash” their solutions, given that the CxO can’t easily drill down further than what’s presented in a slide show.

 

Why am I throwing a cold, wet blanket on the cloud party?  Again, we believe cloud is coming, but it’s not here yet, and it probably won’t be here for a while yet, maybe closer to 2020, if IDC is to be believed (see Public, Private and Hybrid Cloud Adoption – Competing for 2020; IDC). In the meantime, real IT professionals have real problems with their non-cloud environments right now.  And when I look around at the big IT management vendors like VMware, Microsoft, CA, and BMC, they are pushing cloud this and cloud that 24/7/365.  The cloud focus is just as true of startups (although that makes sense, because startups are all about the future).  Who’s left to look after the problems of today?  That would be us.  SolarWinds continues to focus on delivering powerful, low-cost software that is truly easy to use.

 

We aren’t ignoring cloud.  When it becomes a need for mainstream IT people, we’ll have products that address their pain point.  Count on it.  Until then, if any of our competition wants to pull their heads out of the clouds, we wouldn’t mind a little company in the here and now...

About a month ago, we announced our first release of DameWare since it became part of the SolarWinds family. We are really pleased with the feedback that we have been getting on this release from DameWare customers as well as SolarWinds users who have taken a few moments to check it out.  The DameWare products allow system administrators to easily manage Windows servers, workstations, desktops, and laptops from remote locations. The release brings a couple of great features that the community has been clamoring for, as well as some tweaks and fixes that will make every user’s experience a bit smoother.

 

What’s New in DameWare?

Chat – Now you can chat online with your remote user as you troubleshoot or configure the remote machine. There is no need to open WordPad™ to type back and forth. Just one click on the chat button in the top menu, and a chat window with the remote user opens automatically.


Screenshots – Click to quickly capture and save a screenshot from the remote machine. Now there is no need to go through the process of pasting your screenshot into Paint™ and then saving it. One click, one step, and it’s saved! This feature is invaluable in troubleshooting scenarios such as documenting errors or configuration settings, especially since a picture is worth a thousand words!


DameWare still has the same super cost-effective pricing model (priced per admin user instead of by managed machine) and all the great functionality you’ve come to expect. In fact, WindowsNetworking.com recently recognized DameWare NT Utilities with two 2011 Readers’ Choice Awards, taking the top spot in two categories -- Administration Tools and Remote Control.

 

Check it out for yourselves!

We've never paid much attention to these buzzwords.  Sure, SolarWinds definitely employs concepts of “Consumerization of IT” – an easy-to-use UI, easy-to-acquire/deploy software, etc.  And while users have, for some time, been able to use mobile devices to view and use our solutions, we hadn't innovated to the degree we are seeing in social media.  For example, with Yelp, you can search for businesses around you, link to the phone number or website to get the menu, check in, write a review, and pass all this info on to your FB friends.  Recently, I have been trying a lot of vegetarian restaurants, and the app has picked up on that and recommended some places to try.  This app is all about the context of me – my location, my preferences.  We have not seen this kind of innovation in IT – until now…

 

SolarWinds is taking Consumerization of IT to a whole new level.  With our new product, SolarWinds Mobile Admin, sysadmins can get contextual IT information from any mobile device.  Mobile Admin is a single pane of glass for everything IT.  It provides a dashboard with alerts, tailored to each individual’s preference.  For example, you can define what alerts you receive on your mobile device, depending on severity or node.  It also provides contextual navigation between IT applications.  For example, you see a ticket from the service desk that a user is locked out of their email; from there, you can launch in context to the node, see the actions you can take, and triage the problem – i.e., launch to Active Directory and unlock their account, change permissions, or change group memberships.  All this happens from one UI on your mobile device.  Most mobile IT apps provide visibility into what is going on, but only one provides 500 functions for taking action to triage or fix the issue.

 

Is this a big deal?  Yes, for two reasons.  One is that it provides IT professionals with a mechanism to do their job from anywhere, which greatly impacts customer satisfaction.  No longer does the sysadmin need to run to the office or run home to fix a problem – they can do it from the checkout line, or by stepping away from dinner momentarily.  Secondly, innovation provides valuable perspective and opportunity.  By providing IT with a new workspace, we can now look at the complex and frequent problem of product integration through a new lens.  This new lens will force IT management software developers to rethink how they approach developing software for the end user.

What happened to the CMDB (Configuration Management Database) – the heart and soul of the "new" management frameworks? Well, they are still around as part of the Big 4’s (and others’) portfolios – but the onset of virtualization and cloud has thrown them a "curveball."

 

Just to recap, a CMDB was defined as "a repository of information related to all the components of an information system" [1]. This was updated in ITIL v3 to the CMS, or "Configuration Management System": "a set of tools and data that is used for collecting, storing, managing, updating, analyzing and presenting data about all configuration items and their relationships" [2]. A broad goal of the CMDB can be defined as being a key enabler in delivering IT as a service through improved incident and problem management (root cause and impact analysis), change management, auditing, and compliance, amongst others. A worthy goal indeed – and a goal that is more relevant than ever, with virtualization and cloud getting us closer to true service delivery models and blurring the traditional boundaries across technology domains, organizational roles, and management disciplines.


Virtualization and the clouds it helps enable bring their own set of challenges, though. Bernd Herzog does a great job of describing many of them in his articles here and here.


Some similar key challenges we see brought about by virtualization and cloud include:


  • New Objects - We have a new class of objects to deal with. Virtualization has in essence defined a new "blueprint" for the datacenter. There are new abstractions like clusters, resource pools, and so forth. This is important because it doesn't always make sense anymore to manage every piece of data down in the weeds - we need to bubble it up to the right level of abstraction upon which decisions can be made.
  • New Relationships - How does this new set of objects relate to one another? VMs, files, hosts, clusters, datastores, and applications - we need to understand how all of these things tie together, whether we are trying to troubleshoot a performance issue, plan for capacity or understand the impact of changes.
  • Rate of change - It was arguable whether traditional CMDBs could keep up with the rate of change in a physical environment, let alone a virtual one. Technologies like vMotion and Storage vMotion mean we need continuous discovery to keep the new relationships above up to date (see the sketch after this list).
  • More than configuration - Virtualization is blurring boundaries - configuration changes can strongly correlate with performance, affect capacity, and so on. We need a unified approach that brings (and ties) together performance and service assurance information in addition to configuration data.
  • Needle in a haystack -  Many CMDBs offer rudimentary search capabilities - but how do we leverage this to search across unified and integrated data sets to make sure the right people get the right insights they need?
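To make the "new relationships" and "rate of change" points concrete, here is a minimal Python sketch of a configuration-item store that treats relationships as first-class, timestamped links and updates them as discovery events stream in. The CI names, relation labels, and vMotion event handler are hypothetical illustrations, not any particular CMDB's model.

import time
from collections import defaultdict

class CIStore:
    """Toy CI store: configuration items plus timestamped relationships."""
    def __init__(self):
        # ci -> {related_ci: (relation, last_seen_timestamp)}
        self.edges = defaultdict(dict)

    def relate(self, ci, relation, other):
        self.edges[ci][other] = (relation, time.time())

    def unrelate(self, ci, other):
        self.edges[ci].pop(other, None)

    def related(self, ci):
        return [(other, rel) for other, (rel, _) in self.edges[ci].items()]

def on_vmotion(store, vm, old_host, new_host):
    # Continuous discovery: a single vMotion invalidates the old edge right away,
    # rather than waiting for a periodic (e.g., nightly) repopulation run.
    store.unrelate(vm, old_host)
    store.relate(vm, "runs-on", new_host)

store = CIStore()
store.relate("vm-42", "runs-on", "esx-host-1")
store.relate("vm-42", "stored-on", "datastore-7")
on_vmotion(store, "vm-42", "esx-host-1", "esx-host-2")
print(store.related("vm-42"))  # vm-42 now tied to esx-host-2 and datastore-7

The same store could hang performance samples or change records off those edges, which is the "more than configuration" point above: one graph, several kinds of data.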


When you speak to those with highly virtualized environments, it's not uncommon to hear comments on the CMDB like "yeah, that's the data warehouse thing that we populate with information occasionally - we don't use it" or "CMDBs don't work for virtualization/clouds - they don't keep up".

 

Take Away - The goal of a CMDB is more relevant than ever, but how it gets delivered has to change in a virtual environment.

 

[1] http://en.wikipedia.org/wiki/Configuration_management_database

[2] http://wiki.en.it-processmaps.com/index.php/Service_Asset_and_Configuration_Management

This week LogLogic entered a definitive agreement to be purchased by TIBCO, in yet another deal for the log management/SIEM space.  I have to admit the deal caught me a little off guard, as the security and compliance side of log management didn’t seem like an immediate fit with TIBCO, but then I saw the hint of where this is going in the press release, and it makes lots of sense.  Now I’ll pause here to say that I know nothing definitive about TIBCO’s plans with LogLogic, so consider this speculation, opinion, water cooler chatter if you want to go there.

 

TIBCO is probably best known for their enterprise data integration solutions – combining and transforming data from multiple systems in the enterprise so it can be shared through one common infrastructure as opposed to fragmented integrations.  They have started to pitch an analytics message, which is a nice logical extension – data is flowing through us, so why not provide the tools to analyze it – and in that context the LogLogic deal makes a bit more sense.  It plays into that ongoing flavor of the month/quarter/year – Big Data!  This is not new for the log management space, of course; Splunk, a company I know you’ve all heard of, pitches this same message: look through the massive quantities of machine data and find the trends/patterns/connections in your data that can provide true business insight today.

 

So what does this mean for LogLogic in the SIEM space?  Will they continue to be a player, or will they go the way of many acquisitions, disappearing into the big company machine?  I suspect it will be hard for them to keep their identity once they’re part of TIBCO; it’s hard to imagine TIBCO doing a lot around the SIEM space or wanting to position the acquisition around log management vs. doing the big enterprise pitch around big data.  But I guess only time will tell – in the meantime we’ll keep moving forward, carrying the torch of providing affordable log management and SIEM solutions that meet the needs of IT users everywhere.

Storage is undergoing a pivotal transformation caused by the combined pressures of virtualization on performance and the need to reduce costs. Long gone are the days of static storage configurations with easy-to-diagnose issues. Today's storage is dynamic, constantly self-optimizing to meet the changing demands of the infrastructure and the business environment.  This year will see several trends accelerate and a few new ones appear.

 

Storage Efficiency:

Server virtualization has accelerated the need for arrays to self-optimize their storage, and this will be the biggest story of the year.  Thin provisioning, tiering, and de-duplication all play a part, but the real driver this year is tiering. The ability of the array to spread storage I/O across performance tiers, transparently to the host or server, has revolutionized the administrator's ability to maximize usage and minimize costs.  This capability will continue to push overall storage utilization rates higher while letting administrators fine-tune costs in ways only imagined just a few years ago. This technology has the track record and the vendor breadth to become ubiquitous in 2012.

 

SSD/Flash:

SSD is hitting its stride and will expand in 2012 in a big way by leveraging the aforementioned tiering capabilities to make SSD more cost-effective and easier to justify. As flash memory prices drop and disk costs suddenly spike, consumers will have an easier time justifying adding SSD to every array they purchase.  Leveraging the speed of SSD, users can buy more high-capacity, slower disks to reduce the overall cost per TB while not compromising performance (a worked example follows below).
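As a back-of-the-envelope illustration of that trade-off, here is a small Python sketch comparing a hypothetical all-15K-SAS array against a tiered SSD-plus-SATA mix. Every price and capacity figure is made up purely to show the arithmetic; real quotes will differ.

# Hypothetical per-TB prices and a 100 TB capacity target.
PRICE_PER_TB = {"ssd": 2000.0, "sas_15k": 800.0, "sata": 250.0}
CAPACITY_TB = 100

def blended_cost(mix):
    """mix: {tier: fraction of capacity}; returns total cost for CAPACITY_TB."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(PRICE_PER_TB[t] * frac * CAPACITY_TB for t, frac in mix.items())

all_sas = blended_cost({"sas_15k": 1.0})
tiered  = blended_cost({"ssd": 0.05, "sata": 0.95})  # hot 5% on SSD, rest on SATA

print(f"all 15K SAS : ${all_sas:,.0f}  (${all_sas / CAPACITY_TB:,.0f}/TB)")
print(f"tiered mix  : ${tiered:,.0f}  (${tiered / CAPACITY_TB:,.0f}/TB)")
# all 15K SAS : $80,000  ($800/TB)
# tiered mix  : $33,750  ($338/TB)

The interesting knob is the SSD fraction: because the hot working set is usually a small slice of total capacity, even a 5% SSD tier can let the remaining 95% ride on cheap, slow disks.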

 

Convergence of Compute and Storage: 

Storage and compute have traditionally been kept at arm’s length, but server virtualization is driving them together, and we expect to see new devices emerge.  Some call this embedded storage, but really these are converged devices that allow for simple and quick configuration of virtual environments, with a straightforward scale-up architecture.  These devices will often leave the storage details to the virtualization layer.

 

New Tools:

As storage advances and integrates deeper and deeper into the rest of the infrastructure, new tools will be needed for monitoring and management. Why? First, it becomes more and more difficult to diagnose performance issues because of the dynamic nature of the infrastructure.  Without the proper combination of tools that show end-to-end performance of storage, network, compute, and applications, fixing problems will be difficult.  Second, as we rely on space-saving technologies like tiering and thin provisioning, we are in essence living closer and closer to the "edge" of running out of physical storage (the sketch below shows the arithmetic). A simple misconfiguration, end-user mistake, or unexpected growth could push us over the edge, causing a catastrophic failure.
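To see how thin provisioning moves you toward that edge, here is a tiny Python sketch that computes a pool's oversubscription ratio and flags it when physical headroom is nearly gone. The pool sizes and the 80% alert threshold are hypothetical numbers chosen for illustration.

def pool_report(physical_tb, provisioned_tb, written_tb, alert_pct=80.0):
    """Flag a thin-provisioned pool that is close to running out of real disk."""
    oversub = provisioned_tb / physical_tb          # how far promises exceed reality
    used_pct = 100.0 * written_tb / physical_tb     # physical space actually consumed
    status = "ALERT" if used_pct >= alert_pct else "ok"
    print(f"{oversub:.1f}x oversubscribed, {used_pct:.0f}% physical used -> {status}")

# Hypothetical pool: 50 TB of real disk, 120 TB promised to hosts.
pool_report(physical_tb=50, provisioned_tb=120, written_tb=36)  # 2.4x, 72% -> ok
pool_report(physical_tb=50, provisioned_tb=120, written_tb=43)  # 2.4x, 86% -> ALERT

Note that the oversubscription ratio alone is not the danger signal; the pool is fine at 2.4x promised-versus-real right up until written data approaches the physical ceiling, which is exactly why continuous monitoring matters.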

 

Cloud Storage becomes more mainstream:

Users have been experimenting with cloud storage much as they first tried server virtualization – on non-critical or test environments. In general, cloud storage clearly has traction for backups and disaster recovery.  However, this year users will begin to expand this trend to include storage on demand, providing just-in-time elasticity to their oversubscribed, tiered infrastructure. Without this safety net, users won't risk maximizing the utilization of their own storage.  Expect the storage vendors to push cloud connectivity as an add-on to that array sale.

 

So 2012 will be a very exciting year for storage, as automated storage optimization becomes standard among arrays, vendors continue to pursue deeper integration with virtualization, and cloud storage moves more mainstream.
