Whiteboard

Post authored by: bmrad

Storage is undergoing a pivotal transformation, driven by the combined pressure of virtualization on performance and the need to reduce costs. Long gone are the days of static storage configurations with easy-to-diagnose issues. Today's storage is dynamic, constantly self-optimizing to meet the changing demands of the infrastructure and the business environment. This year will see several trends accelerate and a few new ones appear.


Storage Efficiency:

Server virtualization has accelerated the need for arrays to self-optimize their storage, and this will be the biggest story of the year. Thin provisioning, tiering and de-duplication all play a part, but the real driver this year is tiering. By spreading storage I/O across performance tiers transparently to the host or server, the array lets administrators maximize utilization and minimize costs. This capability will continue to push overall storage utilization rates higher while letting administrators fine-tune costs in ways only imagined just a few years ago. This technology has the track record and the vendor breadth to become ubiquitous in 2012.
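
To make the idea concrete, here is a minimal sketch of what a tiering pass might look like in principle. It is illustrative only: the `Extent` class, I/O thresholds and tier names are assumptions for this example, not how any particular array implements its (far more sophisticated, firmware-level) policies.

```python
# Toy promotion/demotion pass over storage extents, assuming each extent
# tracks a rolling I/O count from the last sampling window.
from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: int
    io_count: int          # I/Os observed in the last sampling window
    tier: str              # "ssd" or "hdd"

PROMOTE_THRESHOLD = 500    # hypothetical: promote extents hotter than this
DEMOTE_THRESHOLD = 50      # hypothetical: demote extents colder than this

def rebalance(extents):
    """Move hot extents to the SSD tier and cold extents back to HDD."""
    for ext in extents:
        if ext.tier == "hdd" and ext.io_count > PROMOTE_THRESHOLD:
            ext.tier = "ssd"   # promote: hot data gets flash latency
        elif ext.tier == "ssd" and ext.io_count < DEMOTE_THRESHOLD:
            ext.tier = "hdd"   # demote: cold data moves to cheap capacity

extents = [Extent(1, 900, "hdd"), Extent(2, 10, "ssd"), Extent(3, 200, "hdd")]
rebalance(extents)
print([(e.extent_id, e.tier) for e in extents])  # [(1, 'ssd'), (2, 'hdd'), (3, 'hdd')]
```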


SSD/Flash:

SSD is hitting its stride and will expand in 2012 in a big way, leveraging the aforementioned tiering capabilities to make SSD more cost-effective and easier to justify. With flash memory prices dropping just as disk costs spike, consumers will have an easier time justifying adding SSD to every array they purchase. Leveraging the speed of SSD, users can buy more high-capacity, slower disks to reduce the overall cost per TB without compromising performance.
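
The cost-per-TB argument is simple arithmetic. The prices and capacities below are hypothetical, chosen only to show how a small SSD tier plus a large pool of inexpensive disk yields a low blended cost:

```python
# Rough blended cost-per-TB calculation with assumed prices:
# a small SSD tier absorbs the hot I/O, so the bulk of capacity
# can sit on cheap, high-capacity disk.
ssd_tb, ssd_cost_per_tb = 2, 2000     # small, fast tier (assumed $/TB)
hdd_tb, hdd_cost_per_tb = 48, 100     # large, slow tier (assumed $/TB)

total_cost = ssd_tb * ssd_cost_per_tb + hdd_tb * hdd_cost_per_tb
total_tb = ssd_tb + hdd_tb
print(f"Blended cost: ${total_cost / total_tb:.2f}/TB over {total_tb} TB")
# Blended cost: $176.00/TB over 50 TB
```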


Convergence of Compute and Storage: 

Storage and compute have traditionally been kept at arm's length, but server virtualization is driving them together, and we expect to see new devices emerge as a result. Some call this embedded storage, but really these are converged devices that allow simple, quick configuration of virtual environments with a straightforward scale-up architecture. These devices will often leave the storage details to the virtualization layer.


New Tools:

As storage advances and integrates deeper into the rest of the infrastructure, new tools will be needed for monitoring and management. Why? First, the dynamic nature of the infrastructure makes performance issues increasingly difficult to diagnose. Without the proper combination of tools that show end-to-end performance across storage, network, compute and applications, fixing problems will be difficult. Second, as we rely on space-saving technologies like tiering and thin provisioning, we are in essence living closer and closer to the "edge" of running out of physical storage. A simple misconfiguration, end-user mistake or unexpected growth could push us over that edge, causing a catastrophic failure.
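
A minimal sketch of the kind of monitoring this argues for: watch a thin-provisioned pool and warn before overcommitment becomes an out-of-space failure. The pool numbers, thresholds and function name here are hypothetical, not tied to any vendor's tooling:

```python
# Warn when a thin-provisioned pool approaches physical exhaustion.
def check_pool(physical_tb, allocated_tb, subscribed_tb,
               warn_at=0.80, critical_at=0.90):
    """Report utilization and overcommit ratio for a thin pool (assumed thresholds)."""
    utilization = allocated_tb / physical_tb   # physical space actually consumed
    overcommit = subscribed_tb / physical_tb   # capacity promised vs. capacity owned
    if utilization >= critical_at:
        level = "CRITICAL"
    elif utilization >= warn_at:
        level = "WARNING"
    else:
        level = "OK"
    return f"{level}: {utilization:.0%} of physical used, {overcommit:.1f}x oversubscribed"

print(check_pool(physical_tb=100, allocated_tb=85, subscribed_tb=240))
# WARNING: 85% of physical used, 2.4x oversubscribed
```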


Cloud Storage becomes more mainstream:

Users have been experimenting with cloud storage much the way they first tried server virtualization – on non-critical or test environments. Cloud storage already has clear traction for backups and disaster recovery. This year, however, users will begin to expand this trend to include storage on demand, providing just-in-time elasticity for their oversubscribed, tiered infrastructure. Without this safety net, users won't risk maximizing the utilization of their own storage. Expect the storage vendors to push cloud connectivity as an add-on to that array sale.


So 2012 will be a very exciting year for storage, as automated storage optimization becomes standard across arrays, vendors continue to pursue deeper integration with virtualization, and cloud storage moves further into the mainstream.
