The storage admin doesn’t always have the most glamorous life. They are typically responsible for all of the storage arrays and infrastructure supporting a company’s most critical systems, from virtualization to applications and databases, and a lot of people get upset when there is a storage problem. There has also been a shift over the last few years: as the cost of storage has dropped, the primary storage concern has moved from disk space to storage I/O – essentially how fast data can be read from or written to the storage media. This becomes critically important for application performance, especially in the high-performance world of databases and I/O-intensive activities like virtualization.


The other key trend is the growing flexibility of virtualized application infrastructure. When most applications were hard-wired to physical servers, and the servers to dedicated storage, storage load was much more predictable and steady. Now, with things like vMotion and high-performance VMs capable of running large workloads like virtualized databases, I/O can change quickly.


That is where the Clark Kent analogy comes in. The storage admin doesn’t always get the most respect (maybe Rodney Dangerfield would be a better comparison). Even though they can be the critical link in application performance, they are often not consulted before changes occur in the virtual or application environment; things are often just changing too fast. So while they are responsible for an increasingly mission-critical resource, storage I/O load changes often come at them out of the blue, with little warning or interaction from other teams. Don’t worry; they still get the blame when the storage system causes an application performance problem.


But there is a potential phone booth on the horizon for storage admins that could facilitate a quick change into their alter ego. Many of the new storage technologies, especially solid-state disk (SSD), have the potential to turn the storage admin into Superman, saving the day for multiple other teams, including the virtualization, application, and database teams. SSD offers radically better storage I/O performance than traditional spinning disk but is very expensive by comparison. Because of that cost, blind investment in SSD can result in a large expense without a corresponding payback. As a result, we continually hear IT users asking, “How do I effectively use SSD in my environment?” Given how important that question is becoming, we worked with George Crump and the team at Storage Switzerland to provide some additional guidance about how and where to use SSD to get the most benefit from the investment. The first article, titled “How Do I Know My Virtual Environment Is Ready for SSD?”, was posted January 7. An additional article on using SSD to enhance database performance is coming soon.


As new storage technologies like SSD become more mainstream, it will be even more important that storage admins in all kinds of IT environments can gather the right data to determine how much SSD to use, and where, to optimize the investment. Hopefully the storage admin can then spend more time feeling like Superman and less like Clark Kent.
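For readers curious what “gathering the right data” can look like in practice, here is a minimal sketch for the Linux case: sampling the kernel’s /proc/diskstats counters twice and computing approximate per-device IOPS, which helps identify the hot devices that would benefit most from SSD. The field positions follow the kernel’s documented diskstats format; the device name and counter values in the example lines are purely illustrative, not real measurements.

```python
# Hedged sketch: estimate per-device IOPS from two Linux /proc/diskstats
# snapshots. Field 4 is reads completed and field 8 is writes completed
# (1-indexed, per the kernel's iostats documentation).

def parse_diskstats(text):
    """Return {device_name: (reads_completed, writes_completed)}."""
    stats = {}
    for line in text.strip().splitlines():
        fields = line.split()
        if len(fields) < 8:
            continue  # skip malformed lines
        name = fields[2]
        reads = int(fields[3])    # reads completed since boot
        writes = int(fields[7])   # writes completed since boot
        stats[name] = (reads, writes)
    return stats

def iops(before, after, interval_s):
    """Approximate combined read+write IOPS per device between snapshots."""
    result = {}
    for dev, (r1, w1) in after.items():
        r0, w0 = before.get(dev, (r1, w1))
        result[dev] = ((r1 - r0) + (w1 - w0)) / interval_s
    return result

# Illustrative snapshot lines taken 10 seconds apart (values made up):
t0 = "8 0 sda 1000 50 80000 400 2000 30 160000 900 0 500 1300"
t1 = "8 0 sda 1600 50 90000 450 2600 30 180000 950 0 520 1400"

rates = iops(parse_diskstats(t0), parse_diskstats(t1), interval_s=10)
print(rates["sda"])  # (600 reads + 600 writes) / 10 s = 120.0
```

In a real deployment you would read /proc/diskstats itself at each interval (or use a tool like iostat) and track latency as well as IOPS, since high-latency hot spots are the strongest candidates for SSD placement.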