Without a doubt, the biggest trend in IT storage over the past year, and moving forward, is Software Defined Storage (SDS). It’s more than just a buzzword in the industry, but, as I’ve posted before, a true definition has yet to be agreed upon. I’ve written previously about this very topic. Here’s what I wrote.


Ultimately, I talked about different brands with different approaches and definitions, so at this point I’m not going to rehash the details. At a high level, though, the value as I see it has to do with divorcing the management plane from the hardware layer. In my view, leveraging the horsepower of commodity hardware in reference builds, plus a management layer optimized for that hardware build, lets the users/IT organization drive costs down and, potentially, customize the hardware choices for the use case. Typically your choices revolve around read/write IOPS, disk redundancy, tiering, compression, deduplication, number of paths to disk, failover, and of course, with the use of x86 architecture, the amount of RAM and the speed of the processors in the servers. Compared against traditional monolithic storage platforms, that makes for a compelling case.
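To make that list of tunable characteristics a bit more concrete, here is a minimal sketch of how an SDS use-case profile might be captured. All names and values below are purely illustrative assumptions on my part, not any vendor’s actual configuration schema.

```python
from dataclasses import dataclass

# Hypothetical sketch: the knobs a commodity-hardware SDS build lets you
# tune, captured as a simple profile. Names and values are illustrative only.
@dataclass
class SDSProfile:
    target_read_iops: int
    target_write_iops: int
    disk_redundancy: str      # e.g. "RAID6" or "3-way replica"
    tiering: bool
    compression: bool
    deduplication: bool
    paths_to_disk: int        # multipath count, for failover
    ram_gb: int               # x86 server RAM
    cpu_ghz: float            # processor speed

# A write-heavy use case might trade deduplication away for lower latency:
vdi_profile = SDSProfile(
    target_read_iops=80_000,
    target_write_iops=40_000,
    disk_redundancy="3-way replica",
    tiering=True,
    compression=True,
    deduplication=False,   # skipped here to keep write latency down
    paths_to_disk=2,
    ram_gb=256,
    cpu_ghz=2.6,
)

print(vdi_profile.disk_redundancy)  # -> 3-way replica
```

The point isn’t the specific numbers; it’s that each of these choices is made per use case, rather than dictated by a monolithic array.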


However, there are other approaches. I’ve had conversations with customers who only want to buy a “premixed/premeasured” solution. And while that doesn’t rule out SDS models such as the one above, it does change the game a bit. Toward that end, many storage companies will align with a specific server and disk model. They’ll build architectures very tightly bound to a hardware stack, even though they’re relying on commodity hardware, and let customers purchase the storage in much the same way as more traditional models. They often take it a step further and put a custom bezel on the front of the device, so it may be Dell behind, but it’s “Someone’s Storage” company in front. After all, the magic, whatever makes the solution unique, all takes place at the software layer… so why not?


Another category that I feel is truly trending in storage is a recent one in backup, dubbed CDM, or Copy Data Management. Typically, these are smaller bricks of storage that act as gateway-type devices, holding onto some backup data but also pointing to the real target, as defined by the lifecycle policy on the data. There are a number of players here; I am thinking specifically of Rubrik, Cohesity, Actifio, and others. As these devices are built on storage bricks but made functional purely by superior software, I would also consider them valid examples of Software Defined Storage.


Backup and Disaster Recovery are key pain points in the management of an infrastructure. Traditional methods consisted of some level of scripted software moving your backup data onto a tape mechanism (maybe robotic), which quite often required manual filing, followed by the rotation of tapes off to places like Iron Mountain. Restores have been tricky: time spent awaiting the restore, and a reasonably consistent risk of corrupted files on those tapes. Tools like these, along with other styles of backup including cloud service providers and even public cloud environments, have made things far more viable. These CDM solutions take so much of the legwork out of the process, and quite possibly enable near-zero Recovery Point and Recovery Time Objectives, regardless of where the current file is located, and by that I mean the CDM device, local storage, or even some cloud vendor’s storage. It shouldn’t matter, as long as the metadata points to that file.
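That last idea, that location shouldn’t matter as long as the metadata points to the file, can be sketched as a tiny metadata catalog. This is a hypothetical illustration of the concept only; the class and field names are my own assumptions, not any CDM vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class CopyRecord:
    file_id: str
    location: str   # "cdm_brick", "local_storage", or "cloud"
    path: str       # where the current copy actually lives

class MetadataCatalog:
    """Toy catalog: maps a file to wherever its current copy resides."""

    def __init__(self):
        self._records = {}

    def register(self, record: CopyRecord) -> None:
        # A lifecycle policy would call this as data tiers out over time.
        self._records[record.file_id] = record

    def resolve(self, file_id: str) -> CopyRecord:
        # The restore path never needs to know which tier holds the file.
        return self._records[file_id]

catalog = MetadataCatalog()
catalog.register(CopyRecord("vm-042.vmdk", "cloud", "s3://archive/vm-042.vmdk"))
print(catalog.resolve("vm-042.vmdk").location)  # -> cloud
```

The restore workflow asks the catalog, not the storage tier, which is what lets the Recovery Time Objective stay flat whether the copy sits on the brick, on local disk, or in a cloud bucket.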


I have been very interested in changes in this space for quite some time, as these are key process changes pushing upward into the data center. I’d be curious to hear your responses and ideas as well.