This thought-provoking piece is about the management of storage in the datacenter. IT managers and storage admins have long had decisions to make around capacity planning and performance, but now that we are being encouraged to think in terms of the software-defined datacenter, why should we still be thinking about storage?
Virtualisation is teaching us to forget about the building blocks and focus on the apps, but if you still manage on-premise storage then you have a vested interest in the upkeep of those building blocks. In essence, we are being encouraged to think outside the box: stop worrying about where IOPS and latency are required and let dynamic automation models handle it.
Storage comes in all shapes and sizes, from DAS and NAS all the way up to enterprise flash systems, but how are you meant to distinguish where one type of system is favoured over another? Sometimes you don’t know the application requirements until everything is in production. How do you then go back and adjust your model?
How do staff continue to forecast capacity growth with cloud and virtualised deployments, when the nature of these elastic deployments is to grow and shrink with business requirements? For those of you who have to manage dissimilar hardware, what tools are you using?
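To make the forecasting question concrete, here is a minimal sketch of the kind of back-of-the-envelope projection a capacity planner might start from. The function name and all the figures (100 TB, 5% monthly growth) are hypothetical examples, not recommendations, and a single fixed growth rate is exactly the assumption that elastic deployments break — which is the point of the question above.

```python
# Minimal sketch: projecting storage capacity under a steady monthly
# growth rate. All figures are hypothetical; real elastic/cloud workloads
# grow and shrink with demand, so a fixed rate is only a rough planning aid.

def project_capacity(current_tb: float, monthly_growth: float, months: int) -> float:
    """Compound-growth projection: capacity after `months` months
    at a constant `monthly_growth` rate (e.g. 0.05 for 5%)."""
    return current_tb * (1 + monthly_growth) ** months

# Example: 100 TB today, growing 5% per month, looking 12 months out.
forecast = project_capacity(100.0, 0.05, 12)
print(f"Projected capacity in 12 months: {forecast:.1f} TB")  # ~179.6 TB
```

A model this simple is where the trouble starts: it has no way to represent workloads that shrink, burst, or move between tiers, which is why tooling for dissimilar hardware matters.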
I’d like to hear your thoughts, and I will reply to comments to see what interesting discussions arise.