An all-flash storage array (AFA) provides two major benefits for the data center. First, an AFA delivers capacity efficiency with consistent performance and a reduced storage footprint. Second, an AFA usually includes a software overlay that abstracts storage hardware functions into software. Think software-defined storage. These features typically include deduplication (the elimination of duplicate copies of data), data compression, and thin provisioning.
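To make deduplication concrete, here is a minimal Python sketch of the idea: hash each data block and store only the unique ones. This is illustrative only; real arrays dedupe fixed-size blocks inline in the data path with far more sophisticated indexing and collision handling.

```python
import hashlib

def dedupe(blocks):
    """Toy block-level dedup: keep one copy per unique block hash."""
    store = {}   # hash -> block (the unique data actually kept on media)
    index = []   # logical view: one hash entry per incoming block
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store the block only once
        index.append(digest)
    return store, index

# Ten logical blocks but only two distinct patterns: a 5:1 reduction.
blocks = [b"A" * 4096, b"B" * 4096] * 5
store, index = dedupe(blocks)
print(len(index), len(store))  # 10 logical blocks, 2 stored
```

The same shape of bookkeeping underlies thin provisioning as well: the array presents the full logical capacity while only consuming physical space for data actually written.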
These two qualities combine to form a dynamic duo of awesomeness for infrastructure teams looking to optimize their applications and maximize the utility of their storage arrays. Because hosts spend far less time waiting on storage, all-flash storage makes better use of host CPU: the number of IOs per second (IOPS) each host can drive goes up, which in turn reduces the number of host servers needed to service the workload.
So all that glitters is gold, right? Not so fast. The figure below shows how AFA affects the data ecosystem. In the past, traditional storage performance was measured in terms of the number of IOPS, and the most influential variable was the number of spindles. With AFA, spindle count doesn't matter, so performance centers on average latency, and that latency is driven by the number of applications piled onto the AFA. The bottleneck therefore moves from spindle count to hitting the storage capacity limit, as well as running hot in the other subsystems across the application stack.
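A back-of-the-envelope way to see why latency becomes the headline number is Little's Law, which relates throughput, concurrency, and latency. The queue depth of 32 and the latency figures below are hypothetical values chosen purely for illustration:

```python
def iops(outstanding_ios, avg_latency_s):
    """Little's Law: throughput = concurrency / average latency."""
    return outstanding_ios / avg_latency_s

# 32 outstanding IOs at a 0.5 ms average latency:
print(iops(32, 0.0005))  # 64000.0 IOPS

# Pile more applications onto the array and average latency
# climbs to 2 ms; the same queue depth now delivers far less:
print(iops(32, 0.002))   # 16000.0 IOPS
```

The spindle count never appears in the formula: once the media is flash, what erodes performance is the latency creep that comes from stacking more workloads onto the same array.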
Are you considering AFA in your data center environment? Have you already implemented AFA? What issues have you run into, if any? Let me know in the comments below.
And don't forget to join SolarWinds and Pure Storage as we examine AFA beyond the IOPS to highlight performance essentials and uptime during a live webcast on June 8th at 2 p.m. ET.