What does this CI concept mean? A converged architecture is, simply put, a purpose-built system combining server, storage, and network, designed to ease the build-out of a centralized data center environment. In other words, servers and storage are combined with networking so that, with only minor configuration after plugging in the equipment, one can manage the environment as a whole.

In the early days of this concept, prior to the creation of VCE, I worked on a team at EMC called the vSpecialists. Our task was to seek out sales opportunities where an architecture like this would be viable and qualify our prospects for what was called the vBlock. These included Cisco switches (the just-released Nexus line), Cisco servers (the also freshly released UCS blades), and EMC storage. These vBlocks were very prescriptive in their sizing and dedicated to housing virtualized environments; VMware was critical to the entire infrastructure. For these systems to be validated by the federation, all workloads on them had to be virtualized. The key, and the reason this was more significant than what customers might already have had in their environments, was the management layer: a piece of software called Ionix that pulled together UCS Manager, NX-OS at the switch layer, and storage management. This was where the magic occurred.

Then came a number of competitors. NetApp released the FlexPod in response. The FlexPod was just that: more flexible. Workloads were not required to be exclusively virtual; storage in this case was NetApp; and, importantly, customers could configure these systems less rigidly around their sizing requirements and build them up further as needed.

There were other companies, most notably Hewlett-Packard and IBM, that built alternative solutions, but the vBlock and FlexPod were really the main players.

After a bit of time, a new category was created: hyperconvergence. The early players in this field were Nutanix and SimpliVity, both of which built much smaller architectures, quite reasonably called hyperconverged. They were originally seen as entry points for organizations looking to virtualize from scratch, or as point solutions for circumstantial projects like VDI. They have since grown, in both technology and function, to the point where companies today base their entire virtual environments on them. While Nutanix is leveraging new OS models, building management layers onto KVM, and developing replication strategies for DR, SimpliVity has other compelling pieces, such as its storage deduplication model and replication, that make a strong case for pursuing it.

There are also many new players in the hyperconverged marketplace, making it the fastest-growing segment of the market today. Hybrid cloud models are making these approaches very appealing to IT managers setting direction for the future of their data centers. Be sure to look for new players in the game like Pivot3, Scale Computing, and IdealStor, as well as bigger companies like Hewlett-Packard, and the EVO approach from VMware, with EVO:RAIL and EVO:RACK getting their official launch this week at VMworld.