I’ve written before about the promise of hyper-converged infrastructure when it comes to bringing new opportunities to the data center. While hyper-convergence may not be for everyone and is only one part of a broader IT strategy, it is certainly an intriguing option that can bring a bit of calm to the data center upgrade and expansion process.


Some constraints

However, until relatively recently, emerging hyper-converged products suffered from the “any color you like, as long as it’s black” syndrome. In other words, although these products offered new opportunities for data center simplicity and improved economics, they took a one-size-fits-all approach in some critical ways. First, until this summer, each vendor offered only a single hardware configuration. While this satisfied the “simplicity” part of the convergence equation, it failed to address flexibility and forced customers to scale all resources in lockstep, which ate away at some of the economic advantages these solutions presented. Scaling still happened in small bites, which is an economical way to scale, but an imbalanced workload could force customers to buy resources they didn’t really need. That said, this wasn’t necessarily a huge negative, particularly since the initial product releases were very much “v1” devices and this uniformity was planned.

In addition, with the hypervisor acting as the glue that holds together the data center, many convergence products focus on vSphere, and with good reason given vSphere’s dominance in the market. However, as other hypervisors grow in capability, there will come a need to enable these hypervisors in any data center, converged or not. The inability to choose a hypervisor in some instances inhibits customer choice in the data center, thus locking out a broad swath of customers that may need or want a different hypervisor solution.

Two players emerge

The two primary players in what has become a hot space are Nutanix and SimpliVity, and this summer both companies made announcements about the different ways they are tackling the kinds of challenges that could limit customer adoption of hyper-converged infrastructure. With these announcements and the associated product releases, both companies have expanded their offerings to meet more customer needs and provide more scale flexibility, further improving the economics that drive these converged solutions.

More hypervisors, more choice, more flexibility

In August 2013, Nutanix announced that it had added technology preview support for Hyper-V 2012 to its product platform, with full production support expected by the end of 2013. Nutanix already supports both vSphere and KVM as hypervisor options on its Virtual Computing Platform.

Hypervisor agnosticism is an important component of a software-defined vision, which is at the heart of these devices. They leverage commodity hardware by layering virtualization over the top of it, along with tools that enable scale in each of the resource areas – compute, networking, and storage. The hypervisor itself is quickly becoming another commodity, with different hypervisors suited to different uses. As such, the ability to choose the most appropriate hypervisor for the situation is an important element of customer choice.

Expanded hardware offerings extend customer reach

Both Nutanix and SimpliVity have also begun to create differentiated hardware offerings that are intended to combat the lockstep resource expansion issue while still taking a building block approach to infrastructure expansion. The differentiated units are simply added to an existing cluster just like any other node.

These differentiated offerings will help bring hyperconvergence to more organizations than one-size-fits-all offerings ever could. The ability to reasonably tailor these solutions to unique needs is critical to many organizations; otherwise, the solutions would simply be too constrained to be useful beyond a niche.
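To illustrate the economics argument above, here is a minimal sketch of why differentiated node types matter. All node specifications and the workload figures below are invented for illustration only and do not reflect actual Nutanix or SimpliVity models; the point is simply that a storage-skewed target forces a uniform cluster to over-buy compute, while a mixed cluster does not.

```python
# Hypothetical illustration: differentiated node types let a cluster meet an
# imbalanced resource target (lots of storage, modest compute) with fewer
# wasted resources than scaling a single uniform node type in lockstep.
# All specs are invented for illustration, not real product figures.
from math import ceil

# (ram_gb, hdd_tb) per node for two hypothetical models
COMPUTE_NODE = (256, 4)    # RAM-heavy, storage-light
STORAGE_NODE = (128, 16)   # storage-heavy

def uniform_cluster(target_ram_gb, target_hdd_tb, node):
    """Nodes needed when every node is identical (lockstep scaling)."""
    ram, hdd = node
    return max(ceil(target_ram_gb / ram), ceil(target_hdd_tb / hdd))

def mixed_cluster(target_ram_gb, target_hdd_tb):
    """Greedy sketch: cover storage with storage nodes, top up RAM after."""
    storage_nodes = ceil(target_hdd_tb / STORAGE_NODE[1])
    ram_covered = storage_nodes * STORAGE_NODE[0]
    compute_nodes = max(0, ceil((target_ram_gb - ram_covered) / COMPUTE_NODE[0]))
    return storage_nodes, compute_nodes

# Storage-skewed workload: 1 TB of RAM, 96 TB of disk
print(uniform_cluster(1024, 96, COMPUTE_NODE))  # 24 nodes, driven by disk alone
print(mixed_cluster(1024, 96))                   # (6, 1): far fewer nodes
```

With only compute-heavy nodes, the disk target dictates 24 nodes and most of the RAM purchased sits idle; mixing in storage-heavy nodes meets the same target with seven. This is the imbalance problem the new model lines are designed to avoid.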

Nutanix

Nutanix expanded and differentiated its line through the introduction of new models. The list below briefly describes each model.

  • NX-1050. Each node contains 64 or 128 GB of RAM, 400 GB of SSD capacity and 4 TB of hard disk capacity.
  • NX-3050. Each node contains 128 or 256 GB of RAM, 2 x 400 GB SSDs, and 4 TB of hard disk capacity.
  • NX-3051. The same as the NX-3050, but with dual 800 GB SSDs.
  • NX-6050. Each node contains 128 or 256 GB of RAM, 2 x 400 GB SSDs, and 16 TB of hard disk capacity.
  • NX-6070. The same as the NX-6050, but with dual 800 GB SSDs and faster processors.

SimpliVity

Likewise, SimpliVity has released multiple editions of its hyperconverged infrastructure appliance, the OmniCube. The models are as follows:

  • CN-2000. 4 x 100 GB SSDs and 8 TB of hard disk space, 6 to 8 processor cores and 48 to 128 GB of RAM.
  • CN-3000. 4 x 200 GB (or 4 x 400GB or 4 x 800 GB) SSD, 24 TB of hard disk space, 12 to 24 processor cores, 128 GB to 768 GB of RAM.
  • CN-5000. 4 x 400 GB (or 4 x 800 GB) SSD, 18 TB of high performance hard disk space, 24 processor cores, 384 GB to 768 GB of RAM.


Action Item: For CIOs who may have viewed these configurations as too inflexible for critical infrastructure, some variation in the market was essential to long-term success. With such differentiation, organizations can more precisely target their resource needs while retaining the promised benefits of hyperconvergence. While these solutions may not suit every use case, the new models expand the addressable market.

Footnotes: Also see How Convergence Moves From Tactical Savings to Strategic Foundation and much more on Converged Infrastructure under Wikibon’s Software-led infrastructure page.