Looking around today’s storage market, scale out storage has become a white-hot area of development, with vendors working hard to add this capability to their products so customers can grow their data to ever-larger extremes. The direction is almost a requirement: even companies in the SMB space need serious capacity and performance as they take on new and different kinds of workloads, each requiring lots of space and I/O. In this changing era, the single non-expandable array that is simply an island of storage isn’t really a sustainable business model.
Except when it’s just fine, that is.
Consider what is needed in a storage device. It’s a place for organizations to store their files, virtual machines, and whatever else is deemed necessary. It’s also a place where SQL databases will live, Exchange message stores will reside, and SharePoint lists will make their home. All of these workloads have varying requirements in terms of both capacity and performance.
In a scale up environment, expansion is not linear, and the expansion opportunity is generally quite limited. Further, expansion takes place in only one dimension: in scale up, that dimension is usually capacity. As the organization begins to run out of storage space on an array, an expansion shelf is added that connects to that array and adds additional capacity. However, that capacity is still controlled by the processors in the original array and connects to the network via the network adapters in the original array.
When an organization just needs more capacity, scaling up is a perfectly viable option for expansion. However, administrators need to make sure that the head unit has sufficient CPU to handle the additional workload and that there is enough network capacity to handle additional data traffic.
For vendors, scaling up is pretty easy, as most systems are built with enough headroom to manage the extra load. It can also be less expensive for the customer, since it just requires buying a shelf of disks and adding it to the existing processing unit. The expansion shelves don’t have their own compute and network capabilities, and there isn’t much in the way of specialized software necessary to make the solution work.
So, if workloads can operate using the head unit’s CPU and network, and only additional capacity is needed, scale up is a good option. Don’t forget: Scale up will also add some additional IOPS, thanks to the addition of new disks.
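The scale up tradeoff described above can be sketched as a toy model: each shelf adds capacity and raw disk IOPS, but delivered IOPS are capped by the head unit’s controllers, which never change. All names and numbers here are illustrative assumptions, not drawn from any specific product.

```python
# Toy model of scale-up expansion: shelves add capacity and raw disk
# IOPS, but CPU and network stay fixed in the head unit. Numbers are
# purely illustrative.
from dataclasses import dataclass


@dataclass
class HeadUnit:
    cpu_iops_ceiling: int = 100_000  # max IOPS the controllers can process
    network_gbps: int = 10           # fixed network capacity
    capacity_tb: int = 50
    disk_iops: int = 20_000

    def add_shelf(self, capacity_tb: int = 50, disk_iops: int = 20_000):
        """Scale up: more capacity and disk IOPS, same CPU and network."""
        self.capacity_tb += capacity_tb
        self.disk_iops += disk_iops

    @property
    def usable_iops(self) -> int:
        # Delivered IOPS are throttled by the head unit's controllers.
        return min(self.disk_iops, self.cpu_iops_ceiling)


array = HeadUnit()
for _ in range(6):          # add six expansion shelves
    array.add_shelf()

print(array.capacity_tb)    # 350 TB: capacity scaled linearly
print(array.disk_iops)      # 140000 raw disk IOPS on the shelves
print(array.usable_iops)    # 100000: capped by the head unit
```

The point of the sketch is the last line: past a certain number of shelves, the new disks add capacity but no usable performance, which is exactly when scale up stops being the right answer.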
When it comes to sheer expansion capability, scale up only goes so far, and it has the potential to exhaust CPU and networking resources, since all shelves share those resources from the head unit. Scaling out, on the other hand, enables organizations to achieve massive scale, petabytes at a time, with the opportunity to support workloads with massive I/O loads.
But doing scale out right requires extra engineering. Vendors must create the constructs necessary to enable all of the nodes to act as a cohesive, singular whole. With scale up, an organization could deploy a lot of head units, but each would act independently. With scale out, everything works as a single unit, at least to a point. The individual nodes don’t have to be tightly coupled, but there are mechanisms that loosely couple them to ensure coherence between nodes. This is especially important when data is stored across multiple nodes.
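By contrast with scale up, a scale out cluster grows every resource with each node it adds, and the coordination layer presents the nodes as one pool. A minimal sketch, again with illustrative names and numbers of my own choosing (real systems add coordination overhead this toy ignores):

```python
# Toy model of scale-out expansion: each node brings its own CPU,
# network, and capacity, and the cluster presents them as one pool.
# Illustrative only; real clusters pay a coordination tax.
from dataclasses import dataclass


@dataclass
class Node:
    capacity_tb: int = 50
    iops: int = 100_000
    network_gbps: int = 10


class Cluster:
    def __init__(self):
        self.nodes: list[Node] = []

    def add_node(self):
        """Scale out: every resource grows with the node count."""
        self.nodes.append(Node())

    def totals(self) -> dict:
        return {
            "capacity_tb": sum(n.capacity_tb for n in self.nodes),
            "iops": sum(n.iops for n in self.nodes),
            "network_gbps": sum(n.network_gbps for n in self.nodes),
        }


cluster = Cluster()
for _ in range(4):
    cluster.add_node()

print(cluster.totals())
# {'capacity_tb': 200, 'iops': 400000, 'network_gbps': 40}
```

Here no single head unit caps the cluster; the engineering cost moves instead into the loose-coupling mechanisms that keep the nodes coherent.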
Let the customer choose
This week at Storage Field Day 4 in San Jose, Nimble Storage demonstrated its Scale to Fit architecture, which allows customers to scale in whatever dimension they need. This is a good move by Nimble, which can now boast limited scale out capability, up to four nodes in a cluster, while each individual node can scale up to four disk shelves. So, a Nimble storage system can now scale to 16 trays of disks in the environment. That certainly expands the market opportunity for Nimble. And, I’m a big believer in solutions that provide the most flexibility for the customer. This solution is certainly a positive step.
But, Nimble isn’t the only vendor doing this. Although Nutanix is absolutely a scale-out system, its latest offerings are differentiated by different resource mixes (one is heavy on storage while another is heavy on compute), letting customers scale their environments to fit individual needs. Fully linear scalability is nice in theory, but the real world rarely operates that cleanly; most often, one resource runs out before another. One of the major benefits of some of today’s startups is cost efficiency, and by letting customers choose the resource mix, they eliminate waste.
Action Item: For CIOs, look for solutions that provide you the most flexibility even after the initial sale. You will eventually hit a wall and need to expand the environment. Make sure that you don’t choose something that limits you in ways that are difficult to overcome or that adds new complexity to the environment.