
Storage Key Issues for 2017

 

PREMISE: Persistent data storage architectures must evolve faster to keep up with the accelerating pace of change that businesses require.

These are times of tumultuous change in IT as the rate of business innovation accelerates. Solid-state persistent storage, together with higher-speed, lower-latency interconnects, is enabling new application architectures and deployment models. Software-defined storage (SDS) promises to provide data services accessible via APIs and provisioned from pools of resources. Public cloud storage is maturing, but methods for managing and orchestrating hybrid clouds are still immature.

As such, in 2017 we will focus on the following key issues for storage/data services:

  • Where will your data persist and why?
  • Which storage software stacks will thrive?
  • How can cloud storage best be used in the enterprise?

 

Where will your data persist and why?

The processor is no longer the main asset in a technology-based business system; data is. Many emerging architectural best practices reflect that, including:

Putting data closer to the processor – Best practices in the use of magnetic mechanical persistent storage like hard disks and tape are well understood, and the industry is mature. In the solid-state space, flash-based SSDs are delivering speeds and economies we could only dream of a few years ago. Nonetheless, this isn’t fast enough for some users and applications. The NVMe (non-volatile memory express) standard is being quickly adopted, and we expect the same for new transports, including RDMA over Converged Ethernet (RoCE) and NVMe over Fabrics (NVMe-oF).
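
To see why the transport matters, consider a rough latency budget. The sketch below is a back-of-the-envelope illustration; all figures are round-number assumptions, not benchmarks:

    # Rough latency budgets in microseconds -- round-number assumptions,
    # not benchmarks -- showing where time goes on the way to flash.
    FLASH_READ = 100                                  # typical NAND read
    stacks = {
        "iSCSI over TCP":    FLASH_READ + 50 + 100,   # legacy stack + TCP/IP
        "SAS/SATA, local":   FLASH_READ + 50,         # heavyweight protocol path
        "NVMe, local PCIe":  FLASH_READ + 5,          # streamlined queue pairs
        "NVMe-oF over RoCE": FLASH_READ + 5 + 10,     # ~10 us RDMA network hop
    }
    for name, us in stacks.items():
        print(f"{name:18s} ~{us:3d} us end-to-end")

The point of the arithmetic: once the media itself is fast, the protocol and network path dominate, which is exactly what NVMe and NVMe-oF attack.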

The best I/O is no I/O – We now can see a path to persistent storage that is accessed with memory (DRAM) semantics rather than block device semantics. Indeed, systems are emerging with sufficient performance to permit, for example, both transactional and analytical applications to run in the same system at the same time, sharing only one copy of the data! This is a game changer, but getting there will require substantial intellectual and financial investments.
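
A minimal sketch of the difference, assuming a file on a DAX-mounted persistent-memory device (the path and size are illustrative; production code would also use MAP_SYNC and explicit cache flushes for durability):

    import mmap, os

    # Illustrative path on a DAX-mounted persistent-memory device;
    # the path and size are assumptions for the sketch.
    PATH, SIZE = "/mnt/pmem/data.bin", 4096

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, SIZE)
    buf = mmap.mmap(fd, SIZE)   # map persistence into the address space

    buf[0:5] = b"hello"         # a store -- not a write() system call
    print(buf[0:5])             # a load  -- not a read() system call
    buf.flush()                 # request persistence of the dirty range

    buf.close()
    os.close(fd)

Once the mapping exists, the application touches persistent data with ordinary loads and stores; there is no per-access I/O path at all.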

Even closer to the processor, in-memory (DRAM) databases are either small or rare today. On the horizon are new, faster and larger storage-class memory technologies such as Intel’s Optane (3D XPoint) or HPE’s Persistent Memory. However, without the economies of scale enjoyed by flash memory, adoption will be limited. Adding another tier to the storage stack is also problematic.

Which storage software stacks will thrive?

Enterprises likely will have multiple storage stacks, a function of past hardware choices and emerging application data subtleties. However, the storage stacks that thrive will be those capable of evolving to, first, serve changing data needs and, second, incorporate rapid advances in infrastructure technology. All storage stacks that thrive, therefore, will be software-defined.

There are many definitions for software-defined storage (SDS). Most include an approach in which the software that controls storage-related tasks is decoupled (abstracted) from the physical storage hardware. Most also place an emphasis on storage/data services accessed via APIs. A common goal is to provide administrators with flexible policy-based management capabilities drawing on a pool of resources. The promise of SDS is that it will enable enterprise IT to provide a more on-demand and agile experience for business users who rely on IT infrastructure. However, unbundling storage software and hardware requires someone (user or vendor) to spend a lot of time and money to create new bundles. Furthermore, they will have to repeat this process regularly to keep pace with supply-chain changes. In some cases of exquisite irony, companies have needed to hire hardware groups to implement their SDS.
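
As an illustration of the API-driven, policy-based model, the hypothetical sketch below provisions a volume from a pool by naming a policy rather than hardware. The endpoint, payload fields and policy name are invented for illustration; real SDS products expose similar but differently shaped APIs:

    import requests

    # Hypothetical SDS control-plane endpoint and policy name -- both
    # invented for this sketch. The point is the shape of the request:
    # the caller asks for capacity and a policy; the pool, not an
    # administrator, picks the hardware.
    SDS_API = "https://sds.example.local/api/v1"

    volume = {
        "name": "orders-db",
        "size_gb": 500,
        "policy": "gold",   # e.g., 3 replicas, snapshots, sub-ms latency
    }
    resp = requests.post(f"{SDS_API}/volumes", json=volume, timeout=30)
    resp.raise_for_status()
    print("provisioned volume:", resp.json()["id"])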

Moreover, there are many storage stacks beyond SDS. Some live in the application: Microsoft Exchange, for example, uses DAS/JBODs and Database Availability Groups. At the other end of the spectrum, as the industry moves away from mechanical disks and data moves closer to the processor, block storage will increasingly yield to object and file storage, as well as to techniques yet to be defined. Persistent storage evolves into persistent data services.
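
The contrast in access semantics is easy to see in code. A brief sketch, assuming a raw device at /dev/sdb and an S3 bucket named app-data (both placeholders): block storage addresses fixed-size sectors by offset, while object storage addresses variable-size blobs by key, with metadata, over an API.

    import os
    import boto3

    # Block semantics: address fixed-size sectors by byte offset.
    # /dev/sdb is an illustrative path (Unix-only, needs privileges).
    fd = os.open("/dev/sdb", os.O_RDONLY)
    sector = os.pread(fd, 4096, 4096 * 1000)   # 4 KiB at sector 1000
    os.close(fd)

    # Object semantics: address variable-size blobs by key, with
    # metadata, over an HTTP API. Bucket and key names are assumptions.
    s3 = boto3.client("s3")
    s3.put_object(Bucket="app-data", Key="orders/2017-01.json",
                  Body=b"{}", Metadata={"source": "erp"})
    blob = s3.get_object(Bucket="app-data",
                         Key="orders/2017-01.json")["Body"].read()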

Storage stacks were traditionally built by vendors, but today open stacks are emerging. OpenStack, with its Swift (object) and Cinder (block) APIs for storage interaction, has been applied to open-source projects as well as to vendor products, and open-source systems such as Ceph often serve as the backend. Companies like Google, Facebook and Amazon built their own stacks on hyper-converged infrastructure to achieve hyper-scale.
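
For example, a few lines of python-swiftclient exercise OpenStack’s Swift object API; the auth URL, user and key below are placeholders for a real deployment:

    from swiftclient.client import Connection

    # Illustrative OpenStack Swift calls via python-swiftclient; the
    # auth URL and credentials are placeholders.
    conn = Connection(authurl="https://swift.example:8080/auth/v1.0",
                      user="demo", key="secret")
    conn.put_container("backups")
    with open("db.dump", "rb") as f:
        conn.put_object("backups", "db.dump", contents=f)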

How can cloud storage best be used in the enterprise?

Private Cloud Mandate – The role of infrastructure is to be the platform for applications. As noted in Wikibon’s True Private Cloud premise, public cloud is the bar that IT should measure itself against:

“Wikibon believes that public cloud represents a quantum level of disruption to the manner in which IT Services are delivered to enterprises and users. As such, in order for in-house IT organizations to remain valuable and relevant to the business functions and processes they support, enterprises should consider delivering private cloud in the same light – that it should emulate the pricing, agility, low operational staffing and breadth of services of Public Cloud to the maximum degree possible.”

 

Getting there is not that easy! Success will require common sense in placing workloads and the expectation that most enterprises will feature hybrid cloud implementations.

Use common sense for public clouds – Users are under a lot of pressure to use public cloud data and compute services. There’s been lots of hype for quite a while. And recent price wars by the big cloud services providers signal a degree of maturity and opportunity. However, we still haven’t changed the speed of light – something often ignored by cloud services sales forces. Data movement isn’t free. Physics insists that it costs time and fidelity, and WAN providers insist on getting paid. As digital technology moves closer to mechanical or knowledge work, it will drag system requirements with it, and physics and WAN providers will have their say.
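
The arithmetic is sobering. A back-of-the-envelope sketch, with round-number assumptions for distance, link speed and dataset size:

    # Back-of-the-envelope physics and WAN cost of moving data.
    # All figures are illustrative assumptions.
    LIGHT_IN_FIBER = 200_000                 # km/s, ~2/3 of c in vacuum
    distance_km = 4_000                      # e.g., a cross-country hop
    rtt_ms = 2 * distance_km / LIGHT_IN_FIBER * 1_000
    print(f"Best-case round trip: {rtt_ms:.0f} ms")   # ~40 ms, before queuing

    dataset_tb, wan_gbps = 100, 1
    hours = dataset_tb * 8_000 / wan_gbps / 3_600     # TB -> Gb -> hours
    print(f"Moving {dataset_tb} TB at {wan_gbps} Gb/s: ~{hours:.0f} hours")

At these assumed figures, the best-case round trip is about 40 ms and a 100 TB bulk transfer takes roughly nine days, before egress fees are counted.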

Hybrid Clouds – While vendors are more than happy to lock users into their prepackaged “solutions”, the real point of hybrid clouds is secure freedom. Although open stacks have received much attention, the real standards are still dominated by the big public cloud service providers. As such, we expect Amazon Web Services (AWS) will announce a software storage system that runs on enterprise-provided servers, on-premises and/or at a colocation facility, with a full set of APIs to manage the storage from AWS. This will mark the beginning of AWS hybrid systems running both on-premises and in the AWS cloud. It will be particularly important for hybrid Industrial IoT systems, which will need to process data locally but manage data remotely. We expect the other big vendors to follow suit. Not a bad thing, but still lock-in.
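
Such a service is our prediction, not a shipping product, but the programming model it implies already exists: boto3’s standard endpoint_url parameter can point the same S3 API at an S3-compatible on-premises target (the URL and bucket names below are placeholders):

    import boto3

    # One code path, two locations: endpoint_url is a standard boto3
    # parameter; pointing it at an S3-compatible on-premises endpoint
    # (placeholder URL) gives the API consistency a hybrid AWS storage
    # service would formalize.
    cloud = boto3.client("s3")
    onprem = boto3.client("s3", endpoint_url="https://s3.colo.example:9000")

    for client, bucket in ((onprem, "iot-raw"), (cloud, "iot-curated")):
        client.put_object(Bucket=bucket, Key="sensor/0001", Body=b"...")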

Action Items

 

  • Strategic planners must embrace the pace of change in storage.
  • Businesses should recognize the new application opportunities that emerge as digital data assets grow and gain fluidity and shareability.
  • SDS sounds like a good idea, but it’s a long road. Plan on it.
  • Procurement officers must realize that this accelerated pace will be justified by the faster business processes it enables, but at a price.

 

 
