
Pure Storage, NVIDIA, and the AI Market’s Shift Toward Workload-Optimized Hardware Platforms

Artificial intelligence (AI) is maturing like any other enterprise market. As core AI workloads become standardized and the underlying hardware and software requirements crystallize into repeatable patterns, more solution providers will offer pre-integrated bundles that meet those requirements. Enterprise buyers may increasingly flock to these bundles for their premises-based deployments, either as an alternative to public cloud offerings with the same capabilities or as the foundation for private or hybrid clouds.

Over the past decade, we’ve seen this workload-optimized solution-packaging model, which some refer to as “appliances,” come to such data-centric solution segments as data warehousing. Though few observers consider AI infrastructure amenable to appliance-style delivery as a modular hardware platform, it’s clear that some workloads, such as iterative training of deep learning models, can be accelerated through pre-built combinations of storage, compute, and interconnect resources.

AI-ready storage/compute integration is becoming a core requirement for many enterprise customers, and Wikibon sees more solution providers delivering this capability in robust platform offerings. In that regard, we call your attention to Pure Storage, a well-established provider of all-flash storage platforms, which launched an important new product this week. In partnership with NVIDIA, Pure Storage announced AIRI, an integrated hardware/software platform for distributed training and other compute- and storage-intensive AI workloads. Available now through selected Pure Storage reseller partners, the new product, whose name stands for “AI-Ready Infrastructure,” is purpose-built for a wide range of AI pipeline workloads, from upfront data ingest and preparation all the way through modeling, training, and operationalization.

Demonstrated this week at NVIDIA’s annual GPU Technology Conference, AIRI packs into just over half a rack Pure Storage’s FlashBlade storage technology and four NVIDIA DGX-1 supercomputers running the latest Tesla V100 GPUs. AIRI’s storage and compute are interconnected through Arista 100GbE switches supporting NVIDIA’s GPUDirect RDMA technology, providing a direct high-speed, high-volume data path for distributed training and other AI workloads. The solution also incorporates Pure Storage’s AIRI Scaling Toolkit as well as the NVIDIA GPU Cloud deep learning software stack, a container-based environment for TensorFlow and other AI modeling frameworks.
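To make the distributed-training pattern concrete, here is a minimal sketch of data-parallel training using Horovod with the TensorFlow 1.x API, a common approach in the NGC containers of this era. The model, data, and hyperparameters below are illustrative assumptions for this article, not specifics of Pure Storage’s AIRI Scaling Toolkit.

```python
# Minimal sketch: data-parallel training across multiple GPUs/nodes with
# Horovod on TensorFlow 1.x. The toy model and random data are stand-ins
# for a real pipeline reading from shared storage.
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()  # one process per GPU; launched via mpirun/horovodrun

# Pin each worker process to a single local GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Toy model: a single dense layer on random data (illustrative only).
features = tf.random_normal([32, 128])
labels = tf.random_uniform([32], maxval=10, dtype=tf.int32)
logits = tf.layers.dense(features, 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# Scale the learning rate by worker count, then wrap the optimizer so
# gradients are averaged across all GPUs via ring allreduce (which can
# run over RDMA-capable fabrics).
opt = hvd.DistributedOptimizer(tf.train.AdamOptimizer(1e-3 * hvd.size()))
train_op = opt.minimize(loss, global_step=tf.train.get_or_create_global_step())

hooks = [
    hvd.BroadcastGlobalVariablesHook(0),  # sync initial weights from rank 0
    tf.train.StopAtStepHook(last_step=1000 // hvd.size()),
]

with tf.train.MonitoredTrainingSession(hooks=hooks, config=config) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```

In practice, a script like this would be launched with one process per GPU across all four DGX-1 nodes (for example, via `mpirun`), with training data read from the shared FlashBlade file system rather than generated in memory.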

In a market where machine learning workloads are being automated at every step of the pipeline, Wikibon expects that workload-optimized hardware/software platforms such as AIRI will find a clear niche for on-premises deployment in enterprises’ AI development shops. Before long, no enterprise data lake will be complete without pre-optimized platforms for one or more of the core AI workloads: data ingest and preparation, data modeling and training, and data deployment and operationalization. It will be interesting to see whether Pure Storage, NVIDIA, and other solution providers build out their product lines to address the full spectrum of AI pipeline workload-scaling requirements.

Roy Kim of Pure Storage discussed the AIRI announcement in a Cube Conversation with Wikibon’s Peter Burris. Kim highlighted how the solution addresses the growing need among modern developers to “pull compute, and storage, and networking all into this compact design so there is no bottleneck, that data lives close to compute, and delivers that fastest performance for your neural network training.”

Here’s my discussion of this announcement on a recent Wikibon Action Item.

For a good discussion of Pure Storage’s flash and hyperconverged infrastructure strategy, check out this recent interview with Mike Bundy on theCUBE at Cisco EU Live 2018.
