Wikibon’s 2018 Artificial Intelligence Chipset Predictions

Premise

Low-cost AI chipsets will push more deeply into the mobility market in 2018. Next-generation AI-optimized chipset architectures will proliferate in the coming year, incorporating a blend of high-performance technologies, including but not limited to GPUs. The mass-adoption tipping point of edge-deployed AI will come early in the 2020s with further declines in the cost of optimized chipsets and improvements in the technology’s maturity and sophistication.

Analysis

Developers increasingly incorporate artificial intelligence (AI) algorithms, models, and code into their applications. As organizations invest in AI, they’re deploying these assets to the edges of the Internet of Things and People (IoT&P) to drive ubiquitous systems of agency.

With this overall trend in mind, Wikibon makes the following predictions:

  • Low-cost AI chipsets will gain a significant market foothold.
  • Next-generation chipsets will blend high-performance AI technologies.
  • Mass adoption of edge-deployed AI chipsets will come early in the next decade.
  • Consumer applications will be the primary demand drivers of edge-deployed AI chipsets.

Low-Cost AI Chipsets Will Gain A Significant Market Foothold.

AI is rapidly being incorporated into diverse applications in the cloud and at the network’s edge. At the core of these AI-infused applications are machine learning (ML), deep learning (DL), and natural language processing (NLP).

The pace at which AI is being adopted depends on the extent to which it is incorporated into commodity chipsets. To be ready for widespread adoption, AI’s algorithmic smarts need to be miniaturized into low-cost, reliable, high-performance chips for robust crunching of locally acquired sensor data.

Over the past several years, hardware manufacturers have introduced an impressive range of chip architectures—encompassing graphics processing units (GPUs), tensor processing units (TPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs)—that address these requirements. These chipsets are optimized to execute layered deep neural network algorithms—especially convolutional and recurrent—that detect patterns in high-dimensional data objects.
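
To make that workload concrete, the following minimal sketch (written in PyTorch purely for illustration; the layer sizes and input shape are arbitrary assumptions, not a recommended design) shows the kind of layered convolutional network whose dense multiply-accumulate arithmetic these chipsets are built to execute efficiently.

```python
# Minimal sketch (PyTorch, for illustration only): a small layered convolutional
# network of the kind these AI-optimized chipsets are designed to accelerate.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # convolution: dense multiply-accumulate work
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # 32x32 input pooled twice -> 8x8 feature maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # detects patterns in the high-dimensional input
        return self.classifier(x.flatten(1))

# One inference pass over a single 32x32 RGB frame, roughly as an edge device would run it.
model = TinyConvNet().eval()
with torch.no_grad():
    scores = model(torch.randn(1, 3, 32, 32))
```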

In the coming year, the next generation of commodity AI-optimized chipsets will gain a foothold in mass-market deployment. By year-end 2018, the dominant AI chipmakers will all have introduced new generations of chipsets that densely pack tensor-processing components on low-cost, low-power systems on a chip that are ready-made for edge deployment.

Next-Generation Chipsets Will Blend High-Performance AI Technologies.

Currently, most AI executes on GPUs, but other approaches are taking shape and are in various stages of being commercialized across the industry.

The AI chipset wars will be a major IT industry story throughout 2018. During the year, most of the AI chipset startups that have received funding over the past two years will come to market. The pace of mergers and acquisitions in this segment will increase as the incumbent AI solution providers (especially Google, AWS, Microsoft, and IBM) deepen their technology portfolios and the incumbent AI chip manufacturers (especially NVIDIA and Intel) defend their positions in what’s sure to be the fastest-growing chip segment of the next several years.

What emerges from this ferment will be innovative approaches that combine GPUs with CPUs, FPGAs, and a new generation of densely packed TPUs, exemplified by Google’s TPU, one of several competing tensorcore-dense designs on the market. Over the next several years, AI hardware manufacturers will achieve continued year-over-year 10-100x boosts in the price-performance, scalability, and power efficiency of their chipset architectures. Every chipset that comes to market will be optimized for the core DL and ML algorithms in AI apps, especially convolutional neural networks, recurrent neural networks, long short-term memory networks, and generative adversarial networks.
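
As a companion to the convolutional example above, a recurrent workload can be sketched just as briefly. The following PyTorch fragment (dimensions are illustrative assumptions) runs a long short-term memory layer over a short sequence, the kind of stateful computation these chipsets must also keep on-chip.

```python
# Minimal sketch (PyTorch, illustrative dimensions): a long short-term memory layer
# stepping through a short sequence, the recurrent counterpart to the example above.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True).eval()

with torch.no_grad():
    sequence = torch.randn(1, 10, 16)          # placeholder: 10 time steps of 16 features
    outputs, (hidden, cell) = lstm(sequence)   # hidden and cell state carried across time steps
```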

In 2018, a new generation of AI-optimized commodity chipsets will emerge to accelerate the technology’s adoption in edge, cloud, and other deployments. More embedded, mobile, and IoT&P platforms will come to market incorporating blends of GPU, TPU, FPGA, CPU, and various neuromorphic ASIC architectures. In the next generation of AI hardware, processors will be optimized primarily for low-cost, low-power, edge-based inferencing, and only secondarily for edge-based training. Embedded AI-inferencing co-processors will be standard components of all computing, communications, and consumer electronics devices by 2020.
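
One software-side illustration of that inference-first emphasis is post-training quantization. The hedged sketch below uses PyTorch’s dynamic quantization on a placeholder model to drop inference to 8-bit integer arithmetic, the same precision-for-power trade-off that embedded AI-inferencing co-processors make in silicon.

```python
# Hedged sketch: shrinking a placeholder model to 8-bit integer inference with
# PyTorch dynamic quantization, a software analogue of low-power edge silicon.
import torch
import torch.nn as nn

# Placeholder network; in practice this would be a model trained in the cloud.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 8)).eval()

# Replace floating-point Linear layers with int8 versions used only at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    sensor_features = torch.randn(1, 128)    # stand-in for locally acquired sensor data
    prediction = quantized(sensor_features)  # fast, low-power inference path
```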

Over the next several years, the principal AI client chipset architectures will shift away from all-GPU architectures toward hybrid approaches. Nevertheless, GPUs have a promising future as high-performance computing workhorses for the most demanding cloud-based AI training and inferencing workloads. GPU architectures will continue to improve to enable highly efficient inference at the edge, defending against further encroachments from CPUs, FPGAs, ASICs, and other chipset architectures for these workloads.

Well into the next decade, GPUs will continue to dominate AI training in public clouds, private clouds, and data centers. We expect NVIDIA and its major cloud partners (AWS, Microsoft, Google, IBM, Baidu, etc.) and systems partners (Dell EMC, HPE, IBM, Supermicro, etc.) to expand their GPU-based supercomputers for compute-intense AI training and inferencing workloads.

Mass Adoption Of Edge-Deployed AI Chipsets Will Come Early In The Next Decade.

Currently, AI is still a premium feature in mobile, IoT&P, and other edge applications. That’s been due in large part to the cost, complexity, and immaturity of the technology, especially the chipsets, needed for such deployments.

The cost of AI chipsets will drop dramatically in coming years as more efficient hardware architectures come to market, competition heats up, and suppliers leverage economies of scale. Wikibon predicts that the AI chipset market will reach a tipping point early in the next decade. We predict that, by 2022, the price per AI-optimized edge-embeddable chip will drop below $25. That’s also when we predict that the open-source ecosystem for real-time Linux on DL systems on a chip will mature. Convergence of those trends will trigger mass adoption of AI edge-client chipsets for embedding in mass-market mobiles, robotics, IoT&P devices, and other devices for consumer, industrial, and other applications. Figure 1 provides a diagrammatic view of the market trends that Wikibon believes will precipitate this tipping point.

Figure 1: Market Forces Driving the Coming AI Chipset Mass-Adoption Tipping Point

By 2025, 75 percent of mass-market edge devices will be equipped with AI chipsets. By the middle of the coming decade, local inferencing will be standard on AI-enabled edge devices in support of the following applications: multifactor authentication, face recognition, speech recognition, natural language processing, chatbots, virtual assistants, computer vision, mixed reality, and generative image manipulation.

However, only 10 percent of the AI workloads performed at the edge in 2025 will involve local training; for most DL and ML workloads, training will continue to be executed primarily in high-performance computing environments in public, private, and hybrid clouds. Consequently, AI-optimized chipsets for edge deployment will primarily be engineered for fast, low-power, low-cost inferencing.
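
A common way to realize that split, sketched below with placeholder names and an assumed ONNX-based toolchain, is to train in the cloud and export only a compact inference graph for the edge device’s runtime to execute.

```python
# Hedged sketch: export a cloud-trained model to a portable inference graph (ONNX)
# so that the edge device only has to execute the forward pass.
import torch
import torch.nn as nn

# Placeholder for a model trained on cloud GPUs; names and shapes are assumptions.
trained_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4)).eval()

example_input = torch.randn(1, 64)  # the input shape the edge runtime will feed at inference time
torch.onnx.export(trained_model, example_input, "edge_model.onnx")
```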

The primary exception to this will be for drones, industrial robotics, and self-driving vehicles, which will rely, in part, on device-level AI-driven autonomous inferencing from locally acquired sensor data. For these use cases, the standard embedded AI hardware architecture going forward will incorporate specialized neuromorphic chipsets engineered to support device-level reinforcement learning in distributed and disconnected IoT&P deployments.
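
To illustrate the device-level learning loop such chipsets would need to support, here is a deliberately simple, hedged sketch of tabular Q-learning that updates a policy entirely on the device from locally acquired sensor readings; the state space, action space, and reward function are invented placeholders.

```python
# Hedged sketch: tabular Q-learning running entirely on-device, updating its policy
# from local sensor readings with no round trip to the cloud. All quantities are placeholders.
import random

N_STATES, N_ACTIONS = 16, 4              # discretized sensor states and actuator actions (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

q_table = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def read_sensor_state():
    """Placeholder for discretizing locally acquired sensor data into a state index."""
    return random.randrange(N_STATES)

def act_and_observe(state, action):
    """Placeholder for actuating on the environment and measuring a reward locally."""
    return random.randrange(N_STATES), random.uniform(-1.0, 1.0)

state = read_sensor_state()
for _ in range(1000):
    # Epsilon-greedy action selection from the local Q-table.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q_table[state][a])

    next_state, reward = act_and_observe(state, action)

    # Q-learning update, computed on the device itself.
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])
    state = next_state
```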

Consumer Applications Will Be The Primary Demand Drivers Of Edge-Deployed AI Chipsets.

Going forward, consumer applications for AI chipsets will far outpace business, industry, government, and scientific applications. Owing to their consumer orientation, the principal AI-edge chipset verticals in 2025 (in descending order of likely worldwide volumes and revenues) will be smart mobiles, smart cameras, smart appliances, self-driving vehicles, medical devices, robotics, and drones.

In all segments of the mass market, one of the primary demand drivers for AI chipsets will be bundled solutions that incorporate two or more smart devices with special-purpose domain hubs and cloud gateways. Going forward, the demand for AI chipsets—both in edge devices and in domain hubs and gateways—will come from growth in the following emerging markets: smart homes, smart buildings, smart factories, smart supply chains, smart healthcare institutions, smart drone fleets, and smart security systems.

Action Item

Wikibon recommends that developers build edge applications for the growing range of mass-market mobile, IoT&P, and other devices that embed low-cost, low-power AI-optimized chipsets. In 2018, low-cost chipsets for ML, DL, and NLP will gain a significant market foothold, with a tipping point to mass adoption of edge-embedded AI expected in 2022. Going forward, commodity AI chipsets will incorporate more densely packed tensorcore processing elements in support of autonomous edge-based applications.
