
Building AI Microservices for Cloud-Native Deployments

Premise

Artificial intelligence (AI) is steadily infusing every component of cloud-native computing environments. Developers are incorporating AI – in the form of deep learning, machine learning, and kindred technologies – into cloud-native applications through tools that enable them to compose these features as data-driven microservices.  Going forward, composable cloud-native AI microservices will be as diverse as the intelligent use cases they drive.

Analysis

AI is becoming the brain driving cloud-native environments. Software developers are embedding microservices to imbue cloud-native applications with AI-driven intelligence. Innovative applications such as cognitive chatbots, automated face recognition, and image-based search depend on provisioning the enabling AI technologies within cloud-native architectures.

Software developers face dual adoption challenges in cloud-native environments. First, AI is becoming the brain driving cloud-native applications, but widely adopted standard practices for incorporating AI into rich, cloud-native applications have yet to emerge. Second, microservices architecture is an evolving development philosophy, and it may take a long time before legacy monolithic applications, toolsets, and development habits give way to practices such as exposing discretely separated functions through RESTful APIs.

However, we’ve seen that development organizations that explicitly use a microservices-based approach to adding AI to cloud-native apps can further the adoption of both of these powerful newer technologies in an additive, amplifying way.

For developers working on AI projects, the key Wikibon guidance is as follows:

  • Factor AI applications as modular functional primitives. Factoring AI into modular microservices requires decomposition of data-driven algorithmic functions into reusable primitives. For AI, the core primitives consist of algorithms that perform regression, classification, clustering, predictive analysis, feature reduction, pattern recognition, and natural language processing.
  • Use cloud-native approaches to build modular AI microservices. Deploying AI microservices into cloud-native environments requires containerization of the core algorithmic functionality. In addition, this functionality should expose a stateless, event-driven RESTful API so that it can be easily reused, evolved, or replaced without compromising interoperability.

Factor AI Applications As Modular Functional Primitives

Microservices architectures are the core development paradigm in the era of cloud-native computing. Under this practice, developers build cloud-native applications as orchestrations of functional primitives that are fine-grained, reusable, stateless, event-driven, loosely coupled, on-demand, easily discoverable, and independently scalable and manageable. In comparison to monolithic applications, microservices-based cloud-native applications can be created, tested, and deployed more quickly and independently.

To effectively develop AI microservices, developers must factor the underlying application capabilities into modular building blocks that can be deployed into cloud-native environments with minimal binding among resources. As with any development technology, however, bad application design or poor execution of microservices principles can lead to complex, monolithic, and hard-to-maintain applications. Our research shows that developers can apply a number of guidelines to ensure that software artifacts remain consistent with microservices principles (see Table 1). Employing a microservices approach can be especially crucial to AI applications because different AI resources can evolve at different rates, including continuously (e.g., model training).
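
As one illustration of this factoring, the sketch below wraps a few core algorithmic primitives (regression, classification, clustering) behind a single, minimal train-and-score contract so that each can be packaged and scaled independently. This is a minimal sketch, assuming scikit-learn estimators; the Primitive class and registry names are hypothetical rather than part of any particular framework.

```python
# Minimal sketch: a few core AI primitives wrapped behind one small,
# uniform contract so each can be packaged and deployed independently.
# scikit-learn estimators are used for illustration; any library would do.
from dataclasses import dataclass
from typing import Any

from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression, LogisticRegression


@dataclass
class Primitive:
    """A reusable AI primitive with a uniform train/score contract (hypothetical)."""
    name: str
    model: Any

    def train(self, features, targets=None):
        if targets is None:
            self.model.fit(features)           # unsupervised primitives (e.g., clustering)
        else:
            self.model.fit(features, targets)  # supervised primitives
        return self

    def score(self, features):
        return self.model.predict(features)


# A registry of primitives that an orchestration layer (or a single-purpose
# microservice) can look up by name and expose individually.
PRIMITIVES = {
    "regression": Primitive("regression", LinearRegression()),
    "classification": Primitive("classification", LogisticRegression()),
    "clustering": Primitive("clustering", KMeans(n_clusters=3)),
}
```

A microservice can then expose just one entry from such a registry, keeping the binding among resources minimal.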

  • Break down AI capabilities into reusable primitives. Factoring AI into modular microservices requires decomposition of data-driven algorithmic functions into reusable primitives. For AI, the core primitives consist of algorithms that perform regression, classification, clustering, predictive analysis, feature reduction, pattern recognition, and natural language processing.
  • Build an orchestration graph of AI functional microservice modules. However they’ve been modularized from a functional standpoint, developers should build an orchestration graph in which these microservices declare other submodules internally, have other modules passed to them at construction time, and share cross-module variables.
  • Link modular AI microservices into multifunctional solutions. New AI modules might be added to existing modules to create new augmented solutions, such as when the output of some neural networks is used as the input to other neural networks. For example, you may build a hierarchical deep learning network in which a feedforward neural network drives a master module, which calls another module containing a long short-term memory network, which in turn calls a linear regression model. (A minimal sketch of this pattern appears after Table 1.)
  • Reuse subsets of AI microservices functionality through modular subdivision. Another modularization approach is to split off new AI sub-modules from existing modules in order to reuse them as layers in other neural networks. One example of this is when autoencoders that have been pre-trained on distinct elements within a ground-truth image data set are subsequently split off to execute on subsets of that source domain.
  • Transfer learning from existing AI modules into new microservices of similar domain scope. When creating new AI microservices, the process may be accelerated by applying knowledge assets—such as feature sets—from pre-existing modules that come from different but similar-enough application contexts. The enabling practices are referred to as transfer learning. This approach may also enable AI microservices to be substituted and interchanged, such as when transfer learning enables student networks to serve as substitutes for teacher networks.
  • Use high-level AI programming languages to build modular microservice logic. Another approach for accelerating AI modularization is to use a high-level language such as the one discussed in this recent article. Languages such as this usually enable developers to describe AI logic declaratively at an application level, in modules that are then compiled down to a deep-learning library such as TensorFlow.
  • Apply standard AI application patterns to create modular microservices. Yet another approach to speedy AI modularization is to use standard application patterns, such as deep belief networks and generative adversarial networks, that are modular by their very nature. As discussed here, deep belief networks are “stacked architectures” consisting of a layering of restricted Boltzmann machines and variational auto-encoders in which each layer learns from the output of the algorithms that constitute the previous layer. By contrast, generative adversarial networks consist of only two networks—usually a feedforward neural network and a convolutional neural network—with one generating content designed to resemble a source data domain and the other evaluating whether that content came from the source domain or was an algorithmically generated fabrication.

Table 1: Factoring Applications Into Microservice Building Blocks
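
To make the "link modular AI microservices into multifunctional solutions" guideline concrete, here is a minimal sketch, assuming TensorFlow's Keras API, of a hierarchical network in which a feedforward "master" module feeds a long short-term memory (LSTM) submodule, which in turn feeds a linear regression head. The module names, layer sizes, and input shape are illustrative assumptions, not a prescribed architecture.

```python
# Minimal sketch of hierarchical module composition with the tf.keras
# functional API: each module is defined separately and wired together
# in an orchestration graph, so any one can be retrained or replaced.
import tensorflow as tf
from tensorflow.keras import layers


def feedforward_module(inputs):
    """Feedforward 'master' module that extracts per-timestep features."""
    x = layers.TimeDistributed(layers.Dense(64, activation="relu"))(inputs)
    return layers.TimeDistributed(layers.Dense(32, activation="relu"))(x)


def sequence_module(features):
    """Submodule containing a long short-term memory (LSTM) network."""
    return layers.LSTM(16)(features)


def regression_module(encoding):
    """Final submodule: a linear regression head on the learned encoding."""
    return layers.Dense(1, activation="linear")(encoding)


inputs = tf.keras.Input(shape=(20, 8))   # assumption: 20 timesteps, 8 raw features
outputs = regression_module(sequence_module(feedforward_module(inputs)))
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```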

Use Cloud-Native Approaches To Build Modular AI Microservices

In factoring AI capabilities as functional microservices, the developer needs to keep in mind how best to program these capabilities in cloud-native environments.

In a cloud-native environment, AI microservices are containerized and orchestrated dynamically within lightweight interoperability fabrics. Each containerized AI microservice exposes an independent, programmable RESTful API, which enables it to be easily reused, evolved, or replaced without compromising interoperability. Each containerized AI microservice may be implemented using different programming languages, algorithm libraries, cloud databases, and other enabling back-end infrastructure. But for it all to interoperate seamlessly in complex AI applications, there needs to be a back-end middleware fabric for reliable messaging, transactional rollback, and long-running orchestration (such as that provided by Kubernetes, Docker Swarm, or Mesosphere DC/OS). All interactions among distributed microservices are stateless and event-driven.
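
For illustration, a minimal sketch of such a stateless, containerizable scoring microservice appears below, assuming Flask for the RESTful API and a scikit-learn model serialized with joblib; the route, model path, and payload fields are hypothetical.

```python
# Minimal sketch of a stateless AI microservice exposing one RESTful
# scoring endpoint. Flask and joblib are used for illustration; the
# model path and field names are assumptions for this sketch.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # trained artifact baked into the container image


@app.route("/v1/predict", methods=["POST"])
def predict():
    # Each request is self-contained: no session state is kept between
    # calls, so the container can be scaled out or replaced freely.
    payload = request.get_json(force=True)
    prediction = model.predict([payload["features"]]).tolist()
    return jsonify({"prediction": prediction})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Packaged in a container image alongside its model artifact, a service like this can be scaled horizontally or replaced by the orchestrator without coordinating any per-request state.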

In “serverless” environments (such as AWS Lambda, Microsoft Azure Functions, or IBM OpenWhisk), the AI developer need not be concerned with managing the container substrate of the distributed computing service. Instead, a public cloud provider automatically provisions and optimizes containerized microservices on a fully managed, on-demand basis. Typically, this happens through abstract “functions as a service” interfaces that enable microservices to execute transparently on back-end cloud infrastructure without developers needing to know where or how the IT resources are provisioned. That frees them to focus on modeling and coding. The serverless back end automatically provisions the requisite compute power, storage, bandwidth, and other distributed resources to AI microservices at run time.
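
A minimal sketch of the same scoring logic reshaped as a serverless function follows, using the AWS Lambda handler convention as one example (Azure Functions and OpenWhisk expose analogous entry points); the model artifact and payload fields are again hypothetical.

```python
# Minimal sketch of an AI scoring function packaged for a serverless
# platform, shaped like an AWS Lambda handler. The model artifact and
# payload fields are illustrative assumptions.
import json
import joblib

# Loaded once per warm function instance and reused across invocations.
model = joblib.load("model.joblib")


def handler(event, context):
    # The platform provisions compute on demand; the function only sees
    # the event payload and returns a response.
    payload = json.loads(event.get("body", "{}"))
    prediction = model.predict([payload["features"]]).tolist()
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction}),
    }
```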

Developers should compose AI microservices for cloud-native deployment using a layered approach. At its recent Build conference, Microsoft made several announcements that, considered as a whole, point the way toward emerging best practices (see Table 2).

  • Develop AI microservices in a cloud-native environment. Developers can now use Microsoft Visual Studio 2017 to compose AI and other microservices as event-driven, stateless “serverless” Azure Functions and Logic Apps. The tool also supports composition of equivalent code for running those microservices in persistent containers in the Azure cloud or on edge devices in the Internet of Things (IoT).
  • Customize AI microservices from a library of reusable algorithms. Microsoft announced several additions to its established portfolio of Azure-based cognitive services. It provides tools and application programming interfaces that let developers customize its library of almost 30 off-the-shelf vision, speech, text, NLP, and other AI algorithms for use in any cloud-native service or other application (an illustrative sketch of calling such a service appears after Table 2).
  • Support building AI microservices in the open tool of choice. Microsoft’s Cognitive Toolkit now offers greater support for developers wishing to work in the Google-developed TensorFlow or Facebook-developed Caffe2 open frameworks. Microsoft also plans to support containerized deployment of AI apps built with its own toolkit, as well as deployment of those apps both to cloud-native environments and to IoT edge devices.
  • Train AI microservices in a cloud-native distributed environment. Microsoft launched a private preview of a batch AI-training capability that allows developers to use their choice of parallel CPUs, GPUs, or FPGAs for this process. Developers can use Azure Batch AI Training to train their models using any framework they choose, including Microsoft Cognitive Toolkit, TensorFlow, and Caffe2.
  • Execute AI microservices in public, private, or hybrid data clouds. Microsoft added a new cloud data service, Azure Cosmos DB, to its portfolio of massively parallel offerings—including Azure Data Lake and SQL Server 2017—where developers can deploy and execute their AI microservices. Cosmos DB provides a globally distributed, horizontally scalable, schema-free database to support every type of data, which is essential for much of today’s AI, and offers several well-defined consistency levels, as befits the diversity of AI use cases.
  • Support AI microservices in diverse containerized cloud-native environments. Microsoft announced wide-ranging support for scalable, cross-platform application containers of nearly every type, with the ability to run Windows and Linux containers inside any application. Developers can use Visual Studio 2017 as well as Docker Compose to develop, test, debug, and deploy apps to Azure Service Fabric, as well as to Docker Swarm, Kubernetes, and Mesosphere DC/OS. They can run Service Fabric as a unifying middleware layer within and between Azure in the public cloud, Azure Stack in private clouds, VMware, and bare-metal cloud environments. Microsoft also made several announcements that will help developers build Azure containers, serverless functions, and other microservices for multi-device mobile and IoT environments spanning Azure, Office 365, Xamarin, and Docker.
  • Instrument AI microservices to monitor cloud resource utilization. Developers can use Microsoft’s Azure Application Insights tool to track resource usage, performance, and other issues surrounding Azure Functions and Logic Apps.
  • Manage AI microservices in a comprehensive DevOps environment. Microsoft Visual Studio Team Services provides an end-to-end toolchain for DevOps on all AI microservices, whether they’re stood up as persistent containers or as serverless code in the Azure cloud.

Table 2: Microsoft’s Recently Announced Layered Microservices Approach
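
As one hypothetical illustration of the "customize AI microservices from a library of reusable algorithms" row above, the sketch below delegates image analysis to a hosted cognitive-services-style vision endpoint over REST. The endpoint URL, query parameters, and response shape are placeholders rather than a documented contract; consult the provider's current API reference.

```python
# Minimal sketch of an AI microservice delegating image analysis to a
# hosted cognitive-services-style vision API over REST. The endpoint,
# parameters, and response fields are placeholders, not a documented
# contract for any specific provider.
import requests

SUBSCRIPTION_KEY = "your-api-key"  # assumption: an API key issued by the cloud provider
ENDPOINT = "https://<region>.api.cognitive.microsoft.com/vision/v1.0/analyze"  # placeholder


def describe_image(image_url: str) -> dict:
    response = requests.post(
        ENDPOINT,
        params={"visualFeatures": "Description,Tags"},   # illustrative parameters
        json={"url": image_url},
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(describe_image("https://example.com/sample.jpg"))
```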

Action Item

Enterprises should establish a standardized framework for building future AI microservices, through such tactics as deploying a library of reusable algorithms (such as convolutional and recurrent neural nets) and requiring use of a common modeling tool (such as the increasingly popular open-source TensorFlow). Going forward, developers should build on the success of their first AI microservices by using “transfer learning” techniques to reuse relevant aspects of their neural-net architectures in new AI microservices of similar functional scope. And as developers confront the reality of heterogeneous multi-clouds, cloud-native environments, and AI tools, they should demand industry-standard DevOps tools for building, deploying, and optimizing AI microservices.
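
As a minimal sketch of that transfer-learning tactic, assuming TensorFlow's Keras API, the snippet below freezes a previously trained module (a hypothetical saved artifact) and reuses it as the feature extractor beneath a new task-specific head.

```python
# Minimal sketch of transfer learning: reuse a previously trained module
# as a frozen feature extractor for a new AI microservice. The saved
# artifact name and output size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

# Assumption: a model trained and saved by an earlier microservice.
# In practice you would typically strip its original output layer first.
base = tf.keras.models.load_model("existing_module.h5")
base.trainable = False  # freeze the transferred layers

inputs = tf.keras.Input(shape=base.input_shape[1:])
features = base(inputs, training=False)
outputs = layers.Dense(5, activation="softmax")(features)  # e.g., five new classes

new_model = tf.keras.Model(inputs, outputs)
new_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```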

 

 
