
Wikibon’s 2018 Developer Tooling, Services, and Practices Predictions

Premise

In 2018, application developer tooling, services, and practices will shift toward a keen focus on functional programming, automated programming, open development frameworks for artificial intelligence, DevOps for data science, and reinforcement learning for autonomous edge applications.

Analysis

Developers are at the forefront of digital transformation. Increasingly, the applications they build are cloud-native microservices that incorporate stateless, event-driven functions; deploy into serverless cloud environments; encapsulate artificial intelligence (AI); and are deployed for autonomous, actuated, and augmented uses on the edges of the Internet of Things (IoT). More of these microservices are developed, trained, deployed, and optimized in automated, continuous DevOps workflows.

With these overall trends in mind, Wikibon makes the following predictions for developer tooling, services, and practices in 2018:

  • Functional programming will become the core development approach for cloud-native computing.
  • Convergence of machine learning (ML) and robotic process automation (RPA) will boost programming productivity.
  • Emergence of open frameworks will simplify AI application development.
  • Maturation of ML automation will facilitate end-to-end data science DevOps.
  • Growth in autonomous edge applications will boost reinforcement learning’s role in enterprise data science.

Functional programming will become the core development approach of cloud-native computing

Wikibon Prediction

Functional programming will become the mainstream approach for building lightweight cloud microservices. By year-end 2018, more than 50 percent of new microservices deployed in public clouds will be written as functional code and deployed in serverless environments. However, because adoption of on-premises serverless platforms remains embryonic, fewer than 10 percent of new microservices deployed in private clouds will use functional code.

 

Functional programming enables developers to produce code that is clear, modular, and maintainable. It addresses the core features of mainstream cloud microservices, including many new machine learning, deep learning, and artificial intelligence applications. These core features include API publishing, stateless event-driven semantics, and request-response interactions with immutable content sources. Figure 1 illustrates the relationship of functional programming to serverless, cloud-native computing.

By decoupling coding decisions from the management of serving infrastructure, functional programming can accelerate the DevOps pipeline for cloud-native applications. Furthermore, it spares developers from having to write the application logic that manages containers, virtual machines, and other back-end microservices runtime environments. And it allows developers to avoid embedding cross-module dependencies, complex transition rules, synchronous function calls, and other heavyweight application logic in their microservices code.

As public cloud providers such as AWS, Microsoft, Google, and IBM deepen their serverless computing portfolios, developers will increasingly focus on functional programming. Serverless environments, such as AWS Lambda, Microsoft Azure Functions, Google Cloud Functions, and IBM Bluemix OpenWhisk, automatically provision, scale, and manage function-specific microservices for diverse use cases. These serverless fabrics enable speedy startup and teardown of functional microservices with minimal overhead and maintenance. They allow developers to create functions in the cloud and run them without having to worry about managing infrastructure.
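
To make this concrete, below is a minimal sketch of a stateless, event-driven function written in Python in the style of an AWS Lambda handler. The event fields (order_id, quantity, unit_price) are hypothetical; the point is that the function holds no state between invocations and its output depends only on the input event.

    import json

    def handler(event, context):
        # Pure, stateless transformation: the output depends only on the
        # input event; no server or session state is read or written.
        order_id = event["order_id"]
        total = event["quantity"] * event["unit_price"]
        return {
            "statusCode": 200,
            "body": json.dumps({"order_id": order_id, "total": total}),
        }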

Though typically associated with public cloud services, serverless computing is coming to private clouds as well. A growing range of open-source and other serverless frameworks are entering the market, providing options for enterprises that wish to deploy them entirely inside their private clouds or hybrid private/public serverless computing architectures.

Figure 1: Functional programming will become the core development approach of cloud-native computing

Convergence of machine learning (ML) and robotic process automation (RPA) will boost programming productivity

Wikibon Prediction

Auto-programming will become a centerpiece of enterprise application development. By year-end 2018, the latest auto-programming techniques, ML and RPA, will be incorporated into the top five integrated development environments.

 

Programming an information system can be strenuous labor, so developers avidly seek out any technique that relieves the robotic tedium of coding tasks and accelerates projects toward successful completion. Auto-programming, also known as automated code generation, generally involves specifying a high-level programming abstraction that drives generation of a more verbose executable implementation. As implemented in commercial code-generation tools, the most common abstractions include programming templates, domain-specific languages, metamodels, database models, metadata models, graphical models, flowchart models, tree models, and scripts.
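
As a simple illustration of template-driven code generation (the first abstraction listed above), the following Python sketch expands a declarative field list into a more verbose class implementation. The Customer example and its fields are hypothetical.

    from string import Template

    CLASS_TEMPLATE = Template(
        "class $name:\n"
        "    def __init__(self, $args):\n"
        "$assignments"
    )

    def generate_class(name, fields):
        # The high-level abstraction (a class name plus a field list)
        # drives generation of the more verbose executable implementation.
        args = ", ".join(fields)
        assignments = "".join(f"        self.{f} = {f}\n" for f in fields)
        return CLASS_TEMPLATE.substitute(
            name=name, args=args, assignments=assignments
        )

    # Generate a Customer class from a declarative field list.
    print(generate_class("Customer", ["customer_id", "name", "email"]))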

ML and RPA are on a fast path to converging as auto-programming approaches, judging by the range of vendors that are integrating them in their commercial tools. RPA increasingly relies on ML to automatically recognize on-screen presentation elements and even user intentions. Today's ML-driven auto-programming tools are forerunners of more accurate and versatile code-generation tools that will leverage ML, deep learning, and other data-driven artificial intelligence techniques.

Figure 2: Convergence of machine learning and robotic process automation will boost programming productivity

Emergence of open frameworks will simplify artificial intelligence application development

Wikibon Prediction

The data-science community will converge on an open development framework for building AI applications. By year-end 2018, the top-tier data-science tool vendors will all provide solutions that support abstraction layers for development, training, and deployment of AI apps in any or all of the leading open-source tools.

AI application developers typically do most of their work within a particular modeling tool, such as TensorFlow, MXNet, CNTK, or DeepLearning4J. Typically, a developer has to adapt their own coding style to the interfaces provided by a specific tool. As enterprises' AI initiatives proliferate, the range of tools in use is likely to expand. That trend may crimp developer productivity if every new AI project requires cross-training on a different modeling tool.

Recognizing this, more vendors, such as AWS, IBM, and NVIDIA, provide tool-agnostic deep learning (DL) development platforms that bundle or interface with popular tools, both for front-end modeling and for back-end compilation and deployment of built models. The AI profession is beginning to recognize the need for an open, tool-agnostic modeling framework, and diverse industry efforts are building the layers that will be essential elements of such a framework. These layers include higher-level DL APIs and abstractions (such as Gluon and Keras), shared DL-model representations (such as the Open Neural Network Exchange), cross-platform DL-model compilers (such as NNVM Compiler, NGraph, XLA, and TensorRT 3), and heterogeneous DL-microservice decouplers (such as Distributed Deep Learning).
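
As an example of coding to one of these higher-level abstraction layers rather than a specific back end, the short Python sketch below defines a model through the Keras API; the same definition can be trained and deployed on any back end Keras supports. The layer sizes are arbitrary, for illustration only.

    from tensorflow import keras

    # Define the model once against the high-level Keras API; the back-end
    # framework executes it, so the modeling code stays tool-agnostic.
    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()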

Figure 3 illustrates the layering of the emerging industry-wide open framework for AI development.

Figure 3: Emergence of open frameworks will simplify artificial intelligence application development

Maturation of machine-learning automation will facilitate data-science DevOps initiatives

Wikibon Prediction

Advances in automation will power a new DevOps-focused “model factory” paradigm in the development, optimization, deployment, and management of AI applications in the enterprise. By year-end 2018, the top-tier data-science tool vendors will all support continuous automation of AI model preparation, training, optimization, and deployment.

The AI profession is rapidly developing tools and approaches for automating every step of the machine-learning development pipeline. More comprehensive automation is key to developing, optimizing, and deploying these application assets at enterprise scale. Data scientists will be swamped with unmanageable workloads if they don't begin to offload formerly manual tasks to automated tooling. Automation can also help control the cost of developing, scoring, validating, and deploying a growing scale and variety of models against ever-expanding big-data collections. Figure 4 illustrates the range of data-science development, optimization, deployment, and other processes to be automated in the DevOps pipeline.
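
As a small-scale illustration of this kind of pipeline automation, the Python sketch below uses scikit-learn to run data preparation, training, and hyperparameter optimization as a single repeatable step; the data set and parameter grid are synthetic stand-ins for a real workload.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    pipeline = Pipeline([
        ("scale", StandardScaler()),                # automated preparation
        ("clf", LogisticRegression(max_iter=1000)),
    ])

    # Training, scoring, and validation run automatically across the grid.
    search = GridSearchCV(pipeline, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)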

Also, AI developers are starting to realize that automatically generated, algorithmically labeled synthetic training data is an essential component of a scalable DevOps model factory. This involves using AI to fabricate machine-labeled training data without the need for human curation. Otherwise, there will not be enough people to source, prepare, and label the data required for supervised training of applications' deep learning, machine learning, and other statistical algorithms. Synthetic training-data generation is not a futuristic approach; rather, it is kindred to the established data-science practice known as Monte Carlo simulation, which predicts statistical outcomes not from the actual values of input data (when those aren't known) but from the likely or "simulated" values of that data, based on their probability distributions.
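
A minimal sketch of this idea, assuming two hypothetical device states with presumed reading distributions: synthetic sensor readings are sampled from those distributions in the Monte Carlo spirit and labeled algorithmically, producing supervised training data with no human curation.

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Sample readings for two assumed device states from their presumed
    # probability distributions, labeling each sample algorithmically
    # rather than by hand.
    normal = rng.normal(loc=50.0, scale=5.0, size=1000)   # state 0
    faulty = rng.normal(loc=70.0, scale=8.0, size=1000)   # state 1

    X = np.concatenate([normal, faulty]).reshape(-1, 1)
    y = np.concatenate([np.zeros(1000), np.ones(1000)])   # machine labels
    print(X.shape, int(y.sum()))  # 2,000 labeled samples, zero curation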

In this new industrial order, the role of the working data scientist will become similar to that of a foreman in a factory that has implemented robotics and computerized numerical controllers. Addressing this growing demand, Wikibon sees ongoing growth of a new niche of AI/ML/DL DevOps workflow, collaboration, and automated release-pipeline tools from such solution providers as AWS, IBM, Cloudera, DataRobot, Domino Data Lab, MapR, and Microsoft.

 

Figure 4: Maturation of machine-learning automation will facilitate data-science DevOps initiatives

Growth in autonomous edge applications will boost reinforcement learning’s role in enterprise data science

Wikibon Prediction

Autonomous edge devices, powered by AI-infused componentry trained through reinforcement learning, are becoming ubiquitous. By year-end 2018, more than 25 percent of enterprise AI application-development projects will involve autonomous edge devices, and more than 50 percent of enterprise AI developers will have gained expertise with reinforcement-learning tools and techniques.

Autonomous edge devices are becoming the next big thing in practically every sphere of our lives, especially in the enterprise. Much of edge application development—for industrial, transportation, healthcare, and consumer applications—involves building AI-infused robotics that can operate with varying degrees of contextual autonomy under dynamic environmental circumstances.

In such application domains, edge devices' AI brains must rely on reinforcement learning, in which, lacking a pre-existing "ground truth" training data set, they seek to maximize a cumulative reward function, such as assembling a manufactured component according to spec. This contrasts with how other types of AI learn: by minimizing an algorithmic loss function with respect to ground-truth data (supervised learning) or by minimizing a distance function among data points (unsupervised learning).

With reinforcement learning, the edge device being trained is never presented with the “correct” input/output data pairs and never has its suboptimal intermediate steps explicitly corrected. Instead, it algorithmically works through various action paths to find a balance between exploration of unfamiliar options and exploitation of the knowledge it gains toward maximizing the cumulative reward function. In addition to robotics, reinforcement learning is an important learning technique for AI-infused edge and enterprise applications in finance, logistics, online gaming, Web content publishing, and application performance management. It has even proven useful in automating the ML pipeline and improving the conjoined performance of cooperative AI models.
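
To ground these ideas, here is a minimal Python sketch of tabular Q-learning with epsilon-greedy action selection, assuming a toy five-state corridor in which the agent earns a reward only by reaching the final state; a real edge device's state space and reward structure would be far richer.

    import random

    N_STATES, ACTIONS = 5, [0, 1]          # 0 = move left, 1 = move right
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

    for episode in range(500):
        state = 0
        while state < N_STATES - 1:
            # Balance exploration of unfamiliar actions against
            # exploitation of the values learned so far.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            next_state = max(0, state + (1 if action == 1 else -1))
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # Move the estimate toward reward plus discounted future value,
            # maximizing cumulative reward with no labeled training pairs.
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action]
            )
            state = next_state

    print(q)  # values for "move right" dominate, tracing the reward path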

Figure 5 illustrates the relationship of reinforcement learning to the training of AI for robotics and other autonomous edge applications.

Figure 5: Growth in autonomous edge applications will boost reinforcement learning’s role in enterprise data science

Action Item

Wikibon recommends that developers invest in the new generation of tools that automate the DevOps lifecycle around deployment of AI-based microservices into public, private, hybrid, multi-cloud, and edge IoT clouds. In sorting through the commercial offerings in this growing segment, developers should prioritize those solutions that support strong end-to-end governance of heterogeneous data, models, microservices, and other key artifacts in the lifecycle of AI-infused applications throughout distributed computing environments.
