
Robots will rule the edge: At re:Invent, AWS drives a new AI paradigm

Edge computing means many things, and we often overlook that robotics is at the heart of it all.

Autonomous edge robots are fundamental to pretty much everybody’s vision of what the future holds in store. At the recent AWS re:Invent 2018, there were plenty of robotics-related announcements, most with a focus on the AI DevOps pipeline needed to build the intelligence that makes it all possible.

A robot is essentially a smart object that has had AI hammered into it on a workbench, sort of like what the fictional Geppetto did for his wooden boy Pinocchio. One thing that’s been sorely lacking in today’s AI marketplace is a dominant smart-objects workbench for building, training, deploying, and managing the algorithmic smarts needed for robotics to become a truly mass phenomenon.

But that’s exactly what AWS rolled out at re:Invent 2018, though you’d be forgiven for overlooking that fact, considering that it was buried in the blizzard of announcements that Andy Jassy and crew let loose on the world. Even the vendor didn’t seem to grasp the full significance of several announcements it made to that effect.

Echoing that confusing message, some participants in this week’s Wikibon post-re:Invent CrowdChat scratched their heads over what I considered a very significant AWS announcement that points to the evolving shape of that AI smart object workbench in the cloud. I’m referring to AWS DeepRacer, which is a tiny, but highly functional, AI-driven autonomous vehicle.

Now in limited preview and available for pre-order, DeepRacer is a fully autonomous toy race car. It’s essentially an intelligent robot that comes equipped with all-wheel drive, monster truck tires, a high-definition video camera, and on-board compute.

Yes, on the surface, DeepRacer sounds as trivial as can be, and that’s why two other chatters singled it out as the most “ho-hum” announcement at re:Invent:

  • Kenneth Hui: “The DeepRacer made a big splash initially. But in talking with fellow attendees, most of them thought of it as a free toy for themselves or for their kids. :)”
  • Maish Saidel-Keesing: “I would say deepracer – it was cool – but what is the business use case for such a service / device ?….So it is a nice toy that can solve an AI problem – I understand the logic behind it – but announcing a new “product” just to prove the point – to me seemed flaunting their dominance.”

However, these comments view DeepRacer out of context of other smart objects that Amazon has been bringing to market, such as the AI-driven DeepLens smart camera announced at last year’s re:Invent and, of course, the wildly popular Echo devices that have brought Alexa, the AI-driven conversational UI, into so many homes. In the sphere of autonomous vehicles, AWS positions DeepRacer much the same way it presents DeepLens: as an AI prototyping platform that puts AI “in the hands of developers, literally.” Like DeepLens, the new tiny autonomous vehicle is fully programmable and comes with tutorials, code, and pre-trained models to accelerate development of a specific type of smart object.

Developer-ready smart objects such as DeepRacer, DeepLens, and the Echo family represent a paradigm shift in AI development for the edge. Going forward, more AI-infused edge applications, including robotics for consumer and business uses, will be developed on workbenches that sprawl across both physical platforms such as these devices and virtual workspaces in the cloud.

As this trend intensifies, more data scientists will begin to litter their physical workspace with a menagerie of AI-infused devices for demonstration, prototyping, and even production development purposes. We’re moving toward a world in which IoT edge devices become the predominant workbench for advanced AI applications that can operate autonomously. The AI DevOps ecosystem will evolve to accelerate the DevOps workflow that graduates smart objects into production deployments.

Reinforcement learning (RL), a cross-cutting theme in many of AWS’ AI-related announcements last week, is the common thread in this paradigm shift. RL refers to the methodology, algorithms, and workflows in which AI is built and trained through trial and error in a simulator; historically, it has been applied chiefly to robotics, gaming, and similar development initiatives. Beyond those core use cases, RL is increasingly being used to supplement supervised and unsupervised learning in many deep learning initiatives.
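To make that trial-and-error workflow concrete, here is a minimal, self-contained sketch of tabular Q-learning against a toy simulator. The Environment class and all of its numbers are hypothetical stand-ins for whatever simulator and state space a real robotics project would use.

```python
import random

class Environment:
    """Toy one-dimensional simulator: the agent starts at 0 and tries to reach 10."""

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.position += action
        reward = 1.0 if self.position == 10 else -0.01  # small cost per step
        done = self.position == 10
        return self.position, reward, done


def run_episode(env, q_table, actions=(-1, 1), epsilon=0.1,
                alpha=0.5, gamma=0.9, max_steps=500):
    """One episode of tabular Q-learning: act, observe reward, update estimates."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q_table.get((state, a), 0.0))
        next_state, reward, done = env.step(action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q_table.get((next_state, a), 0.0) for a in actions)
        old = q_table.get((state, action), 0.0)
        q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state
        total_reward += reward
        if done:
            break
    return total_reward


q_table = {}
for episode in range(200):
    run_episode(Environment(), q_table)
```

The same loop, scaled up with deep networks and massively parallel simulation, is essentially what managed services along the lines of SageMaker RL and RoboMaker run on developers’ behalf.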

Going forward, more AI practitioners will shift toward new RL-oriented workbenches that execute all or most DevOps pipeline functions, including distributed training, in the smart objects themselves. Increasingly, data scientists and other developers are being called on to pour data-driven algorithmic intelligence into a wide range of interconnected smart objects.

What I found most noteworthy in AWS’ announcements last week was the company’s deepening investment in both the intangible cloud services and the tangible devices needed to flesh out this smart-object workbench. To see the emerging shape of the RL-driven AI development ecosystem, let’s consider the relevant solution announcements that AWS made at re:Invent 2018:

  • RL in edge-AI modeling and training: To support developers who may never have applied RL to an AI project, AWS announced general availability of SageMaker RL, a new module for its managed data-science toolchain. AWS’ launch of SageMaker RL shows that the mainstreaming of RL is picking up speed. This new fully managed service is the cloud’s first managed RL offering for AI development and training pipelines. It enables any SageMaker user to build, train, and deploy robotics and other AI models through any of several built-in RL frameworks, including Intel Coach and Ray RLlib, and it leverages any of several simulation environments, including Simulink and MATLAB (a minimal launch sketch follows this list).
  • RL in edge-AI simulation: SageMaker RL integrates with the newly announced AWS RoboMaker managed service, which provides a simulation platform for RL in intelligent robotics projects. RoboMaker provides an AWS Cloud9-based robotics integrated development environment for modeling and large-scale parallel simulation, and it extends the open-source Robot Operating System (ROS) with connectivity to such AWS services as machine learning, monitoring, and analytics, enabling robots to stream data, navigate, communicate, comprehend, and learn. It works with the OpenAI Gym RL environment as well as with the Amazon Sumerian mixed-reality solution.
  • RL in AI DevOps: With RoboMaker, AI robotics developers can start application development with a single click in the AWS Management Console, and the service automatically provisions trained models into production in the target robotics environment at the edge or in the IoT infrastructure. AWS RoboMaker supports over-the-air deployment, update, and management of robotics fleet applications in integration with AWS Greengrass. Its cloud extensions for ROS include Amazon Kinesis Video Streams ingestion, Amazon Rekognition image and video analysis, Amazon Lex speech recognition, Amazon Polly speech generation, and Amazon CloudWatch logging and monitoring. The new AWS IoT SiteWise, available in preview, is a managed service that collects data from distributed devices, structures and labels the data, and generates real-time key performance indicators and metrics to drive better decisions at the edge.
  • RL in cross-edge application composition: The new AWS IoT Things Graph, available in preview, enables developers to build IoT applications by representing devices and cloud services (such as training workflows in SageMaker RL) as reusable models that can be combined through a visual drag-and-drop interface, instead of writing low-level code. IoT Things Graph provides a visual way to represent complex real-world systems. It deploys IoT applications to the edge on devices running AWS Greengrass so that applications can respond more quickly, even when not connected to the Internet.
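As a rough illustration of how the SageMaker RL piece of this pipeline is driven, the following sketch launches a managed RL training job through the SageMaker Python SDK. The entry-point script, IAM role, and hyperparameter here are placeholders, and exact parameter names vary across SDK versions.

```python
# Rough sketch: kicking off a managed RL training job with the SageMaker
# Python SDK. The training script, IAM role, and hyperparameters are
# placeholders; parameter names vary across SDK versions.

from sagemaker.rl import RLEstimator, RLToolkit, RLFramework

estimator = RLEstimator(
    entry_point="train_robot.py",          # hypothetical RL training script
    toolkit=RLToolkit.COACH,               # Intel Coach, one of the built-in toolkits
    toolkit_version="0.11.0",
    framework=RLFramework.TENSORFLOW,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    train_instance_count=1,
    train_instance_type="ml.c5.2xlarge",
    hyperparameters={"rl.training.max_episodes": 2000},   # hypothetical knob
)

# Runs the simulation-driven training job in the managed service; the trained
# model artifact lands in S3, ready to be provisioned to an edge device.
estimator.fit()
```

From there, the RoboMaker and Greengrass path described above handles pushing the resulting model out to the physical fleet.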

This is where I bring DeepRacer back into the discussion. The AI model that powers DeepRacer’s autonomous operation was programmed, built, and trained in SageMaker RL. AWS also launched what it calls “the world’s first global autonomous racing league” so that DeepRacer developers can benchmark their RL-powered prototypes against each other.

It’s very likely that distributed graph models, perhaps built in AWS IoT Things Graph, will become an essential canvas for developing the AI that animates complex, multi-device edge and robotics deployments. Without graph technology, developers will find it difficult to compose and monitor the distributed RL training workflows needed to yoke fleets of smart objects into coordinated collectives.

At some point in the future, it will probably become necessary for AWS to bring its “infrastructure as code” tool, CloudFormation, into its smart-object AI-at-the-edge cloud workbench strategy. As an organizing framework for DevOps-style cloud management, infrastructure as code eliminates the need for IT professionals to touch physical IT platforms, access cloud providers’ management consoles, log in to infrastructure components, or make manual configuration changes or run one-off scripts to make adjustments.

The scalability, speed, and efficiency of doing DevOps this way will become essential for managing the pipeline of AI-app updates being pushed constantly to millions if not trillions of smart objects at the edge. As an alternative to traditional IT change and configuration management, infrastructure as code involves writing templates (the “code”) that declaratively describe the desired state of a new infrastructure component, such as a distributed graph of smart objects. Within IT management tooling that leverages underlying DevOps source control, the template drives creation of a graph of what the cloud infrastructure is supposed to look like. The tooling then detects where the deployed infrastructure diverges from that graph and redeploys as needed so that the end-to-end deployed infrastructure converges on the declared state.
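Here is a hedged sketch of that pattern, assuming a fleet of smart objects modeled as AWS IoT things: the CloudFormation template declares the desired state, and boto3 calls hand it to the service, which converges the deployed resources toward it. The stack name and thing names are placeholders.

```python
# Hedged sketch of infrastructure as code for a smart-object fleet. The
# template declares the desired state (two AWS IoT things); CloudFormation
# converges deployed infrastructure toward that state on create and update.

import boto3

FLEET_TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  EdgeRobot01:
    Type: AWS::IoT::Thing
    Properties:
      ThingName: edge-robot-01
  EdgeRobot02:
    Type: AWS::IoT::Thing
    Properties:
      ThingName: edge-robot-02
"""

cfn = boto3.client("cloudformation")

# Declare the desired state; the service computes and applies the changes.
cfn.create_stack(StackName="edge-robot-fleet", TemplateBody=FLEET_TEMPLATE)

# Later, edit the template and call update_stack: the deployed fleet then
# converges on the new declared state, with no per-device manual configuration.
# cfn.update_stack(StackName="edge-robot-fleet", TemplateBody=NEW_TEMPLATE)
```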

Declarative specification and iterative convergence are what RL is all about, based on trial-and-error algorithmic workflows that aim to maximize a cumulative reward function. It would be interesting to see what sorts of optimized interaction patterns AI-driven autonomous smart objects will develop using RL. Most likely, future smart objects will be trained through outcome-focused reward functions that are specified declaratively. One example of such a reward function, geared to distributed edge robotics, might specify that an RL-driven graph find the lowest-latency path between an arbitrary number of intelligent edge devices under various end-to-end network-loading scenarios, all while maintaining payload transparency and message traceability. A hypothetical sketch of such a function follows.
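This sketch is purely illustrative, written in the style DeepRacer popularized: a plain Python function over a dict of observations. Every key in params is an assumed observation for this imagined edge-routing scenario, not a real AWS interface.

```python
# Hypothetical reward function for the lowest-latency edge-routing goal
# described above. All observation keys are assumptions for this sketch.

def reward_function(params):
    latency_ms = params["end_to_end_latency_ms"]   # measured latency of the chosen path
    baseline_ms = params["best_known_latency_ms"]  # best latency observed so far
    payload_intact = params["payload_intact"]      # payload transparency check
    trace_complete = params["trace_complete"]      # message traceability check

    # Hard constraints first: violating payload transparency or message
    # traceability yields no reward, no matter how fast the path is.
    if not (payload_intact and trace_complete):
        return 0.0

    # Otherwise, reward shrinks smoothly as latency exceeds the best known path,
    # pushing the RL-driven graph toward lower-latency routes.
    return float(baseline_ms / max(latency_ms, baseline_ms))
```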

If we’re going to entrust AI-driven smart robots to drive our cars and the rest of our cloud-besotted world, we need workbenches suited to these edge challenges. It seems to me that AWS has laid those foundations.
