
Wikibon Weekly Research Meeting Notes: Serverless Computing

Wikibon Weekly Research Meeting

Friday, August 25, 2017

Serverless Computing: It’s all about Functional Stateless Microservices

Presented by James Kobielus, Senior Analyst

Premise:

Serverless computing is coming to the cloud world fast, and it brings many advantages. In particular, with serverless, developers don’t have to manage the complex infrastructure typically associated with containers, virtual machines, and other underlying hardware and software. Serverless computing allows developers to build applications using functional programming languages and tools, which we believe will largely be a complement to, not a replacement for, traditional programming models. Traditional models, we believe, will remain in vogue for stateful enterprise applications in particular, while serverless will increasingly grow for stateless apps.

What is serverless computing?

Serverless computing is a cloud-oriented operating model that dynamically manages underlying infrastructure resources. Serverless is typically deployed as an architecture built on functional microservices that allow developers to invoke functions as they’re needed and to pay for resources based on what an application actually consumes (versus paying for fixed units of capacity).

Serverless still requires hardware, so the name is somewhat misleading, but the management of infrastructure resources is essentially “invisible” to application developers. Specifically, in serverless environments, developers don’t have to define the attributes of the servers. The infrastructure that supports invoked functions is managed by the cloud provider, and developers don’t need to know what’s sitting behind those functions.

Serverless can be thought of as completely pre-configured functions-as-a-service, where pricing is utility-like: functions are paid for by consumption, metered at some level of granularity (e.g., hours, minutes, or seconds).
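To make the model concrete, here is a minimal sketch of a function-as-a-service handler in the style of AWS Lambda (the event shape and the function’s purpose are hypothetical). Note that nothing in the code describes a server, operating system, or container; the platform supplies all of that when the function is invoked:

```python
# Minimal sketch of a function-as-a-service handler (AWS Lambda-style).
# The event payload and function purpose are hypothetical; the key point is
# that there is no server, container, or OS configuration anywhere in the code.

import json


def handler(event, context):
    """Invoked on demand by the platform; billed only for the time it runs."""
    name = event.get("name", "world")        # input arrives as a simple event
    body = {"greeting": f"Hello, {name}"}    # do the work of the function
    return {                                 # hand the result back to the platform
        "statusCode": 200,
        "body": json.dumps(body),
    }
```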

What are the benefits of serverless computing?

Serverless architectures are much simpler for application developers and managers. Serverless virtually eliminates the responsibility to maintain software, microcode, OS levels, etc.; developers need only worry about developing and testing a function-based offering. As such, serverless architectures are highly scalable and potentially much less expensive platforms on which to develop and maintain applications, and the compute fabrics that support serverless can be exceedingly efficient and cost-effective.
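As a rough, back-of-the-envelope illustration of the consumption-based cost point (all rates below are hypothetical placeholders, not any provider’s actual price list), compare pay-per-invocation pricing with an always-on virtual machine:

```python
# Back-of-the-envelope comparison of consumption-based vs. fixed-capacity pricing.
# All rates are hypothetical placeholders, not any cloud provider's actual prices.

invocations_per_month = 1_000_000
avg_duration_s = 0.2                 # each invocation runs for ~200 ms
gb_allocated = 0.128                 # 128 MB of memory per invocation

price_per_gb_second = 0.0000167      # hypothetical serverless compute rate
price_per_request = 0.0000002        # hypothetical per-invocation fee
vm_price_per_hour = 0.05             # hypothetical always-on VM rate

serverless_cost = (invocations_per_month * avg_duration_s * gb_allocated * price_per_gb_second
                   + invocations_per_month * price_per_request)
vm_cost = vm_price_per_hour * 24 * 30  # the VM is billed whether or not it is busy

print(f"Serverless:   ${serverless_cost:.2f}/month")
print(f"Always-on VM: ${vm_cost:.2f}/month")
```

With these placeholder rates, the serverless bill tracks actual usage, while the VM is billed around the clock regardless of load.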

Where did serverless come from?

Serverless is a relatively immature space. Amazon announced Lambda, the industry’s first serverless offering, in 2014. Other cloud vendors have followed suit, including Google with Cloud Functions, Microsoft with Azure Functions, and IBM with Bluemix OpenWhisk.

What are the main use cases for serverless?

The main use cases for serverless/function-based approaches are stateless applications and functional programming models. Examples include API publishing, query response, face recognition, and voice recognition, all of which are typical stateless apps built with functional programming models.
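Functions like these are stateless by design: each invocation carries everything it needs in the request, and nothing is remembered between calls. A minimal sketch of a query-response function follows (the payload shape and catalog data are hypothetical):

```python
# Sketch of a stateless query-response function behind an API gateway.
# The query parameters and lookup table are hypothetical; the point is that
# every invocation is self-contained, so the platform can run any number of
# copies in parallel without coordination.

import json

# In a real deployment this lookup would hit a managed database or API;
# a dict keeps the sketch self-contained.
PRODUCT_CATALOG = {
    "sku-100": {"name": "widget", "price": 9.99},
    "sku-200": {"name": "gadget", "price": 24.99},
}


def handler(event, context):
    sku = (event.get("queryStringParameters") or {}).get("sku")
    product = PRODUCT_CATALOG.get(sku)
    if product is None:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown sku"})}
    return {"statusCode": 200, "body": json.dumps(product)}
```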

Edge-oriented environments are another emerging use case for serverless computing. As edge devices capture data on certain events (e.g., an IoT device emitting readings over time), the device platform can call a function, model, or other logic to perform real-time analysis and make an on-the-fly adjustment. Notably, we believe the serverless model will be used extensively for edge applications, even those that are end-to-end, as long as these applications are stateless.
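A sketch of that edge pattern might look like the following, where a function is invoked once per device reading and returns an on-the-fly adjustment (the event fields and thresholds are hypothetical):

```python
# Sketch of an edge-style function: invoked once per IoT reading, it applies
# simple real-time logic and returns an adjustment for the device to act on.
# The event fields and thresholds are hypothetical.

TEMP_HIGH_C = 75.0   # hypothetical operating ceiling
TEMP_LOW_C = 40.0    # hypothetical operating floor


def handler(event, context):
    temperature = float(event["temperature_c"])
    device_id = event["device_id"]

    if temperature > TEMP_HIGH_C:
        action = {"device_id": device_id, "command": "throttle_down"}
    elif temperature < TEMP_LOW_C:
        action = {"device_id": device_id, "command": "resume_full_speed"}
    else:
        action = {"device_id": device_id, "command": "no_change"}

    # Because the function holds no state, any history-based logic would need
    # an external store; this keeps the example strictly stateless.
    return action
```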

We also see certain data analytics workloads, such as BI, and high-performance computing use cases (such as climate modeling, genomics, and basic scientific research) as potentially good candidates for serverless.

What are the key caveats of serverless for developers?

Serverless environments today run on shared cloud infrastructure, which means peaks, valleys, and competition for resources. As such, developers must be cognizant of managing unexpected situations as they relate to latency and error recovery. Users of serverless computing must do rigorous testing in this new environment and focus on recovery (e.g., how to deal with timeouts). Expect the service level agreements provided by cloud providers to be less rigorous for serverless than for stateful apps, at least for now.
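The latency and recovery caveat is easiest to see from the caller’s side. Below is a minimal sketch of defensive invocation over HTTP, with an explicit timeout and bounded retries (the endpoint, timeout, and retry counts are hypothetical; the same pattern applies whichever provider or SDK is used):

```python
# Sketch of calling a serverless function defensively over HTTP.
# The endpoint URL, timeout, and retry counts are hypothetical; the point is
# that the caller, not the platform, owns latency limits and error recovery.

import json
import time
import urllib.request

FUNCTION_URL = "https://example.invalid/hypothetical-function"  # placeholder endpoint
TIMEOUT_S = 3        # fail fast instead of waiting out a cold start or a busy region
MAX_ATTEMPTS = 3     # bounded retries so failures surface instead of hanging forever


def invoke_with_retries(payload: dict) -> dict:
    data = json.dumps(payload).encode("utf-8")
    request = urllib.request.Request(
        FUNCTION_URL, data=data, headers={"Content-Type": "application/json"}
    )
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            with urllib.request.urlopen(request, timeout=TIMEOUT_S) as response:
                return json.loads(response.read())
        except OSError:  # URLError, HTTPError, and socket timeouts all derive from OSError
            if attempt == MAX_ATTEMPTS:
                raise                     # recovery plan: surface the failure to the caller
            time.sleep(2 ** attempt)      # exponential backoff before the next attempt
```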

As well, by deploying multiple serverless cloud offerings, organizations can be exposed to “serverless creep.” Just as spinning up VMs and using containers extensively has created challenges for organizations, development managers must be sensitive to an explosion of serverless apps. Customers must, in our view, be wary of reaching a point where they lose track of what’s being developed within the application portfolio, which is probable precisely because of the lack of state. The risks here include compliance and audit challenges, duplicative work products, and cost overruns. As well, different clouds will support different functional languages (e.g., JavaScript vs. Python), and serverless apps may not be very portable to other clouds. This raises the potential issue of diluting skill sets across an organization, where the cloud choice wags the skills dog, versus a more deliberate and well-thought-out strategy.
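One common way to limit the portability risk, offered here as a suggestion rather than an established standard, is to keep business logic in plain, provider-neutral code and confine each cloud’s function signature to a thin adapter (the module layout and function names below are hypothetical):

```python
# Sketch of separating provider-neutral logic from provider-specific handlers
# to limit lock-in; the module layout and function names are hypothetical.

import json


def process_order(order: dict) -> dict:
    """Plain Python business logic: no cloud SDKs, trivially portable."""
    total = sum(item["qty"] * item["price"] for item in order["items"])
    return {"order_id": order["order_id"], "total": round(total, 2)}


def aws_handler(event, context):
    """Thin AWS Lambda adapter around the portable core."""
    order = json.loads(event["body"])
    return {"statusCode": 200, "body": json.dumps(process_order(order))}


def gcp_handler(request):
    """Thin Google Cloud Functions adapter (Flask-style request object)."""
    order = request.get_json()
    return json.dumps(process_order(order)), 200, {"Content-Type": "application/json"}
```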

Where do Containers and PaaS fit into this discussion?

Serverless computing leverages containers as the underlying infrastructure, but it allows developers to essentially abstract away the core container complexity. PaaS is a microservices environment by its very nature. Containerized microservices require management by developers, whereas the functional microservices associated with serverless abstract away that complexity, assuming the cloud provider is doing its job.

Action Item

Serverless is an emerging and highly useful concept for developers of cloud-based services, and Wikibon believes it is a fundamental operating model that is here to stay. Developers should begin using serverless, starting with simple use cases. In particular, we advise embracing stateless functions such as web content publishing, API notifications and alerts, and other event-driven applications; however, developers must be careful to consider recovery plans in these new environments. As always, hope for the best and plan for the worst.

 
