Welcome to “Serverless Superheroes”. In this space, I chat with the innovators, toolmakers, and developers who are navigating the brave new world of “serverless” cloud applications. For today’s edition, I chatted with Adam Johnson, the cofounder of IOpipe. The following interview has been edited and condensed for clarity.
I had the great privilege of speaking at ServerlessConf in Austin a couple of weeks ago. The conference is a community event run by the fine folks at A Cloud Guru, but you’d never know that they do other things with their time besides plan conferences, because the logistics were practically flawless. Perfect size (about 400 attendees), great food and a cool venue near downtown Austin made for a fun couple of days. Both the quality of sessions and the technical chops of attendees seemed exceptionally high, leading to lots of thought-provoking content and productive hallway conversations. My only criticism of the event concerns the pacing: the organizers found a way to cram forty sessions into just two days, and the human brain can only absorb so much information before it starts to check out.
Fortunately, all the sessions are now available on YouTube for further review. Here are my top five takeaways from the conference, as well as a few of my favorite sessions.
1. In the land of “No Ops”, ops is still king
Creating an app with serverless technologies is superficially easy, but actually deploying, testing, monitoring and debugging that app in production can be a nightmare. Without insight into the underlying services, you have less control over what breaks and less ability to fix it, and the ecosystem of tools that might help is still pretty thin. Nobody puts a finger on this problem better than DevOps legend Charity Majors, whose session was a freewheeling, electrifying rant on the folly of assuming that “going serverless” means you don’t have to think about traditional ops considerations anymore. If anything, getting rid of the in-house ops team removes the veil between developers and their own code: if something you wrote stops working in production, you’d better be prepared to fix it yourself. Unless you’ve hit a problem in the underlying services, in which case your app is completely beholden to somebody else’s dev cycle – a very real possibility that is not to be brushed off lightly.
AWS Lambda functions can only run for a maximum of five minutes. This must be distinctly understood, or nothing wonderful can come of the story you are about to hear.
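That hard limit means long-running work has to watch the clock. Here is a minimal Python sketch of the pattern; Lambda’s real context object does expose `get_remaining_time_in_millis()`, but the ten-second safety margin and the completed/remaining payload shape are assumptions of mine, not AWS conventions:

```python
TIMEOUT_BUFFER_MS = 10_000  # assumed safety margin: stop when under 10 s remain

def handler(event, context):
    """Process items until Lambda's five-minute limit approaches, then return early.

    Lambda's context object provides get_remaining_time_in_millis(); the
    partial-result shape returned here is a hypothetical convention so that a
    follow-up invocation can pick up where this one left off.
    """
    done = []
    items = event.get("items", [])
    for item in items:
        if context.get_remaining_time_in_millis() < TIMEOUT_BUFFER_MS:
            # Hand unfinished items back so the caller can re-invoke with them.
            return {"completed": done, "remaining": items[len(done):]}
        done.append(item * 2)  # stand-in for a real workflow step
    return {"completed": done, "remaining": []}
```

In practice some driver outside the function would re-invoke it with the `remaining` payload until nothing is left.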
This past summer, my team and I set out to build an internal software system used for deployment testing on AWS. The application would run a large number of workflow executions in parallel each night and might perform a few one-off executions during the day – maybe six hours total use out of every twenty-four, with only a small fraction of that time spent doing actual compute tasks. Trying to scale, manage and spend money on EC2 instances for that workload didn’t interest us. We wanted to run our whole workflow process end-to-end on AWS Lambda.
And we did. Heaven help us, we did. This is our story.
The open source Serverless project, which currently has nearly 10,000 stars on GitHub, provides tooling around AWS’s “Function as a Service” ecosystem that includes Lambda and API Gateway. I recently had the opportunity to chat with Florian Motlik, CTO of Serverless, about his thoughts on serverless architectures and the future of the Serverless framework.
The following interview has been edited and condensed.
Forrest: Although AWS Lambda is less than two years old, we’re already seeing a robust tooling ecosystem appear around it, including the Serverless Framework. How did the Serverless project get started?
Florian: Austen Collins, our founder, started Serverless about a year ago. In his previous life as a consultant, he worked with AWS Lambda while building various applications. Austen saw two things about Lambda that made a huge difference for him. First, it enables you to build applications without having to maintain infrastructure. And as someone who had to maintain infrastructure in the past, he saw that this was a really interesting direction for the industry to go. Second, Lambda enables an event-driven architecture, where you just react to events that can be fired from anywhere to anywhere. Austen also saw that although Lambda was very powerful, its lack of tooling made it hard for new users to get started. So, about a year ago he started building the Serverless framework. The project took off right away, and towards the end of last year he decided that this was not just an open source framework; it was something we could build a company around. So that’s when I was brought on as the CTO to lead our engineering team, and we grew from there.
Pester and CI
If you’re doing Windows scripting in 2016, you’d better be using PowerShell. And if you’re writing PowerShell scripts, you’d better be checking them into source control and covering them with Pester tests.
It turns out that you can do more with Pester than just run tests manually at the console. As part of a continuous integration (CI) process, you may want to invoke Pester tests on a remote server and report the results up through the build chain. Handily, you can export Pester test output in an NUnit XML format that modern CI systems like Jenkins understand.
But what if you’re not using a build server to invoke Pester? What if your CI setup is … dun dun dun … “serverless”?
This cookbook is still in progress and will grow over time.
Lambda, AWS’s bite-size “serverless” compute service, is mostly awesome. However, good documentation for it is still relatively scarce.
I’ve been using Lambda a lot lately, meaning I’ve had a lot of browser tabs open trying to find examples of the latest features like VPC support, CloudFormation integration and Python 2.7 functions. In this post, I’ll try to save you some time by sharing examples of a few things that have sent me searching.
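For readers who haven’t written one yet, the basic shape of a Python Lambda function is just a module-level handler that Lambda invokes with an event dict and a context object. The `"name"` field below is made up for illustration and isn’t part of any AWS event format:

```python
def handler(event, context):
    """Entry point Lambda invokes, configured as <module>.handler.

    `event` is whatever payload the trigger passed in; `context` carries
    runtime metadata such as the request ID and remaining execution time.
    """
    name = event.get("name", "world")  # "name" is an illustrative field
    return {"message": "Hello, %s!" % name}
```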
What’s that old schoolyard rhyme? “AWS and Azure, sitting in a tree, I-A-A-S, P-A-Y-G. First come VMs, then containers, then come stateless microservices running on public cloud infrastructure at fractions of a cent per second.” Or something like that.
Anyway, application deployments are getting lighter, backend microservices are getting smaller, and now many development shops are moving toward “serverless architectures” in which dynamic computational tasks are handled using a few cycles on somebody else’s managed server. As of 2016, the public cloud giants (AWS, Google Cloud and Microsoft Azure) all have their own “serverless services” that allow you to buy processing time for cheap. And I do mean cheap – a million AWS Lambda requests per month, each lasting five seconds, will set you back about $10.62.
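The arithmetic behind that figure, using Lambda’s published 2016 prices ($0.20 per million requests plus $0.00001667 per GB-second of compute) and assuming the minimum 128 MB memory size with the free tier ignored, can be sketched as:

```python
REQUEST_PRICE = 0.20 / 1_000_000   # dollars per request
GB_SECOND_PRICE = 0.00001667       # dollars per GB-second of compute

def monthly_cost(requests, seconds_each, memory_gb=0.125):
    """Back-of-envelope Lambda bill: flat request charge plus metered compute."""
    compute = requests * seconds_each * memory_gb * GB_SECOND_PRICE
    return requests * REQUEST_PRICE + compute

# A million 5-second, 128 MB invocations:
# 0.20 + 1_000_000 * 5 * 0.125 * 0.00001667 ≈ $10.62
```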
Developers gravitate toward this approach because it’s scalable, cost-effective and requires little to no infrastructure maintenance. In AWS, you might deploy an application with data stores in RDS or DynamoDB, static web content hosted in S3, an API Gateway directing traffic and Lambda functions running the business rules – look Mom, no servers!
But wait a minute. Is a pay-as-you-go public cloud really the only place to run serverless compute functions? After all, a handful of computer scientists have been running little pieces of code on distributed computers for years, at a price even Lambda will never beat: free.