What is Serverless?
Serverless applications are full-featured systems that are architected without the middle tier of a typical 3-tier web application architecture. The middle tier, often referred to as the back end, contains the majority of the application’s core business logic, data manipulation, and service orchestration. Servers in the data center or cloud infrastructure run the back end components. In Serverless architectures, these back end components are pushed to the front end, handed off to specialized third-party services, or implemented as single-purpose functions that run on cloud compute services such as AWS Lambda, Google Cloud Functions, or Azure Functions.
Peter Sbarski defines 5 principles of Serverless architecture in his book Serverless Architectures on AWS.
- Use a compute service to execute code on demand (no servers).
- Write single-purpose stateless functions.
- Design push-based, event-driven pipelines.
- Create thicker, more powerful front ends.
- Embrace third-party services.
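To make the second principle concrete, here is a minimal sketch of a single-purpose, stateless function. The handler signature follows the AWS Lambda Python convention (an event dict and a context object); the function name and the image-resizing scenario are hypothetical stand-ins, not part of the source.

```python
def resize_image_handler(event, context):
    """Handle one event: validate input and return a result.

    All state arrives in the event payload; nothing is kept between
    invocations, so the platform can freely start and stop instances.
    """
    width = event.get("width")
    height = event.get("height")
    if not width or not height:
        return {"statusCode": 400, "body": "width and height are required"}
    # Real work (e.g. fetching an object from storage and resizing it)
    # would happen here; this sketch only echoes the parameters back.
    return {"statusCode": 200, "body": f"resized to {width}x{height}"}
```

Because the function holds no state of its own, any instance can serve any request, which is what allows the compute service to scale it on demand.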
Future articles will explore these 5 principles in more depth, but for now, let’s consider why embracing Serverless for part or all of your application architecture is compelling.
Why should I care?
“The Serverless Framework is a core component of The Coca-Cola Company’s initiative to reduce IT operational costs and deploy services faster.” – Patrick Brandt, Solutions Architect at The Coca-Cola Company
Deploying, running, and maintaining load-balanced server clusters and highly available databases, even with the trend toward Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), requires significant investment in IT operations. The economies of scale realized by compute and database service providers translate to cost savings for businesses that leverage these services instead of building and running them themselves. These service providers also bring the added value of technology and solution expertise.
The commoditization of common application components like authentication, payment processing, email delivery, and image processing can free development organizations to focus on their unique problem domains instead of recreating functionality already implemented by someone else. The time saved by leveraging these specialized third-party services translates to a significant reduction in development costs and time to market.
One of the realized benefits of IaaS and PaaS is the ability to horizontally scale a system without the upfront investment in physical hardware. You can write custom code or configure metric-based thresholds to scale a system up and down by one or more servers at a time. Function as a Service (FaaS) pushes capacity management even further. With FaaS, scaling is completely elastic and handled for you, and there is zero waste from idle or underutilized servers: you don’t pay for what you don’t use.
Vendor control is one of the first things to consider. In Serverless architectures you give up control to vendors in exchange for benefits like automatic scaling and not re-creating common functionality, but this trade-off has downsides. When a vendor’s service delivery degrades, applications that depend on that service suffer degraded functionality or, worse, downtime. Some vendors offer service-level agreements (SLAs) for their services, and businesses must decide whether those SLAs are acceptable for their needs.
Performance is another critical consideration. Applications that are latency-sensitive, like mission-critical banking or medical applications, are not a good fit for Serverless architectures. The decentralization of business logic into microservices that interact in response to events is great for flexibility and elastic scaling, as mentioned above, but is not a good pattern when latency is a key performance indicator.
Decentralized and distributed systems present new challenges for engineering teams that do not have experience with these paradigms. Now interactions are remote instead of in-process, network latency and slower communication protocols are the norm, and failures must be handled across a network of interconnected microservices. Tooling from FaaS vendors, best practices and patterns shared by early adopters, and a growing DevOps culture in engineering organizations are helping to overcome these challenges.
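One common pattern for handling the remote-call failures described above is retrying with exponential backoff. The sketch below is illustrative, not from the source; `call` stands in for any remote invocation (an HTTP request to another microservice, a queue publish, and so on), and the parameter defaults are arbitrary.

```python
import time


def call_with_retries(call, max_attempts=3, base_delay=0.01):
    """Invoke `call`, retrying transient failures with exponential backoff.

    Sleeps base_delay, then 2 * base_delay, doubling each attempt;
    re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Real systems typically layer jitter, timeouts, and idempotency checks on top of this basic loop, and FaaS platforms often provide built-in retry and dead-letter behavior that can replace hand-rolled code like this.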
Vendor lock-in is potentially less of an issue today, as many businesses have already tied themselves to one major vendor, but it is worth considering and putting simple abstractions in place so solutions can move from one vendor to another if needed. More interesting is building systems that leverage the best services from numerous vendors, and the challenges that come with maintaining data access, consistency, and orchestration across public clouds. New services, like FaunaDB, are specifically trying to make this a non-issue by providing cross-cloud access to data with strong consistency, low latency, and ACID guarantees.