Intro to Serverless Applications

Forest Carlisle

What is Serverless?

Serverless applications are full-featured systems that are architected without the middle tier of a typical 3-tier web application architecture. The middle tier, often referred to as the back end, contains the majority of the application's core business logic, data manipulation, and service orchestration. Servers in the data center or cloud infrastructure run the back-end components. In Serverless architectures, these back-end components are pushed to the front end, handed off to specialized third-party services, or implemented as single-purpose functions that run on cloud compute services such as AWS Lambda, Google Cloud Functions, or Azure Functions.
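
To make that concrete, here is a minimal sketch of what one of those single-purpose functions can look like. It assumes a Node.js runtime on AWS Lambda invoked through an HTTP endpoint (for example, API Gateway's proxy integration); the signup scenario, names, and payload shape are purely illustrative.

```javascript
// A single-purpose function: validate and accept a newsletter signup.
// The application team deploys only this code; no server is provisioned or managed.
exports.handler = async (event) => {
  const { email } = JSON.parse(event.body || "{}");

  if (!email || !email.includes("@")) {
    return { statusCode: 400, body: JSON.stringify({ error: "A valid email is required" }) };
  }

  // A real implementation would hand the address to a managed database or email service.
  console.log(`Signing up ${email}`);

  return { statusCode: 201, body: JSON.stringify({ message: "Signed up" }) };
};
```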

Peter Sbarski defines 5 principles of Serverless architecture in his book Serverless Architectures on AWS.

  1. Use a compute service to execute code on demand (no servers).
  2. Write single-purpose stateless functions.
  3. Design push-based, event-driven pipelines.
  4. Create thicker, more powerful front ends.
  5. Embrace third-party services.

Future articles will explore these 5 principles in more depth, but for now, let’s consider why embracing Serverless for part or all of your application architecture is compelling.
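
As a small taste of principles 2 and 3, the sketch below shows a stateless, single-purpose function driven by a push-based event rather than a request/response call. It assumes a Node.js runtime on AWS Lambda subscribed to S3 object-created notifications; the logging step stands in for whatever processing a real pipeline would do.

```javascript
// Event-driven and push-based: the storage service invokes this function
// whenever a new object is created, so there is nothing to poll and no state to keep.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    const size = record.s3.object.size;

    // A real pipeline might resize an image, index a document, or emit a follow-up event here.
    console.log(`New object ${key} (${size} bytes) in ${bucket}`);
  }
};
```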

Why should I care?

“The Serverless Framework is a core component of The Coca-Cola Company’s initiative to reduce IT operational costs and deploy services faster.” – Patrick Brandt, Solutions Architect at The Coca-Cola Company

Deploying, running, and maintaining load-balanced server clusters and highly available databases, even with the trend toward Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), requires significant investment in IT operations. The economies of scale realized by compute and database service providers translate into cost savings for businesses that leverage these services instead of building and running them themselves. These service providers also bring the added value of technology and solution expertise.

The commoditization of common application components like authentication, payment processing, email delivery, and image processing can free development organizations to focus on their unique problem domains instead of recreating functionality already implemented by someone else. The time saved by leveraging these specialized third-party services translates to a significant reduction in development costs and time to market.
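
To make "embrace third-party services" concrete, here is a hedged sketch of a function that hands email delivery off to an external provider over plain HTTPS instead of running mail infrastructure. The endpoint, API key variable, and payload shape are hypothetical placeholders rather than any specific vendor's API, and a Node.js 18+ runtime (with a global fetch) is assumed.

```javascript
// Delegate email delivery to a third-party service.
// EMAIL_API_URL and EMAIL_API_KEY are hypothetical placeholders for a real provider.
exports.handler = async (event) => {
  const { to, subject, text } = JSON.parse(event.body || "{}");

  const response = await fetch(process.env.EMAIL_API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.EMAIL_API_KEY}`,
    },
    body: JSON.stringify({ to, subject, text }),
  });

  return { statusCode: response.ok ? 202 : 502, body: JSON.stringify({ delivered: response.ok }) };
};
```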

One of the realized benefits of IaaS and PaaS is the ability to horizontally scale a system without an upfront investment in physical hardware. You can write custom code or configure thresholds based on metrics to scale a system up and down by one or more servers at a time. Function as a Service (FaaS) pushes capacity management even further: scaling is completely elastic and handled for you, and there is no waste from idle or underutilized servers. You don't pay for what you don't use.
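
A rough back-of-the-envelope calculation shows what "pay only for what you use" can look like. The rates below are illustrative assumptions in the spirit of AWS Lambda's published per-request and per-GB-second pricing; check your provider's current price list before relying on them.

```javascript
// Illustrative FaaS cost estimate: billed per request and per GB-second of compute.
const requestsPerMonth = 3000000;
const avgDurationSeconds = 0.2;        // 200 ms per invocation
const memoryGb = 0.5;                  // 512 MB allocated

const pricePerMillionRequests = 0.20;  // USD, illustrative
const pricePerGbSecond = 0.0000166667; // USD, illustrative

const requestCost = (requestsPerMonth / 1000000) * pricePerMillionRequests;
const computeCost = requestsPerMonth * avgDurationSeconds * memoryGb * pricePerGbSecond;

console.log(`Roughly $${(requestCost + computeCost).toFixed(2)} per month`); // about $5.60
```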

Considerations

Vendor control is one of the first things to consider. In Serverless architectures you give up control to vendors in exchange for benefits like automatic scaling and not re-creating common functionality, but this trade-off has downsides as well. When a vendor's service delivery is degraded, applications that depend on those services suffer degraded functionality or, worse, downtime. Some vendors offer service-level agreements (SLAs) for their services, and businesses must decide whether those SLAs are acceptable for their needs.

Performance is another critical consideration. Applications that are latency-sensitive, like mission-critical banking or medical applications, are not a good fit for Serverless architectures. The decentralization of business logic into microservices that interact in response to events is great for flexibility and elastic scaling, as mentioned above, but it is not a good pattern when latency is a key performance indicator.

Decentralized and distributed systems present new challenges for engineering teams that do not have experience with these paradigms. Interactions are now remote instead of in-process, network latency and slower communication protocols are the norm, and failures must be handled across the network of interconnected microservices. Tooling from FaaS vendors, best practices and patterns shared by early adopters, and a growing DevOps culture in engineering organizations are helping to overcome these challenges.
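
One concrete example of that shift: a call that used to be an in-process method invocation becomes a network request that can stall or fail, so callers need explicit timeouts and bounded retries. The sketch below is a generic pattern, not any vendor's prescribed approach; it assumes a Node.js 18+ runtime with a global fetch, and the target URL is a placeholder.

```javascript
// Calling another microservice across the network: fail fast with a timeout and
// retry a bounded number of times with exponential backoff.
async function callWithRetry(url, payload, { retries = 3, timeoutMs = 2000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
        signal: AbortSignal.timeout(timeoutMs), // do not hang on a degraded service
      });
      if (response.ok) return response.json();
      throw new Error(`HTTP ${response.status}`);
    } catch (err) {
      if (attempt === retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt)); // backoff
    }
  }
}
```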

Vendor lock-in is potentially less of an issue today, as many businesses have already tied themselves to one major vendor, but it is worth considering and putting simple abstractions in place so solutions can move from one vendor to another if needed. More interesting is building systems that leverage the best services from numerous vendors and the challenges that exist in maintaining data access, consistency, and orchestration across public clouds. New services, like FaunaDB, are specifically trying to make this a non-issue by providing cross-cloud access to data with strong consistency, low latency, and ACID guarantees.
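
One lightweight way to keep that option open is to hide a vendor-specific service behind a small interface that your own code owns, so only one adapter changes if you switch providers. The sketch below is an illustrative pattern with hypothetical names, not a complete portability layer.

```javascript
// A thin, application-owned abstraction over "store and fetch a document" so the
// vendor-specific client lives in a single adapter module.
class DocumentStore {
  constructor(adapter) {
    this.adapter = adapter; // e.g. a DynamoDB, Firestore, or FaunaDB adapter
  }
  save(collection, id, doc) {
    return this.adapter.save(collection, id, doc);
  }
  get(collection, id) {
    return this.adapter.get(collection, id);
  }
}

// In-memory adapter: handy for tests and a template for real vendor adapters.
class InMemoryAdapter {
  constructor() { this.data = new Map(); }
  async save(collection, id, doc) { this.data.set(`${collection}/${id}`, doc); }
  async get(collection, id) { return this.data.get(`${collection}/${id}`); }
}

const store = new DocumentStore(new InMemoryAdapter());
```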


4 Comments
  • Supriya Rajgopal
    Posted at 09:41h, 18 June

    Firstly, thank you for your insight on Serverless applications.

    Do you mind stating a typical example that would fit the bill here? Is there a tech stack that is most suitable for this kind of architecture? Like, a Node JS with NoSQL application or something like that?

    • Forest Carlisle
      Posted at 16:30h, 18 June

      Some use cases that match serverless very well are mobile backends and web applications where much of the application's capabilities can be implemented on the frontend using a single-page application (SPA) architecture. In both of these cases, the backend APIs for sending emails, processing payments, and importing data can be implemented as single-purpose functions in technologies like Google Cloud Functions and AWS Lambda. Something you would build with a Node JS backend, an SPA, and a NoSQL DB is often a good fit for serverless.

  • Bhanu Sireesha Gunda
    Posted at 11:12h, 21 June

    Very nice article, Forest Carlisle.

    Is it possible to implement a three-tier architecture including middleware like Java, then package the code along with the servers as an image, publish it to a cloud Docker service, and pay the vendor depending on image usage time?

    • Forest Carlisle
      Posted at 21:41h, 21 June

      Currently, I don't know of any cloud solutions where container-based deployments are billed by usage. Container orchestrators like Kubernetes do allow for better utilization of the underlying hosts, but you still need to pay for the running hosts by the hour or minute. AWS Lambda supports JavaScript, Java, C#, and Python, while Google Cloud Functions supports only JavaScript.
