
Serverless Feature Flags 🚀

Serverless Advocate · Published in Level Up Coding · 6 min read · Jun 27, 2021


When working with configuration in the Serverless World on AWS Cloud, you typically have a few options for storing and accessing your config, in particular when working with dynamic feature flags. This blog post discusses the advantages of using AWS AppConfig alongside the AWS AppConfig Lambda Extension to get the power of feature flags natively without a 3rd party service. The basic code repo can be found here. 🙇‍♂️

Firstly, what is a ‘feature flag’?

Feature flags/toggles are described in a Martin Fowler article as follows:

Feature Toggles (often also referred to as Feature Flags) are a powerful technique, allowing teams to modify system behavior without changing code. They fall into various usage categories, and it’s important to take that categorisation into account when implementing and managing toggles.

There are of course many 3rd party software providers offering feature flags as a service, with one of the main ones being LaunchDarkly. This article discusses how you can do this natively using the Serverless Framework and native AWS services.

Feature flags are typically used to toggle a feature on or off in code dynamically to improve development speed and de-risk deployments, so that you can deploy your code and enable/disable the feature independently of each other. Some use cases are:

  • A new customer-facing feature.
  • Refactoring of existing business logic.
  • A new non-customer-facing feature.

This very often allows teams to perform A/B testing and gradual rollouts of new features silently, gaining confidence in production before releasing them fully.

It makes a rollback very simple without having to redeploy code or infrastructure, reducing the risk of backing out deployments. It also allows teams to deploy incomplete features to production over time (especially across various serverless services), and to make data-driven decisions off the back of these small releases, rather than taking a big-bang approach.
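As a trivial illustration of the idea (the flag name isNewFeatureEnabled and the handler logic are purely hypothetical, not taken from the article's repo), toggling behaviour in code looks something like this:

```typescript
// hypothetical flag check - the flag name and code paths are illustrative only
const handleRequest = (flags: { isNewFeatureEnabled: boolean }): string => {
  if (flags.isNewFeatureEnabled) {
    return 'new feature code path'; // rolled out silently behind the flag
  }
  return 'existing code path'; // safe fallback when the flag is switched off
};
```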

How does AWS AppConfig help?

Typically when accessing configuration data in your lambdas you will do one of the following:

  1. Use the Serverless Framework's native ssm: variable reference to pull in configuration from Parameter Store at build time (see the snippet after this list). The problem with this approach is that the configuration is static (embedded at package build time), and although that may be fine for config which very rarely changes, it is not very good for feature flags.
  2. Use Parameter Store, but pull the config in code every time the lambda invokes (no caching). The configuration is no longer static; however, pulling it from AWS Parameter Store on every invocation is costly.
  3. As above, but using S3 to store the configuration details. Again, this means that you need to read from S3 on every invocation, and in my opinion configuration in S3 is not as easy to change as it is in Parameter Store.
  4. In the most basic of cases you can use environment variables within the lambda; however, these are prone to human error when changing the values, provide no audit of what has changed, and are not easy to change outside of the console.
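For reference, a minimal sketch of option 1's build-time approach (the parameter name and function are illustrative); the value is resolved once, when the service is packaged, so later changes to the parameter have no effect until you redeploy:

```yaml
# sketch of option 1 - the SSM parameter value is baked in at package/deploy time
functions:
  getFeature:
    handler: src/get-feature.handler
    environment:
      FEATURE_FLAGS: ${ssm:/my-service/feature-flags}
```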

AppConfig Lambda Layer Extension to the rescue!

AWS AppConfig allows you to store your configuration in one of four places:

  1. AWS Parameter Store.
  2. AWS AppConfig Hosted.
  3. AWS S3.
  4. AWS CodePipeline.

AppConfig also allows you to create an application with a set of configurations and deployments, so teams can deploy updated configuration to a set of targets very easily (as well as revert to a previously deployed configuration).

The AppConfig Lambda Extension makes the integration between Lambda and AppConfig seamless through the use of Lambda Extensions. The extension automatically builds a cache of your config alongside the running lambda, only pulling the latest config from AppConfig on a configurable polling interval. This means that the configuration is updated across all of your invoked lambda functions very quickly, without the overheads and limitations detailed in the previous section, and without having to redeploy any code!

AWS described Lambda Extensions as follows when the preview was announced (8th October 2020):

AWS Lambda is announcing a preview of Lambda Extensions, a new way to easily integrate Lambda with your favorite monitoring, observability, security, and governance tools. In this post I explain how Lambda extensions work, how you can begin using them, and the extensions from AWS Lambda Ready Partners that are available today.

Extensions help solve a common request from customers to make it easier to integrate their existing tools with Lambda. Previously, customers told us that integrating Lambda with their preferred tools required additional operational and configuration tasks. In addition, tools such as log agents, which are long-running processes, could not easily run on Lambda.
https://aws.amazon.com/blogs/compute/introducing-aws-lambda-extensions-in-preview/

OK, show me some code! 😜

The serverless.yml snippet below shows a basic configuration for adding the feature flag through the use of AppConfig and the Lambda extension, as well as the first configuration value:
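The original embedded snippet is not reproduced here, so the following is a minimal sketch of what such a serverless.yml could look like. The service, resource and flag names are illustrative, a Node.js runtime is assumed, and the AppConfig Lambda extension layer ARN is a placeholder for the region-specific ARN listed in the AWS docs:

```yaml
# minimal sketch only - names, IDs, runtime and the layer ARN are illustrative
service: serverless-feature-flags

provider:
  name: aws
  runtime: nodejs14.x
  iamRoleStatements:
    - Effect: Allow
      Action:
        - appconfig:GetConfiguration # allow the extension to fetch the config
      Resource: '*'

functions:
  getFeature:
    handler: src/get-feature.handler
    layers:
      # region-specific AWS AppConfig Lambda extension layer (ARN from the AWS docs)
      - arn:aws:lambda:<region>:<account>:layer:AWS-AppConfig-Extension:<version>
    environment:
      AWS_APPCONFIG_EXTENSION_POLL_INTERVAL_SECONDS: '30' # how often the local cache refreshes
    events:
      - http:
          path: feature
          method: get

resources:
  Resources:
    FeatureFlagsApplication:
      Type: AWS::AppConfig::Application
      Properties:
        Name: my-app
    FeatureFlagsEnvironment:
      Type: AWS::AppConfig::Environment
      Properties:
        ApplicationId:
          Ref: FeatureFlagsApplication
        Name: prod
    FeatureFlagsProfile:
      Type: AWS::AppConfig::ConfigurationProfile
      Properties:
        ApplicationId:
          Ref: FeatureFlagsApplication
        Name: features
        LocationUri: hosted # use the AppConfig hosted configuration store
    FeatureFlagsVersion:
      Type: AWS::AppConfig::HostedConfigurationVersion
      Properties:
        ApplicationId:
          Ref: FeatureFlagsApplication
        ConfigurationProfileId:
          Ref: FeatureFlagsProfile
        ContentType: application/json
        Content: '{ "isNewFeatureEnabled": true }' # the feature flags themselves
    FeatureFlagsDeploymentStrategy:
      Type: AWS::AppConfig::DeploymentStrategy
      Properties:
        Name: instant
        DeploymentDurationInMinutes: 0
        GrowthFactor: 100
        ReplicateTo: NONE
    FeatureFlagsDeployment:
      Type: AWS::AppConfig::Deployment
      Properties:
        ApplicationId:
          Ref: FeatureFlagsApplication
        EnvironmentId:
          Ref: FeatureFlagsEnvironment
        ConfigurationProfileId:
          Ref: FeatureFlagsProfile
        ConfigurationVersion:
          Ref: FeatureFlagsVersion
        DeploymentStrategyId:
          Ref: FeatureFlagsDeploymentStrategy
```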

As you can see from the file above, the AppConfig Lambda extension is added through a lambda layer, and the resources section builds up the Application, Deployment Strategy and Configuration Profile for AppConfig (including the configuration itself). This JSON object could hold all of the feature flags for the particular application, essentially as a set of boolean values.

Within the actual lambda handler we can now access the config from the extension, which runs alongside the lambda locally (localhost); a basic version is shown below:
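The original handler snippet is not reproduced here; this is a minimal sketch assuming the illustrative names from the serverless.yml sketch above, node-fetch as the HTTP client, and the extension's default local HTTP port of 2772:

```typescript
// minimal sketch - application/environment/profile names match the illustrative serverless.yml
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import fetch from 'node-fetch';

const application = 'my-app';
const environment = 'prod';
const configuration = 'features';

export const handler = async (
  _event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // the extension serves its locally cached config over HTTP on localhost
  const url = `http://localhost:2772/applications/${application}/environments/${environment}/configurations/${configuration}`;

  const response = await fetch(url);
  const config = (await response.json()) as { isNewFeatureEnabled: boolean };

  // returned here purely for demonstration - in reality you would branch on the flag
  return {
    statusCode: 200,
    body: JSON.stringify({ isNewFeatureEnabled: config.isNewFeatureEnabled }),
  };
};
```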

The code above pulls the latest config from the extension's local cache on each invoke, with the configuration in the serverless.yml file meaning that the extension checks the AWS AppConfig service for updates every 30 seconds.

Rather than logging and returning the value in the API Gateway response, you would use the value to dictate whether the feature is enabled or not.

The configuration can also be deployed directly from CodePipeline, meaning that the actual configuration values can be amended within your CI/CD pipeline without even doing a serverless deploy.

Next steps..

It would be very straightforward to create a middleware function for pulling the config from localhost as shown above, which could be added to any lambda function in a reusable fashion by default (it could be installed from NPM once packaged, or shared via a monorepo).

This means that you could destructure the config from the value supplied by the middleware and simply use it if it has been provided; a rough sketch is shown below.
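A rough sketch of that idea, written here as a hypothetical withConfig wrapper rather than any particular middleware library (the names, URL and flag shape are all illustrative):

```typescript
// hypothetical reusable wrapper - not taken from the article's repo
import fetch from 'node-fetch';

type FeatureFlags = Record<string, boolean>;

// illustrative AppConfig extension endpoint matching the earlier sketches
const configUrl =
  'http://localhost:2772/applications/my-app/environments/prod/configurations/features';

// wraps a handler, fetching the cached config from the extension and passing it in
export const withConfig =
  <TEvent, TResult>(handler: (event: TEvent, flags: FeatureFlags) => Promise<TResult>) =>
  async (event: TEvent): Promise<TResult> => {
    const response = await fetch(configUrl);
    const flags = (await response.json()) as FeatureFlags;
    return handler(event, flags);
  };

// usage: destructure the flag you need from the supplied config
export const handler = withConfig(async (_event: unknown, { isNewFeatureEnabled }) => ({
  statusCode: 200,
  body: JSON.stringify({ isNewFeatureEnabled }),
}));
```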

It would also be very straightforward to create a serverless plugin to generate the CloudFormation in the resources section of the serverless.yml file (I will add that to my todo list!) 🤓 A rough skeleton of the idea is sketched below.
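Purely as a hypothetical skeleton (the plugin name and the generated resource are illustrative), a Serverless Framework plugin can inject resources into the compiled CloudFormation template from a packaging hook:

```typescript
// hypothetical plugin skeleton - injects AppConfig resources during packaging
class AppConfigFeatureFlagsPlugin {
  serverless: any;
  hooks: Record<string, () => void>;

  constructor(serverless: any) {
    this.serverless = serverless;
    this.hooks = {
      'before:package:finalize': () => this.addAppConfigResources(),
    };
  }

  addAppConfigResources(): void {
    const template = this.serverless.service.provider.compiledCloudFormationTemplate;
    // generate the Application/Environment/Profile/Deployment resources here
    template.Resources.FeatureFlagsApplication = {
      Type: 'AWS::AppConfig::Application',
      Properties: { Name: this.serverless.service.service },
    };
  }
}

export = AppConfigFeatureFlagsPlugin;
```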

If you were an organisation that thought the approach above could save a lot of money compared to using a 3rd party product, it would not be difficult to create a web client with an API (using the AWS SDK on lambda under the hood) to build a small in-house product with very basic feature parity with the service providers. (I think I will create a free open source version for fun that can be deployed to any AWS account!) 😎

Wrapping up

Let's connect on any of the following if you enjoyed reading this:

https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore

If you found the articles inspiring or useful please feel free to support me with a virtual coffee https://www.buymeacoffee.com/leegilmore and either way let's connect and chat! ☕️

If you enjoyed the posts please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋

This article is sponsored by Sedai.io

About me

Hi, I’m Lee, an AWS certified technical architect and polyglot Principal software engineer based in the UK, working as a Technical Cloud Architect and Serverless Lead, having worked primarily in full-stack JavaScript on AWS for the past 5 years.

I consider myself a serverless evangelist with a love of all things AWS, innovation, software architecture and technology.

** The information provided is my own personal view and I accept no responsibility for the use of this information.

