Serverless Streamed Events 🚀

How to stream domain events to EventBridge off the back of database changes in DynamoDB and DocumentDB, with visuals and code examples written in TypeScript and the CDK.

Serverless Advocate · 9 min read · Mar 19, 2022

Introduction

When we are architecting event-driven Serverless solutions across multiple domains, it is important to ensure that immutable events are only raised after the underlying change has actually taken place.

We can utilise DynamoDB Streams and DocumentDB Change Streams to ensure that we only raise the relevant events once the data has been committed to the relevant database, and that the events are always raised (through the use of dead-letter queues, retries, etc.). This architectural pattern and the accompanying TypeScript repo can be found here: https://github.com/leegilmorecode/serverless-streamed-events

Note: For ease of deploying and playing around, this is all within one AWS account. Ideally, each of the three domains would be in its own AWS account.

What scenarios can we get ourselves in with other methods?

There are three common scenarios:

❌ The database record is committed in code, but there is an issue and the event fails to be raised. In this scenario, our consuming domains are never notified of the change of state.

❌ We raise the event successfully in code, but there is an issue committing the database record. In this scenario, our consumers believe a change of state has happened when in fact the record was never committed.

✔️ The third option is to generate the event off the back of the record being successfully committed to the database, utilising dead-letter queues to ensure that the event is always raised for consumers (even if it is eventually consistent).

This is shown below:

Some strange scenarios we can get into and a way of resolving this

Let’s see how we can architect this below in the next section.

The Transactional Outbox Pattern

What we have described in the third setup in the diagram above is the ‘Transactional Outbox’ pattern.

This is shown high level in the diagram below:

https://microservices.io/patterns/data/transactional-outbox.html

A domain service typically needs to update a database and then send an event to other domains off the back of it (for example ‘OrderCreated’). We need to ensure that this process is atomic, or we can end up in the scenarios we discussed in the previous section.

When we are using AWS services for our database and message broker, we are unable to perform an atomic two-phase commit (distributed transaction), and need a different approach.

If we ensure that we only raise the event off the back of the database actually being updated, and ensure that we have resilience around the sending of events, then we have the following benefits:

  • We don’t need to use a Two Phase Commit.
  • Messages are guaranteed to be sent if and only if the database transaction commits.

Let’s see how we can achieve this with both DynamoDB and DocumentDB when working with EventBridge.

What are we building? 🏗️

Our fictitious razor subscription company ‘Lee James Razors’

We are going to build a fictitious scenario where a customer can create a monthly subscription for men's razors in our Subscription domain, which then allocates the relevant stock in the Stock domain and creates the direct debit record in the Payments domain.

Create Subscription

If a customer cancels their Direct Debit through the Payments domain, then the Stock domain deallocates the stock, and the Subscription domain cancels the subscription. This is shown below:

payment cancelled flow

What we can notice from this basic sample architecture is that the events are generated off the back of the two streams (DynamoDB Streams and DocumentDB Change Streams). We will do a deeper dive below.

Why no central event bus?

The eagle-eyed will notice from the diagrams above that there is no central event bus; instead, the event buses communicate with each other directly. This is the multi-bus, multi-account pattern, which we will be using.

With EventBridge, to communicate across accounts, we need to go from bus to bus, and when architecting our domains correctly, each domain should be split into its own AWS account. This gives us two options:

Single Bus, Multi account pattern

It is very typical for organisations to have a “DevOps” team that is responsible for managing a shared resource via a single event bus. Each service team owns and manages its own application stack, while the DevOps team manages the stack that defines event bus rules and target configurations for the services integrations.

https://github.com/aws-samples/amazon-eventbridge-resource-policy-samples/blob/main/patterns/README.md

Considerations

  • Additional cross-account policy management (compared to single-bus, single account pattern).
  • Service teams manage target configurations, but not routings
  • Introduces the need for multiple event buses to transfer events between accounts
  • Routing rules still via central bus
  • Target rules migrate to service account event bus

Multi Bus, Multi account pattern

In this pattern, each of the event buses are owned by the service teams. Each of the service teams manages their own buses. There is no centralised management of routing logic or target configuration.

Service teams need to be aware of the services that are interested in subscribing to events they are publishing.

Note: Each event bus sets its own EventBusPolicy to scope which event sources can publish to the bus, and creates EventBusPolicies defining which accounts can manage rules and targets on it.

https://github.com/aws-samples/amazon-eventbridge-resource-policy-samples/blob/main/patterns/README.md

Considerations

  • Service teams manage all resources for sending and receiving events
  • Additional overhead in managing distributed rules and resource policies
  • Each service team manages their own event bus
  • No additional buses required to facilitate cross account event delivery
  • Aligned to service boundary

Getting Started! ✔️

To get started, clone the following repo with the following git command:

git clone https://github.com/leegilmorecode/serverless-streamed-events

This will pull down the example code to your local machine.

💡 Note: This is not production-ready code; it demos the concept only. We are deploying all of the domain services to one AWS account, but as best practice we would typically have each in its own account. We would also harden the resilience around sending events, etc.

Deploying the solution! 👨‍💻

🛑 Note: Running the following commands will incur charges on your AWS accounts, and some services are not in free tier.

In the folder infra run npm run deploy.

In the folder payment-service/src/stream run npm run build.

In the folder payment-service run npm run deploy.

In the folder stock-service run npm run deploy.

In the folder subscription-service run npm run deploy.

🛑 Note: Remember to tear down the stacks when you are finished so you won’t continue to be charged, by using ‘npm run remove’ in the relevant folders above.

Talking it through 👊

🛑 Note: Bear in mind that this code has been put together for a POC and discussion point only. I have also tried to keep the relevant code in fewer files so that it reads more easily in the article.

Streaming in DynamoDB ✅

The following diagram shows a basic example of streaming events following a database record being committed in DynamoDB:

DynamoDB streams in action

As we can see from the diagram above, we raise our events off the back of the committed DynamoDB records, and utilise a DLQ for any events which fail to be raised so they are never lost.
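The heart of such a stream handler is a mapping from committed DynamoDB stream records to EventBridge `PutEvents` entries. A minimal sketch, assuming hypothetical names (`toEventBridgeEntry`, the event source, and the detail types are illustrative, not taken from the repo):

```typescript
// Minimal local types for the parts of a DynamoDB stream record we use,
// so the sketch has no external dependencies.
interface StreamRecord {
  eventName: 'INSERT' | 'MODIFY' | 'REMOVE';
  dynamodb: { NewImage?: Record<string, { S?: string }> };
}

interface PutEventsEntry {
  Source: string;
  DetailType: string;
  Detail: string;
  EventBusName: string;
}

// Map a committed change to a domain event; the detail type names here
// are illustrative assumptions.
export function toEventBridgeEntry(record: StreamRecord): PutEventsEntry | undefined {
  const detailTypeByEvent: Record<string, string> = {
    INSERT: 'SubscriptionCreated',
    MODIFY: 'SubscriptionUpdated',
    REMOVE: 'SubscriptionCancelled',
  };

  const detailType = detailTypeByEvent[record.eventName];
  if (!detailType) return undefined;

  return {
    Source: 'subscription-service',
    DetailType: detailType,
    Detail: JSON.stringify(record.dynamodb.NewImage ?? {}),
    EventBusName: 'subscription-event-bus',
  };
}
```

The handler would send the resulting entries with the EventBridge `PutEvents` API; on the infrastructure side, the CDK stream event source supports a retry count and an SQS dead-letter queue (via `onFailure`), so repeatedly failing batches are parked rather than lost.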

Streaming in DocumentDB ✅

The following diagram shows the process of streaming events following a database record being committed to DocumentDB through Change Streams:

DocumentDB streams in action

As we can see from the diagram above, when a payment is cancelled and the record is updated in the DocumentDB database, a change stream of updates is consumed by an ECS task. The task sends the messages to a FIFO queue, which a Lambda reads before sending the correct event(s) to EventBridge.

💡 Note: With our change streams we need to recover if there is an issue or if a message fails to be sent to SQS, for example due to network issues or the ECS task failing. For this reason we would update our code running in ECS to resume the change stream from the last record that was processed. https://docs.aws.amazon.com/documentdb/latest/developerguide/change_streams.html#change_streams-resuming
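The ECS task's per-change work can be sketched as a pure function that builds the SQS FIFO message input from a change stream document. The helper name and field choices are assumptions for illustration, not code from the repo:

```typescript
// Minimal local shape for a DocumentDB/MongoDB change stream event; only
// the fields the sketch uses are modelled, to avoid external dependencies.
interface ChangeEvent {
  _id: unknown;                        // the resume token
  operationType: string;               // e.g. 'update', 'delete'
  documentKey: { _id: string };
  fullDocument?: Record<string, unknown>;
}

// Build the SendMessage input for the FIFO queue from a single change.
export function toFifoMessage(change: ChangeEvent, queueUrl: string) {
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify({
      operationType: change.operationType,
      documentKey: change.documentKey,
      fullDocument: change.fullDocument,
    }),
    // Group by document id so updates to the same payment stay ordered.
    MessageGroupId: change.documentKey._id,
    // The resume token is unique per change, so it deduplicates retries
    // within the FIFO queue's deduplication window.
    MessageDeduplicationId: JSON.stringify(change._id),
  };
}
```

After each successful send, the task would persist the resume token (`change._id`) so that on restart it can open the change stream with `resumeAfter` and carry on from the last processed record, as per the DocumentDB docs linked above.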

Testing the solution 🎯

New subscription 🪒

We start by creating a new Customer Subscription for our razors:

Which then creates the relevant record in the Subscription domain:

The new customer subscription is created

And when the Stock domain receives the ‘SubscriptionCreated’ event, it allocates the relevant stock as shown below, and the relevant payment is created in the Payments domain (we allocate 12 months’ worth of stock up front and also create a payment subscription record).

Stock record is created to allocate the stock

Cancel Payment ❌

Let’s now cancel the payment as if a customer has cancelled their direct debit from a third party system:

Cancelling the payment

This raises the PaymentCancelled event, which means that the Subscription domain now cancels the subscription:

Customer subscription is cancelled
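For illustration, the ‘PaymentCancelled’ event as delivered by EventBridge to a target might look like the following. The envelope fields (version, detail-type, source, etc.) are the standard EventBridge event structure, but the detail payload and all values are assumptions, not taken from the repo:

```typescript
// Hypothetical PaymentCancelled event as seen by a consuming target;
// every value below is a placeholder for illustration only.
export const paymentCancelledEvent = {
  version: '0',
  id: '6a7e8feb-b491-4cf7-a9f1-bf3703467718',
  'detail-type': 'PaymentCancelled',
  source: 'payment-service',
  account: '111111111111',
  time: '2022-03-19T10:00:00Z',
  region: 'eu-west-1',
  resources: [],
  detail: {
    paymentId: 'pay-1',
    subscriptionId: 'sub-1',
    status: 'CANCELLED',
  },
};
```

Consumers like the Subscription and Stock domains match on `source` and `detail-type` in their rules, and read only the `detail` payload in their handlers.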

And in the Stock domain we deallocate the relevant stock now that the payment and subscription are cancelled:

The stock domain deallocates the relevant stock


Summary

I hope you found that useful as a basic example of the pattern in Serverless with DynamoDB and DocumentDB!

Go and subscribe to my Enterprise Serverless Newsletter here for more of the same content:

Wrapping up 👋

Please go and subscribe on my YouTube channel for similar content!

I would love to connect with you also on any of the following:

https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore

If you found the articles inspiring or useful please feel free to support me with a virtual coffee https://www.buymeacoffee.com/leegilmore and either way let’s connect and chat! ☕️

If you enjoyed the posts please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋

Please also use the ‘clap’ feature at the bottom of the post if you enjoyed it! (You can clap more than once!!)

About me

Hi, I’m Lee, an AWS Community Builder, Blogger, AWS certified cloud architect and Global Serverless Architect based in the UK; currently working for City Electrical Factors, having worked primarily in full-stack JavaScript on AWS for the past 6 years.

I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture and technology.

*** The information provided are my own personal views and I accept no responsibility on the use of the information. ***
