Event-driven Serverless Architecture
Use S3 event notifications to generate thumbnails
Terraforming S3 event notifications to trigger a lambda to generate an image thumbnail
Hi people!
It is very common for applications to store files in the cloud for persistence. This gives you a lot of flexibility in how and where your applications can run.
AWS offers the managed service S3 (Simple Storage Service) as an option for storing objects. It has high availability, scalability, and performance. It is often used as a storage service for web applications.
S3 also offers the ability to notify you about object actions inside a bucket, ranging from object creation and update to move and deletion. These are called S3 Event Notifications.
In this article, I’ll go through a serverless application that listens to event notifications of a bucket for images and generates a thumbnail for every image that is uploaded.
We’ll have a lambda function written in Go, responsible for generating the thumbnails and listening to the event notification.
Let’s do it!
Requirements
- AWS account
- Your favorite code editor (I’ll be using Visual Studio Code)
- GitHub account
The architecture
We’ll configure the bucket to send object events to an SNS topic, which will forward the message to a lambda that generates a thumbnail and uploads it back to the S3 bucket.
We will not target the lambda as a direct destination of the S3 bucket events because only one destination type can be specified for each event notification. By using an SNS topic as the destination, we can fan out the event to multiple destinations, like an SQS queue, email, SMS notification, and the many others that SNS supports.
Setting up the infrastructure with Terraform
Images S3 Bucket
To get started, let’s initialize our Terraform configuration by creating a folder named iac on the root level of your project. Inside, create a file named providers.tf so we can configure our AWS provider, and add the following code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

// Region is set from AWS_REGION environment variable
provider "aws" {
}
If you’d like Terraform to keep track of the state of your infrastructure, you can create an S3 bucket in AWS and set it as your state backend, as I do in the following code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "terraform-medium-api-notification" // Here is your state bucket
    key    = "thumbnail-generator/state"
  }
}

// Region is set from AWS_REGION environment variable
provider "aws" {
}
Now, let’s create the S3 bucket that will host our images. Create a file in the iac folder named s3.tf and add the following code:
resource "aws_s3_bucket" "my-app-images" {
  bucket = "my-super-app-images" // Use unique name for your bucket
}

resource "aws_s3_object" "images_folder" {
  bucket = aws_s3_bucket.my-app-images.bucket
  key    = "images/"
}

resource "aws_s3_object" "thumbnails_folder" {
  bucket = aws_s3_bucket.my-app-images.bucket
  key    = "thumbnails/"
}
In the bucket property, you must use a unique name for your bucket, because S3 bucket names are global across all of AWS. If you don’t want to provide a name, you can omit the property and Terraform will generate a unique bucket name for you.
This code will generate the S3 bucket with two folders, images/ and thumbnails/, that we’ll use to store our files.
Messaging with SNS
Now, let’s set up the notification topic.
Create a new file named messaging.tf in the iac folder and add the following code:
resource "aws_sns_topic" "topic" {
  name = "image-events"
}
Add a new file named variables.tf to define the variables:
variable "region" {
  description = "Default region of your resources"
  type        = string
  default     = "eu-central-1" // Set your default region here
}

variable "account_id" {
  description = "The ID of the default AWS account"
  type        = string
}
And create another file named variables.tfvars to set the variables:
region = "eu-central-1" // Set your region here
We’ll pass the account_id variable as an argument to the terraform command later.
S3 event notification
Now let’s set up the event notification for S3.
In the s3.tf file, add the following code to set up the bucket notification:
resource "aws_s3_bucket_notification" "images_put_notification" {
  bucket = aws_s3_bucket.my-app-images.id

  topic {
    topic_arn     = aws_sns_topic.topic.arn
    filter_prefix = "images/"
    events        = ["s3:ObjectCreated:*"]
  }
}
To enable this, we also need to add a policy to our SNS topic allowing the S3 bucket to publish notifications to it. So, go to the messaging.tf file and add the policy:
resource "aws_sns_topic" "topic" {
  name   = "image-events"
  policy = data.aws_iam_policy_document.sns-topic-policy.json
}

data "aws_iam_policy_document" "sns-topic-policy" {
  policy_id = "arn:aws:sns:${var.region}:${var.account_id}:image-events/SNSS3NotificationPolicy"

  statement {
    sid    = "s3-allow-send-messages"
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["s3.amazonaws.com"]
    }

    actions = [
      "SNS:Publish",
    ]

    resources = [
      "arn:aws:sns:${var.region}:${var.account_id}:image-events",
    ]

    condition {
      test     = "ArnEquals"
      variable = "aws:SourceArn"
      values = [
        aws_s3_bucket.my-app-images.arn
      ]
    }
  }
}
Here we create the sns-topic-policy policy document and pass it to the topic resource in the policy property.
Adding a base lambda
Now all that’s left is to add the infrastructure for a base lambda; we’ll add the actual code to it later. We’ll write it in Go.
First, we need some initial code for our lambda to start from. So let’s create a folder lambda_init_code in the iac folder. Now, you can get the source code here and either use the compiled main file directly or follow the instructions in the README.md file to compile a new executable.
Now we can add our lambda infrastructure by creating a new file lambdas.tf and adding the following code:
resource "aws_iam_role" "iam_for_lambda" {
  name               = "thumbnail-generator-lambda-role"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json

  inline_policy {
    name   = "DefaultPolicy"
    policy = data.aws_iam_policy_document.lambda_role_policies.json
  }
}

resource "aws_lambda_function" "lambda" {
  filename      = data.archive_file.lambda.output_path
  function_name = "thumbnail-generator"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "main"
  runtime       = "go1.x"
  timeout       = 15
}

data "archive_file" "lambda" {
  type        = "zip"
  source_file = "./lambda_init_code/main"
  output_path = "thumbnail_generator_lambda_function_payload.zip"
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

data "aws_iam_policy_document" "lambda_role_policies" {
  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]
    resources = ["arn:aws:logs:*:*:*"]
  }
}
This will generate a lambda function with Go as the runtime, create a role, give the lambda permission to assume this role, and allow it to log to CloudWatch. Note that AWS has since deprecated the go1.x runtime; if it is no longer available to you, you can use the provided.al2023 runtime instead, with the executable named bootstrap.
Next, we need to create a subscription to the SNS topic so our lambda can be triggered. You can add the following code to the lambdas.tf file:
resource "aws_sns_topic_subscription" "topic_subscription" {
  topic_arn = aws_sns_topic.topic.arn
  protocol  = "lambda"
  endpoint  = aws_lambda_function.lambda.arn
}

resource "aws_lambda_permission" "apigw_lambda" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda.arn
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.topic.arn
}
This makes the final state of the lambdas.tf file:
resource "aws_iam_role" "iam_for_lambda" {
  name               = "thumbnail-generator-lambda-role"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json

  inline_policy {
    name   = "DefaultPolicy"
    policy = data.aws_iam_policy_document.lambda_role_policies.json
  }
}

resource "aws_lambda_function" "lambda" {
  filename      = data.archive_file.lambda.output_path
  function_name = "thumbnail-generator"
  role          = aws_iam_role.iam_for_lambda.arn
  handler       = "main"
  runtime       = "go1.x"
  timeout       = 15
}

resource "aws_sns_topic_subscription" "topic_subscription" {
  topic_arn = aws_sns_topic.topic.arn
  protocol  = "lambda"
  endpoint  = aws_lambda_function.lambda.arn
}

resource "aws_lambda_permission" "apigw_lambda" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda.arn
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.topic.arn
}

data "archive_file" "lambda" {
  type        = "zip"
  source_file = "./lambda_init_code/main"
  output_path = "thumbnail_generator_lambda_function_payload.zip"
}

data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }

    actions = ["sts:AssumeRole"]
  }
}

data "aws_iam_policy_document" "lambda_role_policies" {
  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents"
    ]
    resources = ["arn:aws:logs:*:*:*"]
  }

  statement {
    effect = "Allow"
    actions = [
      "s3:GetObject",
    ]
    resources = [
      format("%s/%s*", aws_s3_bucket.my-app-images.arn, aws_s3_object.images_folder.key)
    ]
  }

  statement {
    effect = "Allow"
    actions = [
      "s3:PutObject",
    ]
    resources = [
      format("%s/%s*", aws_s3_bucket.my-app-images.arn, aws_s3_object.thumbnails_folder.key)
    ]
  }
}
This grants our SNS topic permission to invoke our lambda with events, and gives the lambda read access to the images/ folder and write access to the thumbnails/ folder.
Note that the lambda has a timeout of 15 seconds. The default timeout is 3 seconds, and since we are downloading files from and uploading files to S3, these actions might take longer than that depending on the image size. This happens because the S3 calls go over the internet. If you’d like to improve performance, you can place the lambda inside a VPC and create a VPC endpoint for the S3 service, so the connection stays on the AWS network instead of going over the internet.
Deploying our infrastructure
Now that we have our infrastructure as code defined, let’s use GitHub Actions to deploy it to AWS.
In your code, create a folder .github/workflows and add a deploy-infra.yml file to define our GitHub Actions workflow:
name: Deploy Infrastructure

on:
  push:
    branches:
      - main
    paths:
      - iac/**/*
      - .github/workflows/deploy-infra.yml

defaults:
  run:
    working-directory: iac/

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS Credentials Action For GitHub Actions
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1

      # Install the latest version of the Terraform CLI
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # Generates an execution plan for Terraform
      - name: Terraform Plan
        run: |
          terraform plan -out=plan -input=false -var-file="variables.tfvars" -var account_id=${{ secrets.AWS_ACCOUNT_ID }}

      # On push to "main", build or change infrastructure according to the Terraform configuration files
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false plan
Before pushing, create the following secrets in your GitHub repository settings:
- AWS_ACCESS_KEY — an AWS access key ID with the rights to create resources
- AWS_SECRET_ACCESS_KEY — the AWS secret access key associated with the access key
- AWS_ACCOUNT_ID — your account ID, which you can find in the top right corner of the AWS console
Also, replace eu-central-1 in the workflow with the default region where you’d like the infrastructure to be deployed.
Now, push your code to GitHub and watch your infrastructure be created once the workflow completes.
To test it, you can upload a file to the images/ folder and check the Lambda logs in CloudWatch.
The S3 bucket should be created with the two folders:
The SNS topic should be created with a subscription:
The Lambda should be created with an SNS trigger:
Implementing the Lambda
Now that we have our infra set up, we need to implement our thumbnail generator code.
Let’s create a new folder on the root level named src and run the following commands to initialize a Go module and fetch the dependencies:
go mod init example.com/thumbnail-generator
go get github.com/aws/aws-lambda-go
go get github.com/aws/aws-sdk-go-v2
go get github.com/aws/aws-sdk-go-v2/service/s3
go get github.com/aws/aws-sdk-go-v2/config
go get github.com/disintegration/imaging
You can replace example.com/thumbnail-generator with your preferred module name if you’d like. Now, create a file main.go and add the following code:
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"image"
	_ "image/jpeg" // registers the JPEG decoder so image.Decode can handle JPEG uploads
	"image/png"
	"io"
	"log"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/disintegration/imaging"
)

type awsClient struct {
	s3  s3.Client
	ctx *context.Context
}
func handleRequest(ctx context.Context, event events.SNSEvent) error {
	awsConfig, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		// log.Printf instead of log.Fatalf: Fatalf exits the process,
		// which would make the return statements unreachable
		log.Printf("Could not load AWS default configuration")
		return err
	}
	awsClient := awsClient{s3: *s3.NewFromConfig(awsConfig), ctx: &ctx}
	for _, record := range event.Records {
		var imageEvent events.S3Event
		err := json.Unmarshal([]byte(record.SNS.Message), &imageEvent)
		if err != nil {
			log.Printf("Could not unmarshal SNS message %s into S3 Event Record with error: %v", record.SNS.Message, err)
			return err
		}
		for _, imageRecord := range imageEvent.Records {
			bucketName := imageRecord.S3.Bucket.Name
			objectKey := imageRecord.S3.Object.Key
			file, err := awsClient.downloadFile(bucketName, objectKey)
			if err != nil {
				log.Printf("Error loading file %s from bucket %s", objectKey, bucketName)
				return err
			}
			log.Printf("Successfully downloaded image")
			thumbnail, err := createThumbnail(file)
			if err != nil {
				log.Printf("Error creating thumbnail for file %s from bucket %s. Error is %v", objectKey, bucketName, err)
				return err
			}
			log.Printf("Successfully created thumbnail")
			err = awsClient.uploadFile(bucketName, objectKey, thumbnail)
			if err != nil {
				log.Printf("Error uploading file %s to thumbnails/ in bucket %s", objectKey, bucketName)
				return err
			}
			log.Printf("Successfully uploaded thumbnail")
		}
	}
	return nil
}
func createThumbnail(reader io.Reader) (*bytes.Buffer, error) {
	srcImage, _, err := image.Decode(reader)
	if err != nil {
		log.Printf("Could not decode file because of error %v", err)
		return nil, err
	}
	// Generates an 80x80 thumbnail
	thumbnail := imaging.Thumbnail(srcImage, 80, 80, imaging.Lanczos)
	buffer := new(bytes.Buffer)
	err = png.Encode(buffer, thumbnail)
	return buffer, err
}
func (client *awsClient) downloadFile(bucketName string, objectKey string) (*bytes.Reader, error) {
	result, err := client.s3.GetObject(*client.ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucketName),
		Key:    aws.String(objectKey),
	})
	if err != nil {
		log.Printf("Couldn't get object %v:%v. Here's why: %v", bucketName, objectKey, err)
		return nil, err
	}
	defer result.Body.Close()
	body, err := io.ReadAll(result.Body)
	if err != nil {
		log.Printf("Error reading file. Error: %s", err)
		return nil, err
	}
	file := bytes.NewReader(body)
	return file, err
}
func (client *awsClient) uploadFile(bucketName string, originalObjectKey string, thumbnail io.Reader) error {
	// Derive the thumbnail key from the original key:
	// images/cat.png -> thumbnails/cat_thumbnail.png
	objectKeyParts := strings.Split(originalObjectKey, "/")
	fileNameWithoutExtensions := strings.Split(objectKeyParts[len(objectKeyParts)-1], ".")[0]
	objectKey := fmt.Sprintf("thumbnails/%s_thumbnail.png", fileNameWithoutExtensions)

	_, err := client.s3.PutObject(*client.ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucketName),
		Key:    aws.String(objectKey),
		Body:   thumbnail,
	})
	if err != nil {
		log.Printf("Couldn't upload file %v to %v:%v. Here's why: %v\n",
			originalObjectKey, bucketName, objectKey, err)
	}
	return err
}
func main() {
	lambda.Start(handleRequest)
}
Now we need to set up the GitHub workflow that deploys our lambda code. In the .github/workflows folder, add a new file named deploy-lambda.yml with the following code:
name: Deploy Thumbnail Generator Lambda

on:
  push:
    branches:
      - main
    paths:
      - src/**/*
      - .github/workflows/deploy-lambda.yml

defaults:
  run:
    working-directory: src/

jobs:
  terraform:
    name: 'Deploy Thumbnail Generator Lambda'
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      - uses: actions/setup-go@v4.1.0
        with:
          go-version: '1.22.0'

      - name: Configure AWS Credentials Action For GitHub Actions
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1

      - name: Build Lambda
        run: GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o build/main .

      - name: Zip build
        run: zip -r -j main.zip ./build

      - name: Update Lambda code
        run: aws lambda update-function-code --function-name=thumbnail-generator --zip-file=fileb://main.zip
Commit and push your code to your repository, and the build should run. Once it finishes, you can verify the deployment by checking the Last modified property on the lambda’s page:
To test it, upload an image to the images/ folder in your S3 bucket. After the upload succeeds, wait a few moments and check the thumbnails/ folder for your newly created thumbnail.
Conclusion
In this article, you learned how to create and connect S3 buckets, lambdas, SNS topics, event notifications, and more using Terraform infrastructure as code.
You also learned how to send S3 events to SNS topics so you can fan out these events to multiple destinations.
We also wrote a lambda in Go, invoked by SNS messages, that downloads a file from S3, generates a thumbnail from the image, and uploads the thumbnail back to S3.
The code for this article can be found here.
Happy coding! 💻