How I host all my personal projects for £10 / month (<$15) on AWS

Scott Fraser
Published in Level Up Coding · 10 min read · Jan 24, 2021

Target audience: beginner to intermediate

Recently I have consolidated a number of personal projects that were hosted across a selection of cloud providers into one (relatively) easy-to-manage and cost-effective AWS solution. Want to know more?

This article provides a high-level overview of my AWS setup and links to past (and, hopefully, future) articles which provide a practical guide to replicating the setup yourself.

Situation

Over the years I have created and attempted to maintain a number of personal projects. While on the subject, I would strongly encourage all software engineers to work on personal projects as the benefits are far-reaching. Such projects can be created with the ever-elusive passive income in mind; to contribute to a community you are part of; or to learn new technologies.

For me, these projects span from when I was starting out as a noobie software engineer to now working as a senior, and so they naturally live across a number of hosting services — from the noob-friendly platform-as-a-service offerings to the powerhouses that are AWS and GCP.

This year I decided to bite the bullet and consolidate these projects into one easy-to-manage and affordable place from which I can easily build and host more projects in the future. This article provides an overview of the setup I settled on and why.

Scope

This hosting setup is not going to be for everyone and there are, without a doubt, easier and cheaper options out there. However, this (at least so far) has proved a good option for my needs, which were:

  • An environment where any application could easily be run.
  • As little “magic” as possible — I wanted to provision and configure each AWS resource and service as and when I needed it.
  • As cost-effective as possible (within reason — I had no interest in going down the serverless route).
  • The ability to handle a relatively large number of very low-traffic sites (my personal projects).

Disclaimer

To make things very clear, I do not and would never advocate for anything like this setup for production applications of any importance. This setup is a fault-intolerant deployment for unimportant personal projects and can only handle the smallest loads.

The setup

What does it look like?

Fig. 1 — my personal project setup (created using cloudcraft — a cool little tool for AWS diagrams)

The building blocks of the setup

Fig. 1 above shows the AWS resources used to keep my personal projects ticking over. If you are new to AWS or devops in general please do not be intimidated by this as I will attempt to go through each resource in layman’s terms in the next section.

AWS EC2 — The brains of the operation ($10.80/month)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Amazon’s definition above is quite a mouthful, isn’t it? For those new to the world of devops I will attempt to provide a more down-to-earth explanation.

Imagine Mr. Bezos walking down to the local computer store and buying a bunch of regular desktop computers. He arrives home and connects them to the internet. He fires them up and lets you “rent” these computers for as long as you want. This “rent-a-computer” service is known as EC2 but AWS does this on a much larger and more efficient scale than my rather poor analogy.

In reality, AWS purchases very large and powerful computers and runs them in vast facilities known as data centres (see fig. 2 below). Special software is then used to split these large machines up into many smaller “virtual machines”. These virtual machines are self-contained instances that believe they are computers in their own right.

Fig. 2 — AWS data center

As such, when you elect to “rent” such a computer you have the flexibility to specify how powerful a machine you want (usually measured in CPU and RAM) and AWS will provision such a virtual machine for you.

It is very unlikely that my personal projects will ever have more than a handful of simultaneous users, so the machine on which they run does not require much memory or compute. For my personal projects I run one of the smallest machines (instances) available, the t3.micro (click here for a list of all instance types available). This is a tiny machine with only 1GB of RAM and 2 virtual CPUs.

Amazon provides a discount if you commit to “renting” a machine for a year and an additional discount if you extend your commitment to three years. You can also receive discounts for paying a portion of the annual cost upfront rather than opting for a pay-as-you-go monthly fee. As such there are quite a few billing combinations available. To keep things simple I have only outlined a few below (at the time of writing — Jan 2021):

  • t3.micro on demand (no commitment or upfront payment): $17.23 / month
  • t3.micro reserved for a year with no upfront payment: $10.80 / month
  • t3.micro reserved for a year with half paid upfront: $10.28 / month
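
To put those numbers in context, here is a quick back-of-the-envelope comparison — a minimal Python sketch using the Jan 2021 prices quoted above, which will of course drift over time:

```python
# Monthly prices for a t3.micro as quoted above (Jan 2021, USD).
ON_DEMAND = 17.23      # no commitment, pay-as-you-go
RESERVED_1Y = 10.80    # 1-year commitment, no upfront payment

# Committing for a year saves roughly a third of the bill.
annual_saving = (ON_DEMAND - RESERVED_1Y) * 12
saving_pct = (ON_DEMAND - RESERVED_1Y) / ON_DEMAND * 100
print(f"~${annual_saving:.2f} saved per year ({saving_pct:.0f}% cheaper)")
```

As you can see, the 1-year commitment knocks well over a third off the on-demand price, which is why I went with it.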

For my setup, I elected for the 1 year reserved instance at a cost of around $11 per month. This makes up the bulk of the monthly cost for the entire setup. Though please note that if you are new to AWS then you can get a similar instance (t2.micro) for free for a year — details here:

AWS Free Tier includes 750 hours of Linux and Windows t2.micro instances (t3.micro for the regions in which t2.micro is unavailable) each month for one year. To stay within the Free Tier, use only EC2 Micro instances.

Once you have set up your instance — detailed instructions to be included in a follow-up article — you will be able to SSH (secure shell) into the machine. This allows you to run terminal commands on the remote machine from your own computer. Upon successfully SSH-ing into the instance you can execute whatever commands you need to run your project, just like you would on your local machine.
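
As a rough sketch, connecting looks something like the below. The key file, username, and IP address are placeholders for your own instance details — the default user is ubuntu on Ubuntu AMIs and ec2-user on Amazon Linux:

```shell
# Restrict the key pair's permissions first (SSH refuses to use
# world-readable private keys).
chmod 400 my-key.pem

# Connect to the instance's public IP using that key.
ssh -i my-key.pem ubuntu@203.0.113.10
```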

I am a big fan of docker and so run each project in its own docker container within my EC2 machine. To allow multiple projects to run on the same machine I set up some simple rules to route incoming requests to the relevant container depending on the url to which that request was sent. I have dubbed this setup “a poor man’s kubernetes” and have attempted to show the internal workings below in figure 3.

Fig. 3— A poor man’s Kubernetes :D
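
To make fig. 3 a little more concrete, below is a minimal sketch of one such routing rule, assuming an NGINX container that receives all incoming traffic and a hypothetical project container named projectA listening on port 3001 (both names are placeholders, not my actual setup):

```nginx
# One server block per project: requests for this hostname are
# proxied to the matching docker container. When the containers
# share a docker network, the container name resolves as a hostname.
server {
    listen 80;
    server_name myproject.com;

    location / {
        proxy_pass http://projectA:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Adding a new project is then just a matter of starting another container and adding another server block.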

As alluded to above, I plan to write a follow-up which digs into the above in far more detail. If there is enough interest the article will provide a practical guide on how to:

  1. Provision this instance on AWS.
  2. SSH into the instance and install docker.
  3. Set up and run an NGINX container to forward requests to a project container.
  4. Configure SSL certificates, thus allowing your API to handle HTTPS requests.

It will take some time to plan and write such an article so please comment if this would be of use to you.

AWS EBS — Your computer needs storage ($1/month)

Amazon Elastic Block Store (EBS) is an easy to use, high-performance, block-storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction intensive workloads at any scale.

Remember those computers that you “rent” from Mr. Bezos? Well, they don’t come with a hard drive, so you don’t yet have anywhere to store your project code. In order for AWS to be as flexible and configurable as possible, one needs to specify how much, and what type of, persistent storage the machine should have access to.

This persistent storage is where your projects’ code and dependencies will be stored so that they can be loaded into memory for execution.

I opted for 8GB of general purpose SSD storage known as a gp2 volume. Perhaps this was a bit low but it is working for me at present. This costs around $1 per month.
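
If you are curious about what is attached to your instance, the AWS CLI can list it. A sketch — the instance ID below is a placeholder and the CLI is assumed to be configured with your credentials:

```shell
# List the EBS volumes attached to a given EC2 instance,
# showing just the ID, size, and volume type of each.
aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
  --query 'Volumes[].{ID:VolumeId,SizeGiB:Size,Type:VolumeType}'
```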

Fig. 4 — screenshot of block storage bound to my EC2 instance

AWS S3 — Store files for other services (~$0.00/month)

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance

As a very crude analogy, think of S3 as being like a very stripped-down Dropbox-like service where you can store any file type in “the cloud” and make it accessible to whoever needs it. S3 is organised in what are known as buckets which, again rather crudely, can be thought of as similar to folders or directories.

Whilst your EBS volume stores persistent data for your EC2 instance, the data is only directly available to that instance and not to other AWS services or to the public. In contrast to EBS, S3 organises data and makes it efficiently available to external entities — a bit like organising the whisky collection below for anyone to come in and taste a whisky of their choice 🙈🥃.

Fig. 5— the S3 of the whisky world

If your projects rely on a frontend with which your users must interact you will almost certainly want a mechanism to serve static content (images, files etc) to your users. Such static content can be as trivial as an image or be entire web application builds that make up the frontend of your project.

The majority of my personal projects use a React (my preferred FE framework) application as the primary user interface. This UI then sends HTTP requests to one of my backend services running in a container on EC2 (as shown in fig. 3) to submit and fetch data requested by the user.

In order for the above to work I store a production build of my react application in an S3 bucket. In this context, a “production build” can be thought of as an entire react application bundled into a small set of files that can easily be served to a user and loaded in their browser.

Now for the exciting part. Once any object is stored in S3 it is easy to map a URL to that location so that when a user visits myproject.com they are served a file of your choice from S3. In my case this is the index.html of my React build, which means when they visit the project URL the entire React application loads in their browser. A practical guide to setting this up is explained in one of my previous articles, Deploying your React App to AWS in 2019 with a NameCheap domain.
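
As a sketch of the deployment step itself (the bucket name is a placeholder and the AWS CLI is assumed to be configured):

```shell
# Create the production build of the React app (outputs to build/).
npm run build

# Copy the build to the bucket, removing remote files that no
# longer exist locally so stale assets don't accumulate.
aws s3 sync build/ s3://my-project-bucket --delete
```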

AWS Cloudfront — Bring your project closer to the user (~$0.01/month)

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.

AWS CloudFront ensures that copies of your web content — in our case HTML, JavaScript, CSS, and image files — are spread around data centres so that they are as geographically close to your users as possible. This can reduce the latency between your user navigating to myproject.com and the site loading in their browser.

Fig. 6— The difference in users’ request path when using a CDN vs. not using a CDN (www.digitalocean.com)

To make things faster and more secure, I use AWS CloudFront to serve my React builds from S3 to users who want to access my sites. This means that AWS stores copies of my S3 assets across a number of edge locations close to my users.
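
One practical consequence of edge caching: after uploading a new build to S3, the old copies can linger at the edge. A cache invalidation forces CloudFront to fetch the fresh files (a sketch — the distribution ID below is a placeholder):

```shell
# Tell CloudFront to drop its cached copies of everything so the
# next request pulls the new build from S3.
aws cloudfront create-invalidation \
  --distribution-id E1ABCDEF234567 \
  --paths "/*"
```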

CloudFront also makes it very easy to ensure users can use HTTPS to request content, which is a must these days. The same article linked above also covers how to set this up: Deploying your React App to AWS in 2019 with a NameCheap domain.

(Optional) AWS RDS — A managed database (~$10/month)

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups.

It is likely that your projects will require a database to easily fetch and update data. AWS RDS is a service provided by AWS that makes it very easy to provision a database. Within the AWS console you can select a database type (e.g. MySQL, PostgreSQL, etc) and AWS will do all the heavy lifting for you. Once the instance is created you will be provided with a host and port for any application to connect to in order to use the database.
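
As a sketch of feeding that host and port into an application — all of the values below are hypothetical placeholders, and in a real project they should come from environment variables or a secrets manager, never hard-coded:

```python
import os

# Hypothetical RDS endpoint details; the real values are shown in the
# RDS console once the instance has been created.
host = os.environ.get("DB_HOST", "mydb.abc123.eu-west-2.rds.amazonaws.com")
port = int(os.environ.get("DB_PORT", "5432"))
name = os.environ.get("DB_NAME", "myproject")
user = os.environ.get("DB_USER", "app")

# A standard PostgreSQL connection URL that most clients and ORMs accept
# (the password is deliberately omitted; supply it separately).
dsn = f"postgresql://{user}@{host}:{port}/{name}"
print(dsn)
```

Any application container running on the EC2 instance can then connect using this URL, provided the RDS security group allows traffic from the instance.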

I have opted for this service in my setup because of how easy it is to get off the ground and for the additional peace of mind of automated backups. However, strictly speaking this is an optional component for the following reasons:

  1. Not every project will require a database.
  2. It is possible to run dockerised db instances within your EC2 machine to avoid this cost (I will add an article later).

Conclusion

The end result is a rather nice flexible setup that can host all my projects for around $14 (£10) per month. Admittedly I do make use of RDS which brings my total monthly cost to around $25 (£18) but this is an optional convenience.

The beauty is that I understand and have manually provisioned every component of the end-to-end stack, and I can keep adding more projects (in the form of docker containers) to my EC2 instance until it really begins to complain. Once the instance is overloaded I can simply migrate to a larger instance.

I hope this article has been useful and if you would like me to dig deeper into any component of this setup then please comment and I will consider writing follow-up articles.

All the best,

Scott


Senior engineer interested in everything tech. Whether it’s React on the FE, Python BE services, or devops magic — I’m in! Enjoying a role at heycar uk.