FastAPI: How to Process Incoming Requests in Batches

Ng Wai Foong · Published in Level Up Coding · Nov 17, 2020 · 5 min read


Utilizing WebSockets, Queue and APScheduler for batch processing in FastAPI


In this tutorial, I am going to share a simple experiment that I conducted over the weekend. The basic idea is to queue up all incoming requests and process them in batches at a fixed interval. To do so, we are going to:

  1. establish two-way connections between the clients and our server via WebSockets
  2. store the incoming requests in a queue
  3. run a scheduler that processes the requests in batches
  4. return the responses to the clients via WebSockets

The architecture is fairly straightforward and good enough for our case study. Keep in mind that if you are looking for a production-ready solution, you should use a message broker such as RabbitMQ or a distributed task queue such as Celery instead, depending on your use case. A rough sketch of the whole flow is shown below.
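Before we get to the actual code, here is a minimal sketch of how the pieces fit together. This is only an illustration of the idea: the /ws path, the 5-second interval and the echo-style batch processing are assumptions for demonstration, not the final implementation we will build later.

import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from fastapi import FastAPI, WebSocket

app = FastAPI()
queue: asyncio.Queue = asyncio.Queue()
scheduler = AsyncIOScheduler()

async def process_batch():
    # Drain everything queued since the last run and reply to each
    # client over the WebSocket connection it came from.
    batch = []
    while not queue.empty():
        batch.append(queue.get_nowait())
    for websocket, payload in batch:
        await websocket.send_text(f"processed: {payload}")

@app.on_event("startup")
async def start_scheduler():
    # Process the queue every 5 seconds (the interval is an assumption).
    scheduler.add_job(process_batch, "interval", seconds=5)
    scheduler.start()

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    while True:
        data = await websocket.receive_text()
        # Store the request together with the connection that sent it.
        await queue.put((websocket, data))

A client would connect to ws://localhost:8000/ws, send a message, and receive the processed result a few seconds later once the scheduled job runs.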

Let’s proceed to the next section and install the necessary modules.

Setup

Installing FastAPI

It is highly recommended to create a virtual environment before you continue. Activate your virtual environment and run the following command to install FastAPI.

pip install fastapi
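Since this tutorial also relies on an ASGI server, WebSockets and APScheduler, you will likely need a few additional packages. The following commands should cover it, assuming uvicorn as the server (the websockets package provides WebSocket support for uvicorn):

pip install uvicorn
pip install websockets
pip install apscheduler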
