Hands-On Tutorial: Using GPT-3 and Gradio to Create a Chatbot on AWS

Vivien Chua · Published in Level Up Coding · Apr 16, 2023

This is the second article in a series that will guide you through the process of creating and fine-tuning a GPT-3 chatbot on an AWS EC2 instance.

In this hands-on tutorial, we will explore how to create a chatbot using the GPT-3 language model and a Gradio user interface, and deploy it on Amazon Web Services (AWS). The tutorial assumes some basic knowledge of Python programming and AWS, but it is suitable for beginners who are willing to learn.

So, let’s get started and create our very own chatbot!


Step 1: Sign up for an OpenAI API key

To get started, we will sign up for an OpenAI API key on the website:

https://beta.openai.com/signup

Once you have created an account, you can obtain an API key here:

https://platform.openai.com/account/api-keys
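In Step 5 we will paste the key directly into the script for simplicity. If you prefer not to hard-code it, you can also read it from an environment variable; the sketch below assumes you have exported a variable named OPENAI_API_KEY beforehand (the name is just a common convention, not required by the library).

import os
import openai

# Assumes the key was exported first, e.g. with: export OPENAI_API_KEY="sk-xxxx"
openai.api_key = os.environ.get("OPENAI_API_KEY")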

Step 2: Allow installation of Python packages in the virtual environment

Go to the virtual environment folder ‘pythonenv’, and open the ‘pyvenv.cfg’ file with

cd pythonenv

sudo vim pyvenv.cfg

The contents of the file ‘pyvenv.cfg’ are shown below. Set ‘include-system-site-packages = true’, then save and exit Vim by pressing ESC and typing :wq .

home = /usr
implementation = CPython
version_info = 3.8.10.final.0
virtualenv = 20.0.17
include-system-site-packages = true
base-prefix = /usr
base-exec-prefix = /usr
base-executable = /usr/bin/python3

Exit the virtual environment with the command

deactivate

Go to the ‘chatbot’ folder, and reactivate the virtual environment ‘pythonenv’ with

source pythonenv/bin/activate

The name of the Python virtual environment, (pythonenv), will appear at the left of your shell prompt.
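If you want to double-check which interpreter is active, a small optional check is to print the interpreter paths from Python:

import sys

# When the virtual environment is active, sys.prefix points at the pythonenv folder,
# while sys.base_prefix points at the system Python it was created from.
print("prefix:", sys.prefix)
print("base prefix:", sys.base_prefix)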

Step 3: Install OpenAI with pip

Check that pip is installed with the command

pip --version

Once pip is installed, we will install the OpenAI package with

pip install openai

This will install the latest version of the OpenAI package and its dependencies.
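If you want to confirm the installation from within Python, one quick sanity check (using only the standard library, available in Python 3.8+) is:

from importlib.metadata import version

# Print the installed version of the openai package.
print(version("openai"))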

Step 4: Install Gradio with pip

We will install Gradio with the command

pip install gradio

Gradio is a tool that allows you to create interactive web-based user interfaces for machine learning models. It provides a simple way to deploy your model and create a user interface for it without requiring any front-end development knowledge.
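To get a feel for how Gradio works before wiring it to GPT-3, here is a minimal, self-contained example; the greet function is just a placeholder and is not part of the chatbot.

import gradio as gr

# A toy function: Gradio wraps it in a web UI with a textbox input and a text output.
def greet(name):
    return "Hello, " + name + "!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")
demo.launch()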

Step 5: Write a Python script for the chatbot

Create a new file ‘my_chatbot.py’ with

sudo vim my_chatbot.py

Copy and paste the following Python script into ‘my_chatbot.py’. Replace sk-xxxx with your API key.

import openai
import gradio as gr

openai.api_key = "sk-xxxx"  # replace with your API key

start_sequence = "\nAI:"
restart_sequence = "\nHuman:"

prompt = " "

def generate_response(prompt):
    # Send the prompt to the GPT-3 Completion endpoint and return the generated text.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,
        max_tokens=500,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        stop=[" Human:", " AI:"]
    )
    return completion.choices[0].text

def my_chatbot(input, history):
    # Flatten the (input, output) history, append the new input,
    # and build a single prompt string for the model.
    history = history or []
    my_history = list(sum(history, ()))
    my_history.append(input)
    my_input = ' '.join(my_history)
    output = generate_response(my_input)
    history.append((input, output))
    return history, history

with gr.Blocks() as demo:
    gr.Markdown("""<h1><center>My Chatbot</center></h1>""")
    chatbot = gr.Chatbot()
    state = gr.State()  # keeps the conversation history between submissions
    txt = gr.Textbox(show_label=False, placeholder="Ask me a question and press enter.").style(container=False)
    txt.submit(my_chatbot, inputs=[txt, state], outputs=[chatbot, state])

demo.launch(share=True)

To save and exit Vim, hit ESC and type :wq .

This code implements a chatbot using GPT-3. We selected text-davinci-003, the most capable of the GPT-3 models, which can generate highly coherent, human-like text and perform a wide range of natural language processing tasks.

First, we define the start and restart sequences for the prompt. The generate_response function takes a prompt and uses the GPT-3 model to generate a response.
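Note that start_sequence and restart_sequence are defined but never used in my_chatbot; they document the Human:/AI: prompt format. As a standalone sanity check of the Completion endpoint (a sketch that makes one billable API call; replace the key and adjust max_tokens as you see fit), you could run:

import openai

openai.api_key = "sk-xxxx"  # replace with your API key

# One-off completion using the same Human:/AI: prompt format as the chatbot.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Human: What is the capital of France?\nAI:",
    temperature=0,
    max_tokens=100,
    stop=[" Human:", " AI:"]
)
print(completion.choices[0].text)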

The my_chatbot function takes two inputs: input and history. input is the user's question, and history is the chat history between the user and the chatbot. The function uses generate_response to generate a reply to the user's input based on the chat history. It then appends the user's input and the generated response to the chat history and returns the updated history twice, once for the chat window and once for the stored state.
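The one non-obvious line is list(sum(history, ())), which flattens the list of (input, output) tuples into a single flat list before joining it into one prompt string. A small, self-contained illustration:

# Chat history as Gradio stores it: a list of (user, bot) tuples.
history = [("Hi", "Hello! How can I help?"), ("What is 2+2?", "4")]

# sum(history, ()) concatenates the tuples; list(...) turns the result into a list.
flat = list(sum(history, ()))
print(flat)
# ['Hi', 'Hello! How can I help?', 'What is 2+2?', '4']

# The chatbot then appends the new input and joins everything into one prompt.
flat.append("And 3+3?")
print(' '.join(flat))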

Finally, we build a user-friendly interface with Gradio Blocks, which includes a textbox for user input and a chatbot window that displays the chat history.

When the user inputs a question, the my_chatbot function is called to generate a response, and the chat history is updated and displayed in the chatbot window.

Step 6: Open the chatbot in the browser

Run the Python script with the command

python my_chatbot.py

Gradio will print a local URL and a public share URL in the terminal. Open the public URL in your browser, for example: https://ed22c77b8600a1b917.gradio.live
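The share=True flag asks Gradio to create this temporary public gradio.live link. If you would rather reach the app directly through the EC2 instance's public IP address, one option (assuming the chosen port is open in the instance's security group) is to bind Gradio to all network interfaces:

# Alternative to share=True: serve on the instance's own IP and a fixed port.
# The port (7860 here) must be allowed in the EC2 security group.
demo.launch(server_name="0.0.0.0", server_port=7860)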

Conclusion

That’s it! It is very easy to build your own chatbot using the OpenAI API.

In addition, you can fine-tune your own chatbot with a custom dataset. In the next article, we will show how to use the OpenAI API to train the chatbot on a specific set of conversational data that is relevant to our use case or domain. This process can significantly improve the chatbot’s ability to generate relevant and accurate responses to user input.


Thank you for reading!

If you liked the article and would like to see more, consider following me. I post regularly on topics related to on-chain analysis, artificial intelligence and machine learning. I try to keep my articles simple but precise, providing code, examples and simulations whenever possible.
