Build an AI-powered LinkedIn profile reviewer: OpenAI Assistants API and GPT-4 Vision

A Deep Dive into experimenting with GPT-4 Vision and Assistants API

Alessandro Amenta
Level Up Coding


Many of us have tinkered with building apps using OpenAI’s APIs, and there’s a plethora of articles on Medium about it. However, what I suspect many readers have not yet experimented with — at least at the time of writing this article — is building with the Assistants API. There are good reasons for this: for starters, it’s still in beta; secondly, it doesn’t support streaming, which can be frustrating for impatient users. Despite its quirks, it is definitely worth exploring — it offers built-in thread management, along with retrieval, a code interpreter, and function calling, letting developers create more sophisticated AI assistants with less overhead. So, when the next updates roll out, it will be even more advantageous to build your AI apps with it.

For now, though, considering it’s still quite new and lacking extensive tutorials or documentation, I decided to dive in and build a small prototype myself. That’s what this article is all about.

So, What Will We Build?

Among the ideas that came to mind, I settled on creating an assistant that focuses on optimizing the user’s LinkedIn profile. This idea stemmed from a conversation with a good friend who has been job-seeking recently and asked for LinkedIn profile tips. Not being an expert myself, I thought, why not build one — an expert? And while at it, why not seize the opportunity to experiment with the Assistants API and have some fun? Thus, a plan was formed, and all I needed to do was bring it all together.

The outcome, even just as a prototype, is quite impressive — the assistant does a remarkable job advising the user on what to work on, change, improve, and even provides a score for the entire profile.

Let’s get into it!

Image generated with DALL-E 3

How It Works:

The User Experience

Users start by inputting their LinkedIn profile URL, and then the magic begins. In the backend, a three-step process unfolds:

  1. Scraping the LinkedIn Profile: Extract relevant information from the user’s profile, including the profile image.
  2. Image Analysis via GPT-4 Vision: Through function calling, the profile image is passed to GPT-4 Vision, which assesses its suitability and effectiveness for a professional LinkedIn presence.
  3. Comprehensive Profile Review: The textual data and image analysis are synthesized by the Assistant into a detailed report with actionable suggestions for improvement.

After the initial analysis is displayed, users can continue the conversation with the assistant for further insights.

Building It Up and Breaking Down the Code:

The app comprises three simple scripts:

  • main.py: Creates the Assistant with a predefined knowledge base and the added ability to analyze profile pictures.
  • linkedin_scraper.py: Scrapes the LinkedIn profile, extracting details and the profile image URL for analysis.
  • app.py: The user interface, built with Streamlit, allows profile submission and displays the AI-generated recommendations.

Part 1: Creating the assistant — main.py:

The first step involves creating the Assistant, which can be done via the OpenAI platform’s playground or directly in code for local execution (which is what main.py is all about). The Assistants API supports three tools: Code Interpreter, Retrieval, and Function Calling. We will equip our assistant with two of them: data retrieval and function calling:

  • Data Retrieval: By uploading a collection of 6 PDFs containing expert advice on LinkedIn profile improvement, the Assistant can draw upon this knowledge base for analysis.
  • Function Calling for Image Analysis: Although the Assistants API lacks built-in vision capabilities, we can bring in GPT-4 Vision by connecting it to the assistant through a custom function that is called automatically when needed. This gives our assistant the ability to visually analyze the user’s profile picture and provide feedback.

Workflow of Integrating the Assistants API in main.py

  • Create an Assistant: Define the assistant’s instructions and select a model (I chose gpt-4-turbo-preview). Enable Retrieval and Function Calling.
  • Initiate a Conversation Thread: Begin a new thread whenever a user starts a conversation with the assistant.
  • Add User Queries as Messages: Populate the thread with messages as the user asks questions.
  • Activate the Assistant on the Thread: The assistant generates responses based on the thread’s content, automatically invoking the relevant tools as needed (a minimal sketch of this lifecycle follows right after this list).
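
To make that lifecycle concrete before we look at main.py, here is a minimal, stripped-down sketch of those steps using the same beta client as the rest of the code. The assistant ID is a placeholder, and error handling plus the requires_action branch for tool calls are left to the full code further down:

import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Start a new conversation thread
thread = client.beta.threads.create()

# 2. Add the user's message to the thread
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Here is my LinkedIn profile data..."
)

# 3. Run the Assistant on the thread (use the ID of an Assistant created beforehand)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id="asst_...")

# 4. Poll until the run finishes (the beta API has no streaming)
while run.status not in ("completed", "failed", "requires_action"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The newest message in the thread is the Assistant's reply
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)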

The create_linkedin_profile_analyzer function contains the logic for initializing our Assistant. This includes uploading the PDF files to form our knowledge base. These documents are uploaded to OpenAI, and their IDs are used to enable the assistant's retrieval feature, permitting it to leverage this knowledge base when analyzing user profiles. The custom function for image analysis, analyze_profile_picture, is designed to be invoked by the Assistant automatically whenever an image URL is provided, allowing for the assessment of profile pictures against professional standards. Here is the code:

import json
import os
from dotenv import load_dotenv

from openai import OpenAI

load_dotenv()

api_key = os.environ.get("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

# JSON schema for the custom image-analysis function the Assistant can call
function_json = {
    "name": "analyze_profile_picture",
    "description": (
        "Analyze this LinkedIn profile picture provided through the image URL. Examine its appropriateness for a "
        "professional LinkedIn profile by focusing on the presentation, expression and body language, composition "
        "and setting (including background), and the quality of the image. Ensure your analysis determines whether "
        "these aspects meet professional standards and offer specific recommendations for any needed improvements "
        "to enhance the profile's professional image. Overall, describe the image in detail."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "image_url": {"type": "string", "description": "URL of the profile picture"}
        },
        "required": ["image_url"]
    }
}

# Define the function for creating the Assistant
def create_linkedin_profile_analyzer():
    # List of PDF files forming the knowledge base
    pdf_files = [
        "file1.pdf",
        "file2.pdf",
        "file3.pdf",
        "file4.pdf",
        "file5.pdf",
        "file6.pdf"
    ]

    # Path to the directory containing the PDFs
    pdf_directory = "knowledge"  # Adjust the path as necessary

    # Upload documents for Retrieval
    file_ids = []
    for pdf_file in pdf_files:
        file_path = os.path.join(pdf_directory, pdf_file)
        with open(file_path, "rb") as file_data:
            file = client.files.create(file=file_data, purpose="assistants")
            file_ids.append(file.id)

    # Create the Assistant with the Retrieval and Function Calling tools;
    # file_ids attaches the uploaded PDFs as its knowledge base
    assistant = client.beta.assistants.create(
        name="LinkedIn Profile Analyzer",
        instructions=(
            "You are an expert in LinkedIn profile optimization, tasked with providing a comprehensive analysis "
            "of a user's LinkedIn profile. Analyze it thoroughly, be helpful, "
            "and maintain a casual, approachable yet professional tone. Remember to address the user directly and use the first person.\n"
            "Call analyze_profile_picture when the image URL of the profile is given."
        ),
        model="gpt-4-turbo-preview",
        tools=[
            {"type": "retrieval"},
            {"type": "function", "function": function_json}
        ],
        file_ids=file_ids
    )
    return assistant

# Helper to start a new thread with an initial user message and run the Assistant on it
def create_thread_and_run(assistant_id, user_message):
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_message
    )
    return client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant_id,
    )

# Handle the Assistant's request to call the custom image-analysis function
def handle_custom_function(run):
    if run.status == 'requires_action' and run.required_action.type == 'submit_tool_outputs':
        for tool_call in run.required_action.submit_tool_outputs.tool_calls:
            if tool_call.function.name == "analyze_profile_picture":
                image_url = json.loads(tool_call.function.arguments)["image_url"]

                # Call GPT-4 Vision API to analyze the image
                vision_response = client.chat.completions.create(
                    model='gpt-4-vision-preview',
                    messages=[
                        {
                            "role": "user",
                            "content": [
                                {
                                    "type": "text",
                                    "text": (
                                        "Analyze this LinkedIn profile picture. Provide an analysis focusing on its "
                                        "appropriateness and effectiveness for a LinkedIn profile. Consider the following aspects:\n\n"
                                        "1. Presentation: Evaluate the subject's attire and grooming. Does it align with professional standards suitable for "
                                        "their industry or field?\n"
                                        "2. Expression and Body Language: Assess the subject's facial expression and body language. Does it project confidence, "
                                        "approachability, and professionalism?\n"
                                        "3. Composition and Setting: Comment on the composition of the photograph, including the background. Is it distraction-free "
                                        "and does it enhance the subject's professional image?\n"
                                        "4. Quality and Lighting: Evaluate the quality of the photograph, including lighting and clarity. Does the image quality "
                                        "uphold professional standards?\n\n"
                                        "Provide recommendations for improvement if necessary, highlighting aspects that could enhance the subject's professional "
                                        "portrayal on LinkedIn."
                                    )
                                },
                                {
                                    "type": "image_url",
                                    "image_url": {
                                        "url": image_url,
                                    },
                                },
                            ],
                        }
                    ],
                    max_tokens=400
                )

                # Submit the vision analysis back to the Assistant so the run can continue
                client.beta.threads.runs.submit_tool_outputs(
                    thread_id=run.thread_id,
                    run_id=run.id,
                    tool_outputs=[
                        {
                            'tool_call_id': tool_call.id,
                            'output': vision_response.choices[0].message.content
                        }
                    ]
                )

# Main function to create the assistant
def main():
    assistant = create_linkedin_profile_analyzer()
    # Print the ID so it can be saved as ASSISTANT_ID for the Streamlit app
    print(f"Assistant created with ID: {assistant.id}")

if __name__ == "__main__":
    main()
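
A quick note on usage: main.py only needs to be run once. It uploads the PDFs, creates the Assistant, and prints the new Assistant’s ID, which is the value you will later store as ASSISTANT_ID for the Streamlit app (you can also look it up on the Assistants page of the OpenAI platform).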

Part 2: Scraping the profiles — linkedin_scraper.py:

For scraping LinkedIn profiles, this script uses a third-party API found on RapidAPI, which offers 50 free requests — plenty for testing purposes. This is an easy workaround, since direct scraping can be challenging due to LinkedIn’s protective measures. While building a custom scraper would afford more control over what data to retrieve, for a quick prototype like this one I prefer the simplicity of an established API.

The linkedin_scraper.py script starts by extracting the username from the provided LinkedIn profile URL. With the username in hand, we make a request to the RapidAPI endpoint, retrieving the profile's data, including personal information, professional experience, education, skills, languages, and certifications. This data is then formatted into structured text suitable for the Assistant to analyze. This is the API I used. Here is the code:

import os
import requests
import argparse
import logging
from dotenv import load_dotenv

# Set up basic configuration for logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

load_dotenv()
rapidapi_key = os.getenv('RAPIDAPI_KEY')

def main(profile_url):
    formatted_text, profile_image_url = scrape_linkedin_profile(profile_url)
    if formatted_text:  # Just check if formatted_text is not None or empty
        logging.info("Profile Data Scraped Successfully:")
        logging.info(formatted_text)
        if profile_image_url:
            logging.info(f"Profile Image URL: {profile_image_url}")
        else:
            logging.info("No Profile Image URL found.")
    else:
        logging.error("Failed to scrape profile data or profile is incomplete/private.")

def scrape_linkedin_profile(profile_url):
    username = extract_username(profile_url)
    if not rapidapi_key:
        logging.error("RapidAPI key not found. Please set the RAPIDAPI_KEY environment variable.")
        return None, None

    url = "https://linkedin-api8.p.rapidapi.com/"
    querystring = {"username": username}

    headers = {
        "X-RapidAPI-Key": rapidapi_key,
        "X-RapidAPI-Host": "linkedin-api8.p.rapidapi.com"
    }

    try:
        response = requests.get(url, headers=headers, params=querystring)
        if response.status_code == 200:
            profile_data = response.json()
            logging.debug(f"API Response: {profile_data}")
            formatted_text, profile_image_url = format_data_for_gpt(profile_data)
            # Additional logging to confirm data formatting
            logging.debug(f"Formatted Text: {formatted_text[:500]}")
            logging.debug(f"Profile Image URL: {profile_image_url}")
            return formatted_text, profile_image_url
        else:
            logging.error(f"Failed to fetch profile data. Status Code: {response.status_code}")
            return None, None
    except Exception as e:
        logging.exception("An error occurred while fetching the profile data.")
        return None, None

def extract_username(linkedin_url):
    username = linkedin_url.split('/')[-1] if linkedin_url.split('/')[-1] else linkedin_url.split('/')[-2]
    return username.strip()

def safe_get_list(data, key):
    """
    Safely get a list value from a dictionary. Returns an empty list if the key is not found or the value is None.
    """
    value = data.get(key)
    return [] if value is None else value

def safe_get_value(data, key, default=''):
    """Safely get a single value from a dictionary. Returns a default value if the key is not found or the value is None."""
    return data.get(key) if data.get(key) is not None else default

def format_data_for_gpt(profile_data):
    try:
        # Extract the profile image URL safely
        profile_image_url = safe_get_value(profile_data, 'profilePicture', '')

        # Personal Information with safe retrieval
        firstName = safe_get_value(profile_data, 'firstName', 'No first name')
        lastName = safe_get_value(profile_data, 'lastName', 'No last name')
        fullName = f"{firstName} {lastName}"
        headline = safe_get_value(profile_data, 'headline', 'No headline provided')
        summary = safe_get_value(profile_data, 'summary', 'No summary provided')
        location = safe_get_value(profile_data.get('geo', {}), 'full', 'No location provided')

        formatted_text = f"Name: {fullName}\nHeadline: {headline}\nLocation: {location}\nSummary: {summary}\n"

        # Professional Experience
        formatted_text += "Experience:\n"
        for position in safe_get_list(profile_data, 'position'):
            company = position.get('companyName', 'No company name')
            title = position.get('title', 'No title')
            jobLocation = position.get('location', 'No location')
            jobDescription = position.get('description', 'No description').replace('\n', ' ')
            formatted_text += f"- {title} at {company}, {jobLocation}. {jobDescription}\n"

        # Education
        formatted_text += "Education:\n"
        for education in safe_get_list(profile_data, 'educations'):
            school = education.get('schoolName', 'No school name')
            degree = education.get('degree', 'No degree')
            field = education.get('fieldOfStudy', 'No field of study')
            grade = education.get('grade', 'No grade')
            eduDescription = education.get('description', 'No description').replace('\n', ' ')
            formatted_text += f"- {degree} in {field} from {school}, Grade: {grade}. {eduDescription}\n"

        # Skills
        formatted_text += "Skills:\n"
        for skill in safe_get_list(profile_data, 'skills'):
            skillName = skill.get('name', 'No skill name')
            formatted_text += f"- {skillName}\n"

        # Languages
        formatted_text += "Languages:\n"
        languages = safe_get_list(profile_data, 'languages')
        if languages:
            for language in languages:
                langName = language.get('name', 'No language name')
                proficiency = language.get('proficiency', 'No proficiency level')
                formatted_text += f"- {langName} ({proficiency})\n"
        else:
            formatted_text += "- No languages provided\n"

        # Certifications
        formatted_text += "Certifications:\n"
        for certification in safe_get_list(profile_data, 'certifications'):
            certName = certification.get('name', 'No certification name')
            formatted_text += f"- {certName}\n"

        return formatted_text, profile_image_url

    except Exception as e:
        logging.exception("An error occurred during data formatting.")
        return None, None

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Scrape LinkedIn profile data.")
    parser.add_argument("profile_url", help="LinkedIn profile URL to scrape.")
    args = parser.parse_args()

    main(args.profile_url)
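
If you want to sanity-check the scraper on its own before wiring it into the app, you can run it directly from the command line, for example python linkedin_scraper.py https://www.linkedin.com/in/your-username/ (the URL is just a placeholder), and inspect the logged output.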

Part 3: Putting it all together — app.py:

The final piece of the puzzle: the user interface. I picked Streamlit for its simplicity and efficiency in turning scripts into shareable web apps — pure Python, no frontend fuss.

The Workflow:

The script begins by loading the necessary environment variables, including ASSISTANT_ID, which is crucial for interactions with the OpenAI Assistant we've created. This can be retrieved from the Assistants page on the OpenAI platform.
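
For reference, here is roughly what a single .env file at the project root could look like; the variable names come straight from the three scripts, while the values are placeholders you would replace with your own:

OPENAI_API_KEY=sk-...
RAPIDAPI_KEY=your-rapidapi-key
ASSISTANT_ID=asst_...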

Streamlit’s session state is used to manage the state of our application, tracking whether the chat has started, the analysis requested, and if it’s been completed.

Building the UI:

The sidebar acts as the primary interaction point, where users enter their OpenAI API key and LinkedIn profile URL. An optional job-preferences field encourages a more customized analysis.

Clicking the Analyze button kicks off the analysis and creates a new conversation thread with the Assistant.

Processing the LinkedIn Profile:

Once the Analyze button is clicked, the script uses the previously mentioned scrape_linkedin_profile function to extract and format data from the LinkedIn profile, including the profile image. This data is then forwarded to the Assistant for processing.

Handling Custom Function Calls:

A core part of the workflow is handle_custom_function. It is invoked when the Assistant needs to analyze the profile picture: it calls the GPT-4 Vision API with the image URL and submits the returned analysis back to the Assistant. This step illustrates how the Assistant's capabilities can be extended, enabling it to "see" and provide feedback on images and visual elements.

Displaying Analysis Results:

The results of the analysis are displayed within the main application interface. Users can also ask additional questions and keep conversing.

Quick heads-up: I put a good chunk of effort into the Streamlit UI to make it easy on the eyes, which means the code might look a bit long at first glance. But don’t worry, the underlying logic is pretty straightforward.

Image from the author

Here’s the code:

import json
import os
import time
from dotenv import load_dotenv

import openai
import streamlit as st

from linkedin_scraper import scrape_linkedin_profile

# Load environment variables from .env file
load_dotenv()

assistant_id = os.getenv("ASSISTANT_ID")

# Initialize Streamlit session state variables
if "start_chat" not in st.session_state:
    st.session_state.start_chat = False
if "thread_id" not in st.session_state:
    st.session_state.thread_id = None
if "openai_api_key" not in st.session_state:
    st.session_state.openai_api_key = ""
# Add a new session state variable for analysis completion tracking
if "analysis_completed" not in st.session_state:
    st.session_state.analysis_completed = False

# Configure the Streamlit page
st.set_page_config(page_title="ReviewIn", page_icon=":computer:")

with st.sidebar:
    st.markdown("<h1 style='display: flex; align-items: center;'>ReviewIn <img src='https://www.pagetraffic.com/blog/wp-content/uploads/2022/09/linkedin-blue-logo-icon.png' alt='LinkedIn Logo' width='40' height='40'></h1>", unsafe_allow_html=True)
    # Feature bullet points
    with st.expander("🚀 Features and Tips"):
        st.markdown("""
- 🧐 **LinkedIn Profile Review:** Personalized, actionable advice for enhancing your profile.
- 📚 **Specialized Knowledge:** Draws on a custom knowledge base tailored for LinkedIn profile improvements.
- 💡 **Powered by OpenAI:** Leverages the Assistants API with GPT-4 Turbo for analysis and conversation.
- 🖼 **Vision Insights:** GPT-4 Vision for detailed feedback on profile pictures.
- 🔥 **Tip:** For better analysis, set your profile to public. The more complete and public your profile, the better our insights.
- ⚠️ **Note:** The Assistants API is in beta and doesn't yet support streaming - the full response is generated before being sent. Your patience is appreciated! 🙂
        """, unsafe_allow_html=True)

    st.session_state['openai_api_key'] = st.text_input("🔑 OpenAI API Key:", type="password")
    with st.expander("🎯 Job Preferences & Context (optional)"):
        job_preferences = st.text_area("Got any specific job-seeking goals? Let us know here!",
                                       placeholder="e.g., 'Software Engineer, entry-level, Tech industry and interested in AI.'")
    profile_url = st.text_input("🌐 Enter LinkedIn Profile URL:")

    # Check if both OpenAI API Key and LinkedIn Profile URL are provided
    if st.session_state['openai_api_key'] and profile_url:
        if st.button("Analyze"):
            st.session_state.start_chat = True
            openai.api_key = st.session_state.openai_api_key
            thread = openai.beta.threads.create()
            st.session_state.thread_id = thread.id
            st.session_state.analysis_requested = True
    else:
        # Optionally, display a message prompting the user to fill in all required fields
        st.warning("Friendly reminder - add your OpenAI API Key and LinkedIn Profile URL to kick things off!😎")

# Initialize or reset session state on page load
if 'init' not in st.session_state:
    st.session_state['init'] = True
    st.session_state.start_chat = False
    st.session_state.messages = []
    st.session_state.thread_id = None
    st.session_state.analysis_requested = False

def handle_custom_function(run, job_preferences=""):
    if run.status == 'requires_action' and run.required_action.type == 'submit_tool_outputs':
        for tool_call in run.required_action.submit_tool_outputs.tool_calls:
            if tool_call.function.name == "analyze_profile_picture":
                image_url = json.loads(tool_call.function.arguments)["image_url"]
                print(f"Analyzing image at URL: {image_url}")
                additional_context = ""
                if job_preferences:
                    additional_context += f"\n\n**ADDITIONAL** - If relevant for your analysis, please consider the following context about the user's job preferences: {job_preferences}"

                # Call GPT-4 Vision API to analyze the image
                vision_response = openai.chat.completions.create(
                    model='gpt-4-vision-preview',
                    messages=[
                        {
                            "role": "user",
                            "content": [
                                {
                                    "type": "text",
                                    "text": (
                                        "Analyze this LinkedIn profile picture. Provide an analysis focusing on its "
                                        "appropriateness and effectiveness for a LinkedIn profile. Consider the following aspects:\n\n"
                                        "1. Presentation: Evaluate the subject's attire and grooming. Does it align with professional standards suitable for "
                                        "their industry or field?\n"
                                        "2. Expression and Body Language: Assess the subject's facial expression and body language. Does it project confidence, "
                                        "approachability, and professionalism?\n"
                                        "3. Composition and Setting: Comment on the composition of the photograph, including the background. Is it distraction-free "
                                        "and does it enhance the subject's professional image?\n"
                                        "4. Quality and Lighting: Evaluate the quality of the photograph, including lighting and clarity. Does the image quality "
                                        "uphold professional standards?\n\n"
                                        "Provide recommendations for improvement if necessary, highlighting aspects that could enhance the subject's professional "
                                        f"portrayal on LinkedIn.{additional_context}"
                                    )
                                },
                                {
                                    "type": "image_url",
                                    "image_url": {
                                        "url": image_url,
                                    },
                                },
                            ],
                        }
                    ],
                    max_tokens=400
                )
                vision_content = vision_response.choices[0].message.content
                print(f"Vision API response: {vision_content}")

                # Submit the output back to the Assistant
                openai.beta.threads.runs.submit_tool_outputs(
                    thread_id=run.thread_id,
                    run_id=run.id,
                    tool_outputs=[
                        {
                            'tool_call_id': tool_call.id,
                            'output': vision_content
                        }
                    ]
                )
                print(f"Submitted vision analysis back to the assistant: {vision_content}")

# Main interaction logic
if st.session_state.start_chat:
    if "messages" not in st.session_state:
        st.session_state.messages = []

    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    if st.session_state.analysis_requested:
        with st.spinner('⏳🔍 Crunching the numbers - going to take a sec chief!😊'):
            # Process the LinkedIn profile URL
            formatted_text, image_url = scrape_linkedin_profile(profile_url)
            time.sleep(3)
            if formatted_text:
                # Instructions for analysis
                instructions = """
Provide an analysis/report of my LinkedIn profile below. Approach this task with professionalism and friendliness, and ensure your recommendations are both helpful and actionable. Provide detailed feedback for improvement. You can follow this structure:

1. **Profile Picture**: Begin with the profile picture. Assess its alignment with professional standards. Suggest specific changes to enhance the first impression it makes, if needed.
2. **Headline and Summary**: Evaluate the clarity and impact of the headline and summary. How well do they communicate the individual's professional narrative and unique value proposition? Provide actionable advice to refine these elements, enhancing their appeal and coherence.
3. **Work Experience and Skills**: Delve into the work experience and skills sections. Identify the strengths and pinpoint areas that can benefit from greater detail or stronger examples of achievements. Recommend strategies to showcase expertise/skills more effectively.
4. **Educational Background and Volunteer Experience**: Analyze the education section and provide recommendations, if needed. Do the same for the volunteer experience, if present. Advise on optimizing these areas to support the professional identity and narrative.
Each section should receive an assessment that contributes to an overall profile rating. Conclude with:
5. **Overall Quality Evaluation and Potential**: Rate the profile's current state out of 100, based on the coherence, presentation, and effectiveness of all sections combined - be as objective as you can and refrain from giving overly high ratings unless the profile really is amazing; you can be critical, but still friendly. Then, estimate the potential score increase achievable by implementing your recommendations. Highlight the transformative impact of suggested changes, not just incrementally but in terms of elevating the profile's professional stature and networking potential.

Remember, your analysis should be comprehensive and nuanced, leveraging your expertise and any relevant external information from the files, where relevant. Address me directly and use the first person for a personal touch. Let's evaluate this LinkedIn profile:
"""
                # Prepare the analysis request content
                analysis_request = f"{instructions}\n\n**HERE IS THE CONTENT FOR ANALYSIS**:\n- **Profile Text**: {formatted_text}\n"
                if image_url:  # Conditionally include image URL if available
                    analysis_request += f"- **Profile Image URL**: {image_url}"

                # Add job preferences to the analysis request if any
                if job_preferences:
                    analysis_request += f"\n\n**ADDITIONAL** - If relevant, please incorporate the following context about the job preferences of the user to tailor the recommendations: {job_preferences}"

                print(analysis_request)

                openai.beta.threads.messages.create(
                    thread_id=st.session_state.thread_id,
                    role="user",
                    content=analysis_request
                )

                # Run the Assistant on the thread
                run = openai.beta.threads.runs.create(
                    thread_id=st.session_state.thread_id,
                    assistant_id=assistant_id,
                    instructions="Address me directly and use first person for a personal touch. Be helpful and approachable."
                )

                # Poll until the run completes, handling any tool calls along the way
                while run.status != 'completed':
                    time.sleep(1)
                    run = openai.beta.threads.runs.retrieve(thread_id=st.session_state.thread_id, run_id=run.id)
                    if run.status == 'requires_action':
                        print("Function Calling")
                        handle_custom_function(run, job_preferences)

                # Fetch and display the analysis results
                messages = openai.beta.threads.messages.list(
                    thread_id=st.session_state.thread_id
                )

                # Filter and display messages for the current run
                for message in messages.data:
                    if message.run_id == run.id and message.role == "assistant":
                        st.session_state.messages.append({"role": "assistant", "content": message.content[0].text.value})
                        with st.chat_message("assistant"):
                            st.markdown(message.content[0].text.value)

                # Mark the analysis as completed and ready for user follow-up
                st.session_state.analysis_requested = False
                st.session_state.analysis_completed = True

    # After completing the initial analysis, enable follow-up conversations
    if st.session_state.analysis_completed:
        user_input = st.chat_input("Ask me anything about improving your LinkedIn profile!")
        if user_input:  # If there's user input, process it
            # Append user input to messages for display
            st.session_state.messages.append({"role": "user", "content": user_input})
            with st.chat_message("user"):
                st.markdown(user_input)

            # Send the user input to OpenAI API as a new message in the thread
            openai.beta.threads.messages.create(
                thread_id=st.session_state.thread_id,
                role="user",
                content=user_input
            )

            # Wait for response
            run = openai.beta.threads.runs.create(
                thread_id=st.session_state.thread_id,
                assistant_id=assistant_id,
                instructions="Be helpful and approachable."
            )

            while run.status != 'completed':
                time.sleep(1)
                run = openai.beta.threads.runs.retrieve(thread_id=st.session_state.thread_id, run_id=run.id)

            # Fetch and display the assistant's response
            messages = openai.beta.threads.messages.list(
                thread_id=st.session_state.thread_id
            )

            # Filter and display messages for the current run
            for message in messages.data:
                if message.run_id == run.id and message.role == "assistant":
                    st.session_state.messages.append({"role": "assistant", "content": message.content[0].text.value})
                    with st.chat_message("assistant"):
                        st.markdown(message.content[0].text.value)

# Display if chat not yet started
if not st.session_state.start_chat:
    st.markdown(
        """
<h2 style='text-align: center;'>
🚀 Ready to enhance your LinkedIn profile? <br>
Drop your <img src='https://upcdn.io/FW25bBB/image/content/app_logos/485244ee-9158-4685-8f1e-d349e97b35e1.png?f=webp&w=1920&q=85&fit=shrink-cover' alt='LinkedIn Logo' width='45' height='45' style='border-radius: 15%'> API key and <img src='https://www.pagetraffic.com/blog/wp-content/uploads/2022/09/linkedin-blue-logo-icon.png' alt='LinkedIn Logo' width='40' height='40'> profile URL in the sidebar and let's dive in!
</h2>
        """,
        unsafe_allow_html=True
    )
    st.image("dalle.png", use_column_width=True)

And here’s what the end result looks like:

Image from the author — Made with ❤️ and Python.

Wrapping up

That’s all for this article! Dive into the code to build your own version by visiting the GitHub repo.

Excited to hear your feedback and see what you build with the Assistants API.

Should you find this article enlightening, consider expressing your appreciation with 50 claps 👏 — your support means a ton.

Thanks for following along and happy coding! :)

All images are by me, the author, unless otherwise noted. Apart from being a user, I have no affiliation with OpenAI, Streamlit, Rapid API or any other organisation.
