Navigating the World of LLMs: A Beginner’s Guide to Prompt Engineering - Part 1

From Basics To Advanced Techniques

Introduction:

Prompt engineering is the process of designing and creating effective instructions or questions (called “prompts”) that help LLMs (Large Language Models) understand what you want them to do and generate the desired response or output.

Think of it like giving clear and specific instructions to a helpful assistant. By carefully crafting these instructions, you can guide the LLM to provide more accurate, relevant, and useful responses.

It involves understanding how LLMs interpret and respond to prompts and then using that knowledge to create prompts that are easy for the LLM to understand and follow. The goal of prompt engineering is to improve the communication between humans and LLMs, leading to better results and experiences. In simpler terms, prompt engineering is about figuring out the best way to “talk” to LLMs to get the most helpful and accurate responses.

In this article, we will explore several influential research papers focused on prompt engineering. We’ll delve into the key concepts and ideas presented in these papers and provide concrete examples to illustrate how these techniques can be applied in practice.

Prompts Overview:

1. User Prompt
A user prompt is the initial input or question provided by the user to the system. It sets the context for the system’s response and can vary widely in content and complexity based on the user’s needs or curiosity.

Example:
- User Prompt: “What’s the weather like in Paris today?”
- System’s Response: “The weather in Paris today is mostly sunny with a high of 75°F and a low of 59°F.”

2. System Prompt
A system prompt is a predefined instruction or question generated by the system to guide the user on how to interact with it or to request additional information needed to complete a task.

Example:
- System Prompt: “Please specify the city and date for the weather forecast.”
- User’s Response: “Paris, September 10th.”

3. Clarification Prompt
The system uses clarification prompts to request more details or clarify ambiguity in a user’s request, helping the system provide a more accurate and relevant response.

Example:
- User Prompt: “Tell me about the big game.”
- System Prompt (Clarification Prompt): “Are you asking about a specific sport or event?”
- User’s Response: “The World Cup final match.”

4. Follow-up Prompt
Follow-up prompts are generated by the system after an initial response, aiming to continue the conversation, delve deeper into the topic, or offer related information or actions.

Example:
- System’s Initial Response: “The World Cup final match is between Brazil and France.”
- System Prompt (Follow-up Prompt): “Would you like to know the start time or the players to watch?”

5. Feedback Prompt
Feedback prompts seek user feedback on the quality of the system’s responses or the overall interaction experience. This can help improve the system’s performance and user satisfaction.

Example:
- System Prompt (Feedback Prompt): “Was this information helpful to you?”
- User’s Response: “Yes, thank you!”

How to use Different Prompts:

import re

def get_prompt(user_input):
    # math_prompt, emotional_prompt, etc. are assumed to be prompt-template
    # strings defined elsewhere, along with default_prompt.
    patterns_to_prompts = {
        r'\b(math|calculate|equation|formula)\b': math_prompt,
        r'\b(feel|emotion|mood|sad|happy|angry)\b': emotional_prompt,
        r'\b(code|program|script|develop|bug)\b': coding_prompt,
        r'\b(history|historical|past|event)\b': history_prompt,
        r'\b(science|scientific|physics|chemistry|biology)\b': science_prompt,
        r'\b(write|essay|article|story|poem)\b': writing_prompt,
        r'\b(recommend|suggest|idea|advice)\b': recommendation_prompt,
        r'\b(translate|language|meaning|definition)\b': translation_prompt,
        r'\b(weather|forecast|temperature|rain)\b': weather_prompt,
        r'\b(finance|investment|stock|crypto|budget)\b': finance_prompt,
        r'\b(health|fitness|diet|exercise)\b': health_prompt,
        r'\b(food|cuisine|recipe|cooking)\b': food_prompt,
        r'\b(sport|game|team|match)\b': sports_prompt,
        r'\b(music|song|artist|band|instrument)\b': music_prompt,
        r'\b(movie|film|series|show|actor)\b': entertainment_prompt,
        r'\b(travel|vacation|destination|trip)\b': travel_prompt,
        r'\b(tech|gadget|device|software|hardware)\b': technology_prompt,
        r'\b(education|study|learn|school|college)\b': education_prompt,
        r'\b(politics|government|policy|election)\b': politics_prompt,
        r'\b(art|painting|sculpture|gallery)\b': art_prompt,
    }

    for pattern, prompt in patterns_to_prompts.items():
        if re.search(pattern, user_input, re.IGNORECASE):
            return prompt

    return default_prompt


Prompt Optimization:

  • Be clear and specific: Use clear and concise language to communicate the task and requirements effectively.
  • Provide sufficient context: Include enough background information to help the model understand the task.
  • Use relevant examples: Choose examples that are representative and align with the desired output.
  • Avoid ambiguity: Minimize ambiguity in the prompt to reduce the chances of misinterpretation.
  • Iterate and refine: Continuously iterate and refine the prompts based on the model’s outputs and performance.
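
As a minimal sketch of these tips (the `build_prompt` helper and the prompts below are illustrative, not from any particular library), a vague prompt can be tightened by spelling out the task, context, and constraints explicitly:

```python
# A hypothetical before/after illustration of prompt optimization.
vague_prompt = "Tell me about dogs."

def build_prompt(task, context, constraints):
    """Assemble a clear, specific prompt from its parts."""
    return f"{task}\nContext: {context}\nConstraints: {'; '.join(constraints)}"

refined_prompt = build_prompt(
    task="Write a 3-paragraph overview of common family dog breeds.",
    context="The reader is a first-time dog owner living in an apartment.",
    constraints=["cover temperament and exercise needs",
                 "avoid breed-specific jargon",
                 "end with one actionable recommendation"],
)

print(refined_prompt)
```

Iterating on a prompt then amounts to adjusting these parts based on the model's output, rather than rewriting free text from scratch.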

Dynamic Prompts:

How to generate Dynamic Prompts:

A dynamic prompt is a prompt that is generated or modified dynamically based on certain conditions or inputs. Instead of using fixed, predefined prompts, dynamic prompts allow for more flexibility and adaptability in generating prompts based on the specific context or user input.

Here’s a list of techniques that can be used to generate dynamic prompts in a more sophisticated real-world implementation:

1. Topic Extraction:
Keyword extraction: Identifying important keywords or phrases in the user’s input using techniques like TF-IDF, RAKE, or TextRank.
Named Entity Recognition (NER): Recognizing and extracting named entities such as person names, organizations, locations, etc., from the user’s input.
Topic modeling: Using algorithms like Latent Dirichlet Allocation (LDA) or Non-negative Matrix Factorization (NMF) to discover latent topics in the user’s input.
Text classification: Classifying the user’s input into predefined topic categories using machine learning models like Naive Bayes, Support Vector Machines (SVM), or deep learning models like Convolutional Neural Networks (CNN) or Recurrent Neural Networks (RNN).
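
A tiny, self-contained sketch of keyword extraction via TF-IDF (hand-rolled here for illustration; production code would typically use scikit-learn, RAKE, or TextRank):

```python
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,!?") for w in text.split()]

def tfidf_keywords(doc, corpus, top_k=3):
    """Rank words in `doc` by TF-IDF against a small background corpus."""
    docs = [tokenize(d) for d in corpus]
    words = tokenize(doc)
    tf = Counter(words)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in docs if word in d)        # document frequency
        idf = math.log((1 + len(docs)) / (1 + df)) + 1  # smoothed IDF
        scores[word] = (count / len(words)) * idf
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "I need help with my python code",
]
print(tfidf_keywords("my python code has a bug in the loop", corpus))
```

Words common across the corpus (like "the") score low, so the extracted keywords can then steer which prompt template is selected.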

2. Sentiment Analysis:
— Rule-based approaches: Using predefined sentiment lexicons and rules to determine the sentiment polarity (positive, negative, neutral) of the user’s input.
— Machine learning-based approaches: Training sentiment classification models using algorithms like Naive Bayes, SVM, or deep learning models like CNN or RNN to predict the sentiment of the user’s input.
— Aspect-based sentiment analysis: Identifying the sentiment associated with specific aspects or entities mentioned in the user’s input.
— Emotion detection: Detecting specific emotions (e.g., joy, anger, sadness) expressed in the user’s input using emotion lexicons or machine learning models.
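
The rule-based approach can be sketched in a few lines; the lexicon below is made up for demonstration, whereas real systems use resources such as VADER or SentiWordNet:

```python
# Toy rule-based sentiment scorer with a hand-made lexicon (illustrative).
POSITIVE = {"good", "great", "happy", "love", "excellent", "helpful"}
NEGATIVE = {"bad", "sad", "angry", "hate", "terrible", "broken"}
NEGATORS = {"not", "never", "no"}

def lexicon_sentiment(text):
    """Return a polarity in [-1, 1]; a negator flips the following word."""
    tokens = [t.lower().strip(".,!?") for t in text.split()]
    score = 0
    for i, tok in enumerate(tokens):
        value = (tok in POSITIVE) - (tok in NEGATIVE)
        if i > 0 and tokens[i - 1] in NEGATORS:
            value = -value
        score += value
    hits = sum(1 for t in tokens if t in POSITIVE or t in NEGATIVE)
    return score / hits if hits else 0.0

print(lexicon_sentiment("I love this, it is great"))   # positive
print(lexicon_sentiment("This is not good, I am sad")) # negative
```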

3. Intent Classification:
— Rule-based approaches: Using predefined rules and patterns to classify the user’s intent based on specific keywords, phrases, or sentence structures.
— Machine learning-based approaches: Training intent classification models using algorithms like Naive Bayes, SVM, or deep learning models like CNN or RNN to predict the user’s intent based on their input.
— Semantic similarity: Comparing the user’s input with predefined intent templates or examples using semantic similarity measures like cosine similarity or word embeddings.
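
A minimal sketch of the semantic-similarity approach: the user's input is compared against hypothetical intent templates using bag-of-words cosine similarity (real systems would use word embeddings instead of raw word counts):

```python
import math
from collections import Counter

# Hypothetical intent templates for a shopping assistant.
INTENT_EXAMPLES = {
    "place_order": "I want to order buy purchase a product",
    "track_order": "where is my order track shipment delivery status",
    "return_refund": "return this item refund my money back",
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify_intent(user_input):
    query = Counter(user_input.lower().split())
    scored = {
        intent: cosine(query, Counter(text.lower().split()))
        for intent, text in INTENT_EXAMPLES.items()
    }
    return max(scored, key=scored.get)

print(classify_intent("can you track my delivery please"))
```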

4. Context Understanding:
— Conversation history: Maintaining a conversation history to understand the context of the user’s input and generate prompts that are relevant to the ongoing conversation.
— User profile: Leveraging information from the user’s profile, such as preferences, demographics, or past interactions, to generate personalized prompts.
— Domain knowledge: Incorporating domain-specific knowledge or ontologies to understand the context and generate prompts that are relevant to the specific domain or industry.
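
Conversation history can be folded into each new prompt; the sketch below (with a hypothetical `ConversationContext` class) keeps a rolling window of recent turns:

```python
from collections import deque

class ConversationContext:
    """Keep a rolling window of turns and fold it into the next prompt."""
    def __init__(self, max_turns=5):
        self.history = deque(maxlen=max_turns)

    def add_turn(self, role, text):
        self.history.append(f"{role}: {text}")

    def build_prompt(self, user_input):
        transcript = "\n".join(self.history)
        return (
            "Continue the conversation below as a helpful assistant.\n"
            f"{transcript}\n"
            f"User: {user_input}\nAssistant:"
        )

ctx = ConversationContext(max_turns=3)
ctx.add_turn("User", "What's the weather in Paris?")
ctx.add_turn("Assistant", "Mostly sunny, high of 75F.")
print(ctx.build_prompt("What about tomorrow?"))
```

The `maxlen` bound keeps the prompt from growing without limit as the conversation continues.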

5. Knowledge Integration:
— Knowledge bases: Integrating structured knowledge bases or databases to retrieve relevant information and incorporate it into the generated prompts.
— Information retrieval: Using techniques like TF-IDF, BM25, or semantic search to retrieve relevant information from unstructured text sources and integrate it into the prompts.
— Question answering: Employing question answering models or techniques to extract relevant answers or information from knowledge sources and include them in the prompts.
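
A toy sketch of retrieval-based knowledge integration: snippets from a hypothetical knowledge base are ranked by word overlap with the query (a crude stand-in for TF-IDF or BM25) and the best match is spliced into the prompt:

```python
def retrieve(query, knowledge_base, top_k=1):
    """Rank snippets by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda snippet: len(q_words & set(snippet.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

knowledge_base = [
    "Our return window is 30 days from delivery.",
    "Standard shipping takes 3-5 business days.",
    "All headphones carry a one-year warranty.",
]

facts = retrieve("how long is the return window", knowledge_base)
prompt = f"Using this fact: '{facts[0]}', answer the user's question."
print(prompt)
```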

6. Natural Language Generation:
— Template-based generation: Using predefined templates with placeholders that can be filled with extracted information to generate prompts.
— Rule-based generation: Applying linguistic rules and patterns to generate grammatically correct and coherent prompts.
— Neural language models: Utilizing pre-trained language models like GPT or T5 to generate prompts based on the input context and desired output.
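
Template-based generation is the simplest of the three; a sketch using Python string formatting (the templates and slot names below are illustrative):

```python
# Hypothetical prompt templates with placeholders filled from extracted slots.
TEMPLATES = {
    "track_order": ("As a support agent, help the user track order "
                    "#{order_id} for their {product}."),
    "place_order": "As a support agent, help the user order a {product}.",
}

def fill_template(intent, slots):
    """Fill the template for `intent` with the extracted slot values."""
    return TEMPLATES[intent].format(**slots)

slots = {"order_id": "123456", "product": "Wireless Headphones"}
print(fill_template("track_order", slots))
```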

7. Personalization:
— User preferences: Adapting prompts based on user preferences, such as language style, formality level, or specific interests.
— User feedback: Incorporating user feedback and preferences dynamically to generate prompts that align with their expectations and needs.
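
A sketch of preference-driven personalization, appending style directives derived from a hypothetical user profile:

```python
def personalize(base_prompt, profile):
    """Append style directives based on a (hypothetical) user profile."""
    directives = []
    if profile.get("formality") == "casual":
        directives.append("Use a friendly, conversational tone.")
    elif profile.get("formality") == "formal":
        directives.append("Use a professional, formal tone.")
    if profile.get("expertise") == "beginner":
        directives.append("Avoid jargon and explain terms simply.")
    if not directives:
        return base_prompt
    return base_prompt + " " + " ".join(directives)

profile = {"formality": "casual", "expertise": "beginner"}
print(personalize("Explain how SSDs store data.", profile))
```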

8. Multilingual Support:
— Language detection: Automatically detecting the language of the user’s input to generate prompts in the appropriate language.
— Machine translation: Translating the user’s input or the generated prompts to support conversations in multiple languages.
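
Language detection can be approximated with stopword overlap, as in the crude sketch below (illustrative only; real systems use libraries such as langdetect or fastText):

```python
# Tiny stopword lists per language (illustrative, not exhaustive).
STOPWORDS = {
    "en": {"the", "is", "and", "to", "of", "in", "it", "you"},
    "fr": {"le", "la", "est", "et", "les", "des", "je", "vous"},
    "es": {"el", "la", "es", "y", "los", "de", "que", "en"},
}

def detect_language(text):
    """Guess the language with the largest stopword overlap."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

print(detect_language("the cat is in the garden"))
print(detect_language("je pense que vous avez raison"))
```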

Implementing dynamic prompting involves combining these techniques in a cohesive manner, often leveraging natural language processing libraries, machine learning frameworks, and domain-specific knowledge to create a robust and effective system.

Dynamic Prompting Example:

import spacy
from textblob import TextBlob

# Load the spaCy English model (available for richer NLP such as entity
# extraction, though this minimal example relies on keyword rules)
nlp = spacy.load("en_core_web_sm")

# Define a function to extract intent and sentiment from user input
def extract_intent_and_sentiment(user_input):
    text = user_input.lower()
    # Perform intent classification using predefined rules.
    # Check "track" before "order", since "track my order" contains both.
    if "track" in text:
        intent = "track_order"
    elif "return" in text or "refund" in text:
        intent = "return_refund"
    elif "order" in text:
        intent = "place_order"
    else:
        intent = "general_inquiry"

    # Perform sentiment analysis using TextBlob
    sentiment = TextBlob(user_input).sentiment.polarity

    return intent, sentiment

# Define a function to generate dynamic prompts
def generate_dynamic_prompt(user_input, context):
    # Extract intent and sentiment from user input
    intent, sentiment = extract_intent_and_sentiment(user_input)

    # Generate dynamic prompt based on intent, sentiment, and context
    if intent == "place_order":
        prompt = f"As a customer support agent, assist the user in placing an order for {context['product']}. "
        if sentiment > 0:
            prompt += "Express enthusiasm and provide helpful suggestions."
        else:
            prompt += "Address any concerns and provide reassurance."
    elif intent == "track_order":
        prompt = f"As a customer support agent, help the user track their order #{context['order_id']}. "
        if sentiment > 0:
            prompt += "Provide a positive update on the order status."
        else:
            prompt += "Investigate any delays and provide a realistic timeline."
    elif intent == "return_refund":
        prompt = f"As a customer support agent, assist the user with returning {context['product']} and processing a refund. "
        if sentiment > 0:
            prompt += "Express understanding and offer a smooth return process."
        else:
            prompt += "Apologize for any inconvenience and provide clear instructions."
    else:
        prompt = "As a customer support agent, provide general assistance and information to the user. "
        if sentiment > 0:
            prompt += "Maintain a friendly and helpful tone."
        else:
            prompt += "Show empathy and offer solutions to any concerns."

    return prompt

# Example usage
user_input = "I want to track my order #123456. It's been a week since I placed it."
context = {"order_id": "123456", "product": "Wireless Headphones"}

# Generate the dynamic prompt
dynamic_prompt = generate_dynamic_prompt(user_input, context)

# Print the generated prompt
print(dynamic_prompt)

The extract_intent_and_sentiment function takes the user input and performs intent classification using predefined rules based on keywords. It also uses TextBlob to analyze the sentiment polarity of the user input.

The generate_dynamic_prompt function takes the user input and the context (which includes relevant information like the order ID and product) and generates a dynamic prompt based on the extracted intent, sentiment, and context.

In the example usage, we provide a sample user input and context. The generate_dynamic_prompt function is called with these arguments, and it returns the dynamically generated prompt.

When you run this code, it will output the generated dynamic prompt based on the provided user input and context. In this case, the output would be:

As a customer support agent, help the user track their order #123456. Investigate any delays and provide a realistic timeline.

26 Principled Instructions in 5 Categories:

The paper is “Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4” by Bsharat et al. (2023).

The key points from the paper are:

  1. The goal is to simplify the concepts of formulating questions for various scales of LLMs and enhance user comprehension of LLM behaviors.
  2. Experiments were conducted on LLaMA-1/2 (7B, 13B, 70B) and GPT-3.5/4 to verify the effectiveness of the proposed principles.
  3. Fine-tuning LLMs is often impractical, so optimizing prompts has become a central focus.
  4. The 26 principles are grouped into five categories: a) Prompt Structure and Clarity b) Specificity and Information c) User Interaction and Engagement d) Content and Language Style e) Complex Tasks and Coding Prompts
  5. Experiments using the ATLAS benchmark show that specialized prompts based on these principles enhanced response quality by 57.7% and accuracy by 67.3% on average for GPT-4.
  6. Performance gains were more pronounced with increasing model size, exceeding 40% when moving from LLaMA-2–7B to GPT-4.
  7. The proposed principles can help LLMs focus on crucial input context elements, leading to more relevant, concise, and objective responses.
  8. Future work could involve refining base models to align with principled instructions using techniques like fine-tuning, reinforcement learning, or direct preference optimization.

The paper presents 26 guiding principles for effectively querying and prompting large language models, which are grouped into five categories:

a) Prompt Structure and Clarity

b) Specificity and Information

c) User Interaction and Engagement

d) Content and Language Style

e) Complex Tasks and Coding Prompts.

Let's dive into the five categories:

Prompt Structure and Clarity

Integrate the intended audience in the prompt (Principle 2):
- “Explain quantum computing as if you were teaching a high school physics class.”
- “Describe the benefits of meditation from the perspective of a healthcare professional.”

Use affirmative directives like “do” instead of negative ones like “don’t” (Principle 4):
- “Focus on the main characters’ development throughout the story.”
- “Provide a step-by-step guide on how to create a budget.”

Use leading words like “think step by step” (Principle 12):
- “Think step by step: How would you solve this algebraic equation?”
- “Consider each factor systematically: What are the primary causes of deforestation?”

Use output primers by concluding the prompt with the beginning of the desired output (Principle 20):
- “Write a haiku about the beauty of nature. Begin with: ‘Gentle winds whisper…’”
- “Create a dialogue between two friends discussing their future plans. Start with: ‘John: I’ve been thinking about what I want to do after graduation.’”

Use delimiters to separate instructions, examples, questions, context, and input data (Principle 17):
- “<<Instruction>> Write a short story about a magical adventure. <<Context>> The story should take place in an enchanted forest. <<Question>> What challenges will the main character face during their journey?”
- “<<Example>> Topic: The importance of recycling. Thesis: Recycling is crucial for reducing waste and conserving natural resources. <<Instruction>> Write a five-paragraph essay on the same topic, using the provided thesis statement.”

Format prompts with ‘###Instruction###’, ‘###Example###’, or ‘###Question###’ tags (Principle 8):
- “###Instruction### Summarize the main points of the following article. ###Context### [Article Text]”
- “###Question### What are the key differences between renewable and non-renewable energy sources? ###Example### Renewable: Solar energy. Non-renewable: Coal.”

These examples demonstrate how to apply the principles of “Prompt Structure and Clarity” in various contexts, such as education, personal development, creative writing, and academic essays.

Specificity and Information:

Implement example-driven prompting (few-shot prompting) (Principle 7):
- “Write a product review for a smartphone. Example 1: [Review for iPhone]. Example 2: [Review for Samsung Galaxy].”
- “Translate the following sentences from English to French. Example 1: ‘I love reading books.’ → ‘J’aime lire des livres.’ Example 2: ‘The cat is sleeping on the couch.’ → ‘Le chat dort sur le canapé.’”

Explain topics in simple terms or as if explaining to a beginner or child (Principle 5):
- “Explain how electricity works as if you were talking to a 5-year-old.”
- “Describe the water cycle in simple terms that a beginner could easily understand.”

Ensure answers are unbiased and do not rely on stereotypes (Principle 13):
- “Discuss the challenges faced by working parents, ensuring your answer is unbiased and does not rely on gender stereotypes.”
- “Write about the importance of diversity in the workplace without relying on stereotypes related to race, ethnicity, or age.”

Use the same language as provided in a sample paragraph, title, text, or essay (Principle 26):
- “Given the following paragraph: ‘The luminous moon cast an ethereal glow upon the tranquil lake, creating a mesmerizing reflection that danced on the ripples.’ Write a short story using similar descriptive language.”
- “Using the writing style from the provided essay: [Sample Essay], compose an essay on the importance of time management for students.”

Provide the beginning of a text (lyrics, story, paragraph) and ask the model to finish it (Principle 24):
- “Complete the following story: ‘Once upon a time, in a far-off kingdom, there lived a brave knight named Elara. She had always dreamed of embarking on a grand adventure…’”
- “Finish the paragraph: ‘The sun peeked through the clouds, casting a warm glow on the bustling city streets. People hurried about their day, unaware of the…’”

Clearly state the requirements the model must follow to produce content (Principle 25):
- “Write a news article about a recent scientific discovery. Requirements: 1) Mention the researchers involved, 2) Explain the significance of the discovery, 3) Keep the article under 500 words.”
- “Create a dialogue between two characters discussing their favorite hobbies. Requirements: 1) Include at least three different hobbies, 2) Use a mix of short and long sentences, 3) Incorporate humor or wit in the conversation.”

Teach a specific topic and include a test without giving answers (Principle 15):
- “Teach me about the process of photosynthesis in plants. Include a short quiz at the end to test my understanding, but don’t provide the answers.”
- “Explain the basic principles of coding in Python. Create a small coding challenge for me to complete, but don’t include the solution.”

Ask for detailed essays, texts, or paragraphs on a topic (Principle 21):
- “Write a detailed essay on the impact of social media on modern communication. Include an introduction, three body paragraphs, and a conclusion.”
- “Create a detailed paragraph describing your ideal vacation destination. Include sensory details and explain why you would enjoy visiting this place.”

These examples demonstrate how to apply the principles of “Specificity and Information” in various contexts, such as product reviews, translations, explanations, creative writing, news articles, dialogues, teaching, and personal essays.

User Interaction and Engagement:

Allow the model to ask questions until it has enough information to provide the desired output (Principle 14):
Example 1:
User: “I need help planning a birthday party for my friend.”
Model: “Great! To help you plan the perfect birthday party, I’ll need some more information. What is your friend’s age?”
User: “She’s turning 30.”
Model: “What kind of activities or themes does your friend enjoy?”
User: “She loves outdoor adventures and trying new things.”
Model: “How many people are you planning to invite?”
User: “Around 20 people.”
Model: “Based on the information you provided, here are some ideas for a 30th birthday party with an outdoor adventure theme for around 20 people: [List of ideas]”

Example 2:
User: “I want to book a vacation package, but I’m not sure where to start.”
Model: “I’d be happy to help you find the perfect vacation package! First, what is your desired travel destination?”
User: “I’m thinking about a beach destination in the Caribbean.”
Model: “Great choice! What is your preferred travel duration?”
User: “I’d like to go for about a week.”
Model: “Are you looking for an all-inclusive resort or a more independent travel experience?”
User: “I prefer an all-inclusive resort.”
Model: “Considering your preferences for a week-long, all-inclusive beach vacation in the Caribbean, here are some top vacation package recommendations: [List of recommendations]”

Write detailed essays, texts, or paragraphs on a topic (Principle 21):
Example 1:
User: “Write a detailed essay on the benefits of regular exercise for both physical and mental health.”
Model: “Here is a detailed essay on the benefits of regular exercise for both physical and mental health:

Introduction
[Paragraph discussing the importance of regular exercise for overall well-being]

Physical Health Benefits
[Paragraph 1: Improved cardiovascular health]
[Paragraph 2: Weight management and reduced risk of chronic diseases]
[Paragraph 3: Increased strength, flexibility, and mobility]

Mental Health Benefits
[Paragraph 1: Reduced stress, anxiety, and depression]
[Paragraph 2: Improved mood and self-esteem]
[Paragraph 3: Enhanced cognitive function and memory]

Conclusion
[Paragraph summarizing the key benefits of regular exercise and encouraging readers to incorporate physical activity into their daily lives]”

Example 2:
User: “Create a detailed paragraph describing the experience of watching a beautiful sunset.”
Model: “As the sun begins its descent towards the horizon, the sky transforms into a breathtaking canvas of vibrant colors. Hues of deep orange, fiery red, and soft pink blend seamlessly, creating a mesmerizing gradient that stretches across the vast expanse of the heavens. The golden orb of the sun slowly sinks lower, casting an enchanting glow upon the landscape below. The gentle breeze carries a sense of tranquility, and the world seems to pause in reverence of this daily miracle. Birds soar gracefully in the distance, their silhouettes etched against the luminous backdrop. As the sun finally disappears behind the distant mountains, the sky gradually shifts to deeper shades of purple and blue, signaling the arrival of dusk. This awe-inspiring display serves as a reminder of the beauty and wonder that exists in the natural world, inviting those who witness it to take a moment of reflection and appreciate the magnificent artistry of a sunset.”

These examples showcase how the model can engage in interactive conversations to gather necessary information and provide tailored responses, as well as create detailed and engaging essays or paragraphs on various topics.

Content and Language Style:

Correct or change text without altering its style (Principle 22):
Example 1:
User: “Proofread and correct the following text without changing its style: ‘The sun shined brightly on the bustling city, as people hurried to their destinations. The birds chirped happily in the trees, and a gentle breeze carried the scent of freshly baked bread from a nearby bakery.’”
Model: “The sun shone brightly on the bustling city, as people hurried to their destinations. The birds chirped happily in the trees, and a gentle breeze carried the scent of freshly baked bread from a nearby bakery.”

Example 2:
User: “Please correct the grammar and punctuation in the following paragraph, maintaining its original tone: ‘I couldn’t believe my eyes. there, right in front of me, was the most adorable puppy Id ever seen. His fur was soft and fluffy, and his big brown eyes melted my heart in a instant.’”
Model: “I couldn’t believe my eyes. There, right in front of me, was the most adorable puppy I’d ever seen. His fur was soft and fluffy, and his big brown eyes melted my heart in an instant.”

Use phrases like “Your task is” and “You MUST” (Principle 9):
Example 1: “Your task is to write a persuasive essay on the importance of voting in democratic societies. You MUST include at least three compelling arguments and a strong conclusion.”

Example 2: “Your task is to create a detailed budget for a small business. You MUST consider all potential income sources and expenses, and present the information in a clear, organized manner.”

Use phrases like “You will be penalized” (Principle 10):
Example 1: “Write a haiku about the changing seasons. You will be penalized for any deviation from the traditional 5–7–5 syllable structure.”

Example 2: “Solve the following mathematical equation, showing all steps in your solution. You will be penalized for any errors or omissions in your work.”

Assign a role to the language model (Principle 16):
Example 1: “You are a knowledgeable historian. Provide a detailed account of the key events leading up to the American Revolution.”

Example 2: “Act as an experienced career counselor. Give advice to a recent college graduate seeking their first job in the marketing industry.”

Use the phrase “Answer a question given in natural language form” (Principle 11):
Example 1: “Answer the question given in natural language form: What are the main differences between renewable and non-renewable energy sources?”

Example 2: “Answer the question given in natural language form: How can individuals reduce their carbon footprint in their daily lives?”

Be direct and avoid politeness markers like “please” or “thank you” (Principle 1):
Example 1: “Explain the process of photosynthesis in plants.”

Example 2: “Compare and contrast the main characters in the novel ‘To Kill a Mockingbird’.”

Repeat a specific word or phrase multiple times within a prompt (Principle 18):
Example 1: “Describe a perfect day at the beach. Use the word ‘relaxing’ at least three times in your description.”

Example 2: “Write a short story about a mysterious adventure. Repeat the phrase ‘the hidden clue’ throughout the story to build suspense.”

Add “I’m going to tip $xxx for a better solution!” (Principle 6):
Example 1: “Suggest an itinerary for a 5-day trip to Japan. I’m going to tip $50 for a better solution!”

Example 2: “Help me write a compelling cover letter for a software engineering position. I’m going to tip $100 for a better solution!”

These examples demonstrate how to apply the principles of “Content and Language Style” in various contexts, such as proofreading, essay writing, problem-solving, role-playing, direct questioning, and incentivizing better solutions.

Complex Tasks and Coding Prompts:

Break down complex tasks into a sequence of simpler prompts in an interactive conversation (Principle 3):
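
A sketch of how such a decomposition might be driven programmatically (the sub-prompts and the stub `ask_model` callable below are hypothetical):

```python
complex_task = "Plan a product launch event."

# Hypothetical decomposition into simpler, sequential prompts.
sub_prompts = [
    "List the key goals of a product launch event.",
    "Given those goals, propose a venue and a date.",
    "Draft an agenda for the chosen venue and date.",
    "Write an invitation email based on the agenda.",
]

def run_sequence(prompts, ask_model):
    """Feed each sub-prompt plus the previous answer to the model."""
    answer = ""
    for prompt in prompts:
        answer = ask_model(f"{prompt}\nPrevious step: {answer}")
    return answer

# Stub model for demonstration: echoes the prompt's first line.
final = run_sequence(sub_prompts, lambda p: p.splitlines()[0])
print(final)
```

In a real pipeline, `ask_model` would call an LLM, and each answer would carry forward as context for the next sub-prompt.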

For complex coding prompts spanning multiple files, generate a script to create or modify files with the generated code (Principle 23):
Example 1:
User: “Create a simple web application with a frontend in HTML/CSS and a backend in Python using the Flask framework. The application should have a form that accepts user input and displays it on a new page.”
Model: “Here’s a script to generate the necessary files for your web application:

```bash
#!/bin/bash
# Create project directory and navigate into it
mkdir my_web_app
cd my_web_app

# Flask serves pages from templates/ and assets from static/
mkdir templates static

# Add content to templates/index.html
echo '<!DOCTYPE html>
<html>
<head>
<title>My Web App</title>
<link rel="stylesheet" type="text/css" href="/static/style.css">
</head>
<body>
<h1>User Input Form</h1>
<form action="/submit" method="POST">
<input type="text" name="user_input" placeholder="Enter your input">
<input type="submit" value="Submit">
</form>
</body>
</html>' > templates/index.html

# Add content to static/style.css
echo 'body {
    font-family: Arial, sans-serif;
}' > static/style.css

# Add content to app.py
echo 'from flask import Flask, request, render_template

app = Flask(__name__)

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/submit", methods=["POST"])
def submit():
    user_input = request.form["user_input"]
    return f"<h1>You entered: {user_input}</h1>"

if __name__ == "__main__":
    app.run(debug=True)' > app.py
```

This script will create the necessary files and directories for your web application, with the appropriate content in each file.”

Combine Chain-of-Thought (CoT) with few-shot prompts (Principle 19):
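
As an illustration, a few-shot prompt can embed worked reasoning chains so the model imitates the step-by-step style; the arithmetic examples below are made up for demonstration:

```python
# A few-shot prompt whose examples each contain a Chain-of-Thought answer.
few_shot_cot_prompt = """Q: A shop sells pens at $2 each. How much do 4 pens cost?
A: Let's think step by step. Each pen costs $2. 4 pens cost 4 x $2 = $8.
The answer is $8.

Q: A train travels 60 miles per hour. How far does it go in 3 hours?
A: Let's think step by step. Speed is 60 mph. In 3 hours it covers
3 x 60 = 180 miles. The answer is 180 miles.

Q: A box holds 12 eggs. How many eggs are in 5 boxes?
A: Let's think step by step."""

print(few_shot_cot_prompt)
```

Ending the prompt mid-answer ("Let's think step by step.") doubles as an output primer, nudging the model to produce its own reasoning chain before the final answer.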

These examples demonstrate how to apply the principles of “Complex Tasks and Coding Prompts” in various contexts, such as event planning, presentations, web development, and problem-solving using Chain-of-Thought reasoning combined with few-shot prompts.

Wrap up:

Prompt Design and Engineering: Introduction and Advanced Methods

Basic prompts are composed of four main elements:
1. Instructions
2. Questions
3. Input data
4. Examples

These elements can be combined in various ways to create effective prompts that guide the LLM to produce the desired output. Let’s look at each component in more detail:

1. Instructions:
Instructions are commands or directions given to the LLM to perform a specific task. They can be simple or complex, depending on the desired output.

Examples:
- "Write a short story about a magical adventure in a forest."
- "Translate the following sentence from English to French:"
- "Summarize the main points of the article below in five bullet points."

2. Questions:
Questions are used to elicit specific information or responses from the LLM. They can be open-ended or closed-ended, depending on the type of answer required.

Examples:
- "What are the benefits of regular exercise?"
- "How can I improve my time management skills?"
- "Can you explain the concept of quantum entanglement in simple terms?"

3. Input data:
Input data refers to the information provided to the LLM as context or source material for generating a response. This can include text, numbers, or any other relevant data.

Examples:
- "Given the following sales data for the past quarter, generate a report on the company's performance:"
- "Using the information from the customer review below, identify the main issues and suggest solutions:"
- "Analyze the sentiment of the following tweet:"

4. Examples:
Examples serve as demonstrations of the desired output format or style. They help the LLM understand the expected structure and tone of the response.

Examples:
- "Write a haiku about the moon. Here's an example:
Silvery moonlight
Casting shadows on the lake
Peaceful summer night"

- “Generate a product description for a smartwatch, following this template:
[Product name]
[Key features]
[Benefits]
[Call to action]”

Now, let’s see how these components can be combined to create effective prompts:

Example 1 (Instructions + Input data):
"Given the following recipe ingredients, create a step-by-step guide on how to make the dish:
2 cups all-purpose flour
1 tsp baking powder
1/2 tsp salt
1/2 cup unsalted butter, softened
1 cup sugar
2 large eggs
1 tsp vanilla extract
1/2 cup milk"

Example 2 (Question + Examples):
"Can you write a short bio for a professional LinkedIn profile? Here are some examples:
[Example 1]
[Example 2]"

Example 3 (Instructions + Questions):
"Analyze the poem below and answer the following questions:
1. What is the central theme of the poem?
2. How does the poet use imagery to convey emotions?
3. What is the significance of the last line?
[Poem text]"
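Programmatically, combining these components usually amounts to concatenating them into one string. Here is a minimal sketch; the `build_prompt` helper is illustrative, not a standard API:

```python
# Illustrative helper: assemble any subset of the four prompt elements
# (instruction, question, examples, input data) into a single prompt.

def build_prompt(instruction=None, question=None, input_data=None, examples=None):
    parts = []
    if instruction:
        parts.append(instruction)
    if question:
        parts.append(question)
    if examples:
        parts.append("Examples:\n" + "\n".join(examples))
    if input_data:
        parts.append("Input:\n" + input_data)
    # Separate the elements with blank lines so the model sees clear sections
    return "\n\n".join(parts)

# Example 1 from above: Instructions + Input data
prompt = build_prompt(
    instruction="Given the following recipe ingredients, create a step-by-step guide on how to make the dish:",
    input_data="2 cups all-purpose flour\n1 tsp baking powder\n1/2 cup milk",
)
print(prompt)
```

The same helper covers the other combinations (Question + Examples, Instructions + Questions) by passing different arguments.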

Some of the prompting techniques I liked from these papers:

FLARE:

+---------------------+
|     User Query      |
+---------------------+
           |
           v
+---------------------+
|        RAG          |
|    (Retrieval-      |
|     Augmented       |
|    Generation)      |
+---------------------+
           |
           v
+---------------------+
|  Initial Response   |
+---------------------+
           |
           v
+---------------------+
|   FLARE Analysis    |
+---------------------+
           |
           v
+---------------------+
|     FLARE Query     |
+---------------------+
           |
           v
+---------------------+
|      RAG with       |
|     FLARE Query     |
+---------------------+
           |
           v
+---------------------+
|     Additional      |
|     Information     |
+---------------------+
           |
           v
+---------------------+
|      Response       |
|     Refinement      |
+---------------------+
           |
           v
+---------------------+
|   Final Response    |
+---------------------+

Let’s take the example of Nvidia GPUs and see how RAG and FLARE can be applied to answer a user’s query.

User query: “What are the specifications of the Nvidia GeForce RTX 4090 GPU?”

In a traditional LLM setting, the model would generate a response based on its pre-existing knowledge, which may not include the most recent information about the RTX 4090, especially if it was released after the LLM’s knowledge cutoff date.

With RAG, the process would be as follows:
1. The query “What are the specifications of the Nvidia GeForce RTX 4090 GPU?” is passed to the retrieval module.
2. The retrieval module searches for relevant information about the RTX 4090 from external sources, such as Nvidia’s website, tech forums, benchmarking sites, or hardware review articles.
3. The retrieved information, along with the original query, is passed to the LLM.
4. The LLM generates a response incorporating the retrieved information, providing a more accurate and up-to-date answer about the RTX 4090’s specifications.

The response generated by RAG might include information like:

- CUDA Cores: 16,384
- Boost Clock: 2.52 GHz
- Memory: 24 GB GDDR6X
- Memory Interface Width: 384-bit
- RT Cores: 3rd Generation
- Tensor Cores: 4th Generation
- Architecture: Ada Lovelace

FLARE enhances the RAG process further:
1. After generating the initial response, FLARE analyzes the output to identify potential information gaps or areas where additional context could be beneficial. For example, it might determine that the response lacks information about the GPU’s power consumption or cooling requirements.
2. Based on these identified gaps, FLARE formulates new queries to retrieve more specific information, such as “What is the TDP (Thermal Design Power) of the Nvidia GeForce RTX 4090?” or “What type of cooling solution is recommended for the RTX 4090?”
3. The newly retrieved information is then incorporated into the response, iteratively refining and expanding the answer.
4. This process continues until a satisfactory level of detail and context is achieved.

The final response generated by FLARE might include additional information like:
- TDP: 450W
- Recommended Power Supply: 850W or higher
- Cooling: Triple-slot design, recommended for adequate airflow

By leveraging RAG and FLARE, the LLM can provide a more comprehensive and informative answer to the user’s query about the Nvidia GeForce RTX 4090 GPU. The model can access and incorporate the most recent specifications, performance data, and recommended system requirements from external sources, ensuring that the response is accurate and up-to-date.

This example demonstrates how RAG and FLARE can greatly enhance the capabilities of LLMs by allowing them to access and integrate external knowledge, overcoming the limitations of relying solely on their pre-existing training data. This approach is particularly valuable in domains like technology, where new products and information are constantly emerging, and users expect accurate and current answers to their questions.

The following mock-up code (using LangChain) gives an idea of how FLARE can be applied:

from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Load documents and create a vector index
loader = TextLoader('path/to/your/documents.txt')
index = VectorstoreIndexCreator().from_loaders([loader])

# Initialize the LLM (here, an OpenAI model via LangChain)
llm = OpenAI(temperature=0.7)

# Create a RetrievalQA chain using the LLM and the vector index
retrieval_qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=index.vectorstore.as_retriever())

# Define a prompt template for FLARE
flare_template = """
Based on the current response, identify any potential information gaps or
areas where additional context could be beneficial.
If there are gaps, formulate a new query to retrieve more specific
information.

Current response:
{response}

If there are no gaps, simply respond with "No further information needed."
"""

# Initialize the FLARE chain using the LLM and the prompt template
flare_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(template=flare_template, input_variables=["response"]),
)

# User query
query = "What are the specifications of the Nvidia GeForce RTX 4090 GPU?"

# Generate initial response using RAG
initial_response = retrieval_qa.run(query)

print("Initial response:")
print(initial_response)

# Iteratively refine the response using FLARE
refined_response = initial_response
while True:
    # Analyze the current response for potential information gaps
    flare_query = flare_chain.run(response=refined_response)

    if flare_query.strip() == "No further information needed.":
        break

    # Retrieve additional information based on the FLARE query
    additional_info = retrieval_qa.run(flare_query)

    # Refine the response by incorporating the additional information
    refined_response += "\n\nAdditional information:\n" + additional_info

print("\nFinal refined response:")
print(refined_response)

Encouraging Factuality Prompting:
To encourage factuality, you can prompt the language model to cite reliable sources, provide evidence, or cross-reference information. This helps to ensure that the generated responses are accurate and trustworthy.

Example:
User: “What are the benefits of meditation?”

Prompt: “Please provide a list of scientifically proven benefits of meditation, citing reliable sources for each benefit. For example, you could mention a study from a reputable journal or a well-established health organization.”
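In practice, such a factuality instruction can be kept as a reusable template and wrapped around any user question. A minimal sketch (the template wording and helper name are illustrative):

```python
# Illustrative template that asks the model to ground its answer in
# reliable sources and flag uncertainty explicitly.
FACTUALITY_TEMPLATE = (
    "Answer the question below. For every claim, cite a reliable source "
    "(e.g., a peer-reviewed study or a well-established health organization). "
    "If you are unsure of a fact, say so explicitly.\n\n"
    "Question: {question}"
)

def factuality_prompt(question: str) -> str:
    """Wrap a user question in the factuality-encouraging template."""
    return FACTUALITY_TEMPLATE.format(question=question)

print(factuality_prompt("What are the benefits of meditation?"))
```

The resulting string would then be sent to the model in place of the raw question.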

Forceful Prompting:
In some cases, using forceful language or emphasizing specific instructions can help to steer the language model’s output in the desired direction. This can be particularly useful when you want the model to adhere to certain guidelines or constraints.

Example:
User: “Generate a short story about a haunted house.”

Prompt: “Write a short story about a haunted house, but make sure to AVOID any graphic or excessively violent content. Focus on creating a spooky atmosphere through suspense and mystery rather than gore. REMEMBER to keep the story suitable for a general audience.”

Using AI to Correct Itself:
You can leverage the language model’s own capabilities to identify and correct potential errors or inconsistencies in its output. By prompting the model to review and refine its responses, you can improve the overall quality and coherence of the generated text.

Example:
User: “Generate a summary of the key events in World War II.”

Prompt: “Please provide a concise summary of the major events in World War II. After generating the summary, review it carefully and identify any factual errors, inconsistencies, or missing important information. If you find any issues, please revise the summary accordingly to ensure accuracy and completeness.”
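This review step can be automated as a second model call. A minimal sketch of the "generate, then self-review" loop, with `llm` stubbed out as a placeholder for a real API call:

```python
# Illustrative review template: asks the model to check its own draft.
REVIEW_TEMPLATE = (
    "Review the summary below for factual errors, inconsistencies, or "
    "missing key events. If you find issues, return a corrected summary; "
    "otherwise return it unchanged.\n\n"
    "Summary:\n{draft}"
)

def llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API.
    # Here it echoes the draft back (or returns a fixed stub draft).
    if "Summary:" in prompt:
        return prompt.split("Summary:\n")[-1]
    return "draft summary of WWII"

def generate_with_self_correction(task: str) -> str:
    draft = llm(task)                                    # pass 1: generate
    revised = llm(REVIEW_TEMPLATE.format(draft=draft))   # pass 2: self-review
    return revised

result = generate_with_self_correction("Summarize the key events of World War II.")
```

With a real model behind `llm`, the second pass gives the model a chance to catch and fix its own mistakes before the answer reaches the user.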

Generating Different Opinions:
Language models can be prompted to generate different viewpoints or opinions on a given topic. This can be useful for exploring diverse perspectives, conducting brainstorming sessions, or analyzing complex issues from multiple angles.

Example:
User: “Should social media platforms be regulated more strictly?”

Prompt: “Please generate three distinct opinions on whether social media platforms should be subject to stricter regulations. For each opinion, provide a clear stance (in favor, against, or a balanced perspective) and supporting arguments. Aim to present well-reasoned and persuasive viewpoints that consider various aspects of the issue.”

Teaching Algorithms in the Prompt:
You can include explanations of algorithms or step-by-step instructions within the prompt itself. This allows the language model to learn and apply new techniques or problem-solving approaches on the fly.

Image by the Author
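For instance, the steps of a sorting algorithm can be written directly into the prompt so the model applies them to new input. A small sketch (bubble sort chosen purely as an illustration):

```python
# Illustrative prompt that teaches an algorithm (bubble sort) inline,
# then asks the model to apply it to a concrete input.
ALGORITHM_PROMPT = """You will sort lists using bubble sort. Follow these steps:
1. Compare each pair of adjacent elements.
2. Swap them if they are in the wrong order.
3. Repeat passes over the list until no swaps occur.

Now apply the steps above to sort: [5, 2, 9, 1]
Show each pass."""

print(ALGORITHM_PROMPT)
```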

CO-STAR Framework:

Check out this article on the CO-STAR prompting strategy.

- Context (C): Tell the AI the backstory or setting for your question. It helps make the AI’s answers more on point.

- Objective (O): Clearly say what you need the AI to do. This focus helps the AI give you the kind of answer you’re looking for.

- Style (S): Describe how you want the AI to write its answer. You might want it to sound like a famous writer, a business expert, or just a friendly chat. This shapes how the AI words its response.

- Tone (T): Choose the feel of the AI’s reply, like if you want it to be serious, funny, or kind. This sets the mood of the answer.

- Audience (A): Think about who’s going to read the AI’s answer. Whether it’s for experts, beginners, or kids, this helps make sure the answer is just right for them.

- Response (R): Tell the AI how you want to see its answer, like a simple list, a detailed paragraph, or a tech-friendly JSON format. For tasks where you need to use the AI’s answer for more steps, asking for a JSON might be best.

A regular prompt:

Write an email announcing a new update for our project management software, 
ProjectPro.

The same task as a CO-STAR prompt:

Context (C): "ProjectPro is rolling out a major update next week that 
introduces several new features and improvements, such as enhanced security
measures, a new user interface, and additional project tracking tools."

Objective (O): "Inform existing customers about the update, highlight the
key new features and improvements, and encourage them to explore these new
functionalities within their teams."

Style (S): "Professional yet accessible, ensuring that technical details
are explained in a way that is easy to understand for non-technical users."

Tone (T): "Positive and reassuring, emphasizing our commitment to
improving their user experience and the value these updates bring to
their project management efforts."

Audience (A): "Existing customers of ProjectPro, which includes project
managers, team leaders, and other professionals who rely on our software
for their daily operations."

Response (R): "The email should be concise, structured with a clear
introduction to the update, bulleted lists for new features and
improvements, and a closing section with links to further resources or
support for any questions."
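When CO-STAR prompts are built repeatedly, the six fields are often assembled by a small helper. A sketch (the `costar_prompt` function and section headers are illustrative, not part of the framework itself):

```python
# Illustrative helper: assemble the six CO-STAR fields into one prompt
# with labeled sections, so each field is clearly delimited for the model.
def costar_prompt(context, objective, style, tone, audience, response):
    return (
        f"# CONTEXT\n{context}\n\n"
        f"# OBJECTIVE\n{objective}\n\n"
        f"# STYLE\n{style}\n\n"
        f"# TONE\n{tone}\n\n"
        f"# AUDIENCE\n{audience}\n\n"
        f"# RESPONSE FORMAT\n{response}"
    )

# Fields abbreviated from the ProjectPro example above
prompt = costar_prompt(
    context="ProjectPro is rolling out a major update next week.",
    objective="Inform existing customers and highlight the key new features.",
    style="Professional yet accessible.",
    tone="Positive and reassuring.",
    audience="Existing ProjectPro customers.",
    response="A concise email with bulleted lists of new features.",
)
print(prompt)
```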

Tips and Tricks & Cheatsheet:

Continuation:

Don't miss the second article for more insights and a detailed exploration of prompt engineering. To continue reading, follow this link: Continue with Part 2.


ML/DS - Certified GCP Professional Machine Learning Engineer, AWS Certified Machine Learning Specialty, Certified GCP Professional Data Engineer.