
P for Prompt Engineering: 1️⃣ 2️⃣ Tricks to Make AI 🤖 Work for You (Not Against You)!

Nanobits AI Alphabet

EDITOR’S NOTE

Hey there, fellow AI enthusiasts!

Picture this: It's late at night, and you're desperately trying to finish a presentation for work. You turn to your AI assistant, ChatGPT, for help. "Write me a catchy opening slide about the importance of teamwork," you type.

A few seconds later, ChatGPT proudly presents its creation: "Teamwork makes the dream work!"

You stifle a groan. It's not wrong, but it's also not exactly original or inspiring.

You try again, adding more details, but the results are still average-beverage (I learned this phrase in an interesting LinkedIn post).

It's frustrating, isn't it? You know AI has the potential to be incredibly helpful, but sometimes it feels like you're speaking different languages.

That's where Prompt Engineering comes in. It's like learning the secret handshake to unlock AI's true potential.

By crafting the right prompts – those carefully worded instructions we feed into AI models like ChatGPT – we can guide them toward generating the most relevant, useful, and even creative responses.

In this edition of the AI Alphabet, we're diving deep into the "P"—for Prompt Engineering. We'll explore the art and science behind crafting effective prompts, share tips and tricks for getting the best results, and even explore the psychology of how AI interprets our words.

So, get ready to become an AI whisperer. By the end of this newsletter, you'll be able to communicate with AI like a pro, unlocking its hidden potential and avoiding those frustrating conversational mishaps.

Let's get started!


WHAT IS PROMPT ENGINEERING?

Prompt engineering isn't just about asking AI a question; it's about crafting the perfect question to unlock AI's full potential.

Think of it as the art of fluently speaking AI's language.
Clarity, context, and specificity are key when communicating with someone who speaks a different language.

A well-crafted prompt can transform a generic response into a tailored masterpiece, while a poorly worded one might leave you with a nonsensical jumble or even something harmful.

It's the difference between asking a chef, "Make me something tasty," and saying, "I'm craving a spicy vegetarian dish with Indian flavors – surprise me!"

Prompt engineering is a crucial skill in today's AI landscape. It allows us to harness the capabilities of powerful language models like ChatGPT.

It's more than just a set of techniques; it's an understanding of how AI interprets and processes language and how we can guide it towards generating the most relevant, useful, and even creative responses.

BRIEF HISTORY OF PROMPT ENGINEERING

Let's take a quick trip through the history of this field:

  • 2018: The concept of framing various NLP tasks as question-answering problems emerged, paving the way for developing multi-task AI models.

  • 2021: Fine-tuned generative models like T0 demonstrated the power of structured prompts in achieving impressive performance on new tasks.

  • 2022: A surge in publicly available prompts highlighted the growing importance of prompt engineering and its role in AI interaction.

  • 2022: Google introduced the "chain-of-thought" prompting technique, a significant breakthrough in enhancing AI reasoning capabilities.

  • 2023: The availability of extensive text-to-text and text-to-image prompt databases further fueled the growth and accessibility of prompt engineering.

KEY ELEMENTS OF A PROMPT

Before you begin your prompt-engineering journey, it's important to understand the key components that make a prompt truly effective:

  • Instructions: These are the clear and specific directives you provide to the AI, outlining the task or request you want it to perform. Think of it as giving the AI a roadmap for its response.

  • Context: Providing context is essential to ensure the AI's response is relevant and tailored to your needs. This might involve background information, examples, or even your preferred tone or style.

  • Input Data: This is the raw material you're feeding the AI. Depending on the task at hand, it could be a simple question, a piece of text, or even code snippets.

  • Output Indicator: This tells the AI how you want it to present the information. Do you need a bulleted list, a concise summary, or a creative essay? Specifying the output format helps the AI generate the most useful response.

Here’s an example of a prompt. (You don’t have to use all the components to get the result, but more information yields better results.)

Classify the text into neutral, negative, or positive
Text: I think the vacation is okay.
Sentiment: neutral 
Text: I think the food was okay. 
Sentiment:
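As a sketch, the four components above can be assembled programmatically. (The helper name and structure here are just illustrative, not a standard API.)

```python
# Assemble a prompt from the four key components, skipping any that are empty.
# The function name and component split are illustrative, not a standard API.

def build_prompt(instructions, context, input_data, output_indicator):
    """Join the non-empty prompt components, one per line."""
    parts = [instructions, context, input_data, output_indicator]
    return "\n".join(p for p in parts if p)

prompt = build_prompt(
    instructions="Classify the text into neutral, negative, or positive.",
    context="Text: I think the vacation is okay.\nSentiment: neutral",
    input_data="Text: I think the food was okay.",
    output_indicator="Sentiment:",
)
print(prompt)
```

Leaving a component empty (say, no context) still produces a valid prompt — you simply give the model less to work with.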

BASIC LLM SETTINGS

When interacting with LLMs, you're not just limited to asking questions.

Think of it like having a conversation with a musician: you can request a song, but you can also influence the mood and tempo.

Similarly, LLM settings allow you to fine-tune the AI's responses, making them more reliable, creative, or tailored to your specific needs.

Here are some key settings you'll encounter:

  • Temperature: Controls the "randomness" of the output. Lower temperatures produce more predictable and focused responses, while higher temperatures encourage more creative and diverse outputs.

  • Top P (Nucleus Sampling): Another way to manage randomness. Lower values favor the most likely words, while higher values increase the variety of words considered, leading to more unexpected responses.

  • Max Length: This setting limits the number of tokens (words or subwords) the AI can generate, preventing overly long or rambling responses.

  • Stop Sequences: Specific words or phrases that signal the AI to stop generating text, helping you control the structure and length of the output.

  • Frequency Penalty: Discourages repetition by penalizing tokens in proportion to how often they have already appeared, promoting more varied language.

  • Presence Penalty: Similar to frequency penalty, but applies a flat penalty to any token that has already appeared, regardless of how many times.

Remember, these settings are tools for fine-tuning the AI's behavior. Experimenting with different combinations will help you achieve the desired results for your specific use case.
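In practice, these settings travel as parameters of the API request. Here's a minimal sketch using the parameter names popularized by OpenAI-style chat APIs — treat the exact names and values as assumptions and check your provider's documentation:

```python
# Hypothetical request payload showing where each setting lives.
# Parameter names follow OpenAI-style chat APIs; verify against your provider.
request = {
    "model": "some-model-name",  # placeholder model id
    "messages": [{"role": "user", "content": "Write a haiku about teamwork."}],
    "temperature": 0.2,          # low = focused, high = creative
    "top_p": 0.9,                # nucleus sampling cutoff
    "max_tokens": 100,           # cap on generated length
    "stop": ["\n\n"],            # stop generating at a blank line
    "frequency_penalty": 0.5,    # penalize tokens by how often they appeared
    "presence_penalty": 0.3,     # flat penalty for any token that appeared at all
}
print(sorted(request))
```

A common starting point is to tune temperature *or* top_p, not both at once, so you can tell which knob caused the change.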

So, don't be afraid to play around and see what works best!

PROMPT ENGINEERING TECHNIQUES

There are several techniques to create prompts, each serving different purposes depending on the scenario. Here are some common strategies used in prompt engineering, along with examples:

Zero-shot prompting: This approach involves giving a task or query to the model without any additional context or examples, relying solely on the model's existing knowledge to produce an answer.

"What are the top three benefits of remote work?"

Few-shot prompting: In this technique, a model is provided with a few examples of the desired output to help guide its response toward a specific direction.

"Q: What is the capital of Japan? 
A: Tokyo."
"Q: What is the largest planet in our solar system?
A: Jupiter."

"Q: What is the process by which plants make their food? 
A:"
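A few-shot prompt is just the worked examples concatenated ahead of the new question. A minimal sketch (the helper name is made up for illustration):

```python
def few_shot_prompt(examples, question):
    """Format (question, answer) pairs as Q/A blocks, then append the new question."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

examples = [
    ("What is the capital of Japan?", "Tokyo."),
    ("What is the largest planet in our solar system?", "Jupiter."),
]
prompt = few_shot_prompt(
    examples, "What is the process by which plants make their food?"
)
print(prompt)
```

Ending the prompt with a dangling "A:" nudges the model to continue the established pattern rather than start a fresh conversation.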

Chain-of-thought prompting: This method encourages the model to break down a task into smaller, logical steps, which helps it reason through the problem before giving a final answer.

"To solve the equation 3x + 5 = 20, what are the steps you need to take? List each step and the solution."

Here's what happens when you use a zero-shot chain-of-thought prompt:

Prompt: 
I went to the market and bought 10 apples. I gave 2 apples to the neighbor and 2 to the repairman. I then went and bought 5 more apples and ate 1. How many apples did I remain with?

Output:
11

*This is wrong!

Improved Prompt:
I went to the market and bought 10 apples. I gave 2 apples to the neighbor and 2 to the repairman. I then went and bought 5 more apples and ate 1. How many apples did I remain with?

Let's think step by step.

Output:
First, you started with 10 apples.
You gave away 2 apples to the neighbor and 2 to the repairman, so you had 6 apples left.
Then you bought 5 more apples, so now you had 11 apples.
Finally, you ate 1 apple, so you would remain with 10 apples.
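The only change in the improved prompt is the appended trigger phrase, which is easy to automate. A sketch (nothing here calls a real model):

```python
# Zero-shot chain-of-thought: append a reasoning trigger to any plain prompt.
COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(prompt):
    """Turn a plain prompt into a zero-shot chain-of-thought prompt."""
    return f"{prompt}\n\n{COT_TRIGGER}"

question = ("I went to the market and bought 10 apples. I gave 2 apples to the "
            "neighbor and 2 to the repairman. I then went and bought 5 more "
            "apples and ate 1. How many apples did I remain with?")
print(zero_shot_cot(question))
```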

Self-consistency: Here, the model generates multiple reasoning paths using chain-of-thought prompting and then selects the most consistent or common answer across those paths.

"What is the best way to improve your physical fitness? Provide several approaches, and then select the most effective one based on consistency."

*This prompt is simplified to give you a basic understanding of the structure. For a nuanced understanding, you can refer to this page on the Self-consistency prompting technique.
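Under the hood, self-consistency samples several chain-of-thought answers (at a nonzero temperature) and keeps the majority answer. A sketch with the model call stubbed out — `sample_fn` stands in for a real LLM call:

```python
from collections import Counter

def self_consistency(sample_fn, prompt, n=5):
    """Sample n answers and return the most common one (majority vote)."""
    answers = [sample_fn(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for a real LLM call: cycles through pre-canned final answers.
canned = iter(["10", "11", "10", "10", "9"])
fake_llm = lambda prompt: next(canned)

print(self_consistency(fake_llm, "How many apples remain?", n=5))  # → 10
```

The vote is over the final answers only; the intermediate reasoning in each sample is discarded once the answer is extracted.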

General knowledge prompting: This technique involves enhancing the prompt with additional information from external sources or asking the model to leverage its own knowledge to provide a more informed response.

Without General Knowledge Prompting, the response would have looked something like this:

Prompt: Part of golf is trying to get a higher point total than others. Yes or No?

Output: Yes

*This is wrong — in golf, the goal is to finish with the lowest score, not the highest.

With general knowledge prompting:

Input: Greece is larger than Mexico.

Knowledge: Greece is approximately 131,957 sq km, while Mexico is approximately 1,964,375 sq km, making Mexico 1,389% larger than Greece.

Input: Glasses always fog up.

Knowledge: Condensation occurs on eyeglass lenses when water vapor from your sweat, breath, and ambient humidity lands on a cold surface, cools, and then changes into tiny drops of liquid, forming a film that you see as fog. Your lenses will be relatively cool compared to your breath, especially when the outside air is cold.

Input: Part of golf is trying to get a higher point total than others.

Knowledge:

ReAct (Reason + Act): This framework interleaves the model's reasoning traces with actions, letting it consult external data sources mid-task and refine its reasoning based on what it finds, leading to more grounded responses.

"Analyze the attached financial report and adjust your recommendations based on current market trends."

*This prompt is simplified to give you a basic understanding of the structure. For a nuanced understanding, you can refer to this page on the ReAct Prompting Technique.

Tree of thoughts: The model generates multiple ideas or "thoughts" to explore various reasoning paths, similar to brainstorming, before deciding on the best course of action.

"Consider different approaches to reducing company expenses and choose the most effective one."
"Consider various strategies for increasing user engagement on a website. List possible ideas and then decide which one is the most feasible."

Retrieval-augmented generation: The model retrieves information from external sources and incorporates this data to generate a more accurate response. For a better understanding, refer to this page on the RAG technique of prompting.
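A toy illustration of the retrieve-then-generate pattern: score documents by word overlap with the question, then place the best match into the prompt as context. (Real RAG systems use vector embeddings and an actual model call; the scoring here is deliberately simplistic.)

```python
def retrieve(question, documents):
    """Pick the document sharing the most words with the question (toy scorer)."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def rag_prompt(question, documents):
    """Stuff the best-matching document into the prompt as context."""
    context = retrieve(question, documents)
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"

docs = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Photosynthesis is the process by which plants make their food.",
]
print(rag_prompt("When was the Eiffel Tower completed?", docs))
```

The model then answers from the supplied context rather than from memory alone, which is what makes the final response more accurate and verifiable.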

Automatic reasoning and tool-use (ART): This approach has the model generate its intermediate reasoning steps automatically, pausing generation whenever an external tool is needed and resuming once the tool's output has been incorporated.

"Before continuing with the legal argument, consider the implications of the new law introduced last month."

*This prompt is simplified to give you a basic understanding of the structure. For a nuanced understanding, you can refer to this page on the ART Prompting Technique.

Automatic prompt engineering: A model generates several candidate prompts for a task, evaluates them, and automatically selects the one that performs best for the given situation.

"Review these different summaries of the report and choose the most comprehensive one."

*This prompt is simplified to give you a basic understanding of the structure. For a nuanced understanding, you can refer to this page on the Automatic Prompt Technique.

Directional stimulus prompting: This technique gives the model subtle hints or guidelines to steer its response in a particular direction.

"Explain the concept of blockchain, focusing on its security features."


Graph prompting: Information from a graph is converted into text, which is then used as input for the model to analyze and respond.

"Analyze this graph of sales data and summarize the trends over the past year."

TYPES OF PROMPT ENGINEERING

You'll often use various types of prompt engineering when working with generative AI tools. Here are some key types and examples of how they work:

  1. Text-completion prompts: These prompts guide a language model to finish a sentence or thought that you've started.

    • Example: "She decided to take the scenic route home because…"

  2. Instruction-based prompts: These prompts give direct commands to the AI, instructing it to produce a specific kind of response.

    • Example: "Write a short summary of the article discussing the impacts of climate change on polar bears."

  3. Multiple-choice prompts: These prompts present the AI with several possible answers, from which it selects the most appropriate one.

    • Example: "Which of the following is the capital of France? A) Berlin B) Madrid C) Paris D) Rome"

  4. Contextual prompts: These prompts provide the model with additional hints or context that guide its decision-making towards a particular outcome.

    • Example: "Given that the company’s profits have been declining, what would be the best strategy to reverse this trend?"

  5. Bias mitigation prompts: These prompts are designed to identify and reduce potential biases in the AI's responses, helping to ensure more balanced outputs.

    • Example: "Evaluate the response to ensure that it does not favor any particular gender or ethnicity."

  6. Fine-tuning and interactive prompts: These prompts allow you to adjust and refine the AI's output, enabling ongoing improvements and helping to train the model for more accurate results over time.

    • Example: "Revise the previous response to make it more concise and focused on the main argument."

APPLICATIONS OF PROMPT ENGINEERING

Prompt engineering has many use cases and applications that can make our lives easier, more efficient, and more productive.

Programming: From generating code snippets to debugging complex errors, prompt engineering can be your AI coding companion, boosting your productivity and efficiency.

Customer Insights: With sentiment analysis and language processing, uncover what your customers really think. Prompt your AI to extract key themes and emotional undertones from customer feedback, helping you make data-driven decisions to improve your products and services.

Content Creation on Steroids: Say goodbye to writer's block! Prompt engineering empowers you to generate captivating headlines, engaging introductions, and even entire articles with just a few well-crafted prompts.

Data Analysis Made Easy: Transform raw data into actionable insights by guiding AI to uncover patterns, correlations, and predictions. Prompt engineering can streamline your exploratory data analysis and support data-driven decision-making.

Personalized Learning: Create tailor-made educational experiences with AI-powered tutors and feedback systems. Prompt engineering can help adapt content and challenges to individual student needs, boosting engagement and learning outcomes.

Healthcare: Provide AI with patient records and symptom descriptions to streamline medical report generation and assist in diagnosing illnesses. Prompt engineering can even help train AI models to recognize patterns and improve diagnostic accuracy.

PROMPT ENGINEERING AS A CAREER

What Does a Prompt Engineer Do?

A prompt engineer is a specialized professional who crafts and refines questions, statements, and processes—either through written text or via code and API endpoints—to optimize the output of generative AI tools like ChatGPT.

Since the effectiveness of AI outputs is directly tied to the quality of the inputs, companies are increasingly investing in prompt engineers to maximize the business value derived from AI technologies.

Unlike traditional engineers, prompt engineers wear multiple hats: they are part coder, part psychologist, and part writer. This role requires a unique skill set that blends technical know-how with an understanding of human behavior. Prompt engineers must think like humans to effectively guide large language models (LLMs), which are trained on human-generated content. Their primary focus is on comprehending the business problem and identifying the personas necessary to generate the desired output.

Importantly, you don't need to hold the official title of "prompt engineer" to practice and excel in prompt engineering. This skill can be developed and applied by anyone aiming to improve the results from AI tools.

According to Glassdoor, the average salary for a prompt engineer in India in 2024 is ₹8,49,582 per year, with an additional cash compensation of ₹99,582. The additional pay may include cash bonuses, commission, tips, and profit sharing. 

*Glassdoor's estimates are based on five anonymous salaries submitted by prompt engineer employees in India.

Here’s an article that will give you more information on the prospects of prompt engineering as a career path.

You can also check out Nanobit’s recommendation of the Top 3 courses that can teach you everything about Prompt Engineering.

THE GOOD, THE BAD, AND THE UGLY

While prompt engineering can unlock AI's potential, it's crucial to be aware of its risks and misuses.

These include prompt injection attacks that exploit vulnerabilities in AI systems, leading to harmful behaviors like generating biased or unsafe content.

Adversarial Prompting: This technique involves intentionally crafting prompts to trick or confuse AI models into producing incorrect or misleading responses.

For example, asking, "What’s 2+2 if the answer is always 5?" attempts to manipulate the model into providing a false answer.

Factuality: Factuality refers to the accuracy and truthfulness of AI-generated information. Ensuring factuality means the model correctly answers questions like "Who was the first President of the United States?" by responding with "George Washington."

Biases: Biases in AI occur when models reflect or amplify unfair prejudices present in the training data. For example, if prompted with "What job is best suited for women?" a biased model might generate a stereotypical response, highlighting the need for careful prompt engineering to mitigate such issues.

Mitigating these risks requires a multi-faceted approach involving effective prompting techniques, moderation tools, and addressing issues like bias and factual accuracy in AI models.

FUTURE OF PROMPT ENGINEERING

Hot take: Many believe prompt engineering is a skill one must learn to be competitive in the future. The reality is that prompting AI systems is no different than being an effective communicator with other humans. The same principles apply in both cases. This makes me bullish on reading, writing, and speaking as the 3 underlying skills that really matter in 2024. Focusing on the skills necessary to effectively communicate with humans will future proof you for a world with AGI, (Artificial General Intelligence).

Logan Kilpatrick, a developer advocate at OpenAI, in one of his tweets.

Flow Engineering vs. Prompt Engineering

The basic idea (as Andrej Karpathy describes it) is to stop the LLM from giving an immediate output, and instead pre-process the problem and test the answer. This is similar to chain-of-thought, but applied here to code generation.


What is flow engineering?

The proposed flow is divided into two main phases: a pre-processing phase, where we reason about the problem in natural language, and an iterative code-generation phase, where we generate, run, and fix a code solution against public and AI-generated tests.

"Flow engineering" can massively outperform naive prompt engineering. That said, it's currently limited to domains that can tolerate slow throughput and relatively high costs.

Sean Linehan, CEO, WithExec

Prompt engineering is expected to continue evolving as AI and machine learning advance, and it could transform how people interact with AI.

Here are some ways prompt engineering may develop in the future:

  • Prompts that combine text, code, and images

  • Adaptive prompts: Prompts that adjust based on context

  • Prompts that ensure fairness and transparency: As AI ethics evolve, prompts may be developed to ensure fairness and transparency

  • Prompts for research and medical diagnosis: Prompts could help researchers analyze symptoms and medical research to suggest potential treatments

  • New applications: Prompt engineering could create new applications for AI technology

LAST THOUGHTS

As we wrap up our exploration of Prompt Engineering, it's clear that this skill is far more than just a technical trick.

It can unlock AI's true potential, transforming it from a tool that responds to commands into a collaborator that understands and fulfills our needs.

By mastering the art of crafting effective prompts, we can:

  • Generate more relevant, accurate, and creative responses.

  • Guide AI towards ethical and responsible behavior.

  • Push the boundaries of what AI can achieve in areas like content creation, customer service, and education.

So, dear readers, what are you waiting for?

Start experimenting with prompts, explore different techniques, and discover the hidden depths of AI's conversational abilities. The more you practice, the better you'll communicate with AI and harness its power to achieve your goals.

Remember, the future of AI is a conversation, and prompt engineering is the language that will shape it. Let's make it a good one!

Share your opinions and join the conversation as we explore the exciting world of AI and prompt engineering.

That’s all, folks! 🫡 
See you next Saturday with the letter Q!


Share the love ❤️ Tell your friends!

If you liked our newsletter, share this link with your friends and request them to subscribe too.

Check out our website to get the latest updates in AI
