Prompt Engineering: A Practical Example
You’ve used ChatGPT, and you understand the potential of using a large language model (LLM) to assist you in your tasks. Maybe you’re already working on an LLM-supported application and have read about prompt engineering, but you’re unsure how to translate the theoretical concepts into a practical example.
Your text prompt guides the LLM’s responses, so tweaking it can get you vastly different output. In this tutorial, you’ll apply multiple prompt engineering techniques to a real-world example. You’ll experience prompt engineering as an iterative process, see the effects of applying various techniques, and learn about related concepts from machine learning and data engineering.
In this tutorial, you’ll learn how to:
- Work with OpenAI’s GPT-3.5 and GPT-4 models through their API
- Apply prompt engineering techniques to a practical, real-world example
- Use numbered steps, delimiters, and few-shot prompting to improve your results
- Understand and use chain-of-thought prompting to add more context
- Tap into the power of roles in messages to go beyond using singular role prompts
You’ll work with a Python script that you can repurpose to fit your own LLM-assisted task. So if you’d like to use practical examples to discover how you can use prompt engineering to get better results from an LLM, then you’ve found the right tutorial!
Get Sample Code: Click here to download the sample code that you’ll use to get the most out of large language models through prompt engineering.
Understand the Purpose of Prompt Engineering
Prompt engineering is more than a buzzword. You can get vastly different output from an LLM when using different prompts. That may seem obvious when you consider that you get different output when you ask different questions—but it also applies to phrasing the same conceptual question differently. Prompt engineering means constructing your text input to the LLM using specific approaches.
You can think of prompts as arguments and the LLM as the function that you pass these arguments to. Different input means different output:
>>> def hello(name):
...     print(f"Hello, {name}!")
...
>>> hello("World")
Hello, World!
>>> hello("Engineer")
Hello, Engineer!
While an LLM is much more complex than the toy function above, the fundamental idea holds true. For a successful function call, you’ll need to know exactly which argument will produce the desired output. In the case of an LLM, that argument is text that consists of many different tokens, or pieces of words.
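If you’re curious what these tokens look like, then you can inspect them with OpenAI’s tiktoken package, which implements the tokenizers that the GPT models use. Here’s a minimal sketch, with a made-up sample sentence:
import tiktoken

# Load the tokenizer that GPT-3.5 models use
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Prompt engineering is more than a buzzword."
token_ids = encoding.encode(text)

# Decode each token ID on its own to reveal the pieces of words
print([encoding.decode([token_id]) for token_id in token_ids])
print(f"The text consists of {len(token_ids)} tokens.")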
Note: The analogy of a function and its arguments has a caveat when dealing with OpenAI’s LLMs. While the hello() function above will always return the same result given the same input, the results of your LLM interactions won’t be 100 percent deterministic. This is currently inherent to how these models operate.
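To make the function analogy concrete, here’s a minimal sketch of what calling the “LLM function” can look like with OpenAI’s Python library. It assumes that you’ve installed the openai package (version 1.0 or later) and exported your API key as the OPENAI_API_KEY environment variable, and the llm() helper is just an illustrative name:
from openai import OpenAI

client = OpenAI()  # Reads the API key from the OPENAI_API_KEY environment variable

def llm(prompt):
    """Send a prompt as the 'argument' and return the model's text completion."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # Reduces randomness, but output still isn't guaranteed identical
    )
    return response.choices[0].message.content

print(llm("Hello, World!"))
Just like with hello(), you pass different arguments to get different output, except that here, even the same argument won’t always produce exactly the same result.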
The field of prompt engineering is still changing rapidly, and there’s a lot of active research happening in this area. As LLMs continue to evolve, so will the prompting approaches that will help you achieve the best results.
In this tutorial, you’ll cover some prompt engineering techniques, along with approaches to iteratively developing prompts, that you can use to get better text completions for your own LLM-assisted projects:
- Zero-Shot Prompting
- Few-Shot Prompting
- Delimiters
- Numbered Steps
- Increased Specificity
- Role Prompts
- Chain-of-Thought (CoT) Prompting
- Structured Output
- Labeled Conversations
There are more techniques to uncover, and you’ll also find links to additional resources in the tutorial. Applying the mentioned techniques in a practical example will give you a great starting point for improving your LLM-supported programs. If you’ve never worked with an LLM before, then you may want to peruse OpenAI’s GPT documentation before diving in, but you should be able to follow along either way.
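To give you a first taste, here’s a quick sketch of the difference between zero-shot and few-shot prompting, using made-up example prompts for a sentiment classification task:
# Zero-shot: describe the task and rely on the model to figure it out
zero_shot_prompt = "Classify the sentiment of this message: 'I love this product!'"

# Few-shot: show a couple of solved examples before posing the real input
few_shot_prompt = """Classify the sentiment of each message.

Message: 'This is terrible.' -> Negative
Message: 'Works like a charm.' -> Positive
Message: 'I love this product!' ->"""
With the few-shot version, the model can pick up the expected label format from the examples instead of having to infer it from the task description alone.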
Get to Know the Practical Prompt Engineering Project
You’ll explore various prompt engineering techniques in service of a practical example: sanitizing customer chat conversations. By practicing different prompt engineering techniques on a single real-world project, you’ll get a good idea of why you might want to use one technique over another and how you can apply them in practice.
Imagine that you’re the resident Python developer at a company that handles thousands of customer support chats on a daily basis. Your job is to format and sanitize these conversations, and to help decide which of them require additional attention.
Collect Your Tasks
Your big-picture assignment is to help your company stay on top of handling customer chat conversations. The conversations that you work with may look like the one shown below:
[support_tom] 2023-07-24T10:02:23+00:00 : What can I help you with?
[johndoe] 2023-07-24T10:03:15+00:00 : I CAN'T CONNECT TO MY BLASTED ACCOUNT
[support_tom] 2023-07-24T10:03:30+00:00 : Are you sure it's not your caps lock?
[johndoe] 2023-07-24T10:04:03+00:00 : Blast! You're right!
You’re supposed to make these text conversations more accessible for further processing by the customer support department in a few different ways:
- Remove personally identifiable information.
- Remove swear words.
- Clean the date-time information to only show the date.
The swear words that you’ll encounter in this tutorial won’t be spicy at all, but you can consider them stand-ins for more explicit phrasing that you might find out in the wild. After sanitizing the chat conversation, you’d expect it to look something like the version below. The exact redaction markers (generic role labels for the usernames and an emoji standing in for the swear words) are one plausible choice:
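[Agent] 2023-07-24 : What can I help you with?
[Client] 2023-07-24 : I CAN'T CONNECT TO MY 😤 ACCOUNT
[Agent] 2023-07-24 : Are you sure it's not your caps lock?
[Client] 2023-07-24 : 😤! You're right!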