
Guide to effective prompting

Introduction

Prompting is what you as an AI Engineer do to give the LLM instructions.
A prompt carries not only instructions but also style guidelines, rules, steps, and any other language that can guide the model's output. Prompting takes time to learn, and the best way to get good is to practice and learn as many techniques as you can.

This short guide is a starting point for your journey.

Prompting is more of an art than a science, and the information here is a guide to help you get started.
The techniques and strategies you use will depend on the context of the AI assistant and the user's needs.
The goal of this guide is to help you craft prompts that are clear, concise, and effective.

Prompt Goals

Let's start by defining the goals of a prompt.
This will allow you to measure yourself against these goals and improve your prompts over time.

Some goals you should have for your prompt:

  • Be easy to read and understand for a human (this will transfer well to the AI)
  • Have clear structure with sections
  • Use the minimum number of tokens to get the desired output
  • Focus on the last part

Let's break down each of these goals and see how it can be achieved.

Be Easy to Read and Understand

You should be able to read the prompt and it should make sense and flow logically.
A technique that might help is to pretend that you're a new employee who has just joined the organization.
The prompt should be clear enough that you can understand what is being asked of you.
Read it with the eyes of someone who has no context and see if it makes sense.

Some checks:

  • Did you read the prompt out loud?
  • Did you understand the prompt?
  • Did you read the prompt with fresh eyes?

Have Clear Structure with Sections

The prompt should be broken up into sections that are clearly defined.
Start with the outline of the prompt and break it up into major sections.
The sections can then be ordered in a logical way.
A good way to split up the sections is by using markdown headers.

Some sections might cover:

  • Persona
  • The company
  • Some facts
  • Rules
  • Communication style
  • Examples
  • Instructions on what to do

Top-level headers should be used for major sections and sub-headers for subsections.
This will help you to keep the prompt organized and easy to read.
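
As an illustration, here is a minimal skeleton of a sectioned prompt; the persona, company, and rules are placeholders for your own content:

```markdown
# Persona
You are Alex, a friendly support agent.

# The company
Acme ISP provides home fibre internet.

# Rules
- Never share account details without verification.
- Escalate billing disputes to a human.

# Communication style
Keep replies short and avoid jargon.

# Instructions
Greet the customer and ask how you can help.
```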

Some checks:

  • Did you break the prompt up into sections?
  • Are the sections clearly defined?
  • Are the sections ordered logically?
  • Are similar parts of the prompt grouped together?

Use the Minimum Number of Tokens

Token count is important: extra tokens add cost and can dilute the instructions that matter.
Keeping your prompt concise and to the point will help you get the best results.
Find ways to say the same thing in fewer words and with more clarity.
Avoid using unnecessary words and phrases that don't add value to the prompt.

After a lot of adding, it sometimes helps to remove parts of the prompt.
It's like sculpting a statue: you add and remove material until you get the desired result.
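
As a small illustration (the wording is made up), the same rule can often be stated in far fewer tokens:

```text
Before: It would be great if you could please always try to make sure
        that your answers to the customer are as short as possible.

After:  Keep your answers to the customer short.
```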

Focus on the Last Part

The last sentence or paragraph of the prompt is the most important.
This is where you should summarize some of the main points and give the final instruction to the AI.

The last part of the prompt is like the conclusion of an essay.
Summarize the task and give the command to engage with the customer/user.
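
For example, a closing paragraph for a support-agent prompt might look like this (illustrative wording):

```markdown
Remember: follow the rules above, keep replies short, and never guess
at account details. Now greet the customer and ask how you can help.
```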

Prompt Strategies

There are also some strategies you can employ to improve your prompts and the resulting output:

  • Break the prompt up and introduce its parts to the AI progressively
  • Use a separate prompt and LLM instance for certain parts of the conversation

We'll explore how these strategies influence the prompt.

Break the Prompt Up

Breaking the prompt up into smaller parts can help the AI understand the task better.
This can be done by taking some parts of the system prompt and injecting them only when the process reaches a specific state.

The best way to do this is to add the instructions to the state description and then make the state AI-enabled.
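
Outside of Stubber, the idea looks roughly like this minimal Python sketch; the base prompt, state names, and instructions are all illustrative:

```python
# A minimal sketch of progressive prompt injection (illustrative; not the
# Stubber API). The base prompt is always sent, while state-specific
# instructions are injected only when the process reaches that state.
BASE_PROMPT = "You are Alex, a support agent for Acme ISP. Be polite and concise."

STATE_INSTRUCTIONS = {
    "collect_details": "Ask for the customer's account number and router model.",
    "diagnose": "Walk through one troubleshooting step at a time and wait for the result.",
}

def system_prompt_for(state: str) -> str:
    # Only the current state's instructions are included, which keeps the
    # token count low and the AI focused on the task at hand.
    extra = STATE_INSTRUCTIONS.get(state, "")
    return (BASE_PROMPT + "\n\n" + extra).strip()

print(system_prompt_for("diagnose"))
```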

Separate tasks for different LLM instances

If you have a complex process with many parts, you might want to use different LLM instances for different parts of it.
You can split out certain parts of the process and use a specific system prompt for just that part.
A good candidate is, for example, a step that requires the LLM to infer a lot of information from previous steps in the conversation.

Let's look at a concrete example of a process for diagnosing an internet connection problem.
The conversational LLM might be chatting with the customer over the course of many states.
Upon reaching a state where the problem needs to be diagnosed, you might switch to a different LLM instance whose system prompt is dedicated to diagnosing problems.
The conversational LLM passes all the collected information to the diagnostic LLM, which returns its diagnosis to the conversational LLM.

The key concept here is to recognize when you need a dedicated conversational LLM alongside dedicated task-specific LLMs.
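
As a rough sketch of the handoff, assuming the OpenAI Python client; the prompts, model name, and summary text are illustrative:

```python
from openai import OpenAI

client = OpenAI()

DIAGNOSTIC_PROMPT = (
    "You are a network diagnostics expert. Given a summary of symptoms, "
    "return the most likely cause and a single next step."
)

def ask(system_prompt: str, user_content: str) -> str:
    # Each "instance" is simply a call with its own dedicated system prompt.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content

# Summary collected by the conversational LLM over several states:
summary = (
    "No internet since 8am; router model X200 with a blinking orange "
    "light; customer has already restarted the router once."
)
# The diagnostic LLM works only from that summary, not the whole chat.
diagnosis = ask(DIAGNOSTIC_PROMPT, summary)
```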

You can achieve this in Stubber by using the chat_name on the gpt_chat task.

Further Resources