Create the perfect ChatGPT prompt - Prompt Engineering

As someone who works with large language models like GPT and Claude on a daily basis, I know the challenges, but also the incredible possibilities, of prompt engineering.

So you want to learn how to create the perfect ChatGPT prompt? No problem, you’ve come to the right place!

In this article, I would like to share my personal experience and tips on how to write the perfect prompt for AI models and get the desired results.

☝️ Key points at a glance

  • 🤖 Selecting the AI model: Choosing the right AI model (GPT-4, Claude-2, or Bard) is crucial for the quality of the results.
  • ✍️ Clear instructions: Detailed and specific prompts improve results. Formatting helps to separate different parts of the prompt.
  • ⏳ Give the model time: The model delivers more thoughtful answers when prompted to reason step by step. Repeating and varying questions leads to more nuanced results.

Tip 1: Use the right model

The first decision is which AI (artificial intelligence) model to use. There is now a whole range of powerful options, such as GPT-3.5, GPT-4, Claude, and Bard. Depending on the use case, different models are more suitable.

| LLM     | First release | Developer |
| ------- | ------------- | --------- |
| ChatGPT | 2022-11-30    | OpenAI    |
| Bard    | 2023-03-21    | Google    |
| Claude  | 2023-03-14    | Anthropic |
  • GPT-3.5 for simple tasks, developed by OpenAI, is a successor to the popular GPT-3 model and is based on an extensive data collection of texts and code.
  • GPT-4 for development and large-scale tasks is the latest version of OpenAI’s GPT language model. It was trained on a huge dataset of text and code and has significantly more parameters than its predecessors. GPT-4 has outperformed earlier versions in various benchmarks and is a powerful tool for developers and bloggers who want to use large language models.
  • Bard for SEO tasks is a comprehensive language model developed by Google AI, based on the PaLM 2 model, one of the largest and most powerful language models developed to date. Although Bard is still in development, it has already demonstrated impressive potential: it can write creative content in various forms, translate languages, and answer questions informatively.
  • Claude-2 for long texts, developed by Anthropic, is also a comprehensive language model. It has been trained on a large amount of text and code data and can generate text, translate languages and produce various types of creative content. Claude-2 is particularly known for generating text that is both informative and engaging.

Tip 2: Write clear instructions

The prompt is the input used to control the AI model. The more detailed and specific the prompt, the better the results usually are. I like to use different formatting to separate the different parts of the prompt, such as headings, dashes or quotation marks.

Provide a detailed context for the problem. By reducing ambiguity, you reduce the likelihood of irrelevant or incorrect answers.

Use delimiters to clearly identify different parts of the input. Examples include section titles, triple quotation marks ("""), triple backticks (```), triple hyphens (---), and angle brackets (<>).
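Here is a minimal sketch of what delimiters can look like in practice (the article text and exact wording are placeholders):

```python
article = "...your blog post text here..."

# The --- markers make it unambiguous which part of the prompt is the
# instruction and which part is the text to be processed.
prompt = (
    "Summarize the text between the --- markers in three sentences.\n"
    "---\n"
    f"{article}\n"
    "---"
)
print(prompt)
```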

Persona and format: Specify the desired output format or the length of the response. One method is to ask the model to assume a role. Examples:

  • Imagine you are a professional blogger.
  • Summarize the text in three sentences.
  • Give me a summary of this text. Here are examples of summaries that I like:
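When using an API directly, a persona like this is typically set via the system message. A minimal sketch using the openai Python package (the model name and wording are just examples):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message establishes the persona for the whole conversation.
        {"role": "system", "content": "You are a professional blogger."},
        {"role": "user", "content": "Summarize the following text in three sentences: <text>"},
    ],
)
print(response.choices[0].message.content)
```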

Give examples. This is the basis of the few-shot prompting method, which works as follows:

  1. First example (first “shot”): Give an example of a prompt and the corresponding response.
  2. Second example (second “shot”): Give a second example of a prompt and response.
  3. Your prompt: Write your actual prompt. The model can now follow the pattern of the first two examples.
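Few-shot prompting maps naturally onto a chat message history, with each example as a user/assistant pair. A sketch (the examples themselves are made up):

```python
messages = [
    # First "shot": an example prompt and the desired response.
    {"role": "user", "content": "Title idea for a post about keyword research?"},
    {"role": "assistant", "content": "Keyword Research in Practice: A Step-by-Step Guide"},
    # Second "shot": a second example pair that reinforces the pattern.
    {"role": "user", "content": "Title idea for a post about on-page SEO?"},
    {"role": "assistant", "content": "On-Page SEO Checklist: 10 Fixes That Actually Help"},
    # The actual prompt: the model now follows the pattern set above.
    {"role": "user", "content": "Title idea for a post about prompt engineering?"},
]
```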

Tip 3: Give the model time to think

Many users expect AI models to respond at lightning speed. However, the more time you give the model to formulate its answer, the more thoughtful and correct it usually is. One option is to let the model think explicitly step by step, i.e. to formulate out loud how it arrives at a solution.

By asking for a logical chain of thought, you encourage the model to think step by step and in a more considered way. You can ask for a "chain of thought" or define the specific steps. This simple addition to the prompt is known to work well: "Think step by step."

For example, if you’re asking the model to evaluate a blog post, you could guide it like this:

  1. Read the blog post carefully and understand the main topic.
  2. Compare the content and style of the blog post to recognized standards of good blog posts.
  3. Evaluate the blog post by critically assessing both the content and the style.
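Put together as a single prompt, this might look like the following sketch (the blog post itself is a placeholder):

```python
prompt = (
    "Evaluate the following blog post. Think step by step:\n"
    "1. Read the blog post carefully and identify its main topic.\n"
    "2. Compare its content and style to recognized standards for good blog posts.\n"
    "3. Give a final evaluation that critically assesses both content and style.\n\n"
    "Blog post:\n"
    "<insert blog post here>"
)
```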

Tip 4: Ask the question several times

Instead of always being satisfied with the first answer, I like to ask important questions several times, with slightly different parameters.

By varying the temperature or the number of examples in the prompt, for example, I may get more nuanced results. The optimal solution can then be found by comparing the different outputs.

Here are some customization options:

  • Temperature: controls the randomness or creativity of the LLM’s response. A higher temperature leads to more varied and creative answers. A lower temperature results in more conservative, predictable answers.
  • Customized instructions: If you are using ChatGPT, customize the Custom Instructions.
  • Shots: Refers to the number of examples given in the prompt. Zero-shot means to give no examples, one-shot means to give one example, and so on.
  • Phrasing: Try several variants of the prompt itself: vary the degree of directness, ask for explanations, draw comparisons, and the like.
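A minimal sketch of this comparison loop, again using the openai package (model name and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()
prompt = "Suggest a title for a blog post about prompt engineering."

# Ask the same question at several temperatures and compare the outputs.
for temperature in (0.2, 0.7, 1.0):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```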

Tip 5: Lead the model

AI models tend to want to satisfy the user. You therefore need to steer them carefully in the right direction.

If I am not satisfied with an initial answer, I can ask the model to check and correct its answer. Or I can ask it to explain its approach step by step to uncover sources of error.

Here are some examples:

  • If the document is too long, the model might stop reading too soon. You can guide the model to process long documents piece by piece and gradually create a complete summary.
  • Help it to self-correct. If the model begins incorrectly, it is often difficult to correct itself. “You gave me an explanation about keyword research. Are you sure about your answer? Could you check it and provide a corrected explanation, starting with the basics of keyword analysis?”
  • Avoid leading questions. The model wants to satisfy, so guide it, but keep the prompts open.
    • ❌ “Is SEO important for bloggers?”
    • ✅ “I’d like an unbiased overview of case studies on the relationship between blogging and search engine optimization.”

Tip 6: Break down the task or request

Complex tasks are more prone to errors. That’s why I like to break them down into several simple steps. I either process the parts one after the other and feed the intermediate results back in, or I give the model a separate prompt for each sub-aspect. In this way, the strengths of the AI come into their own.

Intent classification makes it possible to identify the most relevant instructions for each sub-task and then combine the answers into a coherent overall result.

For example:

  • Original query: I am planning a new blog section on the topic of “prompt engineering” and need the most important keywords and a list of 20 article ideas.
  • Breakdown of the request:
    • Step 1: Research the most important articles on the topic of “prompt engineering”.
    • Step 2: Create a table with the most important keywords.
    • Step 3: List 20 article ideas based on the keywords.

The AI would look at each intent individually and give customized advice, then combine them into a comprehensive answer.
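Implemented as a chain of prompts, each intermediate result is fed into the next step. A sketch (all prompts and the model name are illustrative):

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text answer."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: research the topic.
research = ask("List the most important subtopics of 'prompt engineering'.")

# Step 2: feed the intermediate result back in to get keywords.
keywords = ask(f"Create a table of the most important keywords for these subtopics:\n{research}")

# Step 3: use the keywords to generate article ideas.
ideas = ask(f"List 20 article ideas based on these keywords:\n{keywords}")
print(ideas)
```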

Tip 7: Use the available techniques

AI models have limitations that can be compensated for with supplementary tools. I integrate a calculator for mathematical calculations and a retrieval system for searches. The integration of APIs and external functions can also expand the model’s capabilities. By combining the strengths of different systems, impressive applications can be developed.

Here are some exemplary tools:

  • Calculators: LLMs are not good at math. Their main task is to generate words, not numbers. Calculators can significantly improve an LLM’s math skills.
  • RAG: Connect the LLM to intelligent information retrieval systems instead of trying to squeeze everything into the context window. For example, Metaphor’s web search API.
  • Code execution: Use the code execution capability or call external APIs to execute and test code created by the model.
  • Functions: Define functions that the model should call. Examples are get_keywords(), write_intro() or fetch_wp_api(). Execute these functions and return the answer to the model.
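As a sketch of the function pattern with OpenAI-style tool calling (get_keywords is the hypothetical helper from the list above, and its schema is made up):

```python
import json
from openai import OpenAI

client = OpenAI()

# Describe the function so the model knows when and how to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_keywords",  # hypothetical helper from the list above
        "description": "Return the top keywords for a blog topic.",
        "parameters": {
            "type": "object",
            "properties": {"topic": {"type": "string"}},
            "required": ["topic"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Find keywords for 'prompt engineering'."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    # Here you would run get_keywords(**args) and send the result back
    # to the model as a "tool" message so it can formulate the answer.
    print(call.function.name, args)
```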

In case you are wondering which AI tools I use on a daily basis, here is a brief overview.

Currently, I mainly use GPT-4 directly via ChatGPT Plus or HARPA. For longer texts, I like to use Claude-2.

I use tools like Jasper or neuroflash less and less.


As you can see, good prompt engineering requires a lot of experience and a feel for what makes AI models tick. But with the right techniques, the performance of the systems can be increased enormously.

I hope these tips on prompt creation from my daily work will help you to write even better prompts and make the most of the potential of this fascinating technology.

I would be delighted to receive constructive feedback!

👉 This is what happens next

  • Experiment with different AI models to see which one best suits your specific use case.
  • Use the tips from this article to write and refine your own prompts.
  • Share your experiences and results in the comments or in your blog post to help others in the community.
