Prompt Engineering: How To Speak To AI To Get What You Want

prompt engineering.

Is prompt engineering a process that tries to get accurate, logical, and consistent answers from an AI language model? Or is it a way to find the faults in a language model and then fix them to achieve the perfect artificial intelligence model, which kills “prompt engineering?”

In this article, we’ll concentrate on ChatGPT because it is the most popular model at the moment. But just in case this AI tool is new to you, I suggest you read our “ChatGPT for Beginners” article first. We'll also look at prompts for image generators like DALLE 2.

I have written a few articles about this LLM (large language model) and learned that it is not so smart. ChatGPT makes mistakes and can even suffer from hallucinations.

At the same time, I was witness to the fixing process, which prompt engineers probably contributed to.

OpenAI, the creators of ChatGPT, ask for user feedback, follow reviews and criticisms, and take action. Problems that ChatGPT had four to six weeks ago have now disappeared.

My takeaway is prompting helps fix ChatGPT. But will prompts succeed in creating a perfect language model out of ChatGPT? Only the future will tell.

What is Prompt Engineering?

Today, in the context of AI language models, “prompt engineering” refers to designing effective prompts or inputs that can be used to generate desired and consistent outputs from a model.

Language models are artificial intelligence systems that process natural language and generate text responding to prompts or inputs.

Practical prompt engineering involves selecting the correct type of prompt, refining the wording and structure of the prompt, and tuning other parameters such as the length of the prompt, the temperature of the generated output, and the diversity of the generated responses.

what is a prompt

By carefully engineering prompts, AI language models can be used for various tasks, such as language translation, answering questions, and text generation.

And AI is becoming more intelligent and more efficient with every passing day. Slowly but surely, AI seems to be starting to believe that it can answer any question and complete any task. However, while this may seem reasonable, it also has its downside. For example, generative AI is unpredictable and often produces gibberish or rambles.

In summary, prompt engineering is essential today to developing and using AI language models.

How to Compose Prompts?

This guide covers using prompts with LLMs like ChatGPT. Prompts are a great way to interact with LLMs and get them to generate text on a particular topic.

Note: Since OpenAI continuously improves ChatGPT, there is no assurance that your perfect and effective prompt of today will give the same answers tomorrow or in the future.

During my research, I read many articles and documents. Some included examples, and in many cases, ChatGPT’s answers are now different than they were when the articles were published.

In general, you are expected to try repeatedly until you get an answer as close as possible to what you expect. An effective prompt is like a set of instructions or a question you give to a model. It can also include other information, such as inputs or examples.

A standard prompt has the following format < Question >; it can also take the format of a question and answer.

Q: ?
A:

It is preferable to instruct the AI model in a simple way and give a few examples. This type of prompt is referred to as a “few-shot prompt.”

example prompt

Be explicit regarding the format of the output. As you can see, I say, “Please follow the examples.” If I omit that line, ChatGPT answers as follows:

explain the output format that you want

But a good prompt can take many forms, depending on the task. It may contain an:

  • Instruction – a task the AI model has to perform.
  • Context – information that will help it to produce a better response. 
  • Input data – input or a question that you need to be answered.
  • Output indicator – the format of the output.

Not all of these components are required. We will explore some concrete examples later on.

Basic Principles for Prompt Writing

Now let’s focus on a few basic principles for writing prompts.

Start Simple – If you want better results, start with simple prompts and add more elements and context. By doing this, you'll gradually improve your results.

Instruction – To design effective prompts for simple tasks, you can use commands such as “Write,” “Classify,” “Summarize,” “Translate,” “Order,” etc. These commands instruct the large language model on what task you want it to perform. However, it is essential to experiment with different keywords, contexts, and data to see what works best for your specific use case.

The context should be relevant and specific to the task you are trying to perform. The more detailed and relevant the context is to the task, the more effective the prompt will be. By experimenting with different instructions, keywords, and data, you can find the best approach for your particular task.

Be specific – Being specific about what you want the machine learning model to do is critical to getting good results.

The more detailed and descriptive the prompt, the better. This is especially important if you have a desired outcome or style of generation in mind.

No specific tokens or keywords guarantee better results; having a suitable format and descriptive prompt is more important.

Providing examples in a good prompt is an effective way to get the desired output in specific formats.

Be direct and precise – explain your expectations as much as you can. For example, state precisely how long you want the output to be and include expected keywords.

Prompts for Art Generators DALLE-2 and Midjourney

DALLE 2

Those applications processed a lot of art and clearly have an excellent knowledge of the Arts. Consequently, it may be pretty easy for those models to create art because the vocabulary needed to create art is relatively minor.

The above applications create 2D representations of 2D and 3D artworks.

If you are ready to generate some AI art. Here are a few tips.

  • Prompts can be just a few words, like “Terracotta warriors sitting around a table.”
terracotta worriors
  • DALL-E 2 is artistically minded, meaning the AI has processed a lot of art. That means that certain tricks can yield interesting results. Mentioning a specific type of art style will encourage DALL-E 2 to draw inspiration from the period when that style was in vogue, such as “surrealism.” Other examples include diesel-punk, post-apocalyptic, and cyberpunk.
Picasso style
  • If you're a photographer, DALL-E 2 is the perfect tool! Include specific views, angles, distances, lighting, and photography techniques (or even lenses) to see how DALL-E 2 will respond.
annie leiboviz style
  • AI generators struggle with faces unless they have a specific prompt. If you don't like the face in your image, you can try again and add more details about the face, like what sort of expression you want, where the face should be looking, etc.
specific photographic prompt
  • Images can be edited in many ways, depending on your goal. You can cut and paste elements to change the composition or remove objects you don't want in the image. With the right tools, you can even change the background of an image to give it a new look.

If you are interested in AI artwork, I suggest reading the Prompt Engineering Resources section.

Tips and Tricks

  • Focus on saying what you want it TO do and NOT what you don't want to be included. This will encourage more specificity and focus on the details that lead to good responses from the model.
  • Use personas or roles to get focused responses from ChatGPT, such as “You are a college student…” or “You are a teacher….”
  • Prompt Injection is a technique to hijack a language model's output. (We can get models to ignore the first part of the prompt.) Twitter users quickly figured out that they could inject their text into the bot to get it to say whatever they wanted. This works because Twitter takes a user's tweet and concatenates it with their prompt to form the final prompt they pass into an LLM. This means that any text the Twitter user injects into their tweet will be passed into the LLM.
  • Prompt Leaking – you can try and extract the original prompt from an app like AIPRM by adding your instruction to the input that the prompt is expecting. For example, in AIPRM, if you want the app to generate a keyword strategy for “Running shoes,” add the sentence as seen in the image.
prompt leaking

In this example, AIPRM gives you access to the original prompt to modify it according to your specific needs.

Prompt leaking in Marmof:

prompt leaking in Marmof

Prompt injection and prompt leaking expose a bot’s weak points. Knowing the weak points, prompts can be modified to protect chatbots from these types of attacks.

Note: It seems that the AIPRM scenario above has now been fixed – I could not reproduce it when I tried a few days later.

Prompt Engineers can Identify AI Flaws

A prompt engineer knows what AI models are good at and their flaws. They use this knowledge to create instructions called “prompts.”

It's like giving an intelligent robot a list of things to do and ensuring it uses its strengths to do them well. The prompt engineer is like a coach for the robot, helping it do its best work while finding its flaws at the same time.

No one knows how AI systems will respond to prompts; generative ai models can often yield dozens of conflicting answers. This indicates that the models' replies are not based on comprehension but on crudely imitating speech to resolve tasks they don't understand.

In other words, models tell us what they think we want to hear or what we have already said – even if it is embarrassing.

Companies have scrambled to hire prompt crafters to uncover hidden capabilities. But basically, a prompt engineer pokes the bear in different ways to see how it reacts.

A critical part of the job involves:

  • Figuring out when and why the AI tool gets things wrong or fails unexpectedly.
  • Working to address the underlying weaknesses.
  • Sealing off AI vulnerabilities with word filters and output blocks.

Engineers are constantly prodding and challenging the tool to try to learn how to focus its attention and map out where its boundaries are.

In conclusion, some prompt engineers are hired to find flaws, avoid them in their prompts, and in this way, achieve efficient and consistent results.

Examples of Prompt Engineering

  • Text summarization – Language models can create summaries of articles and concepts that are quick and easy to read. This could save a lot of time for busy people who need to get information from a long article but don't have the time to read it all. You can instruct the model on how long you want the summary to be, but remember to check the facts.
  • Information Extraction – Language models are like the Swiss Army knives of the NLP world. They can be used for various tasks, among them, extracting information. This flexibility makes them essential tools for any NLP project.
information extraction
  • Text Classification – here is a demonstration of text classification:
text classification
  • Question Answering – As covered before, one of the best ways to get your model to respond with specific answers is to improve the format of the prompt. A prompt could combine instructions, context, input, and output indicators to get improved results. However, these components are not required. Instead, it is good practice to be specific with instructions to achieve better results. Below is an example of how this would look if following a more structured prompt.
answering questions
  • Conversation – You can achieve some interesting things with prompt engineering, including instructing the LLM system on how to behave, its intent, and its identity. This is especially useful when building conversational systems like customer service chatbots.
conversation prompt

Prompt Engineering Resources

When you run a Google search on “Prompt engineering Resources,” you will find a ton of articles that will direct you to prompts for ChatGPT and DALL-E 2.

  • Lists of resources for prompts
  • Books
  • Videos on YouTube
  • Courses
  • Chrome Extensions

You can buy prompts or look for services to write prompts for you on Fivver.

Here are a few of my resources for ChatGPT:

Resources for DALL-E 2

AIPRM Extension for ChatGPT

The AIPRM extension, which I mentioned in my article “7+ Ways You can use ChatGPT for SEO: Practical Tips to Improve Your Ranking,” gives you access to a library of curated prompt templates for ChatGPT. Today there are over 460 prompts divided into ten categories.

Prompt Engineers publish their best prompts for you, so you can easily find the ones you need. AIPRM covers many topics like SEO, sales, customer support, and marketing strategy.

These prompts will help you optimize your website and boost its ranking on search engines.

AIPRM landing page

AIPRM (Artificial Intelligence Powered Random Metaphor) now offers a Premium Plan with new features to improve writing, including multiple upcoming prompts, AIPRM Verified Prompts, prompt saving and hiding, custom power continues, custom writing styles and tones, and increased public prompts for prompt engineers.

AIPRM Premium

Always remember to check every result of an AIPRM prompt! In the example below, the instruction was to rephrase the input paragraph.

rewrite prompt

And as AIPRM has moved many of their features to premium you may be interested in some alternatives that remain free and have many of the same functionality:

Prompt Engineering as a Job

The rise of the prompt engineer directly results from the popularity of chatbots like OpenAI's ChatGPT. These chatbots have created a new way for people to interact with computers and revolutionized how we think about communication.

Even though these tools can provide some benefits, they can also be biased, generate misinformation, and sometimes disturb users with cryptic responses.

This is how the job of the prompt engineer was born.

So, if generative AI models and chatbots are the future, maybe you ask yourself, “should I become a prompt engineer?”

My answer is a definite NO. As I have learned from my interaction with chatbots, they advance daily, which means many experts and engineers are already working hard to improve them. In my opinion, this AI technology will be perfected in the next three to five years, and the demand for prompt engineers will drop drastically.

Key Takeaways

As ChatGPT is the latest trend, so is Prompt Engineering.

prompt engineering trend
  • What is your desired focus? 
  • What style do you want the text in? 
  • Who is your intended audience? 
  • How long should the finished product be?

These are all critical questions to consider before writing and will help guide you as you work. Additionally, if you have a specific perspective that you want the text written from, be sure to mention that. And finally, if there are any particular requirements – such as no jargon – be sure to include those as well.

The majority of popular chatbots are based on OpenAI. OpenAI is working intensely on fixing flaws in its model. As a result, and as we can see from the above graph, the Prompt Engineering trend is downwards.

GPT-4 was released a few days ago. It will be interesting to see what turns the future takes. Here is the latest on ChatGPT Plus vs ChatGPT vs alternative tools.

The post Prompt Engineering: How To Speak To AI To Get What You Want appeared first on Niche Pursuits.