Prompt Engineering: A Way to Smartly Use AI

Research Graph
8 min read · May 14, 2024
Photo by Andrew Neel on Unsplash

Introduction

Large Language Models (LLMs) have become the new face of Natural Language Processing (NLP). With their generative power and their ability to comprehend human language, our reliance on these models grows every day. However, LLMs are known to hallucinate and produce wrong outputs, typically when the user gives a previously unseen input. A technique called prompt engineering can reduce the chances of this happening. In this article, we will cover the what, why and how of prompt engineering.

Why do we need Prompt Engineering?

Image showing different prompting techniques. Source: Breuss, 2024. Link: https://realpython.com/practical-prompt-engineering/

Before defining prompt engineering, it is worth understanding what led to the creation of the term. To answer that question, we first need to understand how LLMs work.

Imagine an LLM as a text generator that calculates a probability for each possible next word and then predicts the next word using those probabilities. In the case of ChatGPT or any other generative AI model, the LLM tries to map the given user input to its domain data (the training data). Once the LLM can do this mapping, it generates the output. When the LLM fails to do this mapping, however, it starts to hallucinate and generates wrong outputs.
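
To make this concrete, here is a toy Python sketch of the next-word-prediction idea. Everything in it, the candidate words and their scores, is made up for illustration; a real LLM scores tens of thousands of tokens with a neural network rather than a hand-written list.

    # Toy sketch of next-word prediction; NOT a real LLM.
    # A model assigns a score to each candidate next word, the scores are
    # turned into probabilities with softmax, and one word is sampled.
    import math
    import random

    def softmax(scores):
        """Convert raw scores into probabilities that sum to 1."""
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical candidates that could follow "The internet is ...".
    candidates = ["down", "working", "slow", "blue"]
    scores = [2.5, 1.0, 0.8, -3.0]  # made-up scores for illustration

    probabilities = softmax(scores)
    next_word = random.choices(candidates, weights=probabilities, k=1)[0]

    print(dict(zip(candidates, [round(p, 3) for p in probabilities])))
    print("Predicted next word:", next_word)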

This behaviour is remarkable because, even though LLMs were created as text-generation tools, there is a lot more to them. Thanks to the huge corpus of training data, LLMs can learn to perform tasks they have never been explicitly trained on. At first glance this seems to contradict what was said earlier about hallucination on unseen input. The reason an LLM can still perform a new task is a correct mapping between the user domain and the domain data: when the prompt supplies enough context to establish that mapping, the generative model does not hallucinate and produces correct results.
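
A common way to see this in action is few-shot prompting: instead of retraining the model, you put a couple of worked examples directly into the prompt, and the model picks up the task from them. A minimal sketch (the reviews are invented for illustration):

    # Few-shot prompting: the examples inside the prompt itself establish
    # the mapping, so the model can perform a task it was never fine-tuned on.
    few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

    Review: "The modem setup was painless and support was friendly."
    Sentiment: Positive

    Review: "The red light keeps blinking and nobody answers the phone."
    Sentiment: Negative

    Review: "My internet was restored within minutes of restarting the modem."
    Sentiment:"""

    print(few_shot_prompt)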

What is Prompt Engineering?

Image showing what prompt engineering is. Source: Breuss, 2024. Link: https://realpython.com/practical-prompt-engineering/

Now that we have understood the need to convert from the user domain to the domain that the LLM understands, let us see how this is done using an example.

Imagine a scenario where you have an important assignment due and the internet goes off: you accidentally did something to the modem, and the box is now showing a red light instead of a green one. You open up the helpdesk chat, which uses an LLM in the backend. You are very upset and write the following in the helpdesk chat:

I’m pissed. I’ve got an assignment due tonight and the internet is not working. I have been working on this assignment for such a long time. Please help me.

When such an input is given to the helpdesk, the LLM in the backend will identify the problem and give you a long manual about why the internet might not be working, which will make you even more upset. This is because, during the mapping of keywords, the LLM found only that the internet is not working, and according to its domain data it generated a correct (but generic) answer. However, let's change the instructions given to the helpdesk a bit. Assume the following is written instead:

I’m pissed. I’ve got an assignment due tonight and the internet is not working. I accidentally kicked the modem and it fell. Now instead of showing a green light, a red light is beeping. I have been working on this assignment for such a long time. Please help me.

When such an input is given to the helpdesk, the LLM in the backend will try to find data mapping to the fallen modem box and the beeping red light. It can then generate an output similar to this:

Apologies for the internet outage. The red light could be beeping because not all the wires are plugged in. Please make sure that all the wires are plugged in and try restarting the modem.

You get this response, double-check the wiring and, voilà, the internet is back on. You score ninety out of a hundred on the assignment and think: how good is AI?

The important thing to note here is that you gave the model all the necessary keywords for it to fully realise its potential. This is exactly what prompt engineering is: a collaboration between the user and the AI model that helps fully utilise the capability of the model. Sounds simple, right? But it is not.

It is more than just designing and developing prompts. It comprises a wide range of skills and techniques for interacting with, developing, and testing LLMs. A simple prompt with some instructions can get you an output; however, to obtain good-quality results, the prompt must be well crafted. A prompt can contain instructions, questions and other details which provide more context, such as hints about the answer.

How to write an effective prompt?

Image showing the pipeline of prompt engineering. Source: Hordiienko, n.d. Link: https://serpstat.com/blog/who-are-prompt-engineers-and-why-hire-them/

Prompts vary from problem to problem. However, before learning how to write an effective prompt, let us understand the different parts of a prompt. A prompt can generally be divided into four parts, namely: instructions, context, input data and the format of the output.

Consider a prompt which goes like this:

“Can you proofread something for me? Make sure the output has no grammatical or spelling errors. The generated text needs to be consistent, coherent and easy to read. Also, output the original text document and highlight the mistakes along with the reason behind them. The text to proofread is ‘…….’ ”.

In the given prompt:

Instruction: Can you proofread something for me?

Context: Make sure the output has no grammatical or spelling errors. The generated text needs to be consistent, coherent and easy to read.

Output format: Also output the original text document and highlight the mistakes along with the reason behind them.

Input: The text to proofread is ‘…….’.
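
To make this anatomy concrete, here is a small Python sketch that assembles the four parts into a single prompt string. build_prompt is a hypothetical helper written for this article, not part of any library, and the '.......' placeholder stands in for the document to proofread.

    # Assembling a prompt from its four parts: instruction, context,
    # output format and input data. `build_prompt` is a hypothetical helper.
    def build_prompt(instruction, context, output_format, input_data):
        """Join the four components into a single prompt string."""
        return " ".join([instruction, context, output_format, input_data])

    prompt = build_prompt(
        instruction="Can you proofread something for me?",
        context=("Make sure the output has no grammatical or spelling errors. "
                 "The generated text needs to be consistent, coherent "
                 "and easy to read."),
        output_format=("Also output the original text document and highlight "
                       "the mistakes along with the reason behind them."),
        input_data="The text to proofread is '.......'",  # placeholder text
    )

    print(prompt)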

To write an effective prompt, all of these components should be present. Apart from this, three basic principles can help one write a prompt. These principles are:

  • Simple: The input prompt given to an LLM should be as simple as possible. The idea is not to confuse the LLM. For example:

“Write a Python code that creates a calculator and then also write the code to merge it into a website using the ReactJS framework.”

This prompt is complex and may confuse the LLM. Instead, we can break it into several prompts and use them to write the code more efficiently.

Prompt 1: ‘Write a Python code that creates a calculator.’

Prompt 2: ‘Now can you write the code for basic HTML and CSS to integrate the calculator with it?’

After breaking the prompt into parts, we make sure that the instructions given to the LLM are simple, which reduces the chances of errors due to hallucination (see the code sketch after this list).

  • Specific: The instructions given to the LLM need to be crystal clear. For example, with the previous prompt the LLM might write the code for a complex calculator, which is not required. Therefore, the prompt should be:

Prompt 1: ‘Write a Python code that creates a calculator which includes addition, multiplication, subtraction and division. The function should take in two input numbers.’

  • Short: If you have been following the article closely, you will notice that all the prompts have been kept short. You are not supposed to give long, verbose prompts as if you were talking to a human. Prompts need to be concise while still conveying the right message.
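
Below is a minimal sketch of how these principles look in code, chaining one simple, specific prompt into the next. The complete function is a hypothetical stand-in for whichever LLM API you use; swap in your provider's client.

    # Prompt chaining: simple, specific, short prompts sent one at a time.
    # `complete` is a hypothetical placeholder for a real LLM call
    # (e.g. an HTTP request to a hosted model); here it just echoes.
    def complete(prompt: str) -> str:
        """Stand-in for an LLM call; replace with your provider's API."""
        return f"<model response to: {prompt[:40]}...>"

    # Simple + specific: one clearly scoped task per prompt.
    prompt_1 = ("Write a Python code that creates a calculator which includes "
                "addition, multiplication, subtraction and division. "
                "The function should take in two input numbers.")
    calculator_code = complete(prompt_1)

    # The follow-up prompt carries the previous output as context,
    # so the model knows exactly what to integrate.
    prompt_2 = ("Now write the code for basic HTML and CSS to integrate "
                "this calculator with it:\n" + calculator_code)
    page_code = complete(prompt_2)

    print(page_code)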

Benefits of Prompt Engineering

Image showing different benefits of prompt engineering. Source: Howell, 2024. Link: https://101blockchains.com/implement-prompt-engineering-in-organization/

We are living in the age of AI, and using AI tools is becoming an invaluable skill. Prompt engineering enables exactly that: it allows you to collaborate with LLMs and generate responses that are both correct and quick. Some of the benefits of prompt engineering are as follows:

  • Enhanced reliability: With the right prompt, the results generated by an AI model fall within the required standards. Knowing the results will meet the required guidelines, you can use them for rapid content creation.
  • Faster operations: Once the output is reliable and meets expectations, results can be generated quickly, cutting down the time spent on testing and reiteration.
  • Easier scalability: With prompt engineering, the outputs generated by an AI model become reliable and quick, and these two properties make it easy to scale AI models across an organisation.
  • Cost reduction: Correct prompts reduce the need for human intervention and correction, which lowers the cost of corrections and alterations.

Conclusion

In this article, we discussed the why, what and how of prompt engineering. Prompt engineering is a useful technique when working with AI tools: it can cut down the time required for reiteration and improve the overall reliability of AI-based tools. We also discussed the three basic principles that can help one create an effective prompt. Lastly, used correctly, prompt engineering enables any user to fully utilise the hidden potential of AI-based tools and models.

References

Breuss (2024). Real Python. https://realpython.com/practical-prompt-engineering/

Hordiienko (n.d.). Serpstat. https://serpstat.com/blog/who-are-prompt-engineers-and-why-hire-them/

Howell (2024). 101 Blockchains. https://101blockchains.com/implement-prompt-engineering-in-organization/