The Basics of #AI #Prompt Engineering #vExpert #LLM


Let us recap quickly:

When interacting with Large Language Models and using them to assist you, you follow a consistent methodology.

Now that we have got that out of the way, we can look at some prompt engineering. This is all about adjusting your prompt to guide the LLM and make it output the information you need.

To add to that, we have the topic of inference.

Inference is the process of running live data through a trained AI model to make a prediction or solve a task. Put another way, it is the process of applying learned knowledge to new, unseen data in order to make decisions or predictions.
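As a rough illustration, here is a minimal sketch of inference using the Hugging Face transformers library (my choice of library for the example; any trained model works the same way). A model that has already been trained is handed text it has never seen, and its learned weights produce a prediction:

```python
from transformers import pipeline

# Load a model that has already been trained (its weights are fixed).
# Inference is simply pushing new data through those learned weights.
classifier = pipeline("sentiment-analysis")

# "Live", unseen data the model was never trained on:
result = classifier("Pineapple on pizza is surprisingly good.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```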

This topic becomes a bit deeper, as there are three types of inference when it comes to prompt engineering: zero-shot, single-shot, and few-shot.

So the smaller the LLM is (i.e. the fewer parameters it has to work with), the more inference you will have to give it to guide it. The general guidance is that if you have given it numerous examples and it still isn't outputting what you need, adding more examples is a waste of time and it's best to start fine-tuning the model instead.

Some examples:

Zero-shot example:

Translate the following word from English to Italian:

Prompt: Pineapple

Single-shot example:

Translate from English to Italian:

Example: Pineapple = ananas

Prompt: Police

Few-shot example:

Translate from English to Italian:

Example: Pineapple = ananas

Example: Superhero = supereroe

Prompt: Pineapple on pizza

A few-shot prompt usually contains a minimum of two and a maximum of five examples; after five, the general consensus is that if it isn't working at that point, using more examples is not going to help.
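To make the three styles concrete, here is a small hypothetical Python helper (the function name and structure are my own illustration, not a standard API) that assembles zero-, single-, or few-shot prompts in the format used above, and refuses more than five examples per the guidance:

```python
def build_prompt(task, query, examples=None):
    """Assemble a zero-, single-, or few-shot prompt string.

    task     -- the instruction, e.g. "Translate from English to Italian:"
    query    -- the actual input to run through the model
    examples -- optional list of (input, output) pairs; per the
                guidance above, more than five is rarely worth it
    """
    examples = examples or []
    if len(examples) > 5:
        raise ValueError("Past 5 examples, consider fine-tuning instead.")
    lines = [task]
    for source, target in examples:
        lines.append(f"Example: {source} = {target}")
    lines.append(f"Prompt: {query}")
    return "\n".join(lines)


# Zero-shot: no examples at all.
print(build_prompt("Translate the following word from English to Italian:",
                   "Pineapple"))

# Few-shot: two worked examples guide the model toward the output format.
print(build_prompt("Translate from English to Italian:",
                   "Pineapple on pizza",
                   examples=[("Pineapple", "ananas"),
                             ("Superhero", "supereroe")]))
```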

In general, the larger the LLM, the less inference it needs.

