LLM
The Basics of #AI #Prompt Engineering #vExpert #LLM
Let us recap quickly: when interacting with Large Language Models and using them to assist you, you follow this methodology. So now that we have got that out of the way, we can look at some prompt engineering. This is all about adjusting your prompt to guide the LLM towards the output you need. On top of that we have the topic of inference: inference is the process of running live data through a trained AI model to make a prediction or solve a task. Another way to look at it is as the process of applying learned knowledge to new, unseen data to make decisions or predictions. This topic becomes […]
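To make the inference idea a little more concrete, here is a minimal Python sketch: the prompt is the "live data", and generating a completion from an already-trained model is the inference step. The model name and parameters here are just illustrative placeholders, not something from the post itself.

```python
# A minimal sketch of inference: running new input through a trained model
# to produce a prediction. Model choice and settings are illustrative only.
from transformers import pipeline

# Load a small pre-trained model (assumption: any causal LM would do here).
generator = pipeline("text-generation", model="gpt2")

# The prompt is the live data; generating a completion is the inference step.
prompt = "Explain prompt engineering in one sentence:"
result = generator(prompt, max_new_tokens=40, do_sample=False)

print(result[0]["generated_text"])
```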
So Let’s Chat about #AI and #LLMs
First off, is it just me, or does anyone else think of Ali G when someone mentions AI? I've been wanting to say that for ages; now that I have got that off my chest, let's continue! I have started digging into the topic recently, and this and any following blog posts are just my ramblings on the matter, which could indeed be incorrect (as I am still learning). I've done a couple of NVIDIA training courses on the topic so far and they are free, so please go check them out if you just want a high-level feel and overview of the topic! I mean, let's be honest, the topic is a deep, deep one that […]