In this tutorial, we’ll explore promptrefiner: a tiny Python tool I have created to craft perfect system prompts for your local LLM, with the assistance of the GPT-4 model.
The Python code for this article is available here:
https://github.com/amirarsalan90/promptrefiner.git
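If you want to follow along, you can clone the repository with standard git (the directory name simply matches the repo):

"""
git clone https://github.com/amirarsalan90/promptrefiner.git
cd promptrefiner
"""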
Crafting an effective and detailed system prompt for your application can be a challenging process that often requires multiple trials and errors, especially when working with smaller LLMs, such as a 7b language model. Compared with a larger model like GPT-4, which can generally interpret and follow less detailed prompts, a smaller large language model like Mistral 7b may be more sensitive to your system prompt.
Let’s imagine a scenario where you’re working with a text that discusses a number of individuals and their contributions or roles. Now, you want your local language model, say Mistral 7b, to distill this information into a list of Python strings, each pairing a name with its associated details from the text. Take the following paragraph as a case:
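For illustration, a paragraph along these lines would do (any text mentioning a few people and ideas works just as well; this stand-in is written only to match the expected output shown next):

"""
In the age of the digital revolution, Elon Musk champions the colonization of Mars, while the late Stephen Hawking is remembered for his warnings about AI. Greta Thunberg has become the face of environmentalism, and together these threads capture a modern dilemma: technological advancement brings existential risks, and ambition must be balanced with environmental responsibility.
"""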
For this example, I would like to have an ideal prompt that results in the LLM giving me a string like the following:
"""
["Elon Musk: Colonization of Mars", "Stephen Hawking: Warnings about AI", "Greta Thunberg: Environmentalism", "Digital revolution: Technological advancement and existential risks", "Modern dilemma: Balancing ambition with environmental responsibility"]
"""
When we use an instruction fine-tuned language model (a language model fine-tuned for interactive conversations), the prompt usually consists of two parts: 1) the system prompt, and 2) the user prompt. For this example, consider the following system and user prompt:
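The wording below is an illustrative stand-in for such a pair, where {input_text} is a placeholder for the paragraph above:

"""
System prompt: You are a helpful assistant. Extract the people and concepts mentioned in the text provided by the user, along with their contributions or roles. Return ONLY a Python list of strings, each formatted as "Name: Contribution", with no extra explanation.

User prompt: Here is the text: {input_text}
"""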
You see, the first part of this prompt is my system prompt, which tells the LLM how to generate the answer, and the second part is my user prompt, which is…