Last month, I had the incredible honor of winning Singapore’s first ever GPT-4 Prompt Engineering competition, organised by the Government Technology Agency of Singapore (GovTech), which brought together over 400 prompt-ly brilliant participants.
Prompt engineering is a discipline that blends both art and science — it’s as much about technical understanding as it is about creativity and strategic thinking. This article is a compilation of the prompt engineering techniques and insights that I learned along the way, which push any LLM to do exactly what you need and more!
This article covers the following, with 🟢 referring to beginner-friendly prompting techniques and 🟠 referring to advanced strategies:
1. [🟢] Structuring prompts using the CO-STAR framework
2. [🟢] Sectioning prompts using delimiters
3. [🟠] Using system prompts with LLM guardrails
4. [🟠] Analyzing datasets using only LLMs, without plugins or code
Effective prompt structuring is crucial for eliciting optimal responses from an LLM. The CO-STAR framework, a brainchild of GovTech Singapore’s Data Science & AI team, is a handy template for structuring prompts. It considers all the key aspects that influence the effectiveness and relevance of an LLM’s response, resulting in more optimal responses.
Here’s how it works:
(C) Context: Provide background information on the task.
This helps the LLM understand the specific scenario being discussed, ensuring its response is relevant and aligned with your expectations.
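As an illustration, CO-STAR's six components (Context, Objective, Style, Tone, Audience, Response) can be assembled into a single prompt as clearly labelled sections. The sketch below is my own minimal example — the helper function, its name, and the sample section texts are hypothetical, not from the competition:

```python
# Hypothetical sketch of a CO-STAR-structured prompt.
# The section texts passed in below are illustrative placeholders.

def build_costar_prompt(context, objective, style, tone, audience, response):
    """Combine the six CO-STAR components into one prompt string,
    labelling each section so the LLM can tell them apart."""
    sections = [
        ("CONTEXT", context),
        ("OBJECTIVE", objective),
        ("STYLE", style),
        ("TONE", tone),
        ("AUDIENCE", audience),
        ("RESPONSE", response),
    ]
    return "\n\n".join(f"# {name} #\n{text}" for name, text in sections)

prompt = build_costar_prompt(
    context="I want to advertise my company's new product, an ultra-fast hairdryer.",
    objective="Create a Facebook post that gets people to click the product link.",
    style="Follow the writing style of successful consumer-brand campaigns.",
    tone="Persuasive.",
    audience="Busy working adults.",
    response="A short, concise yet impactful Facebook post.",
)
print(prompt)
```

Labelling each section (here with `# SECTION #` headers) keeps the background information, the task, and the output requirements visually separated, so the model is less likely to blur them together.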