Prompting Techniques
- Chain-of-Thought (CoT) – Encourage the LLM to output its chain of thought, not just a final answer. Do this by providing worked examples whose answers show the reasoning (one-shot or few-shot prompting).
- Zero-Shot Chain-of-Thought – Like regular CoT, but instead of providing examples, simply append “Let’s think step by step.” to the end of the question.
- Self-Consistency with CoT (CoT-SC) – Call the LLM multiple times to get multiple rationales and final answers, then use the final answer arrived at most often. According to the paper that proposed it, this can perform significantly better than plain CoT, but (i) it is more costly (the LLM is called multiple times) and (ii) it only works in scenarios where the sampled answers are likely to coincide exactly (e.g. numbers or multiple choice).
- Tree of Thoughts (ToT) – Generalises CoT by exploring multiple reasoning paths as a tree: the LLM proposes several candidate next “thoughts” at each step, evaluates how promising each partial solution is, and uses search (with pruning and backtracking) to pursue the best branches.
- Graph of Thoughts (GoT) – Sounds more complicated than the other techniques, but the researchers who proposed it say it performs better than CoT-SC or ToT whilst being cheaper than ToT. They also mention it is “particularly well-suited to tasks that can be naturally decomposed into smaller subtasks that are solved individually and then merged for a final solution.”
- ‘Panel of Experts’ – Ask the LLM to role-play a panel of experts discussing the question before converging on an answer. (So far only described in a blog post, but it might have merit.)
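The few-shot vs. zero-shot CoT distinction above comes down to how the prompt is built. A minimal sketch (the example question and rationale are invented, and the actual model call is left out since any chat/completion API works):

```python
# Few-shot CoT: worked examples whose answers spell out the reasoning.
# Zero-shot CoT: no examples, just the trigger phrase appended.

FEW_SHOT_EXAMPLE = (
    "Q: A farmer has 3 pens with 4 sheep each. How many sheep in total?\n"
    "A: Each pen holds 4 sheep and there are 3 pens, so 3 * 4 = 12. "
    "The answer is 12.\n"
)

def few_shot_cot_prompt(question: str) -> str:
    """Few-shot CoT: prepend worked examples that demonstrate reasoning."""
    return f"{FEW_SHOT_EXAMPLE}\nQ: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: just append the step-by-step trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

print(zero_shot_cot_prompt("If I buy 2 apples and 3 pears, how many fruits do I have?"))
```

Either prompt is then sent to the model as-is; only the few-shot variant pays the token cost of the examples.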
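The CoT-SC majority vote can be sketched as follows. `sample_chain_of_thought` stands in for a sampled (temperature > 0) LLM call plus answer extraction; here it deterministically replays a fixed set of final answers so the example runs without a model:

```python
from collections import Counter

_FAKE_SAMPLES = ["12", "12", "13", "12", "12"]  # one sampled rationale went astray

def sample_chain_of_thought(question: str, i: int) -> str:
    # Real code: call the LLM with sampling enabled and parse the final
    # answer out of the generated rationale.
    return _FAKE_SAMPLES[i % len(_FAKE_SAMPLES)]

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    answers = [sample_chain_of_thought(question, i) for i in range(n_samples)]
    # Majority vote over final answers; this only works when answers can
    # coincide exactly (e.g. numbers or multiple choice) -- caveat (ii) above.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("3 pens of 4 sheep: how many sheep?"))  # prints 12
```

The cost caveat (i) is visible here: one CoT-SC answer costs `n_samples` LLM calls.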
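ToT's search over partial thoughts can be sketched as a breadth-first search with beam pruning. In a real system `propose` and `score` would both be LLM calls (generate candidate next thoughts; rate a partial solution); here they are toy stand-ins so the example runs:

```python
def propose(state: str) -> list[str]:
    # Stand-in: a real implementation asks the LLM for candidate next steps.
    return [state + c for c in "ab"]

def score(state: str) -> int:
    # Stand-in: a real implementation asks the LLM to rate the partial solution.
    return state.count("a")

def tree_of_thoughts(root: str, depth: int = 3, beam: int = 2) -> str:
    frontier = [root]
    for _ in range(depth):
        # Expand every state in the frontier into candidate next thoughts...
        candidates = [nxt for s in frontier for nxt in propose(s)]
        # ...then keep only the `beam` most promising ones (pruning).
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

print(tree_of_thoughts(""))  # prints "aaa"
```

With the toy scorer favouring `"a"`, the search converges on `"aaa"`; swapping in LLM-backed `propose`/`score` gives the basic ToT loop.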