Fascinations from Anthropic’s Prompt Engineering
Everything is easy and simple.
Sometimes these features are the ‘product’ the AI companies are trying to sell you. Some companies document them, some don’t. Sometimes it’s just their prompt telling the LLM to ‘think through it’; sometimes it’s more than that.
This document is for Anthropic’s ‘Claude’ from 2024. The most interesting section is the answer key, but the introduction and chapters are helpful context as well.
A ‘role’ is helpful. It’s good to define, but not always necessary.1
XML tags (like `<example></example>`) help structure the prompt and separate instructions from data.2
Every interaction is separated into ‘User:’ and ‘Assistant:’ turns. System prompts don’t get this treatment.
The Assistant section is used to help Claude start an answer.3
You can almost use this like a formula,4 especially if you already have examples.5
Strange that getting it to think first (‘first, give me the pros and cons’) can give you a better answer.6
The XML and Assistant prefill really came together in the email-classification example: “Do not include any extra words except the category. (A)…” and using the Assistant turn to start the response with a ‘(‘.7
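A minimal sketch of that classification pattern, assuming the Messages-API shape (a user turn plus a prefilled assistant turn). The category labels and tag names here are illustrative, not Anthropic’s exact prompt:

```python
# Sketch of the email-classification prompt: XML tags fence the input,
# the instructions demand only the category, and a prefilled assistant
# turn starting with "(" forces the answer to begin mid-label.
def build_classification_messages(email_body: str) -> list[dict]:
    user_prompt = (
        "Classify this email into one of the following categories.\n"
        "Do not include any extra words except the category.\n"
        "(A) Pre-sale question\n"
        "(B) Broken or defective item\n"
        "(C) Billing question\n"
        "(D) Other\n\n"
        f"<email>{email_body}</email>"
    )
    return [
        {"role": "user", "content": user_prompt},
        # Prefill: the model continues from "(", so it can only emit a label.
        {"role": "assistant", "content": "("},
    ]

messages = build_classification_messages("My Mixmaster4000 makes a weird noise.")
```

Passing these messages to the API means the completion starts after the ‘(‘, so the first tokens are almost certainly a bare category letter.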
The magic of LLMs is how they’re able to consider what they’ve written. ‘Thinking’ through step by step is still relevant, but may be implemented differently depending on your model.
You can ask it to brainstorm or make a scratchpad of quotes to consider to avoid hallucination.8
You can also ask for reference quotes.9
And it’s okay to be comfortable with not knowing something.9
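A sketch of a grounding prompt combining those three ideas: a scratchpad of quotes, quote-backed answers, and permission to say ‘I don’t know’. The tag names and wording are assumptions, not the course’s verbatim prompt:

```python
# Hypothetical prompt template: ask for a quote scratchpad first, require
# the answer to cite those quotes, and allow an "I don't know" escape hatch.
GROUNDED_TEMPLATE = """<document>
{document}
</document>

First, pull the exact quotes most relevant to the question into
<scratchpad></scratchpad> tags. Then answer the question, citing those
quotes. If the document does not contain the answer, say "I don't know."

Question: {question}"""

prompt = GROUNDED_TEMPLATE.format(
    document="The warranty period is 90 days.",
    question="How long is the warranty?",
)
```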
We could all learn from this.
A comment acknowledged how similar this was to managing and communicating well.
Formula for a complex prompt:10
- Task context
- Tone context (optional)
- Information (in any order?):
- Detailed task description and rules
- Examples
- Input data to process
- Immediate task description or request
- Optional:
- Precognition (thinking step by step)
- Output formatting
- Prefilling Claude's response (if any), ‘Assistant:’
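The formula above can be sketched as a small assembly function. Everything is optional except the task itself; the XML tags and parameter names are my own, not a required format:

```python
# Assembles the "complex prompt" formula into one user message, in the
# order listed above: task context, tone, rules, examples, input data,
# the immediate request, then optional precognition and output formatting.
def build_prompt(
    task_context: str,
    request: str,
    tone: str = "",
    rules: str = "",
    examples: str = "",
    input_data: str = "",
    think_step_by_step: bool = False,
    output_format: str = "",
) -> str:
    parts = [task_context]
    if tone:
        parts.append(tone)
    if rules:
        parts.append(rules)
    if examples:
        parts.append(f"<examples>\n{examples}\n</examples>")
    if input_data:
        parts.append(f"<data>\n{input_data}\n</data>")
    parts.append(request)
    if think_step_by_step:
        parts.append("Think step by step before answering.")
    if output_format:
        parts.append(output_format)
    return "\n\n".join(parts)

prompt = build_prompt(
    task_context="You are a career coach.",
    request="Answer the question in the <data> tags.",
    input_data="How do I ask for a raise?",
    think_step_by_step=True,
)
```

The last piece of the formula, prefilling ‘Assistant:’, lives outside this string: it goes in a separate assistant turn, as in the classification example.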
The whole course is built around sending queries to Claude from Python, with prompts templated over multiple variables.
via Hacker News, previous