/adrift/ by Stantonius

Floating without direction across the digital ocean in an unfinished dinghy.

Prompting guidelines


General

  • Be explicit & literal: Spell everything out exactly as you want it
  • Induce planning/CoT: +4% gain. Explicitly tell the model to plan/think step-by-step if needed.
  • Put instructions before and after long context: If you can only choose one, choose before
    • Kind of makes sense: with causal attention, context tokens can only attend to what came before them, so instructions placed first tell the model what to look for as it reads; instructions given only after the context can't shape how that context was processed
  • Use XML or Markdown, not JSON
  • Structure prompts: Break the prompt into sections (Role, Instructions, Examples, etc.) using XML tags or Markdown headings (see above; a sketch follows this list)
  • Check for contradictory rules: When instructions conflict, later rules tend to win, but contradictions hurt performance either way
  • Threats and ALL CAPS are unnecessary if the rules in this post are followed
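
As a rough illustration of the tips above, here is a minimal sketch of a sectioned prompt in Python. The tag names, task wording, and the repetition of the instructions on both sides of the long context (the sandwich) are my own illustrative choices, not a canonical template:

```python
# A sectioned prompt using XML-style tags. Instructions appear before
# the long context and are repeated after it; a step-by-step planning
# nudge is included. All names and wording here are illustrative.

long_context = "...thousands of tokens of retrieved documents..."

prompt = f"""<role>
You are a careful analyst. Answer only from the provided documents.
</role>

<instructions>
Summarise the key risks mentioned in the documents below.
Think step by step before writing your final summary.
</instructions>

<documents>
{long_context}
</documents>

<instructions_reminder>
Remember: summarise the key risks, thinking step by step first.
</instructions_reminder>"""

print(prompt)
```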

Agents

  • Agent Reminders (persist, use tools, plan): ~20-24% gain. Tell agents to keep going, use their tools (don't guess), and optionally plan out loud.
  • Use the API tools field, not prompts: +2% gain. Define tools properly in the API rather than just describing them in the prompt text (see the sketch after this list).
  • Specify context reliance: Explicitly state whether the model should rely only on the provided context or may mix in internal knowledge and tool calls
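
The two agent tips above combine naturally. Below is a minimal sketch using the OpenAI Python SDK: the persistence/tool-use/planning reminders go in the system message, and the tool is declared in the API's tools field as a JSON schema rather than described in prose. The model name, tool schema, and reminder wording are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Agent reminders (persist, use tools, plan) live in the system message.
system = (
    "You are an agent. Keep going until the task is fully solved. "
    "If you are unsure about file contents or data, use your tools "
    "instead of guessing. Plan briefly before each tool call."
)

# The tool is defined in the API's `tools` field as a JSON schema,
# not merely described in the prompt text. This schema is illustrative.
tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the workspace and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Path to the file."},
            },
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4.1",  # placeholder; use whatever model you target
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "Summarise README.md"},
    ],
    tools=tools,
)
print(response.choices[0].message)
```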

Caveats

  • Performance may degrade if many items need retrieval or complex reasoning spans the entire context.
  • Models may resist producing long, repetitive outputs
  • Parallel tool calls can sometimes fail

Sources