I thought Claude was bad at following instructions, until I learned to prompt it right

Anita Kirkovska
2 replies
Have you tried instructing Claude the same way you would GPT-4? Given how widely used and familiar OpenAI's models are, it's a common reflex, but it doesn't quite hit the mark with Claude. Claude is trained with different methods, so your instructions should cater to those differences.

So I dug into Anthropic's official docs and used their guidelines to improve LLM outputs for our customers. It turns out Claude can do even better than GPT-4 if you learn to prompt it right. The official documentation can be a bit confusing, so here are some tips that can improve your outputs (a quick sketch of a few of them follows the list):

1. Use XML tags to separate instructions from context
2. Be direct, concise, and as specific as possible
3. Use the Assistant message to provide the beginning of the output
4. Assign a role
5. Provide examples

I wrote down examples of how these work in my latest blog post here: https://www.vellum.ai/blog/promp..., but feel free to AMA and I'll do my best to answer everything.
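
To give a quick taste of tips 1, 3, and 4 together, here's a minimal sketch assuming the Anthropic Python SDK's messages interface; the model name, tag names, and prompt text are just illustrative, not anything prescribed in the docs:

```python
# Rough sketch: XML tags for context, a role in the system prompt,
# and an Assistant-message prefill to shape the output.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

article = "Claude is a family of large language models developed by Anthropic."

response = client.messages.create(
    model="claude-3-opus-20240229",      # placeholder model name
    max_tokens=300,
    system="You are a precise technical summarizer.",  # tip 4: assign a role
    messages=[
        {
            "role": "user",
            # Tip 1: XML tags keep the instruction separate from the context it acts on.
            "content": (
                "Summarize the text inside the <article> tags in one sentence.\n"
                f"<article>{article}</article>"
            ),
        },
        # Tip 3: start the Assistant turn yourself so Claude continues from it,
        # which nudges the output into the shape you want.
        {"role": "assistant", "content": "<summary>"},
    ],
)

print(response.content[0].text)  # continues after the prefilled "<summary>"
```

The prefill trick is especially handy when you need structured output, since Claude picks up exactly where your partial Assistant message leaves off.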

Replies

Swayam
Thanks for the tips! Appreciate it!