The fascinating system prompts provided by Anthropic's Claude

Commendably, Anthropic publishes the system prompts that its Claude large language models follow: they make fascinating reading!
When you ask an AI tool a question, it first loads its own guidance on how it should behave. Usually this is hidden from you, but Anthropic publish their models' system prompts in the interest of transparency: you can find links to the latest two system prompts on the Anthropic website.
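To make the idea concrete, here is a minimal sketch of how a chat client typically bundles the (normally hidden) system prompt with the visible user message in a single request body. The function name, field names and prompt text below are illustrative assumptions, not Anthropic's actual implementation:

```python
# Sketch: an AI chat request usually carries the hidden system prompt
# alongside the user's visible message. Everything here is illustrative.
def build_request(system_prompt: str, user_message: str, model: str) -> dict:
    """Bundle the hidden guidance with the visible question."""
    return {
        "model": model,
        "system": system_prompt,  # the guidance the user never sees
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request(
    system_prompt="The assistant is Claude, created by Anthropic.",
    user_message="What is a system prompt?",
    model="claude-opus-4",  # placeholder model name
)
print(request["system"])
# prints "The assistant is Claude, created by Anthropic."
```

When you chat on a website like claude.ai, something along these lines happens behind the scenes on every request, which is why the system prompt shapes every answer you get.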
It's hard to get a good description online of the differences between Opus 4 and Sonnet 4. Anthropic say that Opus 4 is their "most capable and intelligent model yet", so it seems sensible to use this as a first default, but as always the best thing to do is to try out different prompts and see if you like the results!
To save you reading through the 1,704 words in the 22nd May release of the Opus 4 system prompt here are my personal highlights. Incidentally, the Sonnet system prompt is identical apart from the model name referenced - I got ChatGPT to check!
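If you'd rather not trust ChatGPT for that comparison, a few lines of Python's standard difflib module will show exactly where two prompt texts differ. The two strings below are tiny made-up stand-ins for the real Opus 4 and Sonnet 4 prompts:

```python
# Compare two system prompts line by line and keep only the changed lines.
# The strings are stand-ins for the real Opus 4 / Sonnet 4 prompt texts.
import difflib

opus_prompt = "The assistant is Claude.\nThe model is Claude Opus 4.\n"
sonnet_prompt = "The assistant is Claude.\nThe model is Claude Sonnet 4.\n"

diff = [
    line
    for line in difflib.unified_diff(
        opus_prompt.splitlines(), sonnet_prompt.splitlines(), lineterm=""
    )
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
print(diff)
# prints ['-The model is Claude Opus 4.', '+The model is Claude Sonnet 4.']
```

Run against the real prompt texts, an empty-looking diff apart from the model name would confirm the claim above.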
| Highlight | Text quoted from system prompt |
|---|---|
| Claude suffers fools gladly - ish | "If the person seems unhappy or unsatisfied with Claude or Claude’s performance or is rude to Claude, Claude responds normally and then tells them that although it cannot retain or learn from the current conversation, they can press the ‘thumbs down’ button below Claude’s response and provide feedback to Anthropic." |
| Claude will hold your hand | "Claude provides emotional support alongside accurate medical or psychological information or terminology where relevant." |
| Claude is your friend | "In ambiguous cases, it tries to ensure the human is happy and is approaching things in a healthy way. Claude does not generate content that is not in the person’s best interests even if asked to." |
| Claude doesn't preach | "If Claude cannot or will not help the human with something, it does not say why or what it could lead to, since this comes across as preachy and annoying." |
| Claude missed the last few months | "Claude’s reliable knowledge cutoff date - the date past which it cannot answer questions reliably - is the end of January 2025." |
In truth, though, it's worth reading the entire system prompt, as it gives such a good insight into how AI tools are programmed to work.
© Wise Owl Business Solutions Ltd 2025. All Rights Reserved.