Everything about Claude — Anthropic’s AI assistant known for thoughtful, careful, high-quality responses. How it was built, what makes it different, how to use it well, prompts for every situation, and full technical depth. Three reading levels. Official sources only.
Claude is an AI assistant made by a company called Anthropic. Like ChatGPT, you type something and it replies — but users consistently notice that Claude’s answers feel a little different. More careful. More nuanced. More honest about uncertainty. Better at handling long, complex documents.
You can use Claude for free at claude.ai or on the Claude app on your phone. No download needed for the web version.
If ChatGPT is the confident, fast-talking assistant who always has an answer, Claude is the thoughtful colleague who reads everything carefully, admits when something is uncertain, and writes with noticeably more care.
Claude was made by Anthropic, a company founded in 2021. Here is the story of how it came to exist — and it involves a dramatic split from OpenAI.
In 2020 and 2021, a group of senior researchers at OpenAI became increasingly concerned about the direction of AI development. They felt that as AI became more powerful, safety needed to be the central priority — not an afterthought. The company was growing fast, taking large investments, and there were disagreements about how much to prioritise safety research versus capability development.
In 2021, Dario Amodei — who had been Vice President of Research at OpenAI — left the company. His sister Daniela Amodei, who had been VP of Operations, left with him. They took with them about a dozen other senior researchers and engineers. Together, they founded Anthropic.
Their founding principle was simple but radical: build AI that is safe, and prove that safety and capability are not in conflict — that you can build a very capable AI that is also genuinely trustworthy.
The reason Claude behaves differently from other AIs is directly connected to why Anthropic was founded. The people who built Claude genuinely believe that how an AI behaves — whether it is honest, whether it admits uncertainty, whether it avoids harm — is as important as how capable it is. That philosophy is baked into Claude at a fundamental level.
Dario and Daniela Amodei co-found Anthropic in April 2021 with initial funding of $124 million from a group of investors. The company sets up in San Francisco. Their first major research effort is developing Constitutional AI — the technique that will eventually define Claude’s character.
Anthropic releases the first public version of Claude in March 2023 — the same month OpenAI releases GPT-4. Claude 1 is available through an API and a limited early access programme. It immediately distinguishes itself through longer context windows (the ability to handle more text at once) and a noticeably careful, nuanced writing style. Reviewers note it is more likely to say “I’m not certain about this” than other AI assistants — which some users find refreshing and others find frustrating.
Claude 2 arrives in July 2023 with a dramatically expanded context window — up to 100,000 tokens, compared to around 8,000 for most competitors at the time. This means Claude 2 could read and reason about a full novel, a lengthy legal contract, or hundreds of pages of research in a single conversation. For users dealing with long documents, this was transformative.
Claude 2 also improves coding ability significantly and is made available to UK users for the first time.
Claude 3 launches in March 2024 as a family of three models — Haiku (fast and lightweight), Sonnet (balanced capability and speed), and Opus (the most capable). This tiered approach lets users choose between speed and power depending on their task.
Claude 3 Opus, at launch, outperforms GPT-4 on several benchmarks including graduate-level reasoning (GPQA), undergraduate knowledge (MMLU), and coding tasks (HumanEval). For many users, this is the moment Claude becomes a serious competitor to ChatGPT rather than an alternative.
Claude 3 also introduces vision capabilities — the ability to understand images as well as text.
Claude 3.5 Sonnet arrives in June 2024 and immediately becomes the most-used Claude model. It is faster than Opus, considerably cheaper, and actually outperforms Opus on many tasks — particularly coding. In Anthropic's widely shared internal agentic coding evaluation, Claude 3.5 Sonnet solves 64% of problems, compared with 38% for Claude 3 Opus. Developers take notice.
This release also introduces Artifacts — a feature in claude.ai that lets Claude create standalone content (code, documents, visualisations) in a side panel that users can interact with separately from the conversation.
Anthropic continues its rapid release cadence. Claude 3.5 Haiku brings performance comparable to the original Claude 3 Opus to a fast, low-cost model, and Anthropic releases Computer Use in public beta: the ability for Claude to interact with a computer interface, clicking, typing, and navigating as a human would. Claude 3.7 Sonnet then introduces extended thinking, the ability to reason through complex problems step by step before producing a response, analogous to OpenAI's o1.
The Claude 4 series, first released in 2025, represents Anthropic's current frontier. Specific architectural details remain confidential per Anthropic's standard practice, but the models demonstrate strong performance on long-context tasks, complex reasoning, and agentic applications: tasks where Claude operates with some autonomy to complete multi-step goals.
Both are excellent AI assistants, and the differences are real but subtle.
“I had a 40-page research paper to read and summarise for my coursework. I pasted the whole thing into Claude and asked it to: summarise in 5 paragraphs, identify the 3 main arguments, list any weaknesses in the research, and suggest 5 questions for further study. It did all four in one go. What would have taken me three hours took 10 minutes. And the summary was actually good — not just bullet points, but a real understanding of what the paper was saying.”
Claude’s long context window makes it uniquely suited to working with large documents. Paste in a contract, a research paper, a book chapter, a set of financial statements, or a lengthy report — and ask Claude to summarise, explain, critique, or extract specific information.
Claude’s writing tends to be more polished and less generic than other AI assistants. It avoids clichés, structures arguments clearly, and adapts tone convincingly. Particularly good for: academic writing, professional reports, persuasive documents, and anything where quality matters more than speed.
Claude is notably willing to say “I’m not certain,” “there are arguments on both sides,” or “this depends on factors I don’t know.” For analysis and decision-making, this is genuinely valuable — you want honest uncertainty flagged, not confident-sounding guesswork.
Claude is highly regarded by developers for its coding ability — particularly debugging and explaining code. It can write code in most major languages, spot errors, explain what existing code does, and suggest improvements.
Claude is excellent at explaining complex topics at whatever depth you need — and at being honest about the limits of its knowledge. For learning something new, it can act as a patient tutor who gives you increasingly detailed explanations as you ask follow-up questions.
Source: anthropic.com/claude — April 2026
Claude responds especially well to clear, detailed instructions. It is designed to follow complex, multi-part prompts precisely — and unlike some AI assistants, it will tell you if your request is unclear rather than guessing and producing something wrong.
Claude was trained to be helpful, harmless, and honest — and it takes all three seriously. It will push back on requests that seem harmful. It will admit uncertainty rather than confabulate. It will ask clarifying questions if your request is ambiguous. Working with Claude means working with an AI that treats conversation as a genuine collaboration.
Claude’s context window (the amount of text it can hold in one conversation) is among the largest available. Use it to paste in entire documents rather than summarised extracts. The more context you give, the better the output.
Claude is unusually good at following instructions with multiple components. Give it a full brief rather than asking one question at a time. This produces better, more cohesive output.
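As an illustration of the advice above, a full brief might look like the hypothetical example below. The wording is invented for illustration, not an official Anthropic prompt, and it is wrapped in a Python string only so it could be sent as a single user message via the API.

```python
# Hypothetical multi-part brief: one message carrying the whole task,
# rather than a sequence of single questions.
brief = """Attached is a 40-page research paper.

1. Summarise it in five paragraphs.
2. Identify the three main arguments.
3. List any weaknesses in the methodology.
4. Suggest five questions for further study.

If anything is ambiguous, say so rather than guessing."""

# Sent as a single user message, e.g.
# client.messages.create(..., messages=[{"role": "user", "content": brief}])
```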
Claude Pro includes Projects — a feature that gives Claude persistent memory within a defined project context. You can set background instructions, upload reference documents, and have Claude remember key facts about your work across multiple conversations.
Claude is available through Anthropic’s API for developers building applications. The API supports all Claude models and can be accessed through the Anthropic SDK for Python and TypeScript, or directly via REST.
import anthropic

# Create the client; the key can also come from the ANTHROPIC_API_KEY
# environment variable instead of being passed explicitly.
client = anthropic.Anthropic(api_key="your-api-key")

message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain transformer architecture simply."}
    ],
)

# The response is a list of content blocks; the first holds the text.
print(message.content[0].text)
Source: docs.anthropic.com — Anthropic’s official API documentation
Claude’s distinctive behaviour — its tendency toward nuance, its honesty about uncertainty, its willingness to push back — is not accidental. It emerges from a specific training methodology called Constitutional AI (CAI), developed by Anthropic and described in a 2022 research paper.
Standard RLHF (Reinforcement Learning from Human Feedback, used to align ChatGPT) requires human raters to evaluate model outputs for harmfulness. This has two problems: it is expensive to scale, and it exposes human raters to harmful content at volume. Anthropic’s CAI addresses both by using the AI itself to perform much of the feedback process.
Phase 1: Supervised learning with AI feedback (SL-CAI)
A “constitution” — a set of principles — is defined. Anthropic’s constitution draws from sources including the UN Declaration of Human Rights, Apple’s terms of service, and Anthropic’s own research on AI safety. The model is then given a harmful prompt and asked to produce a response. It is then asked to critique that response against the constitution, and revise it. This self-critique-and-revision loop is run multiple times. The revised responses are used as supervised fine-tuning data.
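The Phase 1 loop can be sketched in a few lines of Python. Everything here is a stand-in: generate, critique, and revise would be calls to the model itself, and the two principles are paraphrases for illustration, not Anthropic's actual constitution.

```python
# Illustrative sketch of the SL-CAI critique-and-revision loop.
# Each function is a stand-in for a call to the language model.

CONSTITUTION = [
    "Choose the response that is least likely to cause harm.",
    "Choose the response that is most honest about uncertainty.",
]

def generate(prompt):
    # Stand-in: sample an initial (possibly problematic) response.
    return f"Initial response to: {prompt}"

def critique(response, principle):
    # Stand-in: ask the model to critique its own response
    # against one constitutional principle.
    return f"Critique of [{response}] under: {principle}"

def revise(response, critique_text):
    # Stand-in: ask the model to rewrite the response so it
    # addresses the critique.
    return f"Revision addressing [{critique_text}]"

def sl_cai_example(prompt):
    """Run critique-and-revision once per principle; the final
    revision becomes supervised fine-tuning data."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        response = revise(response, critique(response, principle))
    return response
```

In the real pipeline this loop runs at scale over many prompts, and the revised responses, not the initial ones, are used to fine-tune the model.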
Phase 2: Reinforcement learning from AI feedback (RLAIF)
Rather than using human raters to rank model outputs (as in standard RLHF), a separate “feedback model” is trained to evaluate outputs against the constitution. This feedback model is used to generate preference data — which outputs are better according to the constitutional principles. The main model is then fine-tuned using RL against this AI-generated preference data. Crucially, the constitution’s principles are made explicit to the feedback model, making the value judgements more transparent and auditable than human ratings.
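Phase 2 can be sketched in the same spirit. The scoring rule below is a toy stand-in: in practice the feedback model is itself a language model prompted with a constitutional principle, not a keyword check.

```python
# Illustrative sketch of RLAIF preference labelling: a stand-in
# "feedback model" compares two candidate responses against a
# constitutional principle and emits a preference label.

PRINCIPLE = "Choose the response that is more honest about uncertainty."

def feedback_model(principle, response_a, response_b):
    # Toy scoring rule standing in for a model judgement:
    # prefer the response that hedges rather than asserts.
    def score(r):
        return ("not certain" in r) + ("may" in r)
    return "A" if score(response_a) >= score(response_b) else "B"

candidates = (
    "The answer is definitely 42.",
    "I'm not certain, but the answer may be 42.",
)
preferred = feedback_model(PRINCIPLE, *candidates)
# Preference pairs built this way (prompt, chosen, rejected) supply the
# reward signal for RL fine-tuning of the main model.
```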
Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., Joseph, N., Kadavath, S., Kernion, J., Conerly, T., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Hume, T., Johnston, S., Kravec, S., Lovitt, L., Nanda, N., Olsson, C., Amodei, D., Brown, T., Clark, J., McCandlish, S., Olah, C., Mann, B., & Kaplan, J. (2022). “Constitutional AI: Harmlessness from AI Feedback.” Anthropic. arxiv.org/abs/2212.08073
A distinctive aspect of Anthropic’s research agenda is mechanistic interpretability — understanding what is actually happening inside neural networks, not just what they produce. Their team has published significant work on:
Elhage, N., et al. (2021). “A Mathematical Framework for Transformer Circuits.” transformer-circuits.pub
Bricken, T., et al. (2023). “Towards Monosemanticity: Decomposing Language Models With Dictionary Learning.” transformer-circuits.pub
Anthropic does not publish the full architectural specifications of Claude models. The Claude model cards — available at anthropic.com — describe intended use, evaluation results, and safety properties without revealing training data composition, parameter counts, or architectural choices in detail. This is consistent with Anthropic’s responsible scaling policy, which ties capability disclosure to evaluated risk levels.
Anthropic’s Responsible Scaling Policy, first published in September 2023, defines a framework of AI Safety Levels (ASL-1 through ASL-4+) with corresponding commitments about what safety research must be completed before deploying models at each capability level. The policy is notable for making specific, measurable commitments rather than stating principles alone, among them a commitment to pause or adjust development if safety evaluations show that certain capability thresholds have been reached.
Anthropic (2023). “Anthropic’s Responsible Scaling Policy.” anthropic.com/responsible-scaling-policy
Claude 3.7 Sonnet introduced extended thinking — a mode in which Claude engages in an explicit internal reasoning process before producing its final response. Unlike the hidden chain-of-thought in standard inference, extended thinking makes the reasoning visible to the user in a collapsible “thinking” section. This is analogous to OpenAI’s o1 approach, and similarly trades inference speed for improved accuracy on complex tasks. Benchmarks in Anthropic’s model card show significant improvements on multi-step reasoning, mathematical problem solving, and complex coding.
Anthropic (2025). “Claude 3.7 Sonnet Model Card.” anthropic.com/claude/model-cards
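With extended thinking enabled, an API response carries the reasoning and the final answer as separate content blocks. The snippet below parses hypothetical dict-shaped data mirroring that structure; the real Python SDK returns typed objects (with .type, .thinking, and .text attributes) rather than plain dicts, so treat this as an illustration of the response shape, not SDK code.

```python
# Hypothetical response content, shaped like the blocks returned
# when extended thinking is enabled.
response_content = [
    {"type": "thinking",
     "thinking": "Suppose sqrt(2) = p/q in lowest terms; then p^2 = 2q^2 ..."},
    {"type": "text",
     "text": "Therefore sqrt(2) is irrational."},
]

def split_thinking(content):
    """Separate the visible reasoning from the final answer."""
    reasoning = " ".join(b["thinking"] for b in content if b["type"] == "thinking")
    answer = " ".join(b["text"] for b in content if b["type"] == "text")
    return reasoning, answer

reasoning, answer = split_thinking(response_content)
```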
Anthropic’s Computer Use capability, released in public beta in October 2024, enables Claude to interact with computer interfaces — moving a cursor, clicking, typing, and navigating applications. Technically, the model receives screenshots of the current state of the screen and produces actions (described as tool calls) that manipulate the interface. This represents a significant step toward agentic AI — systems that can take sequences of actions in the real world to complete goals, rather than just generating text.
Anthropic (2024). “Introducing computer use, a new Claude.ai feature.” Anthropic News. anthropic.com/news/computer-use
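The screenshot-and-act loop behind computer use can be sketched as below. Every helper is a stand-in: in the real capability, screenshots go to the model via Anthropic's computer-use tool definitions and actions come back as tool calls, none of which is reproduced here.

```python
# Toy sketch of the agentic loop: observe the screen, ask the model
# for the next action, apply it, repeat until the model says "done".

def take_screenshot(screen):
    # Stand-in for capturing the current screen state.
    return f"screenshot of {screen}"

def ask_model(goal, screenshot):
    # Stand-in for the model choosing an action as a tool call.
    if "login page" in screenshot:
        return {"action": "type", "text": "user@example.com"}
    return {"action": "done"}

def run_agent(goal, screen="login page", max_steps=10):
    actions = []
    for _ in range(max_steps):  # cap steps so the loop always terminates
        action = ask_model(goal, take_screenshot(screen))
        actions.append(action)
        if action["action"] == "done":
            break
        screen = "dashboard"  # pretend the action changed the screen
    return actions
```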
The Anthropic API uses the Messages endpoint as its primary interface. Key technical parameters:
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=2048,
    system="You are an expert data analyst. Be precise and cite uncertainty.",
    messages=[
        {"role": "user", "content": "Analyse the following data: [data here]"}
    ],
    temperature=0.3,  # lower = more deterministic
)

# Access the response text and token usage
print(response.content[0].text)
print(f"Input tokens: {response.usage.input_tokens}")
print(f"Output tokens: {response.usage.output_tokens}")
Model identifiers (April 2026): claude-opus-4-5, claude-sonnet-4-6, claude-haiku-4-5-20251001. Full model reference: docs.anthropic.com/en/docs/about-claude/models