Anthropic introduced Claude 3.7 Sonnet as a hybrid reasoning model: one model that can answer quickly or spend more time thinking before it responds. That is the right product instinct. Nobody outside model discourse wakes up excited to choose between six nearly identical brains before answering an email.

Claude 3.7 Sonnet is available across Claude plans, the Claude Developer Platform, Amazon Bedrock, and Google Cloud Vertex AI. Extended thinking is available everywhere except the free Claude tier, and API users can control the thinking budget. That last part is the practical bit.

Source: Anthropic's announcement of Claude 3.7 Sonnet.

One model, two behaviors

Anthropic says Claude 3.7 Sonnet can produce near-instant responses or extended, visible thinking, with API users able to set how long the model can think up to its output limit. Same model, same basic workflow, different amount of deliberation.

This solves a real annoyance. Some questions need speed. Some need the model to stop tap-dancing and reason. A single model that can do both is easier to explain, easier to route, and easier to build around than a little model zoo that asks users to become part-time benchmark analysts.

  • Claude 3.7 Sonnet launched at the same $3 per million input tokens and $15 per million output tokens as its predecessors
  • Thinking tokens are included in that output pricing
  • Anthropic emphasized coding, instruction-following, agentic tasks, and real-world business work
  • Extended thinking gives teams a speed-versus-quality dial rather than a separate product lane
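Because thinking tokens bill at the output rate, the cost of turning the dial is easy to estimate. A back-of-envelope sketch at the launch prices above (the token counts in the example are made up):

```python
# Estimated cost of one call at launch pricing: $3 per million input tokens,
# $15 per million output tokens, with thinking tokens billed as output.

def call_cost(input_tokens: int, output_tokens: int, thinking_tokens: int = 0) -> float:
    """Estimated USD cost; thinking tokens are charged at the output rate."""
    return input_tokens / 1e6 * 3.00 + (output_tokens + thinking_tokens) / 1e6 * 15.00

# A 2,000-token prompt with a 1,000-token answer costs about $0.021.
# A fully used 4,000-token thinking budget adds $0.06 on top of that.
```

That extra six cents is trivial per call and very much not trivial at scale, which is why the next question matters.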

The honest product question is when extended thinking actually pays for itself. Use it for messy planning, hard code changes, math, analysis, and decisions where a wrong answer creates cleanup. Do not use it for every shrug-worthy prompt because the model can think longer. That is how teams turn capability into latency with a receipt.

The good version of this feature is a workflow that escalates only when the task gets serious. The bad version is every internal tool defaulting to deep thought because someone liked the demo. Please do not make the chatbot meditate before formatting a table.
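The escalate-only-when-serious policy can be sketched as a trivial router. The task categories and budget numbers here are invented for illustration; the only real claim is the shape, which is that the default is zero deliberation.

```python
# Illustrative escalation policy: default to fast responses, and grant a
# thinking budget only for task classes where a wrong answer creates cleanup.
# The categories and budgets below are invented for this sketch.

THINKING_BUDGETS = {
    "formatting": 0,      # reformat a table, fix markup: no meditation
    "lookup": 0,          # simple factual or routing questions
    "code_change": 2048,  # nontrivial edits where bugs are expensive
    "planning": 4096,     # messy multi-step planning and analysis
}

def thinking_budget_for(task_class: str) -> int:
    """Return the thinking-token budget for a task class; unknown classes stay fast."""
    return THINKING_BUDGETS.get(task_class, 0)
```

Defaulting unknown classes to zero is the whole trick: deep thought has to be opted into per task, not switched on globally because someone liked the demo.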

Claude 3.7 Sonnet is interesting because it treats reasoning like a product behavior, not a religious identity. That is healthier for users and probably better for developers too.

The model still has to earn trust on actual work. But the shape is right: fast when possible, careful when necessary, and controllable enough that teams can stop guessing what kind of intelligence they are paying for.

In short

Anthropic made Claude 3.7 Sonnet a hybrid reasoning model instead of a separate thinking product. Good. Users do not want a model menu with homework. They want control when the task deserves it.