Key Takeaways:
- Anthropic launched Claude Opus 4.7 on April 16, 2026, featuring an 87.6% score on the SWE-bench Verified test.
- The AI industry's shift toward agentic autonomy sees Opus 4.7 outperform GPT-5.4 in complex coding and finance.
- Developers must manage costs, as the new model uses 1.0 to 1.35 times as many tokens as the previous 4.6 version.
AI Evolution: Claude Opus 4.7 Launched With Enhanced Vision and Memory
The San Francisco-based AI startup positioned the release as its most capable generally available model to date. It serves as a targeted upgrade over the Opus 4.6 version that arrived just two months ago in February.
While Claude Mythos Preview remains in limited testing for cybersecurity, Opus 4.7 is built for the broader market. It focuses specifically on software engineering, long-horizon tasks, and complex financial analysis.
Performance metrics released by Anthropic show the model gaining significant ground in autonomous workflows. On the SWE-bench Verified coding benchmark, the new model hit 87.6 percent, up from the 80.8 percent seen in the 4.6 release.
The model also managed to edge out its primary competition in several key categories. Anthropic reported that Opus 4.7 outperformed OpenAI's GPT-5.4 and Google's Gemini 3.1 Pro in tool use and computer interaction tests.
One of the most visible changes involves a major upgrade to the model's vision capabilities. Claude Opus 4.7 can now process images up to 2,576 pixels on the long edge, triple the previous resolution limit.
This visual boost allows the AI to better interpret complex charts, user interfaces, and technical diagrams. However, the company noted that higher-resolution images consume more tokens, potentially raising costs for high-volume users.
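For teams watching those costs, one practical mitigation is to downscale images so the long edge stays at or below the 2,576-pixel cap before sending them. The helper below is a hypothetical sketch of that resizing arithmetic, not an official API; it assumes token consumption grows with image size.

```python
def fit_long_edge(width: int, height: int, cap: int = 2576) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most `cap`,
    preserving aspect ratio; images already within the cap are untouched."""
    long_edge = max(width, height)
    if long_edge <= cap:
        return width, height
    scale = cap / long_edge
    return round(width * scale), round(height * scale)

# A 4000x3000 screenshot is scaled to exactly hit the cap.
print(fit_long_edge(4000, 3000))  # → (2576, 1932)
```

An image already inside the limit, such as 1024x768, passes through unchanged.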
Anthropic also introduced a new feature called /ultrareview within its Claude Code environment. This tool allows Pro- and Max-tier users to run multi-agent sessions to identify bugs and design flaws in software.
For financial professionals, the model shows a higher degree of rigor in economic modeling. It achieved a 0.813 score on the General Finance module, a meaningful step up from the previous version's 0.767.
The pricing structure for the model remains unchanged at $5 per million input tokens and $25 per million output tokens. To help manage expenses during long autonomous runs, Anthropic added a task budget feature in public beta.
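At those published rates, estimating the bill for a run is simple arithmetic. The sketch below is illustrative only; the function name and example token counts are hypothetical, and real invoices may include other line items.

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 5.0, output_rate: float = 25.0) -> float:
    """Estimate API cost from token counts, given per-million-token rates
    ($5/M input, $25/M output, per the announced pricing)."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# A long autonomous run: 2M input tokens and 400k output tokens.
print(estimate_cost_usd(2_000_000, 400_000))  # → 20.0
```

Note that output tokens cost five times as much per token, so verbose multi-agent sessions are dominated by the output side of the ledger.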
Instructions to a T
Early feedback from the developer community suggests the model is more literal in following instructions. This change may require users to re-tune existing prompts that were optimized for older versions of the Claude family.
“Claude 4.7 is out, and using it feels like getting into an F1 car. Much more power, and it does exactly what you tell it at full speed. Your job is to pick the direction and make the turns,” one user wrote on X.
Some testers have observed that the updated tokenizer can use up to 1.35 times more tokens for the same input. While this can lead to faster limit depletion, the company argues that the performance per task justifies the usage.
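To see what a 1.35x tokenizer multiplier does to a fixed allowance, a back-of-the-envelope calculation helps; the allowance and per-task figures below are hypothetical, chosen only to make the effect concrete.

```python
def tasks_within_budget(token_budget: int, tokens_per_task: int,
                        multiplier: float = 1.0) -> int:
    """How many tasks fit in a fixed token allowance if each task's
    input now tokenizes to `multiplier` times as many tokens."""
    return token_budget // int(tokens_per_task * multiplier)

budget = 10_000_000   # hypothetical usage allowance, in tokens
per_task = 50_000     # hypothetical tokens per task under the old tokenizer

print(tasks_within_budget(budget, per_task, 1.0))   # → 200
print(tasks_within_budget(budget, per_task, 1.35))  # → 148
```

In this toy scenario the same allowance covers roughly a quarter fewer tasks, which is the "faster limit depletion" testers describe.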
Safety remains a core focus, as the model includes new automated safeguards to block high-risk cybersecurity uses. Anthropic's system card highlights improved honesty and stronger resistance to generating harmful content.
The model is now available via the Claude API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry. It retains the 1 million token context window introduced earlier this year.