Anthropic nerfed Claude Code? Probe finds 3 major issues causing performance drop
After weeks of user complaints about Claude Code allegedly getting 'dumber,' Anthropic says it has identified the issues that made the AI coding tool feel worse at its job. The company adds that it has now fixed all of them.
by Armaan Agarwal · India Today

In Short
- Anthropic fixes bugs that made Claude Code dumber
- Company says three changes made it seem that Claude Code got worse
- Users had complained that Claude Code was getting lazy
Anthropic’s Claude Code has changed the way humans code. It is an AI coding agent that uses Anthropic’s Claude models to write software, producing hundreds, and sometimes even thousands, of lines of code from a single prompt. For the past couple of weeks, however, Claude Code users found themselves scratching their heads, claiming that the coding agent was getting “dumber.”
On X, many users have been complaining for days that Claude Code seemed to be getting worse at its job. One person shared a compilation of different complaints regarding the coding agent, including phrases like “is it just me or claude is broken?” or “claude code is unusable.”
For some, the situation was even worse. On Reddit, one user claimed that Claude Code was behaving like a lazy human employee, writing, “It's like an employee that goes behind your back, only does what it feels like to do at that moment, jumps into other areas when specifically asked not to.” This is probably not the AGI we are looking for.
And this is not the first time Anthropic has come under scrutiny either. The company was previously accused of trying to remove Claude Code access from its $20 Pro plan.
Anthropic fixes bugs that made Claude Code dumber
The Dario Amodei-led startup shared an update on addressing these complaints, stating, “We never intentionally degrade our models.” The company said it fixed three separate issues in a patch released on April 20 that may have triggered the complaints. Anthropic has also reset all users' rate limits, giving them a fresh allowance of Claude Code usage to work with.
Anthropic stated that the issues affected Claude Code, the Claude Agent SDK, and Claude Cowork, while its API and inference layer were not impacted.
Boris Cherny, the head of Claude Code, said the company ran an intensive investigation to find the root causes. He wrote on X, “We take these reports incredibly seriously. In my time on the team, this has probably been the most complex investigation we’ve had.”
What was the issue?
According to Anthropic, three changes to Claude left users feeling that the model had simply become dumber. The company said that on March 4, it changed the default reasoning setting for Claude Code from “high” to “medium.” The change was meant to speed up the AI's responses, but it was not what users wanted. Anthropic added that users said “they'd prefer to default to higher intelligence and opt into lower effort for simple tasks.”
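Claude Code manages this setting on users' behalf, but the underlying knob resembles the extended thinking budget that Anthropic's Messages API exposes to developers. Below is a rough Python sketch of opting into more reasoning effort through that API; the model ID, token budgets, and prompt are illustrative, not taken from Anthropic's fix.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A larger thinking budget roughly corresponds to higher reasoning effort.
# Note: max_tokens must exceed the thinking budget_tokens.
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 10000},
    messages=[{"role": "user", "content": "Refactor this function to be thread-safe."}],
)

# Thinking blocks arrive alongside the final answer in the response content.
for block in response.content:
    if block.type == "text":
        print(block.text)
```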
The second problem came on March 26, from a change that deleted Claude's older thinking output when a session was resumed after sitting idle for over an hour. In other words, the AI was meant to prune old reasoning from a stale session just once, at the point you picked it up again. Instead, Anthropic says, “a bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive.”
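Anthropic has not published the code behind the bug, but the behaviour it describes matches a familiar failure pattern: a one-shot cleanup whose trigger never gets reset. Here is a purely hypothetical Python sketch of that pattern; every name in it is invented for illustration.

```python
class Session:
    def __init__(self):
        self.history = []               # prior turns, including "thinking" blocks
        self.resumed_after_idle = False

    def resume(self, idle_seconds):
        # Flag sessions that sat idle for over an hour so that old
        # thinking can be pruned once on the next turn.
        if idle_seconds > 3600:
            self.resumed_after_idle = True

    def on_turn(self, user_message):
        if self.resumed_after_idle:
            # Intended behaviour: drop stale thinking once to free context.
            self.history = [t for t in self.history if t["type"] != "thinking"]
            # BUG: without resetting the flag, this pruning fires on every
            # subsequent turn, deleting fresh reasoning too, which is the
            # "forgetful and repetitive" behaviour users reported.
            # self.resumed_after_idle = False   # <- the missing reset
        self.history.append({"type": "user", "content": user_message})
```

In a pattern like this, the fix amounts to clearing the flag after the first pruning, so one idle period costs one cleanup rather than one per turn.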
On April 16, the third issue arose with a change in how Claude responds. Anthropic reduced the “verbosity” of the AI, that is, how wordy it is when responding. But the company says that, “in combination with other prompt changes, it hurt coding quality.”
Anthropic said the combined effect appeared as broad and inconsistent degradation because each change affected a different slice of traffic on a different schedule. Ultimately, the company says it has fixed all three issues, and users should no longer run into these problems with how Claude Code performs.