According to 1M AI News monitoring, Lydia Hallie, a member of Anthropic's Claude Code team, released the results of her investigation into the quota dispute of the past two weeks. Her conclusion: the peak-hours quota has indeed been tightened, and 1-million-token context sessions now consume more quota; in her words, "this is the reason for most of what you've been feeling." She said the team fixed some bugs, but emphasized that "no single bug caused overcharging."
She then offered some advice on reducing usage, but made no mention of any form of quota reset or compensation.
AI podcast host Alex Volkov summarized her response as "you're holding it wrong." He noted that Anthropic made the 1-million-token context the default and promoted Opus as its flagship model, yet now recommends that paying users avoid these very features. He also pointed out that, unlike OpenAI, which reset users' quotas when Codex ran into similar issues earlier, Anthropic offered no retroactive compensation.
The claim of "no overcharging" also conflicts with Claude Code's own changelog. Version v2.1.90, released the day before, fixed a cache regression that had existed since v2.1.69: when a session was restored with --resume, requests that should have hit the prompt cache instead triggered a full cache miss and were billed at the full input rate. The bug spanned roughly 20 releases before it was found and fixed. Lydia's response did not mention this confirmed billing anomaly.
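To put that regression in perspective, here is a minimal cost sketch. The prices and the cache-read discount below are illustrative assumptions, not Anthropic's actual rates (real multipliers vary by model); the point is only that under a typical order-of-magnitude discount for cache reads, a resumed session that silently misses the cache costs many times more than one that hits it.

```python
# Illustrative only: assumed base input price and cache-read discount.
PRICE_INPUT = 3.00     # assumed USD per 1M input tokens
CACHE_READ_MULT = 0.1  # assumed cache-read discount multiplier

def prompt_cost(tokens: int, cache_hit: bool) -> float:
    """Cost in USD of sending `tokens` of prompt context."""
    rate = PRICE_INPUT * (CACHE_READ_MULT if cache_hit else 1.0)
    return tokens / 1_000_000 * rate

context = 400_000  # a large resumed-session context, for illustration
hit = prompt_cost(context, cache_hit=True)
miss = prompt_cost(context, cache_hit=False)
print(f"cache hit: ${hit:.2f}, cache miss: ${miss:.2f}, ratio: {miss/hit:.0f}x")
# → cache hit: $0.12, cache miss: $1.20, ratio: 10x
```

Under these assumed numbers, every --resume request hit by the bug would be billed about ten times what a cached request would cost, which is consistent with users seeing quotas drain far faster than expected.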
Since March 23, many Pro and Max subscribers have reported their quotas draining abnormally fast. GitHub issue #41930 has gathered hundreds of reports: some users say a Max 5x plan's quota was exhausted within an hour; others say a single simple one-line reply pushed their utilization from 59% to 100%. On March 30, Anthropic acknowledged on Reddit that "the speed at which users reach the quota is far beyond expectations" and said the issue had been made the team's highest priority.
The core problem with this response is not whether the technical details are accurate; it is that it shifts nearly all responsibility onto users' habits. Anthropic sells Pro/Max subscriptions on the promise of "the strongest model + the largest context + the highest reasoning ability," charges $20 to $200 per month, and now tells users to use the product more sparingly.