
Users Rank AI Coding Tools | Claude Code's New Limits Spark Outrage

By

Anika Rao

Oct 13, 2025, 11:09 PM

Edited By

Amina Kwame

2 min read

[Image: Comparison of Sonnet 4.5 and Claude Code coding AI tools with performance metrics displayed]

A growing number of users are criticizing Claude Code after a recent update sharply cut its weekly usage limits. Many also report the tool generating unnecessary files, raising questions about the sustainability of this coding assistant.

Context and Controversy

In the wake of the 2.0 update, which saw usage limits drop by 30% to 50%, users are voicing frustration across forums. "Unnecessary .md files are eating up limits quickly," one user reported, pointing to a major flaw in the system. Anthropic staff confirmed these restrictions apply to all users, adding to the frustration.

Interestingly, complaints are primarily directed at Claude Code, as several users expressed satisfaction with alternatives, particularly Sonnet 4.5 and Gemini tools. A user commented, "I've only experienced issues using Sonnet through Claude Code, not with others." This suggests a specific problem with Claude Code itself rather than a systemic issue with Sonnet models in general.

Competitor Comparison

The discourse reveals contrasting experiences with various coding AIs. A user outlined a clear preference for Gemini 2.5 during research due to its unlimited usage and exceptional analysis capabilities. They noted:

"Gemini 2.5 Pro wins; it's much more effective than Opus 4.1 in both quality and cost."

Moreover, users highlighted service efficiency as a key factor. A comprehensive comparison ranks popular tools based on quality and reliability:

  1. Auggie ($50 tier) - Best service, approximately 20–25 days/month

  2. GLM 4.6 - Costs $3–30 with virtually unlimited use

  3. Sonnet 4.0 / Opus 4.1 - Rated second for quality, still diluted

  4. Claude Code - Costs $20–200 but struggles with uptime

  5. Codex - Equivalent cost, average performance

  6. Copilot - Improving steadily at $20

  7. KiloCode API - Not recommended due to poor API limit handling

  8. GeminiCLI - Free but fails to deliver quality

Insights from the Community

Responding to the recent updates, the community is actively seeking alternatives. Feedback points to positive experiences with tools like Auggie and Gemini but raises concerns about productivity disruption due to Claude Code's latest changes. Users note:

  • "Sonnet on Auggie performs the same as on Claude." This points to healthy competition rather than complacency in the marketplace.

Key Takeaways

  • 30–50% Limits Cut: The recent Claude Code update drew significant user complaints

  • Mixed Sentiments: Some users celebrate alternatives like Auggie and Gemini

  • Quality Assurance: Users are prioritizing reliability over brand loyalty

As the conversation around AI coding tools continues, the question remains: will these companies address user concerns and improve their services, or risk losing their user base to more reliable alternatives?

Future Paths for AI Coding Tools

With Claude Code facing mounting dissatisfaction, there's a strong chance the company will revise its limits and address the issues raised by users, especially with many seeking alternatives. Experts estimate around a 70% probability that this major player will adapt in the next quarter to regain trust and market share. Additionally, as competition heats up, companies like Auggie and Gemini might expand their services to capitalize on Claude Code's setbacks, potentially increasing their user bases and refining their offerings.

Echoes of the Past

This scenario mirrors the rise and fall of payphone services in the 90s. When mobile phones began to take hold, payphones faced severe limitations and declining use as customers sought more dependable alternatives. Payphone providers struggled initially to adapt and innovate, but those that pivoted into digital solutions managed to survive in a transformed landscape. Just as those companies faced adaptation challenges, current AI coding tools must respond to user needs or risk losing relevance.