
LLMs Face Backlash | Can't Count R's in 'Strawberry'

By Dr. Sarah Chen

Jul 8, 2025, 09:31 PM

Edited by Sofia Zhang

2 minute read

An illustration showing a large letter 'R' next to the word 'Strawberry', highlighting the counting issue with LLMs.

A wave of skepticism surrounds Large Language Models (LLMs) as many question their ability to perform simple tasks, such as counting letters. A recent post highlighted the issue with examples like the word "strawberry," igniting heated discussions on user forums.

What's Going Wrong?

LLMs break text into "tokens" and convert them into numerical representations called vectors. Because a single token often spans several characters, precise character-level detail is lost along the way, which leads to errors on letter-counting tasks. Critics argue that this limitation raises concerns about the reliability of LLMs even for basic operations.
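To make this concrete, here is a minimal sketch in Python of greedy subword tokenization. The vocabulary, token IDs, and function names below are invented for illustration; real tokenizers (e.g., byte-pair encoding) learn their vocabularies from data, but the effect is the same: the model receives numeric IDs, not letters.

    # Toy illustration of how tokenization hides character-level detail.
    # TOY_VOCAB and its token IDs are invented for this sketch.
    TOY_VOCAB = {"straw": 1001, "berry": 1002}

    def toy_tokenize(word: str) -> list[int]:
        """Greedily match the longest known subword, left to right."""
        tokens, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):  # try the longest piece first
                if word[i:j] in TOY_VOCAB:
                    tokens.append(TOY_VOCAB[word[i:j]])
                    i = j
                    break
            else:
                raise ValueError(f"no token covers {word[i:]!r}")
        return tokens

    print(toy_tokenize("strawberry"))  # [1001, 1002] -- what the model "sees"
    print("strawberry".count("r"))     # 3 -- the answer lives at the character level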

"Every time someone talks about this strawberry issue, just remind them: LLMs donโ€™t see letters, only entire words," mentioned one commentator, emphasizing the disconnect.

The User's Perspective

Several themes emerged from discussions:

  • Outdated Information: Comments suggest that explanations of how LLMs function often lack contemporary relevance.

  • Human vs. Machine Errors: Users ponder whether reliance on spellcheck makes humans similarly error-prone, with one user quipping about the implications.

  • Counting Conundrums: Comparisons were made between LLM struggles and human challenges with tasks that seem straightforward.

One comment read, "If you see a string of numbers instead of letters, counting letters would be tricky. It's much harder, yet people manage it easily!"
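To make the commenter's point concrete, here is a short follow-up to the sketch above, reusing the same invented IDs: counting letters from token IDs alone requires first decoding them back to characters.

    # Continuing the toy example: token IDs carry no letter counts until
    # they are mapped back to text. REVERSE_VOCAB is invented, as above.
    REVERSE_VOCAB = {1001: "straw", 1002: "berry"}

    def count_letter(token_ids: list[int], letter: str) -> int:
        """Decode token IDs to text, then count at the character level."""
        text = "".join(REVERSE_VOCAB[t] for t in token_ids)
        return text.count(letter)

    print(count_letter([1001, 1002], "r"))  # 3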

Sentiment Analysis

Overall, comments reflect a mix of frustration and skepticism about LLM capabilities. Many users feel LLMs stumble over tasks that should be straightforward.

Key Highlights

  • โš ๏ธ Critics question LLM use for basic tasks like counting letters.

  • ๐Ÿ” "They can now. o3 counted 3 just now." - Commenter response.

  • 🧠 Users compare human and LLM limitations in counting challenges.

The discourse raises important questions about the future application and trustworthiness of LLMs in everyday scenarios. Will developers address these flaws, or will LLMs remain in the spotlight for their shortcomings?

Predictions on LLM Adaptations

As skepticism rises over LLMs' counting capabilities, developers are under increasing pressure to enhance these models. Experts estimate around a 60% probability that updates will focus on improving character-level processing to align better with human expectations. This could involve incorporating more sophisticated algorithms that mimic human recognition of text patterns. Moreover, companies may face tighter scrutiny in the coming year, potentially leading to regulatory actions aimed at ensuring LLMs meet basic accuracy standards. With these changes, there's a strong chance that LLMs will become more reliable for simple tasks in everyday applications, thus increasing public trust and usage throughout 2026.

Unseen Lessons from the Past

This situation mirrors early skepticism toward smartphones in the 2000s, when many viewed them only as simple communication tools. Users initially doubted their potential for complex functions like internet browsing and navigation. Yet, as the technology evolved, so did public perception, and smartphones became integral to daily life. Similarly, LLMs, despite their current shortcomings, could evolve swiftly, changing how people interact with technology. Just as smartphones grew into multifunctional devices, LLMs could redefine language processing's role in society, becoming indispensable in ways we have yet to fully realize.