Do LLMs really understand text in 2025?

As AI technology evolves, questions about the true comprehension of Large Language Models (LLMs) intensify. A recent discussion, sparked by the 2019 book Artificial Intelligence: A Guide for Thinking Humans, challenges whether significant advances in LLM capabilities have occurred since then.

By

Carlos Mendes

Oct 13, 2025, 10:28 PM

Edited By

Amina Hassan

Updated

Oct 14, 2025, 07:18 AM

2 minute read

A person analyzes text data on a computer screen, representing advancements in language models and their comprehension abilities.

Context and Controversy

Melanie Mitchell argued in 2019 that deep neural networks lack genuine understanding of text, often failing to interpret nuance or context. This year, the debate continues: have LLMs truly improved since 2019, or are they merely more efficient at word prediction thanks to bigger datasets and more computing power?

Insights from Critics

Commenters weighing in on the topic continue to highlight that today’s models remain fundamentally prediction engines. One stated, "They don’t understand; it’s just a mathematical prediction model."

Critics emphasize that while systems may identify subtleties better, they still fall short of real comprehension. Another user pointed out, "Models today are better at figuring out non-obvious connections, but understanding is still not the right term."
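For readers unfamiliar with what "a mathematical prediction model" means in practice, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the public GPT-2 checkpoint, neither of which the commenters name; it is an illustration of the general mechanism, not of any specific system discussed above.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits        # shape: (1, sequence_length, vocab_size)

    next_token_logits = logits[0, -1]          # scores for whatever comes after the prompt
    probs = torch.softmax(next_token_logits, dim=-1)

    # The model's entire "answer" is a probability distribution over next tokens.
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx):>10}  {p.item():.3f}")

However large the model, the output at each step is still a ranked list of likely continuations, which is the core of the critics’ objection.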

Human Understanding vs. Machine Comprehension

Interestingly, comments delve into a philosophical angle, questioning how we define understanding itself. One user remarked, "How do we know that humans really understand what they are processing?" Another added, "You can’t test true understanding; it’s about how they respond."

This raises a concern: does our inability to test for consciousness limit how well we can assess AI's capabilities?

The Chase for True Understanding

Discussions hint at a wider exploration of AI capabilities. Some argue that a model with genuine understanding would require continual learning and a fundamentally different neural architecture, something current LLMs lack.

"For true understanding, they would need to be on permanent learning mode. I have high hopes for the project trying to achieve this, which builds a mammalian brain from scratch," stated one user, showcasing alternative avenues for research.

Patterns of Sentiment

The sentiment regarding LLMs is mixed, with skepticism dominating the narrative. People express doubts about AI's capacity to achieve true understanding through existing architectures and training methods. A commenter succinctly stated, "Have we really made a breakthrough? Seems doubtful."

Key Insights

  • 🌐 Skepticism is prevalent: Many believe LLMs only excel at predicting words, not understanding.

  • 🔍 Evolving models: Improvements in attention mechanisms hint at somewhat better nuance recognition, but not genuine understanding yet (a minimal sketch of the attention computation follows this list).

  • ⚖️ Philosophical debates: Users ponder the nature of understanding itself, both in machines and humans.
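To make the attention bullet concrete, here is a minimal sketch of scaled dot-product attention, the core operation behind those mechanisms. It is a toy NumPy illustration under that assumption, not any particular model’s implementation.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Each query position attends to every key position; the softmax
        weights decide how much each value contributes to the output."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)    # softmax over key positions
        return weights @ V, weights

    # Toy example: 3 token positions with 4-dimensional representations.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))

    output, attn = scaled_dot_product_attention(Q, K, V)
    print(attn.round(2))  # each row sums to 1: how much a token "attends to" the others

The attention weights show which earlier tokens the model draws on; the skeptics’ point is that this weighting, however refined, is still statistics rather than comprehension.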

In summary, while advancements in data and computational power have enhanced LLM performance, the fundamental issue of understanding remains unresolved. As discussions evolve, many hope for a breakthrough that truly bridges the gap between computational skills and genuine comprehension.

What Lies Ahead for LLMs

Experts estimate around a 60% probability that future models will incorporate integrated learning systems, enabling ongoing adaptation to new information. As researchers pursue more advanced architectures, model design may shift toward structures inspired by cognitive functions seen in humans and animals.
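To illustrate what "ongoing adaptation to new information" could look like at its simplest, here is a generic online-learning sketch: a hypothetical toy linear predictor, not the integrated systems the experts describe, nudged by every new example as it streams in.

    import numpy as np

    rng = np.random.default_rng(42)
    weights = np.zeros(3)        # toy linear model: 3 input features, one output
    learning_rate = 0.05

    def predict(x):
        return weights @ x

    # Simulated stream of observations arriving one at a time.
    for step in range(2000):
        x = rng.normal(size=3)
        y = 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2]   # the "world" the model keeps tracking
        error = predict(x) - y
        weights -= learning_rate * error * x        # one gradient step per new example

    print(weights.round(2))  # converges toward [2.0, -1.0, 0.5] as data keeps arriving

Today’s LLMs, by contrast, are typically frozen once training ends, which is precisely the gap the commenters highlight.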

A Creative Echo from the Past

Reflecting on the evolution of LLMs invites comparison to the development of the early telephone. Initially, people were skeptical that it could genuinely convey human emotion or thought. As technology advanced, the phone became integral to personal connection, similar to the evolution LLMs are experiencing today. This historical shift underscores that while skepticism is warranted, breakthroughs in understanding often require time, patience, and a willingness to adapt.