Why can't ChatGPT identify public figures anymore?

ChatGPT's New Policy | Users Confused Over Image Identification Limitations

By James Patel

Oct 14, 2025, 08:46 AM

2-minute read

Image: A group of diverse people looking at a screen, expressing confusion about AI's ability to identify public figures.

A growing number of users are expressing frustration over a recent ChatGPT policy change that limits the identification of public figures in images. The shift has sparked a wave of confusion and criticism from people who once relied on the AI for quick identification.

Many individuals have reported incidents in which they asked ChatGPT to identify well-known figures, only to receive a refusal citing privacy rules. One user recounted trying to identify Donald Trump: "I swear I remember it being able to do that before. Am I tripping?"

Shifting Policies Spark User Outrage

Sources confirm that OpenAI has significantly modified ChatGPT's capabilities, placing stricter restrictions on identifying individuals, even public figures. The change comes amid growing concerns over privacy and misidentification.

  • Privacy Concerns: Recent privacy controversies have prompted OpenAI to tighten its restrictions. One commenter noted that users are now pointed toward other tools, such as Google Lens, instead.

  • Comparative Responses: Other AI systems, such as Grok and Claude, reportedly still allow image identification, standing in stark contrast to ChatGPT's restrictive stance and fueling user dissatisfaction.

  • Evolving Guardrails: Some users have expressed bewilderment and annoyance at the tightening constraints, with one stating, "The guardrail is getting more extreme and extreme."

"ChatGPT no longer identifies or names real people, even in images, due to updated privacy and safety policies," one person remarked, highlighting a growing trend toward more cautious AI interactions.

User Sentiment: Discontent and Disbelief

The sentiment among users varies, with many expressing negative feelings toward the recent changes. "OpenAI is insane. They must be carrying a lot right," one person stated, voicing frustration with the company's decisions. Another comment captured the mixed feelings: "I love and hate AI."

  • 😑 Frustration at no longer being able to get identifications.

  • 😒 Confusion about why some AIs can identify figures while ChatGPT cannot.

  • 😂 Amusement at the idea that Google has looser identification policies.

Key Insights

  • ⚠️ Users report severe limitations on identifying public figures.

  • 🧐 Other AI tools continue to provide identification, leading to user dissatisfaction with ChatGPT.

  • 💬 "It's just ridiculous!" - A frustrated user on the limitations.

As the digital landscape evolves, how these policy changes will impact user trust in AI remains uncertain. Will adjustments be made to satisfy user demands, or will privacy concerns continue to dominate the conversation?

Future Policy Directions

The recent pushback against ChatGPT's image-identification restrictions suggests that OpenAI may reconsider these limits in the months to come. There's a strong chance the company will adjust its policies to strike a better balance between user needs and privacy concerns. Experts estimate roughly a 60% probability that these rules will be relaxed as community feedback grows louder. Companies often respond to user sentiment, and since other AI systems still permit figure identification, OpenAI may find itself under pressure to follow suit.

A Historical Lens

Looking back, the 1970s saw similar tensions between technology and privacy, when the rise of personal computing sparked fears over data vulnerability. As with today, businesses had to find a middle ground between innovation and user trust. The parallels are striking: not every advancement can progress smoothly without stirring apprehension. Just as companies then were forced to adapt or risk losing ground, today's AI landscape stands at a crossroads, reflecting the same need for compromise between service excellence and the ethical use of technology.