
Concerns Mount Over Increased Censorship in GPT-5 | Users Report Frustration

By Chloe Leclerc | Aug 26, 2025, 05:49 PM

Edited by Dmitry Petrov | Updated Aug 27, 2025, 04:10 PM

2 minute read

[Image: Screenshot of an AI interface displaying a message about censorship, alongside a frustrated user reaction.]

A growing number of users is voicing frustration over escalating censorship in GPT-5, particularly around image requests. Users report feeling restricted and misled, raising concerns about the technology's transparency and reliability.

Increased Denials for Innocuous Content

Many users report that refusals of harmless image requests have surged. One shared a stark example: the model walked them through a detailed marijuana grow guide, yet refused to generate a benign image of Madonna without cosmetic surgery. This inconsistency is what many are calling arbitrary censorship.

"It considers same sex people being intimate as a content violation."

Users also noted a perceived bias, specifically against queer representation. "The filters go crazy," said one frustrated individual, remarking that heterosexual images encounter fewer restrictions. Another commented on the inconsistency, stating that the software refuses innocuous images when a combination of terms is deemed vaguely unsafe.

Functionality Limitations

Changes in functionality have also affected the user experience. One user noted, "You can’t expand images with ChatGPT; it generates an entirely new image." Others echoed this disappointment, describing the output as unpredictable and often irrelevant. Some users suggest experimental tactics: one reported success by framing requests as "satirical," while others discussed adjusting prompts to avoid triggering filters.

User Sentiment

Shared sentiments lean heavily toward disappointment. Users feel increasingly patronized, with one stating, "GPT-5 treats its people like children who need kid gloves." Another user took the opposite view, arguing that some restrictions are warranted because unrestricted AI image generation could produce disturbing content, especially involving children.

Exploring Alternatives

Many users are considering migration to alternative platforms for better experiences. They exchange advice on tailoring prompts to bypass filters effectively, with some recommending Grok as a less censored AI.

Key Takeaways

  • Many users report increased refusals for benign image requests.

  • Concerns over bias against queer content are prevalent.

  • "GPT-5 treats its people like children who need kid gloves." - Common user sentiment.

  • Some suggest framing requests creatively to reduce filter interference.

As conversations about GPT-5's restrictions gain traction, users are left wondering whether their calls for creative freedom will be addressed in future updates. With AI content generation in flux, developers may need to rebalance moderation against user satisfaction.