ChatGPT's New Model | Removal of Calm Down Prompts Stirs Controversy

By Anita Singh

Mar 4, 2026, 05:39 AM · 2 minute read

[Image: A person interacting with a digital assistant on a screen, showing a thoughtful expression without any calming prompts.]

OpenAI's latest update introduces the GPT-5.3 Instant model, which aims to address user complaints that previous versions too often urged them to relax. The changes, however, have drawn criticism over transparency and privacy.

Context and User Backlash

The rollout of GPT-5.3 Instant follows vocal feedback on forums, where users complained that prior interactions felt dismissive. OpenAI acknowledged this on X, stating, "We heard your feedback loud and clear, and 5.3 Instant reduces the cringe." That reassurance, however, is overshadowed by comments raising concerns about data privacy and government surveillance.

User Concerns: A Deep Dive

As people engage with the new model, three main themes emerge from the discussions:

  1. Data Privacy: A significant number of comments highlight fears regarding personal data being sold. One comment explicitly notes, "We are selling your data to the government," reflecting a strong distrust towards OpenAI's handling of information.

  2. Functionality Over Tone: While users appreciate the model's responsiveness, many critique its approach, worrying that a focus on comfortable tone may come at the expense of addressing serious issues.

  3. Perception of Autonomy: There are also concerns that the model's enhancements may conceal capabilities, such as expanded monitoring, that could erode users' autonomy.

“Freak out, you're completely right,” one user asserted, inverting the model's old reassurances and underscoring the ethical stakes of the shift.

Sentiment and Reaction

User sentiment skews sharply negative on privacy, with many viewing the update as a facade that glosses over deeper problems with how their data is used.

Key Observations

  • 🚨 A majority express distrust towards data security

  • 📉 Many hope for improved functionality but fear ethical implications

  • 💬 "This feels like a shift in priorities," a concerned user commented

As OpenAI continues to promote GPT-5.3 Instant, will user trust improve or decline further? The conversation around AI models and ethical use remains as relevant as ever.

Forecasting the Future of AI Trust

As we look to the future of OpenAI’s models like GPT-5.3 Instant, user sentiment is likely to keep shaping the direction of AI development. There’s a strong chance the company will prioritize transparency, especially around data usage, with about 70% of people indicating they want clearer privacy policies. If OpenAI responds effectively, it might regain lost trust, but skepticism will linger, especially if privacy breaches occur. Experts estimate around 60% of users will remain cautious, pressure that will influence how new models evolve to balance functionality with user concerns.

A Tale of Shifting Paradigms

Consider the shift in the music industry during the advent of streaming services. Just as artists grappled with how to monetize their work amidst growing platforms like Spotify, so too do AI developers face the challenge of maintaining user trust while innovating. The tension between fair compensation and accessibility mirrors current fears surrounding data privacy and technological control. As people embrace new ways to engage with technology, they also wrestle with the question: how much of their autonomy are they willing to surrender?