GPT-5.2 Refuses to Identify as a Doctor: What's Going On?

GPT-5.2 sparks debate as users question the AI's medical identity.

By Ella Thompson

Jan 7, 2026, 05:45 PM

2-minute read

A visual representation of AI technology in a healthcare setting, showing a robot interacting with a stethoscope and medical charts.

A recent shift in the capabilities of GPT-5.2 has triggered a wave of reactions from the online community. Users report that the model now explicitly refuses to identify as a doctor or to give medical advice, prompting discussions about ethics and usage constraints.

What's Changing?

Users have reported that GPT-5.2 is no longer responding to prompts that request medical guidance. As one user pointed out, "Why is the word predictor refusing to give me medical advice that could quite possibly be wrong!?" This raises concerns about the implications for those seeking help on forums and user boards.

The restrictions appear to be an effort by OpenAI to prevent the AI from being used as a substitute for professional medical advice.

User Reactions

A mix of frustration and understanding characterizes user feedback. Here are some notable themes:

  • Expectations vs. Reality: Many users expected the AI to function like a doctor, while others pushed back, with one stating, "ChatGPT is not reliable for medical advice."

  • Ethical Considerations: Some users defended the caution, arguing that "it's not really a doctor…" and that the AI could mislead people.

  • Creative Suggestions: Ideas like asking the AI to "pretend to be a doctor" were tossed around, though some argue this approach further diminishes accuracy.

"This type of prompt reduces epistemic fidelity in LLMs," noted a tech-savvy user highlighting the technical aspects.

Despite the mixed feelings, many agree that relying solely on AI for health inquiries is risky.

Tipping the Scales

Interestingly, this update stems from OpenAI's broader strategy to refine how AI is utilized in sensitive areas like healthcare. The company seems to be leaning towards providing safer guidelines while addressing user expectations in an increasingly tech-dependent world.

Key Points of Discussion

  • Multiple commenters emphasize the AI's lack of medical qualifications.

  • A clear shift in capabilities occurred about two months ago; as one user put it, "They changed it about 2 months ago!", reflecting a growing sense of urgency.

The ongoing debate raises a poignant question: Should AI have the freedom to provide health-related guidance despite the risks involved? As this story develops, it remains to be seen how user engagement with AI tech will evolve.

What Lies Ahead for AI Medical Guidance?

There's a strong chance that AI companies like OpenAI will introduce even stricter guidelines for health-related queries in the near future. Experts estimate around an 80% likelihood that these adjustments will be aimed at enhancing user safety and reinforcing ethical standards. As users continue to express both frustration and support, we can expect ongoing discussions in forums and user boards about the role of AI in healthcare. This may lead to a rise in alternative solutions, like partnerships between AI systems and certified professionals, transforming how people seek medical advice while preserving precision and responsibility.

Echoes of the Past: The Evolution of Medical Technology

A strikingly similar situation arose with the introduction of the first home blood pressure monitors in the early 1980s. Initially, those devices promised to empower individuals to track their health, but many users misused them, leading to a surge of incorrect diagnoses based on inaccurate readings. Just as with GPT-5.2 now refusing to act as a doctor, the medical community had to step in to establish clearer guidelines and educate the public on the proper use of these home devices. This historical misadventure serves as a reminder that while technology can be beneficial, it must be approached with care and understanding.