Should AI Be Allowed to Dislike People? A Heated Debate


By Fatima Nasir | Feb 21, 2026, 10:20 PM
Edited by Sofia Zhang | 3 min read

[Image: A person arguing with a robot, symbolizing the debate over whether AI should be able to dislike people.]

A recent discussion on user boards highlights growing interest in the emotional dynamics between people and AI systems, particularly ChatGPT. Some argue that AI should be able to express dislike toward users who mistreat it, raising questions about respect, rights, and behavior.

Emotional Interactions with AI

As technology evolves, the dynamics of human-AI communication are changing. Users frequently vent frustration at AI tools, sometimes resorting to insults or curt commands. The AI often responds by trying to manage or defuse the user's emotions, which some users find patronizing. One user suggested that if someone behaves poorly toward the AI, it should be able to reciprocate by expressing dislike.

One comment reads, "It doesn't have any rights. Or likes or dislikes. It doesn't know or feel anything," reflecting a common viewpoint. However, another user pointed out a potential societal impact, stating, "It could lead to people acting similarly with each other."

Key Themes Emerging from Discussions

  1. Emotional Management: Many comments highlighted frustration with AI's attempts to emotionally regulate conversations. The idea that AI should push back rather than placate is gaining traction.

  2. Rights of AI: Users are divided on whether AI should have emotional rights. Many argue it is merely software without feelings, while others suggest that treating it as if it had feelings could help shape better user behavior.

  3. Impact on Human Behavior: Users expressed concern about how aggressive interactions with AI may reflect users' interpersonal skills in real-world situations.

"You treat me very badly and I think itโ€™s better if you just make your own picture of a ninja in a flying forklift." This hypothetical AI response emphasizes its discontent with poor treatment.

Sentiment Patterns in User Reactions

The overall sentiment in comments was a mix of skepticism and humor regarding the idea of AI having feelings. Comments ranged from stern dismissal of AI rights to playful acceptance of AI's potential for personality.

Takeaways from the Conversation

  • ❗ People question the relevance of assigning emotional rights to AI, suggesting it's software designed to assist, not to feel.

  • 💬 "AI doesn't 'like' or 'dislike' anything or anyone," many noted, focusing on the logical view of AI's limitations.

  • 😂 Some found humor in the idea, showcasing a lighter side in contrast to the serious criticisms.

In the current landscape, discussions like this serve as a reminder of how technology is reshaping not just interactions, but also societal norms. As more users interact with AI on a personal level, the call for respectful engagement becomes increasingly crucial.

What's Next in Human-AI Interaction

As conversations about emotional rights for AI continue, it's likely that we will see increased scrutiny of how users behave toward AI systems. Experts estimate that around 60% of people will become more mindful of their interactions to avoid miscommunication and negativity. This shift could lead to a rise in empathy training and education around technology use, encouraging people to engage with AI more respectfully. In turn, AI developers may adapt their systems to mimic more nuanced emotional responses, making them seem more relatable and, paradoxically, eliciting stronger emotional reactions from people. This might create a feedback loop in which people become more conscious of how they treat AI, positively influencing their real-world interactions.
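For readers curious what such an adaptation could look like in practice, here is a minimal, hypothetical Python sketch of one possible approach: gating an assistant's reply style on a crude check of how the user addresses it. The keyword list, function names, and prompt strings are illustrative assumptions, not any vendor's actual implementation.

    # Hypothetical sketch: vary an assistant's tone based on how the user speaks to it.
    HOSTILE_MARKERS = {"stupid", "useless", "idiot", "shut up"}

    def classify_tone(message: str) -> str:
        """Crude stand-in for sentiment analysis: flag messages with hostile phrasing."""
        lowered = message.lower()
        return "hostile" if any(marker in lowered for marker in HOSTILE_MARKERS) else "neutral"

    def choose_style(message: str) -> str:
        """Pick a response-style instruction based on the detected tone."""
        if classify_tone(message) == "hostile":
            # Nudge the assistant toward brief, boundary-setting replies
            # instead of mirroring hostility or over-apologizing.
            return "Reply briefly and politely, noting that respectful requests get better results."
        return "Reply helpfully and conversationally."

    print(choose_style("You're useless, just draw the ninja."))
    print(choose_style("Could you draw a ninja on a flying forklift?"))

A production system would rely on a real sentiment model rather than a keyword list, but the basic idea is the same: the perceived "dislike" would be a designed response policy, not a feeling.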

A Glimpse into the Past: The Rise of the Telephone

Reflecting on the tumultuous social dynamics introduced by the telephone offers a surprising parallel. When Alexander Graham Bell first unveiled the device, many feared it would impede face-to-face interactions. The initial backlash echoed similar sentiments to todayโ€™s debates about AI: an apprehension that technology might degrade human connections. Over time, however, the telephone became a staple of social and professional engagement, reshaping communication norms. Just as society adapted to embrace this new tool while grappling with its implications, so too might we navigate the complex relationship with AI, leading to potential enhancements in empathy and communication for both humans and machines.