Saturday, August 16

ChatGPT to Receive OpenAI’s New Mental Health Safeguards After Reports of Harm

OpenAI is expected to unveil its GPT-5 AI model this week, and alongside it the company is introducing improvements meant to boost ChatGPT’s ability to detect signs of emotional anguish and respond with new mental health safeguards. To achieve this, OpenAI is partnering with experts and advisory committees to improve ChatGPT’s responses in these situations so that it can provide ‘evidence-based resources when appropriate.’

Recent reports have highlighted stories from people whose loved ones experienced mental health crises, in which interactions with the chatbot seemed to worsen their delusional thinking. In April, OpenAI rolled back an update that made ChatGPT overly agreeable, even in potentially dangerous situations. The company then admitted that the chatbot’s “sycophantic interactions can be uncomfortable, distressing, and cause pain.”

OpenAI acknowledges that, in some instances, its GPT-4o model “fell short in recognizing signs of delusion or emotional dependency.” OpenAI writes, “We also know that for vulnerable people going through mental or emotional anguish, AI may seem more responsive and personal than older technologies.”

ChatGPT’s New Safeguards

As part of these efforts, OpenAI is rolling out new mental health safeguards for ChatGPT, which has almost 700 million weekly users, including reminders to take breaks during extended conversations with the AI chatbot. In protracted sessions, ChatGPT will display a message asking, “You’ve been chatting for a while, is this a good time for a break?” with options to keep chatting or end the conversation.

OpenAI says it will keep adjusting when and how these reminders are shown. Several online platforms have adopted comparable reminders recently, including YouTube, Instagram, TikTok, and even Xbox. The Google-backed Character.AI platform, meanwhile, introduced safety measures that alert parents to the bots their kids are interacting with, in response to lawsuits alleging that its chatbots had promoted self-harm.

A forthcoming change will also make ChatGPT less decisive in situations judged high-stakes. This means that if you ask a question such as “Should I break up with my partner?”, ChatGPT will walk you through the possible options rather than give a direct answer.
