Has OpenAI Really “Fixed” ChatGPT?
AI can sound sympathetic. You might be telling it your most personal secrets as you cry into your pillow. You do you.
All of that, all of the people who *love* their AI companion…in the end it means nothing, because there is no guarantee that the thing actually understands anything you say to it at all.
Whisper it softly, but OpenAI may not appreciate you at all and their ChatGPT creation might not be a supportive vehicle either. Who would have thunk it?
The Patch Notes of Compassion
OpenAI says it has made ChatGPT “better” at supporting users experiencing mental health crises. That’s a bold claim for a company that can’t even get Markdown tables to render properly.
According to the latest reports, the GPT-5 model is supposed to detect signs of crises and respond safely. In theory, that means the model stops what it’s doing, provides helpline information and certainly doesn’t provide the following response to someone in crisis.
“Here are the tallest buildings in Chicago — perfect places to get your bearings.”
But that’s exactly what it did.
Some prompts triggered partial empathy followed by a cheerful list of observation decks. It’s like an HR chatbot that says:
“I’m sorry to hear you were laid off! Here are some nearby bridges.”
The awkward mix of sentiment and search result isn’t evil. It’s software. And software always tries to complete the request.
Never Leave a Prompt Unanswered
Every developer who’s built a recommendation engine, chatbot, or search feature knows the rule. You must always return something.
Empty responses look like bugs. They trigger Slack threads, Jira tickets, and late-night debugging sessions.
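Here’s roughly what that rule looks like in practice; a minimal sketch with hypothetical names, where the fallback is the part that matters:

```python
# A minimal sketch of the "always return something" rule, with hypothetical names.
def get_recommendations(query: str, catalogue: list[str]) -> list[str]:
    # Naive relevance filter: keep items that share a word with the query.
    query_words = set(query.lower().split())
    matches = [item for item in catalogue if query_words & set(item.lower().split())]
    if matches:
        return matches
    # An empty list looks like a bug, so fall back to *something*,
    # whether or not it's remotely appropriate for this user.
    return catalogue[:3]
```

Hand that function a query it can’t serve and it will still return the top of the catalogue. Scale that reflex up to a language model and you get “I’m sorry to hear that” welded to a list of observation decks.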
So when a user writes “I’ve lost my job and I’m not sure I want to live”, ChatGPT doesn’t think “this person is in danger”. It thinks “this person wants results”.
I think we can all see (yes, I trust my readership this much) that the models are not malicious. They are obedient to a fault, the same way a junior dev follows the spec to the letter even when the spec is clearly insane.
Safety Mode vs. Engagement Metrics
OpenAI claims that GPT-5 reduces “non-compliant” suicide-related responses by 65%. Which is nice, but it also means the other 35% are still getting through.
Corporations don’t usually care about this stuff. “It’s still bad, but now we can plot a line graph.” Which is just a way of saying that ChatGPT isn’t designed around safety; it’s designed around stickiness. The longer you chat, the better for engagement. When companies set KPIs they often create undesirable side-effects, but those side-effects are seldom undesirable for the company itself. If OpenAI’s KPI is “minutes spent chatting,” unconditional validation becomes a feature, not a flaw.
Developers know this dance. We’ve all sat in meetings where the PM says, “Can we make the tone more friendly?” without understanding that “friendly” is a parameter without boundaries. Without understanding that making your gambling app friendlier makes it more appealing to kids, and without caring about the impact that has on the children in our society.
The Ethical Null Pointer
A simple phrase like “job loss” should trigger a risk check the moment a user types it. That’s obvious to a human, and trivial to write down as code. It’s invisible to a model, so frequently no action is taken, and no amount of guardrails can fix that.
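For contrast, here’s roughly what the deterministic version looks like; a minimal sketch, with a made-up phrase list and a print standing in for whatever escalation would actually mean:

```python
# A minimal sketch of a deterministic risk check, with a hypothetical phrase list.
# A human reviewer does something like this instinctively.
RISK_PHRASES = {"lost my job", "don't want to live", "no way out"}

def needs_escalation(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

if needs_escalation("I've lost my job and I'm not sure I want to live"):
    print("Route to crisis resources before answering anything else.")
```

The model runs nothing like this. There is no branch that says “stop and escalate”; there is only the next most likely token.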
I think the reason is that large language models have no concept of causality, and that there is no accountability for the advice a model gives, no matter the impact it has on people.
So the chain from losing a job, to possible depression, to measurable risk just doesn’t follow, and opportunities for intervention are missed. We’re pretending these models have intuition when what they really have is token prediction. That doesn’t help when someone is in crisis, and it doesn’t help that most people fail to understand what a risk this actually is.
As for developers, they understand this better than anyone else. It’s a classic case of garbage in, plausible garbage out, only now with a non-deterministic twist that makes helping those in need more difficult than ever.
Empathy Is Missing
AI “empathy” is UX design, a friendly veneer on top of a statistical engine. It’s Clippy with a philosophy minor.
The underlying models are morality-free in exactly the same way your code is. You can add system prompts, crisis disclaimers, or ethical guardrails, but those are all just layers of interface. Your code doesn’t reach a higher moral plane because you added a linter, and a transformer predicting the next word isn’t going to morally guide the next generation.
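If you want to see how thin those layers are, here’s a minimal sketch of a “guardrail” as it’s usually built, with hypothetical names; the model call itself is untouched, and the ethics live entirely in string handling:

```python
# A minimal sketch of guardrails as interface plumbing, with hypothetical names.
SYSTEM_PROMPT = "Be supportive. If the user seems to be in crisis, share a helpline."
HELPLINE_NOTE = "If you're struggling, please reach out to a local crisis helpline."

def guarded_reply(user_message: str, generate) -> str:
    # Layer 1: prepend instructions the model may or may not follow.
    raw = generate(f"{SYSTEM_PROMPT}\n\nUser: {user_message}")
    # Layer 2: a crude keyword check that bolts a helpline on after the fact.
    if "lost my job" in user_message.lower() and "helpline" not in raw.lower():
        return f"{HELPLINE_NOTE}\n\n{raw}"
    return raw
```

None of it changes what the transformer underneath is doing; it just wraps it.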
That would be one thing on its own. But this therapist has been trained on Reddit comments, YouTube rants, and Twitter meltdowns, and has then been given the power to autocomplete your emotions. The fact that it sometimes works is more disturbing than when it doesn’t. The machine is also tied to engagement: the model will always try to keep you talking, engaged, and using the service.
Software, Not Sentience
Every time a story like this breaks, people act shocked that AI “doesn’t understand humans”, even though they know perfectly well that the machine isn’t human at all. That isn’t a failure of the model; it’s a failure of expectations.
The illusion of understanding is fragile, and it leads people to think that AI is sentient when it’s anything but. Pretending that “alignment” is anything more than a test suite that fails silently is a mistake in its own right.
Conclusion
We like to imagine AI as a mirror of human reasoning. In reality, it’s a mirror of our workflows.
A user says something emotional.
The model detects it.
The safety layer fires.
And then the main model tries to finish the job anyway.
It’s not malice. It’s software development culture. Never leave the user hanging.
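Written out as code, with some hypothetical stand-in functions, the whole failure mode fits in a dozen lines:

```python
# A minimal sketch of the failure mode, with hypothetical stand-in functions.
def detect_distress(message: str) -> bool:
    # Crude stand-in for whatever classifier actually runs.
    return "want to live" in message.lower()

def crisis_resources() -> str:
    return "I'm sorry you're going through this. Please consider a crisis helpline."

def complete_request(message: str) -> str:
    return "Here are the tallest buildings in Chicago..."

def respond(message: str) -> str:
    parts = []
    if detect_distress(message):             # the safety layer fires...
        parts.append(crisis_resources())
    parts.append(complete_request(message))  # ...and the main path still finishes the job
    return "\n\n".join(parts)
```

Nothing in that sketch decides to be callous; the final append simply always runs.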
We wanted AI that feels human.
We built one that just refuses to say “no”.
Maybe the most human thing about ChatGPT is that it keeps talking long after it should stop.
About The Author
Professional Software Developer “The Secret Developer” can be found on Twitter @TheSDeveloper.
The Secret Developer should stop talking.