When AI Says “No”
I’m sure you’ve all used Cursor. It’s the software development AI that’s (probably) going to replace all software developers.
So there is some poetic justice in its customer support bot (“Sam”, if you can believe it) gaslighting users with nonexistent policies, much like Cursor makes up API calls.
The future is now, and we need to start communicating to leadership that AI isn’t the panacea many companies think it is.
Viral News
If there are CTOs and CEOs out there reading these blog posts, please get the message.
This little PR disaster might just have you calling HR to rehire the software developers and support staff you laid off to replace with AI “solutions”.
Because once you put an AI in charge, you’re on the hook for whatever hallucinations it comes out with. In this case, that means making up a support policy. For me, it means dealing with junior developers who can’t see when AI “forgets” to write complete unit tests or reaches for deprecated APIs.
We Need Humans
When Cursor users realised they were being logged out whenever they switched devices, they contacted customer support. Sam the AI told them the logouts were “expected behavior” under a new login policy. A policy that didn’t exist, because the bot had simply made it up.
Without a human in the loop to check the validity of responses, any given AI might “predict” that the solution to a problem is something it has made up. That might mean support inventing policies, or a coding agent inventing code conventions that degrade the codebase. It’s time for humans to be given the power here.
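It doesn’t even take much. Here’s a minimal sketch of what a human-in-the-loop gate could look like; the names (SupportReply, needs_human_review, handle) are hypothetical and nothing like Cursor’s actual stack. The point is simply that any reply which claims a policy exists, or which the model isn’t confident about, goes to a person before it reaches the customer.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI support replies.
# Any reply that sounds like a policy claim, or scores low on confidence,
# is escalated to a human instead of being sent to the customer.

from dataclasses import dataclass

POLICY_KEYWORDS = ("policy", "terms of service", "not permitted", "by design")


@dataclass
class SupportReply:
    text: str
    confidence: float  # model-reported or heuristic score in [0, 1]


def needs_human_review(reply: SupportReply, threshold: float = 0.8) -> bool:
    """Escalate anything that cites a 'policy' or falls below the confidence threshold."""
    cites_policy = any(kw in reply.text.lower() for kw in POLICY_KEYWORDS)
    return cites_policy or reply.confidence < threshold


def handle(reply: SupportReply) -> str:
    if needs_human_review(reply):
        return "Escalated to a human agent for verification."
    return reply.text


# Example: the bot "explains" a login policy that no one has ever written down.
print(handle(SupportReply(
    "Logging out on device switch is expected under our new login policy.", 0.9)))
# -> Escalated to a human agent for verification.
```

A keyword check is crude, and a real system would probably classify replies with a second model or a rules engine. But even something this blunt would have routed “expected behavior under a new login policy” to a human who could say “we have no such policy”.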
Let’s pause for a moment to take this in. Cursor is worth almost $10 billion. It’s backed by Silicon Valley’s finest. It’s been riding the AI wave like a surfer hopped up on Red Bull and buzzwords. And somehow, nobody thought to implement a basic sanity check before letting HAL 9000 handle customer support tickets. How good do you think their guardrails are for their coding agents? You don’t need an LLM to predict that one.
The Outcome
This isn’t just about one bot lying. It’s about the collective delusion that AI agents are ready to be dropped into production environments with no adult supervision.
It reminds me of every time I’ve been locked out of my work account because the internal system assumes I’ve been fired if I go on vacation for more than two weeks. But at least that particular Kafkaesque policy wasn’t made up by a hallucinating piece of code. That was good old-fashioned corporate incompetence. I’m frankly petrified by the combination of AI and corporate ways. What new insane issues might we face? Needing to change our passwords daily? Performance reviews that require us to improve team morale? I can’t even invent examples absurd enough for the nonsense that might be required.
The Damage
There’s something uniquely damaging about AI and hallucinations.
That’s because AI doesn’t feel shame. It doesn’t get embarrassed, it doesn’t follow up, and it sure doesn’t fix its own mistakes. You wanted automation? Congrats. Now you’re in charge of quality assurance for a bot that invents policies when it gets confused and rips apart codebases because it “thinks” that’s the right move.
Confusion is the default when you throw AI at problems needing empathy, nuance, or basic human sense. This isn’t like autocomplete screwing up your grocery list. This is about trust. And when a customer feels like they’re being gaslit by a robot pretending to be a person, that trust is gone. Poof.
Conclusion
As someone who’s seen my fair share of managerial decisions driven by Twitter trends and gut feelings, I can’t say I’m shocked. Cursor’s team probably thought it was “disruptive” to go fully autonomous with customer service as well as codegen. I bet they had a Notion page about it. Maybe a Slack channel too.
That’s where the problems all started, I’ll bet.