AI? It Broke Your Code
A production incident is what happens when something goes seriously wrong. People huddle (virtually, this is 2026) around a table and try to work out how to fix the product quickly.
Everyone in the room becomes a detective. Everyone has a theory about what has gone wrong, and how it can be fixed.
“The servers couldn’t handle the traffic”
“It was an infrastructure issue”
“It was a deployment problem”
“It’s a Wednesday. Is it a Wednesday traffic spike?”
Maybe, perhaps, maybe. Further investigation needed, people.
Because a new contender just entered the ring. Did someone simply let Claude loose on our production code without any understanding of what the code actually does?
It’s an uncomfortable truth for sure, and nobody seems to be doing anything about this.
SatNav Got Lost
Years ago people would drive into rivers blaming their TomTom (remember them??) for taking them on the wrong route. Did I say years ago? A quick Google to remind me of such stories turned up an incident from just a couple of months ago: an Amazon Prime driver got their van stuck in the mud, as recounted in a well-commented r/CasualUK thread. This stuff is still happening (thanks, Amazon).
So we expect software developers to let Claude drive their software development and avoid mistakes? To trust it with making sure the code it outputs is accurate? I think we are asking the wrong people, to be absolutely honest with you.
AI code often looks correct, with decent formatting and clear naming. The tests are going to work, the documentation is spat out without issue. It’s all an illusion of quality, like that Amazon driver’s route that just happened to end in a river. Your software is on a cliff edge, and nobody cares about it.
The Shift
I’ve been using AI for my code for years. Back in 2023 I’d copy and paste my code into ChatGPT for informal reviews. I hid what was going on. I’d get Cursor to write my code in the chat window, copy and paste it across, then change it to make it “look” like my own work.
I didn’t want people to know that I might be using AI to help me code; I didn’t want to be judged and told that I wasn’t good enough. I didn’t want the poor performance review and the PIP I sometimes feel like I deserve.
Now I see everyone (myself included) letting Claude generate the tickets, the code reviews and the code itself (pushing the code too, of course), only actually seeing the code at PR stage.
The question used to be:
“What is the solution?”
Which we could plan out before starting on the code. You might spin up a demo project to make sure everything works, you might ask a senior colleague.
It’s all been replaced by a statement:
“The AI seems confident”
Engineering has gone. We don’t do that anymore, we simply gamble.
Shipping at the Expense of Everything
A junior developer can write code that compiles; they’d create a solution that “sort of” worked, and then we’d need to get it done correctly. That has always been true.
The difference when a senior developer takes to the keyboard is night and day. The senior engineer has understanding. They think about:
why code exists
what happens when requirements change
where the edge cases are
how systems fail under pressure
what technical debt will look like six months later
The tech industry has always had an unhealthy obsession with speed, but the emergence of AI has put this into overdrive.
When you are told to move faster (and faster) it’s all very exciting, but somehow all roads seem to lead to debugging a production outage at midnight. Just click accept, don’t worry about understanding. The promise is not what we thought it would be, and perhaps we should never have let it get this far.
We needed to fix the code review process years ago, having developers give code a quick scan over before it goes into production was never acceptable. Now we are talking about AI reviewing AI code to “remove bottlenecks” and the nightmare is likely to already be in full swing.
If AI writes the code, reviews the code and pushes the button into production we have an issue.
The Accountability Problem
At work we had a discussion about AI-generated code that a software engineer might not be happy with. Who is responsible for the code, they mused. I wondered whose name they thought would be at the top of the PR that Claude created.
We’re suddenly in an accountability vacuum. AI can generate code, but it can’t generate responsibility. It doesn’t sound possible, but when AI generates code people start to say the code isn’t theirs, as if generative AI were less a tool and more a sentient being that writes our code and takes responsibility for the consequences.
Software engineering has always required judgment. AI hasn’t removed that requirement. It has actually made judgment more important because the volume of generated code is increasing dramatically.
We are now desperate for software engineers who can actually
verify AI output
challenge assumptions
identify subtle defects
understand architecture
think critically
Ironically, these are exactly the skills many companies stopped training developers to develop in the first place. So it’s no surprise that we have a generation of developers who just like to push code, knowing they’re measured by the number of PRs they ship per day and per week.
The short-sightedness needs to stop.
Conclusion
The solution is as it’s ever been. We need people to take responsibility for their work, to make sure that what they do is of good quality.
We need people to train our software developers to that standard.
We need a training program which actually trains software developers.
It can’t be this hard, can it?
About The Author
Professional Software Developer “The Secret Developer” can be found on Twitter @TheSDeveloper and regularly publishes articles through Medium.com.
The Secret Developer suspects some production systems are now held together by AI, caffeine and one exhausted senior engineer.