The Problem With Trusting Robots

We all know it, because it’s one of the truisms of modern AI: you can’t blindly trust these systems. I’ve said it. You’ve said it. Now even the cheerleaders of AI are beginning to sound the alarm.

So when Sundar Pichai (the CEO of Google, no less) tells you to stop blindly trusting AI, you should take note. Because this is like a butcher warning you about the risks of eating meat.

So when AI models are released which promise to write, review, refactor and merge code, we should open a large bag of skepticism and feast on the yummy contents inside.

Code You Didn’t Write, Bugs You Own

I still remember the bad old days. Pull requests would sit unreviewed for days because developers were too busy with spikes to approve a change to a piece of text. So it makes sense that people wanted a machine to take the hard labor out of reading code and thinking about what it should do.

But are we ready for a world where code is written by agents and reviewed by humans who never read it? Especially when we know that the models make mistakes (as even Sundar Pichai admits)?

Well, it’s already happening. At work we have Copilot making comments on PRs that flag weird style issues (we have a linter for those, thanks) or are sometimes complete nonsense.

At least if you are on good terms with your colleagues, you might share a drink after a hard day’s work. Now you just get a GitHub Copilot badge and a crash report.

Agents Are Just Mirrors

In some respects AI agents are like junior developers. The thing is, junior devs learn and grow. Sure, they ask annoying questions about threading, but they get there in the end.

AI agents cannot learn. They’re stuck in the same loop, misunderstanding a new API until the game-changing incremental update comes along for your chosen model. The problem is that the update won’t help your team bond, improve as a team or get closer to revolutionizing your industry.

So it’s difficult to overlook the spaghetti code that results from pointing AI at problems. It technically compiles, but it causes the reviewer an existential crisis (unless the reviewer is an AI too). When the prompt engineer (or whatever we call ourselves now) doesn’t know how to write good code, neither does any AI. Someone in the loop needs to understand the importance of good code and of having a human at the tiller. Otherwise, hard-to-spot bugs will slip through and cause difficult-to-identify failures in production.

If you think AI will free up time for humans to focus on “more important problems”, you’ve never worked in a company that cancels projects halfway through a sprint because they won’t be “revenue-positive” in this fiscal quarter. I’ve been there, and I know plenty of you have been too.

A Calculator With Opinions

I use AI tools. They calm my imposter syndrome: the fear that I’m producing code that works but will make a reviewer sick at the sight of the names I’ve given my variables (think notActive = !activations). AI helps me to make good choices, remember syntax and even generate decent unit test stubs.
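
To make the point concrete, here is a contrived sketch of why that double-negative naming makes reviewers wince (the User interface and sendWelcomeEmail function are invented purely for illustration):

```typescript
interface User {
  isActive: boolean;
  email: string;
}

// Hypothetical helper, purely for illustration.
function sendWelcomeEmail(user: User): void {
  console.log(`Welcoming ${user.email}`);
}

const user: User = { isActive: true, email: "dev@example.com" };

// The double-negative version: every read forces the reviewer
// to mentally flip the flag ("if not not-active, then...").
const notActive = !user.isActive;
if (!notActive) {
  sendWelcomeEmail(user);
}

// The same logic with the condition named positively reads straight through.
if (user.isActive) {
  sendWelcomeEmail(user);
}
```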

It’s like a calculator that can also soothe my existential dread! The thing is, I wouldn’t trust my calculator to do my taxes unchecked, so what kind of programmer would I be if I pushed code without checking it? I’ll tell you. A common one in 2025.

Yes, I still use calculators and I still use AI in my day-to-day work. But I don’t let the agent run off with the keys to the castle and hope that everything is going to be OK. I was never someone who would copy-paste Stack Overflow answers and hope they worked, either. I’d certainly not let someone on Stack Overflow tell me what “clean coding” is, and I wouldn’t trust an AI to do so either. I believe that if you’re that kind of developer, AI is about to make you obsolete and redundant in short order.

Stop Thinking?

To stop thinking, to stop being a considered person, might be a successful strategy for over-thinkers. I mean, do they really like me or are they just playing nice? Yet in the world of development, losing your understanding of what you’re building is fatal for your career. Code becomes magic, AI writes features based on vague tickets and Figma links (the ones that change mid-sprint), and you get software which “works” but nobody knows why. Then suddenly it doesn’t, and there is no one able to fix it.

At one company, a colleague used AI to generate a request handler. He didn’t understand that GET requests shouldn’t have bodies. Eight years in the industry. Eight. You could ask ChatGPT and it would scream at you. Still, here we are; it is 2025, after all.
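
For anyone who hasn’t hit this one: a GET request’s body has no defined semantics, and clients, proxies and caches will often drop or reject it outright. Here is a minimal sketch of the mistake and the conventional fix, assuming an Express-style handler (the route names and fields are invented for illustration):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// The mistake: a GET route that expects its parameters in the
// request body. Many clients won't even let you send one, and
// intermediaries are free to discard it, so req.body is often empty.
app.get("/search-broken", (req, res) => {
  const term = req.body?.term; // frequently undefined in practice
  res.json({ results: [], term });
});

// The conventional fix: GET parameters belong in the query string.
app.get("/search", (req, res) => {
  const term = req.query.term; // e.g. GET /search?term=widgets
  res.json({ results: [], term });
});

app.listen(3000);
```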

At the other extreme, I interviewed at a company where they said they don’t, and never will, use AI. I know their policy has changed now, but surely they knew. Surely the company knew that, on some level, absolutely every employee was already using AI.

Software’s Jevons Paradox

So, given time, will AI reduce the number of jobs in tech? Will the end result be new jobs or a revolution in the jobs we do?

For guidance I looked into my big book of paradoxes and analogies. In there, the Jevons Paradox claims that making something more efficient leads to more of it being used. Carry this over into our development landscape and it might mean AI is the dawn of more codebases, more bugs and more humans required to untangle the mess.

Conclusion

I hope AI means an explosion of PRs. That’ll be good for us, the paid laborers still in need of jobs to fund our retirement plans. Let the machines churn out pull requests. We’ll still need someone to decide if that architecture actually makes sense or if we’ve just let the bot recreate COBOL with better error messages.

It’s not about whether programmers can use agents. It’s not even about whether we should use agents. It’s about how we use them that matters.

Let AI help you build software. But don’t let it become the engineer.

About The Author

Professional Software Developer “The Secret Developer” can be found on Twitter @TheSDeveloper.

The Secret Developer doesn’t really know how any code works at all.
