The AI Paradox


Ah, the sweet irony. Anthropic, the AI company swimming in billions of investor dollars, has one simple request for job applicants:

“Please, for the love of all that is human, do not use AI.”

Classic. It’s like a Michelin-starred chef banning knives in the kitchen, or Tesla asking employees to arrive by horse.

But we are here, and I guess we have to deal with it.

AI is Untrustworthy

In my opinion, Anthropic isn’t wrong. AI can’t be trusted, and it is helping people do things that perhaps they shouldn’t be able to, lowering barriers and quite probably “changing the world”.

They are an AI company that doesn’t want applicants using AI to craft a coherent job application, output that most applicants probably won’t even check for hallucinations.

The reasoning behind this is rather self-evident. We all want to see unassisted human writing because hiring an actual human still matters. As part of the hiring process, you will always want to gauge a candidate’s communication skills, and AI will make the good candidates indistinguishable from the poor (the great are still obvious, though).

I think we are well within our rights to ask the pertinent question, though. If AI tools are good enough to do a junior programmer’s job, are we really expecting applicants to forgo them just to write a paragraph about “why I want to work here”?

The Contradiction

Anthropic is not alone in this contradiction. AI has been touted as the ultimate productivity tool, capable of automating the drudgery of writing, coding, and even making hiring decisions. Yet, when it comes to actually using AI in their own recruitment process, companies suddenly slam the brakes. It’s a fascinating double standard and one that suggests even AI builders don’t fully trust what they’ve unleashed.

It’s also a little suspicious. If an AI-powered cover letter is indistinguishable from a human-written one, doesn’t that imply the AI is doing a good job? And if AI-written applications are worse, then wouldn’t the bad ones be easy to filter out? I can’t help but think they should be using AI to filter these applications too.

Anthropic’s stance is an admission. AI-generated writing is a problem they’d rather not deal with.

A Pattern of Tech Hypocrisy

The tech industry has always been full of companies that say one thing and do another.

FAANG used to be the pinnacle of software jobs. Now? They’ve become bureaucratic nightmares where innovation goes to die, and the best coders avoid their corporate ways.

Agile was supposed to make us faster, but somehow, it’s turned into a mess of endless meetings and micromanagement.

Remote work was the future. At least it was until companies realized they didn’t like giving employees too much freedom. Who cares if people are more productive; we need to see them while they work!

And finally, AI is the next big thing, except when it’s used in ways that might make hiring managers uncomfortable.

It’s the same old story. Tech companies preach disruption and innovation but recoil when it affects their own workflows.

The Future of Hiring

There’s an unspoken reality at play here. 

AI is creeping into every corner of work. We used to endure the entertainment of auto-generated emails; now we have the thrill of AI-written ones. We always had coding and managers wanting more; now we have AI-assisted coding putting more anti-patterns into our codebases. And instead of unresponsive recruiters, we now have machine learning-driven hiring decisions.

It’s all here, but without the requisite checks and balances, AI is also reducing the value of human work. As we degrade human work, we degrade those who do it, and that is bad for everybody. It’s a growing fear, and one I see playing out both at work and socially.

At work I see people coding with AI but not really understanding the code they produce. They introduce hard-to-find issues into the codebase, and then someone more senior has to iron out the problems created by using AI as an unrestricted shortcut. Emails (the few I read at work) are starting to carry that smell of automation and lack any human touch. It’s partly sad and partly a real problem, because we are all becoming more wary about the future of our jobs, and of work in a wider sense.

Anthropic’s policy is a symptom of fear. They don’t want to admit that AI could make job applications easier, because that would raise the uncomfortable question: what happens when AI makes the job itself easier? Or worse, obsolete?

Conclusion

I wanted to leave this post on a positive note. Perhaps AI will make short working weeks a thing. It might make us all richer. It might make us all safer, and happier.

The problem is that I remembered this whole thing is run by the tech industry. I think I know the prevailing attitude to AI in C-suites up and down the land. It’s great when it makes them money. It’s a problem when it makes them think. AI needs to be used with care and consideration, two qualities that are sorely missing in almost all of our workplaces.
