I refuse to use AI coding tools. I don’t need to try them to know they’re garbage. I have instincts.
I tried ChatGPT back in 2022 and asked it to write something. It (obviously) got it wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I haven’t looked back. Why would I? It’s not like anything meaningful has changed about AI tools since then, right?
My coworker uses Claude Code now. She finished a project last week that would’ve taken me a month. I told her she’s “cheating.” She doesn’t understand that some of us have principles. Some of us believe in doing things the hard way (even when the easy way produces identical results).
“The code AI writes is full of bugs,” I always say. Unlike my code, which is also full of bugs, but those are artisanal bugs.
I copy from Stack Overflow constantly. That’s different from AI, though. The distinction is clear (well, at least in my head).
Will AI ever be good enough for me to try? Maybe. When it can read my mind, never make a single mistake, and personally apologize to me for threatening my sense of identity. Until then, I’ll pass.
Okay, let me drop the act.
Last week I wrote “AI Can Write Your Code. It Can’t Do Your Job.” The thesis: AI is changing some things, but the job isn’t going anywhere. Programming is a task; software engineering is a role.
To be honest, I was expecting some controversy. I figured people would push back on the optimistic part. “You’re naive, AI will replace us all!” That kind of thing.
That’s not what happened; the backlash came from the other direction. Engineers got mad at the “AI can write your code” part, which shocked me.
Here’s what I got:
“Hardly. It can assist a bit under constant supervision.”
“(…) it can’t write my code either.”
“AI can write my bugs for me.”
“Only if my code is supposed to be a hallucinated 5 unnecessary layers of abstraction that have 10 security holes.”
“Not really… most of what AI writes is trash…”
“It can maybe write code that I would have written 20 years ago when I just started.”
These are real engineers, the ones who should be most curious about new tools, the ones whose entire careers have been about learning and adapting. And yet the dismissiveness was, again, shocking. Not “it works for some things,” but flat-out: “can’t,” “trash,” “garbage.”
I think I get why this happens. Maybe they tried AI two or three years ago and it was genuinely bad for that one case. Maybe they’ve seen colleagues misuse it to ship garbage. Maybe it feels threatening to their identity: your expertise is wrapped up in being someone who can write code, and some tool threatens that? Of course you want to dismiss it.
Here’s the thing, though: what was true in 2022 isn’t true now. The gap between AI coding tools then and now is like IE11 vs. Chrome. Tools like Claude Code and Cursor have changed the game dramatically. They can now work across entire codebases, understand project context, refactor multiple files at once, and iterate on a task until it’s actually done. If your last serious attempt was more than six months ago, your opinion is, I’m sorry to say, outdated.
And look, I’m not saying AI tools are perfect. They’re not: they still produce bugs, they sometimes over-abstract, and they still hallucinate APIs that don’t exist (far less often than before, though). You still need to review everything they generate. But “imperfect” and “useless” are very different claims.
The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t. The first group is shipping faster, taking on bigger challenges. The second group is… not.
So here’s my ask: if you haven’t tried modern AI coding tools recently, try one this week. Not to prove it doesn’t work, but to genuinely find out what it can do.
If you’ve actually tried modern tools and they didn’t work for you, that’s a conversation worth having. But “I tried ChatGPT in 2022” isn’t that conversation.
