

Software developer and data geek with 18+ years delivering web, mobile, and defense systems that ship to production. My focus is on analytics platforms, APIs, and developer tooling. My open-source work ranges from compilers and automation frameworks to GIS data products. I weave AI-assisted workflows into day-to-day engineering to accelerate delivery and quality.
I've been writing code for over 20 years, and I've used every productivity tool that came along: better IDEs, better frameworks, better testing libraries. None of them changed my output significantly (maybe 10-20%). There's a running joke in the vim community about spending hours tuning your config to save 4 seconds a day, and honestly that's been the story of developer tooling for as long as I can remember. xkcd even made a chart for it: map how often you do a task against how much time you'd save, and you can figure out whether automating it is worth the effort. The answer, almost always, is no. The math just doesn't work out.
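The chart's arithmetic is simple enough to sketch. This is a back-of-the-envelope version with illustrative numbers (the function name and the three-hour config-tuning figure are my own, not from the comic), using the comic's five-year horizon:

```python
# Back-of-the-envelope version of the xkcd "Is It Worth the Time?" math.
# The 5-year horizon matches the comic; the other numbers are made up.

def seconds_saved(shave_seconds: float, times_per_day: float,
                  horizon_days: float = 5 * 365) -> float:
    """Total time saved over the horizon if each run of the task gets faster."""
    return shave_seconds * times_per_day * horizon_days

# The classic vim joke: hours of config tuning to save 4 seconds a day.
saved = seconds_saved(shave_seconds=4, times_per_day=1)
cost = 3 * 3600  # say, three hours spent tweaking the config
print(f"saved {saved / 3600:.1f}h over 5 years vs {cost / 3600:.0f}h to automate")
```

Four seconds a day over five years is about two hours saved, against three hours of tuning: the automation loses, which is the usual verdict the chart hands down.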
AI broke that chart. It collapsed the "time to automate" column to near zero, which means optimizations that were never worth the setup cost are suddenly free. I've written before about why I don't think AI is our competitor, but that was the philosophical argument. This is the practical one. Over the past two years, AI has changed my output by several multiples, not the 20% I got from every tool before it. But the benefit wasn't evenly distributed. The more I already knew about a domain, the more AI amplified my work, so the changes are more pronounced in my hobby projects – where I'm the only cook in the kitchen and know when AI is bullshitting me (it also helps that I'm not as anal about the quality of my code when I'm the only one contributing to it). There's a hierarchy I keep seeing play out:
novice < AI < expert < novice + AI < expert + AI
The interesting part isn't the middle; most people already assume an expert beats raw AI output. It's the ends. A novice with AI leapfrogs a bare expert, but an expert with AI compounds in a way that nothing else in that chain does. I've seen this play out in software, in real estate analysis, and in writing, and I think it generalizes to any knowledge work. The xkcd chart has an axis people ignore: knowing which tasks are worth automating in the first place. An expert doesn't just use AI faster; they know when to push back, when AI's suggestion is confidently wrong and the right move is to throw it out and do the thing by hand. That judgment comes from battle scars, and it's what employers still pay a premium for, because AI lacks it.
Software developers are bricklayers. We lay bricks (code) to build structures (software). AI has made it possible to lay bricks much faster, call it 5-10x faster. But those bricks are of inferior quality.
For small buildings (a simple script, a landing page, a basic CRUD app), this doesn't matter. The structure is small enough that slightly crooked bricks won't cause problems. The building holds up fine, and you got it done in a day instead of a week. This is why "vibe coding" works for simple projects. I've seen people with no programming background ship functional apps in a weekend using nothing but AI. For that use case, AI is genuinely a superpower.
But for larger structures (complex backend systems, distributed architectures, anything that needs to scale), those inferior bricks start to compound. Each slightly-off brick makes the next layer a little more crooked. Before long, you've got a Jenga tower that looks impressive from the outside but wobbles when you touch it. AI can write spaghetti code too, and will gladly do so without supervision. To make matters worse, it's really good at patterning off good code without understanding the nuance, producing spaghetti code disguised as good code, complete with polymorphism and other compartmentalization techniques you rarely see in regular spaghetti code. The result is that it's not always obvious to either the AI or the rookie dev why the AI-written code is buggy.
An experienced developer knows tricks that go beyond individual brick quality. They know how to design the foundation so that imperfect bricks still result in a stable structure. They know architectural patterns that distribute load. They know how to reinforce the structure. But the biggest leverage isn't in fixing bad bricks after they're laid. It's in telling AI "build a wall this shape that supports this weight" and letting it figure out which bricks go where. That's the difference between giving AI imperative instructions vs. declarative constraints: "lay this brick here, then that one there" vs. "here's what the finished structure needs to look like, here's what it needs to support, go." The tighter the constraint, the less it matters that individual bricks are crooked, because the structure they're going into was designed to handle it. If AI managed to put together a level surface out of two crooked bricks by aligning them perfectly, more power to it.
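One concrete way to hand AI a declarative constraint instead of imperative steps is to pin down the finished wall as executable checks and let it fill in the bricks. A minimal sketch, with hypothetical names; imagine the function body is AI-written and the checks are what you actually specified:

```python
# Declarative constraints vs. imperative instructions, sketched in code.
# Pretend dedupe_preserving_order's body came from AI; check() is the
# "shape the wall must take" that we wrote ourselves.

def dedupe_preserving_order(items):
    # An AI-generated implementation: any body that passes check() is fine.
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def check(impl):
    """Executable constraints on the structure, not brick-by-brick steps."""
    assert impl([]) == []                      # empty in, empty out
    assert impl([3, 1, 3, 2, 1]) == [3, 1, 2]  # first occurrence wins
    assert impl([1, 1, 1]) == [1]              # duplicates collapse

check(dedupe_preserving_order)
```

Whether the implementation uses a set, a dict, or something stranger is the AI's business; the constraints are ours, and a crooked brick that still satisfies them is a brick we can live with.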
There's a trade-off, and it's a sneaky one. The same leverage that makes AI powerful makes it easy to coast. If AI handles the brick-laying and the output looks good enough, the temptation is to stop paying attention. I've caught myself doing this — accepting AI's output without scrutinizing it because it looked reasonable, only to find a subtle bug later that I would've spotted immediately if I'd been paying attention. Expertise stays sharp through use. Autopilot is comfortable, and comfort is where skills go to die.
The experts who get the most out of AI aren't the ones using it to do less. They're the ones treating it as a new skill to master, with its own nuances, failure modes, and tricks. Knowing when AI is confidently wrong, knowing how to constrain problems declaratively, knowing which output to trust and which to throw out: that's a skill set that barely existed two years ago, and it compounds on top of existing domain expertise. With raw output this easy to generate, the best use of human skill is no longer producing output, but pruning it.