Why AI Makes You Need More Engineers, Not Fewer
Earlier this year, a founder emailed me three sentences.
"We cut our engineering plan from four engineers to two. We're using AI for everything else. This is going fine."
Three months later, he emailed again. Longer this time. The product was running. Features were landing. But nobody knew how half of it worked. The engineer they'd hired first — a recent bootcamp grad with good hustle and a Cursor subscription — had shipped fast. Really fast. And the codebase was now something no one wanted to touch.
He wasn't looking for sympathy. He was asking if we could help clean it up.
I've gotten that question more often than I expected since starting Exit Code.
The conventional math is wrong
The dominant narrative right now — the one I hear from founders, advisors, and VCs who don't work closely with engineering teams — goes like this:
AI makes developers more productive. More productive developers means you need fewer of them. Therefore: smaller teams, lower burn, faster shipping.
It sounds right. The first part is true. The rest of it falls apart when you test it.
Here is what actually happens when you act on that logic:
You hire fewer engineers. You optimize for availability and cost over seniority, because the assumption is that AI closes the gap. You ship faster than you expected for the first six to ten weeks. Then something breaks that your AI-assisted engineer doesn't know how to diagnose. Or you need to scale a system that was built to demo, not to run. Or a new hire joins and can't understand what they've inherited. Or you hit a security issue and nobody on the team has the pattern recognition to know what they're looking at.
None of these are hypothetical. I've watched all of them happen, in different sequences, to different companies, in the last eighteen months.
AI didn't make the team more productive at that point. It made them faster at building the wrong thing.
Why capability expands scope
Here's the mechanism nobody talks about.
Before AI tooling, what you could build was constrained by what your team could write. Two engineers had a ceiling. You scoped your product to fit under it.
AI raised that ceiling. A small team can now attempt things that used to require five or six people to execute. And founders — reasonably — respond by attempting those things. You're not shipping a three-screen MVP anymore. You're shipping a seven-screen product with integrations, webhooks, a data model with real complexity, and a customer-facing API.
That's not a smaller engineering problem. That's a larger one.
The amount of surface area that needs to be understood, governed, and maintained has grown — not shrunk. What's changed is who can write the first version of it. What hasn't changed is who needs to be responsible for the second, third, and tenth version.
Ambition scales with capability. And ambition creates complexity that doesn't disappear because it was generated quickly.
What you need less of
There is a real reduction in this story. It's just not where most people are looking.
You need fewer engineers doing work that AI does well: scaffolding, boilerplate, CRUD, translating specifications into functional code, writing tests for well-defined behavior, producing documentation from existing code. That category of work — the mechanical, repeatable part of software development — is genuinely, substantially faster with modern AI tooling.
A team that understands this doesn't staff for those tasks anymore. They staff for the work AI does poorly.
AI does poorly at: deciding what to build. Knowing when an architecture decision will be painful at scale. Recognizing when an edge case isn't actually an edge case. Making judgment calls under ambiguity. Debugging production systems that were built by other AI sessions six months ago. Knowing when "fast" is the wrong optimization.
That's not a small list. That's the majority of what senior engineers actually spend their time on.
Which means the team you need now is smaller on headcount but higher on seniority. Not because you're cutting corners — because you're concentrating on the thing that's still scarce and still matters.
The 1-junior-plus-AI bet
The bet a lot of founders are taking right now looks like this: one junior engineer with strong AI tooling, moving fast, building features. Low cost. Keeps the runway intact. Ship first, refactor later.
I understand the appeal. I also understand why it fails in most cases.
Here's the failure mode: a junior engineer using AI doesn't know what they don't know. AI fills in their knowledge gaps with confident-sounding code. The output looks complete. It passes the tests they wrote. It does what they asked it to do. And it carries assumptions they didn't know they were making.
Senior engineers catch these before they're load-bearing. They recognize the pattern. They've seen what happens when you build on that foundation. They know which shortcuts are fine and which ones are the thing you'll be explaining to your next hire.
The gap between "code that works" and "code that a team can build on" is not something AI closes. It's the thing AI makes easier to ignore — until you can't.
The math that actually holds
Here is what the correct model looks like:
One senior AI-native engineer regularly produces what a pre-AI team of two or three used to. Not in raw line count — that's the wrong unit — but in decisions made, systems designed, and product shipped at acceptable quality.
Two senior AI-native engineers, working together on a product with real scope, move as fast as a team of four or five in the pre-AI baseline and produce a codebase that can be maintained.
That's not a case for fewer engineers. That's a case for a different kind of engineer deployed at higher leverage.
The error is treating "faster output" as permission to hire cheaper. The engineers who produce that faster output are not cheaper than average. They are senior engineers who have spent years building intuition that AI cannot replicate — and they happen to also know how to use the tools.
If you hire around that and assume AI will compensate, you are outsourcing your judgment to a system that doesn't have any. The velocity is real. So is the bill when you need to fix what it built.
What founders should actually hear
The question isn't whether to use AI. Every engineering team is using it, and the ones that aren't are losing.
The question is whether you have someone on your team with the seniority to govern what AI produces.
AI is a force multiplier. Force multipliers amplify what's already there. If what's there is strong judgment and production experience, you get a small team that moves like a larger one. If what's there is enthusiasm and tools but not depth, you get a fast-moving team that builds a progressively harder problem.
The promise of AI in engineering is real. The version that's actually working for early-stage companies is: senior engineers who use AI as leverage, not junior engineers who use AI as a substitute for experience.
You don't need fewer engineers. You need the right ones — and fewer of those go further than you think.
"Force multipliers amplify what's already there. Make sure what's there is worth amplifying."
Exit Code places senior, AI-native engineers with early-stage startups. Not to fill headcount. To give you the one or two engineers who build at the quality and pace that lets everything else scale. If that's the conversation you're in, I'm easy to find.