The Productivity Numbers Are Real. The Quality Question Isn't Settled.

ai-coding · developer-productivity · software-quality

A Business Insider report covering 700 companies confirms what most developers already feel: AI coding tools roughly double output with little drop in quality. These were real production environments, not controlled demos.

The doubling is real. I’ve felt it personally — things that used to take a full afternoon now take an hour.

But “little quality drop” is doing a lot of work in that sentence, and it depends entirely on what you’re measuring and when.

If quality means tests pass and the feature ships — yes, AI-assisted code holds up. That’s probably what most of those 700 companies measured. PRs merged, bugs filed, production incidents. Reasonable metrics.

If quality means someone can maintain this in 18 months — the study can't tell you that yet. Not enough time has passed for anyone to measure it.

Here’s what I suspect happens at the 12-18 month mark: developers hit AI-generated code they didn’t fully absorb when it was written, the original author has moved on (or the code came from five different AI sessions), and the mental model needed to change it confidently just isn’t there. Not because the code is bad. Because the understanding was never built.

The speed gain is structural. The maintenance question is temporal. Studies that measure output now will look great. The interesting studies will be the ones that follow up.

I’m not arguing against AI coding tools — I use them daily. But “nearly doubled output with little quality drop” tells you about the sprint. The race we’re really watching is the 18-month marathon nobody has data on yet.