A few months ago, I asked three different AI models to design a landing page for a fictional SaaS product. I gave each one the same brief: “Modern, clean, professional. SaaS analytics dashboard.” All three returned something. All three were competent. And all three looked like the same page.

Same hero section with a gradient background. Same three-column feature grid with rounded icons. Same testimonial carousel. Same “Start your free trial” button in the same shade of blue. Technically flawless. Spiritually identical. If you squinted, you couldn’t tell which model made which.

That’s the ceiling. Not of capability, but of mediocrity.

[Image: three nearly identical AI-generated landing page mockups illustrating design homogeneity]

The ceiling of mediocrity

Designative published a piece in early 2026 that gave language to something I’d been feeling but couldn’t articulate. Their argument: AI brings the cost of average work to zero. It lifts the floor — nothing is truly terrible anymore, because even a lazy prompt produces something passable. But it simultaneously installs a ceiling. When everyone uses the same models trained on the same data to generate the same patterns, differentiation dies.

The floor rises. The ceiling drops. And the space in between — that vast middle where most commercial work lives — gets compressed into a band of pleasant, functional sameness.

When everyone can produce “good enough,” good enough stops being good enough.

This is already visible everywhere. Pull up Product Hunt on any given day and count how many new tools look like they were generated from the same Figma template. (They probably were.) The tools are converging, and the outputs are converging with them.

The scarce resource used to be execution. Could you actually build the thing? AI has made execution abundant — not free, but cheap enough that it’s no longer the bottleneck. What’s scarce now is something harder to automate: taste.

What taste actually means here

I don’t mean taste in the pretentious, design-school sense. I mean something more fundamental: the ability to decide what matters.

As Product Power put it: AI won’t replace taste. The models can generate infinite options but they can’t tell you which option is right for your context, your users, your constraints. That judgment — the filtering, the prioritizing, the saying-no — is the work that remains when the how gets automated.

I think there are three levels to this.

Level 1: Choosing what’s worth solving. This is product taste. Not “can we build it?” but “should we?” Every developer has watched a team spend months building a feature nobody asked for, shipped with impeccable code quality, used by exactly nobody. AI makes building faster; it doesn’t make building the wrong thing faster any less wasteful. If anything, it makes it worse — you can now build the wrong thing in a weekend instead of a quarter, which means you can be wrong more frequently.

Level 2: Deciding what “good enough” means in context. This is engineering taste. A prototype for a pitch deck doesn’t need the same reliability as a payment processing system. A personal blog doesn’t need the same architecture as a social network. Knowing where you are on that spectrum — and resisting the urge to over-engineer or under-engineer accordingly — is a judgment call that AI consistently gets wrong. Ask a model to build something and it’ll give you enterprise patterns for a weekend project, or prototype-quality code for production infrastructure. It doesn’t know the difference unless you do.

Level 3: Governing what tools learn to prefer. This is strategic taste, and nobody’s talking about it enough. The models were trained on existing patterns. They amplify what already exists. If the training data is full of bloated React SPAs, that’s what you’ll get. Whoever decides what “good” looks like in the training data is shaping output for millions of developers downstream. That’s enormous leverage, and right now it’s mostly exercised by default rather than by intention.

The craft debate, revisited

Earlier in this series, I mentioned the tension between Ivan Turkovic’s “Artisanal Coding Manifesto” and Subbu Allamaraju’s counter-argument that the era of artisanal software is ending. Both pieces stuck with me, and I’ve been turning them over since.

Turkovic’s position is essentially this: code should be beautiful. Not just functional — intentional, maintainable, elegant. The craft matters because software that’s built with care lasts longer, breaks less, and can be understood by the next person who touches it. He’s arguing for software as a practice, not just a product.

Subbu pushes back hard. In his view, artisanal habits become bottlenecks when the economics shift. If AI can produce serviceable code at a fraction of the cost and time, insisting on hand-crafted perfection isn’t principled — it’s inefficient. The future, he argues, belongs to engineers who orchestrate AI effectively, not to those who pride themselves on writing every function by hand.

“Artisanal habits become bottlenecks when the economics shift.” — Subbu Allamaraju

Here’s what I think after sitting with both arguments. They’re not actually disagreeing about taste — they’re disagreeing about where taste gets applied.

Subbu is right that for commodity software — the CRUD apps, the internal tools, the glue code that holds systems together — artisanal craft is an indulgence. Let the machine write it. Save your energy. But Turkovic is right that for software that matters — the products people use daily, the systems that need to evolve over years, the interfaces that shape how people think about a problem — craft is the only competitive advantage left. When the baseline is mediocre-but-functional, the way you stand out is by being genuinely, intentionally good.

Industrial production for commodity work. Craft for differentiated work. Both, simultaneously, in the same organization, sometimes in the same codebase. That’s the reality. And knowing which mode to operate in? That’s taste.

The abstraction ladder

There’s a historical frame that helps here. Software development has always been a story of climbing an abstraction ladder. Assembly gave way to C. C gave way to higher-level languages. Languages gave way to frameworks. Frameworks gave way to no-code platforms. Each rung shifted the developer’s work upward — less time on how, more time on what.

At every step, people worried that the higher abstraction would make the lower one obsolete. Assembly programmers thought C was cheating. Framework skeptics thought Rails couldn’t produce production-quality software. None of those lower layers disappeared. They receded — practiced by fewer people, for higher stakes, at greater depth.

AI might be the rung where, for most developers, only “what” remains. The “how” doesn’t cease to exist — it becomes specialized. And the rest of the profession shifts its energy to decisions about what to build, for whom, and why.

That shift sounds liberating. It is, in theory. In practice, it means the thing you’re evaluated on changes. You’re no longer measured by the elegance of your implementation. You’re measured by the quality of your decisions.

Are you ready for that? Most of us aren’t. We’ve spent careers building identity around our ability to write code. The idea that the code is the least interesting part of the job is… disorienting. (I’ve felt it myself, staring at an AI-generated component that works perfectly but that I didn’t write, wondering what exactly I contributed. The answer is: I decided what it should do and how it should feel. That’s the contribution. It just doesn’t feel like enough, some days.)

The personal test

Here’s where I’ll get specific, because abstractions are easy and examples are hard.

I built this blog with Astro. It uses two fonts: Lora for headings, Montserrat for body text. The content max-width is 720 pixels. There’s no JavaScript framework, no CSS library, no component system beyond what Astro provides natively. The entire dependency list is four packages.

AI could have generated every line of code. And for some of it, AI did help. But the decisions — those were mine. The 720px content width that creates the reading rhythm I wanted. The pairing of a serif heading font with a sans-serif body font, because I wanted the warmth of Lora’s ligatures against the clarity of Montserrat’s geometry. Four dependencies, not because minimalism is trendy but because I’ve maintained enough bloated projects to know where that road leads.
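For what it's worth, those decisions reduce to remarkably little code. Here's a sketch of what they look like in CSS, using the values stated above; the selectors and the small-screen gutter are illustrative, not copied from the actual stylesheet:

```css
/* Typography: serif headings against a sans-serif body.
   Font names are from the post; selectors are illustrative. */
h1, h2, h3 {
  font-family: "Lora", serif;
}
body {
  font-family: "Montserrat", sans-serif;
}

/* Reading measure: the 720px content column, centered. */
main {
  max-width: 720px;
  margin-inline: auto;
  padding-inline: 1rem; /* assumed gutter for narrow viewports */
}
```

A dozen lines, and most of the visual identity of the site is in them. That's the point about taste: the leverage is in the values chosen, not in the volume of code.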

None of these decisions are objectively “correct.” That’s exactly the point. Taste isn’t about finding the right answer. It’s about having a coherent point of view and executing on it consistently. The code is secondary to the vision.

When AI generates a blog, it produces something competent and anonymous. When a person builds one with intention, you can feel the choices. That gap — between competent and intentional — is the new competitive territory.

What architects already know

InfoQ’s piece on architects in the AI era crystallized this for me: as AI handles more implementation, the architect’s role becomes more important, not less. Someone has to decide the system boundaries, the trade-offs, the “how much complexity is justified here” questions that AI tools blissfully ignore.

With AI now writing roughly 30% of production code, Human-in-the-Loop is becoming not just a best practice but a legal and compliance requirement.

Osmani’s line still applies: vibe coding is not the same as AI-assisted engineering. Vibe coding is accepting the first output. Engineering is knowing which output to accept, which to modify, and which to reject entirely. That discrimination — in the original sense of the word — is the skill that matters now.

And it’s not automatable. Not because AI couldn’t theoretically learn taste, but because taste is inherently contextual. What’s tasteful in a startup prototype is tasteless in a banking system. What’s elegant for a consumer app is overengineered for an internal tool. There’s no general model for it because there’s no general answer.

Dorothy Vaughan, one more time

I started this series with Dorothy Vaughan — the NASA mathematician who taught herself FORTRAN before anyone asked her to. I keep coming back to her story because people draw the wrong lesson from it.

The common takeaway is: learn the new tools early and you’ll be fine. That’s true but incomplete. Vaughan didn’t just learn FORTRAN. She understood what the calculations meant. She knew which trajectories mattered, which error margins were acceptable, which shortcuts would get an astronaut killed. The programming language was a vehicle. The judgment was the destination.

Tool proficiency is table stakes. It always has been. The person who can prompt an AI fluently but doesn’t know what’s worth prompting for is just a faster version of the person who could type fast but had nothing to say. The moat isn’t the tool. The moat is knowing what the tool should be doing.

That’s taste. That’s judgment. That’s the thing that doesn’t get cheaper when code gets cheaper.

And if you’re wondering whether your taste is good enough — whether your judgment is sharp enough to matter in this new landscape — here’s my honest answer: I don’t know. Neither is mine, probably. But I know it’s the thing worth developing. Not the prompting skills. Not the framework knowledge. Not the ability to ship faster. The ability to decide what’s worth shipping at all.

The cost of building has never been lower. The cost of building the wrong thing has never been higher. The only way to tell the difference is taste.


Sources