On January 23, 1985, Apple introduced the LaserWriter printer; Aldus PageMaker followed that July, and the typesetting industry began to die. Not with a whimper, but with a flood of bad newsletters.

Within months, anyone with a Macintosh could set type. Church bulletins, school flyers, corporate memos — they all exploded with a dizzying mix of Zapf Chancery, Chicago, and Geneva. Designers had a name for the chaos: the ransom note effect. Documents looked like they’d been assembled from clippings of different newspapers, each word in a different typeface, each heading a different size. The tools were powerful. The taste hadn’t caught up.

Professional typesetters — people who’d spent years mastering kerning, leading, and the subtle art of making text disappear into meaning — were replaced almost overnight. As Frank Romano documented in WhatTheyThink, the new hires were often “recent high school graduates who received several weeks of training.” The craft didn’t die because people stopped reading. It died because the barrier to entry collapsed, and for a while, almost nobody could tell the difference between good and bad.

I think about that history a lot these days. Because we’re watching it happen again — not with typefaces, but with code.

[Image: side-by-side comparison of a chaotic 1980s desktop-published newsletter and messy AI-generated code]

“Forget That the Code Even Exists”

In February 2025, Andrej Karpathy — former director of AI at Tesla, OpenAI co-founder — posted a description of a new way of programming. He called it vibe coding:

“Fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

The idea was simple. You describe what you want in natural language. The AI writes the code. You don’t read it. You don’t debug it in the traditional sense. You just run it, see if it works, and if something breaks, you paste the error message back into the chat and let the model try again. No architecture. No code review. Pure vibes.
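
Here’s what that loop looks like in miniature. The prompt and the generated function below are invented for illustration, but the shape is faithful:

```python
# A hypothetical vibe-coding exchange, compressed into one file.
#
# You: "write a function that deduplicates a list of users by email"
# The model replies with something like this, and in pure vibe coding
# you run it without reading it:

def dedupe_users(users):
    seen = set()
    result = []
    for user in users:
        if user["email"] not in seen:
            seen.add(user["email"])
            result.append(user)
    return result

if __name__ == "__main__":
    demo = [{"email": "a@x.com"}, {"email": "b@x.com"}, {"email": "a@x.com"}]
    print(dedupe_users(demo))  # two users come back, looks right, ship it
```

The demo works, so you move on. Whether emails should be compared case-insensitively, what happens when a record is missing the key, which duplicate should win: nobody asked, because nobody read.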

The term caught fire. Collins Dictionary named it their Word of the Year for 2025. Tutorials flooded YouTube. Startups bragged about shipping entire products without a single engineer reading the codebase.

But here’s the part people forget: Karpathy himself walked it back. By late 2025, he declared vibe coding “passé” and called for “more oversight and scrutiny” of AI-generated code. The person who coined the term abandoned it within a year. That should tell you something.

The Distinction That Actually Matters

Addy Osmani, engineering lead at Google, drew a line that I think is the most important one in this entire conversation:

“Vibe coding is not the same as AI-assisted engineering.”

And he’s right. The difference isn’t just semantic — it’s structural. Vibe coding means rapid prototyping with low oversight. You’re moving fast, trusting the model, and hoping for the best. AI-assisted engineering means using the same tools within disciplined practices: code review, testing, architectural thinking, human judgment at every decision point.

The distinction matters enormously because collapsing the two into one category — “AI coding” — makes it impossible to have an honest conversation about quality. Saying “AI writes code now” is like saying “anyone can set type now” in 1986. It’s true. But it doesn’t tell you whether the output is any good.

(I’ve caught myself blurring this line, by the way. I’ll use Cursor or Claude Code to scaffold something quickly, and there’s a moment where I think “this looks fine” and almost move on without reviewing it. That temptation is the vibe coding gravity well. It takes real discipline to pull yourself back into engineering mode.)

What the Numbers Actually Say

Let’s talk about quality, because there’s data now — and it’s not flattering.

CodeRabbit analyzed thousands of pull requests and found that AI-generated code had 1.7x more issues than human-written code. Logic errors specifically? 75% more. Security vulnerabilities? A staggering 2.74x increase. These aren’t hypothetical risks. These are measured defects in real codebases.

Veracode’s GenAI code security report paints a similar picture: 45% of AI-generated code contains security flaws. Not bugs — security flaws. The kind that get exploited.
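
To make the category concrete, here’s one canonical member of it. The snippet is illustrative, my own construction rather than anything drawn from Veracode’s dataset, but it’s the shape of flaw these reports count:

```python
import sqlite3

# The generated version: reads cleanly, passes the happy-path test,
# and is injectable. Passing name = "' OR '1'='1" returns every row.
def find_user_vibe(conn: sqlite3.Connection, name: str):
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

# The reviewed version: a parameterized query, so the driver handles
# quoting and user input can't rewrite the SQL.
def find_user_reviewed(conn: sqlite3.Connection, name: str):
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

Both functions return identical results on normal input, which is exactly why this class of flaw survives a “run it and see if it works” workflow.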

And here’s the number that keeps me up at night: according to multiple surveys, only 48% of developers check AI-generated code before committing it. Half the people using these tools are treating them like the LaserWriter in 1985 — trusting that the output is professional quality because the tool is sophisticated.

The problem was never the tool. It was the assumption that access to professional tools makes you a professional.

That was true of desktop publishing. It’s true of code.

The YC Experiment

Now, before you think I’m just another grumpy developer yelling at clouds, let me complicate things.

Y Combinator’s Winter 2025 batch revealed that 25% of startups had codebases that were 95% AI-generated. Those companies were growing at 10% per week. Jared Friedman from YC said the code was largely produced by “highly technical” founders — people who absolutely could write the code themselves but chose not to.

That last part is critical. These weren’t non-technical founders blindly pasting prompts. They were experienced engineers making a deliberate trade-off: speed over craft, iteration over architecture. They understood what they were giving up. And in the context of an early-stage startup — where the product might pivot three times before finding market fit — that trade-off can be rational.

But YC judges companies on growth, not maintainability. Nobody’s checking back in two years to see if that 95% AI-generated codebase can still be modified by a human. The survivorship bias here is enormous. We hear about the ones that grew. We don’t hear about the ones that hit a wall of incomprehensible spaghetti and couldn’t ship a feature without breaking three others.

The Craft Tension

This is where the cultural debate gets genuinely interesting. Two pieces published in late 2025 stake out opposing positions, and I think both have a point.

Ivan Turkovic published the “Artisanal Coding Manifesto,” arguing that the craft of software engineering — the deep understanding of systems, the intentional architecture, the pride in clean code — is worth preserving. Not out of nostalgia, but because that craft is what produces software that lasts. Software you can maintain, extend, and trust.

Subbu Allamaraju countered with “The End of Artisanal Software Crafting,” making the case that the economics have permanently shifted. When AI can produce 80% of the code in 20% of the time, clinging to artisanal practices isn’t noble — it’s expensive. The future belongs to engineers who can orchestrate AI effectively, not to those who insist on hand-writing every function.

Who’s right? I think they both are, depending on what you’re building. A weekend side project? Vibe code it. A prototype to test a hypothesis? Go fast, break things. A medical records system? A financial platform? The control software for industrial equipment? You’d better have a human who understands every line.

The ransom note effect didn’t last forever in publishing. Eventually, people learned the rules. Designers emerged who combined the speed of digital tools with genuine typographic knowledge. The bad newsletters got better — or they disappeared. But the transition took years, and the amount of garbage produced in the meantime was extraordinary.

Osmani’s 80% Problem

There’s one more dimension to this that doesn’t get enough attention. Osmani describes what he calls “the 80% problem” in agentic coding: AI tools can get you roughly 80% of the way to a working solution, fast. That last 20%? That’s where the real engineering lives. The edge cases, the error handling, the performance tuning, the security hardening, the accessibility, the graceful degradation.
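
Here’s a sketch of that gap, with a hypothetical API (the endpoint and function names are mine): the first version is what you get in seconds, and the comments mark the 20% a human still has to supply.

```python
import json
from urllib.error import URLError
from urllib.request import urlopen

# The 80%: a generated helper that works on the happy path.
def fetch_profile(user_id: int) -> dict:
    with urlopen(f"https://api.example.com/users/{user_id}") as resp:
        return json.load(resp)

# The 20% the generated version never addressed:
#   - network failures and non-200 responses          (error handling)
#   - bodies that aren't JSON, users that don't exist (edge cases)
#   - validating an attacker-controlled user_id       (security)
#   - timeouts, retries, backoff under load           (performance)
def fetch_profile_hardened(user_id: int, timeout: float = 5.0):
    if user_id <= 0:
        raise ValueError(f"invalid user_id: {user_id}")
    try:
        with urlopen(f"https://api.example.com/users/{user_id}",
                     timeout=timeout) as resp:
            return json.load(resp)
    except (URLError, json.JSONDecodeError, TimeoutError):
        return None  # or log, retry, re-raise: a judgment call the model won't make for you
```

The hardened version isn’t clever. It’s just the part where someone had to think.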

The danger isn’t that AI can’t write code. It clearly can. The danger is that the 80% looks so convincing that people ship it as if it’s 100%. And in the short term, it might even work. The bugs are subtle. The security holes are invisible. The architectural debt accumulates silently — until it doesn’t.

Desktop publishing didn’t kill good design. But it buried it under an avalanche of bad design that took a decade to sort through.

I suspect AI-generated code will follow the same arc. The tools will improve. The practices will mature. People will eventually learn the difference between “it compiles” and “it’s correct.” But we’re in the ransom note phase right now, and pretending otherwise helps no one.

What This Means for You

If you’re a developer using AI tools (and you should be — I’m not arguing against the tools), here’s my honest take:

  • Use AI for acceleration, not abdication. Let it write the boilerplate. Let it suggest patterns. But read the code. Understand the code. Own the code. (A minimal version of that habit is sketched after this list.)
  • Know your context. Vibe coding a personal project on a Saturday afternoon is fine. Vibe coding a production system that handles real user data is negligent.
  • Invest in the 20%. The part AI can’t do well is the part that actually matters: architecture, security, edge cases, and the judgment calls that require understanding your specific domain.
  • Remember the typesetters. The professionals who survived desktop publishing weren’t the ones who refused to use computers. They were the ones who combined new tools with deep craft knowledge. Be that person.
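
Here’s the smallest version of that habit I can sketch. The function is hypothetical; the point is that the tests come from you, not from the model:

```python
# The model generated normalize_email; before committing it, you write
# the edge-case tests yourself. If you can't write them, you don't
# understand the code well enough to own it.

def normalize_email(raw: str) -> str:
    return raw.strip().lower()

def test_normalize_email():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    assert normalize_email("bob+tag@example.com") == "bob+tag@example.com"
    # Deliberately probe what the model was never asked:
    assert normalize_email("\tcarol@example.com\n") == "carol@example.com"

if __name__ == "__main__":
    test_normalize_email()
    print("ok")
```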

The ransom note effect was temporary. But the documents produced during that era were real, and some of them had consequences. Bad code has consequences too — and unlike a church bulletin with seven typefaces, bad code doesn’t just look ugly. It breaks.

The vibes are great for a Saturday hack. For anything else, I’d suggest you read the code.


Sources