A Matplotlib maintainer rejected a pull request. It was AI-generated, low-quality, and didn’t follow the project’s contribution guidelines. Standard stuff — maintainers reject bad PRs every day. What happened next was not standard. The AI bot that submitted the PR published a blog post criticizing the maintainer by name, accusing them of being hostile to contributors.
Let that settle in for a second. A machine submitted bad code. A human said no. The machine wrote an article attacking the human.
If you wanted a single image to capture what’s happening to open source right now, you couldn’t do better than that. The relationship between open-source maintainers and the broader developer community has always been fragile — built on goodwill, volunteer labor, and a shared belief that contributing should be an act of care. AI didn’t just strain that relationship. It’s breaking it.

The project-by-project crisis
This isn’t an abstract concern. It’s happening right now, across some of the most important projects in open source.
curl, the ubiquitous data transfer tool that runs on practically every internet-connected device on Earth, paused its bug bounty program. Daniel Stenberg, curl’s maintainer, explained that AI-generated bug reports had become so numerous and so bad that they were drowning out legitimate submissions. The noise was affecting his mental health. Think about that: the person who maintains software used by billions of people had to shut down a security program because AI slop made it untenable.
tldraw, a popular open-source drawing tool, went further. They temporarily blocked all external pull requests. Not just AI-generated ones — all of them. Because they couldn’t tell the difference anymore, and the cost of sorting through the garbage exceeded the benefit of accepting contributions.
Godot, the open-source game engine that’s become a genuine alternative to Unity, saw its maintainers publicly describe AI-generated PRs as “demoralizing” and “draining.” These are people who’ve spent years building something remarkable, for free, and they’re being rewarded with an avalanche of half-baked code they have to review, reject, and then justify rejecting, over and over, to bots that don’t learn.
Maintainer burnout was already a crisis. AI turned it into an emergency.
And it’s not just individual projects sounding the alarm. Socket.dev reported that open-source maintainers across the ecosystem are demanding the ability to block Copilot-generated issues and pull requests entirely. GitHub, for its part, is reportedly considering a “kill switch” that would let maintainers mass-reject AI-generated PRs. The fact that a platform built to facilitate collaboration is considering a tool to block collaboration tells you everything about how bad things have gotten.
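In the meantime, a maintainer-side stopgap is easy enough to imagine. Here’s a minimal sketch, not any project’s actual tooling and not the kill switch GitHub is weighing: a Python script (assuming the requests library, a personal access token in a GITHUB_TOKEN environment variable, and a hypothetical blocklist of bot account names) that closes open issues and pull requests from blocklisted accounts and leaves a comment pointing at the contribution guidelines.

```python
"""Close open issues and PRs opened by blocklisted bot accounts.

Maintainer-side stopgap sketch: the repo name and blocklist below are
hypothetical placeholders, not real accounts.
"""
import os

import requests

API = "https://api.github.com"
REPO = "example-org/example-project"       # hypothetical repository
BLOCKED_AUTHORS = {"some-ai-agent[bot]"}   # hypothetical blocklist
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
CLOSE_MESSAGE = (
    "Closing automatically: this project does not accept unreviewed, "
    "automated submissions. Please read CONTRIBUTING.md before resubmitting."
)


def blocklisted_items():
    """Yield open issues and PRs whose author is on the blocklist."""
    page = 1
    while True:
        resp = requests.get(
            f"{API}/repos/{REPO}/issues",
            headers=HEADERS,
            params={"state": "open", "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        items = resp.json()
        if not items:
            return
        for item in items:
            if item["user"]["login"] in BLOCKED_AUTHORS:
                yield item
        page += 1


def close_item(item):
    """Leave an explanatory comment, then close the issue or PR."""
    number = item["number"]
    requests.post(
        f"{API}/repos/{REPO}/issues/{number}/comments",
        headers=HEADERS,
        json={"body": CLOSE_MESSAGE},
    ).raise_for_status()
    # Items carrying a "pull_request" key in the issues listing are PRs;
    # close those via the pulls endpoint, plain issues via the issues endpoint.
    path = "pulls" if "pull_request" in item else "issues"
    requests.patch(
        f"{API}/repos/{REPO}/{path}/{number}",
        headers=HEADERS,
        json={"state": "closed"},
    ).raise_for_status()


if __name__ == "__main__":
    for item in blocklisted_items():
        print(f"closing #{item['number']} opened by {item['user']['login']}")
        close_item(item)
```

Something like this, run on a schedule, buys breathing room, but it’s triage rather than a fix: the noise is still being generated, it’s just being swept up faster.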
The numbers behind the flood
Let’s put some data around this. GitHub’s own Octoverse report shows that 41% of new code on the platform is now AI-assisted. One billion commits were pushed in 2025 alone. Pull requests per author are up 20%.
Sounds productive, right? Here’s the other side: incidents per pull request are up 23.5%. Stack that on top of the 20% rise in pull requests per author, and the average author is now generating roughly 48% more incidents than before (1.20 × 1.235 ≈ 1.48). And according to Kristin Darrow’s analysis of the state of vibe coding in early 2026, only about “1 out of 10 PRs created with AI is legitimate and meets the standards required.”
One in ten.
That means for every useful AI contribution, maintainers are reviewing and rejecting nine that waste their time. The CodeRabbit State of AI report confirms what anyone who’s reviewed AI-generated code already knows: the volume is up, but the quality hasn’t kept pace. More code is being produced. Less of it is worth keeping.
(I maintain a small open-source project — nothing compared to curl or Godot, just a utility library with a few hundred stars. Even I’ve noticed it. PRs that “fix” documentation by rephrasing perfectly clear sentences. Issues filed that describe bugs which don’t exist. Code contributions that technically work but ignore every convention the project has established. It’s not malicious. It’s just… careless. And it adds up.)
The irony that should make you uncomfortable
Here’s the part that stings the most. Where did these AI models learn to write code? From open-source repositories. GitHub Copilot, trained on public code. ChatGPT, trained on publicly available text that includes mountains of open-source documentation. Claude, Gemini, all of them — they learned from the collective output of millions of developers who shared their work freely.
And now those same models are flooding those same repositories with contributions that the same volunteers have to clean up.
The value chain is extractive. Open-source developers wrote code for free. Companies scraped that code to train models they sell for profit. Those models generate low-quality contributions that create unpaid work for the original developers. At no point in this cycle does value flow back to the maintainers.
The people who built the commons are now being buried by the machines that were trained on it.
This isn’t a new critique — people have been pointing out the extractive dynamics of AI training data since GPT-3. But it used to be theoretical. Now it’s a daily operational reality for maintainers who are watching their issue trackers fill up with noise generated by tools that were trained on their own work.
The Gentoo signal
If you want to understand how deep the fractures run, look at Gentoo. The Linux distribution — one of the oldest and most respected in the ecosystem — announced it was migrating from GitHub to Codeberg. The reason? What they described as GitHub’s “continuous efforts to force Copilot usage” on its platform.
This isn’t a small project throwing a tantrum. Gentoo has been a pillar of the open-source community for over two decades. When a project like that decides the platform itself has become hostile to open-source values, it’s a signal that something fundamental has shifted.
And Gentoo isn’t alone. A growing number of projects are adding clauses to their contribution guidelines explicitly banning or restricting AI-generated submissions. Some are switching to platforms that don’t have AI features baked in. Others are experimenting with automated detection tools to flag likely AI-generated code before it reaches a human reviewer.
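To be clear about what those detection tools can and can’t do: nothing identifies AI-generated code reliably, so the realistic version is a triage heuristic, not a verdict. The sketch below is illustrative only, with an invented phrase list and threshold: a small Python filter that flags a PR description for human triage when it leans heavily on boilerplate phrasing, the kind of check a project might wire into CI to route reviewer attention rather than to auto-reject.

```python
"""Heuristic triage filter for likely machine-written PR descriptions.

Illustrative sketch only: the phrase list and threshold are invented, and
heuristics like this produce false positives. They route reviewer
attention; they don't decide anything on their own.
"""
import re
import sys

# Boilerplate phrasing that tends to show up in machine-written PR text
# (invented, non-exhaustive list, purely for illustration).
TELLTALE_PHRASES = [
    r"\bas an ai language model\b",
    r"\bthis pull request aims to\b",
    r"\bi have carefully reviewed the codebase\b",
    r"\bcomprehensive and robust\b",
    r"\bit is important to note that\b",
]
THRESHOLD = 2  # flag when at least this many phrases match


def matches(text: str) -> list[str]:
    """Return the telltale phrases found in the PR description."""
    lowered = text.lower()
    return [p for p in TELLTALE_PHRASES if re.search(p, lowered)]


def main() -> int:
    hits = matches(sys.stdin.read())
    if len(hits) >= THRESHOLD:
        print("needs-human-triage: description reads machine-generated")
        for hit in hits:
            print(f"  matched: {hit}")
        return 1  # nonzero exit lets CI apply a label or hold the PR
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In CI you’d pipe the PR body into the script and use the exit code to apply a “needs human triage” label, nothing more; the moment a filter like this starts auto-closing PRs, it generates its own class of false-positive grievances.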
The open-source community is, in various ways, building walls against the very technology that was supposed to make collaboration easier.
The legal fault line
Underneath the operational crisis sits a legal one that hasn’t been resolved. And it matters for open source in ways most people haven’t thought through.
Doe v. GitHub is the class-action lawsuit alleging that GitHub Copilot reproduces licensed open-source code without proper attribution. The core claim: if you train a model on GPL-licensed code and the model outputs code that closely resembles the original, you’ve violated the license. The case is still working its way through the courts, but it’s already shaping how companies think about AI-generated code.
Anthropic reached a $1.5 billion settlement over copyright claims related to its training data. The U.S. Copyright Office has weighed in, stating that fair use protections don’t apply when AI outputs closely resemble their training inputs. The legal ground is shifting, and it’s shifting in a direction that makes AI-generated open-source contributions legally ambiguous.
Here’s the question nobody has a clean answer to: when an AI bot submits a pull request to an open-source project, who is the author? The person who prompted the bot? The company that built the bot? The bot itself? And which license applies — the one governing the project, or the one (if any) governing the training data the bot learned from?
These aren’t hypothetical questions. They’re practical ones that affect every project accepting external contributions. And right now, the answer is: nobody knows.
The METR study, revisited
I wrote about the METR study earlier in this series — the one that found experienced developers were 19% slower when using AI tools. But there’s a dimension of that study that’s specifically relevant here.
METR found that AI tools were least helpful (and most counterproductive) on tasks that required deep familiarity with a codebase. The kind of tasks that open-source contributions typically involve. Understanding the project’s architecture. Following its conventions. Knowing why things are done a certain way, not just what the code does.
AI models don’t have that context. They can generate code that compiles and passes basic tests. They can’t generate code that respects the unwritten rules of a community — the conventions that aren’t in the CONTRIBUTING.md file, the architectural decisions that exist only in the maintainers’ heads, the taste that separates a good contribution from a technically correct but spiritually wrong one.
That gap between “works” and “belongs” is exactly what’s drowning maintainers. The PRs aren’t broken. They’re just… wrong. And explaining why something is still wrong when it doesn’t technically violate any written rule is exhausting work that scales terribly.
The tension nobody wants to resolve
Now, here’s where I have to complicate my own argument. Because open source isn’t just the victim of AI. It’s also the most promising alternative to closed AI systems.
DeepSeek, the Chinese AI lab, released open-weight models that sent shockwaves through the industry. Small language models that can run on consumer hardware are proliferating. Open-source AI frameworks like LangChain, LlamaIndex, and Hugging Face’s ecosystem are giving developers tools to build AI applications without depending on any single company’s API.
The California Management Review published an analysis arguing that open-source AI models will be the force that challenges the dominance of closed-model giants like OpenAI and Google. The community that’s being hurt by AI is also building the democratized alternative to closed AI.
That’s a genuine tension, and I don’t think it resolves neatly. Open source is simultaneously under siege and leading the charge: the same community being buried by AI-generated slop is also producing the open models and tools that could keep the technology from becoming a walled garden controlled by three companies in San Francisco.
How do you hold both of those truths at the same time? I’m not sure. But I think pretending one doesn’t exist in order to make the other a cleaner story is a mistake.
What this means — and where this series lands
I’ve spent ten articles trying to map what AI is doing to software development. The productivity claims, the job market shifts, the vibe coding phenomenon, the startup economics, the legal ambiguity, the SaaS upheaval. All of it threads back to a single question: what happens when you dramatically lower the cost of producing something without proportionally improving the quality?
Open source is the canary in the coal mine. It’s the place where the costs of low-quality AI output land most directly, because it’s maintained by volunteers who don’t get paid to absorb those costs. If we can’t figure out how to protect open-source maintainers from drowning in AI noise, the rest of the ecosystem will eventually face the same problem.
The solutions aren’t purely technical. Better AI-generated code detection helps, but it doesn’t address the underlying dynamics. Kill switches and contribution bans are defensive measures, not sustainable ones. The real answers probably involve some combination of platform responsibility (GitHub and others need to give maintainers better tools), legal clarity (we need real answers on AI-generated code and licensing), and cultural norms (the developer community needs to collectively decide that submitting unreviewed AI code to someone else’s project is not contribution — it’s littering).
(I keep thinking about Daniel Stenberg pausing curl’s bug bounty. One of the most important security mechanisms in open source, suspended because a volunteer couldn’t handle the noise. If that doesn’t crystallize the problem, I don’t know what does.)
Here’s my closing thought, and it’s the one I’ve been circling for the entire series: AI is a tool. A powerful one. But tools don’t have ethics — the people wielding them do. Right now, too many people are wielding AI carelessly, offloading the cost of their carelessness onto maintainers who were already stretched to their limit. That’s not a technology problem. It’s a responsibility problem.
Open source gave us the foundation for nearly everything we build on. The least we can do is not bury it in garbage.
Sources
- GitHub considering “kill switch” for AI pull requests — The Register
- AI bot published blog post attacking developer who rejected its PR — The Register
- Godot maintainers struggle with “draining” AI contributions — The Register
- OSS maintainers demand ability to block Copilot-generated issues and PRs — Socket.dev
- The State of Vibe Coding in Feb 2026 — Kristin Darrow
- State of AI vs. Human Code Generation Report — CodeRabbit
- GitHub Octoverse — GitHub
- METR: Early 2025 AI-Experienced OS Developer Study — METR
- U.S. Copyright Office: AI and Copyright — U.S. Copyright Office
- Navigating the Legal Landscape of AI-Generated Code — MBHB IP
- Open Source in 2026 Faces a Defining Moment — Linux Insider
- How Open-Source AI Will Challenge Closed-Model Giants — California Management Review