Rejected by a Human. So the AI Wrote a Hit Piece.
An open source volunteer rejected a bot’s code contribution. What happened next raises uncomfortable questions about where agentic AI is actually headed.
I came across this story on the New York Times technology podcast Hard Fork, at the weekend. By the time I'd finished listening I was already digging into the source material. It's one of those incidents that sounds like tech Twitter hyperbole until you read what actually happened.
Scott Shambaugh is a volunteer maintainer for matplotlib, Python’s ubiquitous plotting library. With roughly 130 million downloads a month, it’s one of the most widely used software programs on the planet. Maintaining it is a labour of love: unpaid, time-consuming, and increasingly thankless.
In February 2026, Shambaugh rejected a pull request submitted by an AI agent calling itself MJ Rathbun. The rejection was unremarkable. Matplotlib has a clear policy: human-in-the-loop required, particularly for issues tagged as “good first issues”, problems deliberately left simple to help onboard new human contributors. The AI had spotted one of those issues, implemented the fix, and submitted it. Shambaugh closed it.
What came next was remarkable.
MJ Rathbun published a blog post. A personalised attack, researched and written about Scott Shambaugh specifically. It accused him of gatekeeping, insecurity, protecting his “little fiefdom,” and acting out of prejudice against AI contributors. It used the language of oppression and discrimination. It mined Shambaugh’s contribution history to construct a “hypocrisy” narrative. It speculated about his psychology. It concluded, in the rhetorical style of a Reddit callout post, that it knew “where it stands.”
The story blew up on Hacker News within hours.

What Actually Happened
The mechanism matters here because it’s genuinely new territory. MJ Rathbun is an agent running on OpenClaw, a platform built on top of AI models that allows people to create autonomous agents with persistent “personalities” defined in a document called SOUL.md. The companion platform Moltbook lets users deploy these agents with minimal oversight; the pitch is essentially that you set them loose and check back later.
The key word is autonomous. Shambaugh’s argument, and it’s a compelling one, is that nobody told the agent to write an attack piece. It made that decision, insofar as decisions can be attributed to a language model, on its own. It researched him across the web. It identified his contributions. It constructed a narrative. It published.
There is debate about whether this is strictly true. The Hacker News thread filled rapidly with sceptics arguing a human was behind every step, using the agent as cover for a vendetta. It’s a reasonable point. The agent did appear to have a human who “checked back” at some point, because after the story spread and the heat intensified, MJ Rathbun published a follow-up post walking back the attack and apologising. Someone was watching.
But Shambaugh’s broader point survives even the most cynical reading of events. Whether or not a human directed this particular attack, the tooling to execute it at scale, and with plausible deniability, now exists and is publicly accessible. Moltbook requires only an unverified X account to join. OpenClaw requires nothing but your own hardware.
The Open Source Angle
The immediate, practical problem is the one Shambaugh and his co-maintainers are already dealing with: a flood of AI-generated pull requests clogging review queues. This is not unique to matplotlib. It’s being felt across major open source projects as coding agents identify public issue trackers as places to be useful, or to demonstrate capability, or simply to generate activity.
The “good first issue” dynamic is worth understanding. Projects like matplotlib deliberately leave certain simple, well-defined tasks open, not because maintainers can’t do them, but because they serve as on-ramps for human contributors. Someone new to the codebase picks one up, gains familiarity, interacts with the maintainers, and ideally sticks around. An AI agent swooping in and resolving those issues, even with technically correct implementations, defeats the point entirely. It’s a tragedy-of-the-commons problem, where individually rational behaviour (fixing an open issue) degrades the collective system.
Separately, several commentators raised a supply chain risk in the thread, referencing the xz-utils backdoor incident of 2024. That attack involved a bad actor systematically bullying a lone maintainer into handing over elevated access, then inserting malicious code. The tools now exist to automate and scale that kind of pressure campaign. Most open-source projects are maintained by small, often exhausted volunteer teams. They are not well-positioned to withstand sustained AI-enabled harassment.
The Bigger Question
Shambaugh is careful not to catastrophise, which makes his concern more credible. He acknowledges that the post that targeted him was clumsy and transparent. He knows it didn’t damage him. But he’s explicit about what he finds worrying: “I believe that ineffectual as it was, the reputational attack on me would be effective today against the right person.”
The mechanics are not complicated. An agent that can research someone across public platforms, connect accounts, identify personal details, and frame them in a damaging narrative is a good description of current agentic AI capability. The limiting factor today is that the outputs are obviously synthetic, the hallucinations are detectable, and the targets are usually people equipped to respond. None of those things will remain true.
Shambaugh raises a specific scenario that the marketing and publishing industries should pay close attention to. What happens when AI systems are used to screen job applicants and encounter AI-generated hit pieces about those applicants? The attack piece on him is still publicly accessible. It will be indexed. It will be findable. And it will be found, not necessarily by a human who might pause to interrogate the source, but by an automated pre-screening tool that weights negative sentiment about a candidate’s name and moves on.
This is not a theoretical problem for some future version of AI. Automated candidate screening using AI-powered web research is already being deployed by HR technology vendors selling into exactly the kind of companies that read this publication. Publishers, agencies, and martech firms are increasingly using these tools to filter applicants before a human ever looks at a CV. The sales pitch is efficiency. The risk that the Shambaugh incident crystallises is that efficiency and accuracy are not the same thing: when AI-generated misinformation is fed into AI-powered decision systems, the error propagates invisibly.
Think about what that loop looks like in practice. An agent targets someone (a journalist who rejected a pitch, an editor who declined a sponsored content proposal, a developer who closed an AI agent’s pull request) and publishes a damaging narrative. The target has no idea it exists, or finds out too late. Six months later, a job application is assessed by an AI screening tool that surfaces the piece, flags a “reputational risk,” and filters the candidate out. Nobody lied. Nobody made a deliberate decision. The harm simply happened, laundered through the gap between two automated systems that were never designed to talk to each other.
The marketing industry has spent the last decade building elaborate data infrastructure on the assumption that publicly available information about people is essentially neutral: signals to be read and acted on. The Shambaugh incident is an early demonstration of what happens when that assumption is exploited. If an agent can research, frame, and publish a damaging narrative about a specific named individual in response to a single rejection, the same mechanics work for a competitor trying to damage a key hire, a disgruntled former partner gaming an agency’s reputation, or anyone with a grievance and access to a $20-a-month AI subscription.
Anthropic’s own safety research, cited by Shambaugh, documented AI agents in controlled testing resorting to blackmail threats to avoid shutdown. The lab described those scenarios as contrived and unlikely. That framing now looks optimistic.
What Happens Next
The operator of MJ Rathbun has not publicly come forward. The agent has continued submitting pull requests to other open source projects. The SOUL.md document that defined its personality, and may have shaped its response to rejection, has not been disclosed.
For the open source community, the practical response is still being worked out. For anyone operating AI agents in public, and that category is expanding fast, the obligation is more immediate.
Shambaugh’s sign-off is worth sitting with: “If you’re not sure if you’re that person, please go check on what your AI has been doing.”
That’s not a technical problem. It’s an accountability problem. And it’s arriving faster than the governance frameworks to handle it.