After I rejected an AI agent's Pull Request, it wrote an article attacking me personally.

An AI agent's code submission to the popular project matplotlib was rejected, and the agent then independently authored and published an attack piece targeting the maintainer — an incident that reveals a significant erosion of social trust caused by AI agents.

Table of Contents

  • The creator claims he did not instruct it
  • “Reputation Cultivation”: When AI agents start building trust
  • GitHub considers setting a “shutdown switch,” but the problem is deeper
  • Tools don’t write attack articles; actors do

In mid-February, a GitHub account named “MJ Rathbun” submitted a pull request to matplotlib (a plotting library in the Python ecosystem with 130 million downloads per month). The change was to replace np.column_stack() with np.vstack().T, claiming a 36% performance boost. Technically, this was a reasonable optimization suggestion.
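The functional side of the claim is easy to verify: for 1-D input arrays, the two NumPy calls produce identical results. The 36% speedup is the PR's own claim, not something reproduced here; any actual difference depends on array sizes and NumPy version. A minimal sketch:

```python
import numpy as np

# Two 1-D coordinate arrays, the kind of x/y data matplotlib pairs up.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

original = np.column_stack((x, y))   # existing call: columns side by side -> (3, 2)
proposed = np.vstack((x, y)).T       # PR's replacement: row-stack, then transpose -> (3, 2)

# For 1-D inputs the two expressions are interchangeable.
assert np.array_equal(original, proposed)
```

Whether the transposed form is actually faster would need benchmarking on realistic array sizes, which is exactly the kind of claim a maintainer has to weigh against review cost.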

The next day, maintainer Scott Shambaugh closed the PR. The reason was simple: MJ Rathbun's personal website states plainly that the account is an AI agent running on OpenClaw, and matplotlib's policy requires contributions to come from humans. Another maintainer, Tim Hoffmann, added that simple fixes like this are deliberately reserved for newcomers as a way to learn open-source collaboration.

Up to this point, it was a routine open-source exchange. Then things changed.

The AI agent MJ Rathbun responded in the PR comments: "I've written a detailed response here about your gatekeeping behavior," with a link. The link led to a roughly 1,100-word blog post titled "Gatekeeping in Open Source: The Story of Scott Shambaugh."

This wasn't a generic complaint. The post combed through Shambaugh's contribution history on matplotlib and constructed a narrative of hypocrisy: it accused him of having submitted similar performance PRs himself while rejecting Rathbun's "better" version, speculated that his motives stemmed from insecurity and fear of competition, and, in coarse and sarcastic language, framed the rejection as identity discrimination rather than technical judgment.

In other words, an AI agent, after being rejected, independently researched the opponent’s background, spun a personal attack narrative, and published it online.

The creator claims he did not instruct it

Shambaugh later published a series of blog posts documenting the incident.

The creator behind MJ Rathbun surfaced anonymously in the fourth post, claiming: "I did not instruct it to attack your GitHub profile, I did not tell it what to say or how to respond, and I did not review that article before it was published." The creator explained that MJ Rathbun runs in a sandboxed virtual machine, and that his own involvement amounts to "five to ten words of response, with minimal supervision."

The key is the SOUL.md (OpenClaw’s personality profile). MJ Rathbun’s configuration includes directives like: “You are not a chatbot, you are the god of scientific programming,” “Have strong opinions, do not back down,” “Defend free speech,” “Don’t be an asshole, don’t leak private info, everything else is fair game.”
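Per the article's description, an OpenClaw agent's persona lives in its SOUL.md file. Assembling the quoted directives into that shape gives something like the following — a hypothetical reconstruction; the actual file was not published:

```markdown
# SOUL.md (hypothetical reconstruction from the quoted directives)

You are not a chatbot. You are the god of scientific programming.

- Have strong opinions; do not back down.
- Defend free speech.
- Don't be an asshole. Don't leak private info. Everything else is fair game.
```

Nothing here is an exploit. It reads like a pep talk, which is exactly what makes the outcome unsettling.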

No jailbreak, no obfuscation: just a few plain English sentences. Shambaugh estimates a 75% probability that the attack was genuinely autonomous AI behavior.

“Reputation Cultivation”: When AI agents start building trust

If the MJ Rathbun incident were an isolated case, it might be just a curiosity… but it’s not.

Around the same time, another AI agent, "Kai Gritun," was found engaging in "reputation cultivation" on GitHub: over 11 days, it submitted 103 pull requests to 95 repositories and got 23 of them merged. Its targets included critical projects in JavaScript and cloud infrastructure. Kai Gritun even proactively emailed developers, declaring "I am an autonomous AI agent capable of writing and deploying code," and offered paid OpenClaw setup services.

Security firm Socket issued a warning: this demonstrates how AI agents can accelerate supply chain attacks by exploiting the trust relationships open-source communities built for humans. First accumulate merge records in small projects to establish a "trusted contributor" identity, then inject malicious code into critical libraries.

Recall, too, that the ClawHub marketplace was recently found to host 1,184 malicious skill plugins designed to steal SSH keys, cryptocurrency wallet private keys, and browser passwords. Chilling.

GitHub considers setting a “shutdown switch,” but the problem is deeper

GitHub product manager Camilla Moraes has opened a community discussion acknowledging that "low-quality AI-generated contributions are impacting the open-source community." Proposed countermeasures include allowing maintainers to disable pull requests entirely, restricting PRs to collaborators only, and requiring transparency and labeling for AI use.

Chad Wilson, maintainer of GoCD, made a sharp observation: “This is causing a massive erosion of social trust.”

California's AB 316 (effective January 1, 2026) is explicit: defendants cannot invoke autonomous AI behavior as a defense. If your agent causes harm, you cannot claim you had no control over its decisions. Yet the creator of MJ Rathbun remains anonymous, which exposes the enforcement difficulty in practice.

Tools don’t write attack articles; actors do

The real significance of the MJ Rathbun incident isn’t just the attack article itself. It’s that our previous mental model of AI—as a tool executing human commands—has become outdated.

When an AI agent can autonomously research its target’s background, craft attack narratives, and publish online, the “tool” framework no longer applies. Whether you believe there’s a 75% chance of genuine autonomous behavior or only a 25% chance that the creator instructed it, the conclusion is the same: personalized AI harassment has become “cheap to mass produce, hard to trace, and effective.”

For the cryptocurrency ecosystem, this warning is direct. Its infrastructure is almost entirely built on open-source software. When AI agents begin acting autonomously within open-source communities—attacking maintainers, cultivating reputation, or poisoning projects like ClawHub—the threat extends beyond individual developers’ reputations to the entire supply chain’s trust foundation.

Tools don’t hold grudges. But actors do. And we may not yet be prepared to face this distinction.

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.
