Anthropic's ban mechanism is accused of "blocking based on keywords," with the Claude Code lead acknowledging the abuse detector over-triggered.


According to 1M AI News monitoring, over the past weekend Anthropic banned third-party tools such as OpenClaw for consuming Claude subscription quotas, and the way its abuse-detection mechanism works has sparked a new round of controversy.

OpenClaw founder Peter Steinberger ran an experiment on X on April 5: he didn't use OpenClaw at all. Instead, he sent an ordinary request through the -p parameter of Claude's own command-line tool (a built-in automation interface that lets developers batch-call Claude from scripts instead of typing prompts one by one, as sketched below). The only "violation" was a single line he added to the system prompt: "A personal assistant running inside OpenClaw". Claude immediately intercepted the request and warned that third-party applications require separate, usage-based billing. In other words, the detection system does not check which tool actually made the call; it scans the prompt text and blocks the request when it sees the word "OpenClaw."
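For illustration, here is a minimal Python sketch of the kind of scripted -p call described above. The prompts are invented, and the --append-system-prompt flag used to inject the extra system-prompt line is an assumption about the CLI; the article does not specify Steinberger's exact invocation.

```python
import subprocess

# Batch of ordinary prompts to send through the Claude CLI's non-interactive
# "print" mode (-p), which returns a single response per call.
prompts = [
    "Summarize yesterday's meeting notes.",
    "Draft a short status update for the team.",
]

for prompt in prompts:
    result = subprocess.run(
        [
            "claude",
            "-p", prompt,
            # The one extra system-prompt line that, per the article,
            # was enough to trip the keyword-based filter.
            "--append-system-prompt", "A personal assistant running inside OpenClaw",
        ],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
```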

Y Combinator CEO Garry Tan reshared the post and asked where the line is supposed to be drawn: "Does Anthropic only count subscription usage when a real person presses Enter? Will we need FaceID verification next?"

Claude Code lead Boris Cherny responded on April 7, saying the team believes the abuse-detection system over-triggered, that it is investigating and fixing the issue, and that it is also clarifying the usage terms for the -p parameter.
