Anthropic's ban mechanism is accused of "keyword-based blocking," and the Claude Code lead has admitted the system over-triggered.

Coin World News: Last weekend, Anthropic banned third-party tools such as OpenClaw that consume Claude subscription quota. Now the way its abuse-detection mechanism works has sparked a new round of controversy.

On April 5, OpenClaw founder Peter Steinberger ran an experiment on X: he did not use OpenClaw at all. Instead, he sent a normal request through the -p parameter of Claude's own command-line tool (a built-in automation interface that lets developers batch-call Claude from scripts instead of typing prompts one by one). The only "violation" was a single line he added to the system prompt: "A personal assistant running inside OpenClaw."
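The experiment can be sketched roughly as follows, assuming the Claude Code CLI's documented -p (print/headless) flag and --append-system-prompt option; the exact prompt wording here is hypothetical, and since actually running the command requires the claude CLI and an active subscription, this sketch only prints the invocation:

```shell
# Sketch of Steinberger's experiment (assumed flags: -p runs one prompt
# non-interactively and prints the reply; --append-system-prompt appends
# text to the system prompt). The appended sentence is the only mention
# of OpenClaw -- no OpenClaw code is involved in the call.
cmd='claude -p "Summarize my open tasks" --append-system-prompt "A personal assistant running inside OpenClaw"'

# Running $cmd needs the claude CLI and a subscription, so just show it:
echo "$cmd"
```

According to Steinberger, this ordinary first-party invocation was enough to trip the block, purely because of the string in the system prompt.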

As a result, Claude immediately blocked the request, replying that third-party applications must pay for additional usage separately. In other words, the detection system does not determine which tool actually made the call — it scans the prompt text, and the moment it sees "OpenClaw," it blocks the request.

Y Combinator CEO Garry Tan reposted the experiment and asked: when will the boundary finally stop moving? Does Anthropic only count usage against a subscription if a human presses the Enter key — and will users eventually need FaceID verification? On April 7, Boris Cherny, head of Claude Code, responded that the team believes this was an over-trigger of the abuse-detection system; they are investigating and fixing it, and will also clarify the usage terms for the -p parameter.
