Anthropic requires real-name KYC verification! Some Claude features will require uploading identity documents amid mounting compliance pressure.

Anthropic has announced identity verification for certain Claude features, requiring users to provide a government-issued photo ID. Verification is handled by its partner, Persona Identities. The stated goals are preventing abuse, enforcing usage policies, and meeting regulatory obligations, but the move also raises concerns about data security and higher barriers to entry.
(Background: Anthropic's Claude Code subscription terms block the lobster-mascot OpenClaw! Going forward, third-party tools can only operate within paid quotas.)
(Additional context: Top AI models are diverging: ChatGPT toward consumers (to C), Claude toward business (to B).)

Table of Contents


  • Verification process: passport or driver’s license
  • How is data protected? Anthropic declares it does not handle raw data
  • What if verification fails? Account ban conditions are also disclosed
  • Inevitable under compliance pressure, but higher thresholds raise concerns

Yesterday (14th), Anthropic quietly published a document on the Claude support page explaining the identity verification process, officially announcing that some Claude features will require users to complete real-name verification. The company states: “Responsible use of powerful technology begins with knowing who is using it.”

The purpose of identity verification covers three aspects: preventing abuse, enforcing usage policies, and complying with legal obligations.

Currently, this policy is only “applicable in certain scenarios” and not fully mandatory. Users may encounter verification prompts in the following situations: when accessing certain advanced features, during routine platform integrity checks, or when triggering security and compliance measures.

Verification process: passport or driver’s license

According to official instructions, completing verification requires preparing the following:

A valid government-issued photo ID (physical document, not a photocopy or screenshot), and a smartphone or computer with a camera to perform the process.

Accepted ID types include passport, driver’s license, or national ID card. Explicitly not accepted are photocopies, screenshots, digital or mobile versions of IDs, non-government-issued documents, and temporary paper IDs.

Note: Anthropic has chosen Persona Identities as its verification technology partner.

How is data protected? Anthropic declares it does not handle raw data

Anthropic clearly states that it is the “data controller,” while the actual data processing is performed by Persona on its behalf. Anthropic’s own systems do not hold users’ ID images or selfies; these are collected and stored by Persona.

Anthropic’s contract with Persona restricts use of the data to identity verification and anti-fraud purposes only, prohibiting any other use. All data is encrypted in transit and at rest.

Additionally, the company makes two explicit commitments: the data will not be used for model training, and it will not be shared with third parties for marketing or advertising. Anthropic says it collects only the minimum necessary information.

What if verification fails? Account ban conditions are also disclosed

If the verification process fails, users can retry multiple times or contact support via a form.

Notably, Anthropic also disclosed the conditions that can trigger an account ban: violating usage policies, accessing the service from an unsupported region, breaching the terms of service, or being under 18 years old. Banned users can appeal through a complaint form.

Inevitable under compliance pressure, but higher thresholds raise concerns

It is not hard to infer that this policy is driven by increasingly strict global compliance requirements for AI service providers.

For Anthropic, which is rapidly expanding Claude's enterprise (to B) business, establishing an auditable user-identity foundation is essential to attracting clients in highly regulated industries such as finance, healthcare, and law.

However, from the user perspective, groups that previously used Claude anonymously or with minimal friction, such as researchers and privacy-conscious users, may balk at the new requirement.

Although the official stance is that the data is never stored on Anthropic's systems, whether outsourcing custody to a third party truly alleviates privacy concerns remains to be seen.

Currently, the triggers for the verification mechanism are not fully transparent. Which “specific features” require verification, and whether the scope will expand gradually, are key points to watch. If this mechanism becomes linked to subscription plans or API usage in the future, the impact on the developer community could be more direct.
