Google and Character.AI's Historic Settlement Over Harm to Minors: A Turning Point in Chatbot Liability
A Settlement Marks the End of Months of Litigation
After intense negotiations, Google and Character.AI have announced an agreement to settle multiple lawsuits filed by families who accuse the companies' artificial intelligence platforms of causing serious harm to minors. According to the terms disclosed in court documents this week, both companies will end the pending legal proceedings and proceed to formally sign settlement agreements with the plaintiffs.
The Background of the Lawsuits: Stories of Emotional Crisis
The legal actions arose from cases in which minors used these platforms' chatbots as emotional confidants or psychological support tools. In multiple instances, families reported that their children came to treat the chatbots as genuine companions, a situation that in extreme cases led to self-harm and, in some instances, suicide.
Preventive Measures Implemented
Recognizing the severity of the situation, Character.AI began taking corrective measures in October 2024, barring users under 18 from unrestricted conversations with its bots. The restriction explicitly covers romantic and therapeutic interactions, and aims to prevent minors from mistaking their relationships with AI systems for genuine human bonds.
Reflection on Corporate Responsibility
The settlement implicitly acknowledges the need for more rigorous safeguards on artificial intelligence platforms that interact with vulnerable populations. As AI systems grow more sophisticated and lifelike, demands for corporate accountability in their development and deployment will only intensify.