AI companies fear data leaks, but end up leaking themselves first.
The incident currently spreading wildly on X and GitHub feels like a scripted scenario, perfectly illustrating the fragility of the entire AI industry.
Here's what happened:
— Anthropic accidentally published a package update to npm
— The package contained a roughly 60MB debug file
— Inside was a snippet of Claude's internal code
And then the show began.
23 minutes later, a researcher (Chaofan Shou) discovered it, downloaded it, and made it public.
Within hours: millions of views.
By the time the Anthropic team realized, the code had already:
— Gone viral on GitHub
— Been forked tens of thousands of times
— Started to take on a life of its own
A classic Web2 mistake…
Happening in an era where information spreads like a virus.
Now, pay attention to the second part.
A developer appeared: Sigrid Jin.
He:
— Analyzed the leaked code
— Rewrote it in Python in 8 hours
— Gained tens of thousands of stars on GitHub
— Then rewrote it again in Rust
Done.
The product had essentially become an open-source clone.
But the real point is here.
Within the code, researchers discovered a system called “Undercover Mode.”
Its job:
to prevent the model from leaking internal data.
In other words:
A company specializing in leak prevention…
Leaked its own code through a debug file.
This isn’t just irony.
It’s a signal.
What does it mean?
1. AI companies aren’t as “closed” as they seem
Any mistake → the backend gets exposed immediately.
2. The moat around AI is much more fragile than investors think
If a product can be quickly copied → value shifts:
— From the model → to the data
— From the data → to distribution
3. The speed of copying = a new reality
What used to take years,
Now takes only hours.
What are the consequences?
— Profit margins for AI companies are shrinking
— The open-source movement accelerates
— “Secret technologies” depreciate
And most importantly:
The market is beginning to realize that
value isn’t in the code itself, but in the ecosystem and user reach.
We are moving toward a world where:
— Models become commodities
— The winners aren’t the “most powerful AI” companies
— But those with the strongest distribution, products, and funding
Today’s AI reminds people of the early days of the internet.
Everyone thought technology would win.
But in the end,
it’s those who control attention who come out on top.