People love to speculate on how AI and crypto can combine for new use cases. I don’t pretend to know what those will be, but it’s not hard to imagine how crypto can help keep AI aligned through onchain commitment devices.
For those unfamiliar, commitment devices are a niche subject in economics, and in their simplest form come down to voluntarily binding your future self to pay a cost if some goal or threshold is not met.
Humans sometimes use apps like Stickk for this. They input their credit card, set a goal like working out or abstaining from social media, and if they fail to meet the goal, they donate money to a charity or cause they dislike (e.g. Planned Parenthood for a pro-life person or the church for an atheist).
Unfortunately, we all live out here in meatspace, so such contracts with your future self have a difficult time verifying most goals people set. If the honor system worked, you wouldn't need the commitment device in the first place.
The typical workaround is to assign a referee who has to verify you have met your goals to avoid the penalty. Sometimes this works - your personal trainer is an excellent oracle for whether you have met your weightlifting goals - but it often fails because the referee can't tell if you cheated and doomscrolled for an hour before bed. And many of the people best placed to be a referee, like a spouse or best friend, are loath to punish the committed person.
AI, however, lives only in code, which means that in theory all of its (in)actions are verifiable.
Given the tendency for AI to be delusional, flaky, distractible, or even lazy (if you've not heard of an AI trying to deploy another AI to do its tasks, you should read more papers), it's not a stretch to imagine an AI with assets outside of its own system, like stablecoins, being subjected to slashing or other penalties for failing to meet or avoid certain conditions.
The weakness of human commitment devices has largely been the lack of credible referees with access to accurate knowledge. That seems like something that can be automated with AI, as long as both the penalty and the referee are external to the AI's control, like its stablecoin holdings and a smart contract, respectively.
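To make the shape of this concrete, here is a minimal sketch of such a commitment device in Python. It is purely illustrative: in practice this logic would live in an onchain smart contract holding the AI's stablecoins, and the names (`CommitmentEscrow`, `referee`, `beneficiary`) are assumptions, not any existing protocol. The key property is the one described above: the stake and the resolution authority both sit outside the agent's control.

```python
from dataclasses import dataclass, field

@dataclass
class CommitmentEscrow:
    """Hypothetical escrow: the AI agent locks a stake that only an
    external referee can release or slash."""
    stake: float                 # stablecoins locked by the AI agent
    beneficiary: str             # where slashed funds go on failure
    referee: str                 # external verifier, outside the agent's control
    resolved: bool = False
    payouts: list = field(default_factory=list)

    def resolve(self, caller: str, goal_met: bool) -> None:
        # Only the referee may resolve, and only once.
        if self.resolved:
            raise RuntimeError("commitment already resolved")
        if caller != self.referee:
            raise PermissionError("only the referee can resolve the commitment")
        if goal_met:
            self.payouts.append(("agent", self.stake))        # stake returned
        else:
            self.payouts.append((self.beneficiary, self.stake))  # stake slashed
        self.stake = 0.0
        self.resolved = True

# Usage: the agent locks 100 stablecoins; the referee reports failure,
# so the stake is slashed to the beneficiary.
escrow = CommitmentEscrow(stake=100.0, beneficiary="charity", referee="oracle")
escrow.resolve(caller="oracle", goal_met=False)
print(escrow.payouts)  # [('charity', 100.0)]
```

Because the agent is neither the custodian of the stake nor the party who decides the outcome, it cannot simply opt out of the penalty, which is exactly what human commitment devices struggle to enforce.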