10 samples expanded to 242 languages: Adaption Labs aims to address AI's multilingual shortcomings at the data level
ME News, April 15 (UTC+8). According to BlockBeats monitoring, AI data platform Adaption Labs has released a new feature for Adaptive Data called "Expand Your World." Starting from as few as 10 samples in a single language, it can generate up to 2,420 high-quality training samples covering 242 languages and regional variants (the stated cap works out to the 10-seed minimum expanded into each of the 242 languages: 10 × 242 = 2,420), without requiring any additional annotation process or data pipeline. The feature is now available to all Adaptive Data users.
Multilingual coverage is one of the main weaknesses of AI training data: most datasets concentrate on a small number of high-resource languages, so models handle low-resource languages and regional dialects markedly worse, a gap that later fine-tuning struggles to fully close.
Adaption Labs' approach is to bring multilingual coverage forward to the data layer, addressing distribution bias at the stage where training data is generated rather than at fine-tuning time.
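As a rough illustration of what "expanding at the data layer" means in practice, the sketch below fans a handful of monolingual seed samples out into one candidate sample per target language. This is a minimal sketch under assumed names: the Sample type, the expand_samples function, and the placeholder generation step are all hypothetical, not Adaption Labs' published API or method.

```python
# Hypothetical sketch of data-layer multilingual expansion; not the actual
# "Expand Your World" implementation, whose internals are not public.
from dataclasses import dataclass


@dataclass
class Sample:
    language: str  # e.g. an IETF tag such as "en" or "pt-BR"
    text: str


def expand_samples(seeds: list[Sample], target_languages: list[str]) -> list[Sample]:
    """Expand monolingual seed samples into one sample per target language.

    With 10 seeds and 242 target languages this yields 10 * 242 = 2,420
    samples, matching the figures in the report.
    """
    expanded = []
    for seed in seeds:
        for lang in target_languages:
            # Placeholder: a real system would generate or translate the seed
            # into `lang` with a model, then quality-filter the output.
            expanded.append(Sample(language=lang, text=f"[{lang}] {seed.text}"))
    return expanded


if __name__ == "__main__":
    seeds = [Sample("en", f"seed sentence {i}") for i in range(10)]
    languages = [f"lang-{i:03d}" for i in range(242)]  # stand-in for 242 variants
    data = expand_samples(seeds, languages)
    print(len(data))  # 2420
```

The point of doing this before training, rather than patching coverage later, is that the language distribution of the resulting dataset is balanced by construction instead of being corrected after the fact.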
Adaption Labs was co-founded by former Cohere Vice President of Research Sara Hooker and former Google AI infrastructure engineer Sudip Roy. This February, the company raised a $50 million seed round led by Emergence Capital at a $1 billion valuation.
The company’s core bet is to replace brute-force scaling with an efficient adaptive system, enabling models to continuously learn and evolve.
(Source: BlockBeats)