Futures
Access hundreds of perpetual contracts
TradFi
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Launchpad
Be early to the next big token project
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
OpenAI Exposes "Polaris" Project, "2028 Mass Unemployment" May Actually Be Coming
Recently, a “2028 Prediction” article went viral online. The article pointed out that due to advances in AI, there will be a wave of massive unemployment in 2028, with many jobs being replaced by AI.
After the article was published, combined with the Middle East situation, it caused a sharp drop in the US stock market that day. This event was quite surreal, as the article was clearly written by AI, but it seemed to resonate with people’s fears of “AI bringing widespread unemployment,” thus having a significant impact.
Recently, a piece of news revealed by OpenAI made people realize that the “massive unemployment in 2028” might not be just a rumor.
Recently, OpenAI Chief Scientist Jakub Pachocki said in an exclusive interview with MIT Technology Review a chilling statement—their “North Star” is to build a fully automated multi-agent research system by 2028.
By September this year, the first phase goal will be achieved:
An “autonomous AI research intern” capable of independently handling specific research problems.
This is not a placeholder in the product roadmap, nor a casual boast by Altman on X. It signifies that OpenAI is betting all its resources on one direction.
The Meaning of “North Star”
When tech companies talk about the “North Star,” it usually means two things: first, other projects must give way to it; second, there is internal consensus.
Based on OpenAI’s actions over the past two weeks, this judgment seems to be correct.
On March 19, OpenAI announced the acquisition of developer tools company Astral, integrating the team into the Codex division; at the same time, the company announced a plan to unify ChatGPT, Codex, and the browser into a single desktop “super app,” led by application head Fidji Simo, with Greg Brockman assisting in organizational reform.
The era of fragmented products is coming to an end, and OpenAI is pushing all its chips in one direction.
And that direction is “letting AI do research on its own.”
Pachocki’s logic is quite clear: reasoning models, agents, and interpretability—these three technical routes were once separate within OpenAI, but now they are being integrated toward a single goal—creating an AI researcher that can operate autonomously in data centers for extended periods. He said once this is achieved, “this will be what we truly rely on.”
Former OpenAI researcher Andrej Karpathy’s view is even more direct—“All cutting-edge large language model labs will do this; this is the ultimate boss battle.” He added a thought-provoking remark: “Scaling will definitely be more complex, but doing this is just an engineering problem, and it will succeed.”
Pay attention to his wording: it’s not ‘whether’ it can be done, but ‘when’.
Anthropic in Action
On the very day OpenAI announced the “North Star,” Anthropic quietly launched Claude Code Channels—a feature allowing developers to interact directly with running Claude Code sessions via Telegram and Discord.
This may seem minor on its own, but in the context of overall trends, it is significant.
Anthropic’s logic is: rather than telling developers what AI can do in the future, it’s better to embed it into their current workflows. Telegram and Discord are not academic papers—they are where programmers work every day. Having Claude Code live here means it has shifted from a “tool” to a “colleague.”
Community reactions confirm this judgment.
Some users directly said: “Claude, through this update, has killed OpenClaw—you no longer need to buy a Mac Mini.” The implication is that Anthropic’s infrastructure improvements have already eliminated the cost advantage of open-source alternatives.
From a broader timeline perspective, Anthropic’s iteration speed on Claude Code is astonishing. In just a few weeks, it integrated text processing, thousands of MCP skills, and autonomous bug fixing capabilities. While OpenAI is strengthening Codex through the Astral acquisition, Anthropic has already put Claude Code directly into developers’ chat windows.
Both companies are heading toward the same endpoint, but their routes are completely different—OpenAI is working on “the fully automated researcher in 2028,” while Anthropic is building “intelligent agent tools available today.”
The Real Challenge
However, there’s a detail that cannot be overlooked.
Pachocki did something rare in the interview—he openly discussed the challenges of safety and controllability, and he was quite candid.
He said their plan is to use other large language models to “monitor the AI researcher’s notes,” catching bad behavior before it causes problems. But he immediately admitted: “Our understanding of large language models is insufficient to fully control them,” and that “truly solving this problem will take a long time.”
A company’s chief scientist saying “we don’t have complete control” while announcing plans for a fully autonomous AI research system in 2028 is worth serious reflection.
This is not pessimism but an acknowledgment of the real difficulty. Pachocki’s statement indicates a clear awareness within OpenAI of the road ahead.
On the technical side, a “Karpasi cycle” summarized by researchers is worth noting—the success of automated AI research frameworks requires three elements: an agent with permission to modify individual files, a single objective for objective testing, and fixed experimental time limits.
This framework has already begun to produce results in real environments. Shopify CEO Tobias Lütke shared an example: he let an autoresearch agent run overnight, and the next morning, it conducted 37 experiments, improving the model’s performance by 19%.
From concept to implementation, this path is shorter than expected.
The Future of a $20,000 Subscription
The “North Star” project is not only a technological advantage but also a business game-changer.
Paul Roetzer’s figures are worth multiple readings: he cites internal OpenAI forecasts that by 2029, the agent business alone could generate $29 billion annually, including a $2,000/month “knowledge agent” and a $20,000/month “research agent.”
These numbers show that “AI researchers” are never just a technical goal—they are a revenue roadmap.
The $20,000/month “research agent,” when converted, is a fraction of a senior researcher’s annual salary, but it can work 24/7 and run 37 experiments simultaneously. It’s not about replacing a specific person but redefining what “research productivity” itself means.
This reminds me of Karpathy’s statement—“This is the ultimate boss battle.” By “boss,” he means the ceiling of AI capability itself, not competitors.
Once AI can autonomously advance scientific research, the pace of AI progress will no longer be limited by the number of human researchers and working hours.
Pachocki echoed this sentiment, albeit more restrained—“Once the system can operate autonomously in data centers for long periods, that will be what we truly rely on.”
The AI research intern of September 2026 is not the end but an important starting point.