The impact of DeepSeek on the upstream and downstream protocols of Web3 AI
Original author: Kevin, BlockBooster
Reprint: Luke, Mars Finance
Based on an analysis of technical architecture, functional positioning, and practical use cases, I divide the ecosystem into four layers (infrastructure, middleware, model, and application) and clarify their dependencies:
1. Infrastructure Layer
The infrastructure layer provides decentralized underlying resources (compute, storage, and L1s): compute protocols such as Render, Akash, and io.net; storage protocols such as Arweave, Filecoin, and Storj; and L1s such as NEAR, Olas, and Fetch.ai.
Compute protocols support model training, inference, and framework operation; storage protocols hold training data, model parameters, and on-chain interaction records; and L1s improve data-transmission efficiency and reduce latency through dedicated nodes.
2. Middleware Layer
The middleware layer bridges the infrastructure and upper-layer applications, providing development frameworks, data services, and privacy protection: data-labeling protocols such as Grass, Masa, and Vana; development-framework protocols such as Eliza, ARC, and Swarms; and privacy-computing protocols such as Phala.
Data services provide the fuel for model training; development frameworks rely on the infrastructure layer's compute and storage; and privacy computing protects data during training and inference.
3. Model Layer
The model layer covers model development, training, and distribution; its representative open-source model-training platform is Bittensor.
The model layer depends on the infrastructure layer's compute and the middleware layer's data; models are deployed on-chain through development frameworks; and model marketplaces deliver training results to the application layer.
4. Application Layer
The application layer consists of AI products oriented toward end users: agents such as GOAT and AIXBT, and DeFAI protocols such as Griffain and Buzz.
The application layer calls pre-trained models from the model layer, relies on the middleware layer for privacy computing, and requires the infrastructure layer's real-time compute for complex applications.
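To make the dependencies concrete, here is a minimal sketch of the four-layer stack as a plain data structure. The layer-to-protocol mapping follows the text above; the dictionary shape and the helper function are purely illustrative.

```python
# A minimal sketch of the four-layer Web3 AI stack described above,
# expressed as a plain data structure (illustrative, not exhaustive).
WEB3_AI_STACK = {
    "infrastructure": {
        "compute": ["Render", "Akash", "io.net"],
        "storage": ["Arweave", "Filecoin", "Storj"],
        "l1": ["NEAR", "Olas", "Fetch.ai"],
        "depends_on": [],
    },
    "middleware": {
        "data_labeling": ["Grass", "Masa", "Vana"],
        "dev_frameworks": ["Eliza", "ARC", "Swarms"],
        "privacy_compute": ["Phala"],
        "depends_on": ["infrastructure"],
    },
    "model": {
        "open_source_training": ["Bittensor"],
        "depends_on": ["infrastructure", "middleware"],
    },
    "application": {
        "agents": ["GOAT", "AIXBT"],
        "defai": ["Griffain", "Buzz"],
        "depends_on": ["model", "middleware", "infrastructure"],
    },
}

def dependencies_of(layer: str) -> list[str]:
    """Return the layers a given layer relies on, per the mapping above."""
    return WEB3_AI_STACK[layer]["depends_on"]

if __name__ == "__main__":
    for name, spec in WEB3_AI_STACK.items():
        print(f"{name} -> depends on: {', '.join(spec['depends_on']) or 'none'}")
```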
According to a sampling survey, about 70% of Web3 AI projects actually call OpenAI or centralized cloud platforms, only 15% use decentralized GPUs (for example, Bittensor subnet models), and the remaining 15% are hybrid architectures (sensitive data processed locally, general tasks in the cloud).
The actual utilization of decentralized compute protocols is far lower than expected and does not match their market valuations. There are three reasons for the low utilization: Web2 developers migrating to Web3 keep using their original toolchains; decentralized GPU platforms have not yet achieved a price advantage; and some projects use "decentralization" to bypass data-compliance checks while their actual compute still runs on centralized clouds.
AWS/GCP hold 90%+ of the AI compute market, while Akash's equivalent compute is only about 0.2% of AWS's. The moat of centralized cloud platforms includes cluster management, RDMA high-speed networking, and elastic scaling. Decentralized cloud platforms offer Web3 versions of these technologies, but with unresolved shortcomings: latency, since communication between distributed nodes is roughly 6 times that of centralized clouds; and toolchain fragmentation, since PyTorch/TensorFlow do not natively support decentralized scheduling.
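The toolchain gap is easy to see in practice. Below is a minimal sketch (not any specific platform's code) of how PyTorch's native distributed training assumes a single rendezvous address and a tightly coupled, low-latency cluster; the environment variables are the standard torchrun ones, and nothing in this API models geographically scattered, untrusted nodes.

```python
# Sketch: PyTorch's native distributed training assumes a tightly coupled
# cluster (one rendezvous address, fast interconnect such as NCCL/RDMA).
# Decentralized GPU networks have to bridge this gap themselves.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_worker():
    # All ranks must reach the same master address; on a decentralized
    # network this rendezvous point and the collectives become the bottleneck.
    dist.init_process_group(
        backend="nccl",  # assumes GPUs with a fast interconnect
        init_method=f"tcp://{os.environ['MASTER_ADDR']}:{os.environ['MASTER_PORT']}",
        rank=int(os.environ["RANK"]),
        world_size=int(os.environ["WORLD_SIZE"]),
    )
    model = torch.nn.Linear(1024, 1024).cuda()
    # Every backward pass triggers an all-reduce across ranks; with roughly
    # 6x higher inter-node latency (as cited above), synchronization dominates.
    return DDP(model, device_ids=[torch.cuda.current_device()])
```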
DeepSeek cuts compute consumption by roughly 50% through sparse training and, via dynamic model pruning, makes it feasible to train hundred-billion-parameter models on consumer-grade GPUs. Short-term market demand for high-end GPUs is therefore expected to fall significantly, and the market potential of edge computing is being revalued. As the figure above shows, before DeepSeek emerged the vast majority of protocols and applications in the industry ran on platforms such as AWS, with only a few use cases deployed on decentralized GPU networks; those use cases valued the latter's price advantage in consumer-grade compute and were not sensitive to latency.
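DeepSeek's exact sparse-training and pruning recipe is not reproduced here; as a generic illustration of the idea only, the sketch below uses PyTorch's built-in magnitude pruning to zero out half of a layer's weights, which is the simplest way to see how sparsity trades parameter count against per-step compute.

```python
# Generic illustration (not DeepSeek's actual recipe): magnitude pruning
# with PyTorch's built-in utilities.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(4096, 4096)

# Zero out the 50% of weights with the smallest absolute value.
prune.l1_unstructured(layer, name="weight", amount=0.5)

sparsity = float((layer.weight == 0).sum()) / layer.weight.numel()
print(f"sparsity after pruning: {sparsity:.0%}")  # ~50%

# Dense kernels still run at full cost; realizing the savings requires
# sparse or structured kernels, which is where the engineering effort lies.
```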
This situation may deteriorate further with DeepSeek's arrival. DeepSeek removes the constraints on long-tail developers, and low-cost, efficient inference models will spread at unprecedented speed. In fact, many of the centralized cloud platforms mentioned above, as well as several countries, have already begun deploying DeepSeek. The sharp drop in inference costs will spawn a large number of front-end applications, which in turn create huge demand for consumer-grade GPUs. Facing this coming market, centralized cloud platforms will launch a new round of competition for users, not only among the leading platforms but also with numerous smaller centralized clouds. The most direct form of competition is price cutting: 4090-class pricing on centralized platforms is likely to fall, which would be a disaster for Web3 compute platforms. When price is no longer their only moat and they are forced to cut prices as well, the result is something io.net, Render, and Akash cannot afford: a price war would destroy their last valuation ceiling, and declining revenue plus user outflow could trigger a death spiral, pushing decentralized compute protocols to pivot in a new direction.
Three: the significance for upstream and downstream protocols
As shown in the figure, I believe DeepSeek will have different impacts on the infrastructure layer, model layer, and application layer. In terms of positive impacts:
The application layer will benefit from a significant reduction in inference costs, allowing more Agent applications to stay online around the clock at low cost and complete tasks in real time;
At the same time, low-cost models like DeepSeek allow DeFAI protocols to compose more complex swarms, with thousands of Agents serving a single use case; each Agent's division of labor becomes very fine-grained and specific, which greatly improves the user experience by preventing user input from being incorrectly decomposed and executed;
Application-layer developers can fine-tune models and feed them price data for DeFi-related AI applications, on-chain data and analytics, and protocol governance data, without having to pay high licensing fees again (a fine-tuning sketch follows this list).
With DeepSeek, the significance of the open-source model layer has been proven: opening high-end models to long-tail developers can stimulate development on a broad front;
The compute wall built around high-end GPUs over the past three years has been breached. Developers have more choices, and the direction of more open-source models has been set; in the future, competition between AI models will no longer be about compute but about algorithms, and that shift in belief will become the cornerstone of confidence for open-source model developers;
Specialized subnets will keep emerging around DeepSeek, model parameter counts will grow under the same compute budget, and more developers will join the open-source community.
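As a concrete illustration of the fine-tuning point above, here is a minimal sketch, assuming the Hugging Face transformers and peft libraries, of attaching LoRA adapters to an open-weights checkpoint so it can be adapted to price feeds, on-chain analytics, or governance data; the model id and target modules are placeholders to be adjusted for whatever checkpoint is actually used.

```python
# Sketch, assuming transformers + peft are installed: LoRA-style fine-tuning
# of an open-weights model on domain data. Model id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "deepseek-ai/deepseek-llm-7b-base"  # placeholder open checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Low-rank adapters touch only a small fraction of weights, so fine-tuning
# fits on consumer-grade GPUs and involves no proprietary licensing fee.
lora = LoraConfig(r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"],  # adjust per model
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically <1% of total parameters
```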
In terms of negative impacts:
The objective latency of compute protocols in the infrastructure layer cannot be optimized away;
and hybrid networks mixing A100s and 4090s require more sophisticated coordination algorithms, which is not where decentralized platforms hold an advantage.
Popping the Agent bubble and nurturing the new life of DeFAI
Agents are the industry's last hope for AI, and DeepSeek's emergence has loosened the compute constraints and painted a picture of an application explosion. What should have been a major positive for the Agent track was instead punctured by the strong correlation between the industry, the US stock market, and Fed policy; the remaining bubble burst and the track's market value plummeted to the bottom.
In the wave of integration between AI and industry, technological breakthroughs and market games are always closely intertwined. The chain reaction triggered by NVIDIA's market-value swings acts like a demon-revealing mirror, exposing the deep dilemmas of the industry's AI narrative: from on-chain Agents to DeFAI engines, beneath the seemingly complete ecosystem map lie weak technical infrastructure, hollowed-out value logic, and the harsh reality of capital dominance. The seemingly prosperous on-chain ecosystem conceals chronic ailments: a crowd of high-FDV tokens competing for limited liquidity, old assets surviving on FOMO sentiment, and developers trapped in PVP competition that drains their capacity to innovate. When incremental capital and user growth hit a ceiling, the whole industry falls into the "innovator's dilemma": eager to break out of the narrative deadlock, yet unable to shake off path dependence. This torn state is precisely what gives AI Agents a historic opportunity: not merely an upgrade of the technological toolbox, but a reconstruction of the value-creation paradigm.
Over the past year, more and more teams in the industry have found the traditional financing model failing: giving VCs a small share while retaining high control and waiting for the listing venue to pump the market is no longer sustainable. Under the triple pressure of tightening VC pockets, retail investors refusing to take the other side, and high listing thresholds at major exchanges, a new playbook better suited to bear markets is emerging: partnering with top KOLs plus a small amount of VC, launching with a large share allocated to the community, and cold-starting at a low market value.
Innovators represented by Soon and Pump Fun are opening new paths through "community launches": endorsed by top KOLs, they distribute 40%-60% of tokens directly to the community, launching projects at valuations as low as $10 million FDV and raising millions of dollars. This model builds FOMO-driven consensus through KOL influence, lets teams lock in profits early, and trades high circulating liquidity for market depth. Although they give up the short-term advantages of control, they can buy tokens back cheaply in a bear market through compliant market-making mechanisms. In essence, this is a paradigm shift in the power structure: from a VC-led game of pass-the-parcel (institutional takeover, listing sell-off, retail purchase) to a transparent game of community-driven consensus pricing, forming a new symbiotic relationship between projects and their communities around the liquidity premium. As the industry enters the life cycle of this transparency revolution, projects that cling to the old logic of control may become relics of the era as power shifts.
The short-term market pain only confirms the irreversible trend of technological advancement. When AI Agents cut on-chain interaction costs by two orders of magnitude and adaptive models continuously improve the capital efficiency of DeFi protocols, the industry can expect the long-awaited mass adoption. This revolution does not rely on conceptual speculation or capital force-ripening; it is rooted in the technological penetration of real needs: just as the electricity revolution did not stall because light-bulb companies went bankrupt, Agents will become the real golden track after the bubble bursts. DeFAI may be the fertile soil nurturing this new growth. Once low-cost inference becomes routine, we may soon see use cases where hundreds of Agents are combined into a single swarm. With equivalent compute, the significant increase in model parameters ensures that Agents in the open-source era can be fine-tuned more thoroughly, so that even complex user commands can be decomposed into tasks a single Agent can fully execute. Each Agent optimizing its own on-chain operations may in turn lift overall DeFi protocol activity and liquidity. More complex DeFi products led by DeFAI will emerge, and that is where the new opportunities lie after the last bubble burst.
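As a toy illustration of the swarm pattern described above, here is a minimal, hypothetical sketch of decomposing a user command into narrow sub-tasks and routing each one to a specialized Agent; the agent names, task kinds, and routing rule are invented for illustration and do not reflect any specific DeFAI protocol's implementation.

```python
# Hypothetical sketch of the "many narrowly specialized agents" pattern.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubTask:
    kind: str       # e.g. "price_feed", "swap"
    payload: dict

class Agent:
    def __init__(self, kind: str, handler: Callable[[dict], str]):
        self.kind = kind
        self.handler = handler

    def run(self, task: SubTask) -> str:
        return self.handler(task.payload)

class Swarm:
    """Route each decomposed sub-task to the one agent specialized for it."""
    def __init__(self, agents: list[Agent]):
        self.by_kind = {a.kind: a for a in agents}

    def execute(self, tasks: list[SubTask]) -> list[str]:
        return [self.by_kind[t.kind].run(t) for t in tasks]

# Toy decomposition of a user command into narrow, single-purpose tasks.
tasks = [
    SubTask("price_feed", {"asset": "ETH"}),
    SubTask("swap", {"sell": "ETH", "buy": "USDC", "amount": 1.0}),
]
swarm = Swarm([
    Agent("price_feed", lambda p: f"fetch {p['asset']} price"),
    Agent("swap", lambda p: f"swap {p['amount']} {p['sell']} -> {p['buy']}"),
])
print(swarm.execute(tasks))
```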