Rising against the trend! A major breakthrough in storage chips! Institutions: a buying opportunity
A Google paper on a new algorithm left storage-chip concept stocks, including Western Digital, badly shaken!
On Friday, amid a broad selloff across the major U.S. stock indexes, U.S. storage-chip concept stocks rose against the trend. During the session, SanDisk was up more than 5% at one point and Micron Technology more than 3%. At the close, SanDisk was up 2.10%, Micron Technology 0.50%, Seagate Technology 0.34%, and Western Digital 0.73%. Just the day before, the same stocks had been hit by a massive round of selling: at Thursday's close, SanDisk had plunged more than 11%, Seagate Technology more than 8%, Western Digital more than 7%, and Micron Technology nearly 7%.
Some analysts said the sharp drop in storage-chip stocks on Thursday may have been caused by a market misunderstanding. The TurboQuant ultra-efficient AI memory compression algorithm mentioned in Google’s paper applies only to key-value cache during the inference stage, does not affect the high-bandwidth memory (HBM) used by model weights, and has nothing to do with AI training tasks.
Other analysts said advanced compression technology only reduces bottlenecks and does not destroy demand for DRAM/flash. Investors may have cashed out profits on Google’s news, but demand for memory consumption remains very strong. A near-term pullback in memory stocks is a “chance to get on board,” not a stock-price turning point.
Storage-chip stocks hit by Google’s new algorithm
Here comes another "AI market ghost story." Google has released research on a new algorithm that can greatly reduce memory usage, and storage-chip shares suffered a sharp selloff in its wake.
On Thursday, SanDisk fell more than 11%, Micron Technology fell nearly 7%, SK hynix fell more than 6%, Samsung Electronics fell nearly 5%, and Kioxia fell nearly 6%. Estimates show that the major global memory giants saw their market value evaporate by more than $90 billion in a single day on Thursday. On Friday, in the U.S. stock market, storage-chip concept stocks rose against the trend—SanDisk was up more than 2%, and Micron Technology was up 0.50%.
Over the past few months, storage-chip companies had performed strongly: a surge in investment in artificial-intelligence infrastructure led to a supply shortage, driving chip prices higher and boosting profits. As of Wednesday this week, SK hynix and Samsung Electronics shares were up more than 50% this year, while Kioxia's share price had risen more than 100%.
The trigger for the selloff was the "TurboQuant" paper that Google's research team is scheduled to formally present at the International Conference on Learning Representations (ICLR 2026). Google's team said that two innovations, PolarQuant (polar-coordinate quantization) and QJL (quantized Johnson-Lindenstrauss transform), let them compress the KV cache to 3-bit precision with "zero loss," cutting memory usage by a factor of at least six. On an H100 GPU, the algorithm delivers up to an 8x performance improvement over unquantized key-value pairs.
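To make the claim concrete, the sketch below shows what low-bit KV-cache quantization means in principle. It is a generic uniform 3-bit quantizer, not Google's actual PolarQuant/QJL method (the article only describes those at a high level), and its compression ratio against fp16 storage will not match the paper's "at least 6x" figure, which depends on the baseline precision and metadata accounting:

```python
import numpy as np

def quantize_kv(kv: np.ndarray, bits: int = 3):
    """Uniformly quantize a float KV-cache tensor to `bits` precision.

    A simplified stand-in for the PolarQuant/QJL schemes named in the
    paper, whose details are not given in the article.
    """
    levels = 2 ** bits - 1                          # 7 code levels for 3-bit
    lo, hi = float(kv.min()), float(kv.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((kv - lo) / scale).astype(np.uint8)  # 3-bit codes (held in uint8 here)
    return q, lo, scale

def dequantize_kv(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale + lo

# Toy KV cache: compare fp16 storage with ideally packed 3-bit codes
kv = np.random.randn(4, 128, 64).astype(np.float32)
q, lo, scale = quantize_kv(kv, bits=3)
fp16_bytes = kv.size * 2
packed_3bit_bytes = kv.size * 3 / 8                 # ideal bit-packing, no metadata
print(f"fp16: {fp16_bytes} B, packed 3-bit: {packed_3bit_bytes:.0f} B, "
      f"ratio ~{fp16_bytes / packed_3bit_bytes:.1f}x")
```

The real schemes add per-group scales and transforms to keep the reconstruction error near zero, which is where the "zero loss" claim comes from; a plain uniform quantizer like this one only bounds the error by half a quantization step.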
Google promoted the research on X this week, even though it was originally published last year. Investors may worry that it will reduce demand for memory from mega data-center operators, in turn dragging down prices of the same components used in smartphones and consumer electronics.
Institutional view: the market may be misreading it
Morgan Stanley said in its latest research note that the market may be misunderstanding the situation. The technology applies only to key-value cache during the inference stage and does not affect the high-bandwidth memory (HBM) used by model weights, nor is it related to AI training tasks. Analysts emphasized that the so-called “6x compression” does not mean a reduction in total storage demand; rather, it increases per-GPU throughput by improving efficiency.
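The throughput point can be seen with back-of-envelope KV-cache arithmetic. The model dimensions below are illustrative assumptions (a hypothetical 7B-class transformer), not figures from the article or the paper:

```python
# KV-cache size per sequence:
#   2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes_per_value
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value):
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

layers, kv_heads, head_dim = 32, 8, 128      # hypothetical 7B-class model
seq_len = 32_768                              # 32k-token context

fp16 = kv_cache_bytes(layers, kv_heads, head_dim, seq_len, 2)       # 16-bit values
b3   = kv_cache_bytes(layers, kv_heads, head_dim, seq_len, 3 / 8)   # packed 3-bit

print(f"fp16 KV cache:  {fp16 / 2**30:.2f} GiB per sequence")
print(f"3-bit KV cache: {b3 / 2**30:.2f} GiB per sequence")
# The same GPU memory now holds ~5x more concurrent sequences (or longer
# contexts); the HBM holding the model weights is untouched, which is why
# compression raises per-GPU throughput rather than shrinking HBM demand.
```

Under these assumptions the cache drops from 4 GiB to 0.75 GiB per sequence, so a fixed HBM budget serves several times as many requests at once; none of the savings come out of the memory that weights occupy.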
Morgan Stanley analyst Shawn Kim said the impact of Google's research on the industry should, if anything, be positive, because it targets a key bottleneck: the efficiency of the key-value cache used during inference (i.e., when running AI models). He wrote: "If models can run with substantially reduced memory demand without losing performance, then the service cost per query would drop significantly, making AI deployments more profitable." Kim added that, given the investment-return opportunity, TurboQuant is a positive for hyperscalers, and in the long run it may also benefit memory makers, because "lower single-token cost can also drive higher product adoption demand."
Morgan Stanley cited the “Jevons paradox” from economics to explain the long-term impact: while technological efficiency improvements reduce unit costs, overall demand often expands due to lower usage barriers.
Lynx Equity Strategies analyst KC Rajkumar said some media reports are exaggerated: today's inference models already widely use 4-bit quantized data, and Google's touted "8x performance improvement" is benchmarked against older 32-bit models. "However, due to extreme supply tightness, this almost certainly will not reduce demand for memory and flash over the next 3 to 5 years," Rajkumar wrote, adding that advanced compression only relieves bottlenecks and does not destroy demand for DRAM/flash.
Wells Fargo analyst Andrew Rocha said the existence of a compression algorithm has never fundamentally changed the overall scale of hardware procurement. By dramatically lowering the service cost per query, such technologies can move models that previously could only run on expensive cloud clusters to run locally, effectively lowering the threshold for large-scale AI deployments.
Four hyperscalers, led by Amazon and Google, plan to invest roughly $650 billion this year to build data centers and buy up Nvidia's AI accelerators and the storage-chip components that go with them. SK Group chairman Choi Tae-won recently said the tight supply of storage chips will persist until 2030.
From a supply-chain perspective, in 2026 server DRAM demand is expected to grow by 39%, and HBM demand by 58% year over year. The optimization effect of TurboQuant may be overwhelmed by the industry’s growth wave.
Mizuho Securities expert Jordan Klein believes the current pullback in memory stocks looks more like a “chance to get on board” rather than a stock-price turning point. In his report, Klein wrote that after experiencing a strong rally in 2025 and the early part of 2026, the memory-stock bulls began to waver. Even though the memory industry is known for dramatic cyclical volatility, he emphasized that the recent selloff fits a familiar pattern.
Mizuho said this selloff happens once every few months. It is not a signal of a market top, nor is it a reason to sell. In fact, buying on dips can be profitable.
(Source: China Securities Journal)