The AI track has quietly shifted to a new competitive logic. It used to be a race over who had the bigger model, more parameters, or faster generation; today the real differentiator is no longer capability, but whether the system can be trusted.
This is not just the industry spinning a new narrative for itself; the outside world has already made its attitude very clear.
China Net's recently released "Technology Trends Report for the Next Five Years" points to one trend directly: the world is entering the era of AI agents. And these are not the kind that just chat with you, write copy, or handle customer service; they will be involved in high-sensitivity scenarios such as financial risk control, government approvals, and public governance, and will even start participating in real decision-making.
However, the report repeatedly stresses one premise: if AI is not trustworthy, it has no business being embedded in these systems.
Research from IIT Delhi puts it even more bluntly: black-box structures, hallucination, and the lack of explainability are currently AI's biggest trust gaps. And the more powerful the model, the worse a failure becomes: when something goes wrong, the risk is not linear, it is directly amplified.
It is precisely this reality that produces a very fragmented picture: on one side, a flood of "AI + plugin" and "AI + wrapper" applications that seem to add ever more features; on the other, the core question of whether AI can be trusted in critical scenarios remains unresolved, and almost no one is tackling it head-on.
And @inference_labs' recent series of moves is aimed squarely at this hardest problem.
They launched TruthTensor Season Two and renamed the original Subnet-2 to DSperse. The name change itself isn't the point; what matters is that the direction is now very clear: they are no longer just "building a subnet," they are building foundational infrastructure for decentralized, verifiable AI.
The core idea of DSperse is actually not complicated: stop letting any single model, node, or system vouch for "correctness" on its own. Inference is carried out by multiple parties, verification involves multiple participants, and trust comes not from authority but from a process that is itself verifiable, quantifiable, and traceable.
The network both runs models and audits them; the promise is not "trust me," it is "you can verify it yourself."
More importantly, DSperse fully separates "inference" from "verification" and executes both in a distributed way. That is not very efficient, but in terms of system safety it sidesteps the most fatal failure mode of centralized AI: one node goes down and the whole system goes with it.
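To make the pattern concrete, here is a minimal sketch of what "separate inference from verification and let trust come from the process" can look like. This is not DSperse's actual protocol; the function names, the quorum threshold, and the toy checks below are all illustrative assumptions.

```python
# Illustrative sketch only: NOT DSperse's real protocol.
# It shows the general pattern of separating inference from verification
# and accepting a result only when a quorum of independent verifiers agrees.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AuditRecord:
    verifier_id: str
    approved: bool
    note: str

@dataclass
class VerifiedResult:
    output: str
    accepted: bool
    audit_log: List[AuditRecord] = field(default_factory=list)

def run_with_verification(
    prompt: str,
    infer: Callable[[str], str],                  # untrusted inference node
    verifiers: List[Callable[[str, str], bool]],  # independent checkers
    quorum: float = 2 / 3,                        # fraction that must approve
) -> VerifiedResult:
    """Run inference once, then have every verifier independently check the
    (prompt, output) pair. Trust comes from the recorded, repeatable process,
    not from the inference node itself."""
    output = infer(prompt)
    log = []
    for i, check in enumerate(verifiers):
        ok = check(prompt, output)
        log.append(AuditRecord(verifier_id=f"verifier-{i}", approved=ok,
                               note="approved" if ok else "rejected"))
    approvals = sum(r.approved for r in log)
    accepted = approvals / max(len(verifiers), 1) >= quorum
    return VerifiedResult(output=output, accepted=accepted, audit_log=log)

# Example usage with stand-in functions:
if __name__ == "__main__":
    fake_model = lambda p: "42"
    checks = [
        lambda p, o: o.isdigit(),   # format check
        lambda p, o: len(o) < 10,   # sanity bound
        lambda p, o: o == "42",     # e.g. cross-check against a second model
    ]
    result = run_with_verification("answer?", fake_model, checks)
    print(result.accepted, [r.note for r in result.audit_log])
```

In a real decentralized setting the verifiers would be independent nodes and the audit log would be anchored somewhere tamper-evident; the sketch only shows the shape of the trust model, where an output is accepted because the checking process is recorded and repeatable, not because any single party vouched for it.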
Honestly, this path is hard, and it is unlikely to get much attention in the short term. But if AI is to enter the real world, it is almost unavoidable.
In my view, 2026 will be a critical turning point. By then, AI will not be short on model capability; what will truly be scarce are three things: verifiability, auditability, and a trustworthy infrastructure layer.
Judging by the current pace, Inference Labs has chosen to take on the hardest part first. Among the many projects still competing on parameters, model size, or wrapper applications, DSperse looks more like the inconspicuous variable that could decide where the next stage goes.