Gonka outlines PoC mechanism and model evolution roadmap: aligning rewards with real compute costs so GPUs of every tier can keep participating
Odaily Planet Daily News: In a recent community AMA, the decentralized AI compute network Gonka explained a set of phased adjustments to its PoC mechanism and model operation. The main changes are: running PoC and inference on the same large model, switching PoC activation from delayed model switching to near-real-time triggering, and refining the compute-weight calculation so that it better reflects the actual computational cost of different models and hardware.
Co-founder David stated that these adjustments are not aimed at short-term output or at any individual participant, but are a necessary evolution as the network's compute capacity expands rapidly. They are intended to strengthen the network's stability and security under high load and to lay the groundwork for supporting larger-scale AI workloads in the future.
Addressing community concerns that small models currently generate higher token output, the team pointed out that models of different sizes consume very different amounts of compute to produce the same number of tokens. As the network moves toward higher compute density and more complex tasks, Gonka is gradually aligning compute weights with actual computational costs to prevent a long-term imbalance in the compute structure that could limit the network's overall scalability.
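To make the cost argument concrete, here is a minimal sketch. It assumes the common rule of thumb that a dense transformer spends roughly 2 × parameter-count FLOPs per generated token; Gonka has not published its actual weight formula, so the flops_per_token helper and the model sizes below are illustrative assumptions only.

```python
# Illustrative sketch only: Gonka's real weight formula is not public.
# It shows why per-token output alone misprices compute: for a dense
# transformer, forward-pass FLOPs per generated token is roughly
# 2 * parameter_count, so a 70B model does ~10x the work of a 7B model
# for the same number of tokens.

def flops_per_token(params: float) -> float:
    """Approximate forward-pass FLOPs per generated token (dense model)."""
    return 2.0 * params

models = {"small-7B": 7e9, "large-70B": 70e9}

for name, params in models.items():
    print(f"{name}: ~{flops_per_token(params):.1e} FLOPs per token")

# A cost-aligned weight could scale each node's token count by this factor,
# so rewards track actual computation instead of raw token throughput.
ratio = flops_per_token(70e9) / flops_per_token(7e9)
print(f"Cost ratio (70B vs 7B) per token: ~{ratio:.0f}x")
```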
Under the latest PoC mechanism, the network has compressed PoC activation to within 5 seconds, cutting the compute wasted on model switching and waiting so that GPU resources go more directly into useful AI computation. In addition, running everything on a unified model reduces the system overhead of nodes switching between consensus and inference, improving overall compute utilization.
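For a rough sense of what faster activation buys, the sketch below compares GPU utilization before and after. The ≤5-second activation figure comes from the announcement; the epoch length and the previous switching overhead are hypothetical values chosen only for illustration.

```python
# Hypothetical numbers for illustration only: the announcement gives the new
# <=5 s PoC activation time but not the previous delay or the epoch length,
# so EPOCH and OLD_OVERHEAD below are assumptions.

def effective_utilization(epoch_seconds: float, switch_overhead_seconds: float) -> float:
    """Fraction of an epoch the GPU spends on useful AI computation."""
    return 1.0 - switch_overhead_seconds / epoch_seconds

EPOCH = 60 * 60        # assumed 1-hour epoch
OLD_OVERHEAD = 5 * 60  # assumed former delayed-switching overhead (5 min)
NEW_OVERHEAD = 5       # <= 5 s near-real-time PoC activation (from the article)

print(f"before: {effective_utilization(EPOCH, OLD_OVERHEAD):.1%} useful compute")
print(f"after:  {effective_utilization(EPOCH, NEW_OVERHEAD):.2%} useful compute")
```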
The team also emphasized that single-card and small-to-mid-sized GPU operators can continue to earn rewards and take part in governance through pooled participation, flexible per-epoch participation, inference tasks, and other channels. Gonka's long-term goal is for compute of different tiers to coexist within the same network as the mechanism evolves.
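One way pooled participation could work is a simple proportional split of a pool's epoch reward by contributed compute weight. The sketch below shows that idea only; it is not Gonka's published pooling rule, and the function name, node names, and weights are hypothetical.

```python
# Minimal sketch of proportional pool payouts, assuming rewards within a pool
# are split by contributed compute weight. Gonka's actual pooling rules are
# set through on-chain governance and may differ.

def split_pool_reward(contributions: dict[str, float], pool_reward: float) -> dict[str, float]:
    """Divide a pool's epoch reward in proportion to each member's compute weight."""
    total = sum(contributions.values())
    return {node: pool_reward * weight / total for node, weight in contributions.items()}

# Example: one 8-GPU node pooled with two single-card participants.
epoch_payouts = split_pool_reward(
    {"node-8xGPU": 8.0, "node-1xGPU-a": 1.0, "node-1xGPU-b": 1.0},
    pool_reward=100.0,
)
print(epoch_payouts)
```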
Gonka stated that all key rule changes are advanced through on-chain governance and community voting. Going forward, the network will gradually support more model types and forms of AI tasks, providing continuous and transparent participation opportunities for GPUs of all sizes worldwide and promoting the long-term, healthy development of decentralized AI compute infrastructure.