Research Report: Examining How SCP And AO Influence The On-Chain World From First Principles

Advanced · 1/15/2025, 11:47:25 AM
This article will explore the concept and architectural design of AO in depth, analyzing how it addresses the challenges faced by existing public blockchains like Ethereum, ultimately bringing new development opportunities to Web3.

Keypoints:

  1. From Bitcoin to Ethereum, how can we find the optimal path to break through the limitations of throughput and scenarios?
  2. Starting from first principles, what is the key to breaking through the noise of the market memes and identifying the fundamental needs of blockchain?
  3. What kind of magic does the disruptive innovation principle behind SCP and AO (Actor Oriented), the separation of storage and computation, possess that can enable Web3 to truly unleash its potential?
  4. Will the results of running deterministic programs on immutable data be unique and reliable?
  5. In this narrative, why can SCP and AO (Actor Oriented) become the all-around “hexagonal warriors” of infinite performance, trustworthy data, and composability?

Introduction

[Data source:BTC price]

Since the birth of blockchain in 2009, over 15 years have passed. As a paradigm shift in digital technology, it records digital and network values, making cryptocurrency a new innovation in the capital paradigm.

As the firstborn, Bitcoin is poised to become a strategic reserve asset. At the 2024 Bitcoin conference, Trump made a commitment, stating that if he returns to the White House, he would ensure the government retains 100% of its Bitcoin holdings and designates it as a strategic reserve asset for the U.S.

After Trump’s election victory, Bitcoin surged 150%, with its peak reaching $107,287.

Trump’s win is clearly more favorable to the crypto industry, as he has repeatedly expressed strong support for cryptocurrencies.

However, the high sensitivity of cryptocurrencies to election outcomes could trigger sharp short-term volatility. Will this strong upward momentum be sustainable? The author believes that only by eliminating uncertainty and improving blockchain scalability can a new “red sea” be ushered in.

The Shadows Behind the “Web3” Boom After the U.S. Election

[Data source:DefiLlama]

Beneath the spotlight, the TVL (Total Value Locked) of Ethereum, the second-largest cryptocurrency by market cap, has remained sluggish since reaching its historic peak in 2021.

Even in the third quarter of 2024, Ethereum’s decentralized finance (DeFi) revenue dropped to $261 million, the lowest level since Q4 2020.

At first glance, there may be occasional spikes, but the overall trend shows a slowdown in DeFi activity on the Ethereum network.

Additionally, the market has seen the rise of entirely new, alternative blockchain ecosystems, such as the recently popular Hyperliquid, a trading chain built on an order-book model. Its metrics have grown rapidly, with its market cap soaring into the top 50 in just two weeks, and it is expected to generate annual revenue ranking just below Ethereum, Solana, and Tron among all blockchains. This indirectly highlights the fatigue of Ethereum’s traditional, AMM-based DeFi.

[Data source:Compound trading volume]

[Data source:Uniswap trading volume]

DeFi was once the core highlight of the Ethereum ecosystem, but due to reduced transaction fees and user activity, its revenue has significantly declined.

In response, the author reflects on the reasons behind the current dilemmas faced by Ethereum, and indeed the entire blockchain space, and on how to break through them.

Coincidentally, with its successful fifth test flight, SpaceX has emerged as a rising star in commercial space exploration. Looking back at SpaceX’s development path, its success can be attributed to a key methodology: first principles. (Tip: The concept of first principles was first introduced by the ancient Greek philosopher Aristotle over 2,300 years ago. He described first principles as “the most basic propositions or assumptions in every system exploration, which cannot be omitted, deleted, or violated.”)

Now, let’s also apply the method of first principles, peeling away the fog layer by layer, to explore the fundamental “atoms” of the blockchain industry. From a fundamental perspective, we will re-examine the current dilemmas and opportunities facing this industry.

Is Web3’s “Cloud Service” a Step Backward or the Future?

When the concept of AO (Actor Oriented) was introduced, it attracted widespread attention. Against the backdrop of the increasing homogeneity of EVM-based public blockchains, AO, as a disruptive architectural design, has shown unique appeal.

This is not merely a theoretical concept, but a team is already putting it into practice.

As mentioned earlier, the greatest value of blockchain lies in recording digital value. From this perspective, it serves as a publicly transparent global public ledger. Based on this essence, it can be argued that the first principle of blockchain is “storage.”

AO is realized through a storage-based consensus paradigm (SCP). As long as storage remains immutable, the result can be guaranteed to reach consensus no matter where the computation occurs. Thus the AO global computer is born, enabling large-scale parallel computations to interconnect and collaborate.
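
To make this concrete, here is a minimal, hypothetical sketch (plain Python, not SCP or AO code) of why deterministic programs replayed over the same immutable log always agree, regardless of which machine runs them:

```python
import hashlib
import json

# Stand-in for immutable on-chain storage: a frozen sequence of messages.
IMMUTABLE_LOG = (
    {"op": "credit", "account": "alice", "amount": 100},
    {"op": "transfer", "src": "alice", "dst": "bob", "amount": 30},
)

def replay(log):
    """Deterministically fold the log into a state; any machine gets the same result."""
    state = {}
    for msg in log:
        if msg["op"] == "credit":
            state[msg["account"]] = state.get(msg["account"], 0) + msg["amount"]
        elif msg["op"] == "transfer":
            state[msg["src"]] = state.get(msg["src"], 0) - msg["amount"]
            state[msg["dst"]] = state.get(msg["dst"], 0) + msg["amount"]
    return state

def state_hash(state):
    # Canonical serialization lets independent nodes compare results byte for byte.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

# Two independent "nodes" replaying the same log always produce the same hash.
assert state_hash(replay(IMMUTABLE_LOG)) == state_hash(replay(IMMUTABLE_LOG))
print(replay(IMMUTABLE_LOG))  # {'alice': 70, 'bob': 30}
```

Because the log is append-only and the program is deterministic, agreement on the data implies agreement on the result; this is the intuition behind consensus based purely on storage.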

Looking back at 2024, one of the most noteworthy events in the Web3 space was the explosion of the inscription ecosystem, which can be seen as an early practice of separating storage from computation. For example, the etching technique used by the Runes protocol allows small amounts of data to be embedded in Bitcoin transactions. While this data does not affect a transaction’s primary function, it serves as additional information, forming a clear, verifiable, and unspendable output.

Although some technical observers initially raised concerns about the security of Bitcoin inscriptions, fearing they might become potential entry points for network attacks, the inscription ecosystem has stored its data entirely on-chain for the past two years, and no blockchain fork has occurred to date. This stability once again shows that as long as the stored data is not tampered with, data consistency and security can be guaranteed no matter where the computation occurs.

Perhaps you will notice that this is nearly identical to traditional cloud services. For example:

In terms of computational resource management, an “Actor” in the AO architecture is an independent computing entity, and each computing unit can run its own environment. Doesn’t this resemble microservices and Docker in traditional cloud services? Similarly, traditional cloud services rely on S3 or NFS for storage, while AO relies on Arweave.

However, simply reducing AO to a “reheated old idea” would be inaccurate. Although AO borrows some design concepts from traditional cloud services, its core lies in combining decentralized storage with distributed computing. Arweave, as a decentralized storage network, differs fundamentally from traditional centralized storage. This decentralized characteristic provides Web3 data with higher security and censorship resistance.

More importantly, the combination of AO and Arweave is not just a simple technical stack; it creates a new paradigm. This paradigm combines the performance advantages of distributed computing with the trustworthiness of decentralized storage, providing a solid foundation for the innovation and development of Web3 applications. Specifically, this combination is reflected in the following two aspects:

  1. Achieving a completely decentralized design in the storage system while ensuring performance through a distributed architecture.
  2. This combination not only solves some core challenges in the Web3 space (such as storage security and openness) but also provides the technical foundation for potential future limitless innovation and composition.

The following will explore AO’s concept and architectural design in depth and analyze how it addresses the dilemmas faced by existing public blockchains like Ethereum, ultimately bringing new development opportunities to Web3.

Viewing the Current Web3 Dilemma from the “Atomic” Perspective

Since Ethereum emerged with smart contracts, it has undoubtedly become the dominant force.

Some may ask, isn’t there Bitcoin? However, it’s important to note that Bitcoin was created as a replacement for traditional currencies, aiming to become a decentralized and digital cash system. Ethereum, on the other hand, is not just a cryptocurrency; it is a platform that enables the creation and execution of smart contracts and decentralized applications (DApps).

Overall, Bitcoin is a digital alternative to traditional money, with a high price but not necessarily a high value. Ethereum, however, is more like an open-source platform, offering richer prospective value, and it better represents the current conceptual vision of an open Web3 world.

Since 2017, many projects have attempted to challenge Ethereum, but very few have lasted. Ethereum’s performance has long been criticized, leading to the rise of Layer 2 solutions. However, the seemingly prosperous growth of Layer 2 is, in reality, a desperate struggle in the face of adversity. As competition intensifies, a series of issues have gradually emerged, becoming serious constraints on the development of Web3:

There is an upper limit to performance, and the user experience remains poor

[Data source:DeFiLlama]

[Data source:L2 BEAT]

Recently, more and more people believe that Ethereum’s Layer 2 (L2) scaling plan has failed.

Initially, L2 was seen as an important continuation of Ethereum’s culture and a key part of its scaling strategy. It was also supported by the expectation that L2 would reduce gas fees and improve throughput, leading to growth in both user numbers and transaction volumes. However, despite the reduction in gas fees, the anticipated growth in user numbers did not materialize.

In fact, is L2 really to blame for the failure of the scaling plan? Clearly, L2 is just a scapegoat. While it bears some responsibility, the main responsibility lies with Ethereum itself. Further, this outcome is an inevitable result of the underlying design issues of most Web3 chains today.

To explain this from an “atomic” perspective, L2 is responsible for computation, while Ethereum handles the fundamental “storage” of the blockchain. To ensure sufficient security, Ethereum must store data and achieve consensus.

However, Ethereum’s design must prevent potential infinite loops during execution, which could otherwise halt the entire platform. Therefore, any given smart contract execution is limited to a finite number of computation steps, enforced through gas limits.
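
As a rough illustration of bounded execution (a toy interpreter, not EVM code; the names are invented), charging a unit of gas per step forces every program, even an endless loop, to terminate within its budget:

```python
class OutOfGas(Exception):
    """Raised when a computation exceeds its step budget."""

def run_with_gas(steps, gas_limit):
    """Execute an iterable of zero-argument callables, charging one gas unit per step."""
    gas = gas_limit
    for step in steps:
        if gas <= 0:
            raise OutOfGas(f"halted after {gas_limit} steps")
        gas -= 1
        step()
    return gas  # any unused gas is left over

def endless_noops():
    while True:            # a program that would never terminate on its own
        yield lambda: None

try:
    run_with_gas(endless_noops(), gas_limit=1000)
except OutOfGas as err:
    print(err)             # halted after 1000 steps
```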

This leads to the paradox that L2 is designed to have infinite performance, but in reality, the limitations of the main chain impose a ceiling on it.

The bottleneck effect dictates that L2 has an upper limit.

For a more detailed understanding of the mechanism, readers can explore further by reading: “From Traditional DeFi to AgentFi: Exploring the Future of Decentralized Finance.”

The Limited Appeal of Current Use Cases

Ethereum’s proudest achievement is its flourishing application ecosystem, where a wide variety of decentralized applications (DApps) have been developed.

However, is the ecosystem truly as vibrant and diverse as it seems?

Clearly, the answer is no. Behind Ethereum’s flourishing application ecosystem lies a financialization-heavy environment, with a significant lack of mature non-financial applications.

Let’s take a look at the more prosperous application sectors on Ethereum:

First, concepts like NFTs, DeFi, GameFi, and SocialFi, while innovative in financial terms, are not yet suitable for the general public. The reason Web2 grew so rapidly is rooted in its functionality, which is closely tied to people’s daily lives.

Compared to financial products and services, ordinary users are more concerned with functionalities like messaging, socializing, video streaming, and e-commerce.

Second, from a competitive standpoint, credit lending in traditional finance is a very common and widespread product. However, in the DeFi space, this type of product is still quite rare. The primary reason is the current lack of an effective on-chain credit system.

Building a credit system requires enabling users to truly own their online personal profiles and social graphs, which can carry over across different applications.

Only when this decentralized information can be stored and transmitted at zero cost will it be possible to build a powerful Web3 personal information graph and a credit-based Web3 application system.

Here, we reaffirm a key issue: the failure of Layer 2 (L2) to attract a significant number of users is not its fault. L2 was never the core driving force. The real way to break through the Web3 dilemma is to innovate new application scenarios that attract users.

However, the current situation is like being stuck in holiday traffic—despite the many innovative ideas, it’s hard to push them forward due to the constraints of transaction performance.

The essence of blockchain is “storage.” When storage and computation are coupled, the design becomes less “atomic,” and a design that strays from this essence will inevitably hit a performance ceiling.

Some viewpoints define blockchain’s essence as a transaction platform, a currency system, or as emphasizing transparency and anonymity. However, these perspectives overlook blockchain’s fundamental characteristics as a data structure and its broader application potential. Blockchain is not only for financial transactions; its architecture allows it to span multiple industries, such as supply chain management, healthcare records, and even copyright management.

Therefore, the essence of blockchain lies in its ability to function as a storage system. This is not just because it can securely store data, but because it guarantees data integrity and transparency through a distributed consensus mechanism. Once a block is added to the chain, it is almost impossible to alter or delete.

Atomic Infrastructure: AO Makes Infinite Performance Possible

[Data source:L2 TPS]

The basic architecture of blockchain faces a clear bottleneck: the limitation of block space. It’s like a ledger of fixed size, where every transaction and data entry needs to be recorded within a block. Both Ethereum and other blockchains are constrained by block size limits, leading to competition for space among transactions. This raises a key question: can we break through this limitation? Does block space always have to be limited? Is there a way to achieve true infinite scalability?

While Ethereum’s L2 solution has succeeded in performance scaling, it can only be considered a partial success. L2 has increased throughput by several orders of magnitude, which might be sufficient for handling peak transaction loads for individual projects. However, for most L2 solutions that rely on the storage and consensus security of the main chain, this level of scalability is far from adequate.

It’s important to note that L2’s TPS (transactions per second) cannot be infinitely increased, mainly due to the following limiting factors: data availability, settlement speed, verification costs, network bandwidth, and contract complexity. Although Rollups have optimized storage and computation needs on Layer 1 (L1) through compression and validation, they still require data to be submitted and verified on L1, thus being limited by L1’s bandwidth and block time. Additionally, computational costs, such as generating zero-knowledge proofs, node performance bottlenecks, and the execution requirements of complex contracts, also limit the scalability of L2.

[Data source:suiscan TPS]

The real challenge for Web3 lies in limited throughput and insufficient applications, which makes it difficult to attract new users and risks losing influence.

In simple terms, improving throughput is the key to a bright future for Web3. Achieving a network with infinite scalability and high throughput is its vision. For example, Sui employs deterministic parallel processing, arranging transactions in advance to avoid conflicts, thereby enhancing predictability and scalability. This design allows Sui to handle over 10,000 transactions per second (TPS). Additionally, Sui’s architecture enables increased throughput by adding more validator nodes, theoretically achieving infinite scalability. Its use of the Narwhal and Tusk protocols minimizes latency, allowing efficient parallel transaction processing and overcoming the scalability bottlenecks of traditional Layer 2 solutions.

The AO concept we discuss follows a similar path toward scalability, though with a different focus: building a scalable computation layer on top of a storage-centric foundation.

Web3 requires a new infrastructure built on first principles, with storage as its core. Like how Elon Musk rethought rocket launches and electric vehicles from fundamental principles, redesigning these complex technologies to disrupt industries, AO’s design also mirrors this approach. By decoupling computation from storage, AO abandons traditional blockchain frameworks, creating future-oriented Web3 storage foundations and driving Web3 toward the vision of decentralized cloud services.

Storage Consensus Paradigm (SCP)

Before introducing AO, we need to discuss a relatively novel design paradigm called SCP.

While SCP may be unfamiliar to many, most people have heard of Bitcoin inscriptions. Loosely speaking, the design concept behind inscriptions can be considered a form of storage-as-“atomic” unit thinking, though with some deviations. Interestingly, Vitalik has also expressed interest in becoming the “paper tape” for Web3, which aligns with the philosophy behind SCP.

In Ethereum’s model, computation is performed by full nodes, globally stored, and made available for querying. This approach turns Ethereum into a “world computer,” but one that operates like a single-threaded program, executing steps sequentially. This inherent inefficiency also creates fertile ground for MEV (Maximal Extractable Value). Transaction signatures enter Ethereum’s mempool, are publicly broadcast, and are ordered by block proposers, typically within a 12-second slot. However, this brief window is enough for “searchers” to intercept, simulate, and even reverse-engineer potential strategies. More on this topic can be explored in “The MEV Landscape One Year After the Ethereum Merge.”

SCP, in contrast, separates computation from storage. This concept may sound abstract, so let’s use a Web2 analogy.

In Web2 scenarios like messaging or online shopping, peak traffic can cause sudden surges that a single machine cannot handle. Engineers addressed this by distributing computational tasks across multiple machines, synchronizing and storing their results to manage traffic elastically. Similarly, SCP distributes computation across nodes. Unlike traditional systems that use databases like MySQL, SCP relies on blockchain mainnets for storage.

In simple terms, SCP leverages blockchain for data storage while off-chain servers handle computation and state generation. This architecture ensures data trustworthiness while enabling a high-performance, layered network separate from the underlying blockchain.

In SCP, blockchain serves solely as a storage medium, while off-chain clients or servers perform all computation and manage resulting states. This design significantly enhances scalability and performance. However, it raises a key question: Can data integrity and security be ensured when computation and storage are decoupled?

Essentially, blockchain acts as a storage solution, with computation offloaded to servers. Unlike traditional blockchain consensus mechanisms, SCP moves consensus off-chain.
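
For illustration only, here is a hypothetical sketch of that division of labor (the class names are invented, not taken from any real SCP implementation): the “chain” only appends and serves data, while an off-chain server derives state from it, and any other server can re-derive the same state.

```python
class ChainStorage:
    """Stands in for the blockchain: append-only, immutable, and computation-free."""
    def __init__(self):
        self._log = []

    def append(self, record: dict) -> int:
        self._log.append(dict(record))       # written once, never modified
        return len(self._log) - 1            # index plays the role of a tx position

    def read_all(self) -> list:
        return [dict(r) for r in self._log]  # anyone can read the full history


class OffChainServer:
    """Stands in for an SCP compute server: derives state from chain data and
    answers queries, but has no authority over the data itself."""
    def __init__(self, chain: ChainStorage):
        self.chain = chain

    def balances(self) -> dict:
        totals = {}
        for record in self.chain.read_all():  # any server recomputes the same state
            totals[record["to"]] = totals.get(record["to"], 0) + record["amount"]
        return totals


chain = ChainStorage()
chain.append({"to": "alice", "amount": 50})
chain.append({"to": "bob", "amount": 20})
print(OffChainServer(chain).balances())       # {'alice': 50, 'bob': 20}
```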

Advantages of this Approach

Without complex consensus processes, each server focuses exclusively on its computational tasks, enabling near-infinite transaction processing and reduced operational costs.

While similar to current rollup scalability solutions, SCP’s ambition goes further. It aims not only to solve blockchain scalability but also to offer a transformative path from Web2 to Web3.

What are the advantages of SCP? SCP decouples computation from storage. This design not only enhances system flexibility and composability but also lowers development barriers, effectively overcoming the performance limitations of traditional blockchains while ensuring data trustworthiness. These innovations make SCP an efficient and scalable infrastructure that empowers the future decentralized ecosystem.

  1. Composability: SCP places computation off-chain, preserving the fundamental nature of blockchain and maintaining its “atomic” attributes. With computation off-chain and blockchain solely responsible for storage, any smart contract can be executed. Application migration based on SCP becomes extremely simple, which is a crucial advantage.
  2. Low development barriers: Off-chain computation allows developers to use any programming language, whether C++, Python, or Rust, without requiring the use of Solidity for the EVM. The only cost developers may face is the API interaction cost with the blockchain.
  3. No performance restrictions: Off-chain computation aligns computational capabilities with traditional applications. The performance ceiling depends on the hardware capabilities of computation servers. Since elastic scaling of traditional computing resources is a mature technology, computation capacity is effectively limitless, aside from machine costs.
  4. Trusted data: Since the core “storage” function is handled by the blockchain, all data is immutable and traceable. Any node can retrieve and recompute data when the validity of state results is in doubt, ensuring that blockchain endows data with trustworthiness.

Bitcoin addressed the “Byzantine Generals Problem” by introducing PoW, a groundbreaking approach devised by Satoshi Nakamoto within the constraints of the time, which ultimately led to Bitcoin’s success.

Similarly, when tackling the computation of smart contracts, starting from first principles might result in a seemingly counterintuitive solution. However, by boldly moving computation off-chain and returning blockchain to its core essence, one finds that storage consensus is achieved while the requirements of data openness and verifiability are still met. This approach delivers performance on par with Web2, embodying the essence of SCP.

SCP and AO Integration: Breaking Free from Constraints

After all this discussion, we finally arrive at AO.

First, AO adopts a design pattern known as the Actor Model, which was initially implemented in the Erlang programming language.
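
As a rough sketch of the Actor Model itself (written in Python rather than Erlang, and unrelated to AO’s actual implementation), each actor owns private state and a mailbox, and actors affect one another only by sending messages, never by sharing memory or taking locks:

```python
from collections import deque

class Actor:
    """An actor owns its state and a mailbox; the only way to affect it is to message it."""
    def __init__(self, name):
        self.name = name
        self.balance = 0          # private state, never shared
        self.mailbox = deque()

    def send(self, message: dict):
        self.mailbox.append(message)

    def process_one(self, actors: dict):
        if not self.mailbox:
            return
        msg = self.mailbox.popleft()
        if msg["type"] == "credit":
            self.balance += msg["amount"]
        elif msg["type"] == "transfer":
            self.balance -= msg["amount"]
            # no lock, no shared memory: just another message
            actors[msg["to"]].send({"type": "credit", "amount": msg["amount"]})

actors = {"alice": Actor("alice"), "bob": Actor("bob")}
actors["alice"].send({"type": "credit", "amount": 100})
actors["alice"].send({"type": "transfer", "to": "bob", "amount": 40})
for _ in range(3):                # drive each mailbox a few rounds
    for a in actors.values():
        a.process_one(actors)
print(actors["alice"].balance, actors["bob"].balance)  # 60 40
```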

The architecture and technology behind AO are built upon the SCP paradigm, separating the computation layer from the storage layer. This allows the storage layer to remain permanently decentralized, while the computation layer retains the structure of traditional computing.

AO’s computational resources resemble those in traditional computing models but incorporate a permanent storage layer, enabling traceable and decentralized computing processes.

At this point, you might wonder, which main chain does AO use for its storage layer?

Clearly, using Bitcoin or Ethereum for the storage layer would be impractical. The reasons for this have already been discussed earlier, and readers will likely grasp this easily. In AO, data storage and final verifiability are ultimately handled by Arweave.

Why choose Arweave from among the many decentralized storage solutions?

The choice of Arweave as the storage layer is primarily based on its unique focus on permanent data storage within a decentralized network. Arweave positions itself as a “global hard drive where data is never lost,” in contrast with Bitcoin’s “global ledger” and Ethereum’s “global computer.”

For more technical details on Arweave, refer to: “Understanding Arweave: A Key Web3 Infrastructure.”

Next, we will focus on the principles and technologies of AO to understand how it achieves “infinite” computation.

[Data source:How ao Messenger works | Manual]

The core of AO is to build a computation layer that is infinitely scalable and environment-independent. The nodes of AO collaborate based on protocols and communication mechanisms, ensuring that each node provides optimal service to avoid competitive consumption.

First, let’s understand the basic architecture of AO. AO consists of processes and messages, as well as scheduling units (SU), computing units (CU), and messenger units (MU):

  • Process: The basic computing entity in the network, used for data computation and message processing. For example, each contract could be a process.
  • Message: Processes interact through messages, with each message adhering to the ANS-104 standard. The entire AO system must follow this standard.
  • Scheduling Unit (SU): Responsible for numbering the messages of processes, enabling the processes to be ordered, and uploading the messages to Arweave.
  • Computing Unit (CU): The node that computes a process’s state, responsible for executing computation tasks and returning the computed results and signatures to the SU, ensuring the correctness and verifiability of the results.
  • Messenger Unit (MU): The routing component in the node, responsible for delivering user messages to the SU and performing integrity checks on the signed data.

It is important to note that AO does not have a shared state; instead it has holographic states. Consensus in AO arises from game theory. Since each computation generates a state that is uploaded to Arweave, data verifiability is guaranteed. When users have doubts about certain data, they can ask one or more nodes to recompute it from the data stored on Arweave. If the results do not match, the dishonest nodes will be penalized.
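
The following is a deliberately simplified, hypothetical sketch of the message path described above (real AO units exchange ANS-104 data items over the network; the function names and the bare-hash “attestation” here are invented for illustration):

```python
import hashlib, itertools, json

ARWEAVE = []                 # stand-in for permanent storage: an append-only list
_sequence = itertools.count()

def mu_forward(message: dict, signature_ok: bool) -> dict:
    """Messenger Unit: checks the signed message, then routes it to the SU."""
    if not signature_ok:
        raise ValueError("invalid signature")
    return su_schedule(message)

def su_schedule(message: dict) -> dict:
    """Scheduling Unit: assigns a sequence number and uploads the message to
    permanent storage so that anyone can replay it later."""
    item = {"seq": next(_sequence), **message}
    ARWEAVE.append(item)
    return cu_compute(item)

def cu_compute(item: dict) -> dict:
    """Compute Unit: runs the process logic and returns the result together with
    an attestation (a bare hash here, standing in for a real signature)."""
    result = {"process": item["process"], "echo": item["data"].upper()}
    digest = hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()
    return {"result": result, "attestation": digest}

print(mu_forward({"process": "proc-1", "data": "hello ao"}, signature_ok=True))
print(ARWEAVE)               # the ordered message log that any node could replay
```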

Innovation of the AO Architecture: Storage and Holographic State

The innovation of the AO architecture lies in its data storage and verification mechanisms, which replace the redundant computations and limited block space typical of traditional blockchains by utilizing decentralized storage (Arweave) and holographic states.

  1. Holographic State: In the AO architecture, each computation generates a “holographic state” that is uploaded to the decentralized storage network (Arweave). This “holographic state” is not merely a simple record of transaction data; it contains the full state and relevant data of each computation. This means that every computation and its result are permanently recorded and can be verified at any time. The holographic state, as a “data snapshot,” provides the network with a distributed and decentralized data storage solution.
  2. Storage Verification: In this model, data verification no longer relies on each node repeating the computation for all transactions. Instead, it is done by storing and comparing the data uploaded to Arweave to confirm the validity of transactions. When a computation result from a node does not match the data stored on Arweave, users or other nodes can initiate a verification request. The network will then recalculate the data and compare it with the stored record in Arweave. If the results do not match, the node is penalized, ensuring the integrity of the network.
  3. Breaking the Block Space Limitation: Traditional blockchain block space is constrained by storage limitations, with each block only able to contain a limited number of transactions. However, in the AO architecture, data is no longer stored directly within blocks; instead, it is uploaded to a decentralized storage network (such as Arweave). This means that the storage and verification of data in the blockchain network are no longer constrained by the size of the block space but are instead offloaded and expanded through decentralized storage. As a result, the blockchain system’s capacity is no longer directly limited by block size.

The block space limitations of traditional blockchains are not insurmountable. The AO architecture, by relying on decentralized storage and holographic states, changes the way data storage and verification are handled, making it possible to achieve unlimited scalability.
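
A minimal sketch of the challenge flow this implies (invented names, greatly simplified): a verifier replays the stored log, compares it with a node’s claimed result, and records a penalty when they disagree.

```python
def replay_from_storage(stored_log):
    """Recompute the state from the permanently stored messages."""
    state = 0
    for msg in stored_log:
        state += msg["delta"]
    return state

def challenge(stored_log, claimed_state, node_id, penalties):
    """Penalize a node whose claimed result cannot be reproduced from storage."""
    recomputed = replay_from_storage(stored_log)
    if recomputed != claimed_state:
        penalties[node_id] = penalties.get(node_id, 0) + 1
        return False
    return True

log = [{"delta": 5}, {"delta": -2}, {"delta": 10}]
penalties = {}
print(challenge(log, claimed_state=13, node_id="node-A", penalties=penalties))  # True
print(challenge(log, claimed_state=99, node_id="node-B", penalties=penalties))  # False
print(penalties)  # {'node-B': 1}
```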

Is Consensus Dependent on Redundant Computation?

Not necessarily. Consensus mechanisms do not have to rely on redundant computation, and they can be implemented in various ways. Solutions that depend on storage rather than redundant computation are feasible in certain scenarios, especially when the integrity and consistency of data can be ensured through storage verification.

In the AO architecture, storage serves as an alternative to redundant computation. By uploading the computational results to a decentralized storage network (Arweave in this case), the system ensures data immutability. Additionally, through the holographic upload of states, any node can verify the computation results at any time, ensuring data consistency and correctness. This approach relies on the reliability of data storage rather than on each node repeating the computation.

Now, let’s look at the differences between AO and Ethereum through a table:

| Dimension | Ethereum | AO |
| --- | --- | --- |
| Computation | Single-threaded, executed redundantly by all full nodes | Large-scale parallel processes interacting via messages |
| Consensus | On-chain consensus backed by redundant computation | Storage-based consensus (SCP) with game-theoretic verification |
| Storage | Limited block space on the chain itself | Holographic states stored permanently on Arweave |
| Performance ceiling | Bounded by gas limits and block space | Scales with the computing resources added |
| Development | Solidity on the EVM | Any language or virtual machine, with modular CU/SU/MU |


It is easy to see that the core characteristics of AO can be summarized into two key points:

  1. Large-scale parallel computing: Supports countless processes running simultaneously, significantly enhancing computational power.
  2. Minimized trust dependency: No need to trust any single node, as all computational results can be infinitely reproduced and traced back.

How AO Breaks the Deadlock: Ethereum and the Dilemmas of Public Blockchains

For the two major dilemmas Ethereum faces, performance bottlenecks and a lack of applications, the author believes these are exactly where AO shines. The reasons are as follows:

  1. Based on the SCP paradigm: Since AO separates computation from storage, it can outperform Ethereum’s single-process, one-time computation model. AO can flexibly scale to more computational resources based on demand. Additionally, Arweave’s holographic state storage of message logs allows AO to ensure consensus by reproducing computation results, providing security that rivals Ethereum and Bitcoin.
  2. Parallel computation architecture based on message passing: AO’s process interactions do not require competing for “locks.” In Web2 development, it is well known that high-performance services avoid lock contention, because contention is extremely costly for an efficient service. AO follows this principle by avoiding lock contention through message passing between processes, which allows it to scale to any size.
  3. Modular architecture: AO’s modularity is reflected in the separation of CU, SU, and MU, allowing the use of any virtual machine or sequencer. This makes migrating and developing DApps from different chains extremely convenient and cost-effective. Combined with Arweave’s efficient storage capacity, DApps developed on AO can achieve more diverse functionality; for example, on-chain personal profiles and social graphs can easily be realized on AO.
  4. Supporting Web3’s adaptability to various policy requirements: Although the core idea of Web3 is decentralization and deregulation, different policies in different countries inevitably have a profound impact on Web3’s development and promotion. AO’s flexible modular architecture can be adapted to different regional policies, ensuring the stability and sustainable development of Web3 applications to some extent.

Summary

The separation of computation and storage is a brilliant concept, and it is a systematic design based on first principles.

As a narrative direction akin to “decentralized cloud services,” it not only provides solid, practical application scenarios but also offers broader imaginative space for combination with AI.

In fact, only by truly understanding the fundamental needs of Web3 can we break free from the dilemmas and constraints brought about by path dependency.

The integration of SCP and AO provides a completely new approach: it inherits all the features of SCP, no longer deploying smart contracts on-chain but instead storing immutable and traceable data on-chain, achieving data trustworthiness that anyone can verify.

Of course, there is no absolutely perfect path at the moment. AO is still in its nascent development stage. How to prevent Web3 from being overly financialized, create enough application scenarios, and bring more possibilities for the future is still a challenge on AO’s road to success. Whether AO can deliver a satisfactory answer remains to be seen by the market and time.

Although its ideas have not yet been widely recognized by the market, the combination of SCP and AO is a development paradigm full of potential, and AO is expected to play an important role in the Web3 field in the future, even driving the further development of Web3.

Disclaimer:

  1. This article is reprinted from [PermaDAO]. All copyrights belong to the original author [14]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. The Gate Learn team translates articles into other languages. Unless mentioned otherwise, copying, distributing, or plagiarizing the translated articles is prohibited.

Research Report: Examining How SCP And AO Influence The On-Chain World From First Principles

Advanced1/15/2025, 11:47:25 AM
This article will explore the concept and architectural design of AO in depth, analyzing how it addresses the challenges faced by existing public blockchains like Ethereum, ultimately bringing new development opportunities to Web3.

Keypoints:

  1. From Bitcoin to Ethereum, how can we find the optimal path to break through the limitations of throughput and scenarios?
  2. Starting from first principles, what is the key to breaking through the noise of the market memes and identifying the fundamental needs of blockchain?
  3. What kind of magic do SCP and AO (Actor Oriented) disruptive innovation principles (separating storage and computation) possess that can enable Web3 to truly unleash its potential?
  4. Will the results of running deterministic programs on immutable data be unique and reliable?
  5. In this narrative, why can SCP and AO (Actor Oriented) become infinite performance, trustworthy data, and the composable hexagonal warriors?

Introduction

[Data source:BTC price]

Since the birth of blockchain in 2009, over 15 years have passed. As a paradigm shift in digital technology, it records digital and network values, making cryptocurrency a new innovation in the capital paradigm.

As the firstborn, Bitcoin is poised to become a strategic reserve asset. At the 2024 Bitcoin conference, Trump made a commitment, stating that if he returns to the White House, he would ensure the government retains 100% of its Bitcoin holdings and designates it as a strategic reserve asset for the U.S.

After Trump’s election victory, Bitcoin surged 150%, with its peak reaching $107,287.

Trump’s win is clearly more favorable to the crypto industry, as he has repeatedly expressed strong support for cryptocurrencies.

However, in the short term, the high sensitivity of cryptocurrencies to election outcomes could lead to short-term market volatility peaks. Will this strong upward momentum be sustainable? The author believes that only by eliminating uncertainty and improving blockchain scalability can a new “red sea” be ushered in.

The Shadows Behind the “Web3” Boom After the U.S. Election

[Data source:DefiLlama]

Beneath the spotlight, the TVL (Total Value Locked) of Ethereum, the second-largest cryptocurrency by market cap, has remained sluggish since reaching its historic peak in 2021.

Even in the third quarter of 2024, Ethereum’s decentralized finance (DeFi) revenue dropped to $261 million, the lowest level since Q4 2020.

At first glance, there may be occasional spikes, but the overall trend shows a slowdown in DeFi activity on the Ethereum network.

Additionally, the market has seen the rise of entirely alternative blockchain ecosystems, such as the recently popular hyperliquid, a trading chain based on an order book model. Its data has seen rapid growth, with its market cap soaring into the top 50 in just two weeks. It is expected to generate annual revenue that ranks just below Ethereum, Solana, and Tron among all blockchains. This indirectly highlights the fatigue of traditional DeFi on Ethereum, based on the AMM architecture.

[Data source:Compound trading volume]

[Data source:Uniswap trading volume]

DeFi was once the core highlight of the Ethereum ecosystem, but due to reduced transaction fees and user activity, its revenue has significantly declined.

In response, the author tries to contemplate the reasons behind the current dilemmas faced by Ethereum, or the entire blockchain, and how to break through them.

Coincidentally, with SpaceX’s successful fifth test flight, SpaceX has emerged as a rising star in commercial space exploration. Looking back at SpaceX’s development path, its success can be attributed to a key methodology—first principles. (Tip: The concept of first principles was first introduced by the ancient Greek philosopher Aristotle over 2,300 years ago. He described first principles as “the most basic propositions or assumptions in every system exploration, which cannot be omitted, deleted, or violated.”)

Now, let’s also apply the method of first principles, peeling away the fog layer by layer, to explore the fundamental “atoms” of the blockchain industry. From a fundamental perspective, we will re-examine the current dilemmas and opportunities facing this industry.

Is Web3’s “Cloud Service” a Step Backward or the Future?

Is Web3’s “Cloud Service” a Step Backward or the Future?

When the concept of AO (Actor Oriented) was introduced, it attracted widespread attention. Against the backdrop of the increasing homogeneity of EVM-based public blockchains, AO, as a disruptive architectural design, has shown unique appeal.

This is not merely a theoretical concept, but a team is already putting it into practice.

As mentioned earlier, the greatest value of blockchain lies in recording digital value. From this perspective, it serves as a publicly transparent global public ledger. Based on this essence, it can be argued that the first principle of blockchain is “storage.”

AO is realized through a consensus paradigm (SCP) based on storage. As long as storage remains immutable, no matter where the computation occurs, the result can be guaranteed to have consensus. The AO global computer has been born, enabling the interconnection and collaboration of large-scale parallel computing.

Looking back at 2024, one of the most noteworthy events in the Web3 space was the explosion of the inscription ecosystem, which can be seen as an early practice of the separation of storage and computation. For example, the etching technology used by the Runes protocol allows small amounts of data to be embedded in Bitcoin transactions. While these data do not affect the main function of the transaction, they serve as additional information, forming a clear, verifiable, and non-consumable output.

Although some technical observers initially raised concerns about the security of Bitcoin inscriptions, fearing they might become potential entry points for network attacks,

over the past two years, it has completely stored data on-chain, and no blockchain forks have occurred to date. This stability once again proves that as long as stored data is not tampered with, no matter where the computation occurs, data consistency and security can be guaranteed.

Perhaps you will notice that this is nearly identical to traditional cloud services. For example:

In terms of computational resource management, in the AO architecture, an “Actor” is an independent computing entity, and each computing unit can run its own environment. Doesn’t this resemble the microservices and Docker in traditional cloud servers? Similarly, traditional cloud services rely on S3 or NFS for storage, while AO relies on Arweave.

However, simply reducing AO to a “reheated old idea” would be inaccurate. Although AO borrows some design concepts from traditional cloud services, its core lies in combining decentralized storage with distributed computing. Arweave, as a decentralized storage network, differs fundamentally from traditional centralized storage. This decentralized characteristic provides Web3 data with higher security and censorship resistance.

More importantly, the combination of AO and Arweave is not just a simple technical stack; it creates a new paradigm. This paradigm combines the performance advantages of distributed computing with the trustworthiness of decentralized storage, providing a solid foundation for the innovation and development of Web3 applications. Specifically, this combination is reflected in the following two aspects:

  1. Achieving a completely decentralized design in the storage system while ensuring performance through a distributed architecture.
  2. This combination not only solves some core challenges in the Web3 space (such as storage security and openness) but also provides the technical foundation for potential future limitless innovation and composition.

The following will explore AO’s concept and architectural design in depth and analyze how it addresses the dilemmas faced by existing public blockchains like Ethereum, ultimately bringing new development opportunities to Web3.

Viewing the Current Web3 Dilemma from the “Atomic” Perspective

Since Ethereum emerged with smart contracts, it has undoubtedly become the dominant force.

Some may ask, isn’t there Bitcoin? However, it’s important to note that Bitcoin was created as a replacement for traditional currencies, aiming to become a decentralized and digital cash system. Ethereum, on the other hand, is not just a cryptocurrency; it is a platform that enables the creation and execution of smart contracts and decentralized applications (DApps).

Overall, Bitcoin is a digital alternative to traditional money, with a high price but not necessarily a high value. Ethereum, however, is more like an open-source platform, offering more prospective value in terms of richness, and it better represents the current conceptual vision of an open Web3 world.

Since 2017, many projects have attempted to challenge Ethereum, but very few have lasted. Ethereum’s performance has long been criticized, leading to the rise of Layer 2 solutions. However, the seemingly prosperous growth of Layer 2 is, in reality, a desperate struggle in the face of adversity. As competition intensifies, a series of issues have gradually emerged, becoming serious constraints on the development of Web3:

There is an upper limit to performance, and the user experience remains poor

[Data source:DeFiLlama]

[Data source:L2 BEAT]

Recently, more and more people believe that Ethereum’s Layer 2 (L2) scaling plan has failed.

Initially, L2 was seen as an important continuation of Ethereum’s subculture in its scaling strategy. It was also supported by the expectation that L2 would reduce gas fees and improve throughput, leading to growth in both user numbers and transaction volumes. However, despite the reduction in gas fees, the anticipated growth in user numbers did not materialize.

In fact, is L2 really to blame for the failure of the scaling plan? Clearly, L2 is just a scapegoat. While it bears some responsibility, the main responsibility lies with Ethereum itself. Further, this outcome is an inevitable result of the underlying design issues of most Web3 chains today.

To explain this from an “atomic” perspective, L2 is responsible for computation, while Ethereum handles the fundamental “storage” of the blockchain. To ensure sufficient security, Ethereum must store data and achieve consensus.

However, Ethereum’s design prevents potential infinite loops during execution, which could cause the entire platform to halt. Therefore, any given smart contract execution is limited to a finite number of computation steps.

This leads to the paradox that L2 is designed to have infinite performance, but in reality, the limitations of the main chain impose a ceiling on it.

The bottleneck effect dictates that L2 has an upper limit.

For a more detailed understanding of the mechanism, readers can explore further by reading: “From Traditional DeFi to AgentFi: Exploring the Future of Decentralized Finance.”

The Limited Appeal of Current Use Cases

Ethereum’s most proud achievement is the flourishing ecosystem of applications, where various decentralized applications (DApps) are developed.

However, is the ecosystem truly as vibrant and diverse as it seems?

Clearly, the answer is no. Behind Ethereum’s flourishing application ecosystem lies a financialization-heavy environment, with a significant lack of mature non-financial applications.

Let’s take a look at the more prosperous application sectors on Ethereum:

First, concepts like NFTs, DeFi, GameFi, and SocialFi, while innovative in financial terms, are not yet suitable for the general public. The reason Web2 grew so rapidly is rooted in its functionality, which is closely tied to people’s daily lives.

Compared to financial products and services, ordinary users are more concerned with functionalities like messaging, socializing, video streaming, and e-commerce.

Second, from a competitive standpoint, credit lending in traditional finance is a very common and widespread product. However, in the DeFi space, this type of product is still quite rare. The primary reason is the current lack of an effective on-chain credit system.

Building a credit system requires enabling users to truly own their online personal profiles and social graphs, which can transcend across different applications.

Only when this decentralized information can be stored and transmitted at zero cost will it be possible to build a powerful Web3 personal information graph and a credit-based Web3 application system.

Here, we reaffirm a key issue: the failure of Layer 2 (L2) to attract a significant number of users is not its fault. L2 was never the core driving force. The real way to break through the Web3 dilemma is to innovate new application scenarios that attract users.

However, the current situation is like being stuck in holiday traffic—despite the many innovative ideas, it’s hard to push them forward due to the constraints of transaction performance.

The essence of blockchain is “storage.” When storage and computation are coupled, it becomes less “atomic.” In such an inauthentic design, there will inevitably be a performance ceiling.

Some viewpoints define blockchain’s essence as a transaction platform, a currency system, or as emphasizing transparency and anonymity. However, these perspectives overlook blockchain’s fundamental characteristics as a data structure and its broader application potential. Blockchain is not only for financial transactions; its architecture allows it to span multiple industries, such as supply chain management, healthcare records, and even copyright management.

Therefore, the essence of blockchain lies in its ability to function as a storage system. This is not just because it can securely store data, but because it guarantees data integrity and transparency through a distributed consensus mechanism. Once a block is added to the chain, it is almost impossible to alter or delete.

Atomic Infrastructure: AO Makes Infinite Performance Possible

[Data source:L2 TPS]

The basic architecture of blockchain faces a clear bottleneck: the limitation of block space. It’s like a ledger of fixed size, where every transaction and data entry needs to be recorded within a block. Both Ethereum and other blockchains are constrained by block size limits, leading to competition for space among transactions. This raises a key question: can we break through this limitation? Does block space always have to be limited? Is there a way to achieve true infinite scalability?

While Ethereum’s L2 solution has succeeded in performance scaling, it can only be considered a partial success. L2 has increased throughput by several orders of magnitude, which might be sufficient for handling peak transaction loads for individual projects. However, for most L2 solutions that rely on the storage and consensus security of the main chain, this level of scalability is far from adequate.

It’s important to note that L2’s TPS (transactions per second) cannot be infinitely increased, mainly due to the following limiting factors: data availability, settlement speed, verification costs, network bandwidth, and contract complexity. Although Rollups have optimized storage and computation needs on Layer 1 (L1) through compression and validation, they still require data to be submitted and verified on L1, thus being limited by L1’s bandwidth and block time. Additionally, computational costs, such as generating zero-knowledge proofs, node performance bottlenecks, and the execution requirements of complex contracts, also limit the scalability of L2.

[Data source:suiscan TPS]

The real challenge for Web3 lies in limited throughput and insufficient applications, which makes it difficult to attract new users and risks losing influence.

In simple terms, improving throughput is the key to a bright future for Web3. Achieving a network with infinite scalability and high throughput is its vision. For example, Sui employs deterministic parallel processing, arranging transactions in advance to avoid conflicts, thereby enhancing predictability and scalability. This design allows Sui to handle over 10,000 transactions per second (TPS). Additionally, Sui’s architecture enables increased throughput by adding more validator nodes, theoretically achieving infinite scalability. Its use of the Narwhal and Tusk protocols minimizes latency, allowing efficient parallel transaction processing and overcoming the scalability bottlenecks of traditional Layer 2 solutions.

The AO concept we discuss follows a similar path, focusing on different aspects but aiming to build a scalable storage system.

Web3 requires a new infrastructure built on first principles, with storage as its core. Like how Elon Musk rethought rocket launches and electric vehicles from fundamental principles, redesigning these complex technologies to disrupt industries, AO’s design also mirrors this approach. By decoupling computation from storage, AO abandons traditional blockchain frameworks, creating future-oriented Web3 storage foundations and driving Web3 toward the vision of decentralized cloud services.

Storage Consensus Paradigm (SCP)

Before introducing AO, we need to discuss a relatively novel design paradigm called SCP.

While SCP may be unfamiliar to many, most people have heard of Bitcoin inscriptions. Loosely speaking, the design concept behind inscriptions can be considered a form of storage-as-“atomic” unit thinking, though with some deviations. Interestingly, Vitalik has also expressed interest in becoming the “paper tape” for Web3, which aligns with the philosophy behind SCP.

In Ethereum’s model, computation is performed by full nodes, globally stored, and made available for querying. This approach turns Ethereum into a “world computer” but one that operates as a single-threaded program, executing steps sequentially. This inherent inefficiency also creates fertile ground for MEV (Maximal Extractable Value). Transaction signatures enter Ethereum’s mempool, are publicly broadcast, and sorted by miners, typically within 12 seconds. However, this brief window is enough for “searchers” to intercept, simulate, and even reverse-engineer potential strategies. More on this topic can be explored in “The MEV Landscape One Year After the Ethereum Merge.”

SCP, in contrast, separates computation from storage. This concept may sound abstract, so let’s use a Web2 analogy.

In Web2 scenarios like messaging or online shopping, peak traffic can cause sudden surges that a single machine cannot handle. Engineers addressed this by distributing computational tasks across multiple machines, synchronizing and storing their results to manage traffic elastically. Similarly, SCP distributes computation across nodes. Unlike traditional systems that use databases like MySQL, SCP relies on blockchain mainnets for storage.

In simple terms, SCP leverages blockchain for data storage while off-chain servers handle computation and state generation. This architecture ensures data trustworthiness while enabling a high-performance, layered network separate from the underlying blockchain.

In SCP, blockchain serves solely as a storage medium, while off-chain clients or servers perform all computation and manage resulting states. This design significantly enhances scalability and performance. However, it raises a key question: Can data integrity and security be ensured when computation and storage are decoupled?

Essentially, blockchain acts as a storage solution, with computation offloaded to servers. Unlike traditional blockchain consensus mechanisms, SCP moves consensus off-chain.

Advantages of this Approach

Without complex consensus processes, each server focuses exclusively on its computational tasks, enabling near-infinite transaction processing and reduced operational costs.

While similar to current rollup scalability solutions, SCP’s ambition goes further. It aims not only to solve blockchain scalability but also to offer a transformative path from Web2 to Web3.

What are the advantages of SCP? SCP decouples computation from storage. This design not only enhances system flexibility and composability but also lowers development barriers, effectively overcoming the performance limitations of traditional blockchains while ensuring data trustworthiness. These innovations make SCP an efficient and scalable infrastructure that empowers the future decentralized ecosystem.

  1. Composability: SCP places computation off-chain, preserving the fundamental nature of blockchain and maintaining its “atomic” attributes. With computation off-chain and blockchain solely responsible for storage, any smart contract can be executed. Application migration based on SCP becomes extremely simple, which is a crucial advantage.
  2. Low development barriers: Off-chain computation allows developers to use any programming language, whether C++, Python, or Rust, without requiring the use of Solidity for the EVM. The only cost developers may face is the API interaction cost with the blockchain.
  3. No performance restrictions: Off-chain computation aligns computational capabilities with traditional applications. The performance ceiling depends on the hardware capabilities of computation servers. Since elastic scaling of traditional computing resources is a mature technology, computation capacity is effectively limitless, aside from machine costs.
  4. Trusted data: Since the core “storage” function is handled by the blockchain, all data is immutable and traceable. Any node can retrieve and recompute data when the validity of state results is in doubt, ensuring that blockchain endows data with trustworthiness.

Bitcoin addressed the “Byzantine Generals Problem” by introducing PoW, a groundbreaking approach devised by Satoshi Nakamoto within the constraints of the time, which ultimately led to Bitcoin’s success.

Similarly, when tackling the computation of smart contracts, starting from first principles might result in a seemingly counterintuitive solution. However, by boldly decentralizing computational functions and returning blockchain to its core essence, one would find that storage consensus is achieved while simultaneously meeting the requirements of data openness and verifiability. This approach delivers performance on par with Web2, embodying the essence of SCP.

SCP and AO Integration: Breaking Free from Constraints

After all this discussion, we finally arrive at AO.

First, AO adopts a design pattern known as the Actor Model, which was initially implemented in the Erlang programming language.

The architecture and technology behind AO are built upon the SCP paradigm, separating the computation layer from the storage layer. This allows the storage layer to remain permanently decentralized, while the computation layer retains the structure of traditional computing.

AO’s computational resources resemble those in traditional computing models but incorporate a permanent storage layer, enabling traceable and decentralized computing processes.

At this point, you might wonder: which main chain does AO use for its storage layer?

Clearly, using Bitcoin or Ethereum for the storage layer would be impractical. The reasons for this have already been discussed earlier, and readers will likely grasp this easily. In AO, data storage and final verifiability are ultimately handled by Arweave.

Why choose Arweave from among the many decentralized storage solutions?

The choice of Arweave as the storage layer is primarily based on its unique focus on permanent data storage within a decentralized network. Arweave positions itself as a “global hard drive where data is never lost,” in contrast with Bitcoin’s “global ledger” and Ethereum’s “global computer.”

For more technical details on Arweave, refer to: “Understanding Arweave: A Key Web3 Infrastructure”.

Next, we will focus on the principles and technologies of AO to understand how AO achieves infinite computation.

[Data source:How ao Messenger works | Manual]

The core of AO is to build a computation layer that is infinitely scalable and environment-independent. AO’s nodes collaborate through shared protocols and communication mechanisms, so that each node provides optimal service without wasting resources competing with other nodes.

First, let’s understand the basic architecture of AO. AO consists of processes and messages, together with scheduling units (SU), computing units (CU), and messenger units (MU); a sketch of how these units interact follows the list:

  • Process: The basic unit of computation in the network, used for data computation and message processing. For example, each contract could be a process.
  • Message: Processes interact through messages, with each message adhering to the ANS-104 standard, which the entire AO system follows.
  • Scheduling Unit (SU): Responsible for assigning sequence numbers to a process’s messages so they can be totally ordered, and for uploading the messages to Arweave.
  • Computing unit (CU): The node responsible for computing process state in AO, executing computation tasks and returning the computed results and signatures to the SU, ensuring the correctness and verifiability of the results.
  • Messenger Unit (MU): The routing component in a node, responsible for delivering user messages to the SU and performing integrity checks on the signed data.
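
The following Python sketch models the message flow just described, with hypothetical names and toy checks in place of real signatures, the ANS-104 format, and actual Arweave uploads: the MU verifies and routes a signed message, the SU assigns it a sequence number and persists it, and the CU executes the process handler and returns an attested result.

```python
import hashlib
import json

# Illustrative-only model of the MU -> SU -> CU flow; the real AO protocol
# (ANS-104 messages, real signatures, Arweave uploads) is far richer.

ARWEAVE = []          # stand-in for permanent storage
PROCESS_STATE = {}    # per-process state held by the computing unit


def messenger_unit(message: dict) -> dict:
    """MU: checks the integrity of the signed message and forwards it to the SU."""
    digest = hashlib.sha256(json.dumps(message["body"], sort_keys=True).encode()).hexdigest()
    if message["signature"] != digest:     # toy integrity check, not a real signature
        raise ValueError("integrity check failed")
    return scheduler_unit(message)


def scheduler_unit(message: dict) -> dict:
    """SU: assigns a sequence number and persists the message to 'Arweave'."""
    message["sequence"] = len(ARWEAVE)
    ARWEAVE.append(message)
    return computing_unit(message)


def computing_unit(message: dict) -> dict:
    """CU: executes the process handler and returns an attested result."""
    pid = message["body"]["process"]
    state = PROCESS_STATE.setdefault(pid, {"counter": 0})
    state["counter"] += message["body"]["increment"]       # toy handler logic
    result = {"process": pid, "sequence": message["sequence"], "state": dict(state)}
    result["attestation"] = hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()
    return result


if __name__ == "__main__":
    body = {"process": "counter-1", "increment": 5}
    signature = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    print(messenger_unit({"body": body, "signature": signature}))
```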

It is important to note that AO does not maintain a single shared state; instead it keeps holographic states. Consensus in AO arises from game theory: since every computation produces a state that is uploaded to Arweave, the data remains verifiable. When users doubt a particular result, they can ask one or more nodes to recompute it from the data on Arweave, and if the results do not match, the dishonest node is penalized.

Innovation of the AO Architecture: Storage and Holographic State

The innovation of the AO architecture lies in its data storage and verification mechanisms, which replace the redundant computations and limited block space typical of traditional blockchains by utilizing decentralized storage (Arweave) and holographic states.

  1. Holographic State: In the AO architecture, each computation generates a “holographic state” that is uploaded to the decentralized storage network (Arweave). This “holographic state” is not merely a simple record of transaction data; it contains the full state and relevant data of each computation. This means that every computation and its result are permanently recorded and can be verified at any time. The holographic state, as a “data snapshot,” provides the network with a distributed and decentralized data storage solution.
  2. Storage Verification: In this model, data verification no longer relies on each node repeating the computation for all transactions. Instead, it is done by storing and comparing the data uploaded to Arweave to confirm the validity of transactions. When a computation result from a node does not match the data stored on Arweave, users or other nodes can initiate a verification request. The network will then recalculate the data and compare it with the stored record in Arweave. If the results do not match, the node is penalized, ensuring the integrity of the network.
  3. Breaking the Block Space Limitation: Traditional blockchain block space is constrained by storage limitations, with each block only able to contain a limited number of transactions. However, in the AO architecture, data is no longer stored directly within blocks; instead, it is uploaded to a decentralized storage network (such as Arweave). This means that the storage and verification of data in the blockchain network are no longer constrained by the size of the block space but are instead offloaded and expanded through decentralized storage. As a result, the blockchain system’s capacity is no longer directly limited by block size.

The block space limitations of traditional blockchains are not insurmountable. The AO architecture, by relying on decentralized storage and holographic states, changes the way data storage and verification are handled, making it possible to achieve unlimited scalability.

Is Consensus Dependent on Redundant Computation?

Not necessarily. Consensus mechanisms do not have to rely on redundant computation, and they can be implemented in various ways. Solutions that depend on storage rather than redundant computation are feasible in certain scenarios, especially when the integrity and consistency of data can be ensured through storage verification.

In the AO architecture, storage serves as an alternative to redundant computation. By uploading the computational results to a decentralized storage network (Arweave in this case), the system ensures data immutability. Additionally, through the holographic upload of states, any node can verify the computation results at any time, ensuring data consistency and correctness. This approach relies on the reliability of data storage rather than on each node repeating the computation.
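
A minimal sketch of this verification flow, under the assumption of a deterministic toy handler and a simple stake-slashing penalty (both hypothetical, not AO’s actual mechanism), might look like this: results are only recomputed when challenged, and a node whose claimed result diverges from the replayed one loses stake.

```python
# Hypothetical sketch of consensus by storage verification: recompute on
# challenge, compare against the claimed result, and penalize mismatches.

def execute(state: int, message: int) -> int:
    """Deterministic toy handler: the 'contract' simply accumulates values."""
    return state + message


def recompute(stored_messages: list) -> int:
    """Any challenger can replay the messages persisted on storage from genesis."""
    state = 0
    for m in stored_messages:
        state = execute(state, m)
    return state


def challenge(stored_messages: list, claimed_result: int, stakes: dict, node: str) -> bool:
    """Compare a node's claimed result with the replayed one; slash if dishonest."""
    honest_result = recompute(stored_messages)
    if claimed_result != honest_result:
        stakes[node] -= 10        # toy penalty
        return False
    return True


if __name__ == "__main__":
    stored = [3, 4, 5]                             # messages already persisted to storage
    stakes = {"cu-1": 100, "cu-2": 100}
    print(challenge(stored, 12, stakes, "cu-1"))   # True: 3 + 4 + 5 == 12
    print(challenge(stored, 99, stakes, "cu-2"))   # False: cu-2 is penalized
    print(stakes)                                  # {'cu-1': 100, 'cu-2': 90}
```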

Now, let’s look at the differences between AO and ETH through a table (summarizing the points discussed above):

| Dimension | Ethereum | AO |
| --- | --- | --- |
| Computation | On-chain; every node redundantly executes every transaction | Off-chain; countless processes run in parallel |
| Storage | Limited block space on-chain | Holographic states stored permanently on Arweave |
| Consensus | Redundant computation by all nodes | Storage verification: results can be recomputed and compared, with dishonest nodes penalized |
| Development | Solidity on the EVM | Any programming language or virtual machine |
| Performance ceiling | Constrained by single-chain throughput | Limited mainly by available computing hardware |

It is easy to see that the core characteristics of AO can be summarized into two key points:

  1. Large-scale parallel computing: Supports countless processes running simultaneously, significantly enhancing computational power.
  2. Minimized trust dependency: No need to trust any single node, as all computational results can be infinitely reproduced and traced back.

How AO Breaks the Deadlock: Ethereum and the Dilemmas of Public Blockchains

The author believes the two major dilemmas Ethereum faces, performance bottlenecks and a lack of applications, are exactly where AO shines. The reasons are as follows:

  1. Based on the SCP paradigm: Because AO separates computation from storage, it can move beyond Ethereum’s single-process computation model and flexibly scale to more computational resources on demand. In addition, Arweave’s holographic storage of the message log allows AO to ensure consensus by reproducing computation results, providing security that rivals Ethereum and Bitcoin.
  2. Parallel computation architecture based on message passing: AO’s processes never compete for “locks.” In Web2 development it is well known that high-performance services avoid lock contention, because contention is costly; AO follows the same principle by having processes interact solely through message passing (see the sketch after this list), which allows its scale to grow without practical limit.
  3. Modular architecture: AO’s modularity is reflected in the separation of CU, SU, and MU, allowing the use of any virtual machine or sequencer. This makes migration and development of DApps from different chains extremely convenient and cost-effective. Combined with Arweave’s efficient storage capacity, DApps developed on AO can achieve more diverse functionalities. For example, character graphs can be easily realized on AO.
  4. Supporting Web3’s adaptability to various policy requirements: Although the core idea of Web3 is decentralization and deregulation, different policies in different countries inevitably have a profound impact on Web3’s development and promotion. AO’s flexible modular architecture can be adapted to different regional policies, ensuring the stability and sustainable development of Web3 applications to some extent.
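
The lock-free, message-passing style described in point 2 can be illustrated with a toy Python model (hypothetical, not AO’s actual runtime): each process owns its own state and mailbox, so adding more processes scales the system without any process ever waiting on another’s lock.

```python
from multiprocessing import Process, Queue

# Toy actor-style illustration: processes share no memory and take no locks;
# they interact only by sending messages to each other's mailboxes.

def counter_process(name: str, mailbox: Queue, results: Queue) -> None:
    """An isolated 'process': it reads only from its own mailbox."""
    state = 0
    while True:
        message = mailbox.get()
        if message == "stop":
            results.put((name, state))
            return
        state += message            # only this process ever touches `state`


if __name__ == "__main__":
    results = Queue()
    mailboxes = {name: Queue() for name in ("proc-a", "proc-b")}
    workers = [Process(target=counter_process, args=(n, q, results))
               for n, q in mailboxes.items()]
    for w in workers:
        w.start()

    # A "messenger" routes messages to each process's private mailbox.
    mailboxes["proc-a"].put(10)
    mailboxes["proc-b"].put(7)
    for q in mailboxes.values():
        q.put("stop")

    for _ in workers:
        print(results.get())        # e.g. ('proc-a', 10), ('proc-b', 7)
    for w in workers:
        w.join()
```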

Summary

The separation of computation and storage is a brilliant concept, and it is a systematic design based on first principles.

As a narrative direction akin to “decentralized cloud services,” it not only provides concrete, practical use cases but also opens up broader room for imagination in combination with AI.

In fact, only by truly understanding the fundamental needs of Web3 can we break free from the dilemmas and constraints brought about by path dependency.

The integration of SCP and AO provides a completely new approach: it inherits all the features of SCP, no longer deploying smart contracts on-chain but instead storing immutable and traceable data on-chain, achieving data trustworthiness that anyone can verify.

Of course, there is no absolutely perfect path at the moment. AO is still in its nascent development stage. How to prevent Web3 from being overly financialized, create enough application scenarios, and bring more possibilities for the future is still a challenge on AO’s road to success. Whether AO can deliver a satisfactory answer remains to be seen by the market and time.

Although its ideas have not yet been widely recognized by the market, the combination of SCP and AO is a development paradigm full of potential. AO is expected to play an important role in the Web3 field in the future, and may even drive Web3’s further development.

Disclaimer:

  1. This article is reprinted from [PermaDAO]. All copyrights belong to the original author [14]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. The Gate Learn team does translations of the article into other languages. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.