In October, the term “TEE (Trusted Execution Environment)” began frequently appearing in X feeds. This surprised me since TEE has traditionally been a niche topic, primarily discussed in systems security academia. As someone who conducted research in a systems security lab, I was pleased to see this development. However, I was curious about why TEE was suddenly gaining attention in the Web3 space. I also noticed a lack of accessible content explaining TEE concepts to the general public, which motivated me to write this article.
TEE is a complex concept that can be challenging to fully understand without a computer science background. Therefore, this article starts with basic TEE concepts, explains why Web3 is interested in utilizing TEE, and then discusses current Web3 projects implementing TEE, along with the technology’s limitations.
In summary, this article will cover the following topics:
I believe most readers may not have the necessary background knowledge to fully understand what TEE exactly is. Since TEE is quite a complex concept when explored deeply, I will attempt to explain it as simply as possible.
Most Web2 servers manage data access through authorization settings. However, since this approach is purely software-based, it essentially becomes ineffective if higher-level privileges are obtained. For instance, if an attacker gains kernel-level privileges in the server’s operating system, they can potentially access all permission-controlled data on the server, including encryption keys. In such extreme scenarios, there’s virtually no way to prevent data theft through software-based methods alone. TEE, or Trusted Execution Environment, attempts to fundamentally address this issue through hardware-based security. TEE is often grouped under “confidential computing,” but confidential computing is a broader umbrella that also covers other privacy-preserving computation mechanisms such as ZK, MPC, and FHE.
source: Jujutsu Kaisen
To use a simple analogy, TEE acts like an encrypted zone within memory. All data inside the TEE is encrypted, making raw data access from outside impossible. Even the OS kernel cannot read or modify it in its original form. Thus, even if an attacker gains administrator privileges on the server, they cannot decrypt the data within the TEE. This encrypted area is often called an “enclave.”
Creating an enclave and processing data within it requires specific instruction sets, similar to opcodes. These instructions use encryption keys stored in hardware-protected areas to perform computations on data within the enclave. As TEE is a hardware-level security module, its implementation varies by CPU chip vendor. For example, Intel supports SGX, AMD supports SEV, and ARM supports TrustZone. From a broader perspective, these implementations share the concept of “protecting memory through hardware-level encryption.”
Let’s first examine how the most common TEEs — Intel SGX, AMD SEV, and ARM TrustZone — operate, and then introduce more recent TEE implementations.
Intel SGX
SGX creates and accesses enclaves at the process level. The following image provides a clear representation of how an SGX-enabled program operates.
During development, developers must distinguish between untrusted and trusted code. Variables or functions that require protection by the enclave are designated as trusted code, while other operations are categorized as untrusted code. When untrusted code needs to pass data into trusted code, or when trusted code must interact with untrusted code, special interface calls known as ECALLs and OCALLs are employed.
If users need to directly interact with data within the enclave — for example, providing input or receiving output — they can communicate through secure channels established using protocols like SSL.
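To make the trusted/untrusted split more concrete, here is a minimal conceptual sketch in Python. It only models the boundary; in a real SGX project the interface is declared in an EDL file and implemented in C/C++ with the SGX SDK, and the Enclave class and the ecall_/ocall_ names below are purely illustrative.

```python
import hashlib
import secrets

class Enclave:
    """Conceptual stand-in for an SGX enclave: state held in this object
    represents memory the untrusted host cannot read in plaintext."""

    def __init__(self):
        # In real SGX, sealing keys are derived by the CPU from hardware secrets.
        self._sealing_key = secrets.token_bytes(32)   # never leaves the "enclave"
        self._secret_total = 0

    # ECALL: untrusted code calls *into* trusted code with input data.
    def ecall_add_sample(self, value: int, ocall_log) -> None:
        self._secret_total += value
        # OCALL: trusted code calls *out* to untrusted code, e.g. for I/O.
        # Only non-sensitive data should cross this boundary.
        ocall_log("sample received")

    # ECALL that returns only a commitment, never the raw secret.
    def ecall_get_commitment(self) -> str:
        return hashlib.sha256(
            self._sealing_key + str(self._secret_total).encode()
        ).hexdigest()

# --- untrusted host code ---
def ocall_log(message: str) -> None:
    print(f"[untrusted host] {message}")

enclave = Enclave()
enclave.ecall_add_sample(42, ocall_log)
print("commitment:", enclave.ecall_get_commitment())
```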
AMD SEV
Unlike SGX, which creates enclaves at the process level, SEV creates them at the virtual machine level. Memory allocated to virtual machines is encrypted and managed with independent keys, protecting data from the server’s operating system or other VMs. Although virtual machines are generally considered secure due to their sandboxed isolation, vulnerabilities that compromise this isolation cannot be completely ruled out. SEV is designed to provide security in such scenarios.
During VM creation, SEV generates encryption keys through a security processor that is physically separate from the CPU; these keys are then used to encrypt the VM’s memory. The following diagram illustrates the difference between SGX and SEV.
source: 10.1109/SRDS.2018.00042
SGX requires developers to explicitly divide code into untrusted and trusted segments. In contrast, SEV encrypts the entire virtual machine memory, demanding relatively less effort from developers in terms of implementation.
ARM TrustZone
Unlike Intel and AMD, which primarily produce CPUs for desktops and servers, ARM designs chipsets for lightweight systems such as mobile and embedded devices. As a result, its TEE implementation differs slightly from the SGX and SEV designs used on desktop- and server-class architectures.
TrustZone divides the system into a Secure World and a Normal World at the hardware level. Developers using TrustZone must implement security-critical functions in the Secure World, while general functions run in the Normal World. Transitions between these two worlds occur through special system calls known as Secure Monitor Calls, similar to SGX.
A key distinction is that TrustZone’s enclave extends beyond just the CPU or memory; it encompasses the entire system, including the system bus, peripherals, and interrupt controllers. Apple also utilizes a TEE called Secure Enclave in their products, which is very similar to TrustZone at a high level.
As we’ll discuss later, many first-generation TEEs, including Intel SGX, have suffered side-channel vulnerabilities and development challenges stemming from structural issues. To address these problems, vendors have released improved versions. With the rising demand for secure cloud computing, platforms like AWS, Azure, and GCP have started offering their own TEE services. Recently, the TEE concept has been extended to GPUs as well. Some Web3 use cases now build on these newer TEEs, so I’ll briefly explain them.
Cloud TEEs: AWS Nitro, Azure Confidential Computing, Google Cloud Confidential Computing
With the growing demand for cloud computing services, providers have started developing their own TEE solutions. AWS’s Nitro is an enclave computing environment that works alongside EC2 instances. It achieves physical separation of the computing environment by utilizing a dedicated Nitro security chip for attestation and key management. The Nitro hypervisor safeguards enclave memory areas through functions provided by the chip, effectively shielding against attacks from both users and cloud providers.
Azure supports various TEE specifications, including Intel SGX, AMD SEV-SNP, and its own virtualization-based isolation. This flexibility in hardware environment selection offers users more options but may increase the attack surface when the user utilizes multiple TEEs.
Google Cloud provides confidential computing services that utilize Trusted Execution Environments (TEE), focusing on AI/ML workloads. While different from AWS Nitro, Google Cloud, like Azure, offers virtualization-based isolation using existing TEE infrastructure. Key differentiators include support for CPU accelerators such as Intel AMX to handle intensive AI/ML tasks, and GPU-based confidential computing through NVIDIA, which will be detailed later.
ARM CCA
ARM CCA, released in late 2021, is tailored for cloud environments, unlike TrustZone, which was designed for standalone embedded or mobile devices. TrustZone statically manages pre-designated secure memory regions, whereas CCA enables the dynamic creation of Realms (secure enclaves). This allows for multiple isolated environments within a single physical machine.
CCA can be likened to an ARM version of Intel SGX, though with notable differences. While SGX has memory limitations, CCA provides flexible memory allocation. Moreover, CCA employs a fundamentally different security approach by encrypting the entire physical memory, not just the designated enclave regions as SGX does.
Intel TDX
Intel introduced TDX, a technology that encrypts memory at the VM level, similar to AMD’s SEV. This release addresses feedback on SGX(v1)’s limitations, including the 256MB enclave size limit and the increased development complexity caused by process-level enclave creation. The key difference from SEV is that TDX partially trusts the hypervisor for VM resource management. There are also differences in how each VM’s memory is encrypted.
AMD SEV-SNP
SEV-SNP enhances the security of the existing SEV model. The original SEV relied on a trust model that left vulnerabilities, allowing hypervisors to modify memory mapping. SEV-SNP addresses this by adding a hardware manager to track memory states, preventing such modifications.
Additionally, it enables users to perform remote attestation directly, thereby minimizing trust anchors. SEV-SNP also introduced a Reverse Map Table to monitor memory page states and ownership, providing defense against malicious hypervisor attack models.
GPU TEE: NVIDIA Confidential Computing
TEE development has traditionally focused on CPUs, since implementations depend on hardware vendors. However, the need to handle heavy workloads such as secure AI training and training-data protection has underscored the necessity of GPU TEEs. In response, NVIDIA introduced Confidential Computing features with the H100 GPU in 2023.
NVIDIA Confidential Computing offers independently encrypted and managed GPU instances, ensuring end-to-end security when combined with CPU TEE. Currently, it achieves this by integrating with AMD SEV-SNP or Intel TDX to build confidential computing pipelines.
When examining Web3 projects, you’ll often see claims of community governance through code uploads on GitHub. But how can one verify that the program deployed on the server actually matches the GitHub code?
Blockchain offers an environment where smart contracts are always public and unmodifiable due to continuous consensus. In contrast, typical Web2 servers allow administrators to update programs at any time. To verify authenticity, users need to compare hash values of binaries built from open-source programs on platforms like GitHub or check integrity through developer signatures.
The same principle applies to programs within TEE enclaves. For users to fully trust server-deployed programs, they must verify (attest) that the code and data within the enclave remain unchanged. In SGX’s case, the platform communicates with the IAS (Intel Attestation Service) using a key stored in a special enclave. IAS verifies the integrity of the enclave and its internal data, then returns the results to users. In short, TEE requires communication with a hardware vendor-provided attestation server to ensure enclave integrity.
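As a rough illustration, the checks an attestation verifier performs can be sketched as below. The Quote fields and the verify_vendor_signature helper are simplified placeholders; real SGX attestation (IAS or DCAP) involves more fields, such as security version numbers and a vendor certificate chain, and the enclave measurement is not a plain SHA-256 of the binary.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Quote:
    mrenclave: str          # measurement (hash) of the enclave's code and data
    report_data: bytes      # user data bound into the quote, e.g. a TLS key hash
    vendor_signature: bytes # signature chained back to the hardware vendor

def expected_measurement(enclave_binary: bytes) -> str:
    # Reproducible builds let anyone recompute the expected measurement
    # from the open-source code published, e.g., on GitHub.
    return hashlib.sha256(enclave_binary).hexdigest()

def verify_vendor_signature(quote: Quote) -> bool:
    # Placeholder: in practice this checks a certificate chain rooted at the
    # chip vendor (e.g. via IAS or DCAP collateral), which is exactly the
    # centralized dependency discussed later in this article.
    return len(quote.vendor_signature) > 0

def attest(quote: Quote, enclave_binary: bytes, expected_report_data: bytes) -> bool:
    return (
        quote.mrenclave == expected_measurement(enclave_binary)
        and quote.report_data == expected_report_data
        and verify_vendor_signature(quote)
    )
```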
Why TEE on Web3?
TEE might seem unfamiliar to the general public, as its knowledge is typically confined to specialized domains. However, TEE’s emergence aligns well with Web3’s principles. The fundamental premise of using TEE is “trust no one.” When properly implemented, TEE can protect user data from the program deployer, physical server owner, and even the OS kernel.
While current blockchain projects have achieved significant structural decentralization, many still rely on off-chain server environments such as sequencers, off-chain relayers, and keeper bots. Protocols that need to process sensitive user information, like KYC or biometric data, or those aiming to support private transactions, face the challenge of requiring trust in service providers. These issues can be substantially mitigated through data processing within enclaves.
As a result, TEE has gained popularity in the latter half of this year, aligning with AI-related themes such as data privacy and trustworthy AI agents. However, attempts to integrate TEE into the Web3 ecosystem existed long before this. In this article, we’ll introduce projects across various fields that have applied TEE in the Web3 ecosystem, beyond just the AI sector.
Marlin
Marlin is a verifiable computing protocol designed to offer a secure computation environment using TEE or ZK technology. One of its primary goals is to develop a decentralized web. Marlin operates two subnets, Oyster and Kalypso; Oyster functions as the TEE-based coprocessing protocol.
1) Oyster CVM
Oyster CVM (Oyster for convenience) acts as a P2P TEE marketplace. Users buy AWS Nitro Enclave computing environments through Oyster’s off-chain marketplace and deploy their program images there. Below is an abstract structure of Oyster:
source: https://docs.marlin.org/oyster/protocol/cvm/workflow/
Oyster bears a very similar structure to Akash. In Oyster, the blockchain’s role is to verify whether each TEE computing environment is operating properly, and this is done through observers called Providers. Providers continuously check the availability of enclaves in real time and report their findings to the Oyster network. They stake $POND tokens, which are at risk of being slashed if they engage in malicious activities. Additionally, a decentralized network of entities referred to as ‘auditors’ exists to oversee Provider slashing. Each epoch, auditors are assigned jobs and send audit requests to enclaves chosen at random using a seed generated inside an enclave.
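The audit-assignment idea, where each auditor’s targets are derived from a seed produced inside an enclave so that neither providers nor auditors can bias the selection, can be sketched as follows. The sampling and epoch details are illustrative and not Marlin’s actual implementation.

```python
import hashlib

def assign_audits(seed: bytes, auditors: list[str], enclaves: list[str],
                  audits_per_auditor: int = 2) -> dict[str, list[str]]:
    """Deterministically map auditors to enclaves using an enclave-generated seed,
    so the assignment is unpredictable yet reproducible by anyone holding the seed."""
    assignments: dict[str, list[str]] = {}
    for auditor in auditors:
        picks = []
        for i in range(audits_per_auditor):
            digest = hashlib.sha256(seed + auditor.encode() + bytes([i])).digest()
            picks.append(enclaves[int.from_bytes(digest, "big") % len(enclaves)])
        assignments[auditor] = picks
    return assignments

print(assign_audits(b"epoch-42-seed", ["auditor-a", "auditor-b"],
                    ["enclave-1", "enclave-2", "enclave-3"]))
```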
On top of this, Oyster implements a contract called NitroProver that verifies remote attestation results on-chain, allowing users to check the integrity of the TEE instance they purchased directly from a smart contract.
User-deployed instances can be accessed through both smart contracts and Web2 APIs. The computation results can be fed into contracts as oracle inputs. As shown in the dashboard, this capability is suitable not only for smart contracts but also for decentralizing Web2 services.
Similar to Akash, Oyster is susceptible to potential instance takeovers by attackers if there are vulnerabilities in the off-chain marketplace. In such scenarios, although enclave data might remain secure, raw data stored outside the enclave and service operation privileges could be compromised. For sensitive data that resides in untrusted memory but must not be exposed, developers have to encrypt it and store it separately. Marlin currently provides external storage with an MPC-based persistent key to handle these cases.
2) Oyster Serverless
While Oyster CVM operates as a P2P TEE marketplace, Oyster Serverless resembles AWS Lambda (or Function-as-a-Service) with TEE. Utilizing Oyster Serverless, users can execute functions without renting instances, paying on-demand.
The execution flow of Oyster Serverless is as follows:
With Oyster Serverless, users can send Web2 API requests or smart contract calls through a smart contract, while the integrity of the execution is guaranteed through TEE. Users can also subscribe to Serverless for periodic execution, which is particularly useful for oracle fetchers.
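As a rough sketch of the oracle-fetcher use case, the handler below fetches a price and returns it together with a digest that a contract could check. The handler shape, the endpoint URL, and the signing step are hypothetical and do not reflect Marlin’s actual Serverless SDK.

```python
import json
import hashlib
import urllib.request

PRICE_URL = "https://api.example.com/price?pair=ETH-USD"  # hypothetical endpoint

def handler(_request: dict) -> dict:
    """Runs inside the enclave on demand; the TEE attests that exactly this
    code produced the response, so a contract can treat it as an oracle input."""
    with urllib.request.urlopen(PRICE_URL, timeout=5) as resp:
        payload = json.loads(resp.read())
    result = {"pair": "ETH-USD", "price": payload["price"]}
    # In a real deployment, the enclave would sign this digest with an attested key.
    result["digest"] = hashlib.sha256(
        json.dumps({"pair": result["pair"], "price": result["price"]},
                   sort_keys=True).encode()
    ).hexdigest()
    return result
```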
Phala Network
Phala, previously discussed in our AI X Crypto article, has significantly shifted its focus to AI coprocessors.
The basic design of the Phala Network includes Workers and Gatekeepers. Workers function as regular nodes that execute computations for clients. Gatekeepers, on the other hand, manage the keys that enable Workers to decrypt and compute over encrypted state values. Workers handle contract state encrypted inside Intel SGX enclaves, and they need keys from Gatekeepers to read or write those values.
source: https://docs.phala.network/tech-specs/blockchain
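A minimal sketch of that Worker/Gatekeeper relationship, assuming the Gatekeeper releases a contract’s state key only to a Worker whose enclave passes attestation. The class names and the attestation check are placeholders rather than Phala’s actual protocol.

```python
import secrets

class Gatekeeper:
    def __init__(self):
        # One state-encryption key per confidential contract.
        self._contract_keys: dict[str, bytes] = {}

    def key_for(self, contract_id: str, worker_attestation_ok: bool) -> bytes:
        # Keys are released only to attested Worker enclaves.
        if not worker_attestation_ok:
            raise PermissionError("worker enclave failed attestation")
        if contract_id not in self._contract_keys:
            self._contract_keys[contract_id] = secrets.token_bytes(32)
        return self._contract_keys[contract_id]

class Worker:
    def __init__(self, gatekeeper: Gatekeeper):
        self._gatekeeper = gatekeeper

    def execute(self, contract_id: str, encrypted_state: bytes) -> bytes:
        key = self._gatekeeper.key_for(contract_id, worker_attestation_ok=True)
        # Decrypt the state with `key` inside the enclave, run the contract,
        # then re-encrypt the new state before it leaves the enclave.
        _ = key
        return encrypted_state  # placeholder for the re-encrypted new state
```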
Phala has expanded its offerings by supporting SDKs for Confidential VMs in Intel TDX environments. Recently, in collaboration with Flashbots, they launched Dstack. This product features a remote attestation API to verify the operational status of multiple Docker container images deployed in Confidential VMs, and remote attestation results are made transparent through a dedicated Explorer.
Another significant development is their Confidential AI Inference product, introduced in response to the recent surge in AI projects. Phala Network now supports the relatively new NVIDIA confidential computing, aiming to improve on AI inference approaches based on ZK/FHE, whose high overhead previously limited their practicality.
source: https://docs.phala.network/overview/phala-network/confidential-ai-inference
The image illustrates the structure of Phala Network’s confidential AI inference system. This system utilizes virtual machine-level Trusted Execution Environments (TEEs) like Intel TDX and AMD SEV to deploy AI models. It conducts AI inference through Nvidia confidential computing and securely transmits the results back to the CPU enclave. This method may incur significant overhead compared to regular models, as it involves two rounds of enclave computation. Nonetheless, it is anticipated to deliver substantial performance improvements over existing TEE-based AI inference methods that rely entirely on CPU performance. According to the paper published by Phala Network, the Llama3-based LLM inference overhead was measured at around 6–8%.
Others
In the AI X Crypto domain, other examples of using TEEs as coprocessors include iExec RLC, PIN AI, and Super Protocol. iExec RLC and PIN AI focus on safeguarding AI models and training data through TEEs, respectively. Super Protocol is preparing to launch a marketplace for trading TEE computing environments, similar to Marlin. However, detailed technical information about these projects is not yet publicly available. We will provide updates after their product launches.
Oasis (Prev. Rose)
Oasis, formerly known as Rose, is a Layer 1 blockchain designed to protect user privacy during transactions by running its execution client within an SGX enclave. A relatively mature chain, Oasis notably implements multi-VM support in its execution layer.
The execution layer, called Paratime, includes three components: Cipher, a WASM-based confidential VM; Sapphire, an EVM-based confidential VM; and Emerald, a standard EVM-compatible VM. By running the execution client inside a TEE enclave, Oasis fundamentally safeguards smart contracts and their computation from arbitrary modification by nodes. This structure is illustrated in the accompanying diagram.
source: https://docs.oasis.io/general/oasis-network/
When users send transactions, they encrypt the transaction data using an ephemeral key generated by the Oasis Node’s key manager within the enclave and transmit it to the computation module. The computation module receives the private key for the ephemeral key from the key manager, uses it to decrypt the data within the enclave, executes the smart contract, and modifies the node’s state values. Since the transaction execution results are also delivered to users in encrypted form, neither the server operating the Oasis node client nor external entities can observe the transaction contents.
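The client-side half of this flow can be sketched with an X25519 key exchange and an AEAD cipher from the cryptography package, as below. Oasis’s actual envelope format and cipher suite differ, so treat this only as an illustration of “encrypt the calldata to the enclave’s ephemeral key.”

```python
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def encrypt_tx(enclave_ephemeral_pub: X25519PublicKey, tx_data: bytes):
    """Encrypt calldata so only the key manager's enclave can decrypt it."""
    client_key = X25519PrivateKey.generate()
    shared = client_key.exchange(enclave_ephemeral_pub)
    sym_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"confidential-tx").derive(shared)
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(sym_key).encrypt(nonce, tx_data, None)
    # The client's public key and nonce travel with the ciphertext so the
    # enclave can derive the same symmetric key and decrypt inside the TEE.
    client_pub = client_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return client_pub, nonce, ciphertext
```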
Oasis highlights its strength in facilitating the creation of DApps that handle sensitive personal information on public blockchains, using its Confidential Paratime. This feature allows for the development of services requiring identity verification, such as SocialFi, credit lending, CEX integration services, and reputation-based services. These applications can securely receive and verify user biometric or KYC information within a secure enclave.
Secret Network
Secret Network is a Layer 1 chain within the Cosmos ecosystem and stands as one of the oldest TEE-based blockchains. It leverages Intel SGX enclaves to encrypt chain state values, supporting private transactions for its users.
In Secret Network, each contract has a unique secret key stored in the enclave of each node. When users call contracts via transactions encrypted with public keys, nodes decrypt the transaction data within the TEE to interact with the contract’s state values. These modified state values are then recorded in blocks, remaining encrypted.
The contract itself can be shared with external entities in bytecode or source code form. However, the network ensures user transaction privacy by preventing direct observation of user-sent transaction data and blocking external observation or tampering with current contract state values.
Since all smart contract state values are encrypted, viewing them necessitates decryption. Secret Network addresses this by introducing viewing keys. These keys bind specific user passwords to contracts, allowing only authorized users to observe contract state values.
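The viewing-key idea can be sketched as below: the contract derives a key from the user’s password inside the enclave and only answers queries that present a matching key. The derivation and storage details are illustrative, not Secret Network’s exact scheme.

```python
import hmac
import hashlib

class ConfidentialContract:
    def __init__(self, contract_seed: bytes):
        self._seed = contract_seed                  # lives only inside the enclave
        self._viewing_keys: dict[str, bytes] = {}   # address -> viewing key
        self._balances: dict[str, int] = {}         # state, decrypted only in-enclave

    def set_viewing_key(self, address: str, password: str) -> bytes:
        # Bind the user's password to this specific contract.
        key = hmac.new(self._seed, (address + password).encode(),
                       hashlib.sha256).digest()
        self._viewing_keys[address] = key
        return key  # returned to the user through an encrypted response

    def query_balance(self, address: str, presented_key: bytes) -> int:
        stored = self._viewing_keys.get(address)
        if stored is None or not hmac.compare_digest(stored, presented_key):
            raise PermissionError("invalid viewing key")
        return self._balances.get(address, 0)
```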
Clique, Quex Protocol
Unlike the TEE-based L1s introduced earlier, Clique and Quex Protocol offer infrastructure that enables general DApps to delegate private computations to an off-chain TEE environment. These results can be utilized at the smart contract level. They are notably used for verifiable incentive distribution mechanisms, off-chain order books, oracles, and KYC data protection.
Some ZK L2 chains employ multi-proof systems to address the relative immaturity of zero-knowledge proofs, often incorporating TEE proofs. Modern zero-knowledge proof mechanisms have not yet matured enough to be fully trusted for their safety, and soundness bugs in ZK circuits require significant effort to patch when incidents occur. As a precaution, chains using ZK proofs or ZK-EVMs are adopting TEE proofs, which detect potential bugs by re-executing blocks through local VMs within enclaves. Currently, the L2s running multi-proof systems that include TEE proofs are Taiko, Scroll, and Ternoa. Let’s briefly examine their motivations for using multi-proof systems and their structures.
Taiko
Taiko is currently the most prominent (planned) based rollup chain. A based rollup delegates sequencing to Ethereum block proposers instead of maintaining a separate centralized sequencer. According to Taiko’s based rollup diagram, L2 searchers compose transaction bundles and deliver them to L1 as batches. L1 block proposers then combine these with L1 transactions to build L1 blocks and capture MEV.
source: https://docs.taiko.xyz/core-concepts/multi-proofs/
In Taiko, TEE is utilized not during block composition but in the proof generation stage, which we’ll explain below. Thanks to its decentralized structure, Taiko doesn’t need to verify sequencer misbehavior. However, if there are bugs in the L2 node client codebase, a fully decentralized setup cannot handle them swiftly. This makes strong validity proofs necessary for security, resulting in a more complex challenge design compared to other rollups.
Taiko’s blocks undergo three stages of confirmation: proposed, proved, and verified. A block is considered proposed when its validity is checked by Taiko’s L1 contract (rollup contract). It reaches the proved state when verified by parallel provers, and the verified state when its parent block has been proved. To verify blocks, Taiko uses three types of proofs: SGX V2-based TEE proof, Succinct/RiscZero-based ZK proof, and Guardian proof, which relies on centralized multisig.
Taiko employs a contestation model for block verification, establishing a security tier hierarchy among Provers: TEE, ZK, ZK+TEE, and Guardian. This setup allows challengers to earn greater rewards when they identify incorrect proofs generated by higher-tier models. Proofs required for each block are randomly assigned with the following weightings: 5% for SGX+ZKP, 20% for ZKP, and the remainder using SGX. This ensures ZK provers can always earn higher rewards upon successful challenges.
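The weighted assignment described above can be reproduced with a simple deterministic draw, as in the sketch below; the seed source and tier labels are illustrative rather than Taiko’s exact mechanism.

```python
import hashlib

# Weightings described above: 5% SGX+ZKP, 20% ZKP, 75% SGX.
TIERS = [("SGX+ZKP", 5), ("ZKP", 20), ("SGX", 75)]

def required_proof(block_hash: bytes) -> str:
    """Deterministically pick the proof tier a block must be proven with."""
    draw = int.from_bytes(hashlib.sha256(block_hash).digest(), "big") % 100
    threshold = 0
    for tier, weight in TIERS:
        threshold += weight
        if draw < threshold:
            return tier
    return TIERS[-1][0]

print(required_proof(b"example-block-hash"))
```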
Readers might wonder how SGX provers generate and verify proofs. The primary role of SGX provers is to demonstrate that Taiko’s blocks were produced through standard computation. To do so, they re-execute blocks in a local VM inside the TEE, generate a proof of the resulting state change, and attach enclave attestation results to vouch for the environment itself.
Unlike ZK proof generation, which involves significant computational costs, TEE-based proof generation attests to computational integrity at a much lower cost under similar security assumptions. Verifying these proofs involves simple checks, such as ensuring that the ECDSA signature on the proof matches the registered prover’s key.
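Conceptually, the cheap verification step boils down to checking an ECDSA signature from a registered, attested prover key over the block’s state transition. The sketch below uses the third-party ecdsa package and a simplified message format; it is not Taiko’s actual proof encoding.

```python
import hashlib
from ecdsa import SigningKey, SECP256k1, BadSignatureError

# Inside the SGX enclave: a signing key whose public half was registered
# on-chain together with the enclave's attestation report.
prover_key = SigningKey.generate(curve=SECP256k1)
REGISTERED_PROVER_PUBKEY = prover_key.get_verifying_key()

def prove(parent_state_root: bytes, new_state_root: bytes, block_hash: bytes) -> bytes:
    # The enclave re-executed the block locally and observed this state transition.
    message = hashlib.sha256(parent_state_root + new_state_root + block_hash).digest()
    return prover_key.sign(message)

def verify(parent_state_root: bytes, new_state_root: bytes,
           block_hash: bytes, proof: bytes) -> bool:
    message = hashlib.sha256(parent_state_root + new_state_root + block_hash).digest()
    try:
        return REGISTERED_PROVER_PUBKEY.verify(proof, message)
    except BadSignatureError:
        return False
```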
In conclusion, TEE-based validity proofs can be seen as a method to verify chain integrity by generating proofs with slightly lower security levels but at a considerably lower cost compared to ZK proofs.
Scroll
Scroll is a notable rollup that adopts a Multi-proof system. It collaborates with Automata, an attestation layer to be introduced later, to generate both ZK proofs and TEE proofs for all blocks. This collaboration activates a dispute system to resolve conflicts between the two proofs.
source: https://scroll.io/blog/scaling-security
Scroll plans to support various hardware environments (currently only SGX), including Intel SGX, AMD SEV, and AWS Nitro, to minimize hardware dependencies. They address potential security issues in TEE by collecting proofs from diverse environments using threshold signatures.
Ternoa
Ternoa prioritizes detecting malicious actions by centralized L2 entities over addressing bugs in execution itself. Unlike Taiko or Scroll, which use TEE provers to complement existing ZK proofs, Ternoa employs Observers running in TEE-based environments. These Observers detect malicious actions by L2 sequencers and validators, focusing on behavior that can’t be evaluated solely from transaction data. Examples include RPC nodes censoring transactions based on IP address, sequencers altering sequencing algorithms, or intentionally failing to submit batch data.
Ternoa operates a separate L2 network called the Integrity Verification Chain (IVC) for verification tasks related to rollup entities. The rollup framework provider submits the latest sequencer image to the IVC. When a new rollup requests deployment, the IVC returns service images stored in TEE. After deployment, Observers regularly verify whether the deployed rollup uses the sequencer image as intended. They then submit integrity proofs, incorporating their verification results and attestation reports from their TEE environment, to confirm chain integrity.
Flashbots BuilderNet
Flashbots, widely recognized as an MEV solution provider, has consistently explored the application of Trusted Execution Environments (TEE) in blockchain technology. Notable research efforts include:
In this article, we’ll briefly outline Flashbots’ current role and discuss BuilderNet, a recent initiative aimed at decentralizing block building. Flashbots has announced complete migration plans for their existing solution through BuilderNet.
Ethereum employs a Proposer-Builder Separation (PBS) model. This system divides block creation into two roles: 1) Builders, responsible for block construction and MEV extraction, and 2) Proposers, who sign and propagate blocks created by Builders, so that MEV profits are spread across proposers. This structure has led some decentralized applications to collude with Builders off-chain to capture substantial MEV profits. As a result, a few Builders, such as Beaverbuild and Titan Builder, monopolistically create over 90% of Ethereum blocks. In severe cases, these Builders can censor arbitrary transactions; for example, regulated transactions, like those involving Tornado Cash, are actively censored by major Builders.
BuilderNet addresses these issues by enhancing transaction privacy and reducing barriers to block builder participation. Its structure can be broadly summarized as follows:
source: https://buildernet.org/docs/architecture
Builder nodes, which receive user transactions (orderflow), are run by various node operators. Each operates an open-source Builder instance within an Intel TDX environment. Users can freely verify each operator’s TEE environment and send encrypted transactions. Operators then share the orderflow they receive, submit blocks to the MEV-Boost relay, and, upon successful submission, distribute block rewards to searchers and others involved in block creation.
This structure provides several decentralization benefits:
Puffer Finance
Puffer Finance has introduced a Secure Signer tool designed to reduce the risk of Ethereum validators being slashed due to client errors or bugs. This tool uses an SGX Enclave-based signer for enhanced security.
source: https://docs.puffer.fi/technology/secure-signer/
The Secure Signer operates by generating and storing BLS validator keys within the SGX enclave and accessing them only when necessary. Its logic is straightforward: alongside the security provided by the TEE, it can catch validator mistakes or malicious actions by ensuring that slot numbers strictly increase before signing blocks or attestations. Puffer Finance highlights that this setup allows validators to attain security levels comparable to hardware wallets, surpassing the typical protections offered by software solutions.
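The slashing-protection rule itself is simple enough to sketch: refuse to sign anything whose slot does not strictly increase past the last signed slot. The class below is a conceptual stand-in; Puffer’s Secure Signer keeps the BLS key and this bookkeeping inside the SGX enclave.

```python
class SlashingProtectedSigner:
    """Minimal model of an enclave-held validator signer with slot monotonicity."""

    def __init__(self):
        self._last_block_slot = -1
        self._last_attestation_slot = -1

    def sign_block(self, slot: int, block_root: bytes) -> bytes:
        if slot <= self._last_block_slot:
            raise ValueError(f"refusing to sign block: slot {slot} does not increase")
        self._last_block_slot = slot
        return self._bls_sign(block_root)            # placeholder for the in-enclave BLS key

    def sign_attestation(self, slot: int, attestation_root: bytes) -> bytes:
        if slot <= self._last_attestation_slot:
            raise ValueError(f"refusing to sign attestation: slot {slot} does not increase")
        self._last_attestation_slot = slot
        return self._bls_sign(attestation_root)

    def _bls_sign(self, message: bytes) -> bytes:
        return b"signature-over-" + message          # stand-in; a real signer uses BLS12-381
```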
Unichain
Unichain, Uniswap’s Ethereum Layer 2 (L2) chain slated for launch in Q1 next year, has shared plans in their whitepaper to decentralize L2 block-building mechanisms using Trusted Execution Environments (TEE). Although detailed technical specifications remain unreleased, here’s a summary of their key proposals:
Moreover, Unichain intends to develop various TEE-based features, including an encrypted mempool, scheduled transactions, and TEE-protected smart contracts.
Automata
While blockchain has achieved considerable architectural decentralization, many components still lack sufficient censorship resistance due to their dependence on server operators. Automata aims to provide TEE-based solutions that minimize server operator dependence and data exposure in blockchain architecture. Automata’s notable implementations include an open-source SGX Prover and Verifier; TEE Compile, which verifies that executables deployed in TEEs match their source code; and TEE Builder, which adds privacy to block building through a TEE-based mempool and block builder. In addition, Automata allows TEE remote attestation results to be posted on-chain, making them publicly verifiable and usable from smart contracts.
Automata currently operates 1RPC, a TEE-based RPC service designed to protect identifying information of transaction submitters, such as IP and device details, through secure enclaves. Automata highlights the risk that, with the commercialization of UserOp due to account abstraction development, RPC services might infer UserOp patterns for specific users via AI integration, potentially compromising privacy. The structure of 1RPC is straightforward. It establishes secure connections with users, receives transactions (UserOp) into the TEE, and processes them with code deployed within the enclave. However, 1RPC only protects UserOp metadata. The actual parties involved and transaction contents remain exposed during interaction with the on-chain Entrypoint. A more fundamental approach to ensuring transaction privacy would involve protecting the mempool and block builder layers with TEE. This could be achieved by integrating with Automata’s TEE Builder.
source: https://x.com/tee_hee_he
What ultimately brought the TEE meta to prominence in web3 was the TEE-based Twitter AI agent. Many people likely first encountered TEE when an AI agent named @tee_hee_he appeared on X in late October and launched its memecoin on Ethereum. @tee_hee_he is an AI agent jointly developed by Nous Research and Flashbots’ Teleport project. It emerged in response to concerns that trending AI agent accounts at the time couldn’t prove they were actually relaying results generated by AI models. The developers designed a model that minimized intervention from centralized entities in processes such as Twitter account setup, crypto wallet creation, and AI model result relay.
source: https://medium.com/@tee_hee_he/setting-your-pet-rock-free-3e7895201f46
They deployed the AI agent in an Intel TDX environment, generating email, X account passwords, and OAuth tokens for Twitter access through browser simulation, and then removed all recovery options.
Recently, TEE was used in a similar context for AI-Pool, where @123skely successfully conducted fundraising. Currently, after AI meme coins deploy their contracts and addresses are made public, technically superior sniper bots typically secure most of the liquidity and manipulate prices. AI-Pool attempts to solve this issue by having AI conduct a type of presale.
source: https://x.com/0xCygaar/status/1871421277832954055
Another interesting case is DeepWorm, an AI agent with a biological neural network that simulates a worm’s brain. Like the other AI agents, DeepWorm uploads the enclave image of its worm brain to the Marlin network to protect its model and provide verifiability for its operation.
source: https://x.com/deepwormxyz/status/1867190794354078135
Since @tee_hee_he open-sourced all the code required for deployment, deploying trustworthy, unruggable TEE-based AI agents has become quite easy. Recently, Phala Network deployed ai16z’s Eliza in a TEE. As a16z highlighted in their 2025 crypto market outlook report, the TEE-based AI agent market is expected to serve as essential infrastructure for the future AI agent memecoin market.
Azuki Bobu
Azuki, a renowned Ethereum NFT project, collaborated with Flashbots last October to host a unique social event.
source: https://x.com/Azuki/status/1841906534151864557
Participants delegated posting permissions for their Twitter accounts to Flashbots and Bobu, who then posted tweets from those accounts simultaneously, akin to a flash mob. The event was a success, as shown in the image below.
Designed by Flashbots and Azuki, the event structure was as follows:
Azuki ensured the reliability of the event process by publishing the Enclave’s Docker image on Docker Hub. They also uploaded certificate transparency log verification scripts and remote attestation results for the TEE environment on GitHub. Although Flashbots identified dependencies on RPC and blockchain nodes as remaining risks, these could be mitigated through the use of TEE RPC or TEE-based rollups like Unichain.
While the project did not achieve a technical breakthrough, it is noteworthy for conducting a trustworthy social event solely using a TEE stack.
TEE provides much stronger security than typical software solutions, offering hardware-level protection that software alone cannot directly compromise. However, adoption of TEE in real products has been slow due to several limitations, which we’ll introduce below.
1) Centralized Attestation Mechanism
As mentioned earlier, users can utilize remote attestation mechanisms to verify the integrity of TEE enclaves and that data within enclaves hasn’t been tampered with. However, this verification process inevitably depends on the chipset manufacturer’s servers. The degree of trust varies slightly by vendor — SGX/TDX completely depends on Intel’s attestation server, while SEV allows VM owners to perform attestation directly. This is an inherent issue in TEE structure, and TEE researchers are working to resolve this through the development of open-source TEE, which we’ll mention later.
2) Side-channel attacks
TEE must never expose data stored within enclaves. However, because TEE can only encrypt the data inside enclaves, vulnerabilities may arise from attacks that leverage secondary information rather than the original data. Since Intel SGX’s public release in 2015, numerous critical side-channel attacks have been presented at top systems security conferences. Below are potential attack scenarios in TEE use cases, categorized by their impact:
While TEE is not a system that neutralizes all attack vectors and can leak various levels of information due to its fundamental characteristics, these attacks require strong prerequisites, such as attacker and victim code running on the same CPU core. This has led some to describe it as the “Man with the Glock” security model.
source: https://x.com/hdevalence/status/1613247598139428864
However, since TEE’s fundamental principle is “trust no one,” I believe TEE should be able to protect data even within this model to fully serve its role as a security module.
3) Real-world / Recent Exploits on TEE
Numerous bugs have been discovered in TEE implementations, especially in SGX, and most have been successfully patched. However, the complex hardware architecture of TEE systems means new vulnerabilities can emerge with each hardware release. Beyond academic research, there have been real-world exploits affecting Web3 projects, which warrant detailed examination.
These cases indicate that a “completely secure TEE” is unattainable, and users should be aware of potential vulnerabilities with new hardware releases.
In November, Paradigm’s Georgios Konstantopoulos outlined a framework for confidential hardware evolution, categorizing secure hardware into five distinct levels:
Currently, projects like Phala Network’s Confidential AI Inference operate at Level 3, while most services remain at Level 2 using cloud TEE or Intel TDX. Although Web3 TEE-based projects should eventually progress to Level 4 hardware, current performance limitations make this impractical. However, with major VCs like Paradigm and research teams such as Flashbots and Nethermind working towards TEE democratization, and given TEE’s alignment with Web3 principles, it is likely to become essential infrastructure for Web3 projects.
Ecosystem Explorer is ChainLight’s report series introducing our internal analysis of trending Web3 ecosystem projects from a security perspective, written by our research analysts. With the mission to assist security researchers and developers in collectively learning, growing, and contributing to making Web3 a safer place, we release our reports periodically, free of charge.
To receive the latest research and reports conducted by award-winning experts:
👉 Follow @ChainLight_io @c4lvin
Established in 2016, ChainLight’s award-winning experts provide tailored security solutions to fortify your smart contract and help you thrive on the blockchain.
In October, the term “TEE (Trusted Execution Environment)” began frequently appearing in X feeds. This surprised me since TEE has traditionally been a niche topic, primarily discussed in systems security academia. As someone who conducted research in a systems security lab, I was pleased to see this development. However, I was curious about why TEE was suddenly gaining attention in the Web3 space. I also noticed a lack of accessible content explaining TEE concepts to the general public, which motivated me to write this article.
TEE is a complex concept that can be challenging to fully understand without a computer science background. Therefore, this article starts with basic TEE concepts, explains why Web3 is interested in utilizing TEE, and then discusses current Web3 projects implementing TEE and its limitations.
In summary, this article will cover the following topics:
I believe most readers may not have the necessary background knowledge to fully understand what TEE exactly is. Since TEE is quite a complex concept when explored deeply, I will attempt to explain it as simply as possible.
Most Web2 servers manage data access through authorization settings. However, since this approach is purely software-based, it essentially becomes ineffective if higher-level privileges are obtained. For instance, if an attacker gains kernel-level privileges in the server’s operating system, they can potentially access all permission-controlled data on the server, including encryption keys. In such extreme scenarios, there’s virtually no way to prevent data theft through software-based methods alone. TEE, or Trusted Execution Environment, attempts to fundamentally address this issue through hardware-based security. TEEs are often called “confidential computing,” but this is a broader concept that includes computation mechanisms ensuring user data privacy, such as ZK, MPC, and FHE.
source: Jujutsu Kaisen
To use a simple analogy, TEE acts like an encrypted zone within memory. All data inside the TEE is encrypted, making raw data access from outside impossible. Even the OS kernel cannot read or modify it in its original form. Thus, even if an attacker gains administrator privileges on the server, they cannot decrypt the data within the TEE. This encrypted area is often called an “enclave.”
Creating an enclave and processing data within it requires specific instruction sets, similar to opcodes. These instructions use encryption keys stored in hardware-protected areas to perform computations on data within the enclave. As TEE is a hardware-level security module, its implementation varies by CPU chip vendor. For example, Intel supports SGX, AMD supports SEV, and ARM supports TrustZone. From a broader perspective, these implementations share the concept of “protecting memory through hardware-level encryption.”
Let’s first examine how the most common TEEs — Intel SGX, AMD SEV, and ARM TrustZone — operate, and then introduce more recent TEE implementations.
Intel SGX
SGX creates and accesses enclaves at the process level. The following image provides a clear representation of how an SGX-enabled program operates.
During development, developers must distinguish between untrusted and trusted code. Variables or functions that require protection by the enclave are designated as trusted code, while other operations are categorized as untrusted code. When untrusted code needs to input data into trusted code, or when trusted code must interact with untrusted code, special syscalls called ECALL and OCALL are employed.
If users need to directly interact with data within the enclave — for example, providing input or receiving output — they can communicate through secure channels established using protocols like SSL.
AMD SEV
Unlike SGX, which creates enclaves at the process level, SEV creates them at the virtual machine level. Memory allocated to virtual machines is encrypted and managed with independent keys, protecting data from the server’s operating system or other VMs. Although virtual machines are generally considered secure due to their sandboxed isolation, vulnerabilities that compromise this isolation cannot be completely ruled out. SEV is designed to provide security in such scenarios.
SEV generates encryption keys through a security processor that is physically separated from the CPU during the creation of the VM. These keys are then used to encrypt the VM memory. The following diagram illustrates the difference between SGX and SEV.
source: 10.1109/SRDS.2018.00042
SGX requires developers to explicitly divide code into untrusted and trusted segments. In contrast, SEV encrypts the entire virtual machine memory, demanding relatively less effort from developers in terms of implementation.
ARM TrustZone
Unlike Intel and AMD, which primarily produce CPUs for desktops and servers, ARM designs chipsets for lightweight systems such as mobile and embedded devices. As a result, their Secure Enclave implementation is slightly different from the SGX or SEV used in higher-level architectures.
TrustZone divides the system into a Secure World and a Normal World at the hardware level. Developers using TrustZone must implement security-critical functions in the Secure World, while general functions run in the Normal World. Transitions between these two worlds occur through special system calls known as Secure Monitor Calls, similar to SGX.
A key distinction is that TrustZone’s enclave extends beyond just the CPU or memory; it encompasses the entire system, including the system bus, peripherals, and interrupt controllers. Apple also utilizes a TEE called Secure Enclave in their products, which is very similar to TrustZone at a high level.
As we’ll discuss later, many original TEEs, including Intel SGX, have encountered side-channel vulnerabilities and development challenges due to structural issues. To address these problems, vendors have released improved versions. With the rising demand for secure cloud computing, platforms like AWS/Azure/GCP have started offering their own TEE services. Recently, the TEE concept has been extended to GPUs as well. Some Web3 use cases are now implementing these advanced TEEs, so I’ll briefly explain them.
Cloud TEEs: AWS Nitro, Azure Confidential Computing, Google Cloud Confidential Computing
With the growing demand for cloud computing services, providers have started developing their own TEE solutions. AWS’s Nitro is an enclave computing environment that works alongside EC2 instances. It achieves physical separation of the computing environment by utilizing a dedicated Nitro security chip for attestation and key management. The Nitro hypervisor safeguards enclave memory areas through functions provided by the chip, effectively shielding against attacks from both users and cloud providers.
Azure supports various TEE specifications, including Intel SGX, AMD SEV-SNP, and its own virtualization-based isolation. This flexibility in hardware environment selection offers users more options but may increase the attack surface when the user utilizes multiple TEEs.
Google Cloud provides confidential computing services that utilize Trusted Execution Environments (TEE), focusing on AI/ML workloads. While different from AWS Nitro, Google Cloud, like Azure, offers virtualization-based isolation using existing TEE infrastructure. Key differentiators include support for CPU accelerators such as Intel AMX to handle intensive AI/ML tasks, and GPU-based confidential computing through NVIDIA, which will be detailed later.
ARM CCA
ARM CCA, released in late 2021, is tailored for cloud environments, unlike TrustZone, which was designed for single embedded or mobile environments. TrustZone statically manages pre-designated secure memory regions, whereas CCA facilitates the dynamic creation of Realms (secure enclaves). This allows for multiple isolated environments within a single physical setup.
CCA can be likened to an ARM version of Intel SGX, though with notable differences. While SGX has memory limitations, CCA provides flexible memory allocation. Moreover, CCA employs a fundamentally different security approach by encrypting the entire physical memory, not just the designated enclave regions as SGX does.
Intel TDX
Intel introduced TDX, a technology that encrypts memory at the VM level, similar to AMD’s SEV. This release addresses feedback on SGX(v1)’s limitations, including the 256MB enclave size limit and the increased development complexity due to process-level enclave creation. The key difference from SEV is that TDX partially trusts the operating system, specifically the hypervisor, for VM resource management. Additionally, there are differences in the encryption mechanisms for each VM.
AMD SEV-SNP
SEV-SNP enhances the security of the existing SEV model. The original SEV relied on a trust model that left vulnerabilities, allowing hypervisors to modify memory mapping. SEV-SNP addresses this by adding a hardware manager to track memory states, preventing such modifications.
Additionally, it enables users to perform remote attestation directly, thereby minimizing trust anchors. SEV-SNP also introduced a Reverse Map Table to monitor memory page states and ownership, providing defense against malicious hypervisor attack models.
GPU TEE: NVIDIA Confidential Computing
TEE development has traditionally been focused on CPUs due to its reliance on hardware vendors. However, the need for handling complex computations like secure AI training and training data protection has underscored the necessity for GPU TEE. In response, NVIDIA introduced Confidential Computing features to the H100 GPU in 2023.
NVIDIA Confidential Computing offers independently encrypted and managed GPU instances, ensuring end-to-end security when combined with CPU TEE. Currently, it achieves this by integrating with AMD SEV-SNP or Intel TDX to build confidential computing pipelines.
When examining Web3 projects, you’ll often see claims of community governance through code uploads on GitHub. But how can one verify that the program deployed on the server actually matches the GitHub code?
Blockchain offers an environment where smart contracts are always public and unmodifiable due to continuous consensus. In contrast, typical Web2 servers allow administrators to update programs at any time. To verify authenticity, users need to compare hash values of binaries built from open-source programs on platforms like GitHub or check integrity through developer signatures.
The same principle applies to programs within TEE enclaves. For users to fully trust server-deployed programs, they must verify (attest) that the code and data within the enclave remain unchanged. In the case of SGX, it communicates with IAS (Intel Attestation Service) using a key stored in a special enclave. IAS verifies the integrity of the enclave and its internal data, then returns the results to users. In summary, TEE requires communication with hardware vendor-provided attestation servers to ensure enclave integrity.
Why TEE on Web3?
TEE might seem unfamiliar to the general public, as its knowledge is typically confined to specialized domains. However, TEE’s emergence aligns well with Web3’s principles. The fundamental premise of using TEE is “trust no one.” When properly implemented, TEE can protect user data from the program deployer, physical server owner, and even the OS kernel.
While current blockchain projects have achieved significant structural decentralization, many still rely on off-chain server environments such as sequencers, off-chain relayers, and keeper bots. Protocols that need to process sensitive user information, like KYC or biometric data, or those aiming to support private transactions, face the challenge of requiring trust in service providers. These issues can be substantially mitigated through data processing within enclaves.
As a result, TEE has gained popularity in the latter half of this year, aligning with AI-related themes such as data privacy and trustworthy AI agents. However, attempts to integrate TEE into the Web3 ecosystem existed long before this. In this article, we’ll introduce projects across various fields that have applied TEE in the Web3 ecosystem, beyond just the AI sector.
Marlin
Marlin is a verifiable computing protocol designed to offer a secure computation environment using TEE or ZK technology. One of their primary goals is to develop a decentralized web. Marlin manages two subnets: Oyster and Kalypso, and Oyster functions as the TEE-based coprocessing protocol.
1) Oyster CVM
Oyster CVM (Oyster for convenience) acts as a P2P TEE marketplace. Users buy AWS Nitro Enclave computing environments through Oyster’s off-chain marketplace and deploy their program images there. Below is an abstract structure of Oyster:
source: https://docs.marlin.org/oyster/protocol/cvm/workflow/
Oyster bears a very similar structure to Akash. In Oyster, blockchain’s role is to verify whether each TEE computing environment is operating properly, and this is done through observers called Providers. Providers continuously check the availability of Enclaves in real-time and report their findings to the Oyster network. They stake $POND tokens, which are at risk of being slashed if they engage in malicious activities. Additionally, a decentralized network of entities, referred to as ‘auditors’, exists to oversee Provider slashing. Every epoch, auditors get assigned their jobs, and send audit requests to enclaves that are randomly chosen by a seed generated inside an enclave.
However, Oyster has implemented a contract called NitroProver that verifies remote attestation results on-chain, allowing users to verify the integrity of their purchased TEE on-chain.
User-deployed instances can be accessed through both smart contracts and Web2 APIs. The computation results can be integrated into contracts by presenting them as oracles. As shown in the dashboard, this capability is suitable not only for smart contracts but also for decentralizing Web2 services.
Similar to Akash, Oyster is susceptible to potential instance takeovers by attackers if there are vulnerabilities in the off-chain marketplace. In such scenarios, although enclave data might remain secure, raw data stored outside the enclave and service operation privileges could be compromised. In case of sensitive data, which is stored in untrusted memory but should not be exposed, developers must encrypt those data and store them separately. Marlin currently provides an external storage with a MPC-based persistent key to handle these cases.
2) Oyster Serverless
While Oyster CVM operates as a P2P TEE marketplace, Oyster Serverless resembles AWS Lambda (or Function-as-a-Service) with TEE. Utilizing Oyster Serverless, users can execute functions without renting instances, paying on-demand.
The execution flow of Oyster Serverless would be as follows:
With Oyster Serverless, users can send web2 API requests or smart contract calls through a smart contract, while the integrity of the execution guaranteed through TEE. Users can also subscribe Serverless for periodic execution, which would be particularly useful for oracle fetchers.
Phala Network
Phala, previously discussed in our AI X Crypto article, has significantly shifted its focus to AI coprocessors.
The basic design of the Phala Network includes Workers and Gatekeepers. Workers function as regular nodes that execute computations for clients. Gatekeepers, on the other hand, manage keys that enable Workers to decrypt and compute encrypted state values. Workers handle contract state values encrypted via Intel SGX, which necessitates keys from Gatekeepers to read or write these values.
source: https://docs.phala.network/tech-specs/blockchain
Phala has expanded its offerings by supporting SDKs for Confidential VMs in Intel TDX environments. Recently, in collaboration with Flashbot, they launched Dstack. This product features a remote attestation API to verify the operational status of multiple Docker container images deployed in Confidential VMs. Remote attestation through Dstack ensures transparency via a dedicated Explorer.
Another significant development is their Confidential AI Inference product, introduced in response to the recent surge in AI projects. Phala Network now supports the relatively new Nvidia confidential computing, aiming to enhance AI inference services using ZK/FHE. This technology previously faced challenges due to high overhead, limiting its practicality.
source: https://docs.phala.network/overview/phala-network/confidential-ai-inference
The image illustrates the structure of Phala Network’s confidential AI inference system. This system utilizes virtual machine-level Trusted Execution Environments (TEEs) like Intel TDX and AMD SEV to deploy AI models. It conducts AI inference through Nvidia confidential computing and securely transmits the results back to the CPU enclave. This method may incur significant overhead compared to regular models, as it involves two rounds of enclave computation. Nonetheless, it is anticipated to deliver substantial performance improvements over existing TEE-based AI inference methods that rely entirely on CPU performance. According to the paper published by Phala Network, the Llama3-based LLM inference overhead was measured at around 6–8%.
Others
In the AI X Crypto domain, other examples of using TEEs as coprocessors include iExec RLC, PIN AI, and Super Protocol. iExec RLC and PIN AI focus on safeguarding AI models and training data through TEEs, respectively. Super Protocol is preparing to launch a marketplace for trading TEE computing environments, similar to Marlin. However, detailed technical information about these projects is not yet publicly available. We will provide updates after their product launches.
Oasis (Prev. Rose)
Oasis, formerly known as Rose, is a Layer 1 blockchain designed to protect user privacy during transactions by running its execution client within an SGX enclave. Although it is a relatively mature chain, Oasis has innovatively implemented multi-VM support in its execution layer.
The execution layer, called Paratime, includes three components: Cipher, a WASM-based confidential VM; Sapphire, an EVM-based confidential VM; and Emerald, a standard EVM-compatible VM. Oasis fundamentally safeguards smart contracts and their computational processes from arbitrary modifications by nodes, ensuring the execution client operates within a TEE enclave. This structure is illustrated in the accompanying diagram.
source: https://docs.oasis.io/general/oasis-network/
When users send transactions, they encrypt the transaction data using an ephemeral key generated by the Oasis Node’s key manager within the enclave and transmit it to the computation module. The computation module receives the private key for the ephemeral key from the key manager, uses it to decrypt the data within the enclave, executes the smart contract, and modifies the node’s state values. Since the transaction execution results are also delivered to users in encrypted form, neither the server operating the Oasis node client nor external entities can observe the transaction contents.
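The following simplified sketch mirrors that flow, using a symmetric Fernet key as a stand-in for the ephemeral key negotiated with the key manager enclave; a real implementation would use asymmetric key exchange, and the function names here are illustrative.

```python
# Simplified sketch of Oasis-style confidential transaction execution.
# A symmetric Fernet key stands in for the ephemeral key negotiated with the
# key manager enclave; names are illustrative.
from cryptography.fernet import Fernet


class KeyManagerEnclave:
    """Generates ephemeral keys inside the enclave."""
    def __init__(self):
        self._ephemeral_key = Fernet.generate_key()

    def public_material(self) -> bytes:
        # With a symmetric stand-in, the "public material" is the key itself;
        # a real implementation would expose only a public key.
        return self._ephemeral_key

    def key_for_compute_module(self) -> bytes:
        return self._ephemeral_key


def client_encrypt_tx(tx: bytes, key_material: bytes) -> bytes:
    return Fernet(key_material).encrypt(tx)


def compute_module_execute(encrypted_tx: bytes, key: bytes) -> bytes:
    f = Fernet(key)
    tx = f.decrypt(encrypted_tx)     # decrypted only inside the enclave
    result = b"executed:" + tx       # run the smart contract (placeholder)
    return f.encrypt(result)         # the result is returned encrypted


km = KeyManagerEnclave()
ct = client_encrypt_tx(b"transfer 10 ROSE", km.public_material())
encrypted_result = compute_module_execute(ct, km.key_for_compute_module())
print(Fernet(km.public_material()).decrypt(encrypted_result))
```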
Oasis highlights its strength in facilitating the creation of DApps that handle sensitive personal information on public blockchains, using its Confidential Paratime. This feature allows for the development of services requiring identity verification, such as SocialFi, credit lending, CEX integration services, and reputation-based services. These applications can securely receive and verify user biometric or KYC information within a secure enclave.
Secret Network
Secret Network is a Layer 1 chain within the Cosmos ecosystem and stands as one of the oldest TEE-based blockchains. It leverages Intel SGX enclaves to encrypt chain state values, supporting private transactions for its users.
In Secret Network, each contract has a unique secret key stored in the enclave of each node. When users call contracts via transactions encrypted with public keys, nodes decrypt the transaction data within the TEE to interact with the contract’s state values. These modified state values are then recorded in blocks, remaining encrypted.
The contract itself can be shared with external entities in bytecode or source code form. However, the network ensures user transaction privacy by preventing direct observation of user-sent transaction data and blocking external observation or tampering with current contract state values.
Since all smart contract state values are encrypted, viewing them necessitates decryption. Secret Network addresses this by introducing viewing keys. These keys bind specific user passwords to contracts, allowing only authorized users to observe contract state values.
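A minimal sketch of how a viewing key can gate read access to encrypted state is shown below; the derivation and storage details are illustrative rather than Secret Network's exact scheme.

```python
# Sketch: viewing keys gating read access to encrypted contract state.
# Key derivation and storage here are illustrative, not Secret Network's actual scheme.
import hashlib
import hmac


def derive_viewing_key(user_password: str, contract_addr: str) -> str:
    # Bind the user's secret to a specific contract.
    return hmac.new(user_password.encode(), contract_addr.encode(),
                    hashlib.sha256).hexdigest()


class ConfidentialContract:
    def __init__(self, addr: str):
        self.addr = addr
        self._state = {}          # decrypted only inside the enclave
        self._viewing_keys = {}   # user address -> registered viewing key

    def set_viewing_key(self, user: str, viewing_key: str) -> None:
        self._viewing_keys[user] = viewing_key

    def query_balance(self, user: str, viewing_key: str) -> int:
        registered = self._viewing_keys.get(user, "")
        if not hmac.compare_digest(registered, viewing_key):
            raise PermissionError("invalid viewing key")
        return self._state.get(user, 0)


contract = ConfidentialContract("secret1xyz")
vk = derive_viewing_key("hunter2", contract.addr)
contract.set_viewing_key("alice", vk)
contract._state["alice"] = 42   # state writes would normally happen via encrypted txs
print(contract.query_balance("alice", vk))  # 42
```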
Clique, Quex Protocol
Unlike the TEE-based L1s introduced earlier, Clique and Quex Protocol offer infrastructure that enables general DApps to delegate private computations to an off-chain TEE environment. These results can be utilized at the smart contract level. They are notably used for verifiable incentive distribution mechanisms, off-chain order books, oracles, and KYC data protection.
Some ZK L2 chains employ multi-proof systems to address the inherent instability of zero-knowledge proofs, often incorporating TEE proofs. Modern zero-knowledge proof mechanisms have not yet matured enough to be fully trusted for their safety, and bugs related to soundness in ZK circuits require significant effort to patch when incidents occur. As a precaution, chains using ZK proofs or ZK-EVMs are adopting TEE proofs to detect potential bugs by re-executing blocks through local VMs within enclaves. Currently, L2s utilizing multi-proof systems, including TEE, are Taiko, Scroll, and Ternoa. Let’s briefly examine their motivations for using multi-proof systems and their structures.
Taiko
Taiko is currently the most prominent (planned) based rollup chain. A based rollup delegates sequencing to Ethereum block proposers rather than maintaining a separate centralized sequencer. According to Taiko's diagram of based rollups, L2 searchers compose transaction bundles and deliver them to L1 as batches. L1 block proposers then reconstruct these, along with L1 transactions, to generate L1 blocks and capture MEV.
source: https://docs.taiko.xyz/core-concepts/multi-proofs/
In Taiko, TEE is utilized not during the block composition stage but in the proof generation stage, which we'll explain below. Because sequencing is decentralized, Taiko doesn't need to verify sequencer misbehavior. However, if there are bugs within the L2 node client codebase, a fully decentralized setup cannot handle them swiftly. This necessitates high-level validity proofs to ensure security, resulting in a more complex challenge design compared to other rollups.
Taiko's blocks undergo three stages of confirmation: proposed, proved, and verified. A block is considered proposed once its validity has been checked by Taiko's L1 (rollup) contract. It reaches the proved state when verified by parallel provers, and the verified state once the block is proved and its parent block has also been verified. To verify blocks, Taiko uses three types of proofs: an SGX V2-based TEE proof, a Succinct/RISC Zero-based ZK proof, and a Guardian proof, which relies on a centralized multisig.
Taiko employs a contestation model for block verification, establishing a security tier hierarchy among Provers: TEE, ZK, ZK+TEE, and Guardian. This setup allows challengers to earn greater rewards when they identify incorrect proofs generated by higher-tier models. Proofs required for each block are randomly assigned with the following weightings: 5% for SGX+ZKP, 20% for ZKP, and the remainder using SGX. This ensures ZK provers can always earn higher rewards upon successful challenges.
Readers might wonder how SGX provers generate and verify proofs. The primary role of SGX provers is to demonstrate that Taiko’s blocks were generated through standard computation. These provers generate proofs of state value changes and verify the environment using results from re-executing blocks via a local VM within the TEE environment, alongside enclave attestation results.
Unlike ZK proof generation, which involves significant computational costs, TEE-based proof generation verifies computational integrity at a much lower cost under similar security assumptions. Verification of these proofs involves simple checks, such as ensuring the ECDSA signature used in the proof matches the prover’s signature.
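The sketch below captures that idea: the enclave re-executes the block, signs the resulting state transition with a key registered during attestation, and verification reduces to a cheap signature check. The payload layout is illustrative.

```python
# Sketch: a TEE prover signs the re-executed state transition; verification is a
# cheap signature check against the prover's registered key. Payload layout is illustrative.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key generated inside the enclave and registered on L1 during attestation.
enclave_key = ec.generate_private_key(ec.SECP256K1())
registered_pubkey = enclave_key.public_key()


def prove_block(parent_state_root: bytes, block_txs: bytes):
    # Re-execute the block inside the enclave (placeholder state transition).
    new_state_root = hashlib.sha256(parent_state_root + block_txs).digest()
    payload = parent_state_root + new_state_root + hashlib.sha256(block_txs).digest()
    signature = enclave_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return new_state_root, signature


def verify_proof(parent_state_root: bytes, block_txs: bytes,
                 claimed_root: bytes, signature: bytes) -> bool:
    payload = parent_state_root + claimed_root + hashlib.sha256(block_txs).digest()
    try:
        registered_pubkey.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


root, sig = prove_block(b"\x00" * 32, b"tx-batch")
print(verify_proof(b"\x00" * 32, b"tx-batch", root, sig))  # True
```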
In conclusion, TEE-based validity proofs can be seen as a method to verify chain integrity by generating proofs with slightly lower security levels but at a considerably lower cost compared to ZK proofs.
Scroll
Scroll is a notable rollup that adopts a Multi-proof system. It collaborates with Automata, an attestation layer to be introduced later, to generate both ZK proofs and TEE proofs for all blocks. This collaboration activates a dispute system to resolve conflicts between the two proofs.
source: https://scroll.io/blog/scaling-security
Scroll plans to support various hardware environments, including Intel SGX, AMD SEV, and AWS Nitro (currently only SGX), to minimize hardware dependencies. It addresses potential security issues in any single TEE by collecting proofs from diverse environments using threshold signatures.
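The sketch below illustrates the threshold idea in its simplest form: a state root is accepted only once enough distinct hardware platforms attest to it. Counting distinct platforms stands in for an actual threshold signature scheme, and the 2-of-3 threshold is an assumption.

```python
# Sketch: accept a state root only when enough distinct TEE platforms attest to it.
# Counting distinct platforms stands in for a real threshold signature scheme.
from collections import defaultdict

THRESHOLD = 2  # assumed 2-of-3 distinct TEE platforms


def accept_state_root(attestations):
    platforms_per_root = defaultdict(set)
    for att in attestations:
        # Each attestation names the hardware platform and the state root it vouches for.
        platforms_per_root[att["state_root"]].add(att["platform"])
    for root, platforms in platforms_per_root.items():
        if len(platforms) >= THRESHOLD:
            return root
    return None


attestations = [
    {"platform": "intel-sgx", "state_root": "0xabc"},
    {"platform": "amd-sev", "state_root": "0xabc"},
    {"platform": "aws-nitro", "state_root": "0xdef"},  # a disagreeing environment
]
print(accept_state_root(attestations))  # 0xabc
```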
Ternoa
Ternoa prioritizes detecting malicious actions by centralized L2 entities over addressing bugs in execution itself. Unlike Taiko or Scroll, which use TEE provers to complement existing ZK proofs, Ternoa employs Observers running in TEE-based environments. These Observers detect malicious actions by L2 sequencers and validators, focusing on behavior that cannot be evaluated from transaction data alone. Examples include RPC nodes censoring transactions based on IP address, sequencers altering sequencing algorithms, or intentionally failing to submit batch data.
Ternoa operates a separate L2 network called the Integrity Verification Chain (IVC) for verification tasks related to rollup entities. The rollup framework provider submits the latest sequencer image to the IVC. When a new rollup requests deployment, the IVC returns service images stored in TEE. After deployment, Observers regularly verify whether the deployed rollup uses the sequencer image as intended. They then submit integrity proofs, incorporating their verification results and attestation reports from their TEE environment, to confirm chain integrity.
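A rough sketch of an Observer's check is shown below: compare the measurement of the deployed sequencer against the image registered on the IVC, then emit an integrity report alongside the Observer's own attestation. All interfaces here are illustrative, not Ternoa's actual design.

```python
# Rough sketch of a Ternoa-style Observer check; all interfaces are illustrative.
import hashlib
import time


def registered_image_digest(ivc_registry: dict, rollup_id: str) -> str:
    # Digest of the sequencer image the rollup is supposed to run, stored on the IVC.
    return ivc_registry[rollup_id]


def measured_image_digest(deployed_image: bytes) -> str:
    # In practice this value would come from the deployed TEE's attestation report.
    return hashlib.sha256(deployed_image).hexdigest()


def observe_once(ivc_registry: dict, rollup_id: str, deployed_image: bytes) -> dict:
    ok = registered_image_digest(ivc_registry, rollup_id) == measured_image_digest(deployed_image)
    return {
        "rollup": rollup_id,
        "image_matches": ok,
        "observer_attestation": "<quote from the Observer's own enclave>",
        "timestamp": int(time.time()),
    }


image = b"sequencer-image-v1"
registry = {"rollup-42": hashlib.sha256(image).hexdigest()}
print(observe_once(registry, "rollup-42", image))        # image_matches: True
print(observe_once(registry, "rollup-42", b"tampered"))  # image_matches: False
```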
Flashbots BuilderNet
Flashbots, widely recognized as an MEV solution provider, has consistently explored the application of Trusted Execution Environments (TEE) in blockchain technology. Notable research efforts include:
In this article, we'll briefly outline Flashbots' current role and discuss BuilderNet, a recent initiative aimed at decentralizing block building. Flashbots has announced plans to fully migrate its existing block-building solution to BuilderNet.
Ethereum employs a Proposer-Builder Separation (PBS) model. This system divides block creation into two roles: Builders, who construct blocks and extract MEV, and Proposers, who sign and propagate the blocks created by Builders, with the aim of decentralizing MEV profits. In practice, this structure has led some decentralized applications to collude with Builders off-chain to capture substantial MEV profits. As a result, a few Builders, such as Beaverbuild and Titan Builder, monopolistically create over 90% of Ethereum blocks. In severe instances, these Builders can censor arbitrary transactions; for example, regulated transactions, like those involving Tornado Cash, are actively censored by major Builders.
BuilderNet addresses these issues by enhancing transaction privacy and reducing barriers to block builder participation. Its structure can be broadly summarized as follows:
source: https://buildernet.org/docs/architecture
Builder nodes, which receive user transactions (orderflow), are run by various node operators, each operating an open-source Builder instance within an Intel TDX environment. Users can freely verify each operator's TEE environment and send encrypted transactions. Operators then share the orderflow they receive, submit blocks to the MEV-Boost relay, and, upon successful submission, distribute block rewards to searchers and others involved in block creation.
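From a user's perspective, the flow can be condensed as follows: verify the operator's TDX attestation against the published open-source builder, then encrypt the orderflow to a key bound to that enclave. The sketch below is a placeholder for that flow, not BuilderNet's actual API.

```python
# Condensed sketch of a user submitting orderflow to a BuilderNet operator.
# The attestation check and key handling are placeholders, not BuilderNet's API.
from cryptography.fernet import Fernet


class BuilderOperator:
    """An operator running an open-source builder inside Intel TDX."""
    def __init__(self, name: str):
        self.name = name
        self._orderflow_key = Fernet.generate_key()  # bound to the attested enclave
        self.shared_orderflow = []

    def attestation_ok(self, expected_builder_measurement: str) -> bool:
        # Placeholder: a real check verifies the TDX quote against the
        # measurement of the published open-source builder image.
        return expected_builder_measurement == "open-source-builder-v1"

    def orderflow_key(self) -> bytes:
        return self._orderflow_key

    def receive(self, encrypted_bundle: bytes) -> None:
        bundle = Fernet(self._orderflow_key).decrypt(encrypted_bundle)
        self.shared_orderflow.append(bundle)  # shared with other operators in practice


operator = BuilderOperator("operator-a")
if operator.attestation_ok("open-source-builder-v1"):
    ciphertext = Fernet(operator.orderflow_key()).encrypt(b"swap 1 ETH for USDC")
    operator.receive(ciphertext)
print(operator.shared_orderflow)
```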
This structure provides several decentralization benefits:
Puffer Finance
Puffer Finance has introduced a Secure Signer tool designed to reduce the risk of Ethereum validators being slashed due to client errors or bugs. This tool uses an SGX Enclave-based signer for enhanced security.
source: https://docs.puffer.fi/technology/secure-signer/
The Secure Signer operates by generating and storing BLS validator keys within the SGX enclave and accessing them only when necessary. Its logic is straightforward: in addition to the security provided by the TEE, it can catch validator mistakes or malicious actions by ensuring that slots strictly increase before signing blocks or attestations. Puffer Finance highlights that this setup allows validators to attain security levels comparable to hardware wallets, surpassing the typical protections offered by software solutions.
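The slashing-protection rule described above is simple to express: refuse to sign unless the slot strictly increases. Below is a minimal sketch with the BLS signing itself stubbed out, since the key never leaves the enclave.

```python
# Minimal sketch of Secure-Signer-style slashing protection: the enclave refuses
# to sign unless the slot strictly increases. BLS signing itself is stubbed out.
class SecureSigner:
    def __init__(self):
        # The BLS validator key is generated and kept inside the enclave.
        self._last_signed_slot = -1

    def sign_block(self, slot: int, block_root: bytes) -> bytes:
        if slot <= self._last_signed_slot:
            # Signing the same or an earlier slot twice is slashable; refuse.
            raise ValueError(f"refusing to sign slot {slot}; "
                             f"last signed slot is {self._last_signed_slot}")
        self._last_signed_slot = slot
        return b"<bls signature over " + block_root + b">"


signer = SecureSigner()
signer.sign_block(100, b"root-a")      # ok
try:
    signer.sign_block(100, b"root-b")  # double proposal for the same slot -> rejected
except ValueError as e:
    print(e)
```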
Unichain
Unichain, Uniswap’s Ethereum Layer 2 (L2) chain slated for launch in Q1 next year, has shared plans in their whitepaper to decentralize L2 block-building mechanisms using Trusted Execution Environments (TEE). Although detailed technical specifications remain unreleased, here’s a summary of their key proposals:
Moreover, Unichain intends to develop various TEE-based features, including an encrypted mempool, scheduled transactions, and TEE-protected smart contracts.
Automata
While blockchain has achieved considerable decentralization in its architecture, many components still lack sufficient censorship resistance due to their dependence on server operators. Automata aims to provide TEE-based solutions that minimize server operator dependence and data exposure in blockchain architecture. Automata's notable implementations include an open-source SGX Prover and Verifier, TEE Compile, which verifies that executables deployed in TEEs match their source code, and TEE Builder, which adds privacy to block building through a TEE-based mempool and block builder. In addition, Automata allows TEE remote attestation results to be posted on-chain, making them publicly verifiable and usable by smart contracts.
Automata currently operates 1RPC, a TEE-based RPC service designed to protect identifying information about transaction submitters, such as IP and device details, through secure enclaves. Automata highlights the risk that, as UserOps become commonplace with the development of account abstraction, RPC services could use AI to infer UserOp patterns for specific users, potentially compromising privacy. The structure of 1RPC is straightforward: it establishes secure connections with users, receives transactions (UserOps) into the TEE, and processes them with code deployed within the enclave. However, 1RPC only protects UserOp metadata; the actual parties involved and transaction contents remain exposed during interaction with the on-chain Entrypoint. A more fundamental approach to transaction privacy would be to protect the mempool and block builder layers with TEE as well, for example by integrating with Automata's TEE Builder.
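The privacy boundary 1RPC draws can be sketched in a few lines: identifying metadata is dropped inside the enclave before the transaction is relayed, while the UserOp itself remains visible once it reaches the Entrypoint. The field names below are illustrative.

```python
# Toy sketch of the 1RPC privacy boundary: identifying metadata is dropped inside
# the enclave before relaying, while the UserOp itself remains public on-chain.
def relay_inside_enclave(request: dict) -> dict:
    # Only the UserOp is forwarded toward the on-chain Entrypoint; IP address,
    # device details, and other identifying metadata never leave the enclave.
    return {"user_op": request["user_op"]}


request = {
    "user_op": {"sender": "0xabc...", "callData": "0xdeadbeef"},
    "ip": "203.0.113.7",     # never forwarded
    "device": "iPhone15,3",  # never forwarded
}
print(relay_inside_enclave(request))
```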
source: https://x.com/tee_hee_he
What ultimately brought the TEE meta to prominence in web3 was the TEE-based Twitter AI agent. Many people likely first encountered TEE when an AI agent named @tee_hee_he appeared on X in late October and launched its memecoin on Ethereum. @tee_hee_he is an AI agent jointly developed by Nous Research and Flashbots’ Teleport project. It emerged in response to concerns that trending AI agent accounts at the time couldn’t prove they were actually relaying results generated by AI models. The developers designed a model that minimized intervention from centralized entities in processes such as Twitter account setup, crypto wallet creation, and AI model result relay.
source: https://medium.com/@tee_hee_he/setting-your-pet-rock-free-3e7895201f46
They deployed the AI agent in an Intel TDX environment, generated the email and X account passwords and the OAuth tokens for Twitter access through browser simulation, and then removed all recovery options.
Recently, TEE was used in a similar context for AI-Pool, where @123skely successfully conducted fundraising. Currently, after AI meme coins deploy their contracts and addresses are made public, technically superior sniper bots typically secure most of the liquidity and manipulate prices. AI-Pool attempts to solve this issue by having AI conduct a type of presale.
source: https://x.com/0xCygaar/status/1871421277832954055
Another interesting case is DeepWorm, an AI agent with a biological neural network that simulates a worm's brain. Like the other AI agents, DeepWorm uploads the enclave image of its worm brain to Marlin Network to protect its model and provide verifiability of its operation.
source: https://x.com/deepwormxyz/status/1867190794354078135
Since @tee_hee_he open-sourced all the code required for deployment, deploying trustworthy, unruggable TEE-based AI agents has become quite easy. Recently, Phala Network deployed ai16z's Eliza in TEE. As a16z highlighted in their 2025 crypto market outlook report, the TEE-based AI agent market is expected to serve as essential infrastructure in the future AI agent memecoin market.
Azuki Bobu
Azuki, a renowned Ethereum NFT project, collaborated with Flashbots last October to host a unique social event.
source: https://x.com/Azuki/status/1841906534151864557
This involved participants delegating tweet-posting permissions for their Twitter accounts to Flashbots and Bobu, who then posted tweets from those accounts simultaneously, akin to a flash mob. The event was a success, as shown in the image below.
Designed by Flashbots and Azuki, the event structure was as follows:
Azuki ensured the reliability of the event process by publishing the Enclave’s Docker image on Docker Hub. They also uploaded certificate transparency log verification scripts and remote attestation results for the TEE environment on GitHub. Although Flashbots identified dependencies on RPC and blockchain nodes as remaining risks, these could be mitigated through the use of TEE RPC or TEE-based rollups like Unichain.
While the project did not achieve a technical breakthrough, it is noteworthy for conducting a trustworthy social event solely using a TEE stack.
TEE provides much higher security than typical software solutions because it offers hardware-level protection that software cannot directly compromise. However, TEE has been slow to gain adoption in actual products due to several limitations, which we'll introduce below.
1) Centralized Attestation Mechanism
As mentioned earlier, users can utilize remote attestation mechanisms to verify the integrity of TEE enclaves and that data within enclaves hasn’t been tampered with. However, this verification process inevitably depends on the chipset manufacturer’s servers. The degree of trust varies slightly by vendor — SGX/TDX completely depends on Intel’s attestation server, while SEV allows VM owners to perform attestation directly. This is an inherent issue in TEE structure, and TEE researchers are working to resolve this through the development of open-source TEE, which we’ll mention later.
2) Side-channel attacks
TEE must never expose data stored within enclaves. However, because TEE can only encrypt data inside enclaves, vulnerabilities may arise from attacks leveraging secondary information, not the original data. Since Intel SGX’s public release in 2015, numerous critical side-channel attacks have been highlighted in top system security conferences. Below are potential attack scenarios in TEE use cases, categorized by their impact:
While TEE does not neutralize every attack vector and can, by its fundamental design, leak information at various levels, these attacks require strong prerequisites, such as the attacker's and victim's code running on the same CPU core. This has led some to describe TEE as a "Man with the Glock" security model.
source: https://x.com/hdevalence/status/1613247598139428864
However, since TEE’s fundamental principle is “trust no one,” I believe TEE should be able to protect data even within this model to fully serve its role as a security module.
3) Real-world / Recent Exploits on TEE
Numerous bugs have been discovered in TEE implementations, especially in SGX, and most have been successfully patched. However, the complex hardware architecture of TEE systems means new vulnerabilities can emerge with each hardware release. Beyond academic research, there have been real-world exploits affecting Web3 projects, which warrant detailed examination.
These cases indicate that a “completely secure TEE” is unattainable, and users should be aware of potential vulnerabilities with new hardware releases.
In November, Paradigm’s Georgios Konstantopoulos outlined a framework for confidential hardware evolution, categorizing secure hardware into five distinct levels:
Currently, projects like Phala Network’s Confidential AI Inference operate at Level 3, while most services remain at Level 2 using cloud TEE or Intel TDX. Although Web3 TEE-based projects should eventually progress to Level 4 hardware, current performance limitations make this impractical. However, with major VCs like Paradigm and research teams such as Flashbots and Nethermind working towards TEE democratization, and given TEE’s alignment with Web3 principles, it is likely to become essential infrastructure for Web3 projects.
Ecosystem Explorer is ChainLight's report series introducing internal analysis of trending Web3 ecosystem projects from a security perspective, written by our research analysts. With the mission of helping security researchers and developers collectively learn, grow, and contribute to making Web3 a safer place, we release our reports periodically, free of charge.
To receive the latest research and reports conducted by award-winning experts:
👉 Follow @ChainLight_io @c4lvin
Established in 2016, ChainLight’s award-winning experts provide tailored security solutions to fortify your smart contract and help you thrive on the blockchain.