AI as the Agent of Crypto – The Evolution of the AI Agent

Intermediate · 1/27/2025, 7:44:45 AM
AI is the agent of Crypto, and this is the best annotation to view the current AI surge from the crypto perspective. Crypto's enthusiasm for AI differs from other industries, as we especially aim to integrate the issuance and operation of financial assets with it.

AI as the Agent of Crypto

“A work of art is never completed, only abandoned.”

Everyone is talking about AI Agents, but they don't all mean the same thing. Crypto insiders, the general public, and AI practitioners each understand AI Agents differently.

A long time ago, I wrote that Crypto is the illusion of AI. Since then, the combination of Crypto and AI has remained a one-sided love affair. AI practitioners rarely mention Web3 or blockchain, while Crypto enthusiasts are deeply enamored with AI. After witnessing the phenomenon where AI Agent frameworks can even be tokenized, it’s uncertain whether this could truly bring AI practitioners into our world.

AI is the agent of Crypto. This is the best annotation from a crypto perspective to view the current AI surge. Crypto’s enthusiasm for AI is different from other industries; we particularly hope to integrate the issuance and operation of financial assets with it.

The Evolution of Agents: The Origin Under Technical Marketing

At its core, the AI Agent has at least three sources. OpenAI’s AGI (Artificial General Intelligence) regards this as an important step, turning the term into a popular buzzword beyond technical circles. However, in essence, an Agent is not a new concept. Even with AI empowerment, it’s hard to say that it’s a revolutionary technological trend.

The first source is the AI Agent as seen by OpenAI. Similar to level L3 in autonomous driving, an AI Agent can be seen as possessing certain advanced assistance capabilities but is not yet able to fully replace a human.

Image caption: AGI phase of OpenAI planning

Image source: https://www.bloomberg.com/

The second source is, as the name suggests, the AI Agent: an Agent empowered by AI. The concepts of agency and delegation are not new in computing. Under OpenAI's vision, however, the Agent becomes the L3 stage that follows conversational forms (like ChatGPT) and reasoning forms (like various bots). The key feature of this stage is the ability to "perform certain behaviors autonomously," or, as LangChain founder Harrison Chase defines it: "An AI Agent is a system that uses an LLM (Large Language Model) to make control flow decisions in a program."

This is where it becomes intriguing. Before the advent of LLMs, an Agent primarily executed automation processes set by humans. For instance, when designing a web scraper, programmers would set a User-Agent header to simulate details like the browser version and operating system of a real user. If an AI Agent were employed to mimic human behavior more precisely, it could lead to an AI Agent-based scraper framework that makes the scraper "more human-like."

In such transitions, AI Agents must integrate with existing scenarios, as completely novel fields hardly exist. Even the code completion and generation capabilities of tools like Cursor and GitHub Copilot are functional enhancements within the framework of LSP (Language Server Protocol). There are numerous examples of this kind of evolution:

  • Apple: AppleScript (Script Editor) → Alfred → Siri → Shortcuts → Apple Intelligence
  • Terminal: Terminal (macOS) / PowerShell (Windows) → iTerm 2 → Warp (AI Native)
  • Human-Computer Interaction: Web 1.0 (CLI, TCP/IP, Netscape browser) → Web 2.0 (GUI, REST APIs, search engines, Google, super apps) → Web 3.0 (AI Agent + Dapp?)
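The contrast between the pre-LLM Agent and Harrison Chase's definition can be sketched in a few lines of Python. Everything here is illustrative: `fake_llm` is a stand-in for a real model call, and no network requests are made.

```python
# Pre-LLM "agent": behavior fixed in advance by a human. A scraper spoofs a
# User-Agent header, but every decision is hard-coded.
def build_request_headers() -> dict:
    return {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Accept": "text/html",
    }

# LLM-era agent in Harrison Chase's sense: the model's output, not a
# hard-coded rule written by a programmer, chooses the next step.
def fake_llm(prompt: str) -> str:
    """Hypothetical LLM stub: returns the name of the tool to run."""
    return "scrape" if "fetch" in prompt.lower() else "chat"

def scrape(task: str) -> str:
    ua = build_request_headers()["User-Agent"]
    return f"[would scrape with User-Agent {ua!r}]"

def chat(task: str) -> str:
    return f"[plain answer to: {task}]"

TOOLS = {"scrape": scrape, "chat": chat}

def agent(task: str) -> str:
    # Control flow is decided by the LLM's (stubbed) output.
    return TOOLS[fake_llm(task)](task)
```

Swapping `fake_llm` for a real model call is what turns this routing skeleton into an AI Agent in the LangChain sense.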

To clarify, in human-computer interaction, the combination of Web 1.0’s GUI and browsers truly allowed the public to use computers with no barriers, represented by the combination of Windows and IE. APIs became the data abstraction and transmission standard behind the internet, and during the Web 2.0 era, browsers like Chrome emerged, with a shift to mobile changing people’s internet usage habits. Super apps like WeChat and Meta platforms now cover every aspect of people’s lives.

The third source is the concept of “Intent” in the Crypto space, which has led to the surge in interest around AI Agents. However, note that this is only applicable within Crypto. From Bitcoin scripts with limited functionality to Ethereum’s smart contracts, the Agent concept itself has been widely used. The subsequent emergence of cross-chain bridges, chain abstractions, EOA (Externally Owned Accounts) to AA (Account Abstraction) wallets are natural extensions of this line of thought. Therefore, when AI Agents “invade” Crypto, it’s not surprising that they naturally lead to DeFi scenarios.

This is where the confusion around the AI Agent concept arises. In the context of Crypto, what we are actually trying to achieve is an “automated financial management, automated meme generation” Agent. However, under OpenAI’s definition, such a risky scenario would require L4 or L5 to be truly implemented. Meanwhile, the public is experimenting with automatic code generation or AI-powered summary and writing assistance, which are not on the same level as the goals we are pursuing.

Once we understand what we truly want, we can focus on the organizational logic of AI Agents. The technical details will follow, as the concept of an AI Agent is ultimately about removing the barriers to large-scale technology adoption, much like how browsers revolutionized the personal PC industry. Our focus will be on two points: examining AI Agents from the perspective of human-computer interaction, and understanding the differences and connections between AI Agents and LLMs, which will lead us to the third part: what the combination of Crypto and AI Agents will ultimately leave behind.

let AI_Agent = LLM + API;

Before conversational human-computer interaction models like ChatGPT, the primary forms of human-computer interaction were GUI (Graphical User Interface) and CLI (Command-Line Interface). The GUI mindset evolved into various specific forms such as browsers and apps, while the combination of CLI and Shell saw minimal change.

But this is just the “frontend” of human-computer interaction. As the internet has evolved, the increase in data volume and variety has led to more “backend” interactions between data and between apps. These two aspects depend on each other— even a simple web browsing action actually requires their collaboration.

If human interaction with browsers and apps is considered the user entry point, the links and transitions between APIs support the actual operation of the internet. This, in fact, is also part of the Agent. Ordinary users don’t need to understand terms like command lines and APIs to achieve their goals.

The same is true for LLMs. Now, users can go even further—there’s no need for searching anymore. The entire process can be described in the following steps:

  1. The user opens a chat window.
  2. The user describes their needs using natural language, either through text or voice.
  3. The LLM interprets this into procedural steps.
  4. The LLM returns the results to the user.
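The four steps above can be sketched as a minimal pipeline, with hypothetical `plan` and `execute` stubs standing in for real LLM calls:

```python
def plan(request: str) -> list[str]:
    # Step 3: the LLM turns a natural-language request into procedural steps.
    return ["understand request", "gather information", "draft answer"]

def execute(steps: list[str]) -> str:
    # Step 4: produce the final result from the plan.
    return f"result after {len(steps)} steps"

def chat_turn(request: str) -> str:
    # Steps 1-2 happen on the user's side; the model handles steps 3-4.
    return execute(plan(request))
```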

Notice that in this process, the party under the greatest threat is Google: users no longer need to open a search engine, only one of the many GPT-style chat windows, so the traffic entry point is quietly shifting. This is why some believe LLMs mark the end of the search engine's life cycle.

So, what role does the AI Agent play in this process?

In short, the AI Agent is a specialized extension of the LLM.

Current LLMs are not AGI (Artificial General Intelligence) and are far from OpenAI’s envisioned L5 organizer. Their capabilities are significantly limited. For example, LLMs are prone to hallucinations if fed too much user input. One key reason lies in the training mechanism. For instance, if you repeatedly tell GPT that 1+1=3, there’s a probability that it might respond with 4 when asked about 1+1+1=?.

This happens because GPT’s feedback is entirely derived from user input. If the model is not connected to the internet, it’s possible for its operation to be altered by your inputs, resulting in a model that only “knows” 1+1=3. However, if the model is allowed to connect to the internet, its feedback mechanism becomes more diverse, as the vast majority of online data would affirm that 1+1=2.

Now, what if we must use LLMs locally and want to avoid such issues?

A straightforward solution is to use two LLMs simultaneously, requiring them to cross-validate each other’s responses to reduce the probability of errors. If this isn’t enough, another approach could involve having two users handle a single process—one asking the questions and the other refining them—to make the language more precise and logical.
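The cross-validation idea can be sketched as follows, with deterministic stubs in place of two real local models (the stubs and their answers are hypothetical):

```python
def model_a(question: str) -> str:
    """Stub for the first local LLM (hypothetical)."""
    return "2" if question == "1+1" else "not sure"

def model_b(question: str) -> str:
    """Stub for the second local LLM (hypothetical)."""
    return "2" if question == "1+1" else "maybe 3"

def cross_validated(question: str):
    # Accept an answer only when both models independently agree;
    # otherwise return None and escalate (retry, rephrase, or ask a human).
    a, b = model_a(question), model_b(question)
    return a if a == b else None
```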

Of course, being connected to the internet doesn’t entirely eliminate problems. For instance, if the LLM retrieves answers from unreliable sources, the situation could worsen. Avoiding such data, however, reduces the amount of available information. To address this, existing data can be split, recombined, or even used to generate new data based on older datasets to make responses more reliable. This approach is essentially the concept of RAG (Retrieval-Augmented Generation) in natural language understanding.
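The splitting-and-recombining approach the paragraph describes is, as noted, essentially RAG. A toy sketch, assuming a two-document corpus and naive word-overlap scoring in place of real embeddings; the `generate` stub stands in for an LLM:

```python
# Toy corpus; a real system would index many documents with embeddings.
CORPUS = [
    "1+1=2 in ordinary arithmetic.",
    "RAG retrieves documents before generating an answer.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Score each document by simple word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    # Hypothetical LLM stub: grounds the answer in retrieved context.
    return f"Based on: {context[0]}"

def rag_answer(query: str) -> str:
    return generate(query, retrieve(query))
```

Because the answer is conditioned on retrieved text rather than on whatever the user typed into the chat, the "1+1=3" contamination problem is reduced.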

Humans and machines need to understand each other. When multiple LLMs collaborate and interact, we essentially tap into the operational model of AI Agents. These serve as human proxies, accessing other resources, including large models and other agents.

This leads us to the connection between LLMs and AI Agents:

LLMs are aggregations of knowledge that humans interact with via chat interfaces. However, in practice, certain specific workflows can be condensed into smaller programs, bots, or sets of instructions. These are defined as Agents.

AI Agents remain a subset of LLMs but should not be equated with them. The defining feature of AI Agents lies in their emphasis on collaboration with external programs, LLMs, and other agents. This is why people often summarize AI Agents as LLM + API.

To illustrate this in the LLM workflow, let’s take the example of an API call through an AI Agent:

  1. A human user opens a chat window.
  2. The user describes their needs in natural language, either via text or voice.
  3. The LLM interprets the request as an API-call-related AI Agent task and transfers the conversation to the Agent.
  4. The AI Agent requests the user’s X account and API credentials and connects with X based on the user’s description.
  5. The AI Agent returns the final result to the user.
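The five-step handoff above can be sketched as follows. Everything here is hypothetical: `classify_intent` stands in for the LLM's routing decision, and `call_x_api` only simulates a call to X's API, with no real credentials or network access.

```python
def classify_intent(user_text: str) -> str:
    """Step 3 (hypothetical LLM): decide whether this is an API task."""
    return "x_api_task" if "post" in user_text.lower() else "chat"

def call_x_api(user_text: str, api_token: str) -> str:
    """Step 4 (hypothetical agent): would call X with the user's credentials."""
    masked = api_token[:4] + "..."   # never echo the full token back
    return f"[posted via X API, token {masked}]: {user_text}"

def agent_turn(user_text: str, api_token: str) -> str:
    # Steps 1-2 are the user's; the LLM routes (3), the agent executes (4-5).
    if classify_intent(user_text) == "x_api_task":
        return call_x_api(user_text, api_token)
    return f"[LLM answers directly]: {user_text}"
```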

Remember the evolution of human-computer interaction? Browsers, APIs, and other elements from Web 1.0 and Web 2.0 still exist, but users no longer need to interact with them directly. Instead, they can simply engage with AI Agents. API calls and related processes can all be conducted conversationally. These API services can encompass any type of data, whether local, online, or from external apps, as long as the interfaces are open and users have the necessary permissions to access them.

A complete AI Agent workflow, as shown above, treats LLM as either a separate component from AI Agent or as two sub-processes within one workflow. Regardless of how they are divided, the goal is always to serve user needs. From the perspective of human-computer interaction, it can even feel like users are talking to themselves. You only need to fully express your thoughts, and the AI/LLM/AI Agent will repeatedly guess your needs. By incorporating feedback mechanisms and ensuring that the LLM remembers the current context, the AI Agent avoids losing track of its tasks.
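The context-keeping described above can be sketched as a session object that feeds the running history back into every model call; the `fake_llm` here is, again, only a stub:

```python
def fake_llm(history: list) -> str:
    """Hypothetical LLM stub: replies while 'seeing' the whole history."""
    last_user = history[-1][1]
    return f"(turn {len(history)}) reply to: {last_user}"

class AgentSession:
    """Keeps the conversation context so the agent doesn't lose its task."""

    def __init__(self) -> None:
        self.history: list = []

    def ask(self, user_text: str) -> str:
        self.history.append(("user", user_text))
        reply = fake_llm(self.history)   # full context on every call
        self.history.append(("assistant", reply))
        return reply
```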

In summary, AI Agents are more personalized and humanized creations, setting them apart from traditional scripts and automation tools. They act like personal assistants, considering the user’s actual needs. However, it’s important to note that this personalization is still based on probabilistic inference. An L3-level AI Agent does not possess human-level understanding and expression capabilities, making its integration with external APIs inherently risky.

After the monetization of AI frameworks

The ability to monetize AI frameworks is one of the main reasons I remain interested in crypto. In traditional AI technology stacks, frameworks are not particularly important, at least not compared to data and computing power. Monetizing AI products rarely starts with the framework, as most AI algorithms and model frameworks are open source. What remains proprietary are sensitive elements like data.

Essentially, AI frameworks or models are containers and combinations of algorithms, much like a pot for stewing goose. However, the quality of the goose and mastery of the cooking process are what truly define the flavor. In theory, the product for sale should be the goose, but Web3 customers seem to prefer buying the pot while discarding the goose.

The reason for this isn’t complicated. Most Web3 AI products build upon existing AI frameworks, algorithms, and products, customizing them for their purposes. In fact, the technical principles behind different crypto AI frameworks aren’t vastly different. Since the technology itself lacks differentiation, attention shifts to branding, application scenarios, and other surface distinctions. As a result, even minor tweaks to the AI framework become the foundation for supporting various tokens, leading to a framework bubble within crypto AI Agent ecosystems.

Because these projects don't need to invest heavily in training data or algorithms, differentiating frameworks by name becomes especially crucial. After all, even a cost-efficient model like DeepSeek V3 still demands significant GPU power, electricity, and effort.

In a sense, this aligns with Web3’s recent trend: platforms issuing tokens are often more valuable than the tokens themselves. Projects like Pump.Fun and Hyperliquid exemplify this. Originally, Agents were supposed to represent applications and assets, but the frameworks issuing Agents have now become the hottest commodities.

This reflects a form of value anchoring. Since Agents lack differentiation, frameworks for issuing Agents become more stable and create a value siphoning effect for asset issuance. This marks the 1.0 version of the integration of crypto and AI Agents.

The 2.0 version is now emerging, exemplified by the convergence of DeFi and AI Agents. While the concept of DeFAI may have been triggered by market hype, a deeper look at the following trends suggests otherwise:

  • Morpho is challenging established lending platforms like Aave.
  • Hyperliquid is replacing on-chain derivatives like dYdX and even challenging Binance’s CEX listing effects.
  • Stablecoins are becoming payment tools for off-chain scenarios.

Within this backdrop of DeFi transformation, AI is reshaping DeFi’s fundamental logic. Previously, DeFi’s core logic was verifying the feasibility of smart contracts. Now, AI Agents are altering the manufacturing logic of DeFi. You no longer need to understand DeFi to create DeFi products. This represents a step beyond chain abstraction, providing deeper foundational empowerment.

The era where everyone can be a programmer is on the horizon. Complex computations can be outsourced to the LLM and APIs behind AI Agents, allowing individuals to focus solely on their ideas. Natural language can be efficiently transformed into programming logic.

Conclusion

This article deliberately names no Crypto AI Agent tokens or frameworks, as Cookie.Fun has already done that job well: first as a platform for AI Agent information aggregation and token discovery, then for AI Agent frameworks, and finally for the fleeting rise and disappearance of Agent tokens. Listing such information again here would add little value.

However, through observations during this period, the market still lacks a meaningful discussion on what Crypto AI Agents are ultimately pointing toward. We cannot keep focusing on the pointers; the essence lies in the changes happening at the memory level.

It is precisely the ever-evolving ability to transform various assets into tokenized forms that makes Crypto so captivating.

Disclaimer:

  1. This article is reproduced from [Zuoye Waibo Mountain]; the copyright belongs to the original author [Zuoye Waibo Mountain]. If you have any objection to the reprint, please contact the Gate Learn team, who will handle it promptly according to the relevant procedures.
  2. Disclaimer: The views and opinions expressed in this article represent only the author’s personal views and do not constitute any investment advice.
  3. Other language versions of the article are translated by the Gate Learn team. Unless otherwise stated, the translated article may not be copied, distributed or plagiarized.
