Vercel's recently released Workflows feature is genuinely interesting: it tackles a major pain point for developers by replacing an entire backend orchestration stack with just two lines of code.

I looked into how it works: developers add the "use workflow" directive at the top of a TypeScript function, then mark each execution step inside sub-functions with "use step". The framework automatically handles queue scheduling, failure retries, and state persistence, eliminating the need to deploy a separate orchestration service, message queue, or state database; everything lives in the application code.
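The directive style described above can be sketched roughly as follows. The function and step names here are illustrative, not from Vercel's docs, and without the Workflow compiler the directives are inert string literals, so this runs as ordinary async TypeScript:

```typescript
// Hypothetical sketch of the two-directive API. In Vercel's runtime,
// "use workflow" makes the function durable and "use step" makes each
// sub-function an independently retried, persisted step; in plain
// Node these directives are no-ops.
async function processOrder(orderId: string): Promise<string> {
  "use workflow"; // marks the whole function as a durable workflow

  const charge = await chargeCustomer(orderId);
  return await sendReceipt(orderId, charge);
}

async function chargeCustomer(orderId: string) {
  "use step"; // retried and checkpointed on its own
  return { orderId, amount: 42 }; // placeholder payment result
}

async function sendReceipt(orderId: string, charge: { amount: number }) {
  "use step";
  return `receipt for ${orderId}: $${charge.amount}`;
}
```

If a step fails mid-run, the runtime can retry just that step and resume from the last persisted checkpoint instead of re-executing the whole function.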

The core pain point it addresses is quite practical: when moving AI agents or backend tasks from prototype to production, developers often spend most of their time building orchestration infrastructure instead of improving the product itself. Traditional solutions scatter logic across queues, workers, state tables, and retry mechanisms, whereas Vercel's approach merges orchestration logic directly into the business logic.

Since the public beta started last October, the numbers have been impressive: over 100 million workflow executions, 500 million steps processed, more than 1,500 customers, and npm weekly downloads exceeding 200,000 — clear signs of real production adoption.

For AI agent scenarios, Vercel has added several features. Persistent streams guarantee that agent output is durably stored even if the browser is closed. Built-in encryption automatically encrypts all data before it leaves the deployment environment. Pause and resume lets a workflow wait for manual approval or sleep for days or months, at zero compute cost while suspended. Payload limits are generous: up to 50MB per step and 2GB per execution, enough for multimodal agents moving images and video.
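The pause-and-resume idea can be illustrated with a minimal sketch. The `sleep` helper and `approvalWorkflow` here are stand-ins I made up, not Vercel's API; in the real runtime a suspended workflow is checkpointed and consumes no compute, whereas this simulation just uses a timer:

```typescript
// Conceptual illustration only: a local timer stands in for the
// runtime's durable, zero-cost suspension.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function approvalWorkflow(requestId: string): Promise<string> {
  "use workflow"; // inert outside the Workflow compiler

  // In production this wait could be days or months while a human
  // approves the request; the workflow state is persisted, not held
  // in memory, so nothing is billed during the pause.
  await sleep(50);

  return `approved:${requestId}`;
}
```

The design point is that long waits become a first-class primitive rather than a cron job polling a state table.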

At the same time, the new AI SDK v7 integrates WorkflowAgent, and a Python SDK is in public beta. Notably, the Workflow SDK is open source, and the community is already building adapters for MongoDB, Redis, Cloudflare, and others. The next version will add concurrency control, global deployment, and a snapshot runtime, further reducing the cost of event reprocessing.

The pricing model is also attractive: you pay only for actual execution time, with no standing cost for keeping an orchestration service running. That is especially appealing for teams that want to iterate quickly.