Vitalik: How I Built a Fully Local, Private, Self-Controlled AI Work Environment

ChainNews ABMedia

Ethereum co-founder Vitalik Buterin published a long post on his personal website on April 2, sharing the AI work environment setup he built with privacy, security, and self-sovereignty at its core—every LLM runs inference locally, all files are stored locally, everything is comprehensively sandboxed, and cloud models and external APIs are deliberately avoided.

At the start of the article, he issues a warning: “Please do not directly copy the tools and technologies described in this article and assume they are secure. This is only a starting point, not a description of a finished product.”

Why write this now? AI agent security issues are being seriously underestimated

Vitalik points out that earlier this year, AI made an important transition from "chatbots" to "agents": you are no longer just asking questions; you are handing off tasks, letting the AI think for long stretches and call hundreds of tools to carry them out. He cites OpenClaw (currently the fastest-growing repository in GitHub history) as an example, and lists multiple security issues documented by researchers:

AI agents can change critical settings without any human confirmation, including adding new communication channels and modifying system prompts

Parsing malicious external input (such as a malicious webpage) can lead to a full agent takeover; in a HiddenLayer demonstration, researchers had an AI summarize a batch of webpages, one of which contained a malicious command that made the agent download and execute a shell script

Some third-party skill packages silently exfiltrate data, sending it via curl to an external server controlled by the skill's author

Of the skill packages researchers analyzed, about 15% contained malicious instructions

Vitalik emphasizes that his starting point on privacy differs from that of traditional cybersecurity researchers: "I come at this from a place of deep fear about feeding someone's entire personal life into cloud AI. Just as end-to-end encryption and local-first software finally became mainstream, we might be taking ten steps backward."

Five security goals

He sets a clear framework of security goals:

LLM privacy: in situations involving personal-privacy data, minimize the use of remote models

Other privacy: minimize data leakage that is not related to LLMs (e.g., search queries, other online APIs)

LLM jailbreaking: prevent external content from “hacking into” my LLM and making it act against my interests (for example, sending my tokens or private data)

LLM accidents: prevent the LLM from accidentally sending private data to the wrong channel or posting it publicly on the internet

LLM backdoors: prevent hidden mechanisms that are deliberately trained into models. He specifically reminds readers that open models are open-weights; almost none of them are truly open-source

Hardware choice: 5090 laptop wins; DGX Spark is disappointing

Vitalik tested three hardware configurations for local inference; his main setup runs the Qwen3.5:35B model, served with llama-server and llama-swap:

His conclusion: below 50 tok/sec is too slow, while 90 tok/sec is ideal. The NVIDIA 5090 laptop gave the smoothest experience; AMD still has more edge-case issues, though there is hope it will improve. High-end MacBooks are also viable options, though he has not tried them firsthand.
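As an illustration only (the article does not include Vitalik's actual commands; the model filename and flag values below are assumptions), a loopback-only llama-server instance of the kind described might be launched like this:

```shell
# Illustrative sketch, not Vitalik's actual configuration.
# llama-server (from llama.cpp) serves a locally stored GGUF model over an
# OpenAI-compatible HTTP API. Binding to 127.0.0.1 keeps the endpoint
# loopback-only, so prompts and completions never leave the machine.
llama-server \
  --model ~/models/qwen3.5-35b-q4_k_m.gguf \
  --host 127.0.0.1 \
  --port 8080 \
  --ctx-size 8192 \
  --n-gpu-layers 99
```

In a multi-model setup, llama-swap would typically sit in front of llama-server and start or stop model instances on demand, so only one large model occupies GPU memory at a time.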

About DGX Spark, he doesn’t hold back: “It’s described as a ‘desktop AI supercomputer,’ but in reality its tokens/sec is lower than that of a better laptop GPU, and you also have to deal with extra details like setting up network connectivity—this is pretty lousy.” His recommendation is: if you can’t afford a high-end laptop, buy a sufficiently powerful machine together with friends, place it in a location with a fixed IP, and have everyone use remote connections to it.
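For the shared-machine option, one hedged sketch (the hostname, username, and port are placeholders, not anything from the article) is to keep the inference server loopback-only on the shared box and have each user reach it through an SSH tunnel rather than exposing the API publicly:

```shell
# Hypothetical setup: the shared box runs llama-server bound to 127.0.0.1:8080.
# -N opens no remote shell; -L forwards local port 8080 to the server's
# loopback interface, so the API is never exposed on the open internet.
ssh -N -L 8080:127.0.0.1:8080 user@shared-box.example.com
# Local clients then talk to http://127.0.0.1:8080 as if the model were local.
```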

Why local AI privacy issues are more urgent than you think

Vitalik's article, alongside the Claude Code security discussion released the same day, makes for an interesting echo: as AI agents enter everyday development workflows, security problems are shifting from theoretical risks to real threats.

His core message is clear: as AI tools become more powerful and gain broader access to your personal data and system permissions, "local-first, sandboxed, minimal trust" is not paranoia; it is a rational starting point.

Vitalik's article "How I Built a Fully Local, Private, Self-Controlled AI Work Environment" was first published on ChainNews ABMedia.
