TLDR
- Vitalik Buterin says perfect security is impossible due to complex user intent
- Framework promotes redundancy and multi-angle verification
- LLMs suggested as support tools, not sole decision makers
- Crypto sector recorded over $400M losses in January
A surge in crypto theft has renewed attention on platform security. Ethereum co-founder Vitalik Buterin has proposed a framework that links crypto security to user intent. The proposal follows reported January losses of more than $400 million from phishing and treasury breaches. Buterin said security aims to reduce the gap between user intent and system behavior, especially in adversarial, high-risk situations.
User Intent and System Behavior
Vitalik Buterin explained that defining user intent is difficult. Even a simple action such as sending 1 ETH involves many assumptions. These include identity verification, chain selection during forks, and shared understanding.
He noted that a person like “Bob” cannot be fully defined in code. A public key may represent Bob, but that key could be compromised or misidentified. These uncertainties form part of the threat model. Privacy goals are more complex. Encrypted messages can still reveal patterns through metadata and timing. It can be difficult to classify whether a privacy loss is minor or severe.
Buterin compared this issue to early AI safety debates. In those debates, defining goals precisely proved challenging. Crypto systems face a similar barrier when converting human intent into program logic.
Redundancy as a Core Security Tool
To address these limits, Buterin proposed redundancy. Users should express intent in multiple overlapping ways. Systems should act only when those specifications align. He cited type systems in programming as one example. Developers define both logic and data structures. If they conflict, the program does not compile.
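The redundancy idea can be sketched in code. In this illustrative example (all names, addresses, and functions are hypothetical, not any real wallet API), the user expresses the same transfer intent in two overlapping ways: a human-readable recipient name alongside a raw address, and an amount alongside a declared ceiling. The transfer proceeds only when the specifications align, much as a program compiles only when logic and type declarations agree.

```python
# Hypothetical sketch of intent redundancy: act only when overlapping
# specifications of the same action agree. Names and addresses are illustrative.

ADDRESS_BOOK = {"bob": "0xB0b0000000000000000000000000000000000001"}

def send_if_consistent(name, address, amount_eth, max_eth):
    """Approve a transfer only if both specifications of recipient and amount align."""
    if ADDRESS_BOOK.get(name) != address:
        return "rejected: name/address mismatch"
    if amount_eth > max_eth:
        return "rejected: amount exceeds declared ceiling"
    return f"send {amount_eth} ETH to {address}"

# Matching specifications yield an action; any conflict aborts.
print(send_if_consistent("bob", "0xB0b0000000000000000000000000000000000001", 1, 2))
print(send_if_consistent("bob", "0xDeadBeef00000000000000000000000000000000", 1, 2))
```

Like a type checker rejecting a program whose logic contradicts its declarations, the conflict here surfaces before any value moves.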
Formal verification adds mathematical checks to confirm code behavior. Transaction simulations allow users to preview onchain outcomes before approval. He stated, “The user specifies first what action they want to take, and then clicks ‘OK’ or ‘Cancel’ after seeing a simulation of the onchain consequences.” Post-assertions in transactions require expected results to match actual effects.
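The simulate-then-assert flow can be illustrated with a minimal sketch. This is not a real simulation API; the function names and the in-memory balance model are assumptions chosen to show the shape of the check: preview the effect, get user approval, then require the actual effect to match the preview.

```python
# Illustrative simulate-preview-assert flow; all names are hypothetical.

def simulate(balances, tx):
    """Dry-run a transfer and return the predicted post-state."""
    predicted = dict(balances)
    predicted[tx["from"]] -= tx["value"]
    predicted[tx["to"]] = predicted.get(tx["to"], 0) + tx["value"]
    return predicted

def execute_with_post_assertion(balances, tx, expected):
    """Apply the transfer, then require actual effects to match the preview."""
    actual = simulate(balances, tx)  # real execution would go onchain here
    assert actual == expected, "post-assertion failed: effects diverge from preview"
    return actual

balances = {"alice": 5, "bob": 0}
tx = {"from": "alice", "to": "bob", "value": 1}
preview = simulate(balances, tx)                      # user reviews this, clicks OK
final = execute_with_post_assertion(balances, tx, preview)
print(final)  # {'alice': 4, 'bob': 1}
```

The post-assertion is the redundant half of the pair: the preview states what should happen, and execution is rejected if reality disagrees.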
Multisignature wallets and social recovery spread authority across several keys. Spending limits and new-address confirmations add further checks for unusual actions. Buterin said security does not mean adding friction to every task. Low-risk actions should remain simple, while high-risk actions should require more confirmation.
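A risk-proportional policy of this kind might look as follows. The thresholds, address set, and function are illustrative assumptions, not a real wallet's rules: routine transfers need one signature, a first-time address adds a confirmation, and anything breaching the daily limit escalates to a multisig quorum.

```python
# Hedged sketch of risk-proportional confirmation. DAILY_LIMIT, the known-address
# set, and the quorum sizes are assumed policy values for illustration.

DAILY_LIMIT = 10  # ETH per day (assumed)
KNOWN_ADDRESSES = {"0xB0b0000000000000000000000000000000000001"}

def required_approvals(amount_eth, to, spent_today):
    """Scale the number of required signatures with the risk of the action."""
    if spent_today + amount_eth > DAILY_LIMIT:
        return 3  # exceeds daily limit: full multisig quorum
    if to not in KNOWN_ADDRESSES:
        return 2  # new address: one extra confirmation
    return 1      # low-risk action: single signature keeps friction low
```

A small known-recipient payment stays one click, while a large transfer to a fresh address demands the most scrutiny, matching the principle that friction should track risk.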
Role of AI and Industry Context
Buterin also addressed the role of large language models. He described LLMs as a simulation of intent. A general model reflects broad human common sense, while a fine-tuned model can detect unusual user behavior. “LLMs should under no circumstances be relied on as a sole determiner of intent,” he wrote. He added that they serve as one angle within a broader system of checks.
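One way to wire a model in as "one angle" rather than a decision maker is sketched below. The anomaly heuristic is a toy stand-in for a real model, and all names are hypothetical: the hard requirements (simulation match, explicit user approval) are independent of the model, which can only escalate a transaction to extra confirmation, never approve it on its own.

```python
# Sketch: a model verdict is one check among several, never the sole determiner.
# llm_anomaly_flag is a toy stand-in for a fine-tuned model's output.

def llm_anomaly_flag(tx):
    return tx["value"] > 100  # toy heuristic in place of a real model

def authorize(tx, simulation_matches, user_confirmed, extra_confirmed=False):
    """Approve only when model-independent checks pass; the model can escalate, not decide."""
    if not (simulation_matches and user_confirmed):
        return False              # hard requirements independent of the model
    if llm_anomaly_flag(tx):
        return extra_confirmed    # flagged: demand an additional confirmation
    return True
```

Under this arrangement, a compromised or mistaken model can at worst add friction; it cannot authorize a transfer by itself.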
Recent data reflects the urgency of stronger safeguards. Blockchain security firm CertiK reported losses of about $370.3 million across 40 incidents. Other estimates placed January losses above $400 million. One phishing attack led to a reported $284 million loss from a hardware wallet. In another case, 1,459 Bitcoin and 2.05 million Litecoin were stolen.
A Solana-based platform also reported a $30 million breach on January 31, 2026. Security remains one element of the blockchain trilemma, along with decentralization and scalability. While scalability has gained attention, Buterin’s framework brings focus back to reducing risk through layered verification and intent alignment.