When Drift disclosed the details behind its $270 million exploit, the most unsettling part wasn’t the scale of the loss — it was how it happened.
According to the team behind the protocol, the attack wasn’t a smart contract bug or a clever piece of code manipulation. It was a six-month campaign involving fake identities, in-person meetings across multiple countries and carefully cultivated trust. The attackers, allegedly from North Korea, didn’t just find a vulnerability in the system. They became part of it.
That style of attack is now forcing a broader reckoning across decentralized finance.
For years, the industry has treated security as a technical problem, something that could be solved with audits, formal verification and better code. But the Drift incident suggests something far more complex: that the real vulnerabilities may lie outside the codebase altogether.
Alexander Urbelis, chief information security officer (CISO) at ENS Labs, argues the framing itself is already outdated.
“We need to stop calling these ‘hacks’ and start calling them what they are: intelligence operations,” Urbelis told CoinDesk. “The people who showed up at conferences, who met Drift contributors in person across multiple countries, who deposited a million dollars of their own money to build credibility: that’s tradecraft. It’s the kind of thing you’d expect from a case officer, not a hacker.”
If that characterization holds, then Drift represents a new playbook: one where attackers behave less like opportunistic hackers and more like patient operators embedding themselves socially before making a move onchain.
“North Korea isn’t scanning for vulnerable contracts anymore. They’re scanning for vulnerable people… That’s not hacking. That’s running agents,” Urbelis added.
The tactics themselves aren’t entirely new.
Investigations in recent years have shown North Korean operatives infiltrating crypto firms by posing as developers, passing job interviews and even securing roles under fake identities. But the Drift incident suggests those efforts have escalated — from gaining access through hiring pipelines to running months-long, in-person relationship-building operations before executing an attack.
‘The Achilles’ heel’
That shift is what has many security leaders most concerned. Even the most rigorously audited protocol can still fail if a contributor is compromised.
David Schwed, chief operating officer of SVRN and a former CISO at both Robinhood and Galaxy, sees the Drift case as a wake-up call.
“Protocols need to understand what they’re up against. These aren’t simple exploits. These are well-planned, months-long operations with dedicated resources, fabricated identities, and a deliberate human element,” Schwed told CoinDesk. “That human element is the Achilles’ heel for many organizations.”
Many DeFi teams remain small, fast-moving and built on trust. But when a handful of individuals control critical access, compromising one can be enough.
Schwed argues that the industry's defenses need to be updated accordingly. “The answer is a well-fortified security program that protects not just the technology, but the people and the process… Security needs to be foundational to the project and the team.”
Some protocols are already adjusting. At Jupiter, one of Solana’s largest DeFi platforms, audits and formal verification remain the baseline, but leaders say that baseline is no longer sufficient.
“Clearly, securing code via multiple independent audits, open sourcing, and formal verification is just table stakes. The surface area for attacks has broadened substantially,” said COO Kash Dhanda.
That broader surface now includes governance, contributors and operational security. Jupiter has expanded its use of multisigs and timelocks while investing in detection systems and internal training.
“Given that flesh is more vulnerable than code, we’re also updating opsec training and monitoring for key team members,” Dhanda said.
Even then, he added, “there is no end-state for security” and complacency remains the biggest risk.
For protocols like dYdX, the Drift incident reinforces a reality that can’t be engineered away entirely.
“It’s an unfortunate fact of life that crypto projects are being increasingly targeted by state-sponsored bad actors… developers must take precautions to prevent and mitigate the impact of social engineering compromises, but users should also be aware that given the increasing sophistication of bad actors the risk of such compromises cannot be totally eliminated,” said David Gogel, COO of dYdX Labs.
That evolving threat model is also shifting responsibility toward users themselves.
“Users who are active in DeFi should take the time to understand the technical architecture of protocols or smart contracts that hold their funds, and should factor into their risk assessments the role and nature of any multisigs for software upgrades and the possibility that those could be maliciously compromised,” Gogel added.
‘Threat model’
For some founders, the Drift exploit underscores a more uncomfortable conclusion: that trust itself has become a vulnerability.
“The Drift exploit wasn’t a code vulnerability. It was a six-month intelligence operation that exploited trust between humans,” said Lucas Bruder, CEO of Jito Labs.
In practice, that means designing systems that assume compromise — not just bugs.
“Smart contract audits are table stakes. The real attack surface is your team, your multisig signers, and every device they touch,” Bruder said.
That mindset is becoming central to how DeFi approaches security. Schwed of SVRN says it starts with asking not just how a protocol works, but how it could fail.
“Start with a threat model. Ask yourself, how can I be exploited? If one of the project owners becomes compromised, what’s the blast radius of that scenario?”
In that sense, the Drift exploit may be remembered less for the funds lost than for what it revealed — that the biggest risks in DeFi may no longer live in the code, but in the people who run it.