Multisig Risk in DeFi: Who Controls the Protocol's Keys

By Jorge Rodriguez · Risk Management

• How upgrade multisigs can rewrite a protocol's contract logic while your funds are in it
• What threshold ratios actually mean for your deposit safety and how the math works against you
• A step-by-step method to verify any protocol's admin key setup on-chain before committing capital

Introduction

When you deposit into a DeFi protocol, you are not just trusting the code. You are trusting whoever holds the keys to change it. In most protocols, those keys are controlled by a **multisig (multi-signature wallet)**, a smart contract that requires M-of-N private key signatures to execute transactions. Whether that multisig actually protects your capital depends almost entirely on its configuration, and most configurations leave a lot to be desired.

**Multisig risk in DeFi** is one of the least-discussed failure modes in the space, yet [private key compromise accounted for 43.8% of crypto hacks in 2024](https://blog.trailofbits.com/2025/06/25/maturing-your-smart-contracts-beyond-private-key-risk/). Many of those incidents involved protocols with multisigs already in place. The multisig existed. The audit was clean. The protocol was drained anyway.

This guide is not an introduction to multisigs. It is a risk framework for experienced users who want to evaluate whether a protocol's key management actually protects their deposit, or just provides a veneer of decentralization. The same framework applies to any protocol you are researching on the [Lince Yield Tracker](https://yields.lince.finance/tracker) before deploying capital.

This risk is also not unique to Ethereum. Protocols on Arbitrum, Optimism, Base, BNB Chain, Polygon, and Avalanche all use the same multisig infrastructure, primarily through **Safe (formerly Gnosis Safe)**, the dominant multisig wallet platform across EVM chains. The failure modes covered here are cross-chain by nature.

The Two Types of Multisig You Should Care About

Not all protocol multisigs carry the same risk. Before evaluating any setup, you need to understand which multisig controls what.

**Treasury and operations multisig:** This controls protocol-owned liquidity, fee revenue, and DAO treasury funds. If this multisig is compromised, the protocol loses its own assets. That is bad for the team and for protocol sustainability, but your deposited capital may be untouched.

**Upgrade multisig:** This is the one that matters most for depositors. The **upgrade key** controls the proxy implementation, meaning it can replace the entire contract logic. It can change interest rate models, add or remove collateral types, alter liquidation thresholds, modify oracle addresses, or introduce entirely new withdrawal behavior. If this multisig is compromised, every dollar deposited is at risk.

**Parameter multisig:** Some protocols split this further into a separate multisig for runtime parameters within existing logic, such as fees, supply caps, and protocol pauses. Lower severity than the upgrade key, but still consequential.

The critical insight here is that most security discussions focus on smart contract bugs, the kind auditors catch and developers fix. But the upgrade multisig can rewrite the contract entirely. An audited codebase means nothing if the upgrade key is held by a 3-of-11 multisig where eight of the signers work at the same company. Understanding [how to read a DeFi audit report](/blog/risk-management/how-to-read-defi-audit-report) is valuable, but audits that do not cover access control architecture are incomplete by design.

What a Compromised Upgrade Key Can Do to Your Funds

If an attacker gains control of a protocol's upgrade multisig, the attack surface is broader than most users realize. A successful upgrade key compromise can:

• Swap the proxy implementation to a malicious contract that drains user balances directly
• Modify oracle addresses to poison price feeds and trigger artificial liquidations
• Change collateral factors or liquidation thresholds to force mass liquidation of depositor positions
• Whitelist attacker-controlled addresses as authorized callers with elevated permissions
• Disable withdrawal functions, trapping funds indefinitely
• Redirect fee flows or reward emissions to attacker-controlled addresses

These are not theoretical scenarios. The Radiant Capital exploit in October 2024 executed several of these steps after the upgrade multisig was compromised, draining $58 million. A successful upgrade key attack can trigger [protocol insolvency](/blog/risk-management/defi-protocol-insolvency-risk) almost instantly, eliminating any path to recovery for depositors.

![How a compromised DeFi protocol upgrade key enables a full protocol drain, with and without a timelock](/images/blog/multisig-risk-defi/upgrade-key-attack.webp)

Without a timelock between the multisig and the protocol contracts, all of the above can happen in a single block. With a meaningful timelock, watchers, security monitors, and depositors have a window to detect the queued transaction and act. The difference between these two configurations is often the difference between a recoverable incident and a total loss.

Understanding the Threshold Problem

The **threshold ratio** (the M/N fraction required to authorize a multisig transaction) determines how many keys an attacker must compromise. But the math is more nuanced than it first appears. A 3-of-11 multisig sounds robust at first glance: eleven signers, only three required. In practice, it can be significantly weaker than a 3-of-5.

Here is why. In a 3-of-11 setup, an attacker needs to compromise 3 of 11 signers. But if those 11 signers are developers at the same company, working from the same Slack channels, sharing device infrastructure, and attending the same industry conferences, the social and device attack surface is enormous. Compromising three signers in that environment requires compromising one organizational context, not three independent ones. A well-structured 3-of-5 with hardware wallets, geographic distribution, and genuine signer independence can be substantially more secure than a 3-of-11 where every signer answers to the same management chain.

The Radiant Capital case is the sharpest available illustration. The protocol had a 3-of-11 multisig as its official threshold. [Malware installed on developers' devices](https://www.auditone.io/blog-posts/unpacking-multisig-hacks-in-defi-radiant-capitals-case) intercepted transaction data and displayed legitimate-looking requests in the Safe interface while routing the actual signature data to a malicious contract upgrade. Three compromised devices later, the threshold was met and $58 million was gone.

A useful benchmark: look for threshold ratios at or above roughly 60% for upgrade multisigs, for example 4-of-7, 5-of-8, or 6-of-9. Sub-50% ratios on upgrade keys should be treated as a yellow flag requiring additional scrutiny of signer independence and operational security practices.

![Comparison of 3-of-11 and 3-of-5 multisig threshold configurations showing why signer ratio and independence matter more than raw numbers](/images/blog/multisig-risk-defi/threshold-comparison.webp)
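The benchmark can be made concrete in a few lines. This is an illustrative sketch of the heuristic described above, not a standard rating scheme: the 50% and 60% cutoffs are the ones from this section, and note that 4-of-7 (~57%) lands just under the 60% line, which is why the benchmark is "roughly" 60%.

```python
def threshold_ratio(m: int, n: int) -> float:
    """Fraction of signers (M of N) required to execute a transaction."""
    if not 1 <= m <= n:
        raise ValueError("threshold must satisfy 1 <= M <= N")
    return m / n

def classify_upgrade_multisig(m: int, n: int) -> str:
    """Apply this section's heuristic cutoffs to an upgrade multisig."""
    ratio = threshold_ratio(m, n)
    if ratio < 0.50:
        return "yellow-flag"  # e.g. 3-of-11 (~27%): extra scrutiny required
    if ratio < 0.60:
        return "borderline"   # e.g. 4-of-7 (~57%): just under the benchmark
    return "ok"               # e.g. 5-of-8, 6-of-9

print(classify_upgrade_multisig(3, 11))  # -> yellow-flag
print(classify_upgrade_multisig(6, 9))   # -> ok
```

Note that the ratio alone says nothing about signer independence; a 3-of-5 that passes here can still fail for the concentration reasons discussed above.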

Signer Identity and Concentration Risk

Even a well-structured threshold can fail if the wrong people are signing. **Signer concentration** refers to the degree to which signers are organizationally, operationally, or jurisdictionally linked. When signers are effectively the same entity under different addresses, the multisig provides coordination overhead, not genuine security. There are three primary signer failure modes.

**Signer anonymity.** The wallet addresses are visible on-chain, but if signers are pseudonymous, you have no way to assess their independence, competence, or jurisdiction. Anonymous signers are not inherently problematic, but unverifiable independence means you cannot distinguish a genuinely distributed key setup from a single person controlling multiple wallets.

**Signer concentration.** Signers are all core team members, employees of the same entity, or VC investors with aligned incentives to move fast. Decentralized in name only. This is the most common multisig failure mode in DeFi: protocols that technically have a multisig but where every signer answers to the same Slack channel.

**Geographic and operational concentration.** Signers in the same country, using the same infrastructure, attending the same events. One targeted phishing campaign, one device compromise, or one legal seizure order can sweep multiple signers simultaneously. This is precisely the attack surface the Radiant Capital malware exploited.

A multisig that passes scrutiny looks like this: signers include recognized security firms, independent auditors, or community-elected delegates from different organizations. Signers use hardware wallets. They are geographically distributed across different jurisdictions. The protocol has published signer addresses, a key rotation policy, and documented off-boarding procedures for departing team members.

This type of signer setup is a form of [counterparty risk in DeFi](/blog/risk-management/counterparty-risk-defi). The humans behind those keys are counterparties to your position, even if no explicit agreement exists between you. How well those counterparties are selected and managed determines whether the multisig is genuine risk mitigation or a bureaucratic checkbox.
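The concentration argument can be quantified with a rough model: assume the worst case, where compromising one organization yields all of its signers' keys, and count how many distinct organizations an attacker must breach to collect M signatures. This is an illustrative sketch, not a standard metric, and the organization names are hypothetical.

```python
from collections import Counter

def effective_org_threshold(m: int, signer_orgs: list) -> int:
    """Minimum number of distinct organizations an attacker must
    compromise to collect M signatures, assuming compromising an
    organization yields every one of its signers' keys (worst case)."""
    counts = sorted(Counter(signer_orgs).values(), reverse=True)
    collected, orgs = 0, 0
    for c in counts:
        if collected >= m:
            break
        collected += c
        orgs += 1
    if collected < m:
        raise ValueError("not enough signers to meet the threshold")
    return orgs

# A 3-of-11 where 8 signers share one employer: one org is enough.
concentrated = ["TeamCo"] * 8 + ["AuditorA", "AuditorB", "DelegateC"]
print(effective_org_threshold(3, concentrated))  # -> 1

# A 3-of-5 with five independent signers: three separate compromises.
independent = ["OrgA", "OrgB", "OrgC", "OrgD", "OrgE"]
print(effective_org_threshold(3, independent))   # -> 3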

Timelocks: Your Last Line of Defense

A **timelock** is a contract that sits between the multisig and the protocol. The multisig queues a transaction, but it can only execute after a mandatory delay. That delay is the difference between an instant drain and a window for the community to respond. Without a timelock, a compromised multisig can execute a protocol drain in a single block. With a meaningful timelock, watchers, security monitors, and depositors have time to detect what is in the queue and act before it executes.

Timelock durations matter considerably. A 24-hour delay is the practical minimum for a well-monitored protocol. Seven days is considered best practice for core system changes, including proxy upgrades. Many protocols use zero-hour timelocks or no timelock at all, which provides none of the protection the architecture implies.

There is a critical caveat illustrated by the 2022 Beanstalk governance exploit. In that case, the protocol had a timelock, but nobody was actively monitoring the queue. The attacker's malicious proposal sat in the timelock window and executed without triggering any response. A timelock only protects you if someone is watching. This overlap between multisig and governance risk is covered in depth in the [governance attacks framework](/blog/risk-management/governance-attacks-defi).

When evaluating a protocol, verify the following:

• Does a timelock contract exist at all?
• What is the minimum delay period for contract upgrades?
• Are there operations excluded from timelock gating, such as emergency pause functions?
• Does the protocol have active monitoring infrastructure in place?
• Are there emergency bypass functions, and if so, who controls them and under what conditions?
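The response-window arithmetic behind these checks is simple enough to sketch. The delay constants mirror the benchmarks in this section; `response_window` is an illustrative helper, not part of any timelock contract's actual interface.

```python
from datetime import datetime, timedelta, timezone

MIN_RECOMMENDED = timedelta(hours=24)  # practical minimum for monitored protocols
BEST_PRACTICE = timedelta(days=7)      # for core system changes, incl. upgrades

def response_window(queued_at: datetime, min_delay: timedelta,
                    now: datetime) -> timedelta:
    """Time remaining for watchers to react before a queued transaction
    becomes executable. Zero or negative means it can already execute."""
    return (queued_at + min_delay) - now

queued = datetime(2024, 10, 16, 12, 0, tzinfo=timezone.utc)
now = datetime(2024, 10, 17, 0, 0, tzinfo=timezone.utc)

print(response_window(queued, BEST_PRACTICE, now))  # 6 days 12h left to react
print(response_window(queued, timedelta(0), now))   # negative: no window at all
```

The second call is the zero-hour-timelock case from the text: the queued transaction is immediately executable, so monitoring buys nothing.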

How to Verify a Protocol's Multisig Setup On-Chain

Every piece of information covered in this article is verifiable on-chain if you know where to look. Here is the workflow.

**Step 1: Find the admin address.** Check the protocol's documentation, audit report, or governance forum for references to `owner`, `admin`, `proxyAdmin`, or `timelockController`. These are the addresses you want to investigate.

**Step 2: Look up the address on a block explorer.** Use Etherscan for Ethereum mainnet, Arbiscan for Arbitrum, Basescan for Base, and the corresponding explorer for other chains. If the address is a Safe contract, you can see the threshold and signer list directly in the Contract tab. The Safe interface at app.safe.global also shows live signer information if you enter the address directly.

**Step 3: Check the proxy contract if the protocol is upgradeable.** Look for an ERC-1967 proxy pattern. The `_ADMIN_SLOT` storage slot contains the address of the upgrade controller. This is the most consequential address to evaluate.

**Step 4: Investigate each signer address individually.** Etherscan labels known addresses from major protocols and security firms. Check signer activity: are they actively signing transactions, or do they look like dormant wallets registered once and never touched? Inactive signers represent a real security risk because a compromised dormant key creates less noise than one that is regularly used.

**Step 5: Locate and verify the timelock.** Search the documentation or trace the admin address's transaction history to find the timelock contract. Once located, read the `minDelay` parameter directly from the contract to confirm the actual enforced delay.

Tools that supplement this workflow include DeFiSafety for protocol risk ratings and L2Beat for bridge and rollup upgrade key analysis. Running through this verification is a core part of the [DeFi due diligence checklist](/blog/risk-management/defi-due-diligence-checklist) that should precede any meaningful capital commitment.
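Step 3 can be sketched offline. The two storage-slot constants below are the well-known EIP-1967 values (`keccak256("eip1967.proxy.admin") - 1` and the implementation equivalent); `admin_from_slot` is a hypothetical helper that decodes the 32-byte word an `eth_getStorageAt` RPC call would return, shown here with a hard-coded word in place of a live call.

```python
# EIP-1967 storage slots (keccak256 of "eip1967.proxy.<name>", minus 1).
ADMIN_SLOT = "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103"
IMPLEMENTATION_SLOT = "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc"

def admin_from_slot(raw_word: str) -> str:
    """Extract the 20-byte admin address from the 32-byte storage word
    returned by eth_getStorageAt (addresses are right-aligned)."""
    word = raw_word.removeprefix("0x").rjust(64, "0")
    return "0x" + word[-40:]

def is_renounced(admin: str) -> bool:
    """True if the proxy admin is the zero address (upgrades disabled)."""
    return int(admin, 16) == 0

# In practice `raw` would come from an RPC call such as
# web3.eth.get_storage_at(proxy, ADMIN_SLOT); a fixed word stands in here.
raw = "0x000000000000000000000000aabbccddeeff00112233445566778899aabbccdd"
print(admin_from_slot(raw))            # -> 0xaabbccddeeff00112233445566778899aabbccdd
print(is_renounced("0x" + "00" * 20))  # -> True
```

Whatever address comes out of the admin slot is the one to feed back into Step 2: if it is a Safe, the explorer shows its threshold and signers.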

The Upgradeable Proxy Problem

Most DeFi protocols use upgradeable proxies, either the **UUPS proxy** (Universal Upgradeable Proxy Standard) or the **transparent proxy** pattern. These allow teams to fix bugs and ship improvements without migrating user funds. The tradeoff is that the proxy admin key can swap out the implementation contract for any arbitrary logic.

This makes the proxy admin key structurally the most dangerous key in a protocol's setup. The contract your funds sit in today could be a completely different contract tomorrow. Whoever controls the proxy admin key can change everything about how that contract behaves, subject only to whatever constraints the multisig and timelock configuration impose.

Some protocols have committed to limiting this risk by transferring proxy ownership to a null address (0x000...) after deployment, making the contract fully immutable, or routing all upgrade proposals through on-chain governance with long timelocks. These are the gold standard configurations for mature protocols.

The immutability tradeoff is real. A fully immutable contract cannot be patched if a bug is discovered post-deployment. The practical best practice for most protocols is a 7-day timelock on proxy upgrades controlled by a well-structured multisig, with a clear path toward on-chain governance over time. Not immutability, but meaningful friction that protects users while preserving the ability to respond to critical vulnerabilities.

The implication for depositors is direct: always check whether the protocol is upgradeable, and if so, what controls the upgrade path. A protocol with a clean audit history and a well-known team can still have a proxy controlled by a 2-of-5 multisig of anonymous wallets with no timelock. That audit is meaningful for the current code. It is completely irrelevant if the current code can be swapped out in a single transaction.

Case Studies: When Multisig Failed

Three exploits illustrate three distinct failure modes in protocol key management.

**Ronin Bridge, March 2022, approximately $600 million.** The Ronin Bridge used a 5-of-9 multisig. Sky Mavis, the company behind Axie Infinity, controlled 4 validator nodes directly. A fifth node belonged to the Axie DAO but had temporarily delegated signing authority to Sky Mavis for operational convenience. The attacker compromised Sky Mavis's 4 controlled nodes plus the delegated key, reaching the 5-of-9 threshold. There was no timelock. The bridge drain happened in a single transaction. Root cause: signer concentration in one entity, compounded by delegated authority that created an invisible extension of that concentration. [Protos' analysis](https://protos.com/radiant-capitals-50m-crypto-hack-underlines-defis-multisig-dependence/) documents this pattern across multiple high-profile incidents.

**WazirX, July 2024, $230 million.** A 4-of-6 Safe multisig. Two signer addresses were directly compromised. Two more were socially engineered into signing a transaction disguised as routine operations. The transaction drained the exchange's hot wallet. Root cause: **blind signing** combined with insufficient transaction verification procedures. Signers approved what the UI displayed without verifying the underlying calldata. This is a distinct failure mode from threshold ratio problems: the math was fine, the process was broken.

**Radiant Capital, October 2024, $58 million.** A 3-of-11 multisig with a threshold ratio below 30%. [Malware on at least three developer devices](https://www.auditone.io/blog-posts/unpacking-multisig-hacks-in-defi-radiant-capitals-case) intercepted Safe interface rendering, displaying legitimate transaction data in the UI while routing actual signatures to a malicious contract upgrade. Root cause: low threshold ratio combined with a malware-susceptible signing environment. The multisig contract itself functioned exactly as designed.
The common thread across all three: the multisig existed, was documented, and appeared to pass basic compliance reviews. Configuration and operational security matter as much as the tool itself. Understanding where on the [rug pull vs exploit vs bug spectrum](/blog/risk-management/rug-pull-vs-exploit-vs-bug-defi) these incidents fall helps frame the risk profiles: Ronin was a third-party exploit enabled by insider concentration; Radiant was a targeted attack on operational security; WazirX sat somewhere between external compromise and social engineering.

The Trust Spectrum: From Multisig to Immutability

Not all protocols carry equal key management risk. Classifying any protocol's actual decentralization level before depositing gives you a clearer picture of the real counterparty exposure.

![DeFi protocol trust spectrum from single admin key to immutable contracts showing five levels of key management security](/images/blog/multisig-risk-defi/trust-spectrum.webp)

**Level 0: Single EOA (Externally Owned Account).** One private key controls everything. An EOA is a standard wallet controlled by a single private key, as opposed to a smart contract. For any protocol with real TVL, this configuration is unacceptable for a material position.

**Level 1: Multisig, no timelock.** Distributed key management but with real-time execution capability. A compromised threshold equals an instant exploit. Common in early-stage protocols that have distributed keys but have not yet implemented timelock infrastructure.

**Level 2: Multisig plus timelock.** Best practice for most live protocols. The timelock delay gives the community a window to detect and respond to malicious transactions. This is the minimum viable configuration for significant positions. The **principle of least privilege** should also apply here: each key should hold only the minimum permissions required for its specific function.

**Level 3: On-chain governance with timelock.** No team multisig on the critical path for upgrades. Governance votes required for protocol changes. Slower by design, but significantly more decentralized. Uniswap, Compound, and Aave operate at this level. Note that on-chain governance introduces its own attack surface, as documented in the [governance attacks framework](/blog/risk-management/governance-attacks-defi).

**Level 4: Immutable or renounced.** Contract ownership transferred to a null address. Code cannot be changed. Maximum trust for depositors, zero recoverability if bugs are found after deployment.
Where most protocols actually sit: Level 1 or early Level 2. Many protocols that claim Level 3 still maintain emergency multisigs with timelock bypass powers. These bypasses can be the most dangerous part of the architecture if they are poorly scoped or held by a small group of signers with no independent oversight.
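The five levels can be expressed as a rough classifier over a simplified configuration model. `KeySetup` and `trust_level` are illustrative names, and real setups have more dimensions (emergency bypasses, per-function permissions) than this sketch captures.

```python
from dataclasses import dataclass

@dataclass
class KeySetup:
    """Simplified model of a protocol's upgrade-path configuration."""
    admin_is_multisig: bool    # Safe or similar, vs a single EOA
    timelock_delay_hours: int  # 0 means no timelock
    onchain_governance: bool   # upgrades require a governance vote
    ownership_renounced: bool  # admin is the zero address

def trust_level(s: KeySetup) -> int:
    """Map a key setup onto the article's five-level trust spectrum."""
    if s.ownership_renounced:
        return 4  # immutable: no upgrade path at all
    if s.onchain_governance and s.timelock_delay_hours > 0:
        return 3  # governance + timelock on the critical path
    if s.admin_is_multisig and s.timelock_delay_hours > 0:
        return 2  # multisig + timelock: minimum viable for size
    if s.admin_is_multisig:
        return 1  # multisig, but instant execution
    return 0      # single EOA controls everything

early_stage = KeySetup(True, 0, False, False)
mature = KeySetup(True, 168, True, False)  # 7-day timelock + governance
print(trust_level(early_stage))  # -> 1
print(trust_level(mature))       # -> 3
```

A classifier like this would still miss the caveat above: an emergency multisig with timelock bypass powers can quietly drop an apparent Level 3 back to Level 1.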

Red Flags and Green Flags Checklist

Before deploying capital into any protocol, run through these indicators. They apply whether you are evaluating a lending protocol, a liquidity pool, or any yield-generating position.

![DeFi multisig risk checklist showing red flags and green flags when evaluating a protocol's key management setup](/images/blog/multisig-risk-defi/checklist.webp)

**Red flags:**

• Upgrade key is a single EOA with no multisig at all
• Multisig threshold ratio is below 50% on the upgrade key (for example, 2-of-9 or 3-of-11)
• All signers are anonymous or unidentifiable on-chain with no organizational affiliation
• No timelock exists on contract upgrades
• A timelock bypass function exists with no governance check or usage restrictions
• Protocol documentation does not mention admin key setup or access control architecture
• The audit report does not include an access control section
• Emergency pause functions bypass the timelock and are controlled by a single key

**Green flags:**

• Upgrade key is a Safe multisig with threshold ratio at or above 60%
• Signers include known, independent parties from different organizations on hardware wallets
• Core system changes require a minimum 48-hour timelock, with 7 days as best practice
• Protocol has published signer addresses and a documented key rotation policy
• DeFiSafety or L2Beat has reviewed and rated the admin key setup independently
• The protocol has a published roadmap toward immutability or full on-chain governance
• Active timelock monitoring exists through Forta alerts or documented community watchers

None of these signals is definitive in isolation. The picture that matters is the combination. A protocol that triggers two or more red flags on the upgrade key deserves significant scrutiny before you commit capital.
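The "two or more red flags" rule of thumb can be written as a trivial helper. The flag identifiers are hypothetical shorthand for the red-flag list above, not any standard taxonomy.

```python
# Hypothetical identifiers for the red flags listed above.
RED_FLAGS = {
    "single_eoa_upgrade_key",
    "threshold_below_50pct",
    "all_signers_anonymous",
    "no_upgrade_timelock",
    "unrestricted_timelock_bypass",
    "no_admin_docs",
    "audit_skips_access_control",
    "single_key_pause_bypass",
}

def needs_scrutiny(triggered: set) -> bool:
    """Two or more red flags on the upgrade key warrant significant
    scrutiny before committing capital."""
    unknown = triggered - RED_FLAGS
    if unknown:
        raise ValueError(f"unrecognized flags: {unknown}")
    return len(triggered) >= 2

print(needs_scrutiny({"threshold_below_50pct", "no_upgrade_timelock"}))  # -> True
print(needs_scrutiny({"no_admin_docs"}))                                 # -> False
```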

FAQs

### What is a protocol multisig in DeFi?

A protocol multisig is a smart contract wallet that requires M-of-N private key signatures to execute transactions. Most DeFi protocols use Safe (formerly Gnosis Safe) for this purpose. The multisig holds administrative authority over the protocol, which can include the ability to upgrade contracts, change parameters, and manage treasury funds. The specific powers held by any multisig depend on how the protocol has assigned permissions.

### What is the difference between a treasury multisig and an upgrade multisig?

A treasury multisig controls protocol-owned funds including fee revenue, DAO treasury, and protocol-owned liquidity. If compromised, the protocol loses its own assets. An upgrade multisig controls the proxy implementation and can replace the entire contract logic. If compromised, all depositor funds are at risk. The upgrade multisig is the more consequential of the two from a depositor's perspective, though most protocols use a single multisig for both functions.

### How do I find a protocol's multisig setup on Etherscan?

Start with the protocol's documentation or audit report to find the admin or owner address. Look it up on the relevant block explorer. If it is a Safe contract, the Contract tab shows the threshold and active signers. For proxy contracts, look for the ERC-1967 admin slot to find the upgrade controller address. The Safe interface at app.safe.global also shows live signer details for any Safe contract address you enter.

### What is a timelock and why does it matter?

A timelock is a contract that enforces a mandatory delay between a transaction being queued and executed on-chain. For upgrade multisigs, this delay gives the community time to detect malicious changes before they go live. Without a timelock, a compromised multisig can drain a protocol in one block. A 7-day timelock on contract upgrades is considered best practice; 24 hours is the practical minimum for protocols with active monitoring in place.

### Was the Radiant Capital hack caused by a multisig weakness?

The Radiant Capital October 2024 exploit combined two weaknesses. The threshold ratio of 3-of-11 meant only three signers needed to be compromised. Malware on developer devices then intercepted the Safe interface, displaying legitimate transaction data while routing signatures to a malicious contract upgrade. The multisig contract itself functioned correctly throughout. The failure was in the signing environment and the low threshold ratio, which made compromising three devices sufficient to execute a full protocol drain.

### Can an audit protect me from multisig risk?

Not completely. Audits examine smart contract code for logic errors, access control vulnerabilities, and known attack patterns at the time of review. An audit does not govern what happens to those contracts after deployment if an upgradeable proxy is in place. If the upgrade key is compromised after a clean audit, the audited code can be replaced entirely. Understanding how to read a DeFi audit report helps you identify whether the review covered access control architecture at all, which many audits treat as out of scope. You can find guidance on this in the [DeFi audit report reading guide](/blog/risk-management/how-to-read-defi-audit-report).

### What threshold ratio should a DeFi protocol's multisig have?

For upgrade multisigs, a threshold ratio at or above roughly 60% is the practical benchmark: 4-of-7, 5-of-8, or 6-of-9. Sub-50% ratios on upgrade keys should trigger additional scrutiny around signer independence and operational security. The raw number of signers matters considerably less than the ratio and the organizational independence of those signers. A 3-of-11 with organizationally concentrated signers is weaker than a 4-of-5 with genuinely independent parties.

### Is an immutable contract safer than an upgradeable one with a multisig?

For depositor trust, fully immutable contracts remove upgrade key risk entirely. The tradeoff is that bugs discovered after deployment cannot be patched. For most production protocols, the practical answer is an upgradeable proxy with a 7-day timelock and a well-structured multisig, paired with a published roadmap toward immutability or full on-chain governance as the protocol matures. Immutability at launch is too rigid for most systems that need to respond to real-world conditions. Multisig plus timelock is the standard middle ground.

Conclusion

A multisig is risk management infrastructure, not a safety guarantee. Whether it actually protects your capital depends on the threshold ratio, the independence of the signers, the timelock configuration, and the operational security of the signing environment. A protocol can display a clean audit, a Safe multisig, and a credible team, and still have a key setup where your entire deposit is vulnerable to a single targeted phishing campaign or a malware infection on one developer's laptop. The practical takeaway is direct: always look up who controls the upgrade key before deploying meaningful capital. The code you are trusting today may be replaceable by a 3-of-7 of anonymous wallets with no timelock. That information is on-chain and publicly verifiable. The tools to check it require no permissions and no specialized software. Before depositing into any protocol, run through the red and green flag checklist in this article. Cross-reference the key management setup with the broader [DeFi due diligence checklist](/blog/risk-management/defi-due-diligence-checklist). The few minutes spent verifying admin key architecture can mean the difference between a productive yield position and an unrecoverable loss. Use the [Lince Yield Tracker](https://yields.lince.finance/tracker) to compare yield opportunities across protocols, and apply this framework before you commit capital to any position. The yield is only as good as the infrastructure protecting the funds that generate it.