A postmortem-style breakdown of how rounding, invariants, and unchecked assumptions turn into catastrophic losses.
The Cetus incident is a blunt reminder that most security failures are not “advanced” attacks. They are the product of small, compounding assumptions that no one codified or proved. A rounding edge case becomes a systemic loss when invariants are not explicit and enforcement is inconsistent.
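To make the rounding point concrete, here is a minimal sketch, not taken from the Cetus codebase: a 0.3% fee computed with floor division silently collects nothing on small trades, while ceiling division rounds in the pool's favour. The names (`fee_floor`, `fee_ceil`, `FEE_NUM`, `FEE_DEN`) are illustrative.

```python
# Illustrative sketch: fee rounding direction on small trades.
FEE_NUM, FEE_DEN = 3, 1000  # a 0.3% fee, expressed as a rational

def fee_floor(amount: int) -> int:
    # Rounds *down*: the pool collects slightly less than 0.3%.
    return amount * FEE_NUM // FEE_DEN

def fee_ceil(amount: int) -> int:
    # Rounds *up* (ceiling division via negated floor division):
    # the pool never under-collects.
    return -(-amount * FEE_NUM // FEE_DEN)

# On a tiny trade, the floor-rounded fee is zero: the trade is effectively
# free, and an attacker can repeat it without limit.
assert fee_floor(333) == 0
assert fee_ceil(333) == 1
```

A sub-unit discrepancy per trade looks negligible until it is multiplied by an unbounded number of trades, which is exactly how a rounding edge case compounds into a systemic loss.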
At a high level, the failure was not a single bug. It was a chain: a rounding edge case, an invariant that existed only implicitly, and an assumption nobody checked, each harmless alone and catastrophic together.
If you cannot prove that your state updates preserve conservation of value, you are already in danger.
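One way to get closer to such a proof is to check conservation mechanically at the boundary of every state update. The sketch below is a hypothetical toy AMM, not a real implementation: every unit of token Y must be in reserves, in collected fees, or with the user, before and after a swap.

```python
# Hypothetical sketch: conservation of value enforced on every state transition.
from dataclasses import dataclass

@dataclass
class Pool:
    reserve_x: int
    reserve_y: int
    fees_y: int = 0

def total_y(pool: Pool, user_y: int) -> int:
    # Every unit of Y is in reserves, in collected fees, or with the user.
    return pool.reserve_y + pool.fees_y + user_y

def apply_swap(pool: Pool, dx: int, user_y: int) -> int:
    before = total_y(pool, user_y)
    # Constant-product payout, floor-rounded in the pool's favour.
    dy = pool.reserve_y * dx // (pool.reserve_x + dx)
    fee = -(-dy * 3 // 1000)  # ceiling-rounded 0.3% fee on the payout
    pool.reserve_x += dx
    pool.reserve_y -= dy
    pool.fees_y += fee
    user_y += dy - fee
    # Conservation checked once, at the update boundary, not scattered around.
    assert total_y(pool, user_y) == before, "conservation of Y violated"
    return user_y
```

Because the payout leaves reserves and the fee moves between the user and the fee bucket, the total is unchanged by construction, and the assertion turns that argument into an enforced check.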
Security is a property of the whole system, not just the critical path. Most teams say things like "balances always add up" or "fees never exceed X," but those claims are rarely enforced in one place. Instead they are scattered across helpers, tests, and comments.
When an invariant is implicit, it becomes easy to violate it during refactors or upgrades. The bug is not the refactor, it is the missing proof.
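One lightweight way to make an invariant survive a refactor is to register it once and check it after every state-mutating entry point. This is a hypothetical sketch: the `invariant` registry and `checked` decorator are illustrative, not a standard library or framework API.

```python
# Hypothetical sketch: invariants codified in one place, checked everywhere.
from functools import wraps

INVARIANTS = []

def invariant(fn):
    # Register a predicate over system state as a named invariant.
    INVARIANTS.append(fn)
    return fn

def checked(fn):
    # Wrap a state-mutating operation so every invariant runs after it.
    @wraps(fn)
    def wrapper(state, *args, **kwargs):
        result = fn(state, *args, **kwargs)
        for inv in INVARIANTS:
            assert inv(state), f"invariant violated: {inv.__name__}"
        return result
    return wrapper

@invariant
def balances_add_up(state):
    return state["reserves"] + state["fees"] == state["total"]

@checked
def collect_fee(state, amount):
    state["reserves"] -= amount
    state["fees"] += amount
```

A refactor that breaks `balances_add_up` now fails loudly at the first mutated call, instead of drifting silently until an attacker notices first.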
The fix is not a patch. It is a workflow.
A good rule: if you need to reason about rounding, you need explicit bounds. If you cannot bound the error, you cannot bound the loss.
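Making the bound explicit can itself be a machine-checked claim. The sketch below, using Python's exact `Fraction` arithmetic as the reference, verifies that floor-rounded `a * b / d` under-shoots the exact value by strictly less than one unit, so a sequence of n such operations loses strictly less than n units in total.

```python
# Sketch: an explicit, machine-checked bound on floor-rounding error.
from fractions import Fraction

def mul_div_floor(a: int, b: int, d: int) -> int:
    # Floor-rounded multiply-then-divide, a common fixed-point primitive.
    return a * b // d

# The floor result under-shoots the exact rational by strictly less than
# 1 unit; over n operations the cumulative loss is therefore < n units.
for a, b, d in [(7, 3, 5), (10**18, 997, 1000), (1, 1, 3)]:
    exact = Fraction(a * b, d)
    got = mul_div_floor(a, b, d)
    assert 0 <= exact - got < 1
```

With the per-operation error bounded, bounding the loss reduces to bounding how many such operations an attacker can force, which is a question you can actually answer.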
Use multiple layers: explicit runtime invariant checks, property-based tests, differential fuzzing against a high-precision reference model, and formal specification of the core math.
If your system holds meaningful value, these are the minimum baseline.
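One of those layers, differential fuzzing, can be sketched in a few lines: run the production-style integer path and an exact rational reference on the same random inputs and assert the discrepancy stays inside the stated bound. The fee function and parameters here are illustrative, carried over from the earlier assumptions, not Cetus code.

```python
# Hypothetical sketch: differential fuzzing of integer math against an
# exact rational reference model.
import random
from fractions import Fraction

def fee_int(amount: int) -> int:
    # Production-style integer math, ceiling-rounded in the pool's favour.
    return -(-amount * 3 // 1000)

def fee_ref(amount: int) -> Fraction:
    # Exact reference model: no rounding at all.
    return Fraction(amount * 3, 1000)

rng = random.Random(0)  # seeded, so failures are reproducible
for _ in range(10_000):
    amount = rng.randrange(1, 10**30)
    diff = fee_int(amount) - fee_ref(amount)
    # The implementation may only over-collect, and by less than one unit.
    assert 0 <= diff < 1, f"rounding drifted at amount={amount}"
```

The reference model is deliberately slow and obviously correct; its only job is to be something the fast path can be measured against, across input ranges no hand-written test would cover.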
We treat math-heavy systems as proof obligations, not guesswork. Our standard engagements deliver formal specs, differential fuzzing suites, and conformance reports with remediation guidance.
Cetus was not a failure of "security" as a concept. It was a failure to operationalise security into code, math, and proof. If you want to avoid the next nine-figure incident, you need fewer assumptions, more invariants, and mathematical rigour.