The G.O.P. has been exploiting known vulnerabilities in our governance system for years. By now, it’s their core strategy.
Gaming is as old as governance. But the G.O.P.’s escalation of these tactics—voter-roll purges, poll taxes, unitary executive theory, refusing to certify election results—has brought the U.S. to the brink.1 It’s desperately important to foresee strategies like these before they are exploited.
Once any given exploit is widely used—as gerrymandering is—it becomes nearly impossible to overcome. Exploits of governance produce, by definition, regulatory capture: carried to its logical conclusion, an exploit makes the attacker’s power complete. The system by which the system is judged to work or not is itself compromised!
The offensive security of governance
I’m no techno-solutionist. “Tech” per se will not solve gerrymandering, and security as a metaphor for governance has its limits. But the stakes are high enough that it’s worth searching for any meaningful progress toward preventing attacks on governance structures. Security has methods and metaphors worth taking on a proverbial walk, and DAOs—testable governance structures—provide a sandbox for active experimentation.
I’m likely to apply these tools narrowly to DAOs in the near future, but I’m also curious to see how they might be (or have been) applied to other kinds of governance. If anything here rings a bell, get in touch.
America on the brink. According to this survey from PRRI, 18% of Americans agree that “true American patriots might have to resort to violence to save our country”—including 30% of Republicans and 11% of Democrats. Among those who believe the 2020 election was stolen, the figure is 39%. When people lose faith in the institutions that govern, chaos comes quickly.
Governance in a sandbox. All governance systems are prone to manipulation. Security, as a discipline, attempts to foresee what specific mechanisms might be used for manipulation before they’re exploited “in the wild.” One common technique is red-teaming: attempting attacks in sandboxed environments to demonstrate their feasibility and impact.
In networks like Cosmos or Juno, there are often testnets—compartmentalized networks that run separately from the “real” network. These networks are fundamentally tools for governance (corporate or otherwise), but their testnets are used primarily to test features, code, or infrastructure. What if we used testnets to stress-test the governance systems that underpin those networks?
Here’s a real scenario, ripped from Juno’s governance. To prevent capital over-accumulation—and with it the death of democracy—the DAO capped token ownership: everyone was supposed to hold, at most, some finite number of tokens. But someone used sockpuppet accounts to accrue tokens beyond the limit. In response, a community member proposed that the network expropriate the offender’s assets. The proposal failed, 57% to 36%, mainly because its implementation would have been tricky. For an issue like this, an ounce of prevention would have yielded several kilos of cure.
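To see why a naive per-account cap fails, here’s a minimal sketch of the attack. The cap value, account names, and balances are all illustrative—these are not Juno’s actual parameters:

```python
# Hypothetical sketch: a per-account token cap is trivially evaded by
# splitting one position across sockpuppet accounts.

TOKEN_CAP = 50_000  # illustrative per-account limit the DAO intended to enforce

def effective_holdings(accounts: dict[str, int]) -> dict[str, int]:
    """Apply the cap naively, account by account."""
    return {name: min(balance, TOKEN_CAP) for name, balance in accounts.items()}

# One actor, one account: the cap works.
honest = {"alice": 120_000}
assert sum(effective_holdings(honest).values()) == TOKEN_CAP

# Same actor, same capital, split across sockpuppets: the cap is meaningless.
sybil = {f"alice_{i}": 40_000 for i in range(3)}  # 120,000 total
assert sum(effective_holdings(sybil).values()) == 120_000
```

The cap constrains accounts, not actors—and accounts are free to create.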
In a testnet, the DAO could have simulated this type of bad behavior and made reasonable claims about its cost and impact. It could have also drilled the attack’s remediation: creating a proposal, accepting the proposal, implementing it. Once the event did arise, a proposal could have linked to the drill and proposed that the DAO follow its plan.
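One way to run that drill: walk a remediation proposal through its lifecycle on a testnet and keep the log as an artifact a later mainnet proposal can cite. The statuses, threshold, and wording below are hypothetical, not any chain’s actual governance parameters:

```python
# Hypothetical remediation drill: submit -> vote -> (execute | reject),
# returning a log that documents the rehearsed plan.

PASS_THRESHOLD = 0.5  # illustrative: simple majority of votes cast

def run_drill(yes: int, no: int) -> list[str]:
    """Simulate a proposal's lifecycle and return the event log."""
    log = ["submitted: expropriate sockpuppet holdings (drill)"]
    log.append(f"voted: {yes} yes / {no} no")
    if yes / (yes + no) > PASS_THRESHOLD:
        log.append("executed: offending balances moved to community pool")
    else:
        log.append("rejected: no state change")
    return log

# The drill both demonstrates feasibility and leaves a plan to link to.
assert run_drill(60, 40)[-1].startswith("executed")
assert run_drill(36, 57)[-1].startswith("rejected")
```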
The logic is this: some attacks will always be possible, but preparedness—knowing what to do when they happen—can make a meaningful difference in an institution’s ability to recover.
If you’re interested in this topic, please get in touch. You can reply to this email.
A tangent for the Cosmos folks. On performance:
People over-optimize for speed. On speed, these chains just need to compete with traditional governance—which they do, handily. Their real power is increased expressivity and testability. The largest Fibonacci number a chain can compute is a fantastic proxy for the former. For the latter—well, see above.
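The Fibonacci benchmark reduces to a simple question: how far can a computation get before exhausting a fixed gas budget? A toy version, with a made-up flat per-step cost (real chains meter each instruction differently):

```python
# Toy Fibonacci-as-expressivity benchmark: iterate until the "gas" runs out.

def max_fib_under_gas(gas_budget: int, cost_per_step: int = 10) -> int:
    """Return the index of the largest Fibonacci number computable in budget."""
    a, b, n = 0, 1, 0
    while gas_budget >= cost_per_step:
        a, b = b, a + b  # one Fibonacci step
        gas_budget -= cost_per_step
        n += 1
    return n

# Ten times the gas budget reaches ten times as many iterations -- a crude
# but legible proxy for how much computation a chain's governance can express.
assert max_fib_under_gas(1_000) == 100
assert max_fib_under_gas(10_000) == 1_000
```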
By the way: chains do beat traditional governance on speed, and by many orders of magnitude. If you didn’t see it live, watch the election certification vote on January 6. It’s effectively human BFT. Someone stands up, dishonestly refuses to certify the vote, and everyone has to sit and listen to whatever that person has to say. Then everyone has to vote them down—effectively, to sign the block without their transaction. This is the sort of process Tendermint automates.
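The human scene above can be sketched as the rule Tendermint enforces mechanically: commit only if more than two-thirds of voting power agrees. This is a bare-bones illustration—real Tendermint adds rounds, locking, and timeouts:

```python
# Minimal sketch of a Tendermint-style commit rule: >2/3 of voting power.

from fractions import Fraction

def can_commit(votes: dict[str, bool], power: dict[str, int]) -> bool:
    """Commit iff strictly more than 2/3 of total voting power votes yes."""
    total = sum(power.values())
    yes = sum(power[v] for v, ok in votes.items() if ok)
    return Fraction(yes, total) > Fraction(2, 3)

# 99 honest validators and one dishonest holdout: the holdout is simply
# outvoted, instantly -- no floor speeches required.
power = {f"v{i}": 1 for i in range(100)}
votes = {v: True for v in power}
votes["v0"] = False  # the lone refusal to certify
assert can_commit(votes, power)  # 99/100 > 2/3
```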