Why is the Internet in danger?

Part 2. Toward a structural model of the Internet

I’m a researcher at the UC Berkeley Center for Long-Term Cybersecurity where I direct the Daylight Lab. This newsletter is my work as I do it: more than half-baked, less than peer-reviewed. This post is part of a series on building a structural model of the Internet. Find the beginning of that series here.

Last week, I told you that the Internet is in danger. A fight for control threatens to destabilize and fragment the Internet, and the stakes are existential.

This week, I’ll tell you why the Internet is in danger.

It’s best to start with an example:

A storm hits the mid-Atlantic. A big storm, ten or a hundred times bigger than Ida. There are massive power outages, more widespread and harder to recover from than those during Hurricane Sandy. Data centers, Internet exchange points, and service providers are offline. What will happen to the global Internet?

The received wisdom is, “Internet traffic will route around the failure. Eventually, when power is restored, the Internet will heal, bringing people in affected regions back online.”

Prove it! Prove to me that Internet routing will heal in a reasonable amount of time. That the failure won’t cascade to other regions of the U.S., or to other countries.

That’s our problem. No one can argue with any empirical certainty about which kinds of events will or will not cause the Internet to collapse (bringing global industrial production down with it). The rest of this post elaborates on this predicament. I address it from two angles: first, that we can’t assess risks to the Internet; and second, that we can’t prioritize between opportunities to improve it.

1. We can’t assess risk

During Superstorm Sandy, BGP[1] did not heal the way we expected. That was 2012. I shudder to think what would happen now, in an Internet that’s more complex, interdependent, and privately-held[2] than that one. (Now, add in more frequent and severe storms in a changing climate.)

Meanwhile, a small number of content distribution networks (CDNs) serve most of the content on the Internet. CDN service is highly centralized among a few players and absolutely ubiquitous; you can’t use the Internet (let alone host a service) without them. Centralization produces central points of failure, and the opacity caused by privatization confounds oversight and risk management. Imagine a Stuxnet-level attack on Cloudflare. What would happen?
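To make “central point of failure” concrete, here’s a toy sketch (the topology and names are invented for illustration, not a model of any real CDN): ten sites that can only reach one another through a single hub. Remove the hub and connectivity doesn’t degrade gracefully; it vanishes.

```python
from collections import deque

def reachable_pairs(adj, removed=frozenset()):
    """Count ordered pairs (a, b) where a path exists from a to b,
    ignoring any nodes in `removed`. Plain breadth-first search."""
    nodes = [n for n in adj if n not in removed]
    pairs = 0
    for start in nodes:
        seen = {start}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            for nxt in adj[cur]:
                if nxt not in seen and nxt not in removed:
                    seen.add(nxt)
                    queue.append(nxt)
        pairs += len(seen) - 1  # reachable nodes other than start itself
    return pairs

# Toy topology: ten "origin" sites that all peer only through one hub.
adj = {"cdn": set()}
for i in range(10):
    site = f"site{i}"
    adj[site] = {"cdn"}
    adj["cdn"].add(site)

print(reachable_pairs(adj))                   # 110: every node reaches every other
print(reachable_pairs(adj, removed={"cdn"}))  # 0: hub gone, nothing reaches anything
```

In a mesh, losing one node costs you that node’s pairs; in a hub-and-spoke network, losing the hub costs you everything at once. The real Internet sits somewhere between those extremes, and without a structural model we can’t say where.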


What would happen if a storm bigger than Ida hit New York? If there were widespread power outages in the Los Angeles Metro area? If Cloudflare went down? No one knows how long it would take for BGP to heal in those situations, let alone any downstream effects. That alone should terrify you. While catastrophic Internet failure remains unlikely, it’s both more likely than it should be and hard to estimate the exact likelihood of, in part because we cannot articulate what types of events could precipitate it.[3]
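Part of why BGP’s behavior under stress is so hard to predict is its trust model: routers simply forward along the most specific route anyone advertises, with no built-in way to verify who may originate what. A minimal sketch of that longest-prefix selection (AS numbers and prefixes are invented, drawn from private and documentation ranges):

```python
import ipaddress

def best_route(routes, dest):
    """Pick the most specific (longest-prefix) route covering dest — the
    heart of BGP-style forwarding. Routers apply this to whatever routes
    they have been told about, legitimate or not."""
    dest = ipaddress.ip_address(dest)
    matches = [(net, origin) for net, origin in routes if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

# Hypothetical table: one AS legitimately originates a broad /16.
routes = [(ipaddress.ip_network("203.0.0.0/16"), "AS64500 (legitimate)")]
print(best_route(routes, "203.0.113.7"))  # AS64500 (legitimate)

# An attacker announces a more-specific /24; classic BGP has no way
# to check the claim, and the longer prefix wins automatically.
routes.append((ipaddress.ip_network("203.0.113.0/24"), "AS64511 (hijacker)"))
print(best_route(routes, "203.0.113.7"))  # AS64511 (hijacker)
```

This is only the route-selection rule, not a model of BGP convergence. But it shows why a single bad announcement can redirect traffic globally, and why “the Internet will route around failure” is an empirical claim, not a guarantee.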

2. We can’t prioritize upgrades

The Internet, generally speaking, is fragile and insecure. Many of the risks I mentioned above stem from issues with its basic design.[4] Unfortunately, starting from scratch on this whole “internet” thing just isn’t in the cards: due to global reliance on the Internet for trade, pricing, finance, healthcare, and pretty much everything else, radical or disruptive change to the Internet is not a practical option. Incremental change is the only way to preserve a singular, globally shared Internet like the one we have today.


As a result, the global Internet governance community is awash in proposals to upgrade the Internet’s fundamental operation. Replacing the Border Gateway Protocol (BGP) with Secure BGP, upgrading the Domain Name System (DNS) with DNSSEC, and transitioning from IPv4 to IPv6 are just a few of the ongoing and proposed upgrades to the global Internet.

But performing these upgrades isn’t as straightforward as it seems. Here’s why.

Upgrades take time—lots of time

As the globally uneven and still-ongoing transition from IPv4 to IPv6 demonstrates, we may only get one chance in a generation to upgrade any given protocol. And maintaining backward compatibility—a necessary evil as upgrades roll out unevenly—raises its own security challenges.[5] We must pick strategically among upgrades, given the high costs of performing them.

If the IPv6 upgrade happens at its current pace, will Africa run out of IP addresses? If we fail to secure inter-domain routing, could Russia, China, or the U.S. disrupt the world’s Internet during a conflict?
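The IPv4 side of that question is just arithmetic. A back-of-the-envelope sketch (the population figure is a rough estimate, and large chunks of IPv4 space are reserved, so the real picture is worse):

```python
# Back-of-the-envelope address math.
ipv4_space = 2 ** 32          # 4,294,967,296 addresses, before reserved blocks
ipv6_space = 2 ** 128
world_population = 7.9e9      # rough 2021 estimate

print(ipv4_space / world_population)  # ~0.54 IPv4 addresses per person
print(ipv6_space // ipv4_space)       # 2**96: the headroom IPv6 adds
```

Less than one address per person means late-arriving regions depend on workarounds like carrier-grade NAT or on the upgrade finishing—which is exactly why the pace of the IPv6 transition is a distributional question, not just a technical one.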

Upgrades are political

Internet upgrades are political, especially in an internet that, while still hegemonically dominated by the U.S., is increasingly splintering into national internets in places like China, Russia, and the E.U. In practice, any internet upgrade acts either as a power grab (Huawei’s NewIP is an infamous example) or as an implicit endorsement of continued U.S. hegemony over the network.

As the Internet increasingly becomes a site of geopolitical conflict,[6] we must articulate the political risks of new upgrades and weigh them against the upgrades’ projected upside. Currently, we lack the empirical foothold to do so: we have no models that help us advocate for one upgrade over another.


The Internet is opaque

We cannot effectively advocate for upgrades partly because we cannot account for their effect on the contemporary Internet’s most critical infrastructure: privatized cloud services.[7] Despite these infrastructures’ vital importance, we don’t know how they connect to the global Internet or one another. In an opaque network, operator upgrades could matter more or less than we might expect them to.[8]

Is it prudent or reckless to break up CDN monopolies? What about using content-addressed protocols to supplant them?

Alternatives to our Internet are quickly emerging

Alternatives to the Internet stack are bubbling up with or without input from the global Internet governance community. Rising interest in blockchain technologies is yielding alternatives to core application-layer technologies like DNS and TLS.[9] New physical infrastructures, be they in Brooklyn or Havana, afford the opportunity to create parallel Internets from the ground up. Alternative physical and logical internets threaten to fragment the global Internet in unpredictable ways and with unforeseen consequences.

We have existing global governance mechanisms on one hand, and we have radical change through the grassroots on the other. How do we weigh the risks of revolution against the dangers of upgrading an insecure and hegemonic Internet through its existing bodies?

Pivotal moments

Melting freshwater from Greenland’s ice sheet is slowing the Gulf Stream (the AMOC) earlier than climate models suggested, threatening its collapse. Exactly when the Gulf Stream might collapse, and what might happen if it does, is impossible to forecast precisely. Yet our models are good enough to tell us that such a collapse would cost millions of lives. Photograph: Ulrik Pedersen/NurPhoto/REX/Shutterstock

At the beginning of this post, I used the term “existential risk.” Forget that.

Consider this:

Climate crisis: Scientists spot warning signs of Gulf Stream collapse | The Guardian

Climate scientists have detected warning signs of the collapse of the Gulf Stream, one of the planet’s main potential tipping points.
Such an event would have catastrophic consequences around the world, severely disrupting the rains that billions of people depend on for food in India, South America and West Africa; increasing storms and lowering temperatures in Europe; and pushing up the sea level off eastern North America. It would also further endanger the Amazon rainforest and Antarctic ice sheets.
The complexity of the AMOC system and uncertainty over levels of future global heating make it impossible to forecast the date of any collapse for now. It could be within a decade or two, or several centuries away. But the colossal impact it would have means it must never be allowed to happen, the scientists said.

Events like the Gulf Stream collapsing are granular, interim events. Pivotal moments between today and possible tomorrows. Colossal events that must not be allowed to happen. The goal of climate science is to identify these moments and guide us to avoid them.

This is what we need to do for the Internet.[10]

Next week, I'll talk about how we might do it. I hope you'll join.


Footnotes

[1] The Border Gateway Protocol (BGP) is the “glue” that connects autonomous Internet domains. Doing so enables inter-domain routing, which lets you access content from people who aren’t a customer of your local ISP. Effectively, it connects you to the rest of the world’s Internet. Since BGP predates modern consensus mechanisms (like blockchains), attackers can emit fake messages that advertise fake routes. In theory, and at scale, such attacks could fragment the global Internet.

[2] “Complex” meaning more could go wrong, “interdependent” meaning one thing going wrong could be catastrophic, and “privately-held” meaning that we can’t readily inspect its state.

[3] No wonder cyber insurance companies can’t stay afloat. We don’t even understand what kinds of risk we can assess the impact or probability of! What kinds of threats are amenable to actuarial analysis? Cyberattacks may not be (“probabilistic risk assessment is an inappropriate methodology for modeling malicious risk,” according to an often-ignored National Research Council report). But the risk of Internet disruptions due to storms or blackouts may be. I reckon insurers are undervaluing those risks, insurance buyers are unaware of them, or some combination thereof.

[4] Contrary to popular belief, the designers of the Internet did think about security. But they didn’t anticipate that people would use the Internet to facilitate connections between parties they did not know or trust. Also, information security as a discipline was in its infancy in the 1970s. Myriad design challenges stem from basic oversights, obvious in retrospect; later analysis has proposed upgrades, most ongoing or theoretical. (See David D. Clark (2018). Designing an Internet. MIT Press.)

[5] Protocols of different versions can interoperate in unexpected ways. Some of the largest Internet infrastructure providers in the world have rolled out HTTP/2 unevenly within their own systems! Interoperability issues caused security vulnerabilities that affected much of the public web. We’re lucky attackers didn’t use these exploits.

[7] The Internet has become both more centralized and more privatized than its original architects envisioned. Private cloud backbones can reach almost all of their endpoints without touching a Tier-1 network, and most Internet users only need to “travel” one Internet domain away to retrieve the data they’re requesting.

[9] For example, see Handshake, which replaces both DNS root servers and the certificate authority system. New protocols, like blockchains, rely on IP and BGP only. In theory, they can fully replace all existing application-layer technologies (DNS, HTTP, TLS, SMTP, SSH, etc…).

[10] Thanks to Steve Weber and Ann Cleaveland for this analogy.