The genesis of our 33rd and 34th data centers, in Auckland and Melbourne, lies a short hop away in nearby Sydney. Prior to these deployments, traffic from all of New Zealand and Australia's collective 23 million Internet users was routed through CloudFlare's Sydney data center. Even for those in faraway Perth, the time necessary to reach our Sydney PoP was a mere 55ms of round trip time (RTT). By comparison, the blink of an eye takes 300-400ms. In other words, latency wasn't exactly the pressing concern. The real concern was a failure scenario in our Sydney data center.
Fortunately, our entire architecture starts with an assumption: failure is going to happen. As a result, we plan for failure at every level and have designed a system to gracefully handle it when it occurs. Even though we now maintain multiple layers of redundancy—from power supplies and power circuits to line cards, routing engines and network providers—our ultimate level of redundancy is in the ability to fail out an entire data center in favor of another. In the past we've even written about how this might play out in the case of a global thermonuclear war. In this instance, the challenge we set out to solve was not how to fail gracefully, but how to fail gracefully without materially increasing latency for the millions of applications that depend on our network in the Oceania region.
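To make the idea concrete, here is a minimal sketch of what failing one data center out in favor of another looks like from a user's point of view. This is illustrative only: the PoP list and latency figures are placeholder assumptions, and real anycast routing is decided by BGP announcements and withdrawals, not by a lookup table.

```python
# Illustrative sketch only: models the *effect* of anycast failover
# (users land at the next-best remaining PoP), not the BGP mechanism.
# All round-trip times below are placeholder assumptions for a user in
# Perth, not measured values.
RTT_MS = {
    "Melbourne": 48,
    "Sydney": 55,
    "Auckland": 78,
    "Singapore": 95,
    "Los Angeles": 180,
}

def serving_pop(available):
    """Return the lowest-latency PoP among those still announcing routes."""
    return min(available, key=RTT_MS.get)

all_pops = set(RTT_MS)

# Normal operation: served in-region.
print(serving_pop(all_pops))

# Sydney (and even Melbourne) withdrawn: traffic still stays in Oceania,
# landing in Auckland instead of crossing the Pacific to Los Angeles.
print(serving_pop(all_pops - {"Sydney", "Melbourne"}))
```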
Grace and speed
Prior to our Auckland and Melbourne data centers, a failure in Sydney meant a shift in traffic to the West Coast of the US or Southeast Asia, adding significant, and noticeable, latency to our users' applications (spoiler: it now fails over to Auckland and Melbourne with minimal latency!). But before we get to how the Kiwis and Australia's "second city" saved the day, it is important to understand how the Internet "works" in Oceania. As we set out to create resiliency in-region, we considered several options:
Plan A: Second (redundant) data center in Sydney
At first blush a second facility in Sydney would seem to solve most imaginable failure scenarios (perhaps save a nuclear one). However, when it comes to the Internet, things are rarely intuitive. Australia, at least in the context of the Internet, is very Sydney-centric. The vast majority of traffic from Australia to the rest of the Internet passes through a single data center (which just so happens to be the exact same facility in which we are currently located). Even if we were to make a redundant deployment in a completely separate facility, traffic to that facility would still have to pass through the same potential point of failure: our current facility. Not to mention, a second facility in Sydney would not reduce latency or improve performance for a larger subset of Internet users in the region, nor would it localize our traffic any further than it already was. It also wouldn't have opened up any new peering opportunities, and peering, as we've explained in a prior blog post, is of immense importance to the performance and overall health of our network.
Not enough redundancy. No performance gain over the status quo.
Plan B: Add a data center in Auckland
Out of left field came Auckland. Although not an obvious choice, Auckland is rather uniquely situated to provide redundancy in-region as a result of how many operators have constructed their networks: by building or buying a 3-drop ring between New Zealand-Australia, Australia-USA, and USA-New Zealand.
Because this capacity is heavily utilized in only one direction, towards New Zealand, a lot of capacity is left free in the other direction, from New Zealand back to Australia. After working with various providers, we've structured a solution that allows us to reduce latency and further localize traffic for Internet users in New Zealand, while also allowing for full redundancy between Auckland, Sydney and the rest of Oceania (a rough sketch of this ring redundancy follows below). Not to mention, CloudFlare is now a member of New Zealand's largest peering exchange, APE-IX.
Redundancy and performance gains versus the status quo.
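As a rough illustration of why that ring matters, the sketch below models the three legs as a tiny graph and checks that dropping any single leg still leaves New Zealand, Australia and the USA able to reach one another. The topology and link names are our own simplification for illustration; real cable systems are considerably more complicated.

```python
# Simplified model of the three-legged New Zealand / Australia / USA ring.
# This is an illustration, not a map of actual cable systems.
NODES = {"NZ", "AU", "US"}
RING = {
    ("NZ", "AU"),  # Trans-Tasman leg
    ("AU", "US"),  # Australia-USA leg
    ("US", "NZ"),  # USA-New Zealand leg
}

def fully_connected(links):
    """Return True if every node can still reach every other node."""
    reached, frontier = set(), {next(iter(NODES))}
    while frontier:
        node = frontier.pop()
        reached.add(node)
        for a, b in links:
            if node == a and b not in reached:
                frontier.add(b)
            elif node == b and a not in reached:
                frontier.add(a)
    return reached == NODES

# Fail each leg in turn: the remaining two legs keep the whole region
# reachable, which is what makes an Auckland deployment useful backup
# capacity for Sydney (and vice versa).
for leg in RING:
    remaining = RING - {leg}
    print("without", leg, "-> fully connected =", fully_connected(remaining))
```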
But why stop there?
Plan C: Add a data center in Auckland AND Melbourne
Despite achieving the desired level of redundancy and performance gains in New Zealand through our own version of the Trans-Tasman arrangement, we figured that both Kiwis and Aussies would prefer not to have the other's redundancy deposited at their doorstep. So, along came Melbourne as a complement to Auckland. Our Melbourne data center offers the same benefits of content localization and performance improvement for Internet users south of the border, as well as domestic redundancy in the case of a data center failure.
Latency improvement and additional redundancy.
Problem solved, right? Almost...
The Auckland situation
The Auckland fiber situation is an interesting one. Auckland is situated around a harbour. Over this harbour is a bridge across which most of the fiber in the city runs, with a small amount running via a much longer path around the harbour (think 30km longer fiber runs). Purchasing fiber between the areas of the city separated by the harbour costs more than a Kim Dotcom political party (i.e. a lot of money).
The bulk of the country's Internet providers (particularly the smaller ones) exist only south of the harbour bridge. The cable landing stations and many of the data centers, on the other hand, only exist north of the harbour bridge. If you are as performance-obsessed as we are, you want to be south of the bridge so that you can peer with all networks in as inexpensive, resilient and easy a manner as possible. But for us, the vetting process didn't stop there. The specific site we selected is the core node for most major New Zealand transit providers, allows us to interconnect with nearly every provider from within the same facility, and hosts a core node of the local peering exchange.
Now that our Auckland DC is live, some users in New Zealand may notice that their ISPs continue to route to CloudFlare in Sydney. That makes no sense, you say?! We agree! Despite our best efforts, it takes two to tango. Should this be the case with your ISP, let them know... hopefully that will spark a conversation.
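If you are curious which data center your connection actually lands in, one quick way to check (assuming the site you test is served through CloudFlare, exposes the usual /cdn-cgi/trace debug endpoint, and includes the colo field in its output) is something like the following. The colo field carries the IATA-style code of the serving data center, e.g. SYD, AKL or MEL.

```python
import urllib.request

# Any site served through CloudFlare should do; www.cloudflare.com is used
# here purely as an example hostname.
URL = "https://www.cloudflare.com/cdn-cgi/trace"

with urllib.request.urlopen(URL) as resp:
    body = resp.read().decode()

# The trace output is a list of key=value lines; "colo" (if present) names
# the data center that answered this request.
fields = dict(line.split("=", 1) for line in body.strip().splitlines())
print("Serving data center:", fields.get("colo", "unknown"))
```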