
Cloudflare 1.1.1.1 incident on June 27, 2024

2024-07-04

12 min read


Introduction

On June 27, 2024, a small number of users globally may have noticed that 1.1.1.1 was unreachable or degraded. The root cause was a mix of BGP (Border Gateway Protocol) hijacking and a route leak.

Cloudflare was an early adopter of Resource Public Key Infrastructure (RPKI) for route origin validation (ROV). With RPKI, IP prefix owners can store and share ownership information securely, and other operators can validate BGP announcements by comparing received BGP routes with what is stored in the form of Route Origin Authorizations (ROAs). When Route Origin Validation is enforced by networks properly and prefixes are signed via ROA, the impact of a BGP hijack is greatly limited. Despite increased adoption of RPKI over the past several years and 1.1.1.0/24 being a signed resource, during the incident 1.1.1.1/32 was originated by ELETRONET S.A. (AS267613) and accepted by multiple networks, including at least one Tier 1 provider who accepted 1.1.1.1/32 as a blackhole route.

This caused immediate unreachability for the DNS resolver address from over 300 networks in 70 countries, although the impact on the overall percentage of users was quite low (less than 1% of users in the UK and Germany, for example), and in some countries no users noticed an impact.

Route leaks are something Cloudflare has written and talked about before, and unfortunately there are only best-effort safeguards in wide deployment today, such as IRR (Internet Routing Registry) prefix-list filtering by providers. During the same period of time as the 1.1.1.1/32 hijack, 1.1.1.0/24 was erroneously leaked upstream by Nova Rede de Telecomunicações Ltda (AS262504). The leak was further and widely propagated by Peer-1 Global Internet Exchange (AS1031), which also contributed to the impact felt by customers during the incident.

We apologize for the impact felt by users of 1.1.1.1, and take the operation of the service very seriously. Although the root cause of the impact was external to Cloudflare, we will continue to improve the detection methods in place to yield quicker response times, and will use our stance within the Internet community to further encourage adoption of RPKI-based hijack and leak prevention mechanisms such as Route Origin Validation (ROV) and Autonomous Systems Provider Authorization (ASPA) objects for BGP.

Background

Cloudflare introduced the 1.1.1.1 public DNS resolver service in 2018. Since the announcement, 1.1.1.1 has become one of the most popular resolver IP addresses, free for anyone to use. Along with that popularity and an easily recognized IP address come some operational difficulties, stemming from the historical use of 1.1.1.1 by networks in labs or as a testing IP address, which results in residual unexpected traffic and blackholed routing behavior. Because of this, Cloudflare is no stranger to dealing with the effects of BGP misrouting, two examples of which are covered below.

BGP hijacks

Some of the difficulty comes from potential routing hijacks of 1.1.1.1. For example, if some fictitious FooBar Networks assigns 1.1.1.1/32 to one of their routers and shares this prefix within their internal network, their customers will have difficulty routing to the 1.1.1.1 DNS service. If they advertise the 1.1.1.1/32 prefix outside their immediate network, the impact can be even greater. The reason 1.1.1.1/32 would be selected instead of the 1.1.1.0/24 BGP-announced by Cloudflare is due to Longest Prefix Matching (LPM). While many prefixes in a route table could match the 1.1.1.1 address, such as 1.1.1.0/24, 1.1.1.0/29, and 1.1.1.1/32, 1.1.1.1/32 is considered the “longest match” by the LPM algorithm because it has the highest number of identical bits and longest subnet mask while matching the 1.1.1.1 address. In simple terms, we would call 1.1.1.1/32 the “most specific” route available to 1.1.1.1.
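The LPM selection described above can be sketched in a few lines of Python. This is a toy illustration of the algorithm, not how real routers implement forwarding tables (those use specialized structures such as tries, often in hardware):

```python
import ipaddress

# Toy route table containing several prefixes that all cover 1.1.1.1.
routes = [
    ipaddress.ip_network("1.1.1.0/24"),  # Cloudflare's legitimate announcement
    ipaddress.ip_network("1.1.1.0/29"),
    ipaddress.ip_network("1.1.1.1/32"),  # the hijacked, most-specific route
]

def longest_prefix_match(destination, table):
    """Return the most-specific route covering the destination address."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in table if addr in net]
    return max(matches, key=lambda net: net.prefixlen, default=None)

print(longest_prefix_match("1.1.1.1", routes))  # 1.1.1.1/32 beats the /24
```

Any router holding the hijacked /32 will therefore forward 1.1.1.1 traffic along the bogus route, no matter how legitimate the /24 is.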

Instead of traffic toward 1.1.1.1 routing to Cloudflare via anycast and landing on one of our servers globally, it will instead land somewhere on a device within FooBar Networks where 1.1.1.1 is terminated, and a legitimate response will fail to be served back to clients. This would be considered a hijack of requests to 1.1.1.1, either done purposefully or accidentally by network operators within FooBar Networks.

BGP route leaks

Another source of impact we sometimes face for 1.1.1.1 is BGP route leaks. A route leak occurs when a network becomes an upstream, in terms of BGP announcement, for a network it shouldn’t be an upstream provider for.

A common example is a route leak in which a customer forwards routes learned from one provider to another provider, causing a type 1 leak as defined in RFC 7908.

If enough networks within the Default-Free Zone (DFZ) accept a route leak, it may be used widely for forwarding traffic along the bad path. Often this will overload the network leaking the prefixes, as it isn't prepared for the amount of global traffic it is now attracting. We wrote about a wide-scale route leak that knocked out a large portion of the Internet, when a provider in Pennsylvania attracted traffic toward global destinations it would typically never have transited traffic for. Even though Cloudflare interconnects with over 13,000 networks globally, the BGP local-preference assigned to a leaked route could be higher than that of the route received by a network directly from Cloudflare. This sounds counterintuitive, but unfortunately it can happen.

To explain why this happens, it helps to think of BGP as a business policy engine as well as the routing protocol of the Internet. A transit provider has customers who pay them to transport their data, so they logically assign a higher BGP local-preference to those sessions than to private or Internet Exchange (IX) peering connections, so that the connection being paid for is the most utilized. Think of local-preference as a way of influencing the priority of which outgoing connection traffic is sent to. Different networks may also choose to prefer Private Network Interconnects (PNIs) over routes received at an Internet Exchange (IX). Part of the reason is reliability: a private connection is effectively a point-to-point link between two networks, with no third-party managed fabric in between to worry about. Another reason is cost efficiency: if you've gone to the trouble of allocating a router port and purchasing a cross connect to another peer, you'd like to make use of it to get the best return on your investment.
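A rough sketch of this policy logic follows. The attribute values are invented for illustration and do not reflect any real network's configuration; the point is that local-preference is compared before AS path length is even considered:

```python
# Candidate routes to the same prefix, learned over different session types.
# local_pref values are hypothetical; many networks use a customer > PNI > IX order.
candidates = [
    {"learned_from": "customer", "local_pref": 300, "as_path_len": 4},
    {"learned_from": "pni-peer", "local_pref": 200, "as_path_len": 2},
    {"learned_from": "ix-peer",  "local_pref": 100, "as_path_len": 2},
]

# BGP compares local-preference first; AS path length only breaks ties.
best = max(candidates, key=lambda r: (r["local_pref"], -r["as_path_len"]))
print(best["learned_from"])  # "customer" wins despite the longest AS path
```

This is exactly how a leaked route learned from a paying customer can beat a shorter path received directly from Cloudflare over peering.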

It is worth noting that both BGP hijacks and route leaks can happen to any IP and prefix on the Internet, not just 1.1.1.1. But as mentioned earlier, 1.1.1.1 is such a recognizable and historically misappropriated address that it tends to be more prone to accidental hijacks or leaks than other IP resources.

During the Cloudflare 1.1.1.1 incident that happened on June 27, 2024, we ended up fighting the impact caused by a combination of both BGP hijacking and a route leak.

Incident timeline and impact

All timestamps are in UTC.

2024-06-27 18:51:00 AS267613 (Eletronet) begins announcing 1.1.1.1/32 to peers, providers, and customers. 1.1.1.1/32 is announced with the AS267613 origin AS

2024-06-27 18:52:00 AS262504 (Nova) leaks 1.1.1.0/24, also received from AS267613, upstream to AS1031 (PEER 1 Global Internet Exchange) with AS path “1031 262504 267613 13335”

2024-06-27 18:52:00 AS1031 (upstream of Nova) propagates 1.1.1.0/24 to various Internet Exchange peers and route-servers, widening impact of the leak

2024-06-27 18:52:00 One Tier 1 provider receives the 1.1.1.1/32 announcement from AS267613 as a RTBH (Remote Triggered Blackhole) route, blackholing 1.1.1.1 traffic for all of that provider's customers

2024-06-27 20:03:00 Cloudflare raises internal incident for 1.1.1.1 reachability issues from various countries

2024-06-27 20:08:00 Cloudflare disables a partner peering location with AS267613 that is receiving traffic toward 1.1.1.0/24

2024-06-27 20:08:00 Cloudflare team engages peering partner AS267613 about the incident

2024-06-27 20:10:00 AS262504 leaks 1.1.1.0/24 with a new AS path, “262504 53072 7738 13335”, which is also redistributed by AS1031. Traffic along this path is delivered successfully to Cloudflare, but with high latency for affected clients

2024-06-27 20:17:00 Cloudflare engages AS262504 regarding the route leak of 1.1.1.0/24 to their upstream providers

2024-06-27 21:56:00 Cloudflare engineers disable a second peering point with AS267613 that is receiving traffic meant for 1.1.1.0/24 from multiple sources not in Brazil

2024-06-27 22:16:00 AS262504 leaks 1.1.1.0/24 again, attracting some traffic to a Cloudflare peering with AS267613 in São Paulo. Some 1.1.1.1 requests as a result are returned with higher latency, but the hijack of 1.1.1.1/32 and traffic blackholing appears resolved

2024-06-28 02:28:00 AS262504 fully resolves the route leak of 1.1.1.0/24

The impact to customers surfaced in one of two ways: being unable to reach 1.1.1.1 at all, or being able to reach 1.1.1.1 but with high latency per request.

Since AS267613 was hijacking the 1.1.1.1/32 address somewhere within their network, many requests failed at some device in their autonomous system. There were intermittent periods, or flaps, during the incident where they successfully routed requests toward 1.1.1.1 to Cloudflare data centers, albeit with high latency.

Looking at two source countries during the incident, Germany and the United States, impacted traffic to 1.1.1.1 looked like this:

Source Country of Users:

Keep in mind that overall this may represent a relatively small amount of total requests per source country, but normally no requests would route from the US or Germany to Brazil at all for 1.1.1.1.

Cloudflare Data Center city:

Looking at the graphs, requests to 1.1.1.1 were landing in Brazilian data centers. The gaps between the spikes are when 1.1.1.1 requests were blackholed prior to or within AS267613, and the spikes themselves are when traffic was delivered to Cloudflare with high latency on the request and response. The brief spikes of traffic successfully carried to the Cloudflare peering location with AS267613 could be explained by the 1.1.1.1/32 route flapping within their network, occasionally letting traffic through to Cloudflare instead of dropping it somewhere in the intermediate path.

Technical description of the error and how it happened

Normally, a request to 1.1.1.1 from users routes to the nearest data center via BGP anycast. During the incident, AS267613 (Eletronet) advertised 1.1.1.1/32 to their peers and upstream providers, and AS262504 leaked 1.1.1.0/24 upstream, changing the normal path of BGP anycast for multiple eyeball networks drastically.

With public route collectors and the monocle tool, we can search for the rogue BGP updates.

monocle search --start-ts 2024-06-27T18:51:00Z --end-ts 2024-06-27T18:55:00Z --prefix '1.1.1.1/32'

A|1719514377.130203|206.126.236.209|398465|1.1.1.1/32|398465 267613|IGP|206.126.236.209|0|0||false|||route-views.eqix
–
A|1719514377.681932|206.82.104.185|398465|1.1.1.1/32|398465 267613|IGP|206.82.104.185|0|0|13538:1|false|||route-views.ny
–
A|1719514388.996829|198.32.132.129|13760|1.1.1.1/32|13760 267613|IGP|198.32.132.129|0|0||false|||route-views.telxatl

We see above that AS398465 and AS13760 reported to the route-views collectors that they received 1.1.1.1/32 from AS267613 around the time impact began. Normally, the longest IPv4 prefix accepted in the Default-Free Zone (DFZ) is a /24, but in this case we observed multiple networks using the 1.1.1.1/32 route from AS267613 for forwarding, made apparent by the blackholing of traffic that never arrived at a Cloudflare POP (Point of Presence). The origination of 1.1.1.1/32 by AS267613 is a BGP route hijack: they were announcing the prefix with origin AS267613 even though the Route Origin Authorization (ROA) is only signed for origin AS13335 (Cloudflare) with a maximum prefix length of /24.
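The origin validation logic that makes this route invalid can be sketched as follows, using the ROA data described above (1.1.1.0/24, origin AS13335, maximum length /24). This is a simplified model of the valid/invalid/not-found classification, not a full RPKI validator:

```python
import ipaddress

# The relevant ROA: prefix 1.1.1.0/24, authorized origin AS13335, maxLength 24.
ROAS = [{"prefix": ipaddress.ip_network("1.1.1.0/24"), "origin": 13335, "max_len": 24}]

def rov_state(prefix, origin_as):
    """Classify a received route per RPKI Route Origin Validation."""
    net = ipaddress.ip_network(prefix)
    covering = [r for r in ROAS if net.subnet_of(r["prefix"])]
    if not covering:
        return "not-found"  # no ROA covers this prefix
    if any(origin_as == r["origin"] and net.prefixlen <= r["max_len"]
           for r in covering):
        return "valid"
    return "invalid"

print(rov_state("1.1.1.1/32", 267613))  # invalid: wrong origin and too specific
print(rov_state("1.1.1.0/24", 13335))   # valid: matches the signed ROA
```

The hijacked /32 fails on both counts: the origin AS does not match and the prefix length exceeds the ROA's maximum.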

We even saw BGP updates for 1.1.1.1/32 when looking at our own BMP (BGP Monitoring Protocol) data at Cloudflare. From at least a couple different route servers, we received our own 1.1.1.1/32 announcement via BGP. Thankfully, Cloudflare rejects these routes on import as both RPKI Invalid and DFZ Invalid due to invalid AS origin and prefix length. The BMP data display is pre-policy, meaning even though we rejected the route we can see where we receive the BGP update over a peering session.

So not only are multiple networks accepting prefixes that should not exist in the global routing table, but they are also accepting an RPKI (Resource Public Key Infrastructure) Invalid route. To make matters worse, one Tier-1 transit provider accepted the 1.1.1.1/32 announcement as a RTBH (Remote-Triggered Blackhole) route from AS267613, discarding all traffic at their edge that would normally route to Cloudflare. This alone caused wide impact, as any networks leveraging this particular Tier-1 provider in routing to 1.1.1.1 would have been unable to reach the IP address during the incident.

For those unfamiliar with Remote-Triggered Blackholing, it is a method of signaling to a provider a set of destinations you would like traffic dropped for within their network. It exists as a blunt method of fighting off DDoS attacks: when you are being attacked on a specific IP or prefix, you can tell your upstream provider to absorb all traffic toward that destination IP address or prefix by discarding it before it reaches your network port. The problem during this incident was that AS267613 was not authorized to blackhole 1.1.1.1/32. Only Cloudflare should have the right to request RTBH for traffic destined to AS13335, and blackholing 1.1.1.1 is something we would in reality never do.

Looking now at BGP updates for 1.1.1.0/24, we see that multiple networks received the prefix from AS262504 and accepted it.

~> monocle search --start-ts 2024-06-27T20:10:00Z --end-ts 2024-06-27T20:13:00Z --prefix '1.1.1.0/24' --as-path ".* 267613 13335" --include-sub

.. some advertisements removed for brevity ..

A|1719519011.378028|187.16.217.158|1031|1.1.1.0/24|1031 262504 267613 13335|IGP|187.16.217.158|0|0|1031:1031 1031:4209 1031:6045 1031:7019 1031:8010|false|13335|162.158.177.1|route-views2.saopaulo
–
A|1719519011.629398|45.184.147.17|1031|1.1.1.0/24|1031 262504 267613 13335|IGP|45.184.147.17|0|0|1031:1031 1031:4209 1031:4259 1031:6045 1031:7019 1031:8010|false|13335|162.158.177.1|route-views.fortaleza
–
A|1719519036.943174|80.249.210.99|50763|1.1.1.0/24|50763 1031 262504 267613 13335|IGP|80.249.210.99|0|0|1031:1031 50763:400|false|13335|162.158.177.1|route-views.amsix
–
A|1719519037|80.249.210.99|50763|1.1.1.0/24|50763 1031 262504 267613 13335|IGP|80.249.210.99|0|0|1031:1031 50763:400|false|13335|162.158.177.1|rrc03
–
A|1719519087.4546|45.184.146.59|199524|1.1.1.0/24|199524 1031 262504 267613 13335|IGP|45.184.147.17|0|0||false|13335|162.158.177.1|route-views.fortaleza
A|1719519087.464375|45.184.147.74|264409|1.1.1.0/24|264409 1031 262504 267613 13335|IGP|45.184.147.74|0|0|65100:7010|false|13335|162.158.177.1|route-views.fortaleza
–
A|1719519096.059558|190.15.124.18|61568|1.1.1.0/24|61568 262504 267613 13335|IGP|190.15.124.18|0|0|1031:1031 1031:4209 1031:6045 1031:7019 1031:8010|false|13335|162.158.177.1|route-views3
–
A|1719519128.843415|190.15.124.18|61568|1.1.1.0/24|61568 262504 267613 13335|IGP|190.15.124.18|0|0|1031:1031 1031:4209 1031:6045 1031:7019 1031:8010|false|13335|162.158.177.1|route-views3

Here we pay attention to the AS path again. This time, AS13335 is the origin AS at the very end of the announcements. This BGP announcement is RPKI Valid, because the origin is correctly AS13335, but this is a route leak of 1.1.1.0/24 because the path itself is invalid.

How do we know it’s a route leak?

Looking at an example path, “199524 1031 262504 267613 13335”, AS267613 is functionally a peer of AS13335 and should not share the 1.1.1.0/24 announcement with its peers or upstreams, only with its customers (its AS cone). AS262504 is a customer of AS267613 and the next adjacent ASN in the path, so the announcement is fine up until that point. Where 1.1.1.0/24 goes wrong is at AS262504, when they announce the prefix to their upstream, AS1031. Furthermore, AS1031 redistributed the advertisement to its peers.

This means AS262504 is the leaking network. AS1031 accepted the leak from its customer, AS262504, and caused wide impact by distributing the route at multiple peering locations globally. AS1031 (Peer-1 Global Internet Exchange) advertises itself as a global peering exchange. Cloudflare is not a customer of AS1031, so 1.1.1.0/24 should never have been redistributed to peers, route-servers, or upstreams of AS1031. It appears that AS1031 does not perform any extensive filtering on customer BGP sessions, and instead just matches on adjacency (in this case, AS262504) and redistributes everything that meets that criterion. Unfortunately, this is irresponsible of AS1031 and causes direct impact to 1.1.1.1 and potentially other services that fall victim to the unguarded route propagation. While the original leaking network was AS262504, the impact was greatly amplified by AS1031 and others when they accepted the hijack or leak and further distributed the announcements.
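The reasoning above is the classic "valley-free" rule: a route learned from a peer or a provider may only be exported to customers. A minimal sketch of that check against the leaked path, with the AS relationships assumed from what this post describes (not authoritative relationship data):

```python
# Assumed relationships based on the incident description.
# "c2p" means the left AS is a customer of the right AS; "p2p" means they peer.
REL = {
    (262504, 267613): "c2p",  # Nova is a customer of Eletronet
    (267613, 13335):  "p2p",  # Eletronet peers with Cloudflare
    (262504, 1031):   "c2p",  # Nova is a customer of PEER 1 Global Internet Exchange
}

def relationship(a, b):
    if (a, b) in REL:
        return REL[(a, b)]
    if (b, a) in REL:
        return {"c2p": "p2c", "p2p": "p2p"}[REL[(b, a)]]
    return None

def find_leaker(as_path):
    """as_path is collector-first, origin-last, e.g. [1031, 262504, 267613, 13335]."""
    for i in range(1, len(as_path) - 1):
        learned = relationship(as_path[i], as_path[i + 1])  # side toward the origin
        sent_to = relationship(as_path[i], as_path[i - 1])  # where it was exported
        # Valley-free: routes learned from a peer or provider go only to customers.
        if learned in ("c2p", "p2p") and sent_to in ("c2p", "p2p"):
            return as_path[i]
    return None

print(find_leaker([1031, 262504, 267613, 13335]))  # 262504, the leaking network
```

Under these assumed relationships, AS262504 is flagged: it learned the route from its provider AS267613 and exported it to another provider, AS1031.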

During the majority of the incident, the leak by AS262504 eventually landed requests within AS267613, which was discarding 1.1.1.1/32 traffic somewhere in their network. In effect, AS262504 amplified the unreachability of 1.1.1.1 by leaking routes upstream.

To limit impact of the route leak, Cloudflare disabled peering in multiple locations with AS267613. The problem did not completely go away, as AS262504 was still leaking a stale path pointing to São Paulo. Requests landing in São Paulo were able to be served, albeit with a high round-trip time back to users. Cloudflare has been engaging with all networks mentioned throughout this post in regard to the leak and future prevention mechanisms, as well as at least one Tier 1 transit provider who accepted 1.1.1.1/32 from AS267613 as a blackhole route that was unauthorized by Cloudflare and caused widespread impact.

Remediation and follow-up steps

BGP hijacks

RPKI origin validation

RPKI has recently reached a major milestone of 50% deployment in terms of prefixes signed by a Route Origin Authorization (ROA). While RPKI certainly helps limit the spread of a hijacked BGP prefix throughout the Internet, we need all networks to do their part, especially major networks with a large number of downstream Autonomous Systems (ASes). During the hijack of 1.1.1.1/32, multiple networks accepted and used the route announced by AS267613 for traffic forwarding.

RPKI and Remote-Triggered Blackholing (RTBH)

A significant amount of the impact during this incident was caused by a Tier 1 provider accepting 1.1.1.1/32 as a blackhole route from a third party that is not Cloudflare. This in itself is a hijack of 1.1.1.1, and a very dangerous one. RTBH is a useful tool for many networks when desperate for a mitigation against large DDoS attacks. The problem is that the BGP filtering used for RTBH is loose in nature, often relying only on AS-SET objects found in Internet Routing Registries. Relying on Route Origin Authorizations (ROAs) would be infeasible for RTBH filtering, as that would require thousands of potential ROAs to be created for a network the size of Cloudflare. Worse, creating specific /32 entries opens up the potential for an individual address such as 1.1.1.1/32, announced by someone pretending to be AS13335, to become the best route to 1.1.1.1 on the Internet and cause severe impact.

AS-SET filtering is not representative of the authority to blackhole a route such as 1.1.1.1/32. Only Cloudflare should be able to blackhole a destination it has the rights to operate. A potential way to fix the lenient filtering of providers on RTBH sessions would again be to leverage RPKI. Using an example from the IETF, the expired draft-spaghetti-sidrops-rpki-doa-00 proposal specified a Discard Origin Authorization (DOA) object that would allow only specific origins to authorize a blackhole action for a prefix. If such an object were signed, and RTBH requests were validated against it, the unauthorized blackhole attempt of 1.1.1.1/32 by AS267613 would have been invalid instead of accepted by the Tier 1 provider.
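Conceptually, validation against a DOA-like object could look like the sketch below. This is purely illustrative of the expired draft's idea; no such signed object or validation pipeline exists in production today, and the data structure is invented:

```python
import ipaddress

# Hypothetical DOA-like entry: only AS13335 may authorize discards in 1.1.1.0/24.
DOA = [{"prefix": ipaddress.ip_network("1.1.1.0/24"), "authorized_as": 13335}]

def blackhole_permitted(prefix, requesting_as):
    """Would an RTBH request for this prefix from this AS be accepted?"""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(d["prefix"]) and requesting_as == d["authorized_as"]
               for d in DOA)

print(blackhole_permitted("1.1.1.1/32", 267613))  # False: request rejected
print(blackhole_permitted("1.1.1.1/32", 13335))   # True: the resource holder may blackhole it
```

With such validation in place, the Tier 1 provider would have rejected AS267613's RTBH request outright.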

BGP best practices

Simply following the BGP best practices laid out by MANRS and rejecting IPv4 prefixes longer than a /24 in the Default-Free Zone (DFZ) would have reduced the impact to 1.1.1.1. Rejecting invalid prefix lengths within the wider Internet should be part of a standard BGP policy for all networks.
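The prefix-length sanity check is simple to express. A sketch of the policy, using /24 for IPv4 as stated above and /48 as the commonly applied IPv6 equivalent:

```python
import ipaddress

def accept_in_dfz(prefix):
    """Reject routes more specific than /24 (IPv4) or /48 (IPv6) in the DFZ."""
    net = ipaddress.ip_network(prefix)
    limit = 24 if net.version == 4 else 48
    return net.prefixlen <= limit

print(accept_in_dfz("1.1.1.0/24"))  # True: normal announcement
print(accept_in_dfz("1.1.1.1/32"))  # False: the hijacked route would be dropped
```

Had every DFZ network applied this filter, the 1.1.1.1/32 announcement could not have spread at all.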

BGP route leaks

Route leak detection

While route leaks are not fully preventable for Cloudflare today, because the Internet inherently relies on trust for interconnection, there are steps we will take to limit their impact.

We have expanded the data sources used by our route leak detection system to cover more networks, and we are in the process of incorporating real-time data into the detection system to allow a more timely response to similar events in the future.

ASPA for BGP

We will continue advocating for the adoption of RPKI into AS Path based route leak prevention. Autonomous System Provider Authorization (ASPA) objects are similar to ROAs, except instead of signing prefixes with an authorized origin AS, the AS itself is signed with a list of provider networks that are allowed to propagate their routes. So, in the case of Cloudflare, only valid upstream transit providers would be signed as authorized to advertise AS13335 prefixes such as 1.1.1.0/24 upstream.

In the route leak example where AS262504 (a customer of AS267613) shared 1.1.1.0/24 upstream, BGP ASPA would have caught this leak if AS267613 had signed their authorized providers and AS1031 had validated paths against that list. Similar to RPKI origin validation, however, this will be a long-term effort, dependent on networks, especially large providers, rejecting invalid AS paths based on ASPA objects.
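A single hop of that ASPA check can be sketched as below. The provider set is invented for illustration (64500 and 64501 are documentation ASNs, not Eletronet's real providers), and real ASPA verification walks the entire AS path against signed RPKI objects rather than checking one hop:

```python
# aspa[asn] = set of ASes authorized to act as that ASN's upstream providers.
# The entries are hypothetical, for illustration only.
aspa = {
    267613: {64500, 64501},  # assumed transit providers for Eletronet
}

def hop_state(customer_as, alleged_provider_as):
    """Check one hop of a path: may alleged_provider_as sit upstream of
    customer_as, i.e. is it one of that AS's registered providers?"""
    providers = aspa.get(customer_as)
    if providers is None:
        return "unknown"  # no ASPA object registered for this AS
    return "valid" if alleged_provider_as in providers else "invalid"

# In the leaked path "1031 262504 267613 13335", AS262504 sits upstream of
# AS267613, but it is actually Eletronet's customer, not its provider:
print(hop_state(267613, 262504))  # invalid: the leaked path would be rejected
```

Because AS262504 does not appear in AS267613's (hypothetical) provider set, any path showing it upstream of AS267613 fails validation, which is exactly the leak in this incident.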

Other potential approaches

Alternative approaches to ASPA do exist, in various stages of adoption, and are worth noting, although there is no guarantee that any of them will reach wide Internet deployment.

RFC 9234, for example, uses a concept of peer roles within BGP capabilities and attributes. Depending on the configuration of routers along a path, an “Only-To-Customer” (OTC) attribute can be added to updates, preventing the upstream spread of a prefix such as 1.1.1.0/24 along a leaked path. The downside is that BGP configuration needs to be completed to assign the various roles to each peering session, and vendor support still needs to mature before this is feasible for production use with positive results.

Like all approaches to solving route leaks, cooperation amongst network operators on the Internet is required for success.

Conclusion

Cloudflare’s 1.1.1.1 DNS resolver service fell victim to a simultaneous BGP hijack and BGP route leak event. While the actions of external networks are outside of Cloudflare’s direct control, we intend to take every step, both within the Internet community and internally at Cloudflare, to detect problems more quickly and lessen the impact to our users.

Long term, Cloudflare continues to support adoption of RPKI-based origin validation, as well as AS path validation. The former exists with deployment across a wide array of the world’s largest networks, and the latter is still in draft phase at the IETF (Internet Engineering Task Force). In the meantime, to check if your ISP is enforcing RPKI origin validation, you can always visit isbgpsafeyet.com and Test Your ISP.



Bryton Herdes|@next_hopself
Mingwei Zhang|@heymingwei
Tanner Ryan|@TheTannerRyan
