
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 11 Apr 2026 02:47:04 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Fresh insights from old data: corroborating reports of Turkmenistan IP unblocking and firewall testing]]></title>
            <link>https://blog.cloudflare.com/fresh-insights-from-old-data-corroborating-reports-of-turkmenistan-ip/</link>
            <pubDate>Mon, 03 Nov 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare used historical data to investigate reports of potential new firewall tests in Turkmenistan. Shifts in TCP resets/timeouts across ASNs corroborate large-scale network control system changes.
 ]]></description>
            <content:encoded><![CDATA[ <p>Here at Cloudflare, we frequently use and write about data in the present. But sometimes understanding the present begins with digging into the past.  </p><p>We recently learned of a 2024 <a href="https://turkmen.news/internet-amnistiya-v-turkmenistane-razblokirovany-3-milliarda-ip-adresov-hostingi-i-cdn/"><u>turkmen.news article</u></a> (available in Russian) that reports <a href="https://radar.cloudflare.com/tm"><u>Turkmenistan</u></a> experienced “an unprecedented easing in blocking,” causing over 3 billion previously-blocked IP addresses to become reachable. The same article reports that one of the reasons for unblocking IP addresses was that Turkmenistan may have been testing a new firewall. (The Turkmen government’s tight control over the country’s Internet access <a href="https://www.bbc.com/news/world-asia-16095369"><u>is well-documented</u></a>.) </p><p>Indeed, <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a> shows a surge of requests coming from Turkmenistan around the same time, as we’ll show below. But we had an additional question: Does the firewall activity show up on Radar, as well? Two years ago, we launched the <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>TCP resets and timeouts dashboard on Radar</u></a> to give a window into the TCP connections to Cloudflare that close due to resets and timeouts. These stand out because they are considered ungraceful mechanisms to close TCP connections, according to the TCP specification. </p><p>In this blog post, we go back in time to share what Cloudflare saw in connection resets and timeouts. We must remind our readers that, as passive observers, there are <a href="https://blog.cloudflare.com/connection-tampering/#limitations-of-our-data"><u>limitations on what we can glean from the data</u></a>. For example, our data can’t reveal attribution. 
Even so, the ability to observe our environment <a href="https://blog.cloudflare.com/tricky-internet-measurement/"><u>can be insightful</u></a>. In a recent example, our visibility into resets and timeouts helped corroborate reports of large-scale <a href="https://blog.cloudflare.com/russian-internet-users-are-unable-to-access-the-open-internet/"><u>blocking and traffic tampering by Russia</u></a>.</p>
    <div>
      <h3>Turkmenistan requests where there were none before</h3>
      <a href="#turkmenistan-requests-where-there-were-none-before">
        
      </a>
    </div>
    <p>Let’s look first at the number of requests, since those should increase if IP addresses are unblocked. In mid-June 2024, Cloudflare began to see a noticeable increase in HTTP requests, consistent with <a href="https://turkmen.news/internet-amnistiya-v-turkmenistane-razblokirovany-3-milliarda-ip-adresov-hostingi-i-cdn/"><u>reports</u></a> of Turkmenistan unblocking IPs.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Kqaxxjv9g52RVMWg92AYu/e57468cf523702cadd634c34775be033/BLOG_3069_2.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/traffic/tm?dateStart=2024-06-01&amp;dateEnd=2024-06-30"><sup>Cloudflare Radar</sup></a></p>
    <div>
      <h3>Overall TCP resets and timeouts</h3>
      <a href="#overall-tcp-resets-and-timeouts">
        
      </a>
    </div>
    <p>The Transmission Control Protocol (TCP) is a lower-layer mechanism used to create a connection between clients and servers, and carries <a href="https://radar.cloudflare.com/adoption-and-usage#http1x-vs-http2-vs-http3"><u>70% of HTTP traffic</u></a> to Cloudflare. A TCP connection works <a href="https://blog.cloudflare.com/connection-tampering/#explaining-tampering-with-telephone-calls"><u>much like a telephone call</u></a> between humans, who follow graceful conventions to end a call—and who are acutely aware that conventions are broken when a call ends abruptly.  </p><p>TCP also defines conventions to end the connection gracefully, and we developed <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>mechanisms to detect</u></a> when they aren’t followed. An ungraceful end is triggered by a reset instruction or a timeout. Some are due to benign artifacts of software design or human user behaviours. However, they are sometimes exploited by <a href="https://blog.cloudflare.com/connection-tampering"><u>third parties to close connections</u></a>, in everything from school and enterprise firewalls and software, to zero-rating on mobile plans, to nation-state filtering.</p><p>When we look at connections from Turkmenistan, we see that on June 13, 2024, the combined proportion of the four coloured regions increases; each coloured region represents ungraceful ends at a distinct stage of the connection lifetime. In addition to the combined increase, the relative proportions between stages (or colours) change as well.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1hNDpdNS9lDPKg3jFHigiL/ff3de33af7974c5d32ba421cbbc3c42e/BLOG_3069_3.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2024-06-01&amp;dateEnd=2024-06-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>Further changes appeared in the weeks that followed. Among them are an increase in Post-PSH (orange) anomalies starting around July 4; a reduction in Post-ACK (light blue) anomalies around July 13; and an increase in anomalies later in connections (green) starting July 22.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6IavKOkF7tB02MtNqJPqqD/f08c78f65894e751b7c9fce9820dee85/BLOG_3069_4.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2024-07-01&amp;dateEnd=2024-07-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>The shifts above <i>could</i> be explained by a large firewall system. It’s important to keep in mind that data in each of the connection stages (captured by the four coloured regions in the graphs) can be explained by browser implementations or user actions. However, shifts at this scale would require a great number of browsers or users doing the same thing. Similarly, individual changes in behaviour would be lost unless they occur in large numbers at the same time.</p>
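<p>To make the stage taxonomy concrete, here is a minimal sketch of how an ungracefully-closed connection might be bucketed into the stages named in these charts (Post-ACK, Post-PSH, Later), plus a mid-handshake stage. The data structure and thresholds are illustrative assumptions, not Cloudflare’s production logic:</p>

```python
# Hypothetical sketch: bucket a closed TCP connection into a lifetime stage.
# Field names and thresholds are illustrative, not Cloudflare's actual rules.
from dataclasses import dataclass

@dataclass
class ConnSummary:
    handshake_complete: bool  # did the three-way handshake finish?
    data_packets_seen: int    # client packets carrying payload
    closed_by: str            # "rst", "timeout", or "fin"

def close_stage(conn: ConnSummary) -> str:
    if conn.closed_by == "fin":
        return "graceful"      # normal FIN-based close, not an anomaly
    if not conn.handshake_complete:
        return "post-syn"      # reset or timeout mid-handshake
    if conn.data_packets_seen == 0:
        return "post-ack"      # handshake done, but no payload ever arrived
    if conn.data_packets_seen == 1:
        return "post-psh"      # closed right after the first data packet
    return "later"             # anomaly deeper into the connection
```

<p>A timeout with the handshake complete but no payload seen, for instance, is what a server could observe if a middlebox silently dropped the client’s first data packet.</p>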
    <div>
      <h3>Digging down to individual networks</h3>
      <a href="#digging-down-to-individual-networks">
        
      </a>
    </div>
    <p>We’ve learned that it can be helpful to look at the data for individual networks, which can reveal common patterns between networks in different regions <a href="https://blog.cloudflare.com/tcp-resets-timeouts/#zero-rating-in-mobile-networks"><u>operated by single entities</u></a>. </p><p>Looking at individual networks within Turkmenistan, trends and timelines appear more pronounced. July 22 in particular sees greater proportions of anomalies associated with the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>Server Name Indication</u></a>, or domain name, rather than the IP address (dark blue), although the connection stage where the anomalies appear varies by individual network.</p><p>The general Turkmenistan trends are largely mirrored in connections from <a href="https://radar.cloudflare.com/as20661"><u>AS20661 (TurkmenTelecom)</u></a>, indicating that this <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>autonomous system</u></a> (AS) accounts for <a href="https://radar.cloudflare.com/tm#autonomous-systems"><u>a large proportion of Turkmenistan’s traffic</u></a> to Cloudflare’s network. There is a notable reduction in Post-ACK (light blue) anomalies starting around July 26.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ukNOB1CYUAPW2s7ofdqMK/7d1dca367374db90627413e2c40a6ee3/BLOG_3069_5.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2024-07-01&amp;dateEnd=2024-07-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>A different picture emerges from <a href="https://radar.cloudflare.com/as51495"><u>AS51495 (Ashgabat City Telephone Network)</u></a>. Post-ACK anomalies almost completely disappear on July 12, corresponding with an increase in anomalies during the Post-PSH stage. An increase of anomalies in the Later (green) connection stage on July 22 is apparent for this AS as well.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7btBYWx2VVVg0MH10yY9ot/17e87bf94f97b1cd43139e432f189770/BLOG_3069_6.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2024-07-01&amp;dateEnd=2024-07-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>Finally, for <a href="https://radar.cloudflare.com/as59974"><u>AS59974 (Altyn Asyr)</u></a>, you can see below that there is a clear spike in Post-ACK anomalies starting July 22. This is the stage of the connection where a firewall could have seen the SNI and chosen to drop the packets immediately, so they never reach Cloudflare’s servers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pxUHjzkRwnbmaSsgkhiKd/b56fbc84e2fdcd8b889b6e8b3a68dc40/BLOG_3069_7.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2024-07-01&amp;dateEnd=2024-07-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p>
    <div>
      <h3>Timeouts and resets in context, never in isolation</h3>
      <a href="#timeouts-and-resets-in-context-never-isolation">
        
      </a>
    </div>
    <p>We’ve previously discussed <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>how to use the resets and timeouts</u></a> data with care because, while useful, it can also be misinterpreted. Radar’s data on resets and timeouts is unique among operators, but in isolation it’s incomplete and subject to human bias. </p><p>Take the figure above for AS59974, where Post-ACK (light blue) anomalies markedly increased on July 22. The Radar view is proportional, meaning that the increase in proportion could be explained by greater numbers of anomalies – but could also be explained, for example, by a smaller number of valid requests. Indeed, looking at the HTTP request levels for the same AS, there was a similarly pronounced drop starting on the same day, as shown below. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PAYPpcFeInis6zo4lWrSx/f28a1f84fbe5b1c21659911b11331c30/BLOG_3069_8.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2023-10-01&amp;dateEnd=2023-11-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p><p>If we look at the same two graphs before July 22, however, rates of reset and timeout anomalies do not appear to mirror the very large shifts up and down in HTTP requests.</p>
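<p>A toy calculation shows why a proportional view is ambiguous on its own. Holding the anomaly count fixed, a drop in total connections is enough to make the anomaly share jump (the numbers below are invented purely for illustration):</p>

```python
# Invented numbers: the anomaly *share* quadruples even though the absolute
# number of anomalous connections is unchanged, because the total fell.
def anomaly_share(anomalies: int, total: int) -> float:
    return anomalies / total

before = anomaly_share(50, 1000)  # 50 of 1000 connections anomalous: 5%
after = anomaly_share(50, 250)    # same 50 anomalies, fewer requests: 20%
```

<p>This is why the proportional charts need to be read alongside absolute HTTP request levels.</p>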
    <div>
      <h3>Looking ahead can also mean looking behind</h3>
      <a href="#looking-ahead-can-also-mean-looking-behind">
        
      </a>
    </div>
    <p>The Radar charts above offer a way to analyze news events from a different angle: requests alongside TCP connection resets and timeouts. Does this data tell us definitively that new firewalls were being tested in Turkmenistan? No. But the trends in the data are consistent with what we could expect to see if that were the case.</p><p>When thinking about ways to use the resets and timeouts data going forward, we’d encourage also looking at the data in retrospect, or even further into the past, for added context.</p><p>A natural question might be, for example, “If Turkmenistan stopped blocking IPs in mid-2024, what did the data say beforehand?” The figure below captures October and November 2023. (The red-shaded region contains missing data due to the <a href="https://blog.cloudflare.com/post-mortem-on-cloudflare-control-plane-and-analytics-outage"><u>Nov. 2 Cloudflare control plane and metrics outage</u></a>.) Signals about the Internet in Turkmenistan were evolving well before the <a href="https://turkmen.news/internet-amnistiya-v-turkmenistane-razblokirovany-3-milliarda-ip-adresov-hostingi-i-cdn/"><u>news article</u></a> that prompted us to look.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2W4MfieKNV24PmvynAAIfO/af42a2328059eb15fba0619372973887/BLOG_3069_9.png" />
          </figure><p><sup>Source: </sup><a href="https://radar.cloudflare.com/security/network-layer/tm?dateStart=2023-10-01&amp;dateEnd=2023-11-30#tcp-resets-and-timeouts"><sup>Cloudflare Radar</sup></a></p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>To learn more, see our guide about <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>how to use the resets and timeouts data available on Radar</u></a>, as well as the technical details about our <a href="https://blog.cloudflare.com/connection-tampering/"><u>third-party tampering measurement </u></a>and some perspectives by a former <a href="https://blog.cloudflare.com/experience-of-data-at-scale/"><u>intern who helped drive</u></a> the study. </p><p>We’re proud to offer a unique view of TCP connection anomalies on Radar. It’s a testament to the long-lived benefits that emerge when approaching <a href="https://blog.cloudflare.com/tricky-internet-measurement/"><u>Internet measurement as a science</u></a>. In keeping with the open spirit of science, we’ve also shared how we<a href="https://blog.cloudflare.com/tricky-internet-measurement/"><u> detect and log resets and timeouts</u></a> so that others, whether hobbyists or other large operators, can reproduce the observability on their own servers.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Internet Shutdown]]></category>
            <category><![CDATA[Internet Trends]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">404c64k0KinGRYZkfe0xum</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[One IP address, many users: detecting CGNAT to reduce collateral effects]]></title>
            <link>https://blog.cloudflare.com/detecting-cgn-to-reduce-collateral-damage/</link>
            <pubDate>Wed, 29 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ IPv4 scarcity drives widespread use of Carrier-Grade Network Address Translation, a practice in ISPs and mobile networks that places many users behind each IP address, along with their collective activity and volumes of traffic. We introduce the method we’ve developed to detect large-scale IP sharing globally and mitigate the issues that result.  ]]></description>
            <content:encoded><![CDATA[ <p>IP addresses have historically been treated as stable identifiers for non-routing purposes such as geolocation and security operations. Many operational and security mechanisms, such as blocklists, rate-limiting, and anomaly detection, rely on the assumption that a single IP address represents a cohesive, accountable entity or even, possibly, a specific user or device.</p><p>But the structure of the Internet has changed, and those assumptions can no longer be made. Today, a single IPv4 address may represent hundreds or even thousands of users due to widespread use of <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT"><u>Carrier-Grade Network Address Translation (CGNAT)</u></a>, VPNs, and proxy middleboxes. This concentration of traffic can result in <a href="https://blog.cloudflare.com/consequences-of-ip-blocking/"><u>significant collateral damage</u></a> – especially to users in developing regions of the world – when security mechanisms are applied without taking into account the multi-user nature of IPs.</p><p>This blog post presents our approach to detecting large-scale IP sharing globally. We describe how we <a href="https://www.cloudflare.com/learning/ai/how-to-secure-training-data-against-ai-data-leaks/">build reliable training data</a>, and how detection can help avoid unintentional bias affecting users in regions where IP sharing is most prevalent. Arguably, it's those regional variations that motivate our efforts more than anything else. </p>
    <div>
      <h2>Why this matters: Potential socioeconomic bias</h2>
      <a href="#why-this-matters-potential-socioeconomic-bias">
        
      </a>
    </div>
    <p>Our work was initially motivated by a simple observation: CGNAT is a likely unseen source of bias on the Internet. Those biases would be more pronounced wherever there are more users and fewer addresses, such as in developing regions. And these biases can have profound implications for user experience, network operations, and digital equity.</p><p>This is understandable for many reasons, not least necessity. Countries in the developing world often have significantly fewer available IPs, and more users. The disparity is a historical artifact of how the Internet grew: the largest blocks of IPv4 addresses were allocated decades ago, primarily to organizations in North America and Europe, leaving a much smaller pool for regions where Internet adoption expanded later. </p><p>To visualize the IPv4 allocation gap, we plot country-level ratios of users to IP addresses in the figure below. We take online user estimates from the <a href="https://data.worldbank.org/indicator/IT.NET.USER.ZS"><u>World Bank Group</u></a> and the number of IP addresses in a country from Regional Internet Registry (RIR) records. The colour-coded map that emerges shows that the usage of each IP address is most concentrated in regions that generally have poor Internet penetration. For example, large portions of Africa and South Asia appear with the highest user-to-IP ratios. Conversely, the lowest user-to-IP ratios appear in Australia, Canada, Europe, and the USA — the very countries that otherwise have the highest Internet user penetration numbers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2YBdqPx0ALt7pY7rmQZyLQ/049922bae657a715728700c764c4af16/BLOG-3046_2.png" />
          </figure><p>The scarcity of IPv4 address space means that regional differences can only worsen as Internet penetration rates increase. A natural consequence of increased demand in developing regions is that ISPs will rely even more heavily on CGNAT; this is compounded by the fact that CGNAT is common in the mobile networks that users in developing regions depend on so heavily. All of this means that <a href="https://datatracker.ietf.org/doc/html/rfc7021"><u>actions known to be based</u></a> on IP reputation or behaviour would disproportionately affect developing economies. </p><p>Cloudflare is a global network in a global Internet. We are sharing our methodology so that others might benefit from our experience and help to mitigate unintended effects. First, let’s better understand CGNAT.</p>
    <div>
      <h3>When one IP address serves multiple users</h3>
      <a href="#when-one-ip-address-serves-multiple-users">
        
      </a>
    </div>
    <p>Large-scale IP address sharing is primarily achieved through two distinct methods. The first, and more familiar, involves services like VPNs and proxies. These tools emerge from a need to secure corporate networks or improve users' privacy, but can be used to circumvent censorship or even improve performance. Their deployment also tends to concentrate traffic from many users onto a small set of exit IPs. Typically, individuals are aware they are using such a service, whether for personal use or as part of a corporate network.</p><p>Separately, another form of large-scale IP sharing often goes unnoticed by users: <a href="https://en.wikipedia.org/wiki/Carrier-grade_NAT"><u>Carrier-Grade NAT (CGNAT)</u></a>. One way to explain CGNAT is to start with a much smaller version of network address translation (NAT) that very likely exists in your home broadband router, formally called Customer Premises Equipment (CPE), which translates unseen private addresses in the home to visible, routable addresses in the ISP’s network. Once traffic leaves the home, an ISP may add an additional carrier-level address translation that causes many households or unrelated devices to appear behind a single IP address.</p><p>The crucial difference between these forms of large-scale IP sharing is user choice: carrier-grade address sharing is not a user choice, but is configured directly by Internet Service Providers (ISPs) within their access networks. Users are not aware that CGNATs are in use. </p><p>The primary driver for this technology, understandably, is the exhaustion of the IPv4 address space. IPv4's 32-bit architecture supports only 4.3 billion unique addresses — a capacity that, while once seemingly vast, has been completely outpaced by the Internet's explosive growth. By the early 2010s, Regional Internet Registries (RIRs) had depleted their pools of unallocated IPv4 addresses. 
This left ISPs unable to easily acquire new address blocks, forcing them to maximize the use of their existing allocations.</p><p>While the long-term solution is the transition to IPv6, CGNAT emerged as the immediate, practical workaround. Instead of assigning a unique public IP address to each customer, ISPs use CGNAT to place multiple subscribers behind a single, shared IP address. This practice solves the problem of IP address scarcity. Since translated addresses are not publicly routable, CGNATs have also had the positive side effect of protecting many home devices that might be vulnerable to compromise. </p><p>CGNATs also create significant operational fallout stemming from the fact that hundreds or even thousands of clients can appear to originate from a single IP address. <b>This means an IP-based security system may inadvertently block or throttle large groups of users as a result of a single user behind the CGNAT engaging in malicious activity.</b></p><p>This isn't a new or niche issue. It has been recognized for years by the Internet Engineering Task Force (IETF), the organization that develops the core technical standards for the Internet. These standards, known as Requests for Comments (RFCs), act as the official blueprints for how the Internet should operate. <a href="https://www.rfc-editor.org/rfc/rfc6269.html"><u>RFC 6269</u></a>, for example, discusses the challenges of IP address sharing, while <a href="https://datatracker.ietf.org/doc/html/rfc7021"><u>RFC 7021</u></a> examines the impact of CGNAT on network applications. 
Both explain that traditional abuse-mitigation techniques, such as blocklisting or rate-limiting, assume a one-to-one relationship between IP addresses and users: when malicious activity is detected, the offending IP address can be blocked to prevent further abuse.</p><p>In shared IPv4 environments, such as those using CGNAT or other address-sharing techniques, this assumption breaks down because multiple subscribers can appear under the same public IP. Blocking the shared IP therefore penalizes many innocent users along with the abuser. In 2015, Ofcom, the UK's telecommunications regulator, reiterated these concerns in a <a href="https://oxil.uk/research/mc159-report-on-the-implications-of-carrier-grade-network-address-translators-final-report"><u>report</u></a> on the implications of CGNAT where they noted that, “In the event that an IPv4 address is blocked or blacklisted as a source of spam, the impact on a CGNAT would be greater, potentially affecting an entire subscriber base.” </p><p>The hope was that CGNAT would be only a temporary solution until the eventual switch to IPv6 but, as the old proverb says, nothing is more permanent than a temporary solution. As IPv6 deployment continues to lag, <a href="https://blog.apnic.net/2022/01/19/ip-addressing-in-2021/"><u>CGNAT deployments have become increasingly common</u></a>, and so have the related problems. </p>
    <div>
      <h2>CGNAT detection at Cloudflare</h2>
      <a href="#cgnat-detection-at-cloudflare">
        
      </a>
    </div>
    <p>To enable a fairer treatment of users behind CGNAT IPs by security techniques that rely on IP reputation, our goal is to identify large-scale IP sharing. This allows traffic filtering to be better calibrated and collateral damage minimized. Additionally, we want to distinguish CGNAT IPs from other large-scale sharing (LSS) IP technologies, such as VPNs and proxies, because we may need to take different approaches to different kinds of IP-sharing technologies.</p><p>To do this, we decided to take advantage of Cloudflare’s extensive view of active IP clients and build a supervised learning classifier that would distinguish CGNAT and VPN/proxy IPs from IPs that are allocated to a single subscriber (non-LSS IPs), based on behavioural characteristics. The figure below shows an overview of our supervised classifier: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tFXZByKRCYxVaAFDG0Xda/d81e7f09b5d12e03e39c266696df9cc3/BLOG-3046_3.png" />
          </figure><p>While our classification approach is straightforward, a significant challenge is the lack of a reliable, comprehensive, and labeled dataset of CGNAT IPs for our training dataset.</p>
    <div>
      <h3>Detecting CGNAT using public data sources </h3>
      <a href="#detecting-cgnat-using-public-data-sources">
        
      </a>
    </div>
    <p>Detection begins by building an initial dataset of IPs believed to be associated with CGNAT. Cloudflare has vast HTTP and traffic logs. Unfortunately, there is no signal or label in any request to indicate whether its source IP is behind a CGNAT. </p><p>To build an extensive labelled dataset to train our ML classifier, we employ a combination of network measurement techniques, as described below. We rely on public data sources to help disambiguate an initial set of large-scale shared IP addresses from others in Cloudflare’s logs.   </p>
    <div>
      <h4>Distributed Traceroutes</h4>
      <a href="#distributed-traceroutes">
        
      </a>
    </div>
    <p>The presence of a client behind CGNAT can often be inferred through traceroute analysis. CGNAT requires ISPs to insert a NAT step that typically uses the Shared Address Space (<a href="https://datatracker.ietf.org/doc/html/rfc6598"><u>RFC 6598</u></a>) after the customer premises equipment (CPE). By running a traceroute from the client to its own public IP and examining the hop sequence, the appearance of an address within 100.64.0.0/10 between the first private hop (e.g., 192.168.1.1) and the public IP is a strong indicator of CGNAT.</p><p>Traceroute can also reveal multi-level NAT, which CGNAT requires, as shown in the diagram below. If the ISP assigns the CPE a private <a href="https://datatracker.ietf.org/doc/html/rfc1918"><u>RFC 1918</u></a> address that appears right after the local hop, this indicates at least two NAT layers. While ISPs sometimes use private addresses internally without CGNAT, observing private or shared ranges immediately downstream combined with multiple hops before the public IP strongly suggests CGNAT or equivalent multi-layer NAT.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57k4gwGCHcPggIWtSy36HU/6cf8173c1a4c568caa25a1344a516e9e/BLOG-3046_4.png" />
          </figure><p>Although traceroute accuracy depends on router configurations, detecting private and shared IP ranges is a reliable way to identify large-scale IP sharing. We apply this method to distributed traceroutes from over 9,000 RIPE Atlas probes to classify hosts as behind CGNAT, single-layer NAT, or no NAT.</p>
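<p>The hop-sequence heuristic described above can be sketched in a few lines. This is a simplified illustration, assuming the traceroute yields an ordered list of hop IPs (with <code>*</code> for non-responding hops):</p>

```python
# Simplified sketch of the CGNAT heuristic: a Shared Address Space hop
# (RFC 6598, 100.64.0.0/10) appearing after the first private RFC 1918 hop.
import ipaddress

SHARED = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598
PRIVATE = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]  # RFC 1918

def suggests_cgnat(hops: list[str]) -> bool:
    seen_private = False
    for hop in hops:
        try:
            addr = ipaddress.ip_address(hop)
        except ValueError:       # "*" or a malformed hop: skip it
            continue
        if any(addr in net for net in PRIVATE):
            seen_private = True
        elif addr in SHARED and seen_private:
            return True          # shared-space hop after a private hop
    return False
```

<p>A production classifier would, as noted above, also weigh multi-level NAT and router configuration quirks; this sketch captures only the strongest single signal.</p>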
    <div>
      <h4>Scraping WHOIS and PTR records</h4>
      <a href="#scraping-whois-and-ptr-records">
        
      </a>
    </div>
    <p>Many operators encode metadata about their IPs in corresponding reverse DNS pointer (PTR) records that can signal administrative attributes and geographic information. We first query the DNS for PTR records for the full IPv4 space and then filter the responses for a set of known keywords that indicate a CGNAT deployment. For example, each of the following three records matches a keyword (<code>cgnat</code>, <code>cgn</code> or <code>lsn</code>) used to detect CGNAT address space:</p><p><code>node-lsn.pool-1-0.dynamic.totinternet.net
103-246-52-9.gw1-cgnat.mobile.ufone.nz
cgn.gsw2.as64098.net</code></p><p>WHOIS and Internet Routing Registry (IRR) records may also contain organizational names, remarks, or allocation details that reveal whether a block is used for CGNAT pools or residential assignments. </p><p>Given that both PTR and WHOIS records may be manually maintained and therefore stale, we try to sanitize the extracted data by validating that the corresponding ISPs indeed use CGNAT, based on customer and market reports. </p>
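<p>The keyword filter over PTR names can be sketched as a token-boundary match, so that substrings inside unrelated hostname labels don’t trigger it. The keyword list below is just the three examples named above; a real deployment would use a longer, curated list:</p>

```python
import re

# Match cgnat/cgn/lsn only as whole tokens delimited by '.', '-', or '_',
# to avoid false hits inside unrelated hostname labels.
CGNAT_TOKENS = re.compile(r"(?:^|[.\-_])(?:cgnat|cgn|lsn)(?:[.\-_]|$)")

def ptr_suggests_cgnat(ptr: str) -> bool:
    return CGNAT_TOKENS.search(ptr.lower()) is not None
```
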
    <div>
      <h4>Collecting VPN and proxy IPs </h4>
      <a href="#collecting-vpn-and-proxy-ips">
        
      </a>
    </div>
    <p>Compiling a list of VPN and proxy IPs is more straightforward, as we can directly find such IPs in public service directories for anonymizers. We also subscribe to multiple VPN providers, and we collect the IPs allocated to our clients by connecting to a unique HTTP endpoint under our control. </p>
    <div>
      <h2>Modeling CGNAT with machine learning</h2>
      <a href="#modeling-cgnat-with-machine-learning">
        
      </a>
    </div>
    <p>By combining the above techniques, we accumulated a labeled dataset of more than 200K CGNAT IPs, 180K VPN and proxy IPs, and close to 900K IPs that are not LSS IPs. These were the entry points to modeling with machine learning.</p>
    <div>
      <h3>Feature selection</h3>
      <a href="#feature-selection">
        
      </a>
    </div>
    <p>Our hypothesis was that aggregated activity from CGNAT IPs is distinguishable from activity generated by other non-CGNAT IP addresses. Our feature extraction is an evaluation of that hypothesis — since networks do not disclose CGNAT and other uses of IPs, the quality of our inference is strictly dependent on our confidence in the training data. We claim the key discriminator is diversity, not just volume. For example, VM-hosted scanners may generate high numbers of requests, but with low information diversity. Similarly, globally routable CPEs may have individually unique characteristics, but with volumes that are less likely to be caught at lower sampling rates.</p><p>In our feature extraction, we parse a 1% sample of HTTP request logs for distinguishing features of IPs compiled in our reference set, and the same features for the corresponding /24 prefix (namely IPs with the same first 24 bits in common). We analyse these features for each VPN, proxy, CGNAT, and non-LSS IP. We find that features from the following broad categories are key discriminators for the different types of IPs in our training dataset:</p><ul><li><p><b>Client-side signals:</b> We analyze the aggregate properties of clients connecting from an IP. A large, diverse user base (like on a CGNAT) naturally presents a much wider statistical variety of client behaviors and connection parameters than a single-tenant server or a small business proxy.</p></li><li><p><b>Network and transport-level behaviors:</b> We examine traffic at the network and transport layers. The way a large-scale network appliance (like a CGNAT) manages and routes connections often leaves subtle, measurable artifacts in its traffic patterns, such as in port allocation and observed network timing.</p></li><li><p><b>Traffic volume and destination diversity:</b> We also model the volume and "shape" of the traffic. 
An IP representing thousands of independent users will, on average, generate a higher volume of requests and target a much wider, less correlated set of destinations than an IP representing a single user.</p></li></ul><p>Crucially, to distinguish CGNAT from VPNs and proxies (which is absolutely necessary for calibrated security filtering), we had to aggregate these features at two different scopes: per-IP and per /24 prefixes. CGNAT IPs are typically allocated large blocks of IPs, whereas VPNs IPs are more scattered across different IP prefixes. </p>
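<p>The two-scope aggregation can be sketched as follows. This is a minimal illustration, not Cloudflare's production pipeline: the record fields (client IP, user agent, destination host) and the simple diversity counters are assumptions chosen to mirror the feature categories above.</p>

```python
import ipaddress
from collections import defaultdict

def extract_features(records):
    """Aggregate diversity features at per-IP and per-/24 scope.

    `records` is an iterable of (client_ip, user_agent, destination)
    tuples drawn from a sampled request log. Field names are
    illustrative, not an actual log schema.
    """
    blank = lambda: {"requests": 0, "uas": set(), "dsts": set()}
    per_ip = defaultdict(blank)
    per_prefix = defaultdict(blank)

    for ip, ua, dst in records:
        # Map the IP to its covering /24 prefix.
        prefix = str(ipaddress.ip_network(f"{ip}/24", strict=False))
        for agg in (per_ip[ip], per_prefix[prefix]):
            agg["requests"] += 1
            agg["uas"].add(ua)
            agg["dsts"].add(dst)

    def finalize(agg):
        # Diversity, not raw volume, is the key discriminator.
        return {
            "requests": agg["requests"],
            "ua_diversity": len(agg["uas"]),
            "dst_diversity": len(agg["dsts"]),
        }

    return ({ip: finalize(a) for ip, a in per_ip.items()},
            {p: finalize(a) for p, a in per_prefix.items()})
```

<p>A CGNAT IP would show high user-agent and destination diversity at both scopes, whereas a scattered VPN exit IP tends to stand out only at the per-IP scope.</p>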
    <div>
      <h3>Classification results</h3>
      <a href="#classification-results">
        
      </a>
    </div>
    <p>We compute the above features from HTTP logs over 24-hour intervals to increase data volume and reduce noise due to DHCP IP reallocation. The dataset is split into 70% training and 30% testing sets with disjoint /24 prefixes, and VPN and proxy labels are merged due to their similarity and lower operational importance compared to CGNAT detection.</p><p>Then we train a multi-class <a href="https://xgboost.readthedocs.io/en/stable/"><u>XGBoost</u></a> model with class weighting to address imbalance, assigning each IP to the class with the highest predicted probability. XGBoost is well-suited for this task because it efficiently handles large feature sets, offers strong regularization to prevent overfitting, and delivers high accuracy with limited parameter tuning. The classifier achieves 0.98 accuracy, 0.97 weighted F1, and 0.04 log loss. The figure below shows the confusion matrix of the classification.</p>
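<p>The two dataset-preparation steps described above, inverse-frequency class weights to counter imbalance and a split that keeps every /24 prefix entirely on one side, can be sketched with the standard library alone. The sample dictionary fields are illustrative; in practice the weighted samples would be fed to an XGBoost multi-class classifier.</p>

```python
import random
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights so minority classes are not drowned out."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

def split_by_prefix(samples, test_frac=0.3, seed=0):
    """70/30 split with disjoint /24 prefixes: all samples from a prefix
    land on the same side, so the model cannot memorize a prefix it has
    already seen in training."""
    prefixes = sorted({s["prefix"] for s in samples})
    rng = random.Random(seed)
    rng.shuffle(prefixes)
    cut = int(len(prefixes) * test_frac)
    test_prefixes = set(prefixes[:cut])
    train = [s for s in samples if s["prefix"] not in test_prefixes]
    test = [s for s in samples if s["prefix"] in test_prefixes]
    return train, test
```

<p>Splitting on prefixes rather than individual IPs matters because the classifier uses prefix-level features: an ordinary random split would leak prefix information between train and test sets.</p>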
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26i81Pe0yjlftHfIDrjB5X/45d001447fc52001a25176c8036a92cb/BLOG-3046_5.png" />
          </figure><p>Our model is accurate for all three labels. The errors observed are mainly misclassifications of VPN/proxy IPs as CGNATs, mostly for VPN/proxy IPs that are within a /24 prefix that is also shared by broadband users outside of the proxy service. We also evaluate the prediction accuracy using <a href="https://scikit-learn.org/stable/modules/cross_validation.html"><u>k-fold cross validation</u></a>, which provides a more reliable estimate of performance by training and validating on multiple data splits, reducing variance and overfitting compared to a single train–test split. We select 10 folds and we evaluate the <a href="https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc"><u>Area Under the ROC Curve</u></a> (AUC) and the multi-class logloss. We achieve a macro-average AUC of 0.9946 (σ=0.0069) and log loss of 0.0429 (σ=0.0115). Prefix-level features are the most important contributors to classification performance.</p>
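<p>Mechanically, k-fold evaluation reduces to generating k disjoint validation folds and summarizing the per-fold scores as a mean and standard deviation, which is the form the numbers above are quoted in. A stdlib-only sketch (in practice, scikit-learn's cross-validation utilities do this):</p>

```python
import statistics

def kfold_indices(n, k=10):
    """Yield (train_idx, val_idx) pairs for k folds over n samples.
    Every sample appears in exactly one validation fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx, start = list(range(n)), 0
    for size in fold_sizes:
        val = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, val
        start += size

def summarize(scores):
    """Mean and standard deviation across folds,
    as in 'AUC 0.9946 (sigma=0.0069)'."""
    return statistics.mean(scores), statistics.stdev(scores)
```
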
    <div>
      <h3>Users behind CGNAT are more likely to be rate limited</h3>
      <a href="#users-behind-cgnat-are-more-likely-to-be-rate-limited">
        
      </a>
    </div>
    <p>The figure below shows the daily number of CGNAT IP inferences generated by our CDN-deployed detection service between December 17, 2024 and January 9, 2025. The number of inferences remains largely stable, with noticeable dips during weekends and holidays such as Christmas and New Year’s Day. This pattern reflects expected seasonal variations, as lower traffic volumes during these periods lead to fewer active IP ranges and reduced request activity.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7hiYstptHAK6tFQrM2kEsf/7f8192051156fc6eaecdf26a829ef11c/BLOG-3046_6.png" />
          </figure><p>Next, recall that actions that rely on IP reputation or behavior may be unduly influenced by CGNATs. One such example is bot detection. In an evaluation of our systems, we find that bot detection is resilient to those biases. However, we also learned that customers are more likely to rate limit IPs that we find are CGNATs.</p><p>We evaluate bot labeling by measuring how often requests from CGNAT and non-CGNAT IPs are labeled as bots. <a href="https://www.cloudflare.com/resources/assets/slt3lc6tev37/JYknFdAeCVBBWWgQUtNZr/61844a850c5bba6b647d65e962c31c9c/BDES-863_Bot_Management_re_edit-_How_it_Works_r3.pdf"><u>Cloudflare assigns a bot score</u></a> to each HTTP request using CatBoost models trained on various request features, and these scores are then exposed through the Web Application Firewall (WAF), allowing customers to apply filtering rules. The median bot rate is nearly identical for CGNAT (4.8%) and non-CGNAT (4.7%) IPs. However, the mean bot rate is notably lower for CGNATs (7%) than for non-CGNATs (13.1%), indicating different underlying distributions. Non-CGNAT IPs show a much wider spread, with some reaching 100% bot rates, while CGNAT IPs cluster mostly below 15%. This suggests that non-CGNAT IPs tend to be dominated by either human or bot activity, whereas CGNAT IPs reflect mixed behavior from many end users, with human traffic prevailing.</p><p>Interestingly, despite bot scores that indicate traffic is more likely to be from human users, CGNAT IPs are subject to rate limiting three times more often than non-CGNAT IPs. 
This is likely because multiple users share the same public IP, increasing the chances that legitimate traffic gets caught by customers’ bot mitigation and firewall rules.</p><p>This tells us that users behind CGNAT IPs are indeed susceptible to collateral effects, and identifying those IPs allows us to tune mitigation strategies to disrupt malicious traffic quickly while reducing collateral impact on benign users behind the same address.</p>
    <div>
      <h2>A global view of the CGNAT ecosystem</h2>
      <a href="#a-global-view-of-the-cgnat-ecosystem">
        
      </a>
    </div>
    <p>One of the early motivations of this work was to understand if our knowledge about IP addresses might hide a bias along socio-economic boundaries, and in particular whether an action on an IP address may disproportionately affect populations in developing nations, often referred to as the Global South. Identifying where different kinds of IPs exist is a necessary first step.</p><p>The map below shows the fraction of a country’s inferred CGNAT IPs over all IPs observed in the country. Regions with a greater reliance on CGNAT appear darker on the map. This view highlights how the importance of CGNAT varies geographically; for example, much of Africa and Central and Southeast Asia rely on CGNATs. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4P2XcuEebKfcYdCgykMWuP/4a0aa86bd619ba24533de6862175e919/BLOG-3046_7.png" />
          </figure><p>As further evidence of continental differences, the boxplot below shows the distribution of distinct user agents per IP across /24 prefixes inferred to be part of a CGNAT deployment in each continent. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bqJSHexFuXFs4A8am1ibQ/591be6880e8f58c9d61b147aaf0487f5/BLOG-3046_8.png" />
          </figure><p>Notably, Africa has a much higher ratio of user agents to IP addresses than other regions, suggesting more clients share the same IP in African <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a>. So, not only do African ISPs rely more extensively on CGNAT, but the number of clients behind each CGNAT IP is higher. </p><p>While the deployment rate of CGNAT per country is consistent with the users-per-IP ratio per country, it is not sufficient by itself to confirm deployment. The scatterplot below shows the number of users (according to <a href="https://stats.labs.apnic.net/aspop/"><u>APNIC user estimates</u></a>) and the number of IPs per ASN for ASNs where we detect CGNAT. ASNs that have fewer available IP addresses than their user base appear below the diagonal. Interestingly the scatterplot indicates that many ASNs with more addresses than users still choose to deploy CGNAT. Presumably, these ASNs provide additional services beyond broadband, preventing them from dedicating their entire address pool to subscribers. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/34GKPlJWvkwudU5MbOtots/c883760a7c448b12995997e3e6e51979/BLOG-3046_9.png" />
          </figure>
    <div>
      <h3>What this means for everyday Internet users</h3>
      <a href="#what-this-means-for-everyday-internet-users">
        
      </a>
    </div>
    <p>Accurate detection of CGNAT IPs is crucial for minimizing collateral effects in network operations and for ensuring fair and effective application of security measures. Our findings underscore the potential socio-economic and geographical variations in the use of CGNATs, revealing significant disparities in how IP addresses are shared across different regions. </p><p>At Cloudflare we are going beyond just using these insights to evaluate policies and practices. We are using the detection systems to improve features across our application security suite, and working with customers to understand how they might use these insights to improve the protections they configure.</p><p>Our work is ongoing and we’ll share details as we go. In the meantime, if you’re an ISP or network operator that runs CGNAT and wants to help, get in touch at <a href="mailto:ask-research@cloudflare.com"><u>ask-research@cloudflare.com</u></a>. Sharing knowledge and working together helps create a better and more equitable user experience for subscribers, while preserving web service safety and security.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Web Application Firewall]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[IPv4]]></category>
            <category><![CDATA[Network Services]]></category>
            <guid isPermaLink="false">9cTCNUkDdgVjdBN6M6JLv</guid>
            <dc:creator>Vasilis Giotsas</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[A framework for measuring Internet resilience]]></title>
            <link>https://blog.cloudflare.com/a-framework-for-measuring-internet-resilience/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We present a data-driven framework to quantify cross-layer Internet resilience. We also share a list of measurements with which to quantify facets of Internet resilience for geographical areas. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On July 8, 2022, a massive outage at Rogers, one of Canada's largest telecom providers, knocked out Internet and mobile services for over 12 million users. Why did this single event have such a catastrophic impact? And more importantly, why do some networks crumble in the face of disruption while others barely stumble?</p><p>The answer lies in a concept we call <b>Internet resilience</b>: a network's ability not just to stay online, but to withstand, adapt to, and rapidly recover from failures.</p><p>It’s a quality that goes far beyond simple "uptime." True resilience is a multi-layered capability, built on everything from the diversity of physical subsea cables to the security of BGP routing and the health of a competitive market. It's an emergent property much like <a href="https://en.wikipedia.org/wiki/Psychological_resilience"><u>psychological resilience</u></a>: while each individual network must be robust, true resilience only arises from the collective, interoperable actions of the entire ecosystem. In this post, we'll introduce a data-driven framework to move beyond abstract definitions and start quantifying what makes a network resilient. All of our work is based on public data sources, and we're sharing our metrics to help the entire community build a more reliable and secure Internet for everyone.</p>
    <div>
      <h2>What is Internet resilience?</h2>
      <a href="#what-is-internet-resilience">
        
      </a>
    </div>
    <p>In networking, we often talk about "reliability" (does it work under normal conditions?) and "robustness" (can it handle a sudden traffic surge?). But resilience is more dynamic. It's the ability to gracefully degrade, adapt, and most importantly, recover. For our work, we've adopted a pragmatic definition:</p><p><b><i>Internet resilience</i></b><i> is the measurable capability of a national or regional network ecosystem to maintain diverse and secure routing paths in the face of challenges, and to rapidly restore connectivity following a disruption.</i></p><p>This definition links the abstract goal of resilience to the concrete, measurable metrics that form the basis of our analysis.</p>
    <div>
      <h3>Local decisions have global impact</h3>
      <a href="#local-decisions-have-global-impact">
        
      </a>
    </div>
    <p>The Internet is a global system but is built out of thousands of local pieces. Every country depends on the global Internet for economic activity, communication, and critical services, yet most of the decisions that shape how traffic flows are made locally by individual networks.</p><p>In most national infrastructures like water or power grids, a central authority can plan, monitor, and coordinate how the system behaves. The Internet works very differently. Its core building blocks are Autonomous Systems (ASes), which are networks like ISPs, universities, cloud providers or enterprises. Each AS controls autonomously how it connects to the rest of the Internet, which routes it accepts or rejects, how it prefers to forward traffic, and with whom it interconnects. That’s why they’re called Autonomous Systems in the first place! There’s no global controller. Instead, the Internet’s routing fabric emerges from the collective interaction of thousands of independent networks, each optimizing for its own goals.</p><p>This decentralized structure is one of the Internet’s greatest strengths: no single failure can bring the whole system down. But it also makes measuring resilience at a country level tricky. National statistics can hide local structures that are crucial to global connectivity. For example, a country might appear to have many international connections overall, but those connections could be concentrated in just a handful of networks. If one of those fails, the whole country could be affected.</p><p>For resilience, the goal isn’t to isolate national infrastructure from the global Internet. In fact, the opposite is true: healthy integration with diverse partners is what makes both local and global connectivity stronger. 
When local networks invest in secure, redundant, and diverse interconnections, they improve their own resilience and contribute to the stability of the Internet as a whole.</p><p>This perspective shapes how we design and interpret resilience metrics. Rather than treating countries as isolated units, we look at how well their networks are woven into the global fabric: the number and diversity of upstream providers, the extent of international peering, and the richness of local interconnections. These are the building blocks of a resilient Internet.</p>
    <div>
      <h3>Route hygiene: Keeping the Internet healthy</h3>
      <a href="#route-hygiene-keeping-the-internet-healthy">
        
      </a>
    </div>
    <p>The Internet is constructed according to a <i>layered</i> model, by design, so that different Internet components and features can evolve independently of the others. The Physical layer stores, carries, and forwards all the bits and bytes transmitted in packets between devices. It consists of cables, routers, and switches, but also buildings that house interconnection facilities. The Application layer sits above all others and has virtually no information about the network, so that applications can communicate without having to worry about the underlying details, for example, whether a network is Ethernet or Wi-Fi. The application layer includes web browsers and web servers, as well as caching, security, and other features provided by Content Distribution Networks (CDNs). Between the physical and application layers is the Network layer, responsible for Internet routing. It is ‘logical’, consisting of software that learns about interconnection and routes, and makes (local) forwarding decisions that deliver packets to their destinations. </p><p>Good route hygiene works like personal hygiene: it prevents problems before they spread. The Internet relies on the <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>Border Gateway Protocol</u></a> (BGP) to exchange routes between networks, but BGP wasn’t built with security in mind. A single bad route announcement, whether by mistake or attack, can send traffic the wrong way or cause widespread outages.</p><p>Two practices help stop this: the <b>RPKI</b> (Resource Public Key Infrastructure) lets networks publish cryptographic proof that they’re allowed to announce specific IP prefixes. 
<b>ROV </b>(Route Origin Validation) checks those proofs before accepting routes.</p><p>Together, they act like passports and border checks for Internet routes, helping filter out hijacks and leaks early.</p><p>Hygiene doesn’t just happen in the routing table – it spans multiple layers of the Internet’s architecture, and weaknesses in one layer can ripple through the rest. At the physical layer, having multiple, geographically diverse cable routes ensures that a single cut or disaster doesn’t isolate an entire region. For example, distributing submarine landing stations along different coastlines can protect international connectivity when one corridor fails. At the network layer, practices like multi-homing and participation in Internet Exchange Points (IXPs) give operators more options to reroute traffic during incidents, reducing reliance on any single upstream provider. At the application layer, Content Delivery Networks (CDNs) and caching keep popular content close to users, so even if upstream routes are disrupted, many services remain accessible. Finally, policy and market structure also play a role: open peering policies and competitive markets foster diversity, while dependence on a single ISP or cable system creates fragility.</p><p>Resilience emerges when these layers work together. If one layer is weak, the whole system becomes more vulnerable to disruption.</p><p>The more networks adopt these practices, the stronger and more resilient the Internet becomes. We actively support the deployment of RPKI, ROV, and diverse routing to keep the global Internet healthy.</p>
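<p>The "passport check" that ROV performs can be sketched as follows: a route is <i>valid</i> if some ROA covers its prefix with the right origin ASN and a permitted prefix length, <i>invalid</i> if it is covered but no ROA matches, and <i>not-found</i> if no ROA covers it at all. This is a simplified illustration of the RFC 6811 procedure, not a production validator.</p>

```python
import ipaddress

def rov_validate(prefix, origin_asn, roas):
    """Route Origin Validation (simplified sketch of RFC 6811).

    `roas` is a list of (roa_prefix, max_length, asn) tuples.
    Returns 'valid', 'invalid', or 'unknown' (RFC "NotFound").
    """
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # A ROA covers the route if the announced prefix falls inside it.
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True
            # Valid only if the origin matches AND the announcement is
            # no more specific than the ROA's maxLength allows.
            if asn == origin_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"
```

<p>Note the role of maxLength: a hijacker announcing a more-specific subprefix of a signed block is rejected even if it forges the right origin ASN, because the announcement exceeds the permitted prefix length.</p>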
    <div>
      <h2>Measuring resilience is harder than it sounds</h2>
      <a href="#measuring-resilience-is-harder-than-it-sounds">
        
      </a>
    </div>
    <p>The biggest hurdle in measuring resilience is data access. The most valuable information, like internal network topologies, the physical paths of fiber cables, or specific peering agreements, is held by private network operators. This is the ground truth of the network.</p><p>However, operators view this information as a highly sensitive competitive asset. Revealing detailed network maps could expose strategic vulnerabilities or undermine business negotiations. Without access to this ground truth data, we're forced to rely on inference, approximation, and the clever use of publicly available data sources. Our framework is built entirely on these public sources to ensure anyone can reproduce and build upon our findings.</p><p>Projects like RouteViews and RIPE RIS collect BGP routing data that shows how networks connect. <a href="https://www.cloudflare.com/en-in/learning/network-layer/what-is-mtr/"><u>Traceroute</u></a> measurements reveal paths at the router level. IXP and submarine cable maps give partial views of the physical layer. But each of these sources has blind spots: peering links often don’t appear in BGP data, backup paths may remain hidden, and physical routes are hard to map precisely. This lack of a single, complete dataset means that resilience measurement relies on combining many partial perspectives, a bit like reconstructing a city map from scattered satellite images, traffic reports, and public utility filings. It’s challenging, but it’s also what makes this field so interesting.</p>
    <div>
      <h3>Translating resilience into quantifiable metrics</h3>
      <a href="#translating-resilience-into-quantifiable-metrics">
        
      </a>
    </div>
    <p>Once we understand why resilience matters and what makes it hard to measure, the next step is to translate these ideas into concrete metrics. These metrics give us a way to evaluate how well different parts of the Internet can withstand disruptions and to identify where the weak points are. No single metric can capture Internet resilience on its own. Instead, we look at it from multiple angles: physical infrastructure, network topology, interconnection patterns, and routing behavior. Below are some of the key dimensions we use. Some of these metrics are inspired from existing research, like the <a href="https://pulse.internetsociety.org/en/resilience/"><u>ISOC Pulse</u></a> framework. All described methods rely on public data sources and are fully reproducible. As a result, in our visualizations we intentionally omit country and region names to maintain focus on the methodology and interpretation of the results. </p>
    <div>
      <h3>IXPs and colocation facilities</h3>
      <a href="#ixps-and-colocation-facilities">
        
      </a>
    </div>
    <p>Networks primarily interconnect in two types of physical facilities: colocation facilities (colos) and Internet Exchange Points (IXPs), often housed within the colos. Although symbiotically linked, they serve distinct functions in a nation’s digital ecosystem. A colocation facility provides the foundational infrastructure – secure space, power, and cooling – for network operators to place their equipment. The IXP builds upon this physical base to provide the logical interconnection fabric, a role that is transformative for a region’s Internet development and resilience. The networks that connect at these facilities are its members. </p><p>Metrics that reflect resilience include:</p><ul><li><p><b>Number and distribution of IXPs</b>, normalized by population or geography. A higher IXP count, weighted by population or geographic coverage, is associated with improved local connectivity.</p></li><li><p><b>Peering participation rates</b> — the percentage of local networks connected to domestic IXPs. This metric reflects the extent to which local networks rely on regional interconnection rather than routing traffic through distant upstream providers.</p></li><li><p><b>Diversity of IXP membership</b>, including ISPs, CDNs, and cloud providers, which indicates how much critical content is available locally, making it accessible to domestic users even if international connectivity is severely degraded.</p></li></ul><p>Resilience also depends on how well local networks connect globally:</p><ul><li><p>How many <b>local networks peer at international IXPs</b>, increasing their routing options</p></li><li><p>How many <b>international networks peer at local IXPs</b>, bringing content closer to users</p></li></ul><p>A balanced flow in both directions strengthens resilience by ensuring multiple independent paths in and out of a region.</p><p>The geographic distribution of IXPs further enhances resilience. 
A resilient IXP ecosystem should be geographically dispersed to serve different regions within a country effectively, reducing the risk that a localized infrastructure failure affects the connectivity of an entire country. Spatial distribution metrics help evaluate how infrastructure is spread across a country’s geography or its population. Key spatial metrics include:</p><ul><li><p><b>Infrastructure per Capita</b>: This metric – inspired by <a href="https://en.wikipedia.org/wiki/Telephone_density"><u>teledensity</u></a> – measures infrastructure relative to the population size of a sub-region, providing a per-person availability indicator. A low IXP-per-population ratio in a region suggests that users there rely on distant exchanges, increasing the bit-risk miles.</p></li><li><p><b>Infrastructure per Area (Density)</b>: This metric evaluates how infrastructure is distributed per unit of geographic area, highlighting spatial coverage. Such area-based metrics are crucial for critical infrastructures to ensure remote areas are not left inaccessible.</p></li></ul><p>These metrics can be summarized using the <a href="https://www.bls.gov/k12/students/economics-made-easy/location-quotients.pdf"><u>Location Quotient (LQ)</u></a>. The location quotient is a widely used geographic index that measures a region’s share of infrastructure relative to its share of a baseline (such as population or area).</p>
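<p>The LQ is simple to compute from two per-region tallies. A minimal sketch, with illustrative region names and counts:</p>

```python
def location_quotient(infra, baseline):
    """LQ per region: (region's share of infrastructure) divided by
    (region's share of a baseline such as population or area).
    LQ > 1: more infrastructure than its baseline share predicts;
    LQ < 1: less."""
    total_infra = sum(infra.values())
    total_base = sum(baseline.values())
    return {
        r: (infra[r] / total_infra) / (baseline[r] / total_base)
        for r in infra
    }
```

<p>For example, a region holding 80% of a country's facilities but only 50% of its population gets an LQ of 1.6, while a region with 20% of facilities and 50% of population gets 0.4.</p>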
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4S52jlwCpQ8WVS6gRSdNqp/4722abb10331624a54b411708f1e576b/image5.png" />
          </figure><p>For example, the figure above shows whether each US state hosts more or less infrastructure than is expected for its population, based on its LQ score. This statistic illustrates that, even for the states with the highest number of facilities, this number is <i>still</i> lower than would be expected given the population size of those states.</p>
    <div>
      <h4>Economic-weighted metrics</h4>
      <a href="#economic-weighted-metrics">
        
      </a>
    </div>
    <p>While spatial metrics capture the physical distribution of infrastructure, economic and usage-weighted metrics reveal how infrastructure is actually used. These account for traffic, capacity, or economic activity, exposing imbalances that spatial counts miss. <b>Infrastructure Utilization Concentration</b> measures how usage is distributed across facilities, using indices like the <b>Herfindahl–Hirschman Index (HHI)</b>. HHI sums the squared market shares of entities, ranging from 0 (competitive) to 10,000 (highly concentrated). For IXPs, market share is defined through operational metrics such as:</p><ul><li><p><b>Peak/Average Traffic Volume</b> (Gbps/Tbps): indicates operational significance</p></li><li><p><b>Number of Connected ASNs</b>: reflects network reach</p></li><li><p><b>Total Port Capacity</b>: shows physical scale</p></li></ul><p>The chosen metric affects results. For example, using connected ASNs yields an HHI of 1,316 (unconcentrated) for a Central European country, whereas using port capacity gives 1,809 (moderately concentrated).</p><p>The <b>Gini coefficient</b> measures inequality in resource or traffic distribution (0 = equal, 1 = fully concentrated). The <b>Lorenz curve</b> visualizes this: a straight 45° line indicates perfect equality, while deviations show concentration.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30bh4nVHRX5O3HMKvGRYh7/e0c5b3a7cb8294dfe3caaec98a0557d0/Screenshot_2025-10-27_at_14.10.57.png" />
          </figure><p>The chart on the left suggests substantial geographical inequality in colocation facility distribution across the US states. However, the population-weighted analysis in the chart on the right demonstrates that much of that geographic concentration can be explained by population distribution.</p>
    <div>
      <h3>Submarine cables</h3>
      <a href="#submarine-cables">
        
      </a>
    </div>
    <p>Internet resilience, in the context of undersea cables, is defined by the global network’s capacity to withstand physical infrastructure damage and to recover swiftly from faults, thereby ensuring the continuity of intercontinental data flow. The metrics for quantifying this resilience are multifaceted, encompassing the frequency and nature of faults, the efficiency of repair operations, and the inherent robustness of both the network’s topology and its dedicated maintenance resources. Such metrics include:</p><ul><li><p>Number of <b>landing stations</b>, cable corridors, and operators. The goal is to ensure that national connectivity should withstand single failure events, be they natural disaster, targeted attack, or major power outage. A lack of diversity creates single points of failure, as highlighted by <a href="https://www.theguardian.com/news/2025/sep/30/tonga-pacific-island-internet-underwater-cables-volcanic-eruption"><u>incidents in Tonga</u></a> where damage to the only available cable led to a total outage.</p></li><li><p><b>Fault rates</b> and <b>mean time to repair (MTTR)</b>, which indicate how quickly service can be restored. These metrics measure a country’s ability to prevent, detect, and recover from cable incidents, focusing on downtime reduction and protection of critical assets. Repair times hinge on <b>vessel mobilization</b> and <b>government permits</b>, the latter often the main bottleneck.</p></li><li><p>Availability of <b>satellite backup capacity</b> as an emergency fallback. While cable diversity is essential, resilience planning must also cover worst-case outages. The Non-Terrestrial Backup System Readiness metric measures a nation’s ability to sustain essential connectivity during major cable disruptions. LEO and MEO satellites, though costlier and lower capacity than cables, offer proven emergency backup during conflicts or disasters. Projects like HEIST explore hybrid space-submarine architectures to boost resilience. 
Key indicators include available satellite bandwidth, the number of NGSO providers under contract (for diversity), and the deployment of satellite terminals for public and critical infrastructure. Tracking these shows how well a nation can maintain command, relief operations, and basic connectivity if cables fail.</p></li></ul>
    <div>
      <h3>Inter-domain routing</h3>
      <a href="#inter-domain-routing">
        
      </a>
    </div>
    <p>The network layer above the physical interconnection infrastructure governs how traffic is routed across the Autonomous Systems (ASes). Failures or instability at this layer – such as misconfigurations, attacks, or control-plane outages – can disrupt connectivity even when the underlying physical infrastructure remains intact. In this layer, we look at resilience metrics that characterize the robustness and fault tolerance of AS-level routing and BGP behavior.</p><p><b>AS Path Diversity</b> measures the number and independence of AS-level routes between two points. High diversity provides alternative paths during failures, enabling BGP rerouting and maintaining connectivity. Low diversity leaves networks vulnerable to outages if a critical AS or link fails. Resilience depends on upstream topology.</p><ul><li><p>Single-homed ASes rely on one provider, which is cheaper and simpler but more fragile.</p></li><li><p>Multi-homed ASes use multiple upstreams, requiring BGP but offering far greater redundancy and performance at higher cost.</p></li></ul><p>The <b>share of multi-homed ASes</b> reflects an ecosystem’s overall resilience: higher rates signal greater protection from single-provider failures. This metric is easy to measure using <b>public BGP data</b> (e.g., RouteViews, RIPE RIS, CAIDA). Longitudinal BGP monitoring helps reveal hidden backup links that snapshots might miss.</p><p>Beyond multi-homing rates, <b>the distribution of single-homed ASes per transit provider</b> highlights systemic weak points. For each provider, counting customer ASes that rely exclusively on it reveals how many networks would be cut off if that provider fails. </p>
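<p>Counting single- and multi-homed origin ASes from public BGP data can be sketched as follows. This is a deliberate simplification for illustration: it treats the AS immediately preceding the origin in an observed AS path as an upstream provider and skips prepended hops; real analyses (e.g. using inferred AS relationships) are more careful.</p>

```python
from collections import defaultdict

def homing_stats(as_paths):
    """Infer each origin AS's set of upstream providers from observed
    AS paths, then classify origins as single- or multi-homed.

    `as_paths` is a list of AS paths, each a list of ASNs ending at
    the origin AS.
    """
    upstreams = defaultdict(set)
    for path in as_paths:
        if len(path) >= 2:
            origin, provider = path[-1], path[-2]
            if provider != origin:  # skip path prepending
                upstreams[origin].add(provider)
    single = {a for a, u in upstreams.items() if len(u) == 1}
    multi = {a for a, u in upstreams.items() if len(u) > 1}
    return single, multi
```

<p>Grouping the single-homed set by provider then yields the per-provider exposure plotted in the Canadian example below: every AS in a provider's single-homed bucket loses connectivity if that provider fails.</p>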
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ECZveUVwyM6TmGa1SaZnl/1222c7579c81fd62a5d8d80d63000ec3/image1.png" />
          </figure><p>The figure above shows Canadian transit providers for July 2025: the x-axis is total customer ASes, the y-axis is single-homed customers. Canada’s overall single-homing rate is 30%, with some providers serving many single-homed ASes, mirroring vulnerabilities seen during the <a href="https://en.wikipedia.org/wiki/2022_Rogers_Communications_outage"><u>2022 Rogers outage</u></a>, which disrupted over 12 million users.</p><p>While multi-homing metrics provide a valuable, static view of an ecosystem’s upstream topology, a more dynamic and nuanced understanding of resilience can be achieved by analyzing the characteristics of the actual BGP paths observed from global vantage points. These path-centric metrics move beyond simply counting connections to assess the diversity and independence of the routes to and from a country’s networks. These metrics include:</p><ul><li><p><b>Path independence</b> measures whether those alternative routes truly avoid shared bottlenecks. Multi-homing only helps if upstream paths are truly distinct. If two providers share upstream transit ASes, redundancy is weak. Independence can be measured with the Jaccard distance between AS paths. A stricter <b>path disjointness score</b> calculates the share of path pairs with no common ASes, directly quantifying true redundancy.</p></li><li><p><b>Transit entropy</b> measures how evenly traffic is distributed across transit providers. High Shannon entropy signals a decentralized, resilient ecosystem; low entropy shows dependence on few providers, even if nominal path diversity is high.</p></li><li><p><b>International connectivity ratios</b> evaluate the share of domestic ASes with direct international links. High percentages reflect a mature, distributed ecosystem; low values indicate reliance on a few gateways.</p></li></ul><p>The figure below encapsulates the aforementioned AS-level resilience metrics into single polar pie charts. 
For the purpose of exposition, we plot the metrics for infrastructure from two nations with very different resilience profiles.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/PKxDcl4m1XXCAuvFUcTdZ/d0bce797dcbd5e1baf39ca66e7ac0056/image4.png" />
</figure><p>To pinpoint critical ASes and potential single points of failure, graph centrality metrics can provide useful insights. <b>Betweenness Centrality (BC)</b> identifies nodes lying on many shortest paths, but applying it to BGP data suffers from vantage point bias: ASes that provide BGP data to the RouteViews and RIS collectors appear falsely central. <b>AS Hegemony</b>, developed by <a href="https://dl.acm.org/doi/10.1145/3123878.3131982"><u>Fontugne et al.</u></a>, corrects this by filtering biased viewpoints, producing a 0–1 score that reflects the true fraction of paths crossing an AS. It can be applied globally or locally to reveal Internet-wide or AS-specific dependencies.</p><p><b>Customer cone size</b>, developed by <a href="https://asrank.caida.org/about#customer-cone"><u>CAIDA</u></a>, offers another perspective, capturing an AS’s economic and routing influence via the set of networks it serves through customer links. Large cones indicate major transit hubs whose failure affects many downstream networks. However, global cone rankings can obscure regional importance, so <a href="https://www.caida.org/catalog/papers/2023_on_importance_being_as/on_importance_being_as.pdf"><u>country-level adaptations</u></a> give more accurate resilience assessments.</p>
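The path-centric metrics described earlier are straightforward to compute. Here is a minimal sketch with invented AS paths (transit segments only, origin and destination omitted) and invented traffic shares, not tied to any real topology:

```python
import math
from itertools import combinations

def jaccard_distance(path_a, path_b):
    """1 - |A∩B|/|A∪B| over the ASes on two paths; 1.0 means fully disjoint."""
    a, b = set(path_a), set(path_b)
    return 1 - len(a & b) / len(a | b)

def disjointness_score(paths):
    """Share of path pairs sharing no AS at all (the stricter redundancy measure)."""
    pairs = list(combinations(paths, 2))
    return sum(1 for p, q in pairs if not set(p) & set(q)) / len(pairs)

def transit_entropy(shares):
    """Shannon entropy of traffic shares across transit providers, normalized to [0, 1]."""
    h = -sum(s * math.log2(s) for s in shares if s > 0)
    return h / math.log2(len(shares))

# Hypothetical transit segments of three AS paths from one origin network.
paths = [(3356, 1299), (174, 1299), (6939, 2914)]

print(jaccard_distance(paths[0], paths[1]))  # 2/3: only AS1299 is shared
print(disjointness_score(paths))             # 2/3: two of three pairs are fully disjoint
print(transit_entropy([0.6, 0.3, 0.1]))      # < 1: traffic skewed toward one provider
```

In practice, the paths would come from BGP collectors such as RouteViews or RIPE RIS, and the traffic shares from flow data or announced-prefix counts.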
    <div>
      <h4>Impact-Weighted Resilience Assessment</h4>
      <a href="#impact-weighted-resilience-assessment">
        
      </a>
    </div>
<p>Not all networks have the same impact when they fail. A small hosting provider going offline affects far fewer people than a national ISP going offline does. Traditional resilience metrics treat all networks equally, which can mask where the real risks are. To address this, we use impact-weighted metrics that factor in a network’s user base or infrastructure footprint. For example, by weighting multi-homing rates or path diversity by user population, we can see how many people actually benefit from redundancy — not just how many networks have it. Similarly, weighting by the number of announced prefixes highlights networks that carry more traffic or control more address space.</p><p>This approach helps separate theoretical resilience from practical resilience. A country might have many multi-homed networks, but if most users rely on just one single-homed ISP, its resilience is weaker than it looks. Impact weighting helps surface these kinds of structural risks so that operators and policymakers can prioritize improvements where they matter most.</p>
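A minimal sketch makes the contrast concrete. The networks and user counts below are entirely made up; the point is how the naive and the impact-weighted rates can diverge:

```python
# Hypothetical networks: (name, multi_homed?, estimated users)
networks = [
    ("national-isp", False, 9_000_000),
    ("regional-isp", True,  600_000),
    ("hosting-1",    True,  50_000),
    ("hosting-2",    True,  50_000),
    ("university",   True,  300_000),
]

# Naive rate: every AS counts equally.
naive = sum(mh for _, mh, _ in networks) / len(networks)

# Impact-weighted rate: each AS counts by the users behind it.
total_users = sum(u for _, _, u in networks)
weighted = sum(u for _, mh, u in networks if mh) / total_users

print(f"multi-homed ASes:             {naive:.0%}")    # 80%
print(f"users behind a multi-homed AS: {weighted:.0%}")  # 10%
```

Four of five networks are multi-homed, yet 90% of users sit behind the one single-homed ISP — exactly the structural risk that equal-weight metrics hide.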
    <div>
      <h3>Metrics of network hygiene</h3>
      <a href="#metrics-of-network-hygiene">
        
      </a>
    </div>
    <p>Large Internet outages aren’t always caused by cable cuts or natural disasters — sometimes, they stem from routing mistakes or security gaps. Route hijacks, leaks, and spoofed announcements can disrupt traffic on a national scale. How well networks protect themselves against these incidents is a key part of resilience, and that’s where network hygiene comes in.</p><p>Network hygiene refers to the security and operational practices that make the global routing system more trustworthy. This includes:</p><ul><li><p><b>Cryptographic validation</b>, like RPKI, to prevent unauthorized route announcements. <b>ROA Coverage</b> measures the share of announced IPv4/IPv6 space with valid Route Origin Authorizations (ROAs), indicating participation in the RPKI ecosystem. <b>ROV Deployment</b> gauges how many networks drop invalid routes, but detecting active filtering is difficult. Policymakers can improve visibility by supporting independent measurements, data transparency, and standardized reporting.</p></li><li><p><b>Filtering and cooperative norms</b>, where networks block bogus routes and follow best practices when sharing routing information.</p></li><li><p><b>Consistent deployment across both domestic networks and their international upstreams</b>, since traffic often crosses multiple jurisdictions.</p></li></ul><p>Strong hygiene practices reduce the likelihood of systemic routing failures and limit their impact when they occur. We actively support and monitor the adoption of these mechanisms, for instance through <a href="https://isbgpsafeyet.com/"><u>crowd-sourced measurements</u></a> and public advocacy, because every additional network that validates routes and filters traffic contributes to a safer and more resilient Internet for everyone.</p><p>Another critical aspect of Internet hygiene is mitigating DDoS attacks, which often rely on IP address spoofing to amplify traffic and obscure the attacker’s origin. 
<a href="https://datatracker.ietf.org/doc/bcp38/"><u>BCP-38</u></a>, the IETF’s network ingress filtering recommendation, addresses this by requiring operators to block packets with spoofed source addresses, reducing a region’s role as a launchpad for global attacks. While BCP-38 does not prevent a network from being targeted, its deployment is a key indicator of collective security responsibility. Measuring compliance requires active testing from inside networks, which is carried out by the <a href="https://spoofer.caida.org/summary.php"><u>CAIDA Spoofer Project</u></a>. Although the global sample remains limited, these metrics offer valuable insight into both the technical effectiveness and the security engagement of a nation’s network community, complementing RPKI in strengthening the overall routing security posture.</p>
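As a rough sketch of how ROA coverage can be estimated, the following validates each announcement against a small ROA table and weights by address space. The prefixes (documentation blocks) and private ASNs are invented stand-ins for real routing data:

```python
import ipaddress

# Hypothetical ROAs: (authorized prefix, max length, authorized origin ASN)
roas = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/24"), 24, 64501),
]

def rpki_state(prefix, origin):
    """RFC 6811-style route origin validation: valid / invalid / not-found."""
    covering = [r for r in roas if prefix.subnet_of(r[0])]
    if not covering:
        return "not-found"
    if any(origin == asn and prefix.prefixlen <= maxlen
           for net, maxlen, asn in covering):
        return "valid"
    return "invalid"

# Hypothetical BGP announcements: (prefix, origin ASN)
announcements = [
    (ipaddress.ip_network("192.0.2.0/24"), 64500),     # matches a ROA: valid
    (ipaddress.ip_network("198.51.100.0/24"), 64666),  # wrong origin: invalid
    (ipaddress.ip_network("203.0.113.0/24"), 64502),   # no covering ROA: not-found
]

# Coverage: share of announced address space whose origin is RPKI-valid.
valid_space = sum(p.num_addresses for p, asn in announcements
                  if rpki_state(p, asn) == "valid")
total_space = sum(p.num_addresses for p, _ in announcements)
print(f"RPKI-valid share of announced space: {valid_space / total_space:.0%}")
```

Real assessments would pull announcements from public BGP collectors and validated ROA payloads from the RPKI repositories, but the valid/invalid/not-found logic is the same.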
    <div>
      <h3>Measuring the collective security posture</h3>
      <a href="#measuring-the-collective-security-posture">
        
      </a>
    </div>
    <p>Beyond securing individual networks through mechanisms like RPKI and BCP-38, strengthening the Internet’s resilience also depends on collective action and visibility. While origin validation and anti-spoofing reduce specific classes of threats, broader frameworks and shared measurement infrastructures are essential to address systemic risks and enable coordinated responses.</p><p>The <a href="https://manrs.org/"><u>Mutually Agreed Norms for Routing Security (MANRS)</u></a> initiative promotes Internet resilience by defining a clear baseline of best practices. It is not a new technology but a framework fostering collective responsibility for global routing security. MANRS focuses on four key actions: filtering incorrect routes, anti-spoofing, coordination through accurate contact information, and global validation using RPKI and IRRs. While many networks implement these independently, MANRS participation signals a public commitment to these norms and to strengthening the shared security ecosystem.</p><p>Additionally, a region’s participation in public measurement platforms reflects its Internet observability, which is essential for fault detection, impact assessment, and incident response. <a href="https://atlas.ripe.net/"><u>RIPE Atlas</u></a> and <a href="https://www.caida.org/projects/ark/"><u>CAIDA Ark</u></a> provide dense data-plane measurements; <a href="https://www.routeviews.org/routeviews/"><u>RouteViews</u></a> and <a href="https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris/"><u>RIPE RIS</u></a> collect BGP routing data to detect anomalies; and <a href="https://www.peeringdb.com/"><u>PeeringDB</u></a> documents interconnection details, reflecting operational maturity and integration into the global peering fabric. 
Together, these platforms underpin observatories like <a href="https://ioda.inetintel.cc.gatech.edu/"><u>IODA</u></a> and <a href="https://grip.oie.gatech.edu/home"><u>GRIP</u></a>, which combine BGP and active data to detect outages and routing incidents in near real time, offering critical visibility into Internet health and security.</p>
    <div>
      <h2>Building a more resilient Internet, together</h2>
      <a href="#building-a-more-resilient-internet-together">
        
      </a>
    </div>
    <p>Measuring Internet resilience is complex, but it's not impossible. By using publicly available data, we can create a transparent and reproducible framework to identify strengths, weaknesses, and single points of failure in any network ecosystem.</p><p>This isn't just a theoretical exercise. For policymakers, this data can inform infrastructure investment and pro-competitive policies that encourage diversity. For network operators, it provides a benchmark to assess their own resilience and that of their partners. And for everyone who relies on the Internet, it's a critical step toward building a more stable, secure, and reliable global network.</p><p><i>For more details of the framework, including a full table of the metrics and links to source code, please refer to the full paper: </i> <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5376106"><u>Regional Perspectives for Route Resilience in a Global Internet: Metrics, Methodology, and Pathways for Transparency</u></a> published at <a href="https://www.tprcweb.com/tprc23program"><u>TPRC23</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Routing Security]]></category>
            <category><![CDATA[Insights]]></category>
            <guid isPermaLink="false">48ry6RI3JhA9H3t280EWUX</guid>
            <dc:creator>Vasilis Giotsas</dc:creator>
            <dc:creator>Cefan Daniel Rubin</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[The tricky science of Internet measurement]]></title>
            <link>https://blog.cloudflare.com/tricky-internet-measurement/</link>
            <pubDate>Mon, 27 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Internet is one big open system composed of many closed boxes — which makes measuring the Internet difficult. In this post we explore Internet measurement as a science. ]]></description>
            <content:encoded><![CDATA[ <p>Measurement is critical to our understanding not just of the world and the universe, but also the systems we design and deploy. The Internet is no exception but the challenges of measuring the Internet are unique.</p><p>The Internet is remarkably opaque, which is counter-intuitive given its open and multi-stakeholder model. It’s opaque because ultimately the Internet joins many networks and services that are each owned and operated by unrelated entities, and that rarely share or report about their systems. Every network may carry and forward what other systems produce, but each system is entirely independent — which, to be honest, is the magic of the Internet. It’s in this opaque-yet-critical context that Internet measurement must exist as a scientific practice, with all the associated rigor, repeatability, and reproduction.</p><p>Measurement as a scientific practice can be exciting — for what it gets right as well as wrong. The following statement encapsulates some of the subtleties:</p><blockquote><p>“<b>5 out of 6 scientists say that </b><a href="https://en.wikipedia.org/wiki/Russian_roulette"><b><u>Russian Roulette</u></b></a><b> is safe.”</b></p></blockquote><p>The statement is absurd! Laugh as we might, the statement is also logical. It’s trivially easy to design an experiment that leads to the above statement. However, the only way this experiment could succeed is if the “actor” — that is, whoever conducts the experiment — ignores every aspect of measurement science that makes the practice credible, as follows.</p><ul><li><p><b>Methodology</b>: a cycle consisting of data curation, modeling, and validation. Here, the experiment (data curation) could only succeed if each participant is prevented from seeing others’ injuries. 
More importantly, no measurement is needed because the actor can calculate the probabilities from available numbers, without running the experiment!</p></li><li><p><b>Ethics</b>: the way we measure can have undue, undesirable consequences. A bare minimum principle is <i>do no harm.</i></p></li><li><p><b>Representation</b>: clear and complete statements or visualizations should be at least informative and ideally actionable; otherwise, they can be misleading. Say each participant answered “yes” to the question, “are you safe?” They are answering a different question from “is the game safe?”</p></li></ul><p>In this blog we look at each of the above aspects of measurement, describe how they manifest in the Internet space, and relate them to examples from work that will be featured throughout <a href="https://blog.cloudflare.com/internet-measurement-resilience-transparency-week"><u>the week</u></a>. Let’s start with some background.</p>
    <div>
      <h2>Preface: A motivating example from inside Cloudflare</h2>
      <a href="#preface-a-motivating-example-from-inside-cloudflare">
        
      </a>
    </div>
    <p>High quality measurements help to identify, understand, even explain our experiences, environments, and systems. However, observation in isolation, without context, can be perilous. The following is a time series from an internal graph of HTTP requests from Lviv, Ukraine, leading up to the evening of 28 February 2022:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7D1hr8mMykICnj7Rh1Apyf/9b50dd98d996ed296fbad64cdbada497/image9.png" />
          </figure><p>On that day, traffic from the region increased by 3-4X. For context, the Russian incursion into Ukraine began four days earlier. The world was watching events closely. Cloudflare was no exception, helping both to <a href="https://blog.cloudflare.com/internet-traffic-patterns-in-ukraine-since-february-21-2022/"><u>report</u></a> and to <a href="https://blog.cloudflare.com/steps-taken-around-cloudflares-services-in-ukraine-belarus-and-russia/"><u>mitigate</u></a> network effects.</p><p>Upon observing that abnormal spike, we at Cloudflare <i>could have</i> mistakenly reported the increase as a potential DoS attack. However, there were counter-indications. First, no attack was flagged by the DoS defense and mitigation systems. In addition, the profile was atypical of attack traffic, which tends to be either single source from a single location or multiple sources from multiple locations. In this instance the increase came from multiple source networks but in a single location (Lviv).</p><p>Cloudflare had the tools to avoid erroneous reporting and later <a href="https://blog.cloudflare.com/internet-traffic-patterns-in-ukraine-since-february-21-2022/#internet-traffic"><u>correctly reported</u></a> that the increase was due to a mass of people converging in Lviv, the city with the last train station on the westward journey out of Ukraine. But — and this is important in a measurement context — nothing visible from Cloudflare’s perspective could provide an explanation. In the end, an employee saw a report on BBC about the massive movement of people in that part of Ukraine, which enabled us to better explain the traffic shift.</p><p>This example is an important reminder to always look for alternative explanations. It also shows how observations alone can lead to wrong conclusions, due to missing information or unrecognized biases. 
But good numbers without bias <a href="https://blog.cloudflare.com/loving-performance-measurements/"><u>can be misunderstood</u></a>, too.</p>
    <div>
      <h2>Measurement vocabulary and jargon</h2>
      <a href="#measurement-vocabulary-and-jargon">
        
      </a>
    </div>
    <p>In the measurement context there is a vocabulary of common words with specific meanings that are useful to know before diving into practice and examples.</p>
    <div>
      <h3>Active and passive measurement </h3>
      <a href="#active-and-passive-measurement">
        
      </a>
    </div>
    <p>These describe the “how.” In an <i>active</i> measurement, an actor initiates some action designed to trigger a response. The response may be data, such as latency returned from a ping or a DNS answer in response to a query. The response may be an observable change in a mechanism or system triggered by an action, such as well-crafted probe packets that prompt reactions from and expose middleboxes.</p><p>In a <i>passive</i> measurement, the actor only observes. No action is taken. As a result, no response is triggered; the system and its behaviour are unaltered. Logs are typically compiled from passive observations, and Cloudflare’s own are no exception. The vast majority of data shown in <a href="https://radar.cloudflare.com"><u>Cloudflare Radar</u></a> derives from those logs.</p><p>Each has its trade-offs. Active measurements are targeted and can be controlled. They are also exceptionally difficult (and often costly) to scale and, as a result, are only able to observe the parts of a system where they are deployed. Conversely, passive measurements tend to be lighter weight, but only succeed if the observer is at the right place at the right time. </p><p>Effectively, the two methods complement each other, and that makes them most powerful when orchestrated so that the knowledge from one feeds into the other. For example, in our own prior attempts to <a href="https://blog.cloudflare.com/cdn-latency-passive-measurement/"><u>understand performance across CDNs</u></a>, we interrogated the (passive) request logs to get insights, which helped inform later (active) pings using RIPE’s Atlas that we used to confirm our insights and results. 
In the opposite direction, our efforts to (passively) <a href="https://blog.cloudflare.com/connection-tampering/"><u>detect and understand connection failures</u></a> was informed by, and arguably only possible because of, a large body of (active) measurements in the research community to understand wide-scale connection tampering.</p><p>For more on the interplay between active and passive, you can read about the experience of a researcher who was equipped to <a href="https://blog.cloudflare.com/experience-of-data-at-scale"><u>dig deep</u></a> into Cloudflare’s vast troves of data because of insights from prior active measurements in the research community.</p>
    <div>
      <h3>Direct and indirect measurement </h3>
      <a href="#direct-and-indirect-measurement">
        
      </a>
    </div>
<p>It is possible to gain insights about something without directly observing it. Consider, for example, the capacity of a path, better known as the <i>bandwidth</i>. The common method to <i>directly</i> observe bandwidth is to launch a <a href="https://speed.cloudflare.com/"><u>speed test</u></a>. It’s a simple test, but it has two problems.</p><p>The first is that it works by consuming as much of the bandwidth as it can (which creates an ethical dilemma we later revisit). The second is that it actually measures throughput from a sender to a receiver, which is the available bandwidth (or, alternatively, the residual capacity) of the <i>bottleneck</i> link. If two speed tests share a bottleneck, then each might observe throughput that is ½ of the actual bandwidth. The evidence is in the numbers, as seen below, where observations of a speed test range from 69–85 Mbps — a spread of roughly 20% of the median, and far from a fixed value!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6OpXXM8CqkhWkbavw9RgMv/395827e2390fa145650703905c4abdb4/image2.png" />
</figure><p>There is instead a 25+ year-old <i>indirect</i> alternative to speed tests called <a href="https://www.usenix.org/legacy/publications/library/proceedings/usits01/full_papers/lai/lai_html/node2.html"><u>the packet pair</u></a>, or packet train. It works by transmitting pairs of packets back-to-back, with no delay between them, and recording both their transmission times and their arrival times. The change in the gap between the two packets, from transmission to arrival, gives an indication of the bottleneck bandwidth. Repeat the packet pair probes and, with some statistical analysis, a good estimate of the true bottleneck bandwidth emerges. Instead of directly observing bandwidth by pushing and counting bytes over time, the packet pair technique uses the time between two packets to <i>indirectly</i> calculate — or infer — the metric.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LMXzWWY1rbU0Tb02v7uzt/6e83b407e8cece51c3fa42b91bd036b3/image5.png" />
          </figure>
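A toy numerical sketch of the packet-pair inference, with made-up receiver-side gaps (this is not any production tool): the bottleneck serializes the second packet immediately behind the first, so the inter-arrival gap approximates packet size divided by bottleneck bandwidth, and a robust statistic over repeated probes filters out cross-traffic noise.

```python
import statistics

PACKET_SIZE_BITS = 1500 * 8  # two back-to-back 1500-byte packets

# Hypothetical inter-arrival gaps (seconds) between the two packets of each
# pair, measured at the receiver. Cross traffic has inflated a few samples.
gaps = [0.00121, 0.00119, 0.00240, 0.00120, 0.00122, 0.00118, 0.00360, 0.00121]

# gap ≈ packet size / bottleneck bandwidth, so invert a robust statistic of
# the gaps; the median here shrugs off the cross-traffic outliers.
estimate_bps = PACKET_SIZE_BITS / statistics.median(gaps)
print(f"estimated bottleneck bandwidth ≈ {estimate_bps / 1e6:.1f} Mbit/s")
```

The literature typically takes the mode of the gap distribution; the median here is a simpler robust stand-in for illustration.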
    <div>
      <h2>The (Network) Measurement Lifecycle</h2>
      <a href="#the-network-measurement-lifecycle">
        
      </a>
    </div>
    <p>Measurements are most powerful when they lead to reasonable predictions. Sometimes the predictions confirm our understanding of the world and systems we deploy into it. Occasionally, the predictions reveal something new. Either way, predictive measurements emerge by following a simple pattern: curate data, construct a model based on the data, then validate the model with (ideally) different data. Together, these create a measurement lifecycle.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2bnJ1aWYRag3edCAUfEnkj/87ee90c52223d03120e7bc2d7df5c72b/image8.png" />
</figure><p>Ideally a measurement exercise encompasses the lifecycle from beginning to end, but there can be extremely valuable contributions and advances within each step in isolation. Individual high-quality datasets are so difficult to curate that each can be a valid contribution on its own. The same is true of modeling techniques and of tools for validation. Measurement spans expert domains, and benefits from diverse skill sets.</p><p>Let’s look at each step in order, beginning with data curation.</p>
    <div>
      <h2>Data curation</h2>
      <a href="#data-curation">
        
      </a>
    </div>
<p>The most common and familiar measurement exercise — often synonymous with measurement — is data gathering and curation. Data on its own can be fascinating and useful; <a href="https://radar.cloudflare.com"><u>Cloudflare Radar</u></a> is clear evidence of that! Simple counting can, in many contexts, help us relate to and make sense of our environments.</p><p>Data gathering and curation consumes more energy, time, and resources than modeling or validation. The explanation is implied by the cyclical measurement pattern: validation requires a preceding model, and models are constructed using data. No data, no model, no validation, no insight nor prediction nor learning. The quality of each step in the cycle <i>depends</i> on the quality of the previous step — high-quality data is <i>the</i> linchpin in measurement practices. The <a href="https://en.wikipedia.org/wiki/Large_Hadron_Collider"><u>Large Hadron Collider</u></a> and the <a href="https://en.wikipedia.org/wiki/James_Webb_Space_Telescope"><u>James Webb Space Telescope</u></a> are great examples of the lengths we can, and need, to go to: they operate relentlessly in pursuit of high-quality data. Similar “always-on” tools in the Internet measurement community are much less glamorous, but no less important. <a href="https://www.caida.org/about/"><u>CAIDA</u></a> and <a href="https://atlas.ripe.net/"><u>RIPE’s Atlas</u></a> are just two examples of longstanding projects that gather telemetry and curate datasets.</p><p>Make no mistake: high-quality data gathering and curation is <i>hard</i>.</p><p>Luckily, “high-quality” does not mean perfect; it does mean <i>representative</i>. For example, if we’re measuring distance or time, the accuracy must reflect the true value. Large populations can be reasonably studied using much smaller numbers of samples.
For example, our global assessment of connection tampering revealed valuable insights with a sample of <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>1 in 10,000</u></a> (or 0.01%). The low sampling rate works at Cloudflare in part because of the immense diversity of Cloudflare’s customers, which attracts traffic for all kinds of content and purposes. Later this week, we’ll share in a blog post how imperfect signals used to find a sample of around 180,000 carrier-grade NATs in Cloudflare’s request logs are “good enough” to identify more than 12,000,000 others that cannot be directly observed.</p><p>Another important, and arguably counterintuitive, misconception is that more data naturally reveals more detail and answers to more questions. As Ram Sundaran writes in a <a href="https://blog.cloudflare.com/experience-of-data-at-scale"><u>guest post</u></a>, sometimes there is so much noise that finding answers in large datasets can seem like a small miracle.</p>
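Why such a low sampling rate can still be representative comes down to sample size: the standard error of an estimated proportion depends on the number of samples taken, not on the fraction of the population sampled. A quick simulation with invented numbers illustrates:

```python
import math
import random

random.seed(42)

# Hypothetical population: one billion connections, 2% of which exhibit the
# behaviour we want to measure (say, an ungraceful close).
POPULATION = 1_000_000_000
TRUE_RATE = 0.02
SAMPLE_RATE = 1 / 10_000  # a 1-in-10,000 sample

n = int(POPULATION * SAMPLE_RATE)  # 100,000 sampled connections
hits = sum(random.random() < TRUE_RATE for _ in range(n))
estimate = hits / n

# Standard error of a proportion: sqrt(p(1-p)/n) — driven by n alone.
stderr = math.sqrt(estimate * (1 - estimate) / n)
print(f"estimate: {estimate:.4f} ± {1.96 * stderr:.4f} (95% CI), true rate: {TRUE_RATE}")
```

With 100,000 samples, the 95% confidence interval is under ±0.1 percentage points, even though 99.99% of the population was never observed. (This assumes the sample is unbiased; the diversity of traffic noted above is what makes that assumption tenable.)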
    <div>
      <h2>Modeling</h2>
      <a href="#modeling">
        
      </a>
    </div>
    <p>Models may be conceptual, and describe aspects of an environment or system. The most useful can be expressed as simple statements about our understanding or our assumptions. In effect, they encapsulate a hypothesis that can be tested. For example, we might believe or assume that an ISP or network <a href="https://blog.cloudflare.com/cdn-latency-passive-measurement/#example"><u>will typically prefer</u></a> a direct no-cost peering path to a CDN over transit network paths that incur a cost, even when the direct path is longer. This forms a model that can be validated.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rnYBJIYzjpAoerH3Z2bDL/2e2bcf155e2b1c3f6abe46a72fe129f5/image3.png" />
</figure><p>Predictive models push beyond our boundaries of understanding to help identify, explain, or understand aspects of systems that are not obvious or directly observable, or are difficult to ascertain. Predictive models often use statistical techniques to, for example, identify underlying stochastic processes or to create machine learning classifiers. A more common use of the statistical tools is to characterize the curated data itself. Remarkably powerful models can be simple probability distributions with means, medians, variance, and confidence indicators.</p><p>One aspect of the Internet that has attracted a lot of attention is how networks choose to connect to other networks. Understanding how the Internet forms and grows is crucial for simulation, but also helps to predict ways in which networks might fail. The equation below on the left comes from the <a href="https://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model"><u>Barabási–Albert (BA) model</u></a>, an early model that assumes <i>preferential connectivity</i> or, in more familiar terms, “rich get richer.”</p><p>In its simplest version, a new network in the BA model chooses to connect to an existing network with a probability proportional to the number of connections that the existing network already has. Later models did away with ‘intelligent’ selection mechanisms. The equation below on the right is based on the <a href="https://dl.acm.org/doi/pdf/10.1145/956981.956986"><u>sizes of networks</u></a>, a more general mechanism similar to the way celestial bodies form in the universe.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5h8V0ABHULoh2vRaa2wQhn/baf190909036b7f4f15fa506754784c5/Screenshot_2025-10-27_at_10.51.04.png" />
</figure><p>Sometimes knowing which tool to use, and when, is a skill in itself. One such example is throwing ML and AI at problems that are tractable with mechanisms that are simpler and far more transparent. This <a href="https://blog.cloudflare.com/experience-of-data-at-scale"><u>guest blog</u></a>, for example, explains that ML was ruled out for understanding anomalous TCP behaviour because TCP is tightly specified, which suggested that a full enumeration of the possible packet sequences was feasible — and that proved correct.</p><p>An understanding of the domain is often critical to our ability to construct accurate models. Machine learning, for example, is a useful tool to help make sense of large unstructured data, but can be remarkably powerful with some domain expertise. Our work featured later this week on detection of multi-user IPs provides one such example. In particular, we sought to detect carrier-grade NAT devices (CGNATs). They are unique among large-scale multi-user IPs because, unlike VPNs and proxies, users neither choose to use CGNATs nor are aware of their existence.</p><p>The ML models successfully identified multi-user IPs, but disambiguating CGNATs proved elusive until we applied domain knowledge. For example, CGNATs are typically deployed across a range of contiguous IPs (e.g. in a /24 block), which, as shown below, turns out to be a very important feature in the model.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15WI2U2JnD12WOcaCD9wQN/7bdf0adb9ade2444f7b3837f75c7f109/unnamed__1_.png" />
          </figure>
    <div>
      <h2>Validation</h2>
      <a href="#validation">
        
      </a>
    </div>
<p>The validation phase almost singularly determines the value of the whole measurement exercise, by testing the output of the model against data. If the model makes predictions that are reflected in the data, then the model has validity. Predictions that contrast or conflict with the validation data indicate that the model is either flawed or biased by the curated data.</p><p>Validation is where great measurement can fall apart — primarily in one of two ways. First, just like in the initial data curation phase, validation data must be representative of the population. For example, it would be a mistake to curate data about traffic during the day, build a model about that data, and then validate using data about traffic at night. There is also no point in using QUIC data to validate measurements about, say, TCP (unless the measurement’s hypothesis is that they have attributes in common). Care must always be taken to ensure that the measurement cannot be corrupted by differences between the validation data and the initial data.</p><p>Validation also risks being misleading when it uses the curated data directly. Certainly this approach mitigates differences between datasets. However, the only conclusion that can be drawn when validating with the same data is that the model reasonably describes the data — not whatever the data represents. Consider, for example, machine learning. At its core, machine learning is a measurement insomuch as it follows the lifecycle: curate data, (feed it into a machine learning algorithm to) build a model, then validate the output against data. An early common practice in the machine learning community was to partition a single dataset into 70% for training and 30% for validation. This setup leads to a higher likelihood of a positive evaluation of the model that is unwarranted, and potentially misleading.
The best case for an ML model trained on a dataset that amplifies or omits important characteristics is a model that reflects those biases — which becomes a potential source of <a href="https://en.wikipedia.org/wiki/Algorithmic_bias"><u>algorithmic bias</u></a>. </p><p>Naturally we have greater confidence in models that prove valid with unrelated data. The validation dataset can describe the same attributes from a different source, for example, models constructed <a href="https://blog.cloudflare.com/cdn-latency-passive-measurement/"><u>from passive RTT log data and validated against active pings</u></a>. Alternatively, models may be validated using entirely different data or signals, such as confirming <a href="https://blog.cloudflare.com/connection-tampering/"><u>connection tampering with distributions and header values</u></a> that were ignored in the model’s construction. </p>
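<p>The pitfall of validating against a split of the same curated dataset can be made concrete with a small sketch. All numbers below are synthetic and purely illustrative: a trivial model fit on daytime-only data looks excellent against a 30% holdout of that same data, and falls apart against independent nighttime data.</p>

```python
import random
import statistics

random.seed(7)

# Synthetic latency samples (ms). The curated dataset covers daytime
# traffic only; the independent dataset covers nighttime traffic.
day_latencies = [random.gauss(50, 5) for _ in range(1000)]
night_latencies = [random.gauss(80, 5) for _ in range(1000)]

# A deliberately simple "model": predict the mean latency of the
# training portion for every future sample.
train, holdout = day_latencies[:700], day_latencies[700:]
prediction = statistics.mean(train)

def validation_error(pred, samples):
    """Mean absolute error of a constant prediction against samples."""
    return statistics.mean(abs(pred - s) for s in samples)

# Validating on the 30% holdout of the *same* dataset looks reassuring...
same_data_error = validation_error(prediction, holdout)

# ...while validating on independent data exposes the curation bias.
independent_error = validation_error(prediction, night_latencies)

print(f"holdout: {same_data_error:.1f} ms, independent: {independent_error:.1f} ms")
```

The model "passes" or "fails" validation depending only on which dataset was chosen — which is exactly why validity against a same-source holdout says little about validity in the world.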
    <div>
      <h2>The ethics of network measurement</h2>
      <a href="#the-ethics-of-network-measurement">
        
      </a>
    </div>
    <p>The importance of ethics in network measurement is hard to overstate. It’s easy to perceive network measurement as risk-free, removed from and having little effect on humans—a perception far from truth. Recall the speed tests and the packet pair technique for bandwidth estimation described above. In a speed test, an actor estimates bandwidth by consuming all the available bottleneck capacity that may or may not be within the actor’s network. The cost of resource consumption might be borne by others, and certainly reduces the potential performance of the network for its users. The risks of that type of bandwidth measurement prompted the packet pair technique and its use of only a few pairs of packets and a little math to infer bandwidth—albeit with some orchestration between a sender and receiver.</p><p>Best practice in network measurement scrutinizes risks and effects <i>before</i> the measurement exercise. This might seem like a burden, but the ethical considerations often spark creativity and are the reasons that novel methodology emerge. Looking for alternatives to JavaScript injection is what prompted Cloudflare’s own efforts to <a href="https://blog.cloudflare.com/cdn-latency-passive-measurement/"><u>estimate the performance</u></a> of other CDNs using passive data. For more information, see “<a href="https://dl.acm.org/doi/10.1145/2896816"><u>Ethical Considerations in Network Measurement Papers</u></a>” published in the Communications of the ACM (2016).</p>
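<p>To make the packet pair technique above concrete, here is a minimal sketch of the underlying arithmetic. All values are illustrative: two packets sent back-to-back are spread apart by the bottleneck link, so the smallest observed arrival gap reveals its capacity without flooding the path.</p>

```python
PACKET_SIZE_BYTES = 1500  # typical MTU-sized probe packet

def estimate_bandwidth_bps(arrival_gaps_s):
    """Estimate bottleneck bandwidth (bits/s) from packet-pair gaps.

    The minimum gap is used because cross traffic and queueing after
    the bottleneck can only widen a pair's spacing, never narrow it.
    """
    return PACKET_SIZE_BYTES * 8 / min(arrival_gaps_s)

# Hypothetical gaps (seconds) for four packet pairs; a 1500-byte packet
# takes 1.2 ms to serialize on a 10 Mbit/s bottleneck link.
gaps = [0.0012, 0.0015, 0.0013, 0.0019]
print(f"{estimate_bandwidth_bps(gaps) / 1e6:.0f} Mbit/s")  # → 10 Mbit/s
```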
    <div>
      <h2>Visualization and representation</h2>
      <a href="#visualization-and-representation">
        
      </a>
    </div>
    <p>Visualization and representation are invaluable <i>at every stage</i> of the measurement lifecycle. Representations should at least improve our understanding; ideally, they also make follow-up actions clear. Statements without context are poor representations. For example, a “30% greater chance” sounds like a lot but has no value without a reference point — 30% more than a 0.5% chance is likely less of a concern than 30% more than a 20% chance.</p><p>One example of representation is Cloudflare’s “<a href="https://www.cloudflare.com/sv-se/network/"><u>closeness</u></a>” statement: Cloudflare is “<i>approximately 50 ms from 95% of the Internet-connected population globally</i>.” The statement encapsulates a “survey” of our logs: For each IP address that connects to Cloudflare, half of the minimum RTT across all of its connections is a “worst approximation” of the latency from that IP address to Cloudflare; in 95% of cases, the minRTT/2 is at or below 50 ms.</p><p>Visualizations, meanwhile, can be so powerful as to lead to misleading conclusions — a notion that features prominently later this week in a blog post about routing resilience evaluations. One example on that subject appears below, with two bar charts that order individual US states by the number of interconnection facilities in each state, from most to least. On the left, states are ordered according to the raw count of facilities; the top-ranked state has more than 140 interconnection facilities. On the right, the raw counts are normalized by (in this case, divided by) the population of each state.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7c2LwnBXvucFPhWKwG0F7g/033d94a2a8e3be8844a6f958ced6d762/image9.png" />
          </figure><p>These representations demonstrate that our models are shaped, and can be misinformed, by how we evaluate data. In this case, we have purposefully omitted the state names on the x-axis because they are a distraction. Instead, each bar is coloured to indicate whether it is above (green) or below (yellow) the median of facilities per person in the right-hand graph. What becomes immediately obvious is that the two states with the highest number of facilities fall below the median, i.e., they are in the bottom half of states when ordered by facilities per person.</p><p>Sometimes a visualization can be so powerful as to leave no doubt. The image below is a personal favourite, because it gives strong evidence that the data and models were correct. In this visualization, each column represents a single type of connection anomaly that we observed. Inside each column, the anomaly’s occurrence is divided proportionally among the countries where the connections were initiated. As an example, look at the left-most column for SYN→∅ anomalies (a type of timeout). It shows that connections from China, India, Iran, and the United States dominated this specific anomaly type. Organizing the visualization this way put the data <i>first</i>, which helped mitigate any bias we might have had about explanations, underlying mechanisms, or locations.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1sUdADDxOjZzn5Bq6qCgs2/0cfd453013b83ff50924993bb38c6e9b/image1.png" />
          </figure><p>By organizing the anomalies this way, the visualization immediately answered one question: “Are the failures expected behaviour?” If they were expected, or normal across the Internet, then the anomalies would appear in roughly similar proportions, rather than in such markedly different ones. The visualization was a strong validation (but <a href="https://blog.cloudflare.com/connection-tampering/#signature-validation-letting-the-data-speak"><u>not the only one</u></a>) of our approach and intuition—and opened up further avenues of investigation as a result.</p>
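<p>The normalization step behind the two bar charts can be sketched in a few lines. The counts and populations below are invented for illustration; the point is only that ordering by raw count and colouring by the per-person median are two independent views of the same data.</p>

```python
from statistics import median

# Hypothetical states: name -> (facility_count, population_millions)
states = {
    "A": (140, 39.0),
    "B": (120, 30.0),
    "C": (40, 6.0),
    "D": (25, 3.0),
    "E": (10, 0.7),
}

# Right-hand chart: normalize raw counts by population.
per_capita = {s: count / pop for s, (count, pop) in states.items()}
cutoff = median(per_capita.values())

# Left-hand chart order (raw count, descending), with each bar coloured
# by whether it sits above (green) or below (yellow) the per-person median.
for state, (count, _) in sorted(states.items(), key=lambda kv: -kv[1][0]):
    colour = "green" if per_capita[state] > cutoff else "yellow"
    print(state, count, colour)
```

With these invented numbers, the two highest raw counts land below the per-person median — the same inversion the real charts revealed.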
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Cloudflare continues to think deeply about new and novel ways to use available (passive) data, and welcomes ideas. Measurement helps us understand the Internet we all depend on, value, and love, and is a community-wide endeavour.</p><p>We encourage new entrants into the measurement space, and hope this blog serves as both an introduction to its challenges, and a map with which to evaluate measurement work published at Cloudflare or anywhere else.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">20pf9BGcV10k0j9ASL8JtY</guid>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[Data at Cloudflare scale: some insights on measurement for 1,111 interns]]></title>
            <link>https://blog.cloudflare.com/experience-of-data-at-scale/</link>
            <pubDate>Mon, 27 Oct 2025 12:00:00 GMT</pubDate>
            <description><![CDATA[ While large cloud providers hold vast troves of passive network data, analyzing them is complicated. The scale, noise, and absence of definitive ground truth all create major hurdles. Yet by carefully quantifying these constraints and finding alternative forms of evidence, meaningful insights can still emerge. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare recently announced our goal to hire <a href="https://blog.cloudflare.com/cloudflare-1111-intern-program/"><u>1,111 interns</u></a> in 2026 — that’s equivalent to about 25% of our full-time workforce. This means countless opportunities to develop and ship working code into production. It also creates novel opportunities to measure aspects of the Internet that are otherwise hard to observe — and more difficult still to understand.</p><p>Measurement is hard, even at Cloudflare, despite the vast amount of data generated by our traffic (much of it published via <a href="https://radar.cloudflare.com/"><u>Cloudflare Radar</u></a>). A misconception we often hear is, “Cloudflare has so much data that it must have all the answers.” Having a huge amount of data is great — but it also means much more noise to filter out, and lots of additional work to rule out alternative explanations.</p><p>Ram Sundara Raman was an intern at Cloudflare in 2022 as he pursued his PhD. He’s now an assistant professor at the University of California, Santa Cruz, and we’ve invited him back to share his insights about working with data at Cloudflare.</p><p>Ram’s project is a great example of how insights that researchers bring from their <a href="https://breakerspace.cs.umd.edu/"><u>university research lab</u></a> can lay the groundwork for a valuable project at Cloudflare — in this case, detecting and explaining connection failures to customers. One tip for prospective interns: If you’re applying and thinking about data and measurement ideas to work on at Cloudflare, a good question to ponder is if, how, or why <i>your</i> idea might matter to Cloudflare. We love hearing your ideas!</p><p>Without further ado, here’s Ram. We hope his insights are as informative and refreshing to future interns — and practitioners — as they are to us here at Cloudflare.</p>
    <div>
      <h2>Insights from data at large scale might just be a small miracle  </h2>
      <a href="#insights-from-data-at-large-scale-might-just-be-a-small-miracle">
        
      </a>
    </div>
    <p><i>by Ram Sundara Raman, Assistant Professor of Computer Science and Engineering, UC Santa Cruz</i></p><p>Before joining Cloudflare as a research intern in the summer of 2022, I’d worked on multiple network security and privacy research problems as a PhD student at the University of Michigan. My previous experience involved <i>active measurements</i>, in which probes were carefully crafted and transmitted to detect and quantify security issues such as <a href="https://dl.acm.org/doi/10.1145/3419394.3423665"><u>HTTPS interception</u></a> and <a href="https://dl.acm.org/doi/10.1145/3372297.3417883"><u>connection tampering</u></a>. These attacks, performed by powerful network middleboxes between users and Internet servers, can block Internet content and services for numerous users in various regions, and can also reduce their security. For example, <a href="https://dl.acm.org/doi/10.1145/3419394.3423665"><u>the HTTPS Interception Man-in-the-Middle Attack in Kazakhstan in 2019</u></a> was detected in 7-24% of all measurements we performed in the country. </p><p>Detecting such attacks is challenging. The underlying mechanisms are diverse, with both geographic and temporal variations — and they’re entirely opaque. Moreover, the Internet has no technical mechanisms to report to users when their traffic is being manipulated, and third-party actors are rarely, if ever, transparent with affected users. </p><p>My active measurement work before Cloudflare helped address these challenges. Along with my PI and team at the University of Michigan, I helped develop <a href="https://censoredplanet.org/"><u>Censored Planet</u></a>, one of the largest active Internet censorship observatories, detecting connection tampering in more than 200 countries. However, active measurements face barriers of scale, resources, and real-world visibility. 
For instance, Censored Planet is only able to measure blocking and connection tampering for the 2,000 most popular websites, simply because of limits on time and resources. </p><p>While working on projects like Censored Planet, I’d often look at large network operators or cloud providers and think: “<i>If only I had my hands on the data they collect, I could solve this problem so easily. They have a global view of real-world traffic from nearly every network, and probably enough resources and telemetry to scale analysis to that level of data. How hard could it be to use this data, for example, to detect when middleboxes interfere?”</i> </p><p>As we learned through <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>our research</u></a> published at <a href="https://www.sigcomm.org/"><u>ACM SIGCOMM’23</u></a> — it can be <i>very</i> hard.</p><p>My perspectives on censorship evolved as a direct result of my experience at Cloudflare, which taught me that detection at scale is hard, even with large-scale data. The research I did during my internship helped reveal that network middleboxes block or otherwise interfere with certain connections not only in limited places, but also at <a href="https://blog.cloudflare.com/tcp-resets-timeouts/"><u>various scales around the world</u></a>. </p>
    <div>
      <h3>An internship project built on real insights, using production data</h3>
      <a href="#an-internship-project-built-on-real-insights-using-production-data">
        
      </a>
    </div>
    <p>In this research, we built upon insights gathered by the wider active measurement community, namely that middleboxes interfere with Internet TCP connections by dropping packets, or injecting RST packets to cause connections to abort. The same insights revealed that the patterns of packet drops and RSTs are deterministic — and, as a result, potentially detectable. Such is the flexibility of active measurement: craft a custom request, or ‘probe,’ that elicits a response from the environment. However, such a targeted approach would be difficult to scale and maintain, even for Cloudflare: What probes should be crafted? Where should they be sent? What motivation would Cloudflare have to even try, if the risk of missing so much is so high? </p><p>The goal of my internship was to see if we could instead flip the approach: to be passive instead of active. Everything Cloudflare does must be both scalable and <i>sustainable</i>. However, it was entirely uncertain whether a system restricted to passive observation could be constructed, even if the tampering events could be detected. The requirement was clear: Only observe and use data that comes to Cloudflare naturally. No mixing in other datasets, no running our own active measurements. Either would have made life easier: we could have controlled the variables, maybe even obtained ground truth that would help us confirm our observations. But where’s the fun in that? Besides, Cloudflare has <i>all</i> the data anyway… right? </p><p>Yes, maybe — if it is sampled appropriately, teased out reliably, and interpreted correctly.</p><p>Here’s a useful insight: I’ve often heard people say that finding middleboxes that tamper with Internet connections using active measurements is like finding a needle in a haystack — rare, finicky, and hard to pin down. 
When we started looking at this problem from the lens of Cloudflare’s passive dataset, we quickly realized we were still looking for the same needle — and in some ways, it was now even harder to find.</p><p>That’s because as a passive observer we lose the ability to choose where to look. Also, the haystack now stretches across continents, millions of users, and — I’m not exaggerating here — thousands of ways connections can be made and broken. Not only did we have to identify tampering from millions of real-world data points, we had to do it with data that was full of obstacles and pitfalls. It felt a lot like working with unseen traps and their tripwires. </p>
    <div>
      <h3>The traps and tripwires of large-scale passive data</h3>
      <a href="#the-traps-and-tripwires-of-large-scale-passive-data">
        
      </a>
    </div>
    <p>There were multiple challenges that I only truly understood once faced with them. Let’s start with the obvious one: <b>scale</b>.</p><p>First, there was a glut of large-scale datasets, primarily associated with incoming connections to Cloudflare. For example, at the time of my internship, Cloudflare was serving more than 45 million HTTP requests per second globally, across more than 285 data centers. Cloudflare also gets TCP connections to its 1.1.1.1 DNS server. We also explored <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Network_Error_Logging"><u>Network Error Logging</u><b><u> </u></b><u>(NEL)</u></a> data. Usually, in measurement research, we’re dealing with the issue of <i>too little scale. </i>Here, we had the opposite problem: too much of a good thing. In practice, each of these datasets had its own independent sampling method, making it all but impossible to utilize them all together. Moreover, datasets like NEL are biased because only some clients support it and only some websites enable it. After evaluating these biases, NEL did not make the final cut. </p><p>To manage the scale, we constructed special <a href="https://blog.cloudflare.com/tcp-resets-timeouts/#first-sample-connections"><u>IPTABLES rules</u></a> to log and store incoming TCP connections across all of Cloudflare’s points of presence — every server in each of our 285 data centers. However, due to the extremely large scale of the data, we had to limit ourselves to working with a uniformly random sample of one in every 10,000 connections. For each sample, we only logged the first 10 inbound packets of each connection. That meant we could not detect certain infrequent types of tampering, or any tampering that occurs later in a flow, after the first 10 packets. </p><p>Still, within those constraints, we managed to develop tampering signatures — distinctive packet patterns that reveal when middleboxes interfere. 
However, developing these signatures was anything but straightforward, due to the second tripwire: <b>noisy data. </b></p><p>It’s difficult to imagine that we could have anticipated all the different sources of noise. For example, the resolution of time-keeping in event records was milliseconds, but many packets could arrive in a single millisecond, which meant we could not trust the ordering of logged packets. We eventually learned that some denial-of-service attack traffic, as well as port scans, can look eerily like tampering events, and certain “best practices” designed to help improve the Internet, such as <a href="https://datatracker.ietf.org/doc/html/rfc6555"><u>Happy Eyeballs</u></a>, became quirks that messed with our detection. We spent a lot of time analyzing these sources of noise and iterating on our signatures to understand them. We accepted events as tampering only if supported by other sources of evidence that we identified, including but not limited to inconsistent changes in the Time-To-Live (TTL) field in the IP header.</p><p>That brings me to our last tripwire: a <b>lack of ground truth.</b></p><p>Without active, controlled experiments, it would have been extremely difficult for us to confirm when something we detected was indeed tampering, and not one of the thousand other phenomena on the Internet. Fortunately, thanks to the <a href="https://censorbib.nymity.ch/"><u>amazing work of many researchers in the censorship measurement space</u></a>, we were able to recognize at least some known signals and patterns in the data, and these helped us confirm many cases of tampering. </p><p>There were plenty more tripwires. But the key realization for me was this: While providers have lots of data that can tell you <i>things</i>, it’s incredibly hard to know which thing, how much of it, and about what. Large infrastructure operators see a filtered, sampled, and often partial view of the Internet. 
For example,</p><ul><li><p>Services like Cloudflare can see only which connections were affected and where the connections were initiated, but <i>not who did the tampering;</i></p></li><li><p>It was sometimes possible to understand which domains were blocked, but not always, because the necessary packets can be dropped before they get to Cloudflare;</p></li><li><p>As a passive observer, it’s possible only to see users' activity that is affected, not what <i>could</i> be affected.</p></li></ul><p>For a company that handles a double-digit percentage of Internet websites and services, these were surprising — but understandable — limitations. 

It may seem like the exercise is impossible, but it’s not. It’s just more challenging than I expected it to be. Despite all that, we found ways to extract meaning from chaos. For example, we carefully and painstakingly enumerated all common packet sequences Cloudflare observed, and extracted from them those that might indicate tampering, based on prior work. Moreover, we used signals like the TTL field mentioned above as supporting evidence that these packet signatures did indeed show tampering. </p><p>All of this adds up to a simple but important conclusion: large infrastructure providers are not omniscient.<b> </b>Having a global view can be powerful, but doesn’t automatically translate into <i>easy</i> observations. You can have all the data in the world and still struggle to tell the difference between a middlebox, a security filter, a confused IoT device, and even regular users closing tabs and browsers. </p><p>But that dichotomy is also the beauty of the problem space. Working with imperfect data forces us to be creative, to find patterns in the noise, and to design methods that work despite what’s missing. And no, before you ask, you can’t just throw machine learning at the problem, nor do you need to — even with all the noise, the protocols are tightly specified, meaning patterns can be enumerated easily but must still be debated manually. </p>
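<p>As an illustration of the TTL-based supporting evidence mentioned above, consider a minimal (hypothetical) consistency check. A packet injected by an on-path middlebox originates partway along the path, so its IP TTL on arrival typically differs from the TTLs of the client’s own packets. The threshold and values below are invented for the sketch, not taken from the production system.</p>

```python
def ttl_is_suspicious(flow_ttls, rst_ttl, tolerance=2):
    """Flag a RST whose TTL deviates from the rest of the flow.

    flow_ttls: IP TTLs observed on the connection's ordinary packets.
    rst_ttl:   IP TTL observed on the RST that closed the connection.
    tolerance: slack for ordinary route changes (illustrative value).
    """
    lo, hi = min(flow_ttls), max(flow_ttls)
    return rst_ttl < lo - tolerance or rst_ttl > hi + tolerance

# The client's packets arrived with TTL 52-53; the RST shows TTL 244,
# consistent with injection by a device much closer to the observer.
print(ttl_is_suspicious([52, 53, 52], 244))  # → True
print(ttl_is_suspicious([52, 53, 52], 53))   # → False
```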
    <div>
      <h3>An internship project built on real insights, using production data</h3>
      <a href="#an-internship-project-built-on-real-insights-using-production-data">
        
      </a>
    </div>
    <p>Using our packet-level samples and <b>19 tampering signatures</b>, we saw distinctive tampering behaviors across hundreds of networks, and were able to track large increases in tampering rates (Figure 1). And it worked because, despite the data’s limits, Cloudflare’s network let us see the <i>real-world effects</i> of tampering. Also, thanks to the tireless efforts of <a href="https://research.cloudflare.com/about/people/luke-valenta/"><u>Luke Valenta</u></a> and the Cloudflare Radar team, the data from our project is continuously being <a href="https://radar.cloudflare.com/security/network-layer#tcp-resets-and-timeouts"><u>published on Cloudflare Radar</u></a> (Figure 2).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/306MKIUSWYPDewkUmckP4p/74227ea6d9a9f5750d6231e17aaabe0f/image1.png" />
          </figure><p><sup>Figure 1: Increase in match rates of our 19 tampering signatures during a period of nationwide protests in Iran in late 2022.</sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26qYosPoBquXSZrUACTbYp/a9adbfce9c04cb1831f4b8610fe69445/image2.png" />
          </figure><p><sup>Figure 2: Data from our connection tampering research is available live on Radar.</sup></p><p>In the future, though, I think solving challenges like these will require a <b>combination of passive and active probing</b>, using the scale of providers like Cloudflare together with targeted, controlled measurements to paint the full picture of Internet tampering. My team at  <a href="https://randlab.engineering.ucsc.edu/"><u>UCSC’s RANDLab</u></a> and the research group at <a href="https://censoredplanet.org"><u>Censored Planet</u></a> continue to work on this problem, especially asking how we can automatically identify tampering when attacks happen or networks change. </p><p>While collaborations between academia and industry aren’t always straightforward, they hold strong potential to help build a better Internet. If you’re interested in an internship adventure like the one I described, <a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Early+Talent"><u>apply today</u></a>! </p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">5plcCyVVqbzFwO2FQx0uGN</guid>
            <dc:creator>Marwan Fayed</dc:creator>
            <dc:creator>Ram Sundara Raman (Guest author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we prevent conflicts in authoritative DNS configuration using formal verification]]></title>
            <link>https://blog.cloudflare.com/topaz-policy-engine-design/</link>
            <pubDate>Fri, 08 Nov 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ We describe how Cloudflare uses a custom Lisp-like programming language and formal verifier (written in Racket and Rosette) to prevent logical contradictions in our authoritative DNS nameserver’s behavior. ]]></description>
            <content:encoded><![CDATA[ <p>Over the last year, Cloudflare has begun formally verifying the correctness of our internal DNS addressing behavior — the logic that determines which IP address a DNS query receives when it hits our authoritative nameserver. This means that for every possible DNS query for a <a href="https://developers.cloudflare.com/dns/manage-dns-records/reference/proxied-dns-records/"><u>proxied</u></a> domain we could receive, we try to mathematically prove properties about our DNS addressing behavior, even when different systems (owned by different teams) at Cloudflare have contradictory views on which IP addresses should be returned.</p><p>To achieve this, we formally verify the programs — written in a custom <a href="https://en.wikipedia.org/wiki/Lisp_(programming_language)"><u>Lisp</u></a>-like programming language — that our nameserver executes when it receives a DNS query. These programs determine which IP addresses to return. Whenever an engineer changes one of these programs, we run all the programs through our custom model checker (written in <a href="https://racket-lang.org/"><u>Racket</u></a> + <a href="https://emina.github.io/rosette/"><u>Rosette</u></a>) to check for certain bugs (e.g., one program overshadowing another) before the programs are deployed.</p><p>Our formal verifier runs in production today, and is part of a larger addressing system called Topaz. In fact, it’s likely you’ve made a DNS query today that triggered a formally verified Topaz program.</p><p>This post is a technical description of how Topaz’s formal verification works. Besides being a valuable tool for Cloudflare engineers, Topaz is a real-world example of <a href="https://en.wikipedia.org/wiki/Formal_verification"><u>formal verification</u></a> applied to networked systems. 
We hope it inspires other network operators to incorporate formal methods, where appropriate, to help make the Internet more reliable for all.</p><p>Topaz’s full technical details have been peer-reviewed and published in <a href="https://conferences.sigcomm.org/sigcomm/2024/"><u>ACM SIGCOMM 2024</u></a>, with both a <a href="https://research.cloudflare.com/publications/Larisch2024/"><u>paper</u></a> and short <a href="https://www.youtube.com/watch?v=hW7RjXVx7_Q"><u>video</u></a> available online. </p>
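<p>To give a flavour of what a check like “one program overshadowing another” means, here is a deliberately tiny illustration. The real verifier reasons symbolically over topaz-lang programs using Racket and Rosette; this sketch instead brute-forces a small, hypothetical query space in Python.</p>

```python
from itertools import product

# Hypothetical query metadata space: (tag, datacenter).
QUERIES = list(product(["orange", "blue"], ["dc1", "dc2"]))

def match_a(query):              # earlier program: matches every query
    return True

def match_b(query):              # later program: matches "orange" queries
    tag, _ = query
    return tag == "orange"

def overshadowed(earlier, later, queries):
    """True if `later` can never fire: every query it matches is
    already claimed by `earlier`, which runs first in the list."""
    return all(earlier(q) for q in queries if later(q))

print(overshadowed(match_a, match_b, QUERIES))  # → True: b is dead code
```

A symbolic verifier answers the same question without enumerating queries, which is what makes the check tractable for real metadata spaces.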
    <div>
      <h2>Addressing: how IP addresses are chosen</h2>
      <a href="#addressing-how-ip-addresses-are-chosen">
        
      </a>
    </div>
    <p>When a DNS query for a customer’s proxied domain hits Cloudflare’s nameserver, the nameserver returns an IP address — but how does it decide which address to return?</p><p>Let’s make this more concrete. When a customer, say <code>example.com</code>, signs up for Cloudflare and <a href="https://developers.cloudflare.com/dns/manage-dns-records/reference/proxied-dns-records/"><u>proxies</u></a> their traffic through Cloudflare, it makes Cloudflare’s nameserver <i>authoritative</i> for their domain, which means our nameserver has the <i>authority </i>to respond to DNS queries for <code>example.com</code>. Later, when a client makes a DNS query for <code>example.com</code>, the client’s recursive DNS resolver (for example, <a href="https://www.cloudflare.com/learning/dns/what-is-1.1.1.1/"><u>1.1.1.1</u></a>) queries our nameserver for the authoritative response. Our nameserver returns <b><i>some</i></b><i> </i>Cloudflare IP address (of our choosing) to the resolver, which forwards that address to the client. The client then uses the IP address to connect to Cloudflare’s network, which is a global <a href="https://www.cloudflare.com/en-gb/learning/cdn/glossary/anycast-network/"><u>anycast</u></a> network — every data center advertises all of our addresses.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72EmlVrMTMBMhxrhZ50YI9/54e08160ea98c55bc8e2703d7c85927b/image3.png" />
          </figure><p><sup>Clients query Cloudflare’s nameserver (via their resolver) for customer domains. The nameserver returns Cloudflare IP addresses, advertised by our entire global network, which the client uses to connect to the customer domain. Cloudflare may then connect to the origin server to fulfill the user’s HTTPS request.</sup></p><p>When the customer has <a href="https://developers.cloudflare.com/byoip/"><u>configured a static IP address</u></a> for their domain, our nameserver’s choice of IP address is simple: it simply returns that static address in response to queries made for that domain.</p><p>But for all other customer domains, our nameserver could respond with virtually any IP address that we own and operate. We may return the <i>same</i> address in response to queries for <i>different</i> domains, or <i>different</i> addresses in response to different queries for the <i>same</i> domain. We do this for resilience, but also because decoupling names and IP addresses <a href="https://blog.cloudflare.com/addressing-agility"><u>improves flexibility</u></a>.</p><p>With all that in mind, let’s return to our initial question: given a query for a proxied domain without a static IP, which IP address should be returned? The answer: <b>Cloudflare chooses IP addresses to meet various business objectives. </b>For instance, we may choose IPs to:</p><ul><li><p>Change the IP address of a domain that is under attack.</p></li><li><p>Direct fractions of traffic to specific IP addresses to test new features or services.</p></li><li><p><a href="https://blog.cloudflare.com/cloudflare-incident-on-september-17-2024/"><u>Remap or “renumber”</u></a> domain names to new IP address space.</p></li></ul>
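<p>The branch just described — return the configured static address if one exists, otherwise choose among addresses we operate — can be sketched as follows. The domains and addresses are hypothetical (documentation ranges), and the random fallback is only a stand-in for the objective-driven choices the nameserver actually makes.</p>

```python
import random

# Hypothetical configuration using documentation address ranges.
STATIC_ADDRESSES = {"static.example": "198.51.100.7"}
DYNAMIC_POOL = ["203.0.113.10", "203.0.113.11"]

def choose_address(domain):
    """Return the address our sketch-nameserver answers with."""
    static = STATIC_ADDRESSES.get(domain)
    if static is not None:
        return static  # domains with static IPs always get them
    # Any pool address will do: the whole anycast network advertises
    # all of them, so the choice is free to serve business objectives
    # (here, a stand-in random pick).
    return random.choice(DYNAMIC_POOL)

print(choose_address("static.example"))             # → 198.51.100.7
print(choose_address("example.com") in DYNAMIC_POOL)  # → True
```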
    <div>
      <h2>Topaz executes DNS objectives</h2>
      <a href="#topaz-executes-dns-objectives">
        
      </a>
    </div>
    <p>To change authoritative nameserver behavior — how we choose IPs — a Cloudflare engineer encodes their desired DNS business objective as a declarative Topaz program. Our nameserver stores the list of all such programs; when it receives a DNS query for a proxied domain, it executes the programs in sequence until one returns an IP address. It then returns that IP to the resolver.</p>
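<p>The evaluation loop just described can be sketched in a few lines of Python (not Cloudflare code — the programs, tags, and addresses here are hypothetical): programs are tried in order, and the first one that applies to the query produces the DNS answer.</p>

```python
def orange_program(query):
    """Applies only to domains tagged "orange"."""
    if query.get("domain_tag1") == "orange":
        return (["192.0.2.3"], ["2001:DB8::3"], 300)  # (v4s, v6s, TTL)
    return None  # no match: fall through to the next program

def default_program(query):
    """Catch-all, so every query gets an answer."""
    return (["192.0.2.1"], ["2001:DB8::1"], 300)

PROGRAMS = [orange_program, default_program]  # order matters!

def answer(query):
    """Execute programs in sequence; the first match wins."""
    for program in PROGRAMS:
        result = program(query)
        if result is not None:
            return result
    raise RuntimeError("no program matched")

print(answer({"domain_tag1": "orange"})[0])  # → ['192.0.2.3']
```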
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gyUw7j0GlTXj0Vm637aCW/9aa353d512151d5878998e199a538973/image1.png" />
          </figure><p><sup>Topaz receives DNS queries (metadata included) for proxied domains from Cloudflare’s nameserver. It executes a list of policies in sequence until a match is found. It returns the resulting IP address to the nameserver, which forwards it to the resolver.</sup></p><p>What do these programs look like?</p><p>Each Topaz program has three primary components:</p><ol><li><p><b>Match function: </b>A program’s match function specifies under which circumstances the program should execute. It takes as input DNS query metadata (e.g., datacenter information, account information) and outputs a boolean. If, given a DNS query, the match function returns <i>true</i>, the program’s response function is executed.</p></li><li><p><b>Response function</b>: A program’s response function specifies <i>which</i> IP addresses should be chosen. It also takes as input all the DNS query metadata, but outputs a 3-tuple (IPv4 addresses, IPv6 addresses, and TTL). When a program’s match function returns true, its corresponding response function is executed. The resulting IP addresses and TTL are returned to the resolver that made the query. </p></li><li><p><b>Configuration</b>: A program’s configuration is a set of variables that parameterize that program’s match and response function. The match and response functions reference variables in the corresponding configuration, thereby separating the macro-level behavior of a program (match/response functions) from its nitty-gritty details (specific IP addresses, names, etc.). This separation makes it easier to understand how a Topaz program behaves at a glance, without getting bogged down by specific function parameters.</p></li></ol><p>Let’s walk through an example Topaz program. The goal of this program is to give all queried domains whose metadata field “tag1” is equal to “orange” a particular IP address. The program looks like this:</p>
            <pre><code>- name: orange
  config: |
    (config
      ([desired_tag1 "orange"]
       [ipv4 (ipv4_address "192.0.2.3")]
        [ipv6 (ipv6_address "2001:DB8:1::3")]
        [t (ttl 300)]))
  match: |
    (= query_domain_tag1 desired_tag1) 
  response: |
    (response (list ipv4) (list ipv6) t)</code></pre>
            <p>Before we walk through the program, note that the program’s configuration, match function, and response function are YAML strings, but more specifically they are topaz-lang expressions. Topaz-lang is the <a href="https://en.wikipedia.org/wiki/Domain-specific_language"><u>domain-specific language (DSL)</u></a> we created specifically for expressing Topaz programs. It is based on <a href="https://www.scheme.org/"><u>Scheme</u></a>, but is much simpler. It is dynamically typed, it is not <a href="https://en.wikipedia.org/wiki/Turing_completeness"><u>Turing complete</u></a>, and every expression evaluates to exactly one value (though functions can throw errors). Operators cannot define functions within topaz-lang; they can only add new DSL functions by writing functions in the host language (Go). The DSL provides basic types (numbers, lists, maps) but also Topaz-specific types, like IPv4/IPv6 addresses and TTLs.</p><p>Let’s now examine this program in detail. </p><ul><li><p>The <code>config</code> is a set of four <i>bindings</i> from name to value. The first binds the string <code>"orange"</code> to the name <code>desired_tag1</code>. The second binds the IPv4 address <code>192.0.2.3</code> to the name <code>ipv4</code>. The third binds the IPv6 address <code>2001:DB8:1::3</code> to the name <code>ipv6</code>. And the fourth binds the TTL (for which we added a topaz-lang type) <code>300</code> (seconds) to the name <code>t</code>.</p></li><li><p>The <code>match</code> function is an expression that <i>must</i> evaluate to a boolean. It can reference configuration values (e.g., <code>desired_tag1</code>), and can also reference DNS query fields. All DNS query fields use the prefix <code>query_</code> and are brought into scope at evaluation time. This program’s match function checks whether <code>desired_tag1</code> is equal to the tag attached to the queried domain, <code>query_domain_tag1</code>. 
</p></li><li><p>The <code>response</code> function is an expression that evaluates to the special <code>response</code> type, which is really just a 3-tuple consisting of: a list of IPv4 addresses, a list of IPv6 addresses, and a TTL. This program’s response function simply returns the configured IPv4 address, IPv6 address, and TTL (seconds).</p></li></ul><p>Critically, <i>all</i> Topaz programs are encoded as YAML and live in the same version-controlled file. Imagine this program file contained only the <code>orange</code> program above, but now, a new team wants to add a new program, which checks whether the queried domain’s “tag1” field is equal to “orange” AND that the domain’s “tag2” field is equal to true:</p>
            <pre><code>- name: orange_and_true
  config: |
    (config
      ([desired_tag1 "orange"]
       [ipv4 (ipv4_address "192.0.2.2")]
        [ipv6 (ipv6_address "2001:DB8:1::2")]
       [t (ttl 300)]))
  match: |
    (and (= query_domain_tag1 desired_tag1)
         query_domain_tag2)
  response: |
    (response (list ipv4) (list ipv6) t)</code></pre>
            <p>This new team must place their new <code>orange_and_true</code> program either below or above the <code>orange</code> program in the file containing the list of Topaz programs. For instance, they could place <code>orange_and_true</code> after <code>orange</code>, like so:</p>
            <pre><code>- name: orange
  config: …
  match: …
  response: …
- name: orange_and_true
  config: …
  match: …
  response: …</code></pre>
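The first-match semantics over this ordered program list can be sketched in a few lines of Python. This is a toy stand-in for the topaz-lang interpreter, not Cloudflare's implementation; the dictionary structure and query field names here are illustrative:

```python
# Toy sketch of Topaz's first-match dispatch: walk an ordered program
# list, run each match function against the query, and execute the
# first matching program's response function.

def dispatch(programs, query):
    """Return the (ipv4s, ipv6s, ttl) response of the first matching program."""
    for program in programs:
        if program["match"](query):
            return program["response"](query)
    return None  # no program matched

# Two programs mirroring "orange" and "orange_and_true" from the text.
programs = [
    {
        "name": "orange",
        "match": lambda q: q["domain_tag1"] == "orange",
        "response": lambda q: (["192.0.2.3"], ["2001:DB8:1::3"], 300),
    },
    {
        "name": "orange_and_true",
        "match": lambda q: q["domain_tag1"] == "orange" and q["domain_tag2"],
        "response": lambda q: (["192.0.2.2"], ["2001:DB8:1::2"], 300),
    },
]

result = dispatch(programs, {"domain_tag1": "orange", "domain_tag2": True})
```

With this ordering, the `orange` program matches first even when `domain_tag2` is true, so `orange_and_true` never executes; ordering in the file is priority.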
            <p>Now let’s add a third, more interesting Topaz program. Say a Cloudflare team wants to test a modified version of our CDN’s HTTP server on a small percentage of domains, and only in a subset of Cloudflare’s data centers. Furthermore, they want to distribute these queries across a specific IP prefix such that queries for the same domain get the same IP. They write the following:</p>
            <pre><code>- name: purple
  config: |
    (config
      ([purple_datacenters (fetch_datacenters "purple")]
       [percentage 10]
       [ipv4_prefix (ipv4_prefix "203.0.113.0/24")]
       [ipv6_prefix (ipv6_prefix "2001:DB8:3::/48")]))
  match: |
    (let ([rand (rand_gen (hash query_domain))])
      (and (member? purple_datacenters query_datacenter)
           (&lt; (random_number (range 0 99) rand) percentage)))
  response: |
    (let ([hashed_domain (hash query_domain)]
          [ipv4_address (select_from ipv4_prefix hashed_domain)]
          [ipv6_address (select_from ipv6_prefix hashed_domain)])
      (response (list ipv4_address) (list ipv6_address) (ttl 1)))</code></pre>
            <p>This Topaz program is significantly more complicated, so let’s walk through it.</p><p>Starting with configuration: </p><ul><li><p>The first configuration value, <code>purple_datacenters</code>, is bound to the expression <code>(fetch_datacenters "purple")</code>, which is a function that retrieves all Cloudflare data centers tagged “purple” via an internal HTTP API. The result of this function call is a list of data centers. </p></li><li><p>The second configuration value, <code>percentage</code>, is a number representing the fraction of traffic we would like our program to act upon.</p></li><li><p>The third and fourth names are bound to IP prefixes, v4 and v6 respectively (note the built-in <code>ipv4_prefix</code> and <code>ipv6_prefix</code> types).</p></li></ul><p>The match function is also more complicated. First, note the <code>let</code> form — this lets operators define local variables. We define one local variable, a random number generator called <code>rand</code> seeded with the hash of the queried domain name. The match expression itself is a conjunction that checks two things. </p><ul><li><p>First, it checks whether the query landed in a data center tagged “purple”. </p></li><li><p>Second, it checks whether a random number between 0 and 99 (produced by a generator seeded by the domain name) is less than the configured percentage. By seeding the random number generator with the domain, the program ensures that 10% of <i>domains</i> trigger a match. If we had seeded the RNG with, say, the query ID, then queries for the same domain would behave differently.</p></li></ul><p>Together, the conjuncts guarantee that the match expression evaluates to true for 10% of domains queried in “purple” data centers.</p><p>Now let’s look at the response function. We define three local variables. The first is a hash of the domain. The second is an IPv4 address selected from the configured IPv4 prefix. 
<code>select_from</code> always chooses the same IP address given the same prefix and hash — this ensures that queries for a given domain always receive the same IP address (which makes it easier to correlate queries for a single domain), but that queries for different domains can receive different IP addresses within the configured prefix. The third local variable is an IPv6 address selected similarly. The response function returns these IP addresses and a TTL of value 1 (second).</p>
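The two tricks the purple program relies on — a deterministic per-domain percentage gate, and hash-based selection of an address from a prefix — can be sketched in Python. The helpers below (`domain_hash`, `in_percentage`, `select_from`) are our own illustrative stand-ins for the topaz-lang built-ins, not their actual implementations:

```python
import hashlib
import ipaddress

def domain_hash(domain: str) -> int:
    """Deterministic hash of a domain name (stand-in for topaz-lang's hash)."""
    return int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")

def in_percentage(domain: str, percentage: int) -> bool:
    """True for roughly `percentage`% of domains; same answer for a domain every time."""
    return domain_hash(domain) % 100 < percentage

def select_from(prefix: str, h: int) -> str:
    """Pick one address from a prefix, determined entirely by the hash."""
    net = ipaddress.ip_network(prefix)
    return str(net[h % net.num_addresses])

domain = "example.com"
if in_percentage(domain, 10):
    ipv4 = select_from("203.0.113.0/24", domain_hash(domain))
    ipv6 = select_from("2001:DB8:3::/48", domain_hash(domain))
```

Because everything is keyed on the domain hash rather than, say, a per-query value, the 10% sample is a sample of <i>domains</i>, and every query for a given domain lands on the same address within the prefix.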
    <div>
      <h2>Topaz programs are executed on the hot path</h2>
      <a href="#topaz-programs-are-executed-on-the-hot-path">
        
      </a>
    </div>
    <p>Topaz’s control plane validates the list of programs and distributes them to our global nameserver instances. As we’ve seen, the list of programs resides in a single, version-controlled YAML file. When an operator changes this file (i.e., adds a program, removes a program, or modifies an existing program), Topaz’s control plane does the following things in order:</p><ul><li><p>First, it validates the programs, making sure there are no syntax errors. </p></li><li><p>Second, it “finalizes” each program’s configuration by evaluating every configuration binding and storing the result. (For instance, to finalize the <code>purple</code> program, it evaluates <code>fetch_datacenters</code>, storing the resulting list. This way our authoritative nameservers never need to retrieve external data.) </p></li><li><p>Third, it <i>verifies</i> the finalized programs, which we will explain below. </p></li><li><p>Finally, it distributes the finalized programs across our network.</p></li></ul><p>Topaz’s control plane distributes the programs to all servers globally by writing the list of programs to <a href="https://blog.cloudflare.com/introducing-quicksilver-configuration-distribution-at-internet-scale/"><u>Quicksilver</u></a>, our edge key-value store. The Topaz service on each server detects changes in Quicksilver and updates its program list.</p><p>When our nameserver service receives a DNS query, it augments the query with additional metadata (e.g., tags) and then forwards the query to the Topaz service (both services run on every Cloudflare server) via Inter-Process Communication (IPC). Topaz, upon receiving a DNS query from the nameserver, walks through its program list, executing each program’s match function (using the topaz-lang interpreter) with the DNS query in scope (with values prefixed with <code>query_</code>). It walks the list until a match function returns <code>true</code>. 
It then executes that program’s response function, and returns the resulting IP addresses and TTL to our nameserver. The nameserver packages these addresses and TTL in valid DNS format, and then returns them to the resolver. </p>
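The "finalize" step can be sketched briefly: configuration bindings that require external data are evaluated once, centrally, and replaced with their literal results before distribution, so edge servers never make external calls. This is a hedged toy sketch; `fetch_datacenters` here is a hypothetical hardcoded stand-in for the internal API:

```python
# Toy sketch of config finalization: callable bindings are invoked once
# at build time and replaced by plain data before distribution.

def fetch_datacenters(tag: str) -> list:
    # In production this would call an internal HTTP API; hardcoded here.
    return ["dc1", "dc7"] if tag == "purple" else []

def finalize(config: dict) -> dict:
    """Evaluate every binding; callables are invoked and replaced by values."""
    return {name: value() if callable(value) else value
            for name, value in config.items()}

raw = {
    "purple_datacenters": lambda: fetch_datacenters("purple"),
    "percentage": 10,
}
finalized = finalize(raw)
# `finalized` now contains only plain data, safe to ship to every server.
```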
    <div>
      <h2>Topaz programs are formally verified</h2>
      <a href="#topaz-programs-are-formally-verified">
        
      </a>
    </div>
    <p>Before programs are distributed to our global network, they are formally verified. Each program is passed through our formal verification tool which throws an error if a program has a bug, or if two programs (e.g., the <code>orange_and_true</code> and <code>orange</code> programs) conflict with one another.</p><p>The Topaz formal verifier (<a href="https://en.wikipedia.org/wiki/Model_checking"><u>model-checker</u></a>) checks three properties.</p><p>First, it checks that each program is <i>satisfiable </i>— that there exists <i>some</i> DNS query that causes each program’s match function to return <code>true</code>. This property is useful for detecting internally-inconsistent programs that will simply never match. For instance, if a program’s match expression was <code>(and true false)</code>, there exists no query that will cause this to evaluate to true, so the verifier throws an error.</p><p>Second, it checks that each program is <i>reachable </i>— that there exists some DNS query that causes each program’s match function to return <code>true</code> <i>given all preceding programs.</i> This property is useful for detecting “dead” programs that are completely overshadowed by higher-priority programs. For instance, recall the ordering of the <code>orange</code> and <code>orange_and_true</code> programs:</p>
            <pre><code>- name: orange
  config: …
  match: (= query_domain_tag1 "orange")  
  response: …
- name: orange_and_true
  config: …
  match: (and (= query_domain_tag1 "orange") query_domain_tag2)
  response: …</code></pre>
            <p>The verifier would throw an error because the <code>orange_and_true</code> program is unreachable. For all DNS queries for which <code>query_domain_tag1</code> is "orange", regardless of <code>query_domain_tag2</code>, the <code>orange</code> program will <i>always</i> match, which means the <code>orange_and_true</code> program will <i>never</i> match. To resolve this error, we’d need to swap these two programs so that <code>orange_and_true</code> comes first.</p><p>Finally, and most importantly, the verifier checks for program <i>conflicts</i>: queries that cause any two programs to both match. If such a query exists, it throws an error (and prints the relevant query), and the operators are forced to resolve the conflict by changing their programs. However, it only checks whether specific programs conflict — those that are explicitly marked <i>exclusive. </i>Operators mark their program as exclusive if they want to be sure that no other exclusive program could match on the same queries.</p><p>To see what conflict detection looks like, consider the corrected ordering of the <code>orange_and_true</code> and <code>orange</code> programs, but note that the two programs have now been marked exclusive:</p>
            <pre><code>- name: orange_and_true
  exclusive: true
  config: ...
  match: (and (= query_domain_tag1 "orange") query_domain_tag2)
  response: ...
- name: orange
  exclusive: true
  config: ...
  match: (= query_domain_tag1 "orange") 
  response: ...</code></pre>
            <p>After marking these two programs exclusive, the verifier will throw an error. Not only will it say that these two programs can conflict with one another, but it will provide a sample query as proof:</p>
            <pre><code>Checking: no exclusive programs match the same queries: check FAILED!
Intersecting programs found:
programs "orange_and_true" and "orange" both match any query...
  to any domain...
    with tag1: "orange"
    with tag2: true
</code></pre>
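Over a toy, finite query space, this kind of conflict check can be approximated by exhaustive enumeration. The real verifier proves the property with an SMT solver rather than looping, but for a small closed set of tag values the two are equivalent; the query fields and tag values below are illustrative:

```python
from itertools import product

# Toy sketch of the verifier's conflict check: enumerate a small, finite
# query space and look for a query matched by two exclusive programs.

def all_queries():
    for tag1, tag2 in product(["orange", "blue"], [True, False]):
        yield {"tag1": tag1, "tag2": tag2}

def find_conflict(prog_a, prog_b):
    """Return a query matched by both programs, or None if none exists."""
    for q in all_queries():
        if prog_a(q) and prog_b(q):
            return q
    return None

orange = lambda q: q["tag1"] == "orange"
orange_and_true = lambda q: q["tag1"] == "orange" and q["tag2"]

conflict = find_conflict(orange, orange_and_true)
```

Here `find_conflict` turns up a query with tag1 "orange" and tag2 true — the same shape of counterexample the verifier prints. Satisfiability and reachability can be checked over the same enumeration (does any query match? does any query match this program but none of its predecessors?).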
            <p>The teams behind the <code>orange</code> and <code>orange_and_true</code> programs <i>must</i> resolve this conflict before these programs are deployed, and can use the above query to help them do so. To resolve the conflict, the teams have a few options. The simplest option is to remove the exclusive setting from one program, and acknowledge that it is simply not possible for these programs to be <code>exclusive</code>. In that case, the order of the two programs matters (one must have higher priority). This is fine! Topaz allows developers to write certain programs that <i>absolutely cannot </i>overlap with other programs (using <code>exclusive</code>), but sometimes that is just not possible. And when it’s not, at least program priority is <i>explicit.</i></p><p><i>Note: in practice, we place all exclusive programs at the top of the program file. This makes it easier to reason about interactions between exclusive and non-exclusive programs.</i></p><p>In short, verification is powerful not only because it catches bugs (e.g., unsatisfiable or unreachable programs), but also because it highlights the consequences of program changes. It helps operators understand the impact of their changes by providing immediate feedback. If two programs conflict, operators are forced to resolve the conflict before deployment, rather than after an incident.</p><p><b>Bonus: verification-powered diffs. </b>One of the newest features we’ve added to the verifier is one we call <i>semantic diffs</i>. It’s in early stages, but the key insight is that operators often just want to <i>understand</i> the impact of changes, even if these changes are deemed safe. To help operators, the verifier compares the old and new versions of the program file. Specifically, it looks for any query that matched program <i>X</i> in the old version, but matches a different program <i>Y</i> in the new version (or vice versa). For instance, if we changed <code>orange_and_true</code> thus:</p>
            <pre><code>- name: orange_and_true
  config: …
  match: (and (= query_domain_tag1 "orange") (not query_domain_tag2))
  response: …</code></pre>
            <p>Our verifier would emit:</p>
            <pre><code>Generating a report to help you understand your changes...
NOTE: the queries below (if any) are just examples. Other such queries may exist.

* program "orange_and_true" now MATCHES any query...
  to any domain...
    with tag1: "orange"
    with tag2: false</code></pre>
            <p>While not exhaustive, this information helps operators understand whether their changes are doing what they intend or not, <i>before</i> deployment. We look forward to expanding our verifier’s diff capabilities going forward.</p>
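A semantic diff of this kind can also be sketched over a toy finite query space: compare which program each query matches under the old and new program lists, and report any query whose answer changed. (The real verifier finds such queries with an SMT solver; the program and field names here are illustrative.)

```python
from itertools import product

# Toy sketch of a "semantic diff": report queries whose matching program
# differs between the old and new program lists.

def all_queries():
    for tag1, tag2 in product(["orange", "blue"], [True, False]):
        yield {"tag1": tag1, "tag2": tag2}

def first_match(programs, query):
    """Name of the first program whose match function accepts the query."""
    for name, match in programs:
        if match(query):
            return name
    return None

def semantic_diff(old, new):
    """(query, old program, new program) for every query routed differently."""
    return [(q, first_match(old, q), first_match(new, q))
            for q in all_queries()
            if first_match(old, q) != first_match(new, q)]

old = [("orange_and_true", lambda q: q["tag1"] == "orange" and q["tag2"])]
new = [("orange_and_true", lambda q: q["tag1"] == "orange" and not q["tag2"])]

for query, was, now in semantic_diff(old, new):
    print(f"{query}: {was} -> {now}")
```

Flipping `query_domain_tag2` in the match function moves the tag2-true queries out of the program and the tag2-false queries into it, and the diff surfaces both changes.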
    <div>
      <h2>How Topaz’s verifier works, and its tradeoffs</h2>
      <a href="#how-topazs-verifier-works-and-its-tradeoffs">
        
      </a>
    </div>
    <p>How does the verifier work? At a high level, the verifier checks that, for all possible DNS queries, the three properties outlined above are satisfied. A Satisfiability Modulo Theories (SMT) solver — which we explain below — makes this seemingly impossible operation feasible. (It doesn't literally loop over all DNS queries, but it is equivalent to doing so — it provides exhaustive proof.)</p><p>We implemented our formal verifier in <a href="https://emina.github.io/rosette/"><u>Rosette</u></a>, a solver-enhanced domain-specific language written in the <a href="https://racket-lang.org/"><u>Racket</u></a> programming language. Rosette makes writing a verifier more of an engineering exercise than an exercise in formal logic: if you can express the interpreter for your language in Racket/Rosette, you get verification “for free”, in some sense. We wrote a topaz-lang interpreter in Racket, then crafted our three properties using the Rosette DSL.</p><p>How does Rosette work? Rosette translates our desired properties into formulae in <a href="https://en.wikipedia.org/wiki/First-order_logic"><u>first-order logic</u></a>. At a high level, these formulae are like equations from algebra class, with “unknowns” or variables. For instance, when checking whether the <code>orange</code> program is reachable (with the <code>orange_and_true</code> program ordered before it), Rosette produces the formula <code>((NOT orange_and_true.match) AND orange.match)</code>. The “unknowns” here are the DNS query parameters that these match functions operate over, e.g., <code>query_domain_tag1</code>. To solve this formula, Rosette interfaces with an <a href="https://en.wikipedia.org/wiki/Satisfiability_modulo_theories"><u>SMT solver</u></a> (like <a href="https://github.com/Z3Prover/z3"><u>Z3</u></a>), which is specifically designed to solve these types of formulae by efficiently finding values to assign to the DNS query parameters that make the formulae true. 
Once the SMT solver finds satisfying values, Rosette translates them into a Racket data structure: in our case, a sample DNS query. In this example, once it finds a satisfying DNS query, it would report that the <code>orange</code> program is indeed reachable.</p><p>However, verification is not free. The primary cost is maintenance. The model checker’s interpreter (Racket) must be kept in lockstep with the main interpreter (Go). If they fall out-of-sync, the verifier loses the ability to accurately detect bugs. Furthermore, functions added to topaz-lang must be compatible with formal verification.</p><p>Also, not all functions are easily verifiable, which means we must restrict the kinds of functions that program authors can write. Rosette can only verify functions that operate over integers and bit-vectors. This means we only permit functions whose operations can be converted into operations over integers and bit-vectors. While this seems restrictive, it actually gets us pretty far. The main challenge is strings: Topaz does not support programs that, for example, manipulate or work with substrings of the queried domain name. However, it does support simple operations on closed-set strings. For instance, it supports checking if two domain names are equal, because we can convert all strings to a small set of values representable using integers (which are easily verifiable).</p><p>Fortunately, thanks to our design of Topaz programs, the verifier need not be compatible with all Topaz program code. The verifier only ever examines Topaz <i>match</i> functions, so only the functions specified in match functions need to be verification-compatible. We encountered other challenges when working to make our model accurate, like modeling randomness — if you are interested in the details, we encourage you to read the <a href="https://research.cloudflare.com/publications/Larisch2024/"><u>paper</u></a>.</p><p>Another potential cost is verification speed. 
We find that the verifier can ensure our existing seven programs satisfy all three properties within about six seconds, which is acceptable because verification happens only at build time. We verify programs centrally, before programs are deployed, and only when programs change. </p><p>We also ran microbenchmarks to determine how fast the verifier can check more programs — we found that, for instance, it would take the verifier about 300 seconds to verify 50 programs. While 300 seconds is still acceptable, we are looking into verifier optimizations that will reduce the time further.</p>
    <div>
      <h2>Bringing formal verification from research to production</h2>
      <a href="#bringing-formal-verification-from-research-to-production">
        
      </a>
    </div>
    <p>Topaz’s verifier began as a <a href="https://research.cloudflare.com/"><u>research</u></a> project, and has since been deployed to production. It formally verifies all changes made to the authoritative DNS behavior specified in Topaz.</p><p>For more in-depth information on Topaz, see both our research <a href="https://research.cloudflare.com/publications/Larisch2024/"><u>paper</u></a> published at SIGCOMM 2024 and the <a href="https://www.youtube.com/watch?v=hW7RjXVx7_Q"><u>recording</u></a> of the talk.</p><p>We thank our former intern, Tim Alberdingk-Thijm, for his invaluable work on Topaz’s verifier.</p> ]]></content:encoded>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Addressing]]></category>
            <category><![CDATA[Formal Methods]]></category>
            <guid isPermaLink="false">5LVsblxj2Git54IRxadpyg</guid>
            <dc:creator>James Larisch</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[A global assessment of third-party connection tampering]]></title>
            <link>https://blog.cloudflare.com/connection-tampering/</link>
            <pubDate>Thu, 05 Sep 2024 07:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare brings visibility to the practice of connection tampering as observed from our global network. ]]></description>
            <content:encoded><![CDATA[ <p>Have you ever made a phone call, only to have the call cut as soon as it is answered, with no obvious reason or explanation? This analogy is the starting point for understanding connection tampering on the Internet and its impact. </p><p>We have <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>found</u></a> that 20 percent of all Internet connections are abruptly closed before any useful data can be exchanged. Essentially, every fifth call is cut before being used. As with a phone call, it can be challenging for one or both parties to know what happened. Was it a faulty connection? Did the person on the other end of the line hang up? Did a third party intervene to stop the call?  </p><p>On the Internet, Cloudflare is in a unique position to help figure out when a third party may have played a role. Our global network allows us to identify patterns that suggest that an external party may have intentionally tampered with a connection to prevent content from being accessed. Although they are often hard to decipher, the ways connections are abruptly closed give clues to what might have happened. Sources of tampering generally do not try to hide their actions, which leaves hints of their existence that we can use to identify detectable ‘signatures’ in the connection protocol. As we explain below, there are other protocol features that are less likely to be spoofed and that point to third party actions. We can use these hints to build signature patterns of connection tampering that can be recognized.</p><p>To be clear, there are many reasons a third party might tamper with a connection. Enterprises may tamper with outbound connections from their networks to prevent users from interacting with spam or phishing sites. ISPs may use connection tampering to enforce court or regulatory orders that demand website blocking to address copyright infringement or for other legal purposes. 
Governments may mandate large-scale censorship and information control. </p><p>Despite the fact that everyone knows it happens, no other large operation has previously looked at the use of connection tampering at scale and across jurisdictions. We think that creates a notable gap in understanding what is happening in the Internet ecosystem, and that shedding light on these practices is important for transparency and the long-term health of the Internet. So today, we’re proud to share a view of global connection tampering practices.</p><p>The full technical details were recently peer-reviewed and published in “<a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>Global, Passive Detection of Connection Tampering</u></a>” at ACM SIGCOMM, with its <a href="https://youtu.be/RD73IgzQMFo?si=OWvNnlNNLalbhygV&amp;t=2984"><u>public presentation</u></a>. We’re also announcing a new <a href="https://radar.cloudflare.com/security-and-attacks#tcp-resets-and-timeouts"><u>dashboard</u></a> and <a href="https://developers.cloudflare.com/api/operations/radar-get-connection-tampering-summary"><u>API</u></a> on Cloudflare Radar that shows a near real-time view of specific connection timeout and reset events – the two mechanisms dominant in tampering experienced by users<b> </b>connecting to Cloudflare’s network globally.</p><p>To better understand our perspective, it helps to understand the nature of connection tampering and reasons we’re talking about it.</p>
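To make the idea of protocol-level "signatures" concrete, here is a toy Python sketch of classifying the sequence of packets observed on one connection. The labels echo the notation used in our study (e.g., ⟨SYN → ∅⟩, ⟨PSH → RST⟩), but the matching rules are deliberately simplified illustrations, not the classifier we actually deploy:

```python
# Toy sketch of signature matching: map a connection's observed packet
# flags (in order) to a coarse tampering-signature label. Real signatures
# are more precise (counts, sequence numbers, timing); this is illustrative.

def classify(events):
    """events: ordered packet flags seen for one connection."""
    if events == ["SYN"]:
        # Client's SYN was never answered: consistent with a dropped handshake.
        return "SYN -> (nothing): timeout before the handshake completes"
    if "PSH" in events and events[-1] == "RST":
        # Connection reset right after the client's first data packet
        # (which, for TLS, carries the cleartext SNI).
        return "PSH -> RST: reset after the first data packet"
    if events[-1] == "RST":
        return "-> RST: reset before any useful data was exchanged"
    return "no tampering signature matched"
```

For example, `classify(["SYN", "SYN-ACK", "ACK", "PSH", "RST"])` lands in the "PSH -> RST" bucket, while a connection that completes and closes with a FIN matches no signature.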
    <div>
      <h2>Global insights for a global audience</h2>
      <a href="#global-insights-for-a-global-audience">
        
      </a>
    </div>
    <p>Evidence of connection tampering is visible in networks all around the world. We were initially shocked that, globally, about 20% of all connections to Cloudflare close unexpectedly before any useful data exchange occurs — consistent with connection tampering. Here is a snapshot of these anomalous connections seen by Cloudflare that, as of today, <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>we’re sharing on Radar</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nPz5Ulu2YS7eV6Hpmniwg/fa2537c949602d057dfa83a6a599f553/2544-2.png" />
          </figure><p><i><sub>via </sub></i><a href="https://radar.cloudflare.com/security-and-attacks?dateStart=2024-07-28&amp;dateEnd=2024-08-26#tcp-resets-and-timeouts"><i><sub>Cloudflare Radar</sub></i></a></p><p>It’s not all tampering, but some of it clearly is, as we describe in more detail below. The challenge is filtering through the noise to determine which anomalous connections can confidently be attributed to tampering.</p>
    <div>
      <h2>Macro-level analysis and validation</h2>
      <a href="#macro-level-analysis-and-validation">
        
      </a>
    </div>
    <p>In <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>our work</u></a> we identified 19 patterns of anomalous connections as being candidate signatures for connection tampering. From those, we found that 14 had been previously reported by active “on the ground” measurement efforts, which presented an opportunity for validation at the macro level: If we observe our tampering signatures from Cloudflare’s network in the same places others observe them from the ground, we could have greater confidence that the signatures capture true cases of connection tampering when observed elsewhere, where there has been no prior reporting. To mitigate the risk of confirmation bias from looking where tampering is known to exist, we decided to look everywhere at the same time.</p><p>Taking that approach, the figure below, taken from our peer-reviewed <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>study</u></a>, is a visual side-by-side comparison of each of the 19 signatures. The data is taken from a two-week interval starting January 26, 2023. Within each signature column is the proportion of matching connections broken down by the country where the connection originated. For example, the column third from the right labeled with ⟨PSH → RST;RST<sub>0</sub>⟩ indicates that we almost exclusively observed that signature on connections from China. Overall, what we find is a mirror of known cases from public and prior reports, which is an indication that our methodology works.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4w1iJCVyRwZdgblT7uk2tZ/53a3201f13cf1b4994db8f8a43b9d64b/2544-3.png" />
            
            </figure><p><a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><i><sub><u>Figure 1</u></sub></i></a><i><sub>: Signature matching across countries: Each column is the total global number of connections matching a specific signature. Within each column is the proportion of connection initiations from individual countries matching that signature.</sub></i></p><p>Interestingly, by homing in on prevalence and setting aside the raw number of signature matches, unexpected macro-insights emerge. If we focus on the three most populous countries in the world ranked by <a href="https://worldpopulationreview.com/country-rankings/internet-users-by-country"><u>number of Internet users</u></a>, connections from China contribute a substantial portion of matches across no fewer than nine of the signatures. This is perhaps unsurprising, but reinforces prior studies that find evidence of the Great Firewall (GFW) being made of many <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>different deployments and implementations</u></a> of blocking mandates. Next, matches on connections from India also contribute substantially to nine different signatures, five of which are in common with signatures where China matches feature highly. Looking at the third most populous, the United States, a visible if not substantial proportion of matches appear on all but two of the signatures.</p><p>A snapshot of signature distributions per country, also taken from the peer-reviewed <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>study</u></a>, appears below for a select set of countries. The global distribution is included for comparison. 
The dark gray portions marked ⟨SYN → ∅⟩ are included for completeness, but have more non-tampering alternative explanations than the others (for example, as result of a low-rate <a href="https://blog.cloudflare.com/the-rise-of-multivector-amplifications/"><u>SYN flood</u></a>).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57Dip4I995kVVjXAhQxMrH/da5414d088777c000551a66589b80c3f/2544-4.png" />
            
            </figure><p><a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><sub><i><u>Figure 4</u></i></sub></a><sub><i>: Signature distribution per country: The percentage of connections originating from select countries (and globally) that match a particular signature, or are not tampered with.</i></sub></p><p>From this perspective we again observe patterns that match prior studies. We focus first on rates above the global average, and ignore the noisiest signature ⟨SYN → ∅⟩ in medium-gray; there are simply too many other explanations for a signature match at this earliest possible stage. Among all connections from Turkmenistan (TM), Russia (RU), Iran (IR), and China (CN), roughly 80%, 30%, 40%, and 30%, respectively, of those connections match a tampering signature. The data also reveals high signature match rates where no prior reports exist. For example, connections from Peru (PE) and Mexico (MX) match roughly 50% and 25%, respectively; <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>analysis of individual networks</u></a> in these countries suggests a likely explanation is zero-rating in mobile and cellular networks, where an ISP allows access to certain resources (but not others) at no cost. If we look below the global average, Great Britain (GB), the United States (US), and Germany (DE) each match a signature on about 10% of connections.</p><p>The data makes clear that connection tampering is widespread, and close to many users, if not most. In many ways, it’s closer than most know. To explain why, we describe connection tampering using a very familiar communication tool: the telephone.</p>
    <div>
      <h2>Explaining tampering with telephone calls</h2>
      <a href="#explaining-tampering-with-telephone-calls">
        
      </a>
    </div>
    <p>Connection tampering is a way for a third party to block access to particular content. However, it’s not enough for the third party to know the <i>type</i> of content it wants to block. The third party can only block an identity by name. </p><p>Ultimately, connection tampering is possible only by accident – an unintended side effect of protocol design. On the Internet, the most common identity is the domain name. In a communication on the Internet, the domain name is most often transmitted in the “<a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/"><u>server name indication (SNI)</u></a>” field in TLS – exposed in cleartext for all to see.</p><p>To understand why this matters, it helps to understand what connection tampering looks like in human-to-human communications without the Internet. The Internet itself looks and operates much like the postal system, which relies only on addresses and never on names. However, the way most people use the Internet is much more like the “<a href="https://en.wikipedia.org/wiki/Plain_old_telephone_service"><u>plain old telephone system</u></a>,” which <i>requires</i> names to succeed.</p><p>In the telephone system, a person first dials a phone number, <i>not</i> a name. The call is <code>connected</code> and usable only after the other side answers, and the caller hears a voice.  The caller asks for a name only <i>after</i> the call is connected. The call manifests in the system as energy signals that do not identify the communicating parties. Finally, after the call ends, a new call is required to communicate again.</p><p>On the Internet, a client such as a browser “establishes a connection.” Much like a telephone caller, it initiates a connection request to a server’s <code>number</code>, which is an IP address. 
The longest-standing “connection-oriented” protocol to connect two devices is called the <a href="https://cloudflare.tv/shows/this-week-in-net/this-week-in-net-50th-anniversary-of-the-tcp-paper/oZKEA4v4"><u>Transmission Control Protocol</u></a>, or TCP. The domain name is transmitted separately from the connection establishment, much like asking for a name once the phone is answered. The connections are “logical,” identified by metadata that does not name the communicating parties. Finally, a new connection is established with each new visit to a website.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BzA3XvqopuaP1WSg0rK6X/fe39d77acdc14c1d984512dfdb01279c/2544-5.png" />
          </figure><p><sub><i>Comparison between a TCP connection and a telephone call</i></sub></p><p>What happens if a telephone company is required to prevent a call to some party? One option is to modify or manipulate phone directories so a caller can’t get the phone number they need to dial to place the call; this is the essence of <a href="https://www.cloudflare.com/en-gb/learning/access-management/what-is-dns-filtering/"><u>DNS filtering</u></a>. A second option is to block all calls to the phone number, but this inadvertently affects others, just like <a href="https://blog.cloudflare.com/consequences-of-ip-blocking/"><u>IP blocking</u></a> does.</p><p>Once a phone call is initiated, the only way for the telephone company to know <i>who</i> is being called is to listen in on the call and wait for the caller to say, “is so-and-so there?” or “can I speak with so-and-so?” Mobile and cellular calls are no exception. The idea that the number we call <i>is</i> the person who will answer is just an expectation – it has never been the reality. For example, a parent could get a number to give to their child, or a taxi company could leave the mobile phone with whoever is on shift at the time. As a result, the telephone company <i>must listen in</i>. Once it hears a certain name, it can cut the call; neither side would have any idea what has happened – this is the very definition of connection tampering on the Internet. </p><p>For the purpose of establishing a communication channel, phone calls and TCP connections are at least comparable, and arguably exactly the same – not least because the domain name is transmitted separately from establishing a connection.</p><p>Similarly, on the Internet, the only way for a third party to know the intended recipient of a connection is to “look inside” packets as they are transmitted. 
Where a telephone company would have to listen for a name, a third party on the Internet waits to see something it does not like, most often a forbidden name. Recall from above the unintended side effect of the protocol: the name is visible in the SNI, which is needed to help establish the encrypted communication. When the third party sees a forbidden name, it causes one or both devices to close the connection by either dropping messages or injecting specially-crafted messages that cause the communicating parties to abort the connection.</p><p>The mechanisms to trigger tampering begin with <a href="https://www.cloudflare.com/learning/security/what-is-next-generation-firewall-ngfw/"><u>deep packet inspection (DPI)</u></a>, which means looking into the data portions that lie beyond the address and other metadata belonging to the connection. It’s safe to say that this functionality does not come for free; whether it’s an ISP’s router or a parental proxy, DPI is an expensive operation that gets more expensive at large scale or high speed. </p><p>One last point worth mentioning is that weaknesses in telephone tampering similarly appear in connection tampering. For example, the sounds of Jean and Gene are indistinguishable to any ear, despite being different names. Similarly, tampering with connections to Twitter’s short-form name “t.co” would also affect “microsoft.com” – and <a href="https://en.wikipedia.org/wiki/Internet_censorship_in_Russia#Deep_packet_inspection"><u>has</u></a>.</p>
    <div>
      <h2>A live view of tampering during Mahsa Amini protests</h2>
      <a href="#a-live-view-of-tampering-during-mahsa-amini-protests">
        
      </a>
    </div>
    <p>Before we delve into the technical details, there is one more motivation, one that is personal to many at Cloudflare. Transparency is important and is the reason we started this work, but it was after seeing the data <i>during</i> the Mahsa Amini protests in Iran in 2022 that we committed internally to share the data on Radar. </p><p>The figure below is for connections from Iran during 17 days overlapping the protests. The plot-lines track individual signals of anomalous connections, including signatures of <a href="https://blog.cloudflare.com/passive-detection-of-connection-tampering"><u>different types</u></a> of connection tampering. This data pre-dates the Radar service, so we have elected to share this representation from the <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>peer-reviewed paper</u></a>. It was also the first visual example of the value the data could offer if shared via Radar. 
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3IKNhC2gLo7CeUQclssiuM/58300eccf981f132689ee75b57db8cb2/2544-6.png" />
          </figure><p><a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><sub><i><u>Figure 8</u></i></sub></a><sub><i>: Signature match rates longitudinally in Iran during a period of nation-wide protests. (𝑥-axis is local time.)</i></sub></p><p>Two observations stick out from the data. First, the lines appear stable before the protests, then increase after the protests began. Second, there is variation between the lines over time, in particular the lines in light gray, dark purple, and dark green. Recall that each line is a different tampering signature, so the variation between lines suggests changes in the underlying causes – either the mechanisms at work, or the traffic that invokes them.</p><p>We emphasize that a signature match does not in itself mean there is tampering. However, in the case of Iran in 2022 there were public reports of blocking of various forms. The methods in use at the time, specifically <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/"><u>Server Name Indication (SNI)</u></a>-based blocking of access to content, had also previously been <a href="https://ooni.org/post/2020-iran-sni-blocking/"><u>well-documented</u></a>, and matched our observations in the figure above.</p><p>What about today? Below we see the Radar view of the twelve months from August 2023 to August 2024. Each color represents a different stage of the connection where tampering might happen. Over that period, TCP connection anomalies in Iran are lower than the worldwide averages overall, but appear significantly higher in the portion represented by the light-blue region. This “Post ACK” phase of communication is often associated with SNI-based blocking. 
(In the graph above, the relevant signatures are represented by the dark purple and dark green lines.) In addition, the changing proportions of the different plot-lines since mid-December 2023 suggest that techniques have been changing over time.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qhesraWkFdNPbWgx2x0xX/1a72a1707e768270c29d570b5bd35545/2544-7.png" />
          </figure><p><i><sub>via </sub></i><a href="https://radar.cloudflare.com/security-and-attacks/ir?dateStart=2023-08-26&amp;dateEnd=2024-08-26#tcp-resets-and-timeouts"><i><sub>Cloudflare Radar</sub></i></a></p>
    <div>
      <h2>The importance of an open network measurement community</h2>
      <a href="#the-importance-of-an-open-network-measurement-community">
        
      </a>
    </div>
    <p>As a testament to the importance of open measurement and research communities, this work very literally “<a href="https://en.wikipedia.org/wiki/Standing_on_the_shoulders_of_giants">builds on the shoulders of giants</a>.” It was produced in collaboration with researchers at the <a href="https://www.cs.umd.edu/">University of Maryland</a>, <a href="https://www.epfl.ch/">École Polytechnique Fédérale de Lausanne</a>, and the <a href="https://cse.engin.umich.edu/">University of Michigan</a>, but does not exist in isolation. There have been extensive efforts to measure connection tampering, most of which come from the censorship measurement community. The bulk of that work consists of <i>active</i> measurements, in which researchers craft and transmit probes in or along networks and regions to identify blocking behavior. Unsurprisingly, active measurement has both strengths and weaknesses, as described in <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>Section 2</u></a> of the paper. </p><p>The counterpart to active measurement, and the focus of our project, is <i>passive</i> measurement, which takes an “observe and do nothing” approach. Passive measurement comes with its own strengths and weaknesses but, crucially, it relies on having a good vantage point such as a large network operator. Active and passive measurement are most effective when used in conjunction, in this case helping to paint a more complete picture of the impact of connection tampering on users.</p><p>Most importantly, when embarking upon any type of measurement, great care must be taken to understand and <a href="https://cacm.acm.org/magazines/2016/10/207765-ethical-considerations-in-network-measurement-papers/fulltext"><u>evaluate the safety of the measurement</u></a> since the risks imposed on people and networks are often indirect, or hidden from view.</p>
    <div>
      <h2>Limitations of our data</h2>
      <a href="#limitations-of-our-data">
        
      </a>
    </div>
    <p>We have no doubt about the importance of being transparent with connection tampering, but we also need to be explicit about the limits on the insights that can be gleaned from the data. As passive observers of connections to the Cloudflare network – and only the Cloudflare network – we are only able to see or infer the following:</p><ol><li><p><b>Signs of connection tampering, but not where it happened.</b> Any software or device between the client’s application and the server systems can tamper with a connection. The list ranges from purpose-built systems, to firewalls in the enterprise or the home broadband router, to protection software installed on home or school computers. <i>All we can infer is where the connection started</i> (subject to the geolocation inaccuracies inherent in the Internet’s design).</p></li><li><p><b>(Often, but not always) What triggered the tampering, but not why.</b> Typically, tampering systems are triggered by domain names, keywords, or regular expressions. With enough repetition, and manual inspection, it may be possible to identify the <i>likely</i> cause of tampering, but not the reasons. Many tampering system designs are prone to unintended consequences, among them the <a href="https://en.wikipedia.org/wiki/Internet_censorship_in_Russia#Deep_packet_inspection"><u>t.co</u></a> example mentioned above.</p></li><li><p><b>Who and what </b><b><i>is</i></b><b> affected, but not who or what </b><b><i>could</i></b><b> be affected.</b> As passive observers, there are limits on the kinds of inferences we can make. For example, observable tampering on 1000 out of 1001 connections to <code>example.com</code> suggests that tampering is likely on the next connection attempt. However, that says nothing about connections to <code>another-example.com</code>. </p></li></ol>
    <div>
      <h2>Data, data, data: Extracting signals from the noise</h2>
      <a href="#data-data-data-extracting-signals-from-the-noise">
        
      </a>
    </div>
    <p>If you just want to get and use the data on Radar, see our “<a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>how to</u></a>” guide. Otherwise, let’s understand the data itself.</p><p>The focus of this work is <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/"><u>TCP</u></a>. In our data there are two mechanisms available to a third party to force a connection to close: <a href="https://en.wikipedia.org/wiki/Packet_drop_attack"><u>dropping packets</u></a> to induce timeouts, or <a href="https://en.wikipedia.org/wiki/TCP_reset_attack#TCP_resets"><u>injecting forged TCP RST packets</u></a>, each with various deployment choices. Individual tampering signatures may be reflections of those choices. For comparison, a graceful TCP close is initiated with a FIN packet. </p>
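<p>The difference between a graceful and an ungraceful close can be observed from userspace. On most systems, enabling SO_LINGER with a zero timeout makes close() abort the connection with a RST instead of the usual FIN; the peer then sees a connection reset. A sketch of ours, on a loopback connection:</p>

```python
import socket
import struct
import threading

def abortive_close_demo() -> str:
    """Open a loopback TCP connection and close it abortively.
    SO_LINGER with a zero timeout makes close() send a RST rather
    than a FIN, so the peer's read fails with a reset."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    result = {}
    def server():
        conn, _ = srv.accept()
        try:
            conn.recv(1024)            # b"" on FIN, exception on RST
            result["close"] = "graceful (FIN)"
        except ConnectionResetError:
            result["close"] = "abortive (RST)"
        finally:
            conn.close()

    t = threading.Thread(target=server)
    t.start()

    cli = socket.socket()
    cli.connect(srv.getsockname())
    # linger on, timeout 0 -> close() aborts the connection with RST
    cli.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                   struct.pack("ii", 1, 0))
    cli.close()
    t.join()
    srv.close()
    return result["close"]
```

<p>Injected RSTs exploit the same receiver behavior: the endpoint cannot tell a forged reset from a legitimate abortive close.</p>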
    <div>
      <h3>Connection tampering signatures</h3>
      <a href="#connection-tampering-signatures">
        
      </a>
    </div>
    <p>Our detection mechanism evaluates sets of packets in a connection against a set of <i>signatures</i> for connection tampering. The signatures are hand-crafted, drawing on signatures identified in prior work and on analysis of samples of connections to Cloudflare’s network that we classify as <i>anomalous</i> – connections that close early and ungracefully, by way of a RST packet or timeout, within the first 10 packets from the client. We analyzed the samples and found that 19 patterns accounted for 86.9% of all possibly tampered connections in the samples, shown in the table below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5LssSxT9SkDpXzMXtPmtZ1/a18e564cf39613bb7fe7569337d91b65/2544-8.png" />
          </figure><p><sub></sub><a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><sub><u>Table 1</u></sub></a><sub>: The comprehensive set of tampering signatures we identify through global passive measurements.</sub></p><p></p><p>To help reason about tampering, we also classed the 19 signatures above according to the stage of the connection lifetime in which they appear. Each stage implies something about the middlebox, as described below alongside corresponding sequence diagrams:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49GKWrmtfdg9Xpk0K2RiGJ/f0e755a2ba1fcac763a44185bf566f61/Screenshot_2024-09-04_at_2.57.52_PM.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6xZbuENcYfkucNmVUJmwQv/b1f69b105c2e19eb9f5ef583d50416b8/Screenshot_2024-09-04_at_2.58.00_PM.png" />
          </figure><p></p><ul><li><p><b>(a) Post-SYN (mid-handshake)</b>: Tampering is likely triggered by the destination IP address because the middlebox has probably not seen application data, which is typically transmitted after the handshake completes.</p></li><li><p><b>(b) Post-ACK (immediately after handshake)</b>: The connection is established and immediately forced to close before seeing any data. It is possible, even likely, that the middlebox has seen a data packet containing, for example, the host header in HTTP or the SNI field in TLS. </p></li><li><p><b>(c) Post-PSH (after first data packet)</b>: The middlebox has definitely seen the first data packet because the server has received it. The middlebox may have been waiting for a packet with a PSH flag, typically set to indicate data in the packet should be delivered to the application on receipt, without delay. The middlebox is likely a <a href="https://en.wikipedia.org/wiki/Man-on-the-side_attack"><u>monster-on-the-side</u></a> because it permits the offending packet to reach the destination.</p></li><li><p><b>(d) Later-in-flow (after multiple data packets)</b>: Tampering at later stages in the connection (not immediately after the first data packet, but still within the first 10 packets). The prevalence of encrypted data in TLS makes this the least likely stage for tampering to occur. The likely triggers are keywords appearing in cleartext later in (HTTP) connections, or the likes of enterprise proxies and parental protection software that have visibility into encrypted traffic and can reset connections when certain keywords are encountered. </p></li></ul>
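<p>As a rough illustration of the staging logic (a toy sketch of ours, not the production classifier), the server-side view of an ungracefully-closed connection can be bucketed by the client packets seen before the close:</p>

```python
def classify_anomaly(flags_seen: list[str]) -> str:
    """Bucket an ungracefully-closed connection by how far it got,
    mirroring the four stages above. `flags_seen` is the ordered
    list of TCP flag summaries received from the client. Assumes
    the connection already ended in a RST or timeout rather than a
    FIN (the precondition for counting it as an anomaly)."""
    data_packets = flags_seen.count("PSH")
    if "ACK" not in flags_seen:
        return "post-SYN"        # died mid-handshake: likely IP-based
    if data_packets == 0:
        return "post-ACK"        # established, closed before any data
    if data_packets == 1:
        return "post-PSH"        # closed after the first data packet
    return "later-in-flow"       # closed after multiple data packets
```

<p>Each bucket narrows what the middlebox could have seen, which is what makes the stage itself informative.</p>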
    <div>
      <h3>Accounting for alternative explanations</h3>
      <a href="#accounting-for-alternative-explanations">
        
      </a>
    </div>
    <p>How can we be confident that the signatures above detect middlebox tampering, and not just atypical client behavior? One of the challenges of passive measurement is that we do not have full visibility into the clients connecting to our network, so absolute positives are hard, if not impossible, to establish. Instead, we look for strong positive evidence of tampering, which must begin by identifying <b>false positives</b>. </p><p>We are aware of the following sources of false positives that can be hard to disambiguate from true sources of tampering. <i>All but the last occur in the first two stages</i> of the connection, before data packets are received. </p><ul><li><p><b>Scanners</b> are client-side applications that probe servers to elicit responses. Some scanner software uses fixed bits in the header to self-identify, which helps us filter. For example, we found that <a href="https://zmap.io/"><u>ZMap</u></a> accounts for approximately 1% of all <code>⟨SYN → RST⟩</code> signature matches.</p></li><li><p><b>SYN flood attacks</b> are another likely source of false positives, especially for signatures in the Post-SYN connection stage like the <code>⟨SYN → ∅⟩</code> and <code>⟨SYN → RST⟩</code> signatures. These are less likely to appear in our dataset, which is collected after traffic passes our <a href="https://www.cloudflare.com/learning/ddos/syn-flood-ddos-attack/"><u>DDoS protection</u></a> systems.</p></li><li><p><b>Happy Eyeballs</b> is a <a href="https://datatracker.ietf.org/doc/html/rfc8305"><u>common technique</u></a> used by dual-stack clients in which the client initiates an IPv6 connection to the server and, with some delay to favor IPv6, also makes an IPv4 connection. The client keeps the connection that succeeds first and drops the other. Clients that cease transmission or close the connection with a RST instead of a FIN would show up in the data, matching the <code>⟨SYN → RST⟩</code> signature. 
</p></li><li><p><b>Browser-triggered RSTs</b> may appear at any stage of the connection, but especially match signatures later in a connection (after multiple data packets). They might be triggered, for example, by a user closing a browser tab. Unlike targeted tampering, however, RSTs originating from browsers are unlikely to be biased towards specific services or websites. </p></li></ul><p>How can we separate legitimate client-initiated false positives from third-party tampering? We seek an evidence-based approach to distinguish tampering signatures from other signals within the dataset. For this we turn to individual bits in the packet headers.</p>
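<p>For the scanner case, filtering can be as simple as checking self-identifying header bits before a connection is counted toward a signature. ZMap’s stock SYN probes are reported to carry a fixed IP-ID value (54321 in its default configuration); treat that marker, and this sketch of ours, as illustrative assumptions to verify against your own captures:</p>

```python
# Illustrative marker set: ZMap's default probes reportedly use a
# fixed IP-ID of 54321. Extend this from your own baselining.
SCANNER_IPIDS = {54321}

def is_probable_scanner(pkt: dict) -> bool:
    """Return True if a SYN self-identifies as known scanner
    software, so it can be excluded before counting it toward
    the SYN->RST signature."""
    return pkt.get("ip_id") in SCANNER_IPIDS

# Example: drop the scanner probe, keep the unknown client.
conns = [
    {"sig": "SYN->RST", "ip_id": 54321},   # scanner probe
    {"sig": "SYN->RST", "ip_id": 9013},    # unknown client
]
kept = [c for c in conns if not is_probable_scanner(c)]
```

<p>Filters like this remove only the self-identifying scanners; the remaining false-positive sources need the aggregate evidence described next.</p>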
    <div>
      <h3>Signature validation – letting the data speak</h3>
      <a href="#signature-validation-letting-the-data-speak">
        
      </a>
    </div>
    <p>Signature matches in isolation are insufficient to make good determinations. Alongside the matches, we can find further supporting evidence of their accuracy by examining connections in aggregate – if the cause is tampering, and tampering is targeted, then there must be other patterns or markers in common. For example, we expect browser behavior to appear worldwide; however, as we showed above, signatures that match on connections in only some places or some time intervals stick out. </p><p>Similarly, we expect certain characteristics in contiguous packets within a connection to also stick out, and indeed they do, namely in the <a href="https://datatracker.ietf.org/doc/html/rfc6864"><u>IP-ID</u></a> and <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/"><u>TTL</u></a> fields in the IP header.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CVNobND5JzSYY4rszgem9/6b533f5a3681cf8c4f05d6a6179c0630/Screenshot_2024-09-04_at_2.57.36_PM.png" />
          </figure><p><b>The IP-ID (IP identification) field</b> in the IPv4 packet header is typically initialized to a per-connection value and incremented by the client for each subsequent packet it sends. In other words, we expect the change in IP-ID value in subsequent packets sent from the same client to be small. Thus, large changes in IP-ID value between subsequent packets are unexpected in normal connections, and could be used as an indicator of packet injection. This is exactly what we see in the figure above, marked (a), for a select set of signatures.</p><p><b>The Time-to-Live (TTL) field </b>offers another clue for detecting injected packets. Here, too, most client implementations use the same <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/"><u>TTL</u></a> for each packet sent on a connection, usually set initially to either 64 or 128 and decremented by every router along the packet’s route. If a RST packet does not have the same TTL as other packets in a connection, it’s a strong signal that it was injected. Looking at the figure above, marked (b), we can see marked differences in TTLs, indicating the presence of a third party. </p><p>We strongly encourage readers to read the <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>underlying details</u></a> of how and why these signals make sense. Connections with high maximum IP-ID and TTL differences give positive evidence for traffic tampering, but the <i>absence</i> of these signals does not necessarily mean that tampering did not occur, as some middleboxes are known to <a href="https://censoredplanet.org/assets/censorship-devices.pdf"><u>copy IP header values</u></a> including the IP-ID and TTL from the original packets in the connection. 
Our interest is in responsibly ensuring our dataset has indicative value.</p><p><b>There is one last caveat: </b>While our tampering signatures capture many forms of tampering, there is still potential for<b> false negatives</b> for connections that <i>were</i> tampered with but escaped our detection. Some examples are connections terminated after the first 10 packets (since we don’t sample that far), FIN injection (a less common alternative to RST injection), or connections where all packets are dropped before reaching Cloudflare’s servers. Our signatures also do not apply to <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/"><u>UDP-based protocols</u></a> such as QUIC. We hope to expand the scope of our connection tampering signatures in the future.</p>
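<p>The IP-ID and TTL heuristics above reduce to simple per-connection aggregates. A toy version of ours, with made-up packet records and illustrative thresholds, not the production values:</p>

```python
def injection_evidence(packets: list[dict],
                       ipid_jump: int = 5000,
                       ttl_gap: int = 5) -> dict:
    """Score a connection for signs of an injected packet using
    the two aggregates described above. `packets` is the ordered
    stream of client-side packets, each with `ip_id` and `ttl`
    fields. Thresholds are illustrative."""
    # Largest IP-ID change between consecutive packets.
    max_ipid_delta = max(
        (abs(b["ip_id"] - a["ip_id"]) for a, b in zip(packets, packets[1:])),
        default=0)
    # Spread of TTL values across the whole connection.
    ttl_spread = max(p["ttl"] for p in packets) - min(p["ttl"] for p in packets)
    return {
        "ipid_suspicious": max_ipid_delta >= ipid_jump,
        "ttl_suspicious": ttl_spread >= ttl_gap,
    }

# A RST whose IP-ID and TTL break the connection's pattern:
stream = [
    {"flags": "SYN", "ip_id": 100,   "ttl": 57},
    {"flags": "ACK", "ip_id": 101,   "ttl": 57},
    {"flags": "RST", "ip_id": 31337, "ttl": 44},   # likely injected
]
```

<p>As the caveat above notes, a middlebox that copies header values defeats both checks, so the absence of these signals is not evidence of absence.</p>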
    <div>
      <h2>Case studies</h2>
      <a href="#case-studies">
        
      </a>
    </div>
    <p>To get a sense of how this looks on the Cloudflare network, below we provide further examples of TCP connection anomalies that are consistent with <a href="https://ooni.org/reports/"><u>OONI reports</u></a> of connection tampering.</p><p>For additional insights from this specific study, see the full technical <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>paper</u></a> and <a href="https://www.youtube.com/watch?v=DyDv3MHICto&amp;list=PLU4C2_kotFP2JAkoL6pcgbb52f6GIJJd7&amp;ab_channel=ACMSIGCOMM"><u>presentation</u></a>. For other regions and networks not listed below, please see the <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>new data on Radar</u></a>.</p>
    <div>
      <h3>Pakistan</h3>
      <a href="#pakistan">
        
      </a>
    </div>
    <p>Reporting from <a href="https://tribune.com.pk/story/2491142/pakistan-should-be-transparent-about-internet-disruptions-surveillance-amnesty-international"><u>inside</u></a> Pakistan suggests changes in users’ Internet experience throughout August 2024. Taking a look at a two-week interval in early August, there is a significant shift in Post-ACK connection anomalies starting on August 9, 2024.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38SPF52mjzVCDvEnO63hhu/c1a5e7b57d3f63bcf12349dbe7e1b377/2544-12.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/security-and-attacks/pk?dateStart=2024-08-03&amp;dateEnd=2024-08-17#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p><p>The August 9 Post-ACK spike can be almost entirely attributed to <a href="https://radar.cloudflare.com/as56167"><u>AS56167 (Pak Telecom Mobile Limited)</u></a>, shown below in the first image, where Post-ACK anomalies jumped from under 5% to upwards of 70% of all connections, and has remained high since. Correspondingly, we see a significant reduction in the number of successful HTTP requests reaching Cloudflare’s network from clients in AS56167, below in the second image, which provides evidence that connections are being disrupted. This Pakistan example reinforces the importance of corroborating reports and observations, discussed in more detail in the <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>Radar dataset release</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6y0WGCYjlvOhOurny2rN7p/6afbbd35e037beac32728f0e93b1fe17/2544-13.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/security-and-attacks/AS56167?dateStart=2024-08-03&amp;dateEnd=2024-08-17#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2aFE80hJQ5f6uJprUYFmEt/b79ffa0e1c8e30fcb57d9a8bb97c41b3/2544-14.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/traffic/AS56167?dateStart=2024-08-03&amp;dateEnd=2024-08-17#http-traffic"><sub><i>Cloudflare Radar</i></sub></a></p>
    <div>
      <h3>Tanzania</h3>
      <a href="#tanzania">
        
      </a>
    </div>
    <p>An <a href="https://ooni.org/post/2024-tanzania-lgbtiq-censorship-and-other-targeted-blocks/"><u>OONI report</u></a> from April 2024 discusses targeted connection tampering in <a href="https://radar.cloudflare.com/tz"><u>Tanzania</u></a>. The report states that this blocking is observed on the client side as connection timeouts after the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>Client Hello</u></a> message during the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS handshake</u></a>, indicating that a middlebox is dropping the packet containing the Client Hello message. On the server side, connections tampered with in this way would appear as Post-ACK timeouts, as the PSH packet containing the Client Hello message never reaches the server.</p><p>Looking at the Post-ACK data represented in the light-blue portion, below, we find matching evidence: close to 30% of all new TCP connections from Tanzania appear as Post-ACK anomalies. Breaking this down further (not shown in the plots below), approximately one third is due to timeouts, consistent with the OONI report above. The remainder is due to RSTs.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2IVvmuECBkPhL8xS7fdOcV/203ca4916a474d2c06f02d6a3f04d006/2544-15.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/security-and-attacks/tz?dateStart=2024-07-24&amp;dateEnd=2024-08-20#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p><p></p>
    <div>
      <h3>Ethiopia</h3>
      <a href="#ethiopia">
        
      </a>
    </div>
    <p><a href="https://radar.cloudflare.com/et"><u>Ethiopia</u></a> is another location with <a href="https://ooni.org/post/2023-ethiopia-blocks-social-media/"><u>previously-reported</u></a> connection tampering. Consistent with this, we see elevated rates of Post-PSH TCP anomalies across networks in Ethiopia. Our internal data shows that the majority of Post-PSH anomalies in this case are due to RSTs, although timeouts are also prevalent.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YTj7Kypiu0nSjmvZ00Jjo/b3306284a04a67d91351da645b6332f7/2544-16.png" />
          </figure><p><sub><i>via </i></sub><a href="https://radar.cloudflare.com/security-and-attacks/et?dateStart=2024-07-24&amp;dateEnd=2024-08-20#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p><p>The majority of traffic arriving at Cloudflare’s servers from IP addresses geolocated in Ethiopia is from <a href="https://radar.cloudflare.com/as24757"><u>AS24757 (Ethio Telecom)</u></a>, shown below in the first image, so it is perhaps unsurprising that its data closely matches the country-wide distribution of connection anomalies. The proportion of Post-PSH connections originating from <a href="https://radar.cloudflare.com/as328988"><u>AS328988 (SAFARICOM TELECOMMUNICATIONS ETHIOPIA PLC)</u></a>, shown below in the second image, is higher, accounting for over 33% of all connections from that network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/43sDPBfmz2u2JIRIPN0htd/6e0825edaef48ba73df9180a30411231/2544-17.png" />
          </figure><p><sub>via </sub><a href="https://radar.cloudflare.com/security-and-attacks/AS24757?dateStart=2024-07-24&amp;dateEnd=2024-08-20#tcp-resets-and-timeouts"><sub>Cloudflare Radar</sub></a></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5fyguCauAPqdXYXkCvhjAl/7e86f97f2b71a73cbf3d8c95d27a92b1/2544-18.png" />
          </figure><p><sub>via </sub><a href="https://radar.cloudflare.com/security-and-attacks/AS328988?dateStart=2024-07-24&amp;dateEnd=2024-08-20#tcp-resets-and-timeouts"><sub><i>Cloudflare Radar</i></sub></a></p>
    <div>
      <h2>Reflecting on the present to promote a resilient future</h2>
      <a href="#reflecting-on-the-present-to-promote-a-resilient-future">
        
      </a>
    </div>
    <p>Connection tampering is a blocking mechanism that is deployed in various forms throughout the Internet. Although we have developed ways to help detect and understand it globally, the experience is just as individual as an interrupted phone call.</p><p>Connection tampering is also made possible <i>by accident</i>: it works because domain names are visible in cleartext on the wire. But it may not always be this way. For example, <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/"><u>Encrypted Client Hello (ECH)</u></a> is an emerging building block that encrypts the SNI field. </p><p>We’ll continue to look for ways to talk about network activity and disruption, all to foster wider conversations. Check out the newest additions about connection anomalies on <a href="https://radar.cloudflare.com/security-and-attacks#tcp-resets-and-timeouts"><u>Cloudflare Radar</u></a> and the <a href="https://blog.cloudflare.com/tcp-resets-timeouts"><u>corresponding blog post</u></a>, as well as the <a href="https://research.cloudflare.com/publications/SundaraRaman2023/"><u>peer-reviewed technical paper</u></a> and its <a href="https://youtu.be/RD73IgzQMFo?si=OWvNnlNNLalbhygV&amp;t=2984"><u>15-minute summary talk</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[Internet Quality]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Trends]]></category>
            <category><![CDATA[Consumer Services]]></category>
            <guid isPermaLink="false">2PQ5yUYNh250JZfC8YuElJ</guid>
            <dc:creator>Ram Sundara Raman</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[The unintended consequences of blocking IP addresses]]></title>
            <link>https://blog.cloudflare.com/consequences-of-ip-blocking/</link>
            <pubDate>Fri, 16 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ A discussion about IP blocking: why we see it, what it is, what it does, who it affects, and why it’s such a problematic way to address content online. ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XRs4hep0bF2DPlSc0NJPe/9c4f7666cc9570882d0b18fea2823124/image1-53.png" />
            
            </figure><p>In late August 2022, Cloudflare’s customer support team began to receive complaints about sites on our network being down in Austria. Our team immediately went into action to try to identify the source of what looked from the outside like a partial Internet outage in Austria. We quickly realized that it was an issue with local Austrian Internet Service Providers.</p><p>But the service disruption wasn’t the result of a technical problem. As we later learned from <a href="https://www.derstandard.de/story/2000138619757/ueberzogene-netzsperre-sorgt-fuer-probleme-im-oesterreichischen-internet">media reports</a>, what we were seeing was the result of a court order. Without any notice to Cloudflare, an Austrian court had ordered Austrian Internet Service Providers (ISPs) to block 11 of Cloudflare’s IP addresses.</p><p>In an attempt to block 14 websites that copyright holders argued were violating copyright, the court-ordered IP block rendered thousands of websites inaccessible to ordinary Internet users in Austria over a two-day period. What did the thousands of other sites do wrong? Nothing. They were a temporary casualty of the failure to build legal remedies and systems that reflect the Internet’s actual architecture.</p><p>Today, we are going to dive into a discussion of IP blocking: why we see it, what it is, what it does, who it affects, and why it’s such a problematic way to address content online.</p>
    <div>
      <h2>Collateral effects, large and small</h2>
      <a href="#collateral-effects-large-and-small">
        
      </a>
    </div>
    <p>The craziest thing is that this type of blocking happens on a regular basis, all around the world. But unless that blocking happens at the scale of what happened in Austria, or someone decides to highlight it, it is typically invisible to the outside world. Even Cloudflare, with deep technical expertise and understanding about how blocking works, can’t routinely see when an IP address is blocked.</p><p>For Internet users, it’s even more opaque. They generally don’t know why they can’t connect to a particular website, where the connection problem is coming from, or how to address it. They simply know they cannot access the site they were trying to visit. And that can make it challenging to document when sites have become inaccessible because of IP address blocking.</p><p>Blocking practices are also widespread. In its Freedom on the Net report, Freedom House recently <a href="https://freedomhouse.org/report/freedom-net/2022/key-internet-controls">reported</a> that 40 of the 70 countries it examined -- ranging from Russia, Iran, and Egypt to Western democracies like the United Kingdom and Germany -- engaged in some form of website blocking. Although the report doesn’t delve into exactly how those countries block, many of them use forms of IP blocking, with the same potential for a partial Internet shutdown of the kind we saw in Austria.</p><p>Although it can be challenging to assess the amount of collateral damage from IP blocking, we do have examples where organizations have attempted to quantify it. In conjunction with a case before the European Court of Human Rights, the European Information Society Institute, a Slovakia-based nonprofit, reviewed Russia’s regime for website blocking in 2017. Russia exclusively used IP addresses to block content. 
The European Information Society Institute concluded that IP blocking led to “<i>collateral website blocking on a massive scale</i>” and noted that as of June 28, 2017, “6,522,629 Internet resources had been blocked in Russia, of which 6,335,850 – or 97% – had been blocked collaterally, that is to say, without legal justification.”</p><p>In the UK, overbroad blocking prompted the non-profit Open Rights Group to create the website <a href="https://www.blocked.org.uk/">Blocked.org.uk</a>. The website has a tool enabling users and site owners to report on overblocking and request that ISPs remove blocks. The group also has hundreds of individual stories about the effect of blocking on those whose websites were inappropriately blocked, from charities to small business owners. Although it’s not always clear what blocking methods are being used, the fact that the site is necessary at all conveys the amount of overblocking. Imagine a dressmaker, watchmaker or car dealer looking to advertise their services and potentially gain new customers with their website. That doesn’t work if local users can’t access the site.</p><p>One reaction might be, “Well, just make sure there are no restricted sites sharing an address with unrestricted sites.” But as we’ll discuss in more detail, this ignores the large difference between the number of possible <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain names</a> and the number of available IP addresses, and runs counter to the very technical specifications that empower the Internet. Moreover, the definitions of restricted and unrestricted differ across nations, communities, and organizations. Even if it were possible to know all the restrictions, the designs of the protocols -- of the Internet, itself -- mean that it is simply infeasible, if not impossible, to satisfy every agency’s constraints.</p>
    <div>
      <h2>Legal and human rights concerns</h2>
      <a href="#legal-and-human-rights-concerns">
        
      </a>
    </div>
    <p>Overblocking websites is not only a problem for users; it has legal implications. Because of the effect it can have on ordinary citizens looking to exercise their rights online, government entities (both courts and regulatory bodies) have a legal obligation to make sure that their orders are necessary and proportionate, and don’t unnecessarily affect those who are not contributing to the harm.</p><p>It would be hard to imagine, for example, that a court in response to alleged wrongdoing would blindly issue a search warrant or an order based solely on a street address without caring if that address was for a single family home, a six-unit condo building, or a high rise with hundreds of separate units. But those sorts of practices with IP addresses appear to be rampant.</p><p>In 2020, the European Court of Human Rights (ECHR) - the court overseeing the implementation of the Council of Europe’s European Convention on Human Rights - considered a case involving a website that was blocked in Russia not because it had been targeted by the Russian government, but because it shared an IP address with a blocked website. The website owner brought suit over the block. The ECHR concluded that the indiscriminate blocking was impermissible, ruling that the block on the lawful content of the site “<i>amounts to arbitrary interference with the rights of owners of such websites</i>.” In other words, the ECHR ruled that it was improper for a government to issue orders that resulted in the blocking of sites that were not targeted.</p>
    <div>
      <h2>Using Internet infrastructure to address content challenges</h2>
      <a href="#using-internet-infrastructure-to-address-content-challenges">
        
      </a>
    </div>
    <p>Ordinary Internet users don’t think a lot about how the content they are trying to access online is delivered to them. They assume that when they type a domain name into their browser, the content will automatically pop up. And if it doesn’t, they tend to assume the website itself is having problems unless their entire Internet connection seems to be broken. But those basic assumptions ignore the reality that connections to a website are often used to limit access to content online.</p><p>Why do countries block connections to websites? Maybe they want to prevent their own citizens from accessing what they believe to be illegal content - like online gambling or explicit material - that is permissible elsewhere in the world. Maybe they want to stop the viewing of a foreign news source that they believe to be primarily disinformation. Or maybe they want to support copyright holders seeking to block access to a website to limit viewing of content that they believe infringes their intellectual property.</p><p>To be clear, <b>blocking access is not the same thing as removing content from the Internet</b>. There are a variety of legal obligations and authorities designed to permit actual removal of illegal content. Indeed, the legal expectation in many countries is that blocking is a matter of last resort, after attempts have been made to remove content at the source.</p><p>Blocking just prevents certain viewers - those whose Internet access depends on the ISP that is doing the blocking - from being able to access websites. The site itself continues to exist online and is accessible by everyone else. But when the content originates from a different place and can’t be easily removed, a country may see blocking as its best or only approach.</p><p>We recognize the concerns that sometimes drive countries to implement blocking. 
But fundamentally, we believe it’s important for users to know when the websites they are trying to access have been blocked, and, to the extent possible, who has blocked them from view and why. And it’s critical that any restrictions on content should be as limited as possible to address the harm, to avoid infringing on the rights of others.</p><p>Brute force IP address blocking doesn’t allow for those things. It’s fully opaque to Internet users. The practice has unintended, unavoidable consequences on other content. And the very fabric of the Internet means that there is no good way to identify what other websites might be affected either before or during an IP block.</p><p>To understand what happened in Austria and what happens in many other countries around the world that seek to block content with the bluntness of IP addresses, we have to understand what is going on behind the scenes. That means diving into some technical details.</p>
    <div>
      <h2>Identity is attached to names, never addresses</h2>
      <a href="#identity-is-attached-to-names-never-addresses">
        
      </a>
    </div>
    <p>Before we even get started describing the technical realities of blocking, it’s important to stress that the first and best option to deal with content is at the source. A website owner or hosting provider has the option of removing content at a granular level, without having to take down an entire website. On the more technical side, a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">domain name registrar</a> or registry can potentially withdraw a domain name, and therefore a website, from the Internet altogether.</p><p>But how do you block access to a website, if for whatever reason the content owner or content source is unable or unwilling to remove it from the Internet? There are only three possible control points.</p><p>The first is via the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">Domain Name System (DNS)</a>, which translates domain names into IP addresses so that the site can be found. Instead of returning a valid IP address for a domain name, the DNS resolver could lie and respond with a code, NXDOMAIN, meaning that “there is no such name.” A better approach would be to use one of the honest error numbers <a href="https://datatracker.ietf.org/doc/rfc8914/">standardized in 2020</a>, including error 15 for blocked, error 16 for censored, 17 for filtered, or 18 for prohibited, although these are not widely used currently.</p><p>Interestingly, the precision and effectiveness of DNS as a control point depend on whether the DNS resolver is private or public. Private or ‘internal’ DNS resolvers are operated by ISPs and enterprise environments for their own known clients, which means that operators can be precise in applying content restrictions. By contrast, that level of precision is unavailable to open or public resolvers, not least because routing and addressing are global and ever-changing on the Internet map, in stark contrast to addresses and routes on a fixed postal or street map. 
For example, private DNS resolvers may be able to block access to websites within specified geographic regions with at least some level of accuracy in a way that public DNS resolvers cannot, which becomes profoundly important given the disparate (and inconsistent) blocking regimes around the world.</p><p>The second approach is to block individual connection requests to a restricted domain name. When a user or client wants to visit a website, a connection is initiated from the client to a server <i>name</i>, i.e. the domain name. If a network or on-path device is able to observe the server name, then the connection can be terminated. Unlike DNS, there is no mechanism to communicate to the user that access to the server name was blocked, or why.</p><p>The third approach is to block access to an IP address where the domain name can be found. This is a bit like blocking the delivery of all mail to a physical address. Consider, for example, if that address is a skyscraper with its many unrelated and independent occupants. Halting delivery of mail to the address of the skyscraper causes collateral damage by invariably affecting all parties at that address. IP addresses work the same way.</p><p>Notably, the IP address is the only one of the three options that has no attachment to the domain name. The website domain name is not required for routing and delivery of data packets; in fact it is fully ignored. A website can be available on any IP address, or even on many IP addresses, simultaneously. And the set of IP addresses that a website is on can change at any time. 
The set of IP addresses cannot <i>definitively</i> be known by querying DNS, which has been able to return any valid address at any time for any reason, since <a href="https://datatracker.ietf.org/doc/rfc1794/">1995</a>.</p><p>The idea that an address is representative of an identity is anathema to the Internet’s design, because the decoupling of address from name is deeply embedded in the Internet standards and protocols, as is explained next.</p>
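<p>The “honest” extended DNS errors mentioned above for the DNS control point can be made concrete with a small sketch. This is just a lookup table for the relevant RFC 8914 code points, not a DNS client; the helper function is purely illustrative:</p>

```python
# The RFC 8914 Extended DNS Error (EDE) code points relevant to blocking,
# which let a resolver say honestly *why* a name was not resolved.
# Illustrative lookup table only, not a DNS client.
EDE_CODES = {
    15: "Blocked",     # forbidden by the operator's own internal policy
    16: "Censored",    # forbidden due to an externally imposed requirement
    17: "Filtered",    # forbidden at the client's own request (opt-in filtering)
    18: "Prohibited",  # the client is not authorized to use this server
}

def describe_extended_error(code: int) -> str:
    """Return a human-readable label for an EDE code seen in a DNS response."""
    return EDE_CODES.get(code, f"unrecognized EDE code {code}")
```

<p>A resolver that attaches, say, code 16 to its response tells the client that the name was censored rather than nonexistent -- precisely the transparency that an NXDOMAIN lie denies.</p>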
    <div>
      <h2>The Internet is a set of protocols, not a policy or perspective</h2>
      <a href="#the-internet-is-a-set-of-protocols-not-a-policy-or-perspective">
        
      </a>
    </div>
    <p>Many people still incorrectly assume that an IP address represents a single website. We’ve previously <a href="/addressing-agility/">stated</a> that the association between names and addresses is understandable given that the earliest connected components of the Internet appeared as one computer, one interface, one address, and one name. This one-to-one association was an artifact of the ecosystem in which the Internet Protocol was deployed, and satisfied the needs of the time.</p><p>Despite the one-to-one naming practice of the early Internet, it has always been possible to assign more than one name to a server (or ‘host’). For example, a server was (and is still) often configured with names to reflect its service offerings such as <code>mail.example.com</code> and <code>www.example.com</code>, but these shared a base domain name. There were few reasons to host entirely different domain names on one server until the need arose to colocate unrelated websites. That practice was made easier in 1997 by the <b>Host</b> header in <a href="https://datatracker.ietf.org/doc/rfc2068/">HTTP/1.1</a>, a capability later carried into TLS by the SNI field of a <a href="https://datatracker.ietf.org/doc/rfc3546/">TLS extension</a> in 2003.</p><p>Throughout these changes, the Internet Protocol and, separately, the DNS protocol, have not only kept pace, but have remained fundamentally unchanged. They are the very reason that the Internet has been able to scale and evolve, because they are about addresses, reachability, and arbitrary name-to-IP-address relationships.</p><p>The designs of IP and DNS are also entirely independent, which only reinforces that names are separate from addresses. A closer inspection of the protocols’ design elements illuminates the misperceptions behind policies that lead to today's common practice of <a href="https://www.cloudflare.com/learning/access-management/what-is-access-control/">controlling access</a> to content by blocking IP addresses.</p>
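<p>The <b>Host</b>-header mechanism described above can be sketched as a tiny dispatch table: many unrelated names share one address, and the server tells them apart only by the name the client sends. The hostnames and pages below are hypothetical:</p>

```python
# Sketch of name-based virtual hosting: one server (one IP address) serving
# many unrelated websites, distinguished only by the HTTP/1.1 Host header.
# Hostnames and page contents are made up for illustration.
SITES = {
    "www.example.com":  "<h1>Example homepage</h1>",
    "mail.example.com": "<h1>Example webmail</h1>",
    "shop.example.org": "<h1>A completely unrelated shop</h1>",
}

def respond(host_header: str) -> str:
    """Choose which website to serve based solely on the Host header."""
    body = SITES.get(host_header.strip().lower())
    if body is None:
        return "HTTP/1.1 421 Misdirected Request\r\n\r\n"
    return f"HTTP/1.1 200 OK\r\n\r\n{body}"
```

<p>Blocking the single shared IP address would make every site in the table unreachable at once, even if only one of them were the intended target.</p>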
    <div>
      <h3>By design, IP is for reachability and nothing else</h3>
      <a href="#by-design-ip-is-for-reachability-and-nothing-else">
        
      </a>
    </div>
    <p>Much like large public civil engineering projects rely on building codes and best practice, the Internet is built using a set of <i>open</i> standards and specifications informed by experience and agreed by international consensus. The Internet standards that connect hardware and applications are published by the Internet Engineering Task Force (<a href="https://www.ietf.org/">IETF</a>) in the form of “Requests for Comment” or <a href="https://www.ietf.org/standards/rfcs/">RFCs</a> -- so named not to suggest incompleteness, but to reflect that standards must be able to evolve with knowledge and experience. The IETF and its RFCs are cemented in the very fabric of communications; the first, RFC 1, was published in 1969. The Internet Protocol (IP) specification reached <a href="https://datatracker.ietf.org/doc/rfc791/">RFC status</a> in 1981.</p><p>Alongside the standards organizations, the Internet’s success has been helped by a core idea known as the end-to-end (e2e) principle, <a href="https://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf">codified</a> also in 1981, based on years of trial and error <a href="https://en.wikipedia.org/wiki/End-to-end_principle">experience</a>. The end-to-end principle is a powerful abstraction that, despite taking many forms, manifests a core notion of the Internet Protocol specification: the network’s only responsibility is to establish reachability, and every other possible feature has a cost or a risk.</p><p>The idea of “reachability” in the Internet Protocol is also enshrined in the design of IP addresses themselves. Looking at the Internet Protocol specification, <a href="https://www.rfc-editor.org/rfc/rfc791">RFC 791</a>, the following excerpt from Section 2.3 is explicit about IP addresses having no association with names, interfaces, or anything else.</p>
            <pre><code>Addressing

    A distinction is made between names, addresses, and routes [4].   A
    name indicates what we seek.  An address indicates where it is.  A
    route indicates how to get there.  The internet protocol deals
    primarily with addresses.  It is the task of higher level (i.e.,
    host-to-host or application) protocols to make the mapping from
    names to addresses.   The internet module maps internet addresses to
    local net addresses.  It is the task of lower level (i.e., local net
    or gateways) procedures to make the mapping from local net addresses
    to routes.
                            [ RFC 791, 1981 ]</code></pre>
            <p>Just like postal addresses for skyscrapers in the physical world, IP addresses are no more than street addresses written on a piece of paper. And just like a street address on paper, one can never be confident about the entities or organizations that exist behind an IP address. In a network like Cloudflare’s, any single IP address represents <a href="/cloudflare-architecture-and-how-bpf-eats-the-world/">thousands of servers</a>, and can have even more websites and services -- in some cases numbering into the <a href="/addressing-agility/">millions</a> -- expressly because the Internet Protocol is designed to enable it.</p><p>Here’s an interesting question: could we, or any content service provider, ensure that every IP address matches to one and only one name? The answer is an unequivocal <b>no</b>, and here too, because of a protocol design -- in this case, DNS.</p>
    <div>
      <h3>The number of names in DNS always exceeds the available addresses</h3>
      <a href="#the-number-of-names-in-dns-always-exceeds-the-available-addresses">
        
      </a>
    </div>
    <p>A one-to-one relationship between names and addresses is impossible given the Internet specifications for the same reasons that it is infeasible in the physical world. Ignore for a moment that people and organizations can change addresses. Fundamentally, the number of people and organizations on the planet exceeds the number of postal addresses. We not only want, but <i>need</i> for the Internet to accommodate more names than addresses.</p><p>The difference in magnitude between names and addresses is also codified in the specifications. IPv4 addresses are 32 bits, and IPv6 addresses are 128 bits. A domain name that can be queried by DNS may be as long as 253 octets, or 2,024 bits (from Section 2.3.4 in <a href="https://datatracker.ietf.org/doc/rfc1035/">RFC 1035</a>, published 1987). The table below helps to put those differences into perspective:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6owhjh505J8A9SrL5gznHG/499625dd20a849da4d72727fedd2da8d/Screenshot-2022-12-16-at-13.02.04.png" />
            
            </figure><p>On November 15, 2022, the United Nations announced the population of the Earth surpassed eight billion people. Intuitively, we know that there cannot be anywhere near as many postal addresses. The number of possible names, on the planet and on the Internet alike, does and must exceed the number of available addresses.</p>
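<p>The gap between the address spaces and the name space in the table above can be checked with a little arithmetic, using the bit widths cited earlier:</p>

```python
# Back-of-the-envelope comparison of address space versus name space sizes,
# from the bit widths in the specifications cited above.
ipv4_addresses = 2 ** 32    # 32-bit IPv4 addresses: about 4.3 billion
ipv6_addresses = 2 ** 128   # 128-bit IPv6 addresses
name_space_bits = 253 * 8   # a DNS name may be up to 253 octets long
assert name_space_bits == 2024

# Syntax rules shrink the usable name space considerably, but an identifier
# space of 2,024 bits still dwarfs both address spaces.
assert 2 ** name_space_bits > ipv6_addresses > ipv4_addresses
print(f"{ipv4_addresses:,}")  # 4,294,967,296
```

<p>This is the arithmetic reason that names must share addresses: there are simply far more possible names than there will ever be addresses to put them on.</p>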
    <div>
      <h2>The proof is in the <s>pudding</s> names!</h2>
      <a href="#the-proof-is-in-the-pudding-names">
        
      </a>
    </div>
    <p>Now that those two principles from the international standards are understood -- that IP addresses and domain names serve distinct purposes, and that there is no one-to-one relationship between the two -- an examination of a recent case of content blocking using IP addresses shows why the practice is problematic. Take, for example, the IP blocking incident in Austria in late August 2022. The goal was to restrict access to 14 target domains by blocking 11 IP addresses (source: RTR Telekom Post via the <a href="https://web.archive.org/web/20220828220559/http://netzsperre.liwest.at/">Internet Archive</a>) -- the mismatch between those two numbers should have been a warning flag that IP blocking might not have the desired effect.</p><p>Analogies and international standards may explain the reasons that IP blocking should be avoided, but we can see the scale of the problem by looking at Internet-scale data. To better understand and explain the severity of IP blocking, we decided to generate a global view of domain names and IP addresses (thanks are due to a PhD research intern, Sudheesh Singanamalla, for the effort). In September 2022, we used the authoritative zone files for the <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">top-level domains (TLDs)</a> .com, .net, .info, and <a href="https://www.cloudflare.com/application-services/products/registrar/buy-org-domains/">.org</a>, together with top-1M website lists, to find a total of 255,315,270 unique names. We then queried DNS from each of five regions and recorded the set of IP addresses returned. The table below summarizes our findings:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5YeOt63WPvv2WY1hYcyNWt/15fad8223e223da6ea17157ad335294f/image3-23.png" />
            
            </figure><p>The table above makes clear that it takes no more than 10.7 million addresses to reach 255,315,270 names from any region on the planet, and the total set of IP addresses for those names from everywhere is about 16 million -- the ratio of names to IP addresses is nearly 24x in Europe and 16x globally.</p><p>There is one more worthwhile detail about the numbers above: the IP addresses are the combined totals of both IPv4 and IPv6 addresses, meaning that far fewer addresses are needed to reach all 255M websites.</p><p>We’ve also inspected the data a few different ways to find some interesting observations. For example, the figure below shows the cumulative distribution (CDF) of the proportion of websites that can be visited with each additional IP address. On the y-axis is the proportion of websites that can be reached given some number of IP addresses. On the x-axis, the 16M IP addresses are ranked from the most domains on the left to the fewest on the right. Note that any IP address in this set appeared in a DNS response and so must have at least one domain name; the most heavily shared addresses in the set host domains numbering in the tens of millions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7r9Dj9LCN9K2w3vWyqRTKR/efef1916d727b42ab0baa6ff772b6ada/image2-37.png" />
            
            </figure><p>Looking at the CDF, a few eye-watering observations emerge:</p><ul><li><p>Fewer than 10 IP addresses are needed to reach 20%, or approximately 51 million, of the domains in the set;</p></li><li><p>100 IPs are enough to reach almost 50% of domains;</p></li><li><p>1000 IPs are enough to reach 60% of domains;</p></li><li><p>10,000 IPs are enough to reach 80%, or about 204 million, of the domains.</p></li></ul><p>In fact, of the total set of 16 million addresses, fewer than half -- 7.1M (43.7%) -- had exactly one name. On this ‘one’ point we must be additionally clear: we are unable to ascertain that there was only one name and no others on those addresses, because there are many more domain names than those contained in .com, .org, .info, and .net -- there might very well be other names on those addresses.</p><p>Not only can a number of domains share a single IP address; the IP addresses serving any of those domains may also change over time. Changing IP addresses periodically can help improve security, performance, and reliability for websites. One common example used by many operators is load balancing. This means DNS queries may return different IP addresses over time, or in different places, for the same websites. This is a further, and separate, reason why blocking based on IP addresses will not serve its intended purpose over time.</p><p>Ultimately, <b>there is no reliable way to know the number of domains on an IP address</b> without inspecting all names in the DNS, from every location on the planet, at every moment in time -- an entirely infeasible proposition.</p><p>Any action on an IP address must, by the very definitions of the protocols that rule and empower the Internet, be expected to have collateral effects.</p>
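<p>The shape of that CDF can be reproduced in miniature. Given a mapping from IP address to the number of domains that resolve to it (the numbers below are toy stand-ins, not the real measurement), rank the addresses by domain count and accumulate the share of domains covered:</p>

```python
# Miniature version of the CDF computation: rank IP addresses by the number
# of domains that resolve to them, then accumulate the share of all domains
# reachable. The mapping below is toy data, not the real measurement.
domains_per_ip = {
    "192.0.2.1": 8_000_000,  # a heavily shared address (e.g. a CDN)
    "192.0.2.2": 1_500_000,
    "192.0.2.3": 400_000,
    "192.0.2.4": 1,          # a dedicated address with a single name
}

total = sum(domains_per_ip.values())
covered = 0
cdf = []
for ip, n in sorted(domains_per_ip.items(), key=lambda kv: kv[1], reverse=True):
    covered += n
    cdf.append((ip, covered / total))
```

<p>In this toy data the single most-shared address already covers over 80% of all domains, mirroring how a handful of real addresses front tens of millions of names -- and why blocking any one of them is so indiscriminate.</p>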
    <div>
      <h2>Lack of transparency with IP blocking</h2>
      <a href="#lack-of-transparency-with-ip-blocking">
        
      </a>
    </div>
    <p>So if we have to expect that the blocking of an IP address will have collateral effects, and it’s generally agreed that it’s inappropriate or even legally impermissible to overblock by blocking IP addresses that have multiple domains on them, why does it still happen? That’s hard to know for sure, so we can only speculate. Sometimes it reflects a lack of technical understanding about the possible effects, particularly from entities like judges who are not technologists. Sometimes governments just ignore the collateral damage - as they do with Internet shutdowns - because they see the blocking as in their interest. And when there is collateral damage, it’s not usually obvious to the outside world, so there can be very little external pressure to have it addressed.</p><p>It’s worth stressing that point. When an IP is blocked, a user just sees a failed connection. They don’t know why the connection failed, or who caused it to fail. On the other side, the server acting on behalf of the website doesn’t even know it’s been blocked until it starts getting complaints about the fact that it is unavailable. There is virtually no transparency or accountability for the overblocking. And it can be challenging, if not impossible, for a website owner to challenge a block or seek redress for being inappropriately blocked.</p><p>Some governments, including <a href="https://www.rtr.at/TKP/was_wir_tun/telekommunikation/weitere-regulierungsthemen/netzneutralitaet/nn_blockings.de.html">Austria</a>, do publish active block lists, which is an important step for transparency. But for all the reasons we’ve discussed, publishing an IP address does not reveal all the sites that may have been blocked unintentionally. And it doesn’t give those affected a means to challenge the overblocking. 
Again, in the physical world, it’s hard to imagine a court order on a skyscraper that wouldn’t be posted on the door, yet we often seem to jump over such due process and notice requirements in virtual space.</p><p>We think talking about the problematic consequences of IP blocking is more important than ever as an increasing number of countries push to block content online. Unfortunately, ISPs often use IP blocks to implement those requirements. Sometimes the ISP doing the blocking is smaller or less technically sophisticated than its larger counterparts, but large ISPs engage in the practice too -- understandably so, because IP blocking takes the least effort and is readily supported by most network equipment.</p><p>And as more and more domains are included on the same number of IP addresses, the problem is only going to get worse.</p>
    <div>
      <h2>Next steps</h2>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>So what can we do?</p><p>We believe the first step is to improve transparency around the use of IP blocking. Although we’re not aware of any comprehensive way to document the collateral damage caused by IP blocking, we believe there are steps we can take to expand awareness of the practice. We are committed to working on new initiatives that highlight those insights, as we’ve done with the Cloudflare Radar Outage Center.</p><p>We also recognize that this is a whole Internet problem, and therefore has to be part of a broader effort. The significant likelihood that blocking by IP address will result in restricting access to a whole series of unrelated (and untargeted) domains should make it a non-starter for everyone. That’s why we’re engaging with civil society partners and like-minded companies to lend their voices to challenge the use of blocking IP addresses as a way of addressing content challenges and to point out collateral damage when they see it.</p><p>To be clear, to address the challenges of illegal content online, countries need legal mechanisms that enable the removal or restriction of content in a rights-respecting way. We believe that addressing the content at the source is almost always the best and the required first step. Laws like the EU’s new Digital Services Act or the Digital Millennium Copyright Act provide tools that can be used to address illegal content at the source, while respecting important due process principles. Governments should focus on building and applying legal mechanisms in ways that least affect other people’s rights, consistent with human rights expectations.</p><p>Very simply, these needs cannot be met by blocking IP addresses.</p><p>We’ll continue to look for new ways to talk about network activity and disruption, particularly when it results in unnecessary limitations on access. Check out <a href="https://radar.cloudflare.com/">Cloudflare Radar</a> for more insights about what we see online.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Internet Performance]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">5W7SrYRBnpDHBDnLNBVqBI</guid>
            <dc:creator>Alissa Starzak</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[“Look, Ma, no probes!” — Characterizing CDNs’ latencies with passive measurement]]></title>
            <link>https://blog.cloudflare.com/cdn-latency-passive-measurement/</link>
            <pubDate>Fri, 15 Oct 2021 13:02:06 GMT</pubDate>
            <description><![CDATA[ In this article we describe an alternative approach to active measurements, which accurately predicts network latencies using only passively collected data. ]]></description>
<content:encoded><![CDATA[ <p>Something that comes up a lot at Cloudflare is how well our network and systems are performing. Like many service providers, we are engaged in a constant process of introspection, evaluating aspects of Cloudflare’s service with respect to customers, within our own network and systems, and, as in a recent <a href="/last-mile-insights/">blog post</a>, from the perspective of clients (such as web browsers). Many of these questions are obvious, but answering them is key to opening paths to new and improved services. The important point here is that it’s relatively straightforward to monitor and assess aspects of our service that we can see or measure directly.</p><p>However, for certain aspects of our performance we may not have access to the necessary data, for a number of reasons. For instance, the data sources may be outside our <a href="https://www.cloudflare.com/learning/access-management/what-is-the-network-perimeter/">network perimeter</a>, or we may avoid collecting certain measurements that would violate the privacy of end users. In particular, the questions below are important to gain a better understanding of our performance, but harder to answer due to limitations in data availability:</p><ul><li><p>How much better (or worse!) are we doing compared to other service providers (<a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a>) by being in certain locations?</p></li><li><p>Can we know, a priori, which new data center locations would bring the greatest improvement, and which might deteriorate service?</p></li></ul><p>The last question is particularly important because it requires the predictive power of synthesising available network measurements to model and <i>infer</i> network features that cannot be directly observed. 
For such predictions to be informative and meaningful, it’s critical to distill our measurements in a way that illuminates the interdependence of network structure, content distribution practices and routing policies, and their impact on network performance.</p>
    <div>
      <h2>Active measurements are inadequate or unavailable</h2>
      <a href="#active-measurements-are-inadequate-or-unavailable">
        
      </a>
    </div>
    <p>Measuring and comparing the performance of Content Distribution Networks (CDN) is critical in terms of understanding the level of service offered to end users, detecting and <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">debugging</a> network issues, and planning the deployment of new network locations. Measuring our own existing infrastructure is relatively straightforward, for example, by collecting <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> and HTTP request statistics received at each one of our data centers.</p><p>But what if we want to understand and evaluate the performance of other networks? Understandably, such data is not shared among networks due to privacy and business concerns. An alternative to data sharing is <i>direct</i> observation with what are called “active measurements.” An example of active measurement is when a measuring tape is used to determine the size of a room — one must take an action to perform the measurement.</p><p>Active measurements from Cloudflare data centers to other CDNs, however, don’t say much about the client experience. The only way to actively measure CDNs is by probing from third-party points of view, namely some type of end-client or globally distributed measurement platform. For example, ping probes might be launched from <a href="https://atlas.ripe.net/">RIPE Atlas clients</a> to many CDNs; alternatively, we might rely on data obtained from <a href="https://en.wikipedia.org/wiki/Real_user_monitoring">Real User Measurements (RUM)</a> services that embed JavaScript requests into various services and pages around the world.</p><p>Active measurements are extremely valuable, and we heavily rely on them to collect a wide range of performance metrics. However, active measurements are not always reliable. Consider ping probes from RIPE Atlas. A collection of direct pings is most assuredly accurate. 
The weakness is that the distribution of its probes is heavily concentrated in Europe and North America, with very sparse coverage of Autonomous Systems (ASes) in other regions (Asia, Africa, South America). Additionally, the distribution of RIPE Atlas probes across ASes does not reflect the distribution of users across ASes; instead, university networks, hosting providers, and enterprises are overrepresented in the probe population.</p><p>Similarly, data from third-party Real User Measurement (RUM) services have weaknesses too. RUM platforms compare CDNs by embedding JavaScript request probes in websites visited by users all over the world. This sounds great, except that the data cannot be validated by outside parties, which is an important aspect of measurement. For example, consider the following chart that shows CloudFront’s median <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">Round-Trip Time (RTT)</a> in Greece as measured by the two leading RUM platforms, <a href="https://www.cedexis.com/">Cedexis</a> and <a href="https://perfops.net/">Perfops</a>. While both platforms follow the same measurement method, their results for the same time period and the same networks differ considerably. If two sets of measurements of the same thing differ, then neither can be relied upon on its own.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nP0ZN4fYPeYJG8XwvFKvb/680b0d37a7568b803169e8f3cd9b0584/2-14.png" />
            
</figure><p>Comparison of Real User Measurements (RUM) from two leading RUM providers, Cedexis and Perfops. While both RUM datasets were collected during the same period for the same location, there is a pronounced disparity between the two measurements, which highlights the sensitivity of RUM data to specific deployment details.</p><p>Ultimately, active measurements are always limited to and by the things that they directly see. Simply relying on existing measurements does not in and of itself translate to predictive models that help assess the potential impact of infrastructure and policy changes on performance. However, when the biases of active measurements are well understood, they can do two things really well: inform our understanding, and help validate models of the world — and we’re going to showcase both as we develop a mechanism for evaluating CDN latencies passively.</p>
    <div>
      <h2>Predicting CDNs’ RTTs with Passive Network Measurements</h2>
      <a href="#predicting-cdns-rtts-with-passive-network-measurements">
        
      </a>
    </div>
    <p>So, how might we measure without active probes? We’ve devised a method to understand latency across CDNs by using our own RTT measurements. In particular, we can use these measurements as a proxy for estimating the latency between clients and other CDNs. With this technique, we can understand latency to locations where CDNs have deployed their infrastructure, as well as show performance improvements in locations where one CDN exists, but others do not. Importantly, we have validated the assumptions shown below through a large-scale traceroute and ping measurement campaign, and we’ve designed this technique so that it can be reproduced by others. After all, independent validation is important across measurement communities.</p>
    <div>
      <h3>Step 1. Predicting Anycast Catchments</h3>
      <a href="#step-1-predicting-anycast-catchments">
        
      </a>
    </div>
    <p>The first step in RTT inference is to predict the anycast catchments, namely predict the set of data centers that will be used by an IP. To this end, we compile the network footprint of each CDN provider whose performance we want to predict, which allows us to predict the CDN location where a request from a particular client AS will arrive. In particular, we collect the following data:</p><ul><li><p>List of ISPs that host off-net server caches of CDNs using the methodology and code developed in <a href="https://dl.acm.org/doi/10.1145/3452296.3472928">Gigis et al. paper</a>.</p></li><li><p>List of on-net city-level data centers according to <a href="https://www.peeringdb.com/">PeeringDB</a>, the network maps in the websites of each individual CDN, and IP geolocation measurements.</p></li><li><p>List of Internet eXchange Points (IXPs) where each CDN is connected, in conjunction with the other ASes that are also members of the same IXPs, from IXP databases such as <a href="https://www.peeringdb.com/">PeeringDB</a>, the <a href="https://ixpdb.euro-ix.net/">Euro-IX IXP-DB</a>, and <a href="https://www.pch.net/">Packet Clearing House</a>.</p></li><li><p>List of CDN interconnections to other ASes extracted from BGP data collected from <a href="http://www.routeviews.org/">RouteViews</a> and <a href="https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris">RIPE RIS</a>.</p></li></ul><p>The figure below shows the IXP connections for nine CDNs, according to the above-mentioned datasets. Cloudflare is present in 258 IXPs, which is 56 IXPs more than Google, the second CDN in the list.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tpNEGB2VefICSiApc6Lm5/21ab3ee5dfd9b520078021344fe15304/4-6.png" />
            
</figure><p>Heatmap of IXP connections per country for 9 major service providers, according to data from PeeringDB, Euro-IX and Packet Clearing House (PCH) for October 2021.</p><p>With the above data, we can compute the possible paths between a client AS and the CDN’s data centers and infer the Anycast Catchments using techniques similar to the recent papers by <a href="https://dl.acm.org/doi/10.1145/3452296.3472935">Zhang et al.</a> and <a href="https://dl.acm.org/doi/10.1145/3341617.3326145">Sermpezis and Kotronis</a>, which predict paths by reproducing Internet inter-domain routing policies. For CDNs that use BGP-based Anycast, we can predict which data center will receive a request based on the possible routing paths between the client and the CDN. For CDNs that rely on DNS-based redirection, we defer the inference: we first predict the latency to each of the CDN’s data centers and then select the lowest-latency one, assuming that CDN operators steer clients toward their fastest site.</p><p>The challenge in predicting paths stems from incomplete knowledge of the varying routing policies implemented by individual ASes, which either host web clients (for instance an ISP or an <a href="https://www.cloudflare.com/learning/network-layer/enterprise-networking/">enterprise network</a>) or lie along the path between the CDN and the client’s network. However, in our prediction problem, we can already partition the IP address space into Anycast Catchment regions (as proposed by <a href="https://dl.acm.org/doi/10.1145/3431832.3431834">Schomp and Al-Dalky</a>) based on our extensive data center footprint, which allows us to reverse engineer the routing decisions of client ASes that are visible to Cloudflare. That’s a lot to unpack, so let’s go through an example.</p>
    <div>
      <h4>Example</h4>
      <a href="#example">
        
      </a>
    </div>
    <p>First, assume that an ISP has two potential paths to a CDN: one over a transit provider and one through a direct peering connection over an IXP, and each path terminates at a different data center, as shown in the figure below. In this example, routing through a transit AS incurs a cost, while IXP peering links do not incur transit exchange costs. Therefore, we would predict that the client ISP uses the path to data center 2 through the IXP.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/p3RWX3R79ZtqBSiNWZDJr/6a626e30dfc0dec4e27c26cce8684740/5-4.png" />
            
            </figure><p>A client ISP may have paths to multiple data centers of a CDN. The prediction of which data center will eventually be used by the client, the so-called anycast catchment, combines both topological and routing policy data.</p>
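<p>The preference described in this example (cost-free peering over paid transit, with shorter AS paths as a tiebreaker, in the style of Gao-Rexford routing policies) can be sketched as follows. The <code>predict_catchment</code> helper and the path data are hypothetical illustrations, not Cloudflare’s actual prediction model:</p>

```python
# Lower preference value wins: settlement-free peering (e.g. over an IXP)
# is preferred to paid transit, mirroring common inter-domain policy.
PREFERENCE = {"peering": 0, "transit": 1}

def predict_catchment(paths):
    """paths: list of (data_center, first_hop_relationship, as_path_len).
    Pick the cheapest relationship first, then the shortest AS path."""
    best = min(paths, key=lambda p: (PREFERENCE[p[1]], p[2]))
    return best[0]

# Hypothetical candidate paths from the client ISP in the figure.
paths = [
    ("DC1", "transit", 2),  # reached via a transit provider (incurs cost)
    ("DC2", "peering", 2),  # reached via direct peering over an IXP
]
print(predict_catchment(paths))  # DC2
```

<p>Here both paths have the same AS-path length, so the peering relationship alone decides the predicted catchment, matching the example’s conclusion.</p>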
    <div>
      <h3>Step 2. Predicting CDN Path Latencies</h3>
      <a href="#step-2-predicting-cdn-path-latencies">
        
      </a>
    </div>
    <p>The next step is to estimate the RTT between the client AS and the corresponding CDN location. To this end, we utilize passive RTT measurements from Cloudflare’s own infrastructure. For each of our data centers, we calculate the median TCP RTT for each IP /24 subnet that sends us HTTP requests. We then assume that a request from a given IP subnet to a data center that is common between Cloudflare and another CDN will have a comparable RTT (our approach focuses on the performance of the anycast network and omits host software differences). This assumption is generally true, because the distance between two endpoints is the dominant factor in determining latency. Note that the median RTT is selected to represent client performance. In contrast, the minimum RTT is an indication of closeness to clients (not expected performance). Our approach to estimating latencies is similar to the work of <a href="https://dl.acm.org/doi/10.1145/1177080.1177092">Madhyastha et al.</a>, who combined the median RTT of existing measurements with a path prediction technique informed by network topologies to infer end-to-end latencies that cannot be measured directly. While that work reported an accuracy of 65% for arbitrary ASes, we focus on CDNs which, on average, have much shorter paths (most clients are within 1 AS hop), making the path prediction problem significantly easier (as noted by <a href="https://dl.acm.org/doi/10.1145/2815675.2815719">Chiu et al.</a> and <a href="https://dl.acm.org/doi/abs/10.1145/2934872.2959053">Singh and Gill</a>). Also note that for the purposes of RTT estimation, it’s important to predict which CDN data center the request from a client IP will use, not the actual hops along the path.</p>
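<p>The per-subnet aggregation described above can be sketched in a few lines. The <code>median_rtt_per_subnet</code> helper and the sample data are illustrative only, not the production pipeline:</p>

```python
import ipaddress
import statistics
from collections import defaultdict

def subnet24(ip: str) -> str:
    """Map an IPv4 address to its covering /24 subnet."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

def median_rtt_per_subnet(samples):
    """samples: iterable of (client_ip, data_center, rtt_ms) tuples.
    Returns {(subnet, data_center): median_rtt_ms}."""
    buckets = defaultdict(list)
    for ip, dc, rtt in samples:
        buckets[(subnet24(ip), dc)].append(rtt)
    return {key: statistics.median(rtts) for key, rtts in buckets.items()}

# Hypothetical per-connection TCP RTT samples from one client /24.
samples = [
    ("203.0.113.10", "Athens", 21.0),
    ("203.0.113.99", "Athens", 23.0),
    ("203.0.113.10", "Athens", 22.0),
    ("203.0.113.10", "Sofia", 42.0),
]
table = median_rtt_per_subnet(samples)
print(table[("203.0.113.0/24", "Athens")])  # 22.0
```

<p>The median (rather than the minimum) is kept per bucket, matching the choice above of representing expected client performance rather than mere proximity.</p>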
    <div>
      <h4>Example</h4>
      <a href="#example">
        
      </a>
    </div>
    <p>Assume that for a certain IP subnet used by AS3379 (a Greek ISP), the following table shows the median RTT for each Cloudflare data center that receives HTTP requests from that subnet. Note that while requests from an IP typically land at the nearest data center (Athens in this case), some requests may arrive at different data centers due to traffic load management and different <a href="https://www.cloudflare.com/plans/">service tiers</a>.</p><table><tr><td><p><b>Data Center</b></p></td><td><p><b>Athens</b></p></td><td><p><b>Sofia</b></p></td><td><p><b>Milan</b></p></td><td><p><b>Frankfurt</b></p></td><td><p><b>Amsterdam</b></p></td></tr><tr><td><p><b>Median RTT</b></p></td><td><p>22 ms</p></td><td><p>42 ms</p></td><td><p>43 ms</p></td><td><p>70 ms</p></td><td><p>75 ms</p></td></tr></table><p>Now assume that another CDN, B, does not have data centers or cache servers in Athens and Sofia, but only in Milan, Frankfurt, and Amsterdam. Based on the topology and colocation data of CDN B, we predict the anycast catchment and find that for AS3379 the data center in Frankfurt will be used. We then use the corresponding latency as an estimate of the median latency between CDN B and the given prefix.</p><p>The above methodology works well because Cloudflare’s global network allows us to collect network measurements between 63,832 ASes (virtually every AS that hosts clients) and 300 cities in 115 different countries where Cloudflare infrastructure is deployed, allowing us to cover the vast majority of regions where other CDNs have deployed infrastructure.</p>
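<p>Using the numbers from the example above, the inference step might look like the sketch below. The <code>estimate_rtt</code> helper is ours, for illustration; it also shows the DNS-redirection case from Step 1, where the lowest-latency common site is assumed:</p>

```python
def estimate_rtt(cloudflare_rtts, cdn_sites, catchment=None):
    """Estimate a client subnet's median RTT to another CDN.

    cloudflare_rtts: {city: median_rtt_ms} measured passively to Cloudflare.
    cdn_sites: cities where the other CDN has a data center or cache.
    catchment: predicted BGP-anycast catchment city; None means the CDN is
    DNS-redirected, so assume it steers clients to its lowest-latency site.
    """
    common = {city: rtt for city, rtt in cloudflare_rtts.items()
              if city in cdn_sites}
    if catchment is not None:
        return common[catchment]
    return min(common.values())

# Median RTTs for AS3379's subnet, from the example table.
rtts = {"Athens": 22, "Sofia": 42, "Milan": 43, "Frankfurt": 70, "Amsterdam": 75}
cdn_b = {"Milan", "Frankfurt", "Amsterdam"}

print(estimate_rtt(rtts, cdn_b, catchment="Frankfurt"))  # 70 (BGP anycast)
print(estimate_rtt(rtts, cdn_b))                         # 43 (DNS-based: Milan)
```

<p>With the predicted Frankfurt catchment, CDN B’s estimated median latency for this prefix is 70 ms, exactly as in the worked example.</p>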
    <div>
      <h3>Step 3. Validation</h3>
      <a href="#step-3-validation">
        
      </a>
    </div>
    <p>To validate the above methodology, we ran a global campaign of traceroute and ping measurements from 9,990 RIPE Atlas probes in 161 different countries (see the <a href="https://atlas.ripe.net/results/maps/network-coverage/">interactive map</a> for real-time data on the geographical distribution of probes).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2jnCkRq97yrEyP7vKSBxwO/1e547add88b468e4aa4063592919cd74/6-2.png" />
            
</figure><p>Geographical distribution of the RIPE Atlas probes used for the validation of our predictions.</p><p>For each target CDN, we selected a destination hostname that is anycasted from all locations, and we configured DNS resolution to run on each measurement probe so that the returned IP corresponds to the probe’s nearest location.</p><p>After the measurements were completed, we first evaluated the Anycast Catchment prediction, namely the prediction of which CDN data center will be used by each RIPE Atlas probe. To this end, we geolocated the destination IP of each completed traceroute and compared the resulting data center against the predicted one. <b>Nearly 90% of our predicted data centers agreed with the measured data centers.</b></p><p>We also validated our RTT predictions. The figure below shows the absolute difference between the measured RTT and the predicted RTT in milliseconds, across all data centers. More than 50% of the predictions have an RTT difference of 3 ms or less, while almost 95% of the predictions have an RTT difference of at most 10 ms.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4xXmpUGVi1i36sQ7ZGaIdE/28181ad1e302fd49d43bced21d36492e/7-1.png" />
            
            </figure><p>Histogram of the absolute difference in milliseconds between the predicted RTT and the RTT measured through the RIPE Atlas measurement campaign.</p>
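<p>The validation metric behind the histogram (the fraction of predictions within a given absolute error) can be computed as in this sketch, shown here on made-up numbers rather than the actual measurement data:</p>

```python
def within_threshold(predicted, measured, threshold_ms):
    """Fraction of predictions whose absolute error is <= threshold_ms."""
    errors = [abs(p - m) for p, m in zip(predicted, measured)]
    return sum(e <= threshold_ms for e in errors) / len(errors)

# Hypothetical predicted vs. RIPE-Atlas-measured RTTs (ms), for illustration.
predicted = [22, 43, 70, 75, 12, 31]
measured  = [24, 41, 82, 76, 12, 35]

print(within_threshold(predicted, measured, 3))   # fraction within 3 ms
print(within_threshold(predicted, measured, 10))  # fraction within 10 ms
```

<p>Sweeping the threshold over a range of values yields the cumulative view summarized by the histogram above.</p>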
    <div>
      <h2>Results</h2>
      <a href="#results">
        
      </a>
    </div>
    <p>We applied our methodology to nine major CDNs, including Cloudflare, in September 2021. As shown in the boxplot below, Cloudflare exhibits the lowest median RTT across all observed clients, with a median RTT close to 10 ms.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3OsQkNEDKMD6Zgkw6ioRQi/ed28acf0524ec0338216e521c1f9f551/8-1.png" />
            
</figure><p>Boxplot of the global RTT distributions for each of the 9 networks we considered in our experiments. We anonymize the rest of the networks since the focus of this measurement is not to provide a ranking of content providers, but to contextualize the performance of Cloudflare’s network relative to comparable networks.</p>
    <div>
      <h2>Limitations of measurement methodology</h2>
      <a href="#limitations-of-measurement-methodology">
        
      </a>
    </div>
    <p>First, because our approach relies on estimating latency, it cannot produce millisecond-accurate measurements. Such accuracy, however, is essentially infeasible even with real user measurements, because network conditions are highly dynamic and measured RTTs may differ significantly between measurements.</p><p>Secondly, our approach obviously cannot be used to monitor network health in real time and detect <a href="https://www.cloudflare.com/learning/cdn/common-cdn-issues/">performance issues</a> that may often lie outside Cloudflare’s network. Instead, our approach is useful for understanding the expected performance of our network topology and connectivity, and we can test what-if scenarios to predict the impact on performance of different events (e.g. deployment of a new data center, or interruption of connectivity to an ISP or IXP).</p><p>Finally, while Cloudflare has the most extensive coverage of data centers and IXPs compared to other CDNs, there are certain countries where, unlike some other CDNs, Cloudflare does not have a data center. In other countries, Cloudflare is present in a partner data center but not in a carrier-neutral facility, which may restrict the number of direct peering links between Cloudflare and regional ISPs. In such countries, client IPs may be routed to a data center outside the country, because the BGP decision process typically prioritizes cost over proximity. Therefore, for about 7% of client /24 IP prefixes, we do not have a measured RTT from a data center in the same country as the IP. We are working to alleviate this with traceroute measurements and will report back later.</p>
    <div>
      <h2>Looking Ahead</h2>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>The ability to predict and compare the performance of different CDN networks allows us to evaluate the impact of different peering and data center strategies, as well as identify shortcomings in our Anycast Catchments and traffic engineering policies. Our ongoing work focuses on measuring and quantifying the impact of peering on IXPs on end-to-end latencies, as well as identifying cases of local Internet ecosystems where an open peering policy may lead to latency increases. This work will eventually enable us to optimize our infrastructure placement and control-plane policies to the specific topological properties of different regions and minimize latency for end users.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">6hm0oQNiiTzdYUnrCxrRK1</guid>
            <dc:creator>Vasilis Giotsas</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[Unbuckling the narrow waist of IP: Addressing Agility for Names and Web Services]]></title>
            <link>https://blog.cloudflare.com/addressing-agility/</link>
            <pubDate>Thu, 14 Oct 2021 12:59:18 GMT</pubDate>
            <description><![CDATA[ IP addresses associated with names, interfaces, and sockets, can tie these things together in a way that IP was never designed to support. This post describes Cloudflare efforts to decouple of IP addresses from names, the latest in a quest for something we’re calling Addressing Agility. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>At large operational scales, IP addressing stifles innovation in network- and web-oriented services. For every architectural change, and certainly when starting to design new systems, the first set of questions we are forced to ask are:</p><ul><li><p>Which block of IP addresses do or can we use?</p></li><li><p>Do we have enough in IPv4? If not, where or how can we get them?</p></li><li><p>How do we use IPv6 addresses, and does this affect other uses of IPv6?</p></li><li><p>Oh, and what careful plan, checks, time, and people do we need for migration?</p></li></ul><p>Having to stop and worry about IP addresses costs time, money, resources. This may sound surprising, given the visionary and resilient <a href="https://datatracker.ietf.org/doc/html/rfc791">advent of IP</a>, 40+ years ago. By their very design, IP addresses should be the last thing that any network has to think about. However, if the Internet has laid anything bare, it’s that small or seemingly unimportant weaknesses — often invisible or impossible to see at design time — always show up at sufficient scale.</p><p>One thing we do know: “more addresses” should never be the answer. In IPv4 that type of thinking only contributes to their scarcity, driving up further their market prices. IPv6 is absolutely necessary, but only one part of the solution. For example, in IPv6, the best practice says that the smallest allocation, just for personal use, is /56 -- that’s 2<sup>72</sup> or about 4,722,000,000,000,000,000,000 addresses. I certainly can’t reason about numbers that large. Can you?</p><p>In this blog post, we’ll explain why IP addressing is a problem for web services, the underlying causes, and then describe an innovative solution that we’re calling Addressing Agility, alongside the lessons we’ve learned. The best part of all may be the kinds of new systems and architectures enabled by Addressing Agility. 
The full details are available in our recent <a href="https://research.cloudflare.com/publications/Fayed2021/">paper</a> from ACM SIGCOMM 2021. As a preview, here is a summary of some of the things we learned:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74paTQy3g6xZgcTVirW3mz/9c65a4786c9683b570c256b33594d722/image5-18.png" />
            
            </figure><p>It’s true! There is no limit to the number of names that can appear on any single address; the address of any name can change with every new query, <i>anywhere</i>; and address changes can be made for any reason, be it service provisioning or policy or performance evaluation, or others we’ve yet to encounter...</p><p>Explained below are the reasons this is all true, the way we get there, and the reasons these lessons matter for HTTP and TLS services <i>of any size</i>. The key insight on which we build: On the Internet Protocol (IP) design, much like the global postal system, <b>addresses have never been, should never be, and in no way are ever, needed to represent names.</b> We just sometimes treat addresses as if they do. Instead, this work shows that all names should share all of their addresses, any set of their addresses, or even just one address.</p>
    <div>
      <h2>The narrow waist is a funnel, but also a choke point</h2>
      <a href="#the-narrow-waist-is-a-funnel-but-also-a-choke-point">
        
      </a>
    </div>
    <p>Decades-old conventions artificially tie IP addresses to names and resources. This is understandable since the architecture and software that drive the Internet evolved from a setting in which one computer had one name and (most often) one network interface card. It would be natural, then, for the Internet to evolve such that one IP address would be associated with names and software processes.</p><p>Among end clients and network carriers, where there is little need for names and less need for listening processes, these IP bindings have little impact. However, the name and process conventions create strong limitations on <i>all</i> content hosting, distribution, and content-service providers (CSPs). Once assigned to names, interfaces, and sockets, addresses become largely static and require effort, planning, and care to change if change is possible at all.</p><p>The “narrow waist” of IP has enabled the Internet, but much like TCP has been to transport protocols and HTTP to application protocols, IP has become a stifling bottleneck to innovation. The idea is depicted by the figure below, in which we see that otherwise separate communication bindings (with names) and connection bindings (with interfaces and sockets) create transitive relationships between them.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2JLHGmkSTwbwo97XCnFRQi/df435beb76297e4fea5a77e0a0d70e7b/image8-12.png" />
            
            </figure><p>The transitive lock is hard to break, because changing either can have an impact on the other. Moreover, service providers often use IP addresses to represent policies and service levels that themselves exist independently of names. Ultimately the IP bindings are one more thing to think about — and for no good reason.</p><p>Let’s put this another way. When thinking of new designs, new architectures, or just better resource allocations, the first set of questions should never be “which IP addresses do we use?” or “do we have IP addresses for this?” Questions like these and their answers slow development and innovation.</p><p>We realised that IP bindings are not only artificial but, according to the original visionary RFCs and standards, also incorrect. In fact, the notion of IP addresses as being representative of anything other than reachability runs counter to their original design. In the original <a href="https://datatracker.ietf.org/doc/html/rfc791#section-2.3">RFC</a> and related drafts, the architects are explicit, “A distinction is made between names, addresses, and routes. A name indicates what we seek. An address indicates where it is. A route indicates how to get there.” <b>Any</b> <b>association to IP of information like SNI or HTTP host in higher-layer protocols is a clear violation of the layering principle</b>.</p><p>Of course none of our work exists in isolation. It does, however, complete a long-standing evolution to decouple IP addresses from their conventional use, an evolution that consists of <a href="/cloudflare-research-two-years-in/">standing on</a> the shoulders of giants.</p>
    <div>
      <h3>The Evolving Past...</h3>
      <a href="#the-evolving-past">
        
      </a>
    </div>
    <p>Looking backwards over the last 20 years, it’s easy to see that a quest for addressing agility has been ongoing for some time, and one in which Cloudflare has been deeply invested.</p><p>The decades-old one-to-one binding between IP and network card interfaces was first broken a few years ago when Google’s <a href="https://research.google/pubs/pub44824/">Maglev</a> combined Equal Cost MultiPath (ECMP) and consistent hashing to disseminate traffic from one ‘virtual’ IP address among many servers. As an aside, according to the original Internet Protocol RFCs, this use of IP is proscribed and there is nothing virtual about it.</p><p>Many similar systems have since emerged at GitHub, Facebook, and elsewhere, including our very own <a href="/unimog-cloudflares-edge-load-balancer/">Unimog</a>. More recently, Cloudflare designed a new programmable sockets architecture called <a href="/its-crowded-in-here/">bpf_sk_lookup</a> to decouple IP addresses from sockets and processes.</p><p><b>But what about those names?</b> The value of ‘virtual hosting’ was cemented in 1997 when HTTP/1.1 made the Host header field mandatory. This was the first official acknowledgement that multiple names can coexist on a single IP address, and was necessarily reproduced by TLS in the Server Name Indication field. These are absolute requirements since the number of possible names is greater than the number of IP addresses.</p>
    <div>
      <h3>...Indicates an Agile Future</h3>
      <a href="#indicates-an-agile-future">
        
      </a>
    </div>
    <p>Looking ahead, Shakespeare was wise to ask, “What’s in a Name?” If the Internet could speak then it might say, “That name which we label by any other address would be just as reachable.”</p><p>If Shakespeare instead asked, “What is in an address?” then the Internet would similarly answer, “That address which we label by any other name would be just as reachable, too.”</p><p>A strong implication emerges from the truth of those answers: The mapping between names and addresses is any-to-any. If this is true then any address can be used to reach a name as long as a name is reachable at an address.</p><p>In fact, a version of many addresses for a name has been available since 1995 with the introduction of <a href="https://datatracker.ietf.org/doc/html/rfc1794">DNS-based load-balancing</a>. Then why not all addresses for all names, or any addresses at any given time for all names? Or — as we’ll soon discover — one address for all names! But first let’s talk about the manner in which addressing agility is achieved.</p>
    <div>
      <h2>Achieving Addressing Agility: Ignore names, map policies</h2>
      <a href="#achieving-addressing-agility-ignore-names-map-policies">
        
      </a>
    </div>
    <p>The key to addressing agility is authoritative DNS — but not in the static name-to-IP mappings stored in some form of a record or lookup table. Consider that from any client’s perspective, the binding only appears ‘on-query’. For all practical uses of the mapping, the query’s response is the last possible moment in the lifetime of a request where a name can be bound to an address.</p><p>This leads to the observation that name mappings are actually made, not in some record or zone file, but at the moment the response is returned. It’s a subtle but important distinction. Today’s DNS systems use a name to look up a set of addresses, and then sometimes use some policy to decide which specific address to return. The idea is shown in the figure below. When a query arrives, a lookup reveals the addresses associated with that name, and then returns one or more of those addresses. Often, additional policy or logic filters are used to narrow the address selection, such as service level or geo-regional coverage. The important detail is that addresses are identified with a name first, and policies are only applied afterwards.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6yJ6ppYKPOIdy8ycWt1WPH/eb531cfe42021c754d1d492ba5e0c61e/image10-5.png" />
            
            </figure><p>(a) Conventional Authoritative DNS</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5aCwoQPNckWBfEa1wbJ5Ah/8074247fe9fea9a946f5c2a78027294d/image3-24.png" />
            
            </figure><p>(b) Addressing Agility</p><p>Addressing agility is achieved by inverting this relationship. Instead of IP addresses pre-assigned to a name, our architecture begins with a policy that may (or, as in our case, may not) include a name. For example, a policy may be represented by attributes such as location and account type and ignore the name (which we did in our deployment). The attributes identify a pool of addresses that are associated with that policy. The pool itself may be isolated to that policy or have elements shared with other pools and policies. Moreover, all the addresses in the pool are equivalent. This means that any of the addresses may be returned — or even selected at random — without inspecting the DNS query name.</p><p>Now pause for a moment because there are two really noteworthy implications that fall out of per-query responses:</p><p>i. IP addresses can be, and are, computed and assigned at runtime or query-time.</p><p>ii. The lifetime of the IP-to-name mapping is the larger of the ensuing connection lifetime and the TTL in downstream caches.</p><p>The outcome is powerful and means that <b>the binding itself is otherwise ephemeral and can be changed without regard to previous bindings, resolvers, clients, or purpose.</b> Also, scale is no issue, and we know because we deployed it at the edge.</p>
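<p>To make the inversion concrete, here is a minimal sketch — ours, for illustration only, not Cloudflare’s implementation; the policy attributes and prefixes are invented. The responder selects a pool by policy first, then returns any address from it at random, never consulting the query name.</p>

```python
import ipaddress
import random

# Hypothetical policy table mapping attributes to address pools. The
# attribute values and prefixes below are invented for this example.
POOLS = {
    ("CA", "free"): ipaddress.ip_network("192.0.2.0/26"),
    ("CA", "paid"): ipaddress.ip_network("192.0.2.64/26"),
}

def resolve(query_name: str, location: str, account: str) -> str:
    """Answer a query: pick the pool by policy, then any address at random.

    Note that query_name never influences the choice of address; any
    address in the pool can serve any name.
    """
    pool = POOLS[(location, account)]
    return str(pool.network_address + random.randrange(pool.num_addresses))
```

<p>Successive calls for the same name may return different addresses, and all are equally valid, precisely because servers accept connections for every name on every address in the pool.</p>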
    <div>
      <h3>IPv6 — new clothes, same emperor</h3>
      <a href="#ipv6-new-clothes-same-emperor">
        
      </a>
    </div>
    <p>Before talking about our deployment, let’s first address the proverbial elephant in the room: IPv6. The first thing to make clear is that everything — <i>everything</i> — discussed here in the context of IPv4 applies equally in IPv6. As is true of the global postal system, addresses are addresses, whether in Canada, Cambodia, Cameroon, Chile, or China — and that includes their relatively static, inflexible nature.</p><p>Despite equivalence, the obvious question remains: Surely all the reasons to pursue Addressing Agility are satisfied simply by changing to IPv6? Counter-intuitive as it may be, the answer is a definite, absolute no! IPv6 may mitigate address exhaustion, at least for the lifetimes of everyone alive today, but the abundance of IPv6 prefixes and addresses makes reasoning about their bindings to names and resources difficult.</p><p>The abundance of IPv6 addresses also risks inefficiencies because operators can take advantage of the bit length and large prefix sizes to embed meaning into the IP address. This is a powerful feature of IPv6, but it also means many, many addresses in any prefix will go unused.</p><p>To be clear, Cloudflare is demonstrably one of the biggest advocates of IPv6, and for good reasons, not least that the abundance of addresses ensures longevity. Even so, IPv6 changes little about the way addresses are tied to names and resources, whereas an address’s agility ensures flexibility and responsiveness for their lifetimes.</p>
    <div>
      <h3>A Side-note: Agility is for Everyone</h3>
      <a href="#a-side-note-agility-is-for-everyone">
        
      </a>
    </div>
    <p>One last comment on the architecture and its transferability — <b>Addressing Agility is usable, even desirable, for any service that operates authoritative DNS</b>. Other content-oriented service providers are obvious contenders, but so too are smaller operators. Universities, enterprises, and governments are just a few examples of organizations that can operate their own authoritative services. So long as the operators are able to accept connections on the IP addresses that are returned, all are potential beneficiaries of addressing agility as a result.</p>
    <div>
      <h2>Policy-based randomized addresses — at scale</h2>
      <a href="#policy-based-randomized-addresses-at-scale">
        
      </a>
    </div>
    <p>We’ve been working with Addressing Agility live at the edge, with production traffic, since June 2020, as follows:</p><ul><li><p>More than 20 million hostnames and services</p></li><li><p>All data centers in Canada (giving a reasonable population and multiple time zones)</p></li><li><p>/20 (4096 addresses) in IPv4 and /44 in IPv6</p></li><li><p>/24 (256 addresses) in IPv4 from January 2021 to June 2021</p></li><li><p>For every query, generate a random host-portion within the prefix.</p></li></ul><p>After all, the true test of agility is at its most extreme when a random address is generated for every query that hits our servers. Then we decided to truly put the idea to the test. In June 2021, in our Montreal data center and soon after in Toronto, all 20+ million zones were mapped to one single address.</p><p>Over the course of one year, every query for a domain captured by the policy received an address selected at random — from a set of as few as 4096 addresses, then 256, and then one. Internally, we refer to the address set of one as Ao1, and we’ll return to this point later.</p>
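<p>The last bullet is straightforward to realise. A minimal sketch, using Python’s ipaddress module and an arbitrary example prefix (not one of the prefixes from our deployment): draw a uniformly random offset within the prefix on every query.</p>

```python
import ipaddress
import random

def random_address(prefix: str) -> str:
    """Return a uniformly random address inside the given prefix."""
    net = ipaddress.ip_network(prefix)
    return str(net.network_address + random.randrange(net.num_addresses))

# A /20 has 2**(32-20) = 4096 candidate addresses, a /24 has 256, and a
# /32 has exactly one -- so the same scheme degrades gracefully all the
# way down to Ao1. (203.0.112.0/20 is an arbitrary example prefix.)
assert ipaddress.ip_network("203.0.112.0/20").num_addresses == 4096
```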
    <div>
      <h3>The measure of success: “Nothing to see here”</h3>
      <a href="#the-measure-of-success-nothing-to-see-here">
        
      </a>
    </div>
    <p>There may be a number of questions our readers are quietly asking themselves:</p><ul><li><p>What did this break on the Internet?</p></li><li><p>What effect did this have on Cloudflare systems?</p></li><li><p>What would I see happening if I could?</p></li></ul><p>The short answer to each question above is <i>nothing</i>. But — and this is important — address randomization does expose weaknesses in the designs of systems that rely on the Internet. The weaknesses <i>always</i>, every one, occur because the designers ascribe meaning to IP addresses beyond reachability. (And, if only incidentally, every one of those weaknesses is circumvented by the use of one address, or ‘Ao1.’)</p><p>To better understand the nature of “nothing”, let’s answer the above questions starting from the bottom of the list.</p>
    <div>
      <h4><b>What would I see if I could?</b></h4>
      <a href="#what-would-i-see-if-i-could">
        
      </a>
    </div>
    <p>The answer is shown by the example in the figure below. From all data centers in the “Rest of World” outside our deployment, a query for a zone returns the same addresses (such is Cloudflare’s global anycast system). In contrast, every query that lands in a deployment data center receives a random address. These can be seen below in successive dig commands to two different data centers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4IDAhbgavbKx0WOhd5uYn/571890e89a92724d6c9168e045763ebc/image9-9.png" />
            
            </figure><p>For those who may be wondering about subsequent request traffic, yes, this means that servers are configured to accept connection requests for <i>any</i> of the 20+ million domains on <i>all</i> addresses in the address pool.</p>
    <div>
      <h4><b>Ok, but surely Cloudflare’s surrounding systems needed modification?</b></h4>
      <a href="#ok-but-surely-cloudflares-surrounding-systems-needed-modification">
        
      </a>
    </div>
    <p>Nope. This is a drop-in transparent change to the data pipeline for authoritative DNS. Routing prefix advertisements in BGP, DDoS protections, load balancers, the distributed cache, ... no changes were required to any of them.</p><p>There is, however, one fascinating side effect: randomization is to IP addresses as a good hash function is to a hash table -- it evenly maps an arbitrary-size input to a fixed number of outputs. The effect can be seen by looking at measures of load-per-IP before and after randomization as in the graphs below, with data taken from 1% samples of requests at one data center over seven days.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4cOkznkWXZcsaNI5lScPlI/cc1546e3b58274298b295b00aa9e0b7e/image4-24.png" />
            
            </figure><p>Before Addressing Agility</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YNafEISfLNYqe5GThlNT9/dc95ed24557685da6640ef45f483b892/image2-26.png" />
            
            </figure><p>Randomization on /20</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ZVhcE5JRaNbzUf7P7pY3W/2af2f3ac50364ba9128c7b9976bf9865/image1-40.png" />
            
            </figure><p>Randomization on /24</p><p>Before randomization, for only a small portion of Cloudflare’s IP space, (a) the difference between greatest and least requests per IP (y1-axis on the left) is three orders of magnitude; similarly, bytes per IP (y2-axis on the right) is almost six orders of magnitude. After randomization, (b) for all domains on a single /20 that previously occupied multiple /20s, these reduce to 2 and 3 orders of magnitude, respectively. Taking this one step further down to /24 in (c), per-query randomization of 20+ million zones onto 256 addresses reduces differences in load to small constant factors.</p><p>This matters to any content service provider that provisions resources by IP address: a priori predictions of the load a customer will generate are hard. The above graphs are evidence that the best path forward is to <i>give all the addresses to all the names</i>.</p>
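<p>The hash-table analogy is easy to check in simulation. The sketch below is illustrative only — the skewed per-name loads are invented and this is not our measurement pipeline. It compares the spread of load across 256 addresses when names are pinned to addresses statically versus when every individual query lands on a random address.</p>

```python
import random

random.seed(42)

ADDRS = 256
# Invented, heavily skewed per-name request counts: a few very popular
# names and a long tail, as is typical for a content provider.
loads = [10_000] * 5 + [100] * 500 + [1] * 5_000

# Static binding: each name is pinned once to a single address.
static = [0] * ADDRS
for load in loads:
    static[random.randrange(ADDRS)] += load

# Per-query randomization: every individual request lands anywhere.
randomized = [0] * ADDRS
for load in loads:
    for _ in range(load):
        randomized[random.randrange(ADDRS)] += 1

def spread(bins):
    """Ratio of the busiest address to the least busy (>= 1)."""
    return max(bins) / max(1, min(bins))
```

<p>With static binding the busiest address carries orders of magnitude more load than the least busy; with per-query randomization the ratio collapses to a small constant factor, mirroring the graphs above.</p>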
    <div>
      <h4><b>Surely this breaks something on the wider Internet?</b></h4>
      <a href="#surely-this-breaks-something-on-the-wider-internet">
        
      </a>
    </div>
    <p>Here, too, the answer is no! Well, perhaps more precisely stated as, “no, randomization breaks nothing... but it can expose weaknesses in systems and their designs.”</p><p>Any systems that <i>might</i> be affected by address randomization appear to have a prerequisite: some meaning is ascribed to the IP address beyond just reachability. Addressing Agility keeps and even restores the semantics of IP addresses and the core Internet architecture, but it will break software systems that make assumptions about their meaning.</p><p>Let’s first cover a few examples, why they don’t matter, and then follow with a small change to addressing agility that bypasses weaknesses (by using one single IP address):</p><ul><li><p><b>HTTP Connection Coalescing</b> enables a client to re-use existing connections to request resources from different origins. Clients such as Firefox that permit coalescing when the URI authority matches the connection are unaffected. However, clients that require a URI host to resolve to the same IP address as the given connection will fail.</p></li><li><p><b>Non-TLS or non-HTTP services</b> may be affected. One example is ssh, which maintains a hostname-to-IP mapping in its known_hosts. This association, while understandable, is outdated and already broken given that many DNS records presently return more than one IP address.</p></li><li><p><b>Non-SNI TLS</b> certificates require a dedicated IP address. Providers are forced to charge a premium because each address can only support a single certificate without SNI. The bigger issue, independent of IP, is the use of TLS without SNI. We have launched efforts to understand non-SNI to hopefully end this unfortunate legacy.</p></li><li><p><b>DDoS protections</b> that rely on destination IPs may be hindered, initially. We would argue that addressing agility is beneficial for two reasons. 
First, IP randomization distributes the attack load across all addresses in use, effectively serving as a layer-3 load-balancer. Second, DoS mitigations often work by changing IP addresses, an ability that is inherent in Addressing Agility.</p></li></ul>
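<p>The coalescing point is worth making concrete. A hedged sketch of the two client policies (the function and field names are ours, not any browser’s): a certificate-based check is untouched by randomization, while an IP-equality check fails as soon as two resolutions of the same name disagree.</p>

```python
from dataclasses import dataclass

@dataclass
class Connection:
    ip: str           # peer address of the existing connection
    cert_names: set   # names covered by the server certificate

def can_coalesce_by_cert(conn: Connection, host: str) -> bool:
    # Firefox-style: reuse the connection whenever the certificate covers
    # the new host; the IP address is irrelevant, so per-query address
    # randomization is harmless.
    return host in conn.cert_names

def can_coalesce_by_ip(conn: Connection, host: str, resolved_ip: str) -> bool:
    # Stricter clients additionally require the new host to resolve to the
    # connection's IP; under randomization this check almost always fails.
    return host in conn.cert_names and resolved_ip == conn.ip
```

<p>Under Ao1, of course, every resolution returns the same address, so even the stricter check succeeds — which is one reason coalescing rates rise where Ao1 is deployed.</p>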
    <div>
      <h2>All for One, and One for All</h2>
      <a href="#all-for-on-one-and-one-for-all">
        
      </a>
    </div>
    <p>We started with 20+ million zones bound to tens of thousands of addresses, and successfully served them from 4096 addresses in a /20 and then 256 addresses in a /24. Surely this trend raises the following question:</p><blockquote><p><b>If randomization works over</b> <b><i>n</i></b> <b>addresses, then why not randomization over 1 address?</b></p></blockquote><p>Indeed, why not? Recall from above the comment about randomization over IPs as being equivalent to a perfect hash function in a hash table. The thing about well-designed hash-based structures is that they preserve their properties for any size of the structure, even a size of 1. Such a reduction would be a true test of the foundations on which Addressing Agility is constructed.</p><p>So, test we did. From a /20 address set, to a /24 and then, from June 2021, to an address set of one /32, and equivalently a /128 (Ao1). It doesn’t just work. It <i>really</i> works. Concerns that might be exposed by randomization are resolved by Ao1. For example, non-TLS or non-HTTP services have a reliable IP address (or at least a non-random one, until there is a policy change on the name). Also, HTTP connection coalescing falls out as if for free and, yes, we see increased levels of coalescing where Ao1 is being used.</p>
    <div>
      <h3>But why in IPv6 where there are so many addresses?</h3>
      <a href="#but-why-in-ipv6-where-there-are-so-many-addresses">
        
      </a>
    </div>
    <p>One argument against binding to a single IPv6 address is that there is no need, because address exhaustion is unlikely. This is a pre-CIDR position that, we claim, is benign at best and irresponsible at worst. As mentioned above, the number of IPv6 addresses makes reasoning about them difficult. In lieu of asking why use a single IPv6 address, we should be asking, “why not?”</p>
    <div>
      <h3>Are there upstream implications? Yes, and opportunities!</h3>
      <a href="#are-there-upstream-implications-yes-and-opportunities">
        
      </a>
    </div>
    <p>Ao1 reveals an entirely different set of implications from IP randomization that, arguably, gives us a window into the future of Internet routing and reachability by amplifying the effects that seemingly small actions might have.</p><p>Why? The number of possible variable-length names in the universe will always exceed the number of fixed-length addresses. This means that, <b>by the</b> <a href="https://en.wikipedia.org/wiki/Pigeonhole_principle"><b>pigeonhole principle</b></a><b>, single IP addresses must be shared by multiple names</b>, and different content from unrelated parties.</p><p>The possible upstream effects amplified by Ao1 are worth raising and are described below. So far, though, we’ve seen none of these in our evaluations, nor have they come up in communications with upstream networks.</p><ul><li><p><b>Upstream Routing Errors are Immediate and Total.</b> If all traffic arrives on a single address (or prefix), then upstream routing errors affect all content equally. (This is the reason Cloudflare returns two addresses in non-contiguous address ranges.) Note, however, the same is true of threat blocking.</p></li><li><p><b>Upstream DoS Protections could be triggered.</b> It is conceivable that the concentration of requests and traffic on a single address could be perceived upstream as a DoS attack and trigger upstream protections that may exist.</p></li></ul><p>In both cases, the actions are mitigated by Addressing Agility’s ability to change addresses en masse so quickly. Prevention is also possible, but requires open communication and discourse.</p><p>One last upstream effect remains:</p><ul><li><p><b>Port exhaustion in IPv4 NAT might be accelerated, and is solved by IPv6!</b> From the client-side, the number of permissible concurrent connections to one-address is upper-bounded by the size of a transport protocol’s port field, for example about 65K in TCP.</p></li></ul><p>For example, in TCP on Linux this was an issue until recently. 
(See this <a href="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=90c337da1524863838658078ec34241f45d8394d">commit</a> and SO_BIND_ADDRESS_NO_PORT in <a href="https://www.man7.org/linux/man-pages/man7/ip.7.html">ip(7) man page</a>.)  In UDP the issue remains. In QUIC, connection identifiers can prevent port exhaustion, but they have to be used. So far, though, we have yet to see any evidence that this is an issue.</p><p>Even so — and here is the best part — to the best of our knowledge <i>this is the only risk to one-address uses, and is also immediately resolved by migrating to IPv6</i>. (So, ISPs and network administrators, go forth and implement IPv6!)</p>
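<p>The port-exhaustion bound is simple arithmetic, worked through below. The ephemeral range used is Linux’s default (net.ipv4.ip_local_port_range = 32768 60999); the exact numbers are OS- and configuration-dependent assumptions, not measurements from our deployment.</p>

```python
# A TCP connection is identified by the 4-tuple
# (src IP, src port, dst IP, dst port). With one source address and a
# fixed destination address and port, only the 16-bit source port varies.
PORT_FIELD_BITS = 16
hard_limit = 2 ** PORT_FIELD_BITS    # 65536 distinct source ports

# Linux's default ephemeral range is far smaller than the field allows.
lo, hi = 32768, 60999                # default net.ipv4.ip_local_port_range
ephemeral = hi - lo + 1              # usable source ports per destination

def budget(dst_addresses: int, ports: int = ephemeral) -> int:
    """Concurrent-connection budget from one client address: each extra
    destination address multiplies the number of usable 4-tuples, which
    is why more addresses (and ultimately IPv6) dissolve the problem."""
    return dst_addresses * ports
```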
    <div>
      <h2>We’re just getting started!</h2>
      <a href="#were-just-getting-started">
        
      </a>
    </div>
    <p>And so we end as we began. With no limit to the number of names on any single IP address, and the ability to change the address per query, for any reason, what could <i>you</i> build?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5AvsEosOmXPtmdbeJJYzN2/1e807c46b1d6cfdde89dc8234cc6b69f/image7-12.png" />
            
            </figure><p>We are, indeed, just getting started! The flexibility and future-proofing afforded by Addressing Agility are enabling us to imagine, design, and build new systems and architectures. We’re planning BGP route leak detection and mitigation for anycast systems, measurement platforms, and more.</p><p>Further technical details on all the above, as well as acknowledgements to so many who helped make this possible, can be found in this <a href="https://research.cloudflare.com/publications/Fayed2021/">paper</a> and short <a href="https://youtu.be/zg6944L-B3M?t=2137">talk</a>. Even with these new possibilities, challenges remain. There are many open questions that include, but are in no way limited to, the following:</p><ul><li><p>What policies can be reasonably expressed or implemented?</p></li><li><p>Is there an abstract syntax or grammar with which to express them?</p></li><li><p>Could we use formal methods and verification to prevent erroneous or conflicting policies?</p></li></ul><p>Addressing Agility is for everyone, and is even necessary for these ideas to succeed more widely. Input and ideas are welcomed at <a>ask-research@cloudflare.com</a>.</p><p>If you are a student enrolled in a PhD or equivalent research program and looking for an internship for 2022, we have openings in the <a href="https://boards.greenhouse.io/cloudflare/jobs/1976547?gh_jid=1976547">USA or Canada</a> and the <a href="https://boards.greenhouse.io/cloudflare/jobs/2525989?gh_jid=2525989">EU or UK</a>.</p><p>If you’re interested in contributing to projects like this or helping Cloudflare develop its traffic and address management systems, <a href="https://boards.greenhouse.io/cloudflare/jobs/2428325">our Addressing Engineering team is hiring</a>!</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">2FpQoLe44ArRQteiIHDb36</guid>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Project Pangea: Helping Underserved Communities Expand Access to the Internet For Free]]></title>
            <link>https://blog.cloudflare.com/pangea/</link>
            <pubDate>Mon, 26 Jul 2021 12:59:25 GMT</pubDate>
            <description><![CDATA[ Cloudflare is excited to announce Project Pangea. We’re launching a program that provides secure, performant, reliable access to the Internet for community networks that support underserved communities, and we’re doing it for free because we want to help build an Internet for everyone. ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://www.weforum.org/agenda/2020/04/coronavirus-covid-19-pandemic-digital-divide-internet-data-broadband-mobbile/">Half of the world’s population has no access to the Internet</a>, with many more limited to poor, expensive, and unreliable connectivity. This problem persists despite large levels of public investment, private infrastructure, and effort by local organizers.</p><p>Today, Cloudflare is excited to announce Project Pangea: a piece of the puzzle to help solve this problem. We’re launching a program that provides secure, performant, reliable access to the Internet for community networks that support underserved communities, and we’re doing it for free<sup>1</sup> because <a href="/understanding-where-the-internet-isnt-good-enough-yet/">we want to help build an Internet for everyone</a>.</p>
    <div>
      <h3>What is Cloudflare doing to help?</h3>
      <a href="#what-is-cloudflare-doing-to-help">
        
      </a>
    </div>
    <p>Project Pangea is Cloudflare’s project to help bring underserved communities secure connectivity to the Internet through Cloudflare’s global and interconnected network.</p><p>Cloudflare is offering our suite of network services — <a href="https://www.cloudflare.com/network-interconnect/">Cloudflare Network Interconnect</a>, <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a>, and <a href="/introducing-magic-firewall/">Magic Firewall</a> — for free to nonprofit community networks, local networks, or other networks primarily focused on providing Internet access to local underserved or developing areas. This service would dramatically reduce the cost for communities to connect to the Internet, with industry leading security and performance functions built-in:</p><ul><li><p><b>Cloudflare Network Interconnect</b> provides access to Cloudflare’s edge in 200+ cities across the globe through physical and virtual connectivity options.</p></li><li><p><b>Magic Transit</b> acts as a conduit to and from the broader Internet and protects community networks by mitigating DDoS attacks within seconds at the edge.</p></li><li><p><b>Magic Firewall</b> gives community networks access to a network-layer <a href="https://www.cloudflare.com/learning/cloud/what-is-a-cloud-firewall/">firewall as a service</a>, providing further protection from malicious traffic.</p></li></ul><p>We’ve learned from working with customers that pure connectivity is not enough to keep a network sustainably connected to the Internet. Malicious traffic, such as DDoS attacks, can target a network and saturate Internet service links, which can lead to providers aggressively rate limiting or even entirely shutting down incoming traffic until the attack subsides. This is why we’re including our security services in addition to connectivity as part of Project Pangea: no attacker should be able to keep communities closed off from accessing the Internet.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qlqh0LQN6YOGyFRsyBFNO/b970dd6fa3f24e243ac30a6c206a504b/pangea-flow.png" />
            
            </figure>
    <div>
      <h3>What is a community network?</h3>
      <a href="#what-is-a-community-network">
        
      </a>
    </div>
    <p>Community networks have existed almost as long as commercial Internet subscribership, which began with dial-up service. The Internet Society, or <a href="https://www.internetsociety.org/issues/community-networks/">ISOC</a>, describes community networks as happening “when people come together to build and maintain the necessary infrastructure for Internet connection.”</p><p>Most often, community networks emerge from need, and in response to the lack or absence of available Internet connectivity. They consistently demonstrate success where public and private-sector initiatives have either failed or under-delivered. We’re not talking about stop-gap solutions here, either — community networks around the world have been providing reliable, sustainable, high-quality connections for years.</p><p>Many will operate only within their communities, but many others can grow, and have grown, to regional or national scale. The most common models of governance and operation are as not-for-profits or cooperatives, models that ensure reinvestment within the communities being served. For example, we see networks that reinvest their proceeds to replace Wi-Fi infrastructure with fibre-to-the-home.</p><p>Cloudflare celebrates these networks’ successes, and also the diversity of the communities that these networks represent. In that spirit, we’d like to dispel myths that we encountered during the launch of this program — many of which we wrongly assumed or believed to be true — because the myths turn out to be barriers that communities are so often forced to overcome. Community networks are built on knowledge sharing, and so we’re sharing some of that knowledge, so others can help accelerate community projects and policies, rather than rely on the assumptions that impede progress.</p><p><b>Myth #1: Only very rural or remote regions are underserved and in need.</b> It’s true that remote regions are underserved. 
It is also true that underserved regions exist within 10 km (about six miles) of large city centers, and even within the largest cities themselves, as evidenced by the existence of some of our launch partners.</p><p><b>Myth #2: Remote, rural, or underserved is also low-income.</b> This might just be the biggest myth of all. Rural and remote populations are often thriving communities that can afford service, but have no access. In contrast, urban community networks often emerge out of egalitarian need, because the access that is available is unaffordable to many.</p><p><b>Myth #3: Service is necessarily more expensive.</b> This myth is sometimes expressed by statements such as, “if large service providers can’t offer affordable access, then no one can.” More than a myth, this is a lie. Community networks (including our launch partners) use novel governance and cost models to ensure that subscribers pay rates similar to the wider market.</p><p><b>Myth #4: Technical expertise is a hard requirement and is unavailable.</b> There is a rich body of evidence and examples showing that, with small amounts of training and support, communities can build their own local networks cheaply and reliably with commodity hardware and non-specialist equipment.</p><p>These myths aside, there is one truth: <b>the path to sustainability is hard</b>. The start and initial growth of community networks often consists of volunteer time or grant funding, which are difficult to sustain in the long-term. Eventually the starting models need to transition to models of “willing to charge and willing to pay” — Project Pangea is designed to help fill this gap.</p>
    <div>
      <h2>What is the problem?</h2>
      <a href="#what-is-the-problem">
        
      </a>
    </div>
    <p>Communities around the world can and have put up Wi-Fi antennas and laid their own fibre. Even so, and however well-connected the community is to itself, <i>Internet services are prohibitively expensive — if they can be found at all</i>.</p><p>Two elements are required to connect to the Internet, and each incurs its own cost:</p><ul><li><p><b>Backhaul</b> connections to an interconnection point — the connection point may be anything from a local cabinet to a large Internet exchange point (IXP).</p></li><li><p><b>Internet Services</b> are provided by a network that interfaces with the wider Internet, and agrees to route traffic to and from the wider Internet on behalf of the community network.</p></li></ul><p>These are distinct elements. Backhaul service carries data packets along a physical link (a fibre cable or wireless medium). Internet service is separate and may be provided over that link, or at its endpoint.</p><p>The cost of Internet service for networks is both dominant and variable (with usage), so in most cases it is cheaper to purchase both as a bundle from service providers that also own or operate their own physical network. Telecommunications and energy companies are prime examples.</p><p>However, the operating costs and complexity of long-distance backhaul are significantly lower than the costs of Internet service. If reliable, high-capacity service were affordable, then community networks could extend their knowledge and governance models sustainably to also provide their own backhaul.</p><p>For all that community networks can build, establish, and operate, the one element entirely outside their control is the cost of Internet service — a problem that Project Pangea helps to solve.</p>
    <div>
      <h3>Why does the problem persist?</h3>
      <a href="#why-does-the-problem-persist">
        
      </a>
    </div>
    <p>On this subject, I — Marwan — can only share insights drawn from prior experience as a computer science professor, and a co-founder of <a href="https://hubs.net.uk/">HUBS c.i.c.</a>, launched with talented professors and a network engineer. HUBS is a not-for-profit backhaul and Internet provider in Scotland. It is a cooperative of more than a dozen community networks — some that serve communities with no roads in or out — across thousands of square kilometers along Scotland’s West Coast and Borders regions. As is true of many community networks, not least some of Pangea’s launch partners, HUBS is award-<a href="https://digital-strategy.ec.europa.eu/en/news/winners-european-broadband-awards-2016">winning</a>, and engages in <a href="https://committees.parliament.uk/committee/136/scottish-affairs-committee/news/102790/ee-o2-three-and-vodafone-questioned-on-scotland-mobile-coverage/">advocacy and policy</a> work.</p><p>During that time my co-founders and I engaged with research funders, economic development agencies, three levels of government, and so many communities that I lost track. After all that, the answer to the question is still far from clear. 
There are, however, noteworthy observations and experiences that stood out, and often came from surprising places:</p><ul><li><p>Cables on the ground get chewed by animals that, small or large, might never be seen.</p></li><li><p>Burying power and Ethernet cables, even 15 centimeters below soil, makes no difference because (we think) animals are drawn by the electrical current.</p></li><li><p>Property owners sometimes need to be convinced that giving up 8 to 10 square meters for a small tower, in exchange for free Internet and community benefit, is a good thing.</p></li><li><p>The raising of small towers, even ones that no one will see, is sometimes blocked by legislation or regulation that assumes private non-residential structures can only be a shed, or never taller than a shed.</p></li><li><p>Private fibre backbones installed with public funds are often inaccessible, or are charged by distance even though the cost to light 100 meters of fibre is identical to the cost of lighting 1 km of fibre.</p></li><li><p>Civil service agencies may be enthusiastic, but are also cautious, even in the face of evidence. Be patient, suffer frustration, be more patient, and repeat. Success is possible.</p></li><li><p>If and where possible, it’s best to avoid attempts to deliver service where national telecommunications companies have plans to do so.</p></li><li><p>Never underestimate tidal fading: twice a day, wireless signals over water will be amazing, and will completely disappear. We should have known!</p></li></ul><p>All anecdotes aside, the best policies and practices are non-trivial. But thanks to so many prior community efforts, and organizations such as <a href="https://www.internetsociety.org/issues/access/">ISOC</a>, the <a href="https://www.apc.org/">APC</a>, the <a href="https://a4ai.org/">A4AI</a>, and more, the challenges and solutions are better understood than ever before.</p>
    <div>
      <h3>How does a community network reach the Internet?</h3>
      <a href="#how-does-a-community-network-reach-the-internet">
        
      </a>
    </div>
    <p>First, we’d like to honor the many organisations we’ve learned from, who might say that there are no <i>technical</i> barriers to success. Connections within the community networks may be shaped by geographical features or regional regulations. For example, wireless lines of sight between antenna towers on personal property are guided by hills or restricted by regulations. Similarly, Ethernet cables and fibre deployments are guided by property ownership, digging rights, and the presence or migration of grazing animals that dig into soil and gnaw at cables — yes, they do, even small rabbits.</p><p>Once the community establishes its own area network, the connections to reach Internet services are more conventional and familiar. In part, the choice is influenced or determined by proximity to Internet exchanges, PoPs, or regional fibre cabinet installations. The connections fall into three broad categories.</p><p><b>Colocation.</b> A community network may be fortunate enough to have service coverage that overlaps with, or is near to, an Internet exchange point (IXP), as shown in the figure below. In this case a natural choice is to colocate a router within the exchange, near to the Internet service provider’s router (labeled as Cloudflare in the figure). Our launch partner <a href="https://www.nycmesh.net/">NYC Mesh</a> connects in this manner. Unfortunately, because exchanges are most often located in urban settings, colocation is unavailable to many, if not most, community networks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4yA8E4pM9To8f8Cp3Hz7xJ/e2dbbf0a0ce39a2045a2afa6b69eb0a1/Colocation-Community-Network.png" />
            
            </figure><p><b>Conventional point-to-point backhaul.</b> Community networks that are remote must establish a point-to-point backhaul connection to the Internet exchange. This connection method is shown in the figure below, in which the community network from the previous figure has moved to the left, and is joined by a physical long-distance link to the Internet service router that remains in the exchange on the right.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ZzArG47FqRbwCM0s4jU5T/6adc585aad03dc29b061ff50d1ac8072/Conventional-point-to-point-backhaul.png" />
            
            </figure><p>Point-to-point backhaul is familiar. If the infrastructure is available (and this is a big ‘if’), then backhaul is most often available from a utility company, such as a telecommunications or energy provider, that may also bundle Internet service as a way to reduce total costs. Even bundled, the total cost is variable and unaffordable to individual community networks, and is exacerbated by distance. Some community networks have succeeded in acquiring backhaul through university, research and education, or publicly-funded networks that are compelled or convinced to offer the service in the public interest. On the west coast of Scotland, for example, <a href="https://www.tegola.org.uk/tegola-history.html">Tegola</a> launched with service from the University of Highlands and Islands and the University of Edinburgh.</p><p><b>Start a backhaul cooperative for point-to-point and colocation.</b> The last connection option we see among our launch partners overcomes the prohibitive costs by forming a cooperative network in which the individual subscriber community networks are also members. The cooperative model can be seen in the figure below. The exchange remains on the right. On the left, the community network from the previous figure is now replaced by a collection of community networks that may optionally connect with each other (for example, to establish reliable routing if any link fails). Either directly or indirectly via other community networks, each of these community networks has a connection to a remote router at the near end of the point-to-point link. Crucially, the point-to-point backhaul service, as well as the co-located endpoints, is owned and operated by the cooperative. In this manner, an otherwise expensive backhaul service is made affordable by being a shared cost.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Z87gKX4VyktNOkRqFCLhp/ab2e1d519d15fd892a854f81271b45c1/Launch-a-backhaul-cooperative-for-point-to-point-and-colocation.png" />
            
            </figure><p>Two of our launch partners, <a href="https://guifi.net/">Guifi.net</a> and <a href="https://hubs.net.uk/">HUBS c.i.c.</a>, are organised this way and their 10+ years in operation demonstrate both success and sustainability. Since the backhaul provider is a cooperative, the community network members have a say in the ways that revenue is saved, spent, and — best of all — reinvested back into the service and infrastructure.</p>
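<p>The cost-sharing arithmetic behind the cooperative model is worth making concrete. Here is a minimal sketch with entirely invented figures — the link cost, colocation cost, and member count below are hypothetical, not drawn from any real cooperative:</p>

```python
# Hypothetical illustration of the cooperative backhaul cost model.
# All figures are invented for illustration only.

def monthly_cost_per_member(backhaul_cost: float,
                            colocation_cost: float,
                            members: int) -> float:
    """Split the cooperative's fixed monthly costs evenly across members."""
    return (backhaul_cost + colocation_cost) / members

# A long-distance link plus a colocated router at the exchange:
# unaffordable for one community network alone, modest for a cooperative.
alone = monthly_cost_per_member(4000, 500, members=1)
shared = monthly_cost_per_member(4000, 500, members=12)

print(f"Alone: {alone:.2f}/month; as a 12-member cooperative: {shared:.2f}/month")
```

<p>Because the dominant costs are fixed rather than per-member, each network’s share falls linearly as the cooperative grows — which is exactly what turns an otherwise unaffordable link into a sustainable shared service.</p>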
    <div>
      <h3>Why is Cloudflare doing this?</h3>
      <a href="#why-is-cloudflare-doing-this">
        
      </a>
    </div>
    <p>Cloudflare’s mission is to help build a better Internet, for <i>everyone</i>, not just those with privileged access based on their geographical location. Project Pangea aligns with this mission by extending the Internet we’re helping to build — a faster, more reliable, more secure Internet — to otherwise underserved communities.</p>
    <div>
      <h3>How can my community network get involved?</h3>
      <a href="#how-can-my-community-network-get-involved">
        
      </a>
    </div>
    <p>Check out our <a href="http://www.cloudflare.com/pangea">landing page</a> to learn more and apply for Project Pangea today.</p>
    <div>
      <h3>The ‘community’ in Cloudflare</h3>
      <a href="#the-community-in-cloudflare">
        
      </a>
    </div>
    <p>Lastly, in a blog post about community networks, we feel it is appropriate to acknowledge the ‘community’ at Cloudflare: Project Pangea is the culmination of multiple projects, and many people’s hours, effort, dedication, and community spirit. Many, many thanks to all.</p><p>______</p><p><sup>1</sup>For eligible networks, free up to 5Gbps at p95 levels.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Project Pangea]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Magic Firewall]]></category>
            <category><![CDATA[Network Interconnect]]></category>
            <guid isPermaLink="false">1QJVpfsZRMaJn1UAqQfa15</guid>
            <dc:creator>Marwan Fayed</dc:creator>
            <dc:creator>Annika Garbers</dc:creator>
        </item>
    </channel>
</rss>