
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 13:20:16 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare incident on November 14, 2024, resulting in lost logs]]></title>
            <link>https://blog.cloudflare.com/cloudflare-incident-on-november-14-2024-resulting-in-lost-logs/</link>
            <pubDate>Tue, 26 Nov 2024 16:00:00 GMT</pubDate>
            <description><![CDATA[ On November 14, 2024, Cloudflare experienced a Cloudflare Logs outage, impacting the majority of customers using these products. During the ~3.5 hours that these services were impacted, about 55% of the logs we normally send to customers were not sent and were lost. The details of what went wrong and why are interesting both for customers and practitioners. ]]></description>
            <content:encoded><![CDATA[ <p>On November 14, 2024, Cloudflare experienced an incident which impacted the majority of customers using <a href="https://developers.cloudflare.com/logs"><u>Cloudflare Logs</u></a>. During the roughly 3.5 hours that these services were impacted, about 55% of the logs we normally send to customers were not sent and were lost. We’re very sorry this happened, and we are working to ensure that a similar issue doesn’t happen again.</p><p>This blog post explains what happened and what we’re doing to prevent recurrences. Also, the systems involved and the particular class of failure we experienced will hopefully be of interest to engineering teams beyond those specifically using these products.</p><p>Failures within systems at scale are inevitable, and it’s essential that subsystems protect themselves from failures in other parts of the larger system to prevent cascades. In this case, a misconfiguration in one part of the system caused a cascading overload in another part of the system, which was itself misconfigured. Had that second system been configured correctly, the loss of logs would have been prevented.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>Cloudflare’s network is a globally distributed system enabling and supporting a wide variety of services. Every part of this system generates event logs which contain detailed metadata about what’s happening with our systems around the world. For example, an event log is generated for every request to Cloudflare’s CDN. <a href="https://developers.cloudflare.com/logs"><u>Cloudflare Logs</u></a> makes these event logs available to customers, who use them in a number of ways, including compliance, observability, and accounting.</p><p>On a typical day, Cloudflare sends about 4.5 trillion individual event logs to customers. Although this represents less than 10% of the over 50 trillion total customer event logs processed, it presents unique challenges of scale when building a reliable and fault-tolerant system.</p>
    <div>
      <h2>System architecture</h2>
      <a href="#system-architecture">
        
      </a>
    </div>
    <p>Cloudflare’s network is composed of tens of thousands of individual servers, network hardware components, and specialized software programs located in over 330 cities around the world. Although Cloudflare’s <a href="https://developers.cloudflare.com/logs/edge-log-delivery/"><u>Edge Log Delivery</u></a> product can send customers their event logs directly from each server, most customers opt not to do this because doing so creates significant complexity and cost at the receiving end.</p><p>By analogy, imagine the postal service ringing your doorbell once for each letter instead of once for each packet of letters. With thousands or millions of letters each second, the number of separate transactions that would entail becomes prohibitive.</p><p>Fortunately, we also offer <a href="https://developers.cloudflare.com/logs/about/"><u>Logpush</u></a>, which collects and pushes logs to customers in more predictable file sizes and which scales automatically with usage. In order to provide this feature, several services work together to collect and push the logs, as illustrated in the diagram below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6pRdzUUrMbwG3AWsPncB3w/75146d1e379ccb126d8cd0210a6c12b8/image2.png" />
          </figure>
    <div>
      <h3>Logfwdr</h3>
      <a href="#logfwdr">
        
      </a>
    </div>
    <p>Logfwdr is an internal service written in Golang that accepts event logs from internal services running across Cloudflare’s global network and forwards them in batches to a service called Logreceiver. Logfwdr handles many different types of event logs, and one of its responsibilities is to determine which event logs should be forwarded and where they should be sent based on the type of event log, which customers it represents, and associated rules about where it should be processed. Configuration is provided to Logfwdr to enable it to make these determinations.</p>
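In code terms, the forwarding decision is essentially a configuration lookup. The sketch below is purely illustrative; the type names and the shape of the configuration are ours, not Logfwdr’s actual internals:

```go
package main

import "fmt"

// ForwardConfig is an illustrative stand-in for the configuration
// Logfwdr consumes: for each dataset, the set of customer IDs whose
// event logs should be forwarded.
type ForwardConfig struct {
	Active map[string]map[string]bool // dataset -> customer IDs
}

// ShouldForward reports whether an event of the given dataset,
// belonging to the given customer, needs to be sent downstream.
// Indexing a missing dataset yields a nil map, which safely reads
// as false for every customer.
func (c *ForwardConfig) ShouldForward(dataset, customer string) bool {
	return c.Active[dataset][customer]
}

func main() {
	cfg := &ForwardConfig{Active: map[string]map[string]bool{
		"http_requests": {"customer-a": true},
	}}
	fmt.Println(cfg.ShouldForward("http_requests", "customer-a")) // true
	fmt.Println(cfg.ShouldForward("http_requests", "customer-b")) // false
}
```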
    <div>
      <h3>Logreceiver</h3>
      <a href="#logreceiver">
        
      </a>
    </div>
    <p>Logreceiver (also written in Golang) accepts the batches of logs from across Cloudflare’s global network and further sorts them depending on the type of event and its purpose. For Cloudflare Logs, Logreceiver demultiplexes the batches into per-customer batches and forwards them to be buffered by Buftee. Currently, Logreceiver is handling about 45 PB (uncompressed) of customer event logs each day.</p>
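Demultiplexing itself is conceptually simple: group each record in a mixed batch by the customer it belongs to. A minimal sketch (the `Event` type and its fields are illustrative, not our real record format, which also involves compression and framing):

```go
package main

import "fmt"

// Event is a minimal stand-in for an event log record.
type Event struct {
	Customer string
	Payload  string
}

// Demux splits a mixed batch, as received from servers across the
// network, into per-customer batches ready to be written to
// per-customer buffers.
func Demux(batch []Event) map[string][]Event {
	out := make(map[string][]Event)
	for _, e := range batch {
		out[e.Customer] = append(out[e.Customer], e)
	}
	return out
}

func main() {
	batch := []Event{{"a", "req1"}, {"b", "req2"}, {"a", "req3"}}
	byCustomer := Demux(batch)
	fmt.Println(len(byCustomer["a"]), len(byCustomer["b"])) // 2 1
}
```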
    <div>
      <h3>Buftee</h3>
      <a href="#buftee">
        
      </a>
    </div>
    <p>It’s common for data pipelines to include a buffer. Producers and consumers of the data might be operating at different cadences, and parts of the pipeline will experience variances in how quickly they can process information. Using a <a href="https://en.wikipedia.org/wiki/Data_buffer"><u>buffer</u></a> makes it easier to manage these situations, and helps to prevent data loss if downstream consumers are broken. It’s also convenient to have a buffer that supports multiple downstream consumers with different cadences (<a href="https://en.wikipedia.org/wiki/Tee_(command)"><u>like the pipe fitting function of a tee</u></a>.)</p><p>At Cloudflare, we use an internal system called Buftee (written in Golang) to support this combined function. Buftee is a highly distributed system which supports a large number of named “buffers”. It supports operating on named “prefixes” (collections of buffers) as well as multiple representations/resolutions of the same time-indexed dataset. Using Buftee makes it possible for Cloudflare to handle extremely high throughput very efficiently.</p><p>For Cloudflare Logs, Buftee provides buffers for each Logpush job, containing 100% of the logs generated by the zone or account referenced by each job. This means that failure to process one customer’s job will not affect progress on another customer’s job. Handling buffers in this way avoids “head of line” blocking and also enables us to encrypt and delete each customer’s data separately if needed.</p><p>Buftee typically handles over 1 million buffers globally. The following is a snapshot of the number of buffers managed by Buftee servers in the period just prior to the incident.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5g4Ev1CX8pGqtpNUY3vZOo/50d8dfa7a1e9c492a537e4822d801625/image5.png" />
          </figure>
    <div>
      <h3>Logpush</h3>
      <a href="#logpush">
        
      </a>
    </div>
    <p>Logpush is a Golang service which reads logs from Buftee buffers and pushes the results in batches to various destinations configured by customers. A batch could end up, for example, as a file in R2. Each job has a unique configuration, and only jobs that are active and configured will be pushed. Currently, we push over 600 million such batches each day.</p>
    <div>
      <h2>What happened</h2>
      <a href="#what-happened">
        
      </a>
    </div>
    <p>On November 14, 2024, we made a change to support an additional <a href="https://developers.cloudflare.com/logs/reference/log-fields/#datasets"><u>dataset</u></a> for Logpush. This required adding a new configuration to be provided to Logfwdr in order for it to know which customers’ logs to forward for this new stream. Every few minutes, a separate system re-generates the configuration used by Logfwdr to decide which logs need to be forwarded. A bug in this system resulted in a blank configuration being provided to Logfwdr.</p><p>This bug essentially informed Logfwdr that <b>no customers</b> had logs configured to be pushed. The team quickly noticed the mistake and reverted the change in under five minutes.</p><p>Unfortunately, this first mistake triggered a second, latent bug in Logfwdr itself. A failsafe introduced in the early days of this feature, when traffic was much lower, was configured to “fail open”. This failsafe was designed to protect against situations in which this specific Logfwdr configuration was unavailable (as in this case) by transmitting events for <b>all customers</b> instead of just those who had configured a Logpush job. The intent was to prevent the loss of logs, at the expense of sending more logs than strictly necessary, when individual hosts were prevented from getting the configuration by intermittent networking errors, for example.</p><p>When this failsafe was first introduced, the potential list of customers was much smaller than it is today. As a result, even a blank configuration lasting less than five minutes produced a massive spike in the number of customers whose logs were sent by Logfwdr.</p><p>Even given this massive overload, our systems would have continued to send logs if not for one additional problem. Remember that Buftee creates a separate buffer for each customer with logs to be pushed. When Logfwdr began to send event logs for all customers, Buftee began to create buffers for each one as those logs arrived, and each buffer requires resources as well as bookkeeping to maintain it. This massive increase, resulting in roughly 40 times more buffers, was not something we had provisioned Buftee clusters to handle. During the incident, Buftee was managing 40 million buffers globally, as shown in the figure below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HPwhkvRxiAQtjVbdVxRUN/a1d4aa174961f6a163e884011c8c18ad/image3.png" />
          </figure><p>A temporary misconfiguration lasting just five minutes created a massive overload that took us several hours to fix and recover from. Because our backstops were not properly configured, the underlying systems became so overloaded that we could not interact with them normally.  A full reset and restart was required.</p>
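To make the failure mode concrete, the fail-open behavior amounted to something like the following. This is a simplified illustration under our own names, not the real Logfwdr code:

```go
package main

import "fmt"

// ActiveCustomers illustrates a "fail open" failsafe: given the set of
// customers with a configured Logpush job and the full customer list,
// return the customers whose logs should be forwarded. When the
// configuration is blank (as during the incident), it forwards for
// everyone rather than dropping logs.
func ActiveCustomers(configured map[string]bool, all []string) []string {
	if len(configured) == 0 {
		// Fail open: safe when the customer base is small,
		// a cascading overload once it has grown many times larger.
		return all
	}
	active := []string{}
	for _, c := range all {
		if configured[c] {
			active = append(active, c)
		}
	}
	return active
}

func main() {
	all := []string{"a", "b", "c"}
	fmt.Println(len(ActiveCustomers(map[string]bool{"a": true}, all))) // 1
	fmt.Println(len(ActiveCustomers(map[string]bool{}, all)))          // 3: blank config, everyone forwarded
}
```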
    <div>
      <h2>Root causes</h2>
      <a href="#root-causes">
        
      </a>
    </div>
    <p>The bug in the Logfwdr configuration system was easy to fix, but it’s the type of bug that was likely to happen at some point.  We had planned for it by designing the original “fail open” behavior.  However, we neglected to regularly test that the broader system was capable of handling a fail open event.</p><p>The bigger failure was that Buftee became unresponsive.  Buftee’s purpose is to be a safeguard against bugs like this one.  A huge increase in the number of buffers is a failure mode that we had predicted, and we had built mechanisms into Buftee to prevent this failure from cascading.  Our failure in this case was that we had not configured these mechanisms.  Had they been configured correctly, Buftee would not have been overwhelmed.</p><p>It’s like having a seatbelt in a car but never fastening it. The seatbelt is there to protect you in case of an accident, but if you don’t actually buckle it up, it’s not going to do its job when you need it. Similarly, while we had the safeguard of Buftee in place, we hadn’t “buckled it up” by configuring the necessary settings. We’re very sorry this happened and are taking steps to prevent a recurrence as described below.</p>
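As a sketch of the principle (not Buftee’s actual mechanism), such a backstop can be as simple as a ceiling on buffer creation, configured to shed load instead of letting the cluster fall over:

```go
package main

import (
	"errors"
	"fmt"
)

// BufferPool sketches the kind of backstop described above: a ceiling
// on the number of buffers, past which new buffer creation is rejected
// instead of exhausting the cluster.
type BufferPool struct {
	maxBuffers int
	buffers    map[string][]string
}

var ErrTooManyBuffers = errors.New("buffer ceiling reached")

func NewBufferPool(max int) *BufferPool {
	return &BufferPool{maxBuffers: max, buffers: make(map[string][]string)}
}

// Write appends a record, creating the named buffer on first use, but
// refuses to create buffers beyond the configured ceiling.
func (p *BufferPool) Write(name, record string) error {
	if _, ok := p.buffers[name]; !ok {
		if len(p.buffers) >= p.maxBuffers {
			return ErrTooManyBuffers // shed load rather than fall over
		}
		p.buffers[name] = nil
	}
	p.buffers[name] = append(p.buffers[name], record)
	return nil
}

func main() {
	pool := NewBufferPool(2)
	fmt.Println(pool.Write("job-1", "log"), pool.Write("job-2", "log")) // both succeed
	fmt.Println(pool.Write("job-3", "log"))                             // rejected at the ceiling
}
```

The unfastened seatbelt in our case was the equivalent of leaving that ceiling unset.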
    <div>
      <h2>Going forward</h2>
      <a href="#going-forward">
        
      </a>
    </div>
    <p>We’re creating alerts to ensure that these particular misconfigurations will be impossible to miss, and we are also addressing the specific bug and the associated tests that triggered this incident.</p><p>Just as importantly, we accept that mistakes and misconfigurations are inevitable. All our systems at Cloudflare need to respond to these predictably and gracefully. Currently, we conduct regular “cut tests” to ensure that these systems will cope with the loss of a datacenter or a network failure. In the future, we’ll also conduct regular “overload tests” to simulate the kind of cascade which happened in this incident to ensure that our production systems will handle them gracefully.</p><p>Logpush is a robust and flexible platform for customers who need to integrate their own logging and monitoring systems with Cloudflare. Different Logpush jobs can be deployed to support multiple destinations or, with filtering, multiple subsets of logs.</p> ]]></content:encoded>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Log Push]]></category>
            <guid isPermaLink="false">3SNSdDbbVziSrdGxDq4AzH</guid>
            <dc:creator>Jamie Herre</dc:creator>
            <dc:creator>Tom Walwyn</dc:creator>
            <dc:creator>Christian Endres</dc:creator>
            <dc:creator>Gabriele Viglianisi</dc:creator>
            <dc:creator>Mik Kocikowski</dc:creator>
            <dc:creator>Rian van der Merwe</dc:creator>
        </item>
        <item>
            <title><![CDATA[Explaining Cloudflare's ABR Analytics]]></title>
            <link>https://blog.cloudflare.com/explaining-cloudflares-abr-analytics/</link>
            <pubDate>Tue, 29 Sep 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s analytics products help customers answer questions about their traffic by analyzing the mind-boggling, ever-increasing number of events (HTTP requests, Workers requests, Spectrum events) logged by Cloudflare products every day.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s analytics products help customers answer questions about their traffic by analyzing the mind-boggling, ever-increasing number of events (HTTP requests, Workers requests, Spectrum events) logged by Cloudflare products every day.  The answers to these questions depend on the point of view of the question being asked, and we’ve come up with a way to exploit this fact to improve the quality and responsiveness of our analytics.</p>
    <div>
      <h3>Useful Accuracy</h3>
      <a href="#useful-accuracy">
        
      </a>
    </div>
    <p>Consider the following questions and answers:</p><p>What is the length of the <a href="https://en.wikipedia.org/wiki/Coastline_paradox">coastline of Great Britain</a>? <a href="https://en.wikipedia.org/wiki/Coastline_paradox">12.4K km</a></p><p>What is the total world population? <a href="https://en.wikipedia.org/wiki/World_population_milestones">7.8B</a></p><p>How many stars are in the Milky Way? <a href="https://en.wikipedia.org/wiki/Milky_Way#Contents">250B</a></p><p>What is the total volume of the Antarctic ice shelf? <a href="https://en.wikipedia.org/wiki/West_Antarctic_Ice_Sheet#General">25.4M km</a><sup>3</sup></p><p>What is the worldwide production of lentils? <a href="https://en.wikipedia.org/wiki/Lentil#Production">6.3M tonnes</a></p><p>How many HTTP requests hit my site in the last week? <a href="https://dash.cloudflare.com/">22.6M</a></p><p>Useful answers do not benefit from being overly exact.  For large quantities, knowing the correct order of magnitude and a few significant digits gives the most useful answer.  At Cloudflare, the difference in traffic between different sites or when a single site is under attack can cross nine <a href="https://en.wikipedia.org/wiki/Orders_of_magnitude_(numbers)">orders of magnitude</a> and, in general, all our traffic follows a <a href="https://en.wikipedia.org/wiki/Pareto_distribution">Pareto distribution</a>, meaning that what’s appropriate for one site or one moment in time might not work for another.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sJsqh0fC8scFltxrCftPD/4ecb77ab450979eb6bc600b206358ffb/abr_pareto2.png" />
            
            </figure><p>Because of this distribution, a query that scans a few hundred records for one customer will need to scan billions for another.  A report that needs to load a handful of rows under normal operation might need to load millions when a site is under attack.</p><p>To get a sense of the relative difference of each of these numbers, remember “<a href="https://www.youtube.com/watch?v=0fKBhvDjuy0">Powers of Ten</a>”, an amazing visualization that Ray and Charles Eames produced in 1977.  Notice that the scale of an image determines what resolution is practical for recording and displaying it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/pkc5ucF10g2Y3IsffEsTS/a56468a902ee6e5b17d6bb96ecc5d0d1/pasted-image-0.png" />
            
            </figure>
    <div>
      <h3>Using ABR to determine resolution</h3>
      <a href="#using-abr-to-determine-resolution">
        
      </a>
    </div>
    <p>This basic fact informed our design and implementation of <b>ABR</b> for Cloudflare analytics.  ABR stands for “<a href="https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming">Adaptive Bit Rate</a>”.  The name is borrowed from the technique used in video streaming, such as Cloudflare’s own <a href="https://www.cloudflare.com/products/stream-delivery/">Stream Delivery</a>.  In those cases, the server will select the best resolution for a <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/">video stream</a> to match your client and network connection.</p><p>In our case, every analytics query that supports ABR will be calculated at a resolution matching the query.  For example, if you want to know which country generated the most firewall events in the past week, the system might opt to use a lower resolution version of the firewall data than if you had opted to look at the last hour. The lower resolution version will provide the same answer but take less time and fewer resources.  By using multiple, different resolutions of the same data, our analytics can provide consistent response times and a better user experience.</p><p>You might be aware that we <a href="/http-analytics-for-6m-requests-per-second-using-clickhouse/">use a columnar store called ClickHouse</a> to store and process our analytics data.  When using ABR with ClickHouse, we write the same data at multiple resolutions into separate tables.  Usually, we cover seven orders of magnitude – from 100% to 0.0001% of the original events.  We wind up using an additional 12% of disk storage but enable very fast ad hoc queries on the reduced resolution tables.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2HKrBQ950HetBHxqHDgHUl/95051dc10f00da3e1c8cafee0d26084a/abr_storage--1-.png" />
            
            </figure>
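To illustrate the idea, here is one way resolution selection could work: pick the most heavily sampled table whose sample is still expected to contain enough rows to answer accurately. The table names and the minimum-row heuristic are hypothetical; this is a sketch of the technique, not our production query planner:

```go
package main

import "fmt"

// Resolution pairs a hypothetical table name with the fraction of
// original events it stores.
type Resolution struct {
	Table string
	Rate  float64
}

// Seven resolutions spanning 100% down to 0.0001% of events.
var resolutions = []Resolution{
	{"events_1", 1}, {"events_1e1", 0.1}, {"events_1e2", 0.01},
	{"events_1e3", 0.001}, {"events_1e4", 0.0001},
	{"events_1e5", 0.00001}, {"events_1e6", 0.000001},
}

// Pick chooses the lowest-resolution table whose sample is still
// expected to contain at least minRows matching rows, keeping queries
// fast without giving up useful accuracy.
func Pick(estimatedRows, minRows float64) Resolution {
	for i := len(resolutions) - 1; i >= 0; i-- {
		if estimatedRows*resolutions[i].Rate >= minRows {
			return resolutions[i]
		}
	}
	return resolutions[0] // small query: use the full-resolution table
}

func main() {
	fmt.Println(Pick(1e9, 5000).Table) // a heavily sampled table for a huge site
	fmt.Println(Pick(2e4, 5000).Table) // the full-resolution table for a small one
}
```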
    <div>
      <h3>Aggregations and Rollups</h3>
      <a href="#aggregations-and-rollups">
        
      </a>
    </div>
    <p>The ABR technique facilitates aggregations by making compact estimates of every dimension.  Another way to achieve the same ends is with a system that computes “rollups”.  Rollups save space by computing either complete or partial aggregations of the data as it arrives.  </p><p>For example, suppose we wanted to count a total number of lentils. (Lentils are legumes and among the oldest and most widely cultivated crops.  They are a staple food in many parts of the world.)  We could just count each lentil as it passed through the processing system. Of course, because there are a lot of lentils, that system is distributed – meaning that there are hundreds of separate machines.  Therefore we’ll actually have hundreds of separate counters.</p><p>Also, we’ll want to include more information than just the count, so we’ll also include the weight of each lentil and maybe 10 or 20 other attributes. And of course, we don’t want just a total for each attribute, but we’ll want to be able to break it down by color, origin, distributor and many other things, and also we’ll want to break these down by slices of time.</p><p>In the end, we’ll have tens of thousands or possibly millions of aggregations to be tabulated and saved every minute.  These aggregations are expensive to compute, especially when using aggregations more complicated than simple counters and sums.  They also destroy some information.  For example, once we’ve processed all the lentils through the rollups, we can’t say for sure that we’ve counted them all, and most importantly, whichever attributes we neglected to aggregate are unavailable.</p><p>The number we’re counting, 6.3M tonnes, only includes two significant digits, which can easily be achieved by counting a sample.  Most of the rollup computations used on each lentil (on the order of 10<sup>13</sup> lentils to account for 6.3M tonnes) are wasted.</p>
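The sampling alternative is almost trivially cheap by comparison: count what survives a fixed-rate sample and divide by the rate. A minimal sketch:

```go
package main

import "fmt"

// EstimateTotal scales a count taken over a sampled stream back up to
// an estimate of the full total: keep roughly 1 in every 1/rate items,
// count what you kept, and divide by the rate.
func EstimateTotal(sampleCount int, rate float64) float64 {
	return float64(sampleCount) / rate
}

func main() {
	// 630 lentils seen in a 0.01% sample suggest roughly 6.3 million in
	// total, already the two significant digits the answer needs.
	fmt.Printf("%.1fM\n", EstimateTotal(630, 0.0001)/1e6)
}
```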
    <div>
      <h3>Other forms of aggregations</h3>
      <a href="#other-forms-of-aggregations">
        
      </a>
    </div>
    <p>So far, we’ve discussed ABR and its application to aggregations, but we’ve only given examples involving “counts” and “sums”.  There are other, more complex forms of aggregations we use quite heavily.  Two examples are “topK” and “count-distinct”.</p><p>A “topK” aggregation attempts to show the K most frequent items in a set.  For example, the most frequent IP address, or country.  To compute topK, just count the frequency of each item in the set and return the K items with the highest frequencies. Under ABR, we compute topK based on the set found in the matching resolution sample. Using a sample makes this computation a lot faster and less complex, but there are problems.</p><p>The estimate of topK derived from a sample is biased and dependent on the distribution of the underlying data. This can result in overestimating the significance of elements in the set as compared to their frequency in the full set. In practice, this effect is only noticeable when the cardinality of the set is very high, so you’re unlikely to see it on a Cloudflare dashboard.  If your site has a lot of traffic and you’re looking at the Top K URLs or browser types, there will be no difference visible at different resolutions.  Also keep in mind that as long as we’re estimating the “proportion” of the element in the set and the set is large, the results will be quite accurate.</p><p>The other fascinating aggregation we support is known as “<a href="https://en.wikipedia.org/wiki/Count-distinct_problem">count-distinct</a>”, or number of uniques.  In this case we want to know the number of unique values in a set.  For example, how many unique cache keys have been used.  We can safely say that a uniform random sample of the set cannot be used to estimate this number.  However, we do have a solution.</p><p>We can generate another, alternate sample based on the value in question.  
For example, instead of taking a random sample of all requests, we take a random sample of IP addresses.  This is sometimes called distinct reservoir sampling, and it allows us to estimate the true number of distinct IPs based on the cardinality of the sampled set. Again, there are techniques available to improve these estimates, and we’ll be implementing some of those.</p>
    <div>
      <h3>ABR improves resilience and scalability</h3>
      <a href="#abr-improves-resilience-and-scalability">
        
      </a>
    </div>
    <p>Using ABR saves us resources.  Even better, it allows us to query all the attributes in the original data, not just those included in rollups.  And even better, it allows us to compare our assumptions against different sample intervals in separate tables to verify that the system is working correctly, because the original events are preserved.</p><p>However, the greatest benefits of employing ABR are the ones that aren’t directly visible. Even under ideal conditions, a large distributed system such as Cloudflare’s data pipeline is subject to high <a href="https://research.google/pubs/pub40801/">tail latency</a>.  This occurs when any single part of the system takes longer than usual for any of a long list of reasons.  In these cases, the ABR system will adapt to provide the best results available at that moment in time.</p><p>For example, compare this chart showing Cache Performance for a site under attack with the same chart generated a moment later while we simulate a failure of some of the servers in our cluster.  In the days before ABR, your Cloudflare dashboard would fail to load in this scenario.  Now, with ABR analytics, you won’t see significant degradation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ueUzoxEaJ1GPvTOLkBX8v/4d511659d5593ab7565e2345ebaefd48/24_request_traffic_normal.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CRayU1gtFi6dv835RjKED/6e2eaf5a4f2a6263206cb1be01e9db6d/24_request_traffic_degraded.png" />
            
            </figure><p>Stretching the analogy to ABR in video streaming, we want you to be able to enjoy your analytics dashboard without being bothered by issues related to faulty servers, or network latency, or long running queries.  With ABR you can get appropriate answers to your questions reliably and within a predictable amount of time.</p><p>In the coming months, we’re going to be releasing a variety of new dashboards and analytics products based on this simple but profound technology.  Watch your Cloudflare dashboard for increasingly useful and interactive analytics.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">1Oos7Way2Rihk8tzUaa44d</guid>
            <dc:creator>Jamie Herre</dc:creator>
        </item>
    </channel>
</rss>