
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 09:39:26 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Performance measurements… and the people who love them]]></title>
            <link>https://blog.cloudflare.com/loving-performance-measurements/</link>
            <pubDate>Tue, 20 May 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Developers have a gut-felt understanding of performance, but that intuition breaks down when systems reach Cloudflare’s scale. ]]></description>
            <content:encoded><![CDATA[ <p>⚠️ WARNING ⚠️ This blog post contains graphic depictions of probability. Reader discretion is advised.</p><p>Measuring performance is tricky. You have to think about accuracy and precision. Are your sampling rates high enough? Could they be too high?? How much metadata does each recording need??? Even after all that, all you have is raw data. For all this raw performance information to be useful, it eventually has to be aggregated and communicated. Whether it's a dashboard, a customer report, or a paged alert, performance measurements are only useful if someone can see and understand them.</p><p>This post is a collection of things I've learned working on customer performance escalations within Cloudflare and analyzing existing tools (both internal and commercial) that we use when evaluating our own performance. A lot of this information also comes from Gil Tene's talk, <a href="https://youtu.be/lJ8ydIuPFeU"><u>How NOT to Measure Latency</u></a>. You should definitely watch that too (but maybe after reading this, so you don't spoil the ending). I was surprised by my own blind spots and by which assumptions turned out to be wrong, even though they seemed "obviously true" at the start. I expect I am not alone in this regard. For that reason, this journey starts by establishing fundamental definitions and ends with some new tools and techniques we will be sharing, as well as the surprising results those tools uncovered.</p>
    <div>
      <h2>Check your verbiage</h2>
      <a href="#check-your-verbiage">
        
      </a>
    </div>
    <p>So ... what is performance? Alright, let's start with something easy: definitions. "Performance" is not a very precise term because it gets used in too many contexts. Most of us as nerds and engineers have a gut understanding of what it means, without a real definition. We can't <i>really</i> measure it because how "good" something is depends on what makes that thing good. "Latency" is better ... but not as much as you might think. Latency does at least have an implicit time unit, so we <i>can</i> measure it. But ... <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">what is latency</a>? There are lots of good, specific examples of measurements of latency, but we are going to use a general definition. Someone starts something, and then it finishes — the elapsed time between is the latency.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1r4blwH5oeloUdXoizuLB4/f58014b1b4b3715f54400e6b03c60ea7/image7.png" />
          </figure><p>This seems a bit reductive, but it’s a surprisingly useful definition because it gives us a key insight. This fundamental definition of latency is based around the client's perspective. Indeed, when we look at our internal measurements of latency for health checks and monitoring, they all have this one-sided caller/callee relationship. There is the latency of the caching layer from the point of view of the ingress proxy. There’s the latency of the origin from the cache’s point of view. Each component can measure the latency of its upstream counterparts, but not the other way around. </p><p>This one-sided nature of latency observation is a real problem for us because Cloudflare <i>only</i> exists on the server side. This makes all of our internal measurements of latency purely estimates. Even if we did have full visibility into a client’s request timing, the start-to-finish latency of a request to Cloudflare isn’t a great measure of Cloudflare’s latency. The process of making an HTTP request has lots of steps, only a subset of which are affected by us. Things like DNS lookup time, local computation for TLS, and resource contention <i>do</i> affect the client’s experience of latency, but they only serve as sources of noise when we are considering our own performance.</p><p>There is a very useful and common metric that is used to measure web requests, and I’m sure lots of you have been screaming it in your brains from the second you read the title of this post. ✨Time to first byte✨. Clearly this is the answer, right?! But ... what is “Time to first byte”?</p>
    <div>
      <h2>TTFB mine</h2>
      <a href="#ttfb-mine">
        
      </a>
    </div>
    <p>Time to first byte (TTFB) on its face is simple. The name implies that it's the time it takes (on the client's side) to receive the first byte of the response from the server, but unfortunately, that only describes when the timer should end. It doesn't say when the timer should start. This ambiguity is just one factor that leads to inconsistencies when trying to compare TTFB across different measurement platforms ... or even across a single platform because there is no <i>one</i> definition of TTFB. Similar to “performance”, it is used in too many places to have a single definition. That being said, TTFB is a very useful concept, so in order to measure it and report it in an unambiguous way, we need to pick a definition that’s already in use.</p><p>We have mentioned TTFB in other blog posts, but <a href="https://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/"><u>this one</u></a> sums up the problem best with “Time to first byte isn’t what it used to be.” You should read that article too, but the gist is that one popular TTFB definition used by browsers was changed in a confusing way with the introduction of <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/103"><u>early hints</u></a> in June 2022. That post and <a href="https://blog.cloudflare.com/tag/ttfb/"><u>others</u></a> make the point that while TTFB is useful, it isn’t the best direct measurement for web performance. Later on in this post we will derive why that’s the case.</p><p>One common place <i>we</i> see TTFB used is our customers’ analysis comparing Cloudflare's performance to our competitors through <a href="https://www.catchpoint.com/"><u>Catchpoint</u></a>. Customers, as you might imagine, have a vested interest in measuring our latency, as it affects theirs. Catchpoint provides several tools built on their global Internet probe network for measuring HTTP request latency (among other things) and visualizing it in their web interface. 
In an effort to align better with our customers, we decided to adopt Catchpoint’s terminology for talking about latency, both internally and externally.</p>
    <div>
      <h2>Catchpoint catch-up</h2>
      <a href="#catchpoint-catch-up">
        
      </a>
    </div>
    <p>While Catchpoint makes metrics like TTFB easy to plot over time, the visualization tool doesn't give a definition of what TTFB is. After going through all of their technical blog posts and combing through thousands of lines of raw data, however, we were able to assemble functional definitions for TTFB and other composite metrics. This was an important step: these metrics are how our customers view our performance, so we all need to understand exactly what they signify! The final report for this is internal (and long and dry), so in this post, I'll give you the highlights in the form of colorful diagrams, starting with this one.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5bB3HmSrIIhQ2AzVpheJWa/8d2b73f3f2f0602217daaf7fea847e11/image6.png" />
          </figure><p>This diagram shows our customers' most commonly viewed client metrics on Catchpoint and how they fit together into the processing of a request from the server side. Notice that some are directly measured, and some are calculated based on the direct measurements. Right in the middle is TTFB, which Catchpoint calculates as the sum of the DNS, Connect, TLS, and Wait times. It’s worth noting again that this is not <i>the</i> definition of TTFB, this is just Catchpoint’s definition, and now ours.</p><p>This breakdown of HTTPS phases is not the only one commonly used. Browsers themselves have a standard for measuring the stages of a request. The diagram below shows how most browsers are reporting request metrics. Luckily (and maybe unsurprisingly) these phases match Catchpoint's very closely.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ZouyuBQV7XgER2kqhMy8r/04f750eef44ba12bb6915a06eac532ca/image1.png" />
          </figure><p>There are some differences beyond the inclusion of things like <a href="https://html.spec.whatwg.org/#applicationcache"><u>AppCache</u></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/Redirections"><u>Redirects</u></a> (which are not directly impacted by Cloudflare's latency). Browser timing metrics are based on timestamps instead of durations. The diagram subtly calls this out with gaps between the different phases indicating that there is the potential for the computer running the browser to do things that are not part of any phase. We can line up these timestamps with Catchpoint's metrics like so:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4TvwOuTxWvMBxKGQQTUfZc/a8105d77725a9fa0d3e5bf6a115a13a5/Screenshot_2025-05-15_at_11.31.46.png" />
          </figure><p>Now that we, our customers, and our browsers (with data coming from <a href="https://en.wikipedia.org/wiki/Real_user_monitoring"><u>RUM</u></a>) have a common and well-defined language to talk about the phases of a request, we can start to measure, visualize, and compare the components that make up the network latency of a request. </p>
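<p>As a concrete illustration, the browser timestamps above can be converted into Catchpoint-style durations with a few subtractions. The sketch below uses field names from the W3C Navigation Timing standard; the timestamp values are invented for illustration, and the exact phase boundaries (e.g. where "Connect" ends and "TLS" begins) are a reasonable assumption rather than Catchpoint's exact implementation.</p>

```python
# Hypothetical Navigation Timing style timestamps, in milliseconds
# relative to the start of navigation (values invented for illustration).
timing = {
    "domainLookupStart": 5.0,
    "domainLookupEnd": 25.0,        # DNS resolution finished
    "connectStart": 25.0,
    "secureConnectionStart": 55.0,  # TLS handshake begins
    "connectEnd": 95.0,             # TCP and TLS both complete
    "requestStart": 95.0,
    "responseStart": 180.0,         # first byte of the response arrives
}

# Convert timestamps into phase durations.
dns = timing["domainLookupEnd"] - timing["domainLookupStart"]
connect = timing["secureConnectionStart"] - timing["connectStart"]
tls = timing["connectEnd"] - timing["secureConnectionStart"]
wait = timing["responseStart"] - timing["requestStart"]

# Catchpoint-style composite TTFB: DNS + Connect + TLS + Wait.
ttfb = dns + connect + tls + wait
print(f"DNS={dns} Connect={connect} TLS={tls} Wait={wait} TTFB={ttfb}")
```
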
    <div>
      <h2>Visual basics</h2>
      <a href="#visual-basics">
        
      </a>
    </div>
    <p>Now that we have defined what our key values for latency are, we can record numbers and put them in a chart and watch them roll by ... except not directly. In most cases, the systems we use to record the data actively prevent us from seeing the recorded data in its raw form. Tools like <a href="https://prometheus.io/"><u>Prometheus</u></a> are designed to collect pre-aggregated data, not individual samples, and for a good reason. Storing every recorded metric (even compacted) would be an enormous amount of data. Even worse, the data loses its value exponentially over time, since the most recent data is the most actionable.</p><p>The unavoidable conclusion is that some aggregation has to be done before performance data can be visualized. In most cases, the aggregation means looking at a series of windowed percentiles over time. The most common are 50th percentile (median), 75th, 90th, and 99th if you're really lucky. Here is an example of a latency visualization from one of our own internal dashboards.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/lvjAR41mTJf2d5Vdg5SwT/19ff931587790b1fb7fbcc317ab83a5e/image8.png" />
          </figure><p>It clearly shows a spike in latency around 14:40 UTC. Was it an incident? The p99 jumped by 1,200% (500 ms to 6,500 ms) for multiple minutes, while the p50 jumped by more than 13,500% (4.4 ms to 600 ms). It is a clear signal, so something must have happened, but what was it? Let me keep you in suspense for a second while we talk about statistics and probability.</p>
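<p>Before unpacking that spike, it's worth pinning down what those dashboard lines actually are. Here is a minimal sketch of the windowed percentile reduction behind such a chart, assuming (unrealistically, at Cloudflare scale) that raw per-request samples are available:</p>

```python
def percentile(sorted_samples, p):
    """Nearest-rank percentile of an already-sorted list (p in 0-100)."""
    idx = min(len(sorted_samples) - 1, int(p / 100 * len(sorted_samples)))
    return sorted_samples[idx]

def windowed_percentiles(samples, window_s, percentiles=(50, 75, 90, 99)):
    """Group (timestamp, latency) samples into fixed time windows and
    reduce each window to a handful of percentile bands -- the lossy
    summary a dashboard actually plots."""
    windows = {}
    for ts, latency in samples:
        windows.setdefault(int(ts // window_s), []).append(latency)
    series = {}
    for w, values in sorted(windows.items()):
        values.sort()
        series[w * window_s] = {p: percentile(values, p) for p in percentiles}
    return series

# Two minutes of fake per-second latency samples, reduced to 60 s windows.
samples = [(t, 10.0 + (t % 7)) for t in range(120)]
series = windowed_percentiles(samples, window_s=60)
```

Each window's thousands (here, dozens) of samples collapse into just four numbers, which is exactly the compression the next section examines.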
    <div>
      <h2>Uncooked math</h2>
      <a href="#uncooked-math">
        
      </a>
    </div>
    <p>Let me start with a quote from my dear, close, personal friend <a href="https://www.youtube.com/watch?v=xV4rLfpidIk"><u>@ThePrimeagen</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/I8VbrcSjVSKY1i7fbVEMl/8108e25e78c1ee5356bbd080c467c056/Screenshot_2025-05-15_at_11.33.40.png" />
          </figure><p>It's a good reminder that while statistics is a great tool for providing a simplified and generalized representation of a complex system, it can also obscure important subtleties of that system. A good way to think of statistical modeling is like lossy compression. In the latency visualization above (which is a plot of TTFB over time), we are compressing the entire spectrum of latency metrics into 4 percentile bands, and because we are only considering up to the 99th percentile, there's an entire 1% of samples left over that we are ignoring! </p><p>"What?" I hear you asking. "P99 is already well into perfection territory. We're not trying to be perfectionists. Maybe we should get our p50s down first". Let's put things in perspective. This zone (<a href="http://www.cloudflare.com/"><u>www.cloudflare.com</u></a>) is getting about 30,000 req/s and the 99th percentile latency is 500 ms. (Here we are defining latency as “Edge TTFB”, a server-side approximation of our now official definition.) So there are 300 req/s that are taking longer than half a second to complete, and that's just the portion of the request that <i>we</i> can see. How much worse than 500 ms are those requests in the top 1%? If we look at the 100th percentile (the max), we get a much different vibe from our Edge TTFB plot.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NDvJObDLjy5D8bKIEhsjS/10f1c40940ba41aae308100c7f374836/image12.png" />
          </figure><p>Viewed like this, the spike in latency no longer looks so remarkable. Without seeing more of the picture, we could easily believe something was wrong when in reality, even if something is wrong, it is not localized to that moment. In this case, it's like we are using our own statistics to lie to ourselves. </p>
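<p>A tiny numeric sketch (with invented numbers) shows how easily this happens: a distribution where half a percent of requests are pathologically slow looks identical at p50 and p99, while the max tells a very different story.</p>

```python
def pctl(sorted_xs, p):
    """Nearest-rank percentile (p in 0-100) of an already-sorted list."""
    return sorted_xs[min(len(sorted_xs) - 1, int(p / 100 * len(sorted_xs)))]

# 1,000 synthetic requests: 99.5% take 50 ms, 0.5% take 5 seconds.
samples = sorted([50.0] * 995 + [5000.0] * 5)

p50 = pctl(samples, 50)    # looks healthy
p99 = pctl(samples, 99)    # still looks healthy
p100 = pctl(samples, 100)  # the max exposes the hidden tail
print(p50, p99, p100)
```
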
    <div>
      <h2>The top 1% of requests have 99% of the latency</h2>
      <a href="#the-top-1-of-requests-have-99-of-the-latency">
        
      </a>
    </div>
    <p>Maybe you're still not convinced. It feels more intuitive to focus on the median because the latency experienced by 50 out of 100 people seems more important than that of 1 in 100. I would argue that is a totally true statement, but notice I said "people" and not "requests." A person visiting a website is not likely to be doing it one request at a time.</p><p>Taking <a href="http://www.cloudflare.com/"><u>www.cloudflare.com</u></a> as an example again, when a user opens that page, their browser makes more than <b>70</b> requests. It sounds big, but in the world of user-facing websites, it’s not that bad. In contrast, <a href="http://www.amazon.com/"><u>www.amazon.com</u></a> issues more than <b>400</b> requests! It's worth noting that not all those requests need to complete before a web page or application becomes usable. That's why more advanced and browser-focused metrics exist, but I will leave a discussion of those for later blog posts. I am more interested in how making that many requests changes the probability calculations for expected latency on a per-user basis. </p><p>Here's a brief primer on combining probabilities that covers everything you need to know to understand this section.</p><ul><li><p>The probability of two independent things both happening is the probability of the first multiplied by the probability of the second. $$P(X\cap Y )=P(X) \times P (Y)$$</p></li><li><p>The probability of a sample falling at or below the $X^{th}$ percentile is $X\%$. $$P(pX) = X\%$$</p></li></ul><p>Let's define $P( pX_{N} )$ as the probability that someone on a website with $N$ requests experiences no latencies at or above the $X^{th}$ percentile. For example, $P(p50_{2})$ would be the probability of getting no latencies greater than the median on a page with 2 requests. This is equivalent to the probability of one request having a latency less than the $p50$ and the other request having a latency less than the $p50$. 
We can use the identities above. </p><p>$$\begin{align}
P( p50_{2}) &amp;= P\left ( p50 \cap p50 \right ) \\
   &amp;= P( p50) \times P\left ( p50 \right ) \\
   &amp;= 50\%^{2} \\
   &amp;= 25\%
\end{align}$$</p><p>We can generalize this for any percentile and any number of requests. $$P( pX_{N}) = X\%^{N}$$</p><p>For <a href="http://www.cloudflare.com/"><u>www.cloudflare.com</u></a> and its 70ish requests, the percentage of visitors that won't experience a latency above the median is </p><p>$$\begin{align} 
P( p50_{70}) &amp;= 50\%^{70} \\
  &amp;\approx 0.00000000000000000008\%
\end{align}$$</p><p>This vanishingly small number should make you question why we would value the $p50$ latency so highly at all when effectively no one experiences it as their worst case latency.</p><p>So now the question is, what request latency percentile <i>should</i> we be looking at? Let's go back to the statement at the beginning of this section. What does the median person experience on <a href="http://www.cloudflare.com./"><u>www.cloudflare.com</u></a>? We can use a little algebra to solve for that.</p><p>$$\begin{align} 
P( pX_{70}) &amp;= 50\% \\
X\%^{70}  &amp;= 50\% \\
X\% &amp;= e^{ \frac{\ln\left ( 50\% \right )}{70}} \\
X\% &amp;\approx  99\%
\end{align}$$</p><p>This seems a little too perfect, but I am not making this up. For <a href="http://www.cloudflare.com/"><u>www.cloudflare.com</u></a>, if you want to capture a value that's representative of what the median user can expect, you need to look at $p99$ request latency. Extending this even further, if you want a value that's representative of what 99% of users will experience, you need to look at the <b>99.99th</b> <b>percentile</b>!</p>
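<p>The arithmetic above is easy to check in a few lines. This sketch reproduces the numbers in this section (with N = 70 requests, and assuming independent request latencies):</p>

```python
import math

def p_all_below(pctl, n_requests):
    """Probability that none of n independent requests exceeds the
    per-request `pctl`th percentile: (X/100) ** N."""
    return (pctl / 100) ** n_requests

# Chance that a 70-request page view never sees a request above the median.
p = p_all_below(50, 70)  # effectively zero

# Which per-request percentile does the *median* page view top out at?
# Solve (X/100) ** 70 = 0.5 for X.
x_median_user = 100 * math.exp(math.log(0.5) / 70)

# And the per-request percentile that bounds 99% of page views.
x_p99_user = 100 * math.exp(math.log(0.99) / 70)
print(p, round(x_median_user, 2), round(x_p99_user, 2))
```
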
    <div>
      <h2>Spherical latency in a vacuum</h2>
      <a href="#spherical-latency-in-a-vacuum">
        
      </a>
    </div>
    <p>Okay, this is where we bring everything together, so stay with me. So far, we have only talked about measuring the performance of a single system. This gives us absolute numbers to look at internally for monitoring, but if you’ll recall, the goal of this post was to be able to clearly communicate about performance outside the company. Often this communication takes the form of comparing Cloudflare’s performance against other providers. How are these comparisons done? By plotting a percentile request "latency" over time and eyeballing the difference.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/x9j5kstMS1kXdsb1PaIbu/837398e0da4758743155595f4f570340/image2.png" />
          </figure><p>With everything we have discussed in this post, it seems like we can devise a better method for doing this comparison. We saw how exposing more of the percentile spectrum can provide a new perspective on existing data, and how impactful higher percentile statistics can be when looking at a more complete user experience. Let me close this post with an example of how putting those two concepts together yields some intriguing results.</p>
    <div>
      <h2>One last thing</h2>
      <a href="#one-last-thing">
        
      </a>
    </div>
    <p>Below is a comparison of the latency (defined here as the sum of the TLS, Connect, and Wait times or the equivalent of TTFB - DNS lookup time) for the customer when viewed through Cloudflare and a competing provider. This is the same data represented in the chart immediately above (containing 90,000 samples for each provider), just in a different form called a <a href="https://en.wikipedia.org/wiki/Cumulative_distribution_function"><u>CDF plot</u></a>, which is one of a few ways we are making it easier to visualize the entire percentile range. The chart shows the percentiles on the y-axis and latency measurements on the x-axis, so to see the latency value for a given percentile, you go up to the percentile you want and then over to the curve. Interpreting these charts is as easy as finding which curve is farther to the left for any given percentile. That curve will have the lower latency.</p>
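<p>Reading a CDF doesn't have to be done by eye. Given raw samples (synthetic ones below, invented to mimic the shape of this comparison), looking up and comparing the two curves at any percentile is a simple lookup:</p>

```python
def quantile(samples, p):
    """Latency at percentile p (0-100), nearest-rank on sorted data."""
    xs = sorted(samples)
    return xs[min(len(xs) - 1, int(p / 100 * len(xs)))]

# Synthetic latencies (ms): "cloudflare" is steady; "other" is faster
# for most requests but has a heavy tail (1% of requests take 500 ms).
cloudflare = [30.0 + (i % 50) for i in range(1000)]
other = [20.0 + (i % 50) for i in range(990)] + [500.0] * 10

for p in (50, 90, 99):
    cf, ot = quantile(cloudflare, p), quantile(other, p)
    print(f"p{p}: cloudflare={cf} ms, other={ot} ms")
```

With these invented distributions, "other" wins at p50 and p90 while "cloudflare" wins at p99, mirroring the crossover visible at the top of the chart below.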
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53sRk6UCoflU2bGcXypgEQ/f435bbdf43e1646cf2afb56d2aca26be/image4.png" />
          </figure><p>It's pretty clear that for nearly the entire percentile range, the other provider has the lower latency by as much as 30ms. That is, until you get to the very top of the chart. There's a little bit of blue that's above (and therefore to the left) of the green. In order to see what's going on there more clearly, we can use a different kind of visualization. This one is called a <a href="https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot"><u>QQ-Plot</u></a>, or quantile-quantile plot. This shows the same information as the CDF plot, but now each point on the curve represents a specific quantile, and the 2 axes are the latency values of the two providers at that percentile.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jeYkDomZjnqhrCIIUJBqj/ebb4533c6982b0f8b9f5f491aa1549fb/image9.png" />
          </figure><p>This chart looks complicated, but interpreting it is similar to the CDF plot. The blue line is a dividing marker that shows where the latency of both providers is equal. Points below the line indicate percentiles where the other provider has a lower latency than Cloudflare, and points above the line indicate percentiles where Cloudflare is faster. We see again that for most of the percentile range, the other provider is faster, but for percentiles above 99, Cloudflare is significantly faster. </p><p>This is not so compelling by itself, but what if we take into account the number of requests this page issues ... which is over 180. Using the same math from above, and treating only <i>half</i> the requests as required for the page to be considered loaded, yields this new effective QQ plot.</p>
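<p>The remapping behind this "effective" view can be sketched directly. Assuming independent request latencies, a user at experienced percentile p has their worst required request at per-request percentile p<sup>1/N</sup>; with N = 90 (half of the ~180 requests), even the median user's experience is governed by the far tail of the request distribution:</p>

```python
def request_pctl_for_user_pctl(p_user, n_requests=90):
    """Per-request percentile (as a 0-1 fraction) whose worst case over
    n independent requests lands at experienced percentile p_user."""
    return p_user ** (1 / n_requests)

for p_user in (0.50, 0.90, 0.99):
    p_req = request_pctl_for_user_pctl(p_user)
    print(f"experienced p{p_user:.2f} -> per-request p{p_req:.4f}")
```

In other words, comparing providers at the experienced median means comparing their per-request p99.2, which is why tail behaviour above p99 dominates the expected-latency table that follows.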
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/S0lLIZfVyVM7KjWUawcNg/967417729939f454bacd0d4c12b0c0e2/image3.png" />
          </figure><p>Taking multiple requests into account, we see that the median latency is close to even for both Cloudflare and the other provider, but the stories above and below that point are very different. A user has about an even chance of an experience where Cloudflare is significantly faster and one where Cloudflare is slightly slower than the other provider. We can show the impact of this shift in perspective more directly by calculating the <a href="https://en.wikipedia.org/wiki/Expected_value#Arbitrary_real-valued_random_variables"><u>expected value</u></a> for request and experienced latency.</p><table><tr><td><p><b>Latency Kind</b></p></td><td><p><b>Cloudflare </b>(ms)</p></td><td><p><b>Other CDN</b> (ms)</p></td><td><p><b>Difference</b> (ms)</p></td></tr><tr><td><p>Expected Request Latency</p></td><td><p>141.9</p></td><td><p>129.9</p></td><td><p><b>+12.0</b></p></td></tr><tr><td><p>Expected Experienced Latency </p><p>Based on 90 Requests </p></td><td><p>207.9</p></td><td><p>281.8</p></td><td><p><b>-71.9</b></p></td></tr></table><p>Shifting the focus from individual request latency to user latency we see that Cloudflare is 70 ms faster than the other provider. This is where our obsession with reliability and tail latency becomes a win for our customers, but without a large volume of raw data, knowledge, and tools, this win would be totally hidden. That is why in the near future we are going to be making this tool and others available to our customers so that we can all get a more accurate and clear picture of our users’ experiences with latency. Keep an eye out for more announcements to come later in 2025.</p> ]]></content:encoded>
            <category><![CDATA[Internet Performance]]></category>
            <category><![CDATA[Latency]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Observability]]></category>
            <category><![CDATA[TTFB]]></category>
            <guid isPermaLink="false">6R3IB3ISH3fXyycnjNPyZC</guid>
            <dc:creator>Kevin Guthrie</dc:creator>
        </item>
        <item>
            <title><![CDATA[Are you measuring what matters? A fresh look at Time To First Byte]]></title>
            <link>https://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/</link>
            <pubDate>Tue, 20 Jun 2023 13:00:59 GMT</pubDate>
            <description><![CDATA[ Time To First Byte (TTFB) is not a good way to measure your website’s performance. In this blog we’ll cover what TTFB is a good indicator of, what it's not great for, and what you should be using instead ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3DDRz6sPcW8kWs2Xw8iDv4/f99afbb10dad72d9d1f28855a71edb49/image1-18.png" />
            
            </figure><p>Today, we’re making the case for why Time To First Byte (TTFB) is not a good metric for evaluating how fast web pages load. There are better metrics out there that give a more accurate representation of how well a server or <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">content delivery network</a> performs for end users. In this blog, we’ll go over the ambiguity of measuring TTFB, touch on more meaningful metrics such as <a href="https://www.cloudflare.com/learning/performance/what-are-core-web-vitals/">Core Web Vitals</a> that should be used instead, and finish on scenarios where TTFB still makes sense to measure.</p><p>Many of our customers ask what the best way would be to evaluate how well a network like ours works. This is a good question! Measuring performance is difficult. It’s easy to simplify the question to “How close is Cloudflare to end users?” The predominant metric that’s been used to measure that is <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round trip time (RTT)</a>. This is the time it takes for one network packet to travel from an end user to Cloudflare and back. We measure this metric and mention it from time to time: Cloudflare has an average RTT of 50 milliseconds for 95% of the Internet-connected population.</p><p>Whilst RTT is a relatively good indicator of the quality of a network, it doesn’t necessarily tell you that much about how good it is at actually delivering websites to end users. For instance, what if the web server is really slow? A user might be very close to the data center that serves the traffic, but if it takes a long time to actually grab the asset from disk and serve it, the result will still be a poor experience.</p><p>This is where TTFB comes in. It measures the time from when a request is sent by an end user until the very first byte of the response is received. This sounds great on paper! 
However, it doesn’t capture how a webpage or web application loads, and what happens <i>after</i> the first byte is received.</p><p>In this blog, we’ll cover what TTFB is a good indicator of, what it's not great for, and what you should be using instead.</p>
    <div>
      <h3>What is TTFB?</h3>
      <a href="#what-is-ttfb">
        
      </a>
    </div>
    <p>TTFB is a metric that reports the duration between sending the request from the client to a server for a given file, and the receipt of the first byte of said file. For example, if you were to download the Cloudflare logo from our website, the TTFB would be how long it took to receive the first byte of that image. Similarly, if you were to measure the TTFB of a request to cloudflare.com, the metric would report how long it took from sending the request to receiving the first byte of the first HTTP response. Not how long it took for the image to be fully visible or for the web page to be loaded in a state that allowed a user to begin using it.</p><p>The simplest answer therefore is to look at the diametrically opposite measurement, Time to Last Byte (TTLB). TTLB, as you’d expect, measures how long it takes until the last byte of data is received from the server. For the Cloudflare logo file this would make sense, as until the image is fully downloaded it's not exactly useful. But what about for a webpage? Do you really need to wait until every single file is fully downloaded, even those images at the bottom of the page you can't immediately see? TTLB is fine for measuring how long it took to download a single file from a CDN / server. However, for multi-faceted traffic, like web pages, it is too conservative, as it doesn’t tell you how long it took for the web page to be <i>usable.</i></p><p>As an analogy, we can look at measuring how long it takes to process an incoming airplane full of passengers. What's important is to understand how long it takes for those passengers to disembark, pass through passport control, collect their baggage, and leave the terminal, assuming no onward journeys. TTFB would measure success as how long it took to get the first passenger off of the airplane. TTLB would measure how long it took the last passenger to leave the terminal, even if this passenger remained in the terminal for hours afterwards due to passport issues or getting lost. 
Neither is a good measure of success for the airline.</p>
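<p>To make the two definitions concrete, here is a minimal, self-contained sketch that measures TTFB and TTLB for a single file against a throwaway local HTTP server (which stands in for the CDN; real measurements would also include DNS lookup, TLS setup, and real network latency):</p>

```python
import http.client
import http.server
import threading
import time

class Handler(http.server.BaseHTTPRequestHandler):
    BODY = b"x" * (1 << 18)  # a 256 KiB "asset"

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(self.BODY)))
        self.end_headers()
        self.wfile.write(self.BODY)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Throwaway local server standing in for the CDN / origin.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def measure(host, port, path="/"):
    start = time.perf_counter()
    conn = http.client.HTTPConnection(host, port)
    conn.request("GET", path)
    resp = conn.getresponse()            # first bytes of the response
    ttfb = time.perf_counter() - start   # "time to first byte"
    resp.read()                          # drain the full body
    ttlb = time.perf_counter() - start   # "time to last byte"
    conn.close()
    return ttfb, ttlb

ttfb, ttlb = measure("127.0.0.1", server.server_port)
server.shutdown()
print(f"TTFB={ttfb * 1000:.1f} ms, TTLB={ttlb * 1000:.1f} ms")
```

By construction, TTLB is always at least TTFB; how much larger it is depends on the asset size and bandwidth, which is exactly why neither number alone describes a multi-asset page load.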
    <div>
      <h3>Why TTFB doesn't make sense</h3>
      <a href="#why-ttfb-doesnt-make-sense">
        
      </a>
    </div>
    <p>TTFB is a widely used metric because it is easy to understand and is a great signal for connection setup time, server time, and network latency. It can help website owners identify when performance issues originate from their server. But is TTFB a good signal for how real users experience the loading speed of a web page in a browser?</p><p>When a web page loads in a browser, the user’s perception of speed isn’t related to the moment the browser first receives bytes of data. It is related to when the user starts to see the page rendering on the screen.</p><p>The loading of a web page in a browser is a very complex process. Almost all of this process happens after TTFB is reported. After the first byte has been received, the browser still has to load the main HTML file. It also has to load fonts, stylesheets, JavaScript, images, and other resources. Often these resources link to other resources that also must be downloaded. Often these resources entirely block the rendering of the page. Alongside all these downloads, the browser is also parsing the HTML, CSS, and JavaScript. It is building data structures that represent the content of the web page as well as how it is styled. All of this is in preparation to start rendering the final page onto the screen for the user.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CEjRuDYe3eMAChrWxwmoC/29a7e28b2c9b961cb2bf0297edacffca/image2-12.png" />
            
            </figure><p>When the user starts seeing the web page actually rendered on the screen, TTFB has become a distant memory for the browser. For a metric that signals the loading speed as perceived by the user, TTFB falls dramatically short.</p><p>Receiving the first byte isn't sufficient to determine a good end user experience as most pages have additional render blocking resources that get loaded after the initial document request. Render-blocking resources are scripts, stylesheets, and HTML imports that prevent a web page from loading quickly. From a TTFB perspective it means the client could stop the ‘TTFB clock’ on receipt of the first byte of one of these files, but the web browser is blocked from showing anything to the user until the remaining critical assets are downloaded.</p><p>This is because browsers need instructions for what to render and what resources need to be fetched to complete “painting” a given web page. These instructions come from a server response. But the servers sending these responses often need time to compile these resources — this is known as “server think time.” While the servers are busy during this time… browsers sit idle and wait. And the TTFB counter goes up.</p><p>There have been a number of attempts over the years to benefit from this “think time”. First came Server Push, which was superseded last year by <b>Early Hints</b>. Early Hints take advantage of “server think time” to asynchronously send instructions to the browser to begin loading resources while the origin server is compiling the full response. By sending these hints to a browser before the full response is prepared, the browser can figure out what it needs to do to load the webpage faster for the end user. It also stops the TTFB clock, meaning a lower TTFB. 
This helps ensure the browser gets the critical files sooner to begin loading the webpage, and it also means the first byte is delivered sooner as there is no waiting on the server for the whole dataset to be prepared and ready to send. Even with Early Hints, though, TTFB doesn’t accurately define how long it took the web page to be in a usable state.</p>
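<p>As a rough sketch, an Early Hints exchange looks something like this on the wire (the paths and hinted resources here are illustrative):</p>

```
HTTP/1.1 103 Early Hints
Link: </styles.css>; rel=preload; as=style
Link: </app.js>; rel=preload; as=script

HTTP/1.1 200 OK
Content-Type: text/html
Link: </styles.css>; rel=preload; as=style
Link: </app.js>; rel=preload; as=script

<!doctype html>
...
```

<p>The browser can begin fetching the hinted stylesheet and script during the server’s think time, before the final 200 response arrives.</p>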
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5HWEDVlv82i1e7QA4M9MHO/f7847d5bc8b344c6f2131ed9451ece9c/image3-10.png" />
            
            </figure><p>TTFB also does not take into account multiplexing benefits of <a href="https://www.cloudflare.com/learning/performance/http2-vs-http1.1/">HTTP/2</a> and <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> which allow browsers to load files in parallel. It also doesn't take into account compression on the origin, which would result in a higher TTFB but a quicker page load overall due to the time the server took to compress the assets and send them in a small format over the network.</p><p>Cloudflare offers many features that can improve the loading speed of a website, but don’t necessarily impact the TTFB score. These features include Zaraz, Rocket Loader, HTTP/2 and HTTP/3 Prioritization, Mirage, Polish, Image Resizing, Auto Minify and Cache. These features <a href="https://www.cloudflare.com/learning/performance/speed-up-a-website/">improve the loading time of a webpage</a>, ensuring they load optimally through a series of enhancements from <a href="https://www.cloudflare.com/developer-platform/cloudflare-images/">image optimization and compression</a> to render blocking elimination by optimizing the sending of assets from the server to the browser in the best possible order.</p><p>More comprehensive metrics are required to illustrate the full loading process of a web page, and the benefit provided by these features. This is where <b>Real User Monitoring</b> helps.  At Cloudflare we are all-in on Real User Monitoring (RUM) as the future of <a href="https://www.cloudflare.com/learning/performance/why-site-speed-matters/">website performance</a>. We’re investing heavily in it: both from an observation point of view and from an optimization one also.</p><p>For those unfamiliar with RUM, we typically optimize websites for three main metrics - known as the “Core Web Vitals”. 
This is a set of key metrics regarded as the most accurate representation of how well or poorly a website performs for real users: Largest Contentful Paint (LCP), First Input Delay (FID) and Cumulative Layout Shift (CLS).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5sWYD1VyB6OkS1avUesrta/d6ddf78c645c6175987cbbaf1df9a029/image4-10.png" />
            
            </figure><p>Source: <a href="https://addyosmani.com/blog/web-vitals-extension/">https://addyosmani.com/blog/web-vitals-extension/</a> </p><p>LCP measures loading performance; typically how long it takes to load the largest image or text block visible in the browser. FID measures interactivity: for example, the time between when a user clicks or taps on a button and when the browser responds and starts doing something. Finally, CLS measures visual stability. A classic example of bad CLS is when you go to a website on your mobile phone, tap on a link, and the page shifts at the last second, meaning you tap something you didn't want to. That would produce a higher (worse) CLS score, as it's a poor user experience.</p><p>Looking at these metrics gives us a good idea of how the end user is truly experiencing your website (RUM) vs. how quickly the first byte of the file was retrieved from the nearest Cloudflare data center (TTFB).</p>
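<p>The boundaries for a “good” score are published alongside these metrics: an LCP of at most 2.5 seconds, an FID of at most 100 ms and a CLS of at most 0.1, with “needs improvement” extending to 4 s, 300 ms and 0.25 respectively. Here is a minimal sketch of how a RUM pipeline might bucket a single measurement (the threshold values are the published ones; the function and object names are our own):</p>

```javascript
// Published Core Web Vitals boundaries between "good", "needs improvement"
// and "poor". The names here are illustrative, not a real Cloudflare API.
const THRESHOLDS = {
  lcp: { good: 2500, needsImprovement: 4000 }, // milliseconds
  fid: { good: 100, needsImprovement: 300 },   // milliseconds
  cls: { good: 0.1, needsImprovement: 0.25 },  // unitless layout-shift score
};

// Bucket a single measurement into the three standard ratings.
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value <= t.needsImprovement) return "needs improvement";
  return "poor";
}

console.log(rate("lcp", 1800)); // a fast largest paint rates "good"
console.log(rate("cls", 0.32)); // a visually unstable page rates "poor"
```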
    <div>
      <h3>Good TTFB, bad user experience</h3>
      <a href="#good-ttfb-bad-user-experience">
        
      </a>
    </div>
    <p>One of the “sub-parts” that comprise LCP is TTFB. That means a poor TTFB is very likely to result in a poor LCP. If it takes you 20 seconds to retrieve the first byte of the first image, your user isn't going to have a good experience - regardless of your outlook on TTFB vs RUM.</p><p>Conversely, we found that a <a href="https://web.dev/ttfb/#what-is-a-good-ttfb-score">good TTFB</a> does not always mean a good LCP score, or FID or CLS. We ran a query to collect RUM metrics of web pages we served which had a good TTFB score. Good is defined as a TTFB of less than 800ms. This allowed us to ask the question: TTFB says these websites are good. Does the RUM data support that?</p><p>We took four distinct samples from our RUM data in June. Each sample had a different date-range and sample-rate combination. In each sample we queried for 200,000 page views. From these 200,000 page views we filtered for only the page views that reported a 'Good' TTFB. Across the samples, of all page views that have a good TTFB, about 21% of them did not have a <a href="https://web.dev/lcp/#what-is-a-good-lcp-score">“good” LCP score</a>. 46% of them did not have a <a href="https://web.dev/fid/#what-is-a-good-fid-score">“good” FID score</a>. And 57% of them did not have a good <a href="https://web.dev/cls/#what-is-a-good-cls-score">CLS</a> score.</p><p>This clearly shows the disparity between measuring the time it takes to receive the first byte of traffic, vs the time it takes for a webpage to become stable and interactive. In summary, LCP includes TTFB but also includes other parts of the loading experience. LCP is a more comprehensive, user-centric metric.</p>
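<p>Conceptually, the query behind these numbers is just a filter and a count. A toy version over an in-memory sample (the records below are invented for illustration; the thresholds of 800 ms for TTFB and 2,500 ms for LCP are the published “good” boundaries):</p>

```javascript
// One invented RUM record per page view (times in milliseconds).
const pageViews = [
  { ttfb: 320, lcp: 1900 },
  { ttfb: 540, lcp: 3100 },
  { ttfb: 1200, lcp: 4200 },
  { ttfb: 700, lcp: 2300 },
  { ttfb: 450, lcp: 2800 },
];

// Keep only views with a good TTFB (under 800ms), then ask how many of
// those still missed the good LCP boundary (2500ms or less).
const goodTtfb = pageViews.filter((v) => v.ttfb < 800);
const badLcpShare =
  goodTtfb.filter((v) => v.lcp > 2500).length / goodTtfb.length;

console.log(badLcpShare); // 0.5: half the good-TTFB views had a bad LCP
```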
    <div>
      <h3>TTFB is not all bad</h3>
      <a href="#ttfb-is-not-all-bad">
        
      </a>
    </div>
    <p>Reading this post and others from Speed Week 2023 you may conclude we really don't like TTFB and you should stop using it. That isn't the case.</p><p>There are a few situations where TTFB does matter. For starters, there are many applications that aren’t websites. File servers, APIs and all sorts of streaming protocols don’t have the same semantics as web pages, and the best way to objectively measure performance is in fact to look at exactly when the first byte is returned from a server.</p><p>To help optimize TTFB for these scenarios we are announcing <a href="/introducing-timing-insights">Timing Insights</a>, a new analytics tool to help you understand what is contributing to the "Time to First Byte" (TTFB) of Cloudflare and your origin. Timing Insights breaks down TTFB from the perspective of our servers to help you understand what is slow, so that you can begin addressing it.</p>
    <div>
      <h3>Get started with RUM today</h3>
      <a href="#get-started-with-rum-today">
        
      </a>
    </div>
    <p>To help you understand the real user experience of your website we have today launched <a href="/cloudflare-observatory-generally-available"><b>Cloudflare Observatory</b></a> <b>-</b> the new home of performance at Cloudflare.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UbKEuzHKLSwcjmGfQIrfW/bcbbdb8272fbbbdf17a7f5238fad0812/image5-2-1.png" />
            
            </figure><p>Cloudflare users can now easily <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitor website performance</a> using Real User Monitoring (RUM) data along with scheduled synthetic tests from different regions in a single dashboard. This will identify any performance issues your website may have. The best bit? Once we’ve identified any issues, Observatory will highlight customized recommendations to resolve these issues, all with a single click.</p><p>Start making your website faster today with <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed/test">Observatory</a>.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[TTFB]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">1ckVp4U6xrlEipotlKstbo</guid>
            <dc:creator>Sam Marsh</dc:creator>
            <dc:creator>Achiel van der Mandele</dc:creator>
            <dc:creator>Shih-Chiang Chien</dc:creator>
        </item>
        <item>
            <title><![CDATA[Benchmarking Edge Network Performance: Akamai, Cloudflare, Amazon CloudFront, Fastly, and Google]]></title>
            <link>https://blog.cloudflare.com/benchmarking-edge-network-performance/</link>
            <pubDate>Fri, 17 Sep 2021 13:00:01 GMT</pubDate>
            <description><![CDATA[ We recently ran a measurement experiment where we used Real User Measurement (RUM) data from the standard browser API to test the performance of Cloudflare and others in real-world conditions across the globe. ]]></description>
            <content:encoded><![CDATA[ <p><i>This blog was published on 17 September, 2021. As we continue to optimize our network we're publishing regular updates, which are available </i><a href="/tag/network-performance-update/"><i>here</i></a><i>.</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6kNRveYzvIdeJ67Hy4de5F/3b40761ca133d10897ab6f1d2874f746/image20.png" />
            
            </figure><p>During Speed Week we’ve talked a lot about services that make the web faster. <a href="/argo-v2/">Argo 2.0</a> for better routing around bad Internet weather, <a href="/orpheus/">Orpheus</a> to ensure that origins are reachable from anywhere, <a href="https://www.cloudflare.com/developer-platform/cloudflare-images/">image optimization</a> to send just the right bits to the client, <a href="/orpheus/">Tiered Cache</a> to maximize cache hit rates and get the most out of Cloudflare’s new 25% bigger network, our expanded <a href="/cloudflare-backbone-internet-fast-lane/">fiber backbone</a> and <a href="/tag/speed-week/">more</a>.</p><p>Those things are all great.</p><p>But it’s vital that we also measure the performance of our network and benchmark ourselves against industry players large and small to make sure we are providing the best, fastest service.</p><p>We recently ran a measurement experiment where we used Real User Measurement (RUM) data from the standard browser API to test the performance of Cloudflare and others in real-world conditions across the globe. We wanted to use third-party tests for this, but they didn’t have the granularity we wanted. We want to drill down to every single ISP in the world to make sure we optimize everywhere. We knew that in some places the answers we got wouldn’t be good, and we’d need to do work to improve our performance. But without detailed analysis across the entire globe we couldn’t know we were really the fastest or where we needed to improve.</p><p>In this blog post I’ll describe how that measurement worked and what the results are. 
But the short version is: Cloudflare is #1 in almost all cases whether you look at all the networks on the globe, or the top 1,000 largest, or the top 1,000 most active, whether you look at mean timings or the 95th percentile, and whether you measure how quickly a connection can be established, how long it takes for the first byte to arrive in a user’s web browser, or how long the complete download takes. And we’re not resting here: we’re committed to working network by network globally to ensure that we are always #1.</p>
    <div>
      <h3>Why we measured</h3>
      <a href="#why-we-measured">
        
      </a>
    </div>
    <p>Commercial Internet measurement services (such as Cedexis, Catchpoint, Pingdom, ThousandEyes) already exist and perform the sorts of RUM measurements that Cloudflare used for this test. And we wanted to ensure that our test was as fair as possible and allowed each network to put its best foot forward.</p><p>We already subscribe to these third-party monitoring services. And when we looked at their data, we weren’t satisfied.</p><p>First, we worried that the way they sampled wasn’t globally representative and was often skewed by measuring from the server, rather than the eyeball, side of the network. Or, even if operating from the eyeball side, it could be skewed by artificial traffic or tainted by bots and other automated traffic.</p><p>Second, it wasn’t granular enough. It showed our performance by country or region, but didn’t dive into individual networks and therefore obscured the details and outliers behind averages. While we looked good in third-party tests, we didn’t trust them to be as thorough and accurate as we wanted. The goal isn’t to pick a test where we looked good. The goal was to be accurate and see where we weren’t good enough yet, so we could focus on those areas and improve. That’s why we had to build this ourselves.</p><p>We benchmark against others because it’s useful to know what’s possible. If someone else is faster than we are somewhere then it proves it’s possible. We won’t be satisfied until we’re at least as good as everyone else everywhere. Now we have the granular data to ensure that’ll happen. We plan our next update during Birthday Week when our target is to take 10% of networks where we’re not the fastest and become the fastest.</p>
    <div>
      <h3>How we measured</h3>
      <a href="#how-we-measured">
        
      </a>
    </div>
    <p>To measure performance we did two things. We created a small internal team to do the measurements separate from the team that manages and optimizes our network. The goal was to show the team where we need to improve.</p><p>And to ensure that the other <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> were tested using their most representative assets we used the very same endpoints that commercial measurement services use on the assumption that our competitors will have already ensured that those endpoints are optimized to show their networks’ best performance.</p><p>The measurements in this blog post are based on four days just before Speed Week began (2021-09-10 12:25:02 UTC to 2021-09-13 16:21:10 UTC). We took measurements of downloading exactly the same 100KB PNG file. We categorized them by the network the measurement was taken from. It’s identified by its ASN and a name. We’re not done with these measurements and will continue measuring and optimizing.</p><p>A 100KB file is a common test measurement used in the industry and allows us to measure network characteristics like the connection time, but also the total download time.</p><p>Before we get into results let’s talk a little about how the Internet fits together. The Internet is a network of networks that cooperate to form the global network that we all use. These networks are identified by a strangely named <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">“autonomous system number” or ASN</a>. 
The idea is that large networks (like ISPs, or cloud providers, or universities, or mobile phone companies) operate autonomously and then join the global Internet through a protocol called <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> (which we’ve <a href="/tag/bgp/">written</a> about in the past).</p><p>In a way the Internet is these ASNs and because the Internet is made of them we want to measure our performance for each ASN. Why? Because one part of improving performance is improving our connectivity to each ASN and knowing how we’re doing on a per-network basis helps enormously.</p><p>There are roughly 70,000 ASNs in the global Internet and during the measurement period we saw traffic from about 21,000 (exact number: 20,728) of them. This makes sense since not all networks are “external” (as in the source of traffic to websites); many ASNs are intermediaries moving traffic around the Internet.</p><p>For the rest of this blog we simply say “network” instead of “ASN”.</p>
    <div>
      <h3>What we measured</h3>
      <a href="#what-we-measured">
        
      </a>
    </div>
    <p>Getting real user measurement data used to be hard but has been made easy for HTTP endpoints thanks to the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Resource_Timing_API/Using_the_Resource_Timing_API">Resource Timing API</a>, supported by most modern browsers. This API allows a page to measure network timing data of fetched resources using high-resolution timestamps, accurate to 5 µs (microseconds).</p><p>The point of this API is to get timing information that shows how a real end-user experiences the Internet (and not a synthetic test that might measure a single component of all the things that happen when a user browses the web).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fN6tLv58jaS0qhhrQWakh/789d201a0441a744db55d4a99fe0d555/image16.png" />
            
            </figure><p>The Resource Timing API is supported by pretty <a href="https://caniuse.com/resource-timing">much every browser</a>, enabling measurement on everything from old versions of Internet Explorer, to mobile browsers on iOS and Android, to the latest version of Chrome. Using this API gives us a view of real world use on real world browsers.</p><p>We don't just instruct the browser to download an image, either. To make sure that we're fair and replicate the real-life end-user experience, we make sure that no local caching was involved in the request, check if the object has been compressed by the server or not, take the HTTP headers size into account, and record if the connection has been pre-warmed or not, to name a few technical details.</p><p>Here's a high-level example of how this works:</p>
            <pre><code>// Fetch the test image with a cache-busting query string,
// making sure the local browser cache is bypassed.
await fetch("https://example.com/100KB.png?r=7562574954", {
  mode: "cors",
  cache: "no-cache",
  credentials: "omit",
  method: "GET",
});

// Returns an array of PerformanceResourceTiming entries,
// one per resource fetched by the page.
performance.getEntriesByType("resource");

// Example entry for the fetch above (timestamps are in
// milliseconds, relative to the start of the page load):
{
   connectEnd: 1400.3999999761581,
   connectStart: 1400.3999999761581,
   decodedBodySize: 102400,
   domainLookupEnd: 1400.3999999761581,
   domainLookupStart: 1400.3999999761581,
   duration: 51.60000002384186,
   encodedBodySize: 102400,
   entryType: "resource",
   fetchStart: 1400.3999999761581,
   initiatorType: "fetch",
   name: "https://example.com/100KB.png",
   nextHopProtocol: "h2",
   redirectEnd: 0,
   redirectStart: 0,
   requestStart: 1406,
   responseEnd: 1452,
   responseStart: 1428.5,
   secureConnectionStart: 1400.3999999761581,
   startTime: 1400.3999999761581,
   transferSize: 102700,
   workerStart: 0
}</code></pre>
            <p>To measure the performance of each CDN we downloaded an image from each, when a browser visited one of our special pages. Once every image is downloaded we record the measurements using a <a href="https://workers.cloudflare.com/">Cloudflare Workers</a> based API.</p>
    <div>
      <h3>The three measurements: TCP connection time, TTFB and TTLB</h3>
      <a href="#the-three-measurements-tcp-connection-time-ttfb-and-ttlb">
        
      </a>
    </div>
    <p>We focused on three measurements to illustrate how fast our network is: TCP connection time, TTFB and TTLB. Here’s why those three values matter.</p><p>TCP connection time is used to show how well-connected we are to the global Internet as it counts only the time taken for a machine to establish a connection to the remote host (before any higher level protocols are used). The TCP connection time is calculated as connectEnd - connectStart (see the diagram above).</p><p>TTFB (or time to first byte) is the time taken once an HTTP request has been sent for the first byte of data to be returned by the server. This is a common measurement used to show how responsive a server is. We calculate TTFB as responseStart - connectStart - (requestStart - connectEnd).</p><p>TTLB (or time to last byte) is the time taken to send the entire response to the web browser. It’s a good measure of how long a complete download takes and helps measure how good the server (or CDN) is at sending data. We calculate TTLB as responseEnd - connectStart - (requestStart - connectEnd).</p><p>We then produced two sets of data: mean and p95. The mean is a very well understood number for laypeople and gives the average user experience, but it doesn’t capture the breadth of different speeds people experience very well. Because it averages everything together it can miss skewed distributions of data (where lots of people get good performance and lots bad performance, for example).</p><p>To address the mean’s problems we also used p95, the 95th percentile. This number tells us what performance 95% of measurements fall below. That can be a hard number to understand, but you can think of it as the “reasonable worst case” performance for a user. Only 5% of measurements were worse than this number.</p>
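<p>The three formulas, and the p95 aggregation, translate directly into code. Here is a sketch (the helper names are ours; the sample entry reuses rounded values from the Resource Timing example earlier):</p>

```javascript
// Derive the three measurements from a PerformanceResourceTiming-shaped
// entry, using the formulas described above. The (requestStart - connectEnd)
// term subtracts any gap between connecting and sending the request.
function timings(e) {
  const gap = e.requestStart - e.connectEnd;
  return {
    tcp: e.connectEnd - e.connectStart,
    ttfb: e.responseStart - e.connectStart - gap,
    ttlb: e.responseEnd - e.connectStart - gap,
  };
}

// p95: the value that 95% of measurements fall below (nearest-rank method).
function p95(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(sorted.length * 0.95) - 1];
}

// Rounded values from the example entry shown earlier.
const t = timings({
  connectStart: 1400.4,
  connectEnd: 1400.4,
  requestStart: 1406,
  responseStart: 1428.5,
  responseEnd: 1452,
});
// t.tcp is 0 here (the connection was reused), t.ttfb ≈ 22.5ms
// and t.ttlb ≈ 46ms.
```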
    <div>
      <h3>An example chart</h3>
      <a href="#an-example-chart">
        
      </a>
    </div>
    <p>As this blog contains a lot of data, let’s start with a detailed look at a chart of results. We compared ourselves against industry giants Google and <a href="https://www.cloudflare.com/cloudflare-vs-cloudfront/">Amazon CloudFront</a>, industry pioneer <a href="https://www.cloudflare.com/cloudflare-vs-akamai/">Akamai</a> and up-and-comer Fastly.</p><p>For each network (represented by an ASN) and for each CDN we calculated which CDN had the best performance. Here, for example, is a count of the number of networks on which each CDN had the best performance for TTFB. This particular chart shows p95 and includes data from the top 1,000 networks (by number of IPv4 addresses advertised).</p><p>In these charts, longer bars are better; the bars indicate the number of networks for which that CDN had the lowest TTFB at p95.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pL6HFQCpsTlMxqJpGW3ZO/1b318a9482d9c8f269e134ded4dd1b60/image6-12.png" />
            
            </figure><p>This shows that Cloudflare had the lowest time to first byte (the TTFB, or the time it took the first byte of content to reach their browser) at the 95th percentile for the largest number of networks in the top 1,000 (based on the number of IPv4 addresses they advertise). Google was next, then Fastly followed by Amazon CloudFront and Akamai.</p><p>All three measures, TCP connection time, time to first byte and time to last byte, matter to the user experience. For this example, I focused on time to first byte (TTFB) because it’s such a common measure of responsiveness of the web. It’s literally the time it takes a web server to start responding to a request from a browser for a web page.</p><p>To understand the data we gathered let’s look at two large US-based ISPs: Cox and Comcast. Cox serves about <a href="https://newsroom.cox.com/company-overview">6.5 million customers</a> and Comcast has about <a href="https://www.cmcsa.com/news-releases/news-release-details/comcast-reports-4th-quarter-and-full-year-2020-results">30 million customers</a>. We performed roughly 22,000 measurements on Cox’s network and 100,000 on Comcast’s. Below we’ll make use of measurement counts to rank networks by their size; here we see that our measurement counts and the customer counts of Cox and Comcast track nicely.</p><p>Cox Communications has ASN 22773 and our data shows that the p95 TTFB for the five CDNs was as follows: Cloudflare 332.6ms, Fastly 357.6ms, Google 380.3ms, Amazon CloudFront 404.4ms and Akamai 441.5ms. In this case Cloudflare was the fastest and about 7% faster than the next CDN (Fastly) which was faster than Google and so on.</p><p>Looking at Comcast (with ASN 7922) we see p95 TTFB was 323.7ms (Fastly), 324.2ms (Cloudflare), 353.7ms (Google), 384.6ms (Akamai) and 418.8ms (Amazon CloudFront). 
Here Fastly (323.7ms) edged out Cloudflare (324.2ms) by 0.2%.</p><p>Figures like these go into determining which CDN is the fastest for each network for this analysis and the charts presented. At a granular level they matter to Cloudflare as we can then target networks and connectivity for optimization.</p>
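<p>The tallies behind the charts reduce to a group-and-count: for each network, find the CDN with the lowest figure, then count wins per CDN. A sketch using the two ISPs above (the numbers are the p95 TTFB figures quoted in the text; the function name is ours):</p>

```javascript
// p95 TTFB in milliseconds per CDN, keyed by network ASN
// (figures quoted above for Cox, 22773, and Comcast, 7922).
const p95TtfbByAsn = {
  22773: { cloudflare: 332.6, fastly: 357.6, google: 380.3,
           cloudfront: 404.4, akamai: 441.5 },
  7922:  { fastly: 323.7, cloudflare: 324.2, google: 353.7,
           akamai: 384.6, cloudfront: 418.8 },
};

// For each CDN, count the networks where it has the lowest p95 TTFB.
function fastestCounts(byAsn) {
  const counts = {};
  for (const results of Object.values(byAsn)) {
    const [winner] = Object.entries(results)
      .reduce((best, next) => (next[1] < best[1] ? next : best));
    counts[winner] = (counts[winner] ?? 0) + 1;
  }
  return counts;
}

console.log(fastestCounts(p95TtfbByAsn)); // cloudflare and fastly win one network each
```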
    <div>
      <h3>The results</h3>
      <a href="#the-results">
        
      </a>
    </div>
    <p>Shown below are the results for three different measurement types (TCP connection time, TTFB and TTLB) with two different aggregations (mean and p95) and for three different groupings of networks.</p><p>The first grouping is the simplest: it’s all the networks we have data for. The second grouping is the one used above, the top 1,000 networks by number of IP addresses. But we also show a third grouping, top 1,000 networks by number of observations. That last group is important.</p><p>Top 1,000 networks by number of IP addresses is interesting (it includes the major ISPs) but it also includes some networks that have huge numbers of IP addresses available that aren’t necessarily used. These come about because of historical allocations of IP addresses to organisations like the US Department of Defense.</p><p>Cloudflare's testing reveals which networks are most used, and so we also report results for the top 1,000 networks by number of observations to get an idea of how we’re performing on networks with heavy usage.</p><p>Hence, we have 18 charts showing all combinations of (TCP connection time, TTFB, TTLB), (mean, p95) and (all networks, top 1,000 networks by IP count, top 1,000 networks by observations).</p><p>You’ll see that in two of the 18 charts Cloudflare is not #1 (we’re #2). We’ll be working to make sure we’re #1 for every measurement over the next few weeks. Both of those are average times. We’re most interested in the p95 measurements because they show the “reasonable worst case” scenario for someone using the Internet. But as we go about optimizing performance we want to be #1 on every chart so that we’re top no matter how performance is measured.</p>
    <div>
      <h3>TCP Connection Time</h3>
      <a href="#tcp-connection-time">
        
      </a>
    </div>
    <p>Let’s start with TCP connection time to get a sense of how well connected the five companies we’ve measured are. Recall that longer bars are better here: they indicate that the particular CDN had the highest performance for that many networks; more networks is better.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6QguCRrAEnO67ERcxRs9TO/7e54b9d95945df8245ed06d68d189b3f/image1-21.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UVflzEE8XMOTB2dnK3Ajk/6581e53c9979aeb2c2b12c963bd9ed95/image21-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1S0gQFxfg15eRgBqMbKVKI/78c7802ac0ccc3cb23729ad048da3f66/image7-6.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ubS7knCK9QfOhR0ljFKvc/ec8a9b06fafc491496bf3cb62b3d68ab/image14-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4wAFt0fPIp5UgKoVdZVRT8/04a27e756d3296094a2d5b4ddb5fb913/image11-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6kSILNzTnA9SuJdNcQSjRT/859eb95f61041d110e8c06039d396938/image4-20.png" />
            
            </figure>
    <div>
      <h3>Time To First Byte (TTFB)</h3>
      <a href="#time-to-first-byte-ttfb">
        
      </a>
    </div>
    <p>Next up is TTFB for the five companies. Once again, longer bars are better, meaning more networks where that CDN had the lowest TTFB.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30Q7f5HOY9lC0Xskkop3y1/a0f81b3cbdfdca0bde6883a4ae5a00b3/image2-26.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4blPfzmJ0xsEvoPNHT6erW/796d37ad9196c565034b6a761e160da2/image6-11.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6UD9AcLtsQoOYseVcib59M/b98add4f6f4651943cb2f9b604ed4b3b/image5-18.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/347rQICdjdYyLop9MmzPoM/0877f2cb39552733484cf8e8211d15ac/image9-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Qtv5TDMKl81V8nu66NKAq/26c4ba4429187794f284dcb79bc48bc4/image10-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3vxM6xlmF1WUiyoqYX6tSu/e5bce59729bef596cd53bd14dfe93c31/image12-4.png" />
            
            </figure>
    <div>
      <h3>Time To Last Byte (TTLB)</h3>
      <a href="#time-to-last-byte-ttlb">
        
      </a>
    </div>
    <p>And finally the TTLB measurements. Once again, longer bars are better, meaning more networks where that CDN had the lowest TTLB.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1oSgZwwvRQDyNeYJGCt1kb/89631c7f2e9a6b38c649d83bada4e5a6/image18-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CMUraZRpMHebETVnctzEH/260f11a25983efb22d6cfdbdee682abf/image8-5.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3unpLISlQOuu9EIht40nke/6d9b236150ae898746f6a49a703bf0b3/image3-24.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72L0GJTxxBwmDjb0JMreis/3e311b2c3636a5b048cd34a32bf1e55a/image9-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ORMPwn3VUmAiwk8ztT0nW/13bd53d42b25207f9d700aef679a2a0b/image17-3.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kNrXkn4FsOhKOyh2X7ZoQ/0e71fc4bd9da8a0a6f8138d47235b26a/image13-3.png" />
            
            </figure>
    <div>
      <h3>Optimization Targets</h3>
      <a href="#optimization-targets">
        
      </a>
    </div>
    <p>Looking not just at where we are #1 but where we are #1 or #2 helps us see how quickly we can optimize our network to be #1 in more places. Examining the top 1,000 networks by observations we see that we’re currently #1 or #2 for TTFB in 69.9% of networks, for TTLB in 65.0% of networks and for TCP connection time in 70.5%.</p><p>To see how much optimization we need to do to go from #2 to #1 we looked at the three measures and found that the median TTFB of the #1 network is 92.3%, the median TTLB is 94.0% and the TCP connection time is 91.5%.</p><p>The latter figure is very interesting because it shows that we can make big gains by optimizing network-level routing.</p>
    <div>
      <h3>Where’s the world map?</h3>
      <a href="#wheres-the-world-map">
        
      </a>
    </div>
    <p>It’s very common to present data about Internet performance on maps. World maps look good but they obscure information. The reason is very simple: world maps show geography (and, depending on the projection, a very skewed version of the world’s actual countries and land masses).</p><p>Here’s an example world map with countries colored by who had the lowest TTLB at p95. Cloudflare is in orange, Amazon CloudFront in yellow, Google in purple, Akamai in red and Fastly in blue.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44fmLQNlpYO7qUlPs5nI1C/c9f8ff7ba3dd63f42ae928080b57ebb9/image19.png" />
            
            </figure><p>What that doesn’t show is population. And Cloudflare is interested in how well we perform for people. Consider Russia and Indonesia. Indonesia has double the population of Russia and about 1/10th of the land.</p><p>By focusing on networks we can optimize for the people who use the Internet. For example, Biznet is a major ISP in Indonesia with ASN 17451. Looking at TTFB at p95 (the same statistic we discussed earlier for US ISPs Cox and Comcast), we see that for Biznet users Cloudflare has the fastest p95 TTFB at 677.7ms, Fastly 744.0ms, Google 872.8ms, Amazon CloudFront 1,239.9ms and Akamai 1,248.9ms.</p>
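A per-network p95 figure like the Biznet numbers above is a high percentile taken over raw timing samples. A minimal nearest-rank sketch, with invented sample values (the post does not specify Cloudflare's exact percentile method, so treat this as one common convention):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value covering p% of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

# Invented TTFB samples (ms) for a single network; not real measurements.
ttfb_ms = [120, 135, 150, 180, 210, 260, 340, 420, 610, 980]
print(percentile(ttfb_ms, 95))  # -> 980
```

The p95 deliberately focuses on the slow tail: 95% of requests were at least this fast, so improving it helps the users having the worst experience.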
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>The data we’ve gathered gives us a granular view of every network that connects to Cloudflare. This allows us to optimize routes, and over the coming days and weeks we’ll be using the data to further increase Cloudflare’s performance.</p><p>In just over a week it’s Cloudflare’s Birthday Week, and we are aiming to improve our performance in 10% of the networks where we are not #1 today. Stay tuned.</p>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[TTFB]]></category>
            <category><![CDATA[Network Performance Update]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">1ZCZcTBaJB8i9qQ1mgjyPR</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Automatic Platform Optimization, starting with WordPress]]></title>
            <link>https://blog.cloudflare.com/automatic-platform-optimizations-starting-with-wordpress/</link>
            <pubDate>Fri, 02 Oct 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we are announcing a new service to serve more than just the static content of your website with the Automatic Platform Optimization (APO) service. With this launch, we're supporting WordPress. ]]></description>
            <content:encoded><![CDATA[ 
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7pBDoI1dESTWbQFz0Vcsa6/76dfc80f250f0dec6cfd442c2a0dae7d/Birthday-Week-OG-images_WordPress_Optimization.png" />
          </figure><p>Today, we are announcing a new service to serve more than just the static content of your website with the <a href="https://www.cloudflare.com/application-services/products/automatic-platform-optimization/">Automatic Platform Optimization (APO)</a> service. With this launch, we are supporting WordPress, the most popular website hosting solution, serving 38% of all websites. Our testing, as detailed below, showed a 72% reduction in Time to First Byte (TTFB), a 23% reduction in <a href="https://web.dev/fcp/">First Contentful Paint</a>, and a 13% reduction in <a href="https://web.dev/speed-index/">Speed Index</a> for desktop users at the 90th percentile, by serving nearly all of your website’s content from Cloudflare’s network. This means visitors to your website see not only the first content sooner but all content more quickly.</p><p>With Automatic Platform Optimization for WordPress, your customers won’t suffer any slowness caused by common issues like shared hosting congestion, slow database lookups, or misbehaving plugins. This service is now available for anyone using WordPress.</p>
    <div>
      <h3>Automatic Platform Optimization Pricing</h3>
      <a href="#automatic-platform-optimization-pricing">
        
      </a>
    </div>
    <p>APO for WordPress costs $5/month for customers on our Free plan and is included, at no additional cost, in our Professional, Business, and Enterprise <a href="https://www.cloudflare.com/plans/">plans</a>. No usage fees, no surprises, <i>just speed</i>.</p>
    <div>
      <h3>How to get started</h3>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>The easiest way to get started with APO is from your WordPress admin console.</p><ol><li><p>Install the <a href="https://wordpress.org/plugins/cloudflare/">Cloudflare WordPress plugin</a> on your WordPress website, or update to the latest version (3.8.2 or higher).</p></li><li><p>Authenticate the plugin (<a href="https://wordpress.org/plugins/cloudflare/#installation">steps here</a>) to talk to Cloudflare, if you have not already done so.</p></li><li><p>From the Home screen of the Cloudflare section, turn on Automatic Platform Optimization.</p></li></ol><p>Free customers will first be directed to the <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed/optimization">Cloudflare Dashboard</a> to purchase the service.</p>
    <div>
      <h3>Why We Built This</h3>
      <a href="#why-we-built-this">
        
      </a>
    </div>
    <p>At Cloudflare, we jump at the opportunity to make hard problems for our customers disappear with the click of a button. Running a consistently fast website is challenging. Many businesses don’t have the time or money to spend on complicated and expensive performance solutions for their site. Even if they do, it can be extremely costly to pay for specialized attention to ensure you get the best performance possible. Having a fast website doesn’t have to be complicated, though. <b>The closer your content is to your customers, the better your site will perform.</b> Static content caching does this for files like images, CSS, and JavaScript, but that is only part of the equation. Dynamic content is still fetched from the origin, incurring costly round trips and additional processing time. For more info on dynamic versus static content, see our <a href="https://www.cloudflare.com/learning/cdn/caching-static-and-dynamic-content/">learning center</a>.</p><p>WordPress is one of the most open platforms in the world, but that means you are always at risk of incurring performance penalties because of plugins or other sources that, while necessary, may be hard to pinpoint and resolve. With the Automatic Platform Optimization service, we put your website into our network that is within 10 milliseconds of 99% of the Internet-connected population in the developed world, all without having to change your existing <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosting</a> provider. This means that for most requests your customers won’t even need to go to your origin, reducing many costly round trips and server processing time. These optimizations run on our edge network, so they also will not impact render or interactivity, since no additional JavaScript is run on the client.</p>
    <div>
      <h3>How We Measure Web Performance</h3>
      <a href="#how-we-measure-web-performance">
        
      </a>
    </div>
    <p>Evaluating performance of a website is difficult. There are many different metrics you can track and it is not always obvious which metrics most meaningfully represent a user’s experience. As discussed when we <a href="https://blog.cloudflare.com/new-speed-page/">blogged</a> about our new <a href="https://dash.cloudflare.com/?to=/:account/:zone/speed">Speed page</a>, we aim to simplify this for customers by automating tests using the infrastructure of webpagetest.org, and summarizing both the results visually and numerically in one place.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1NWhWuNJ3uph9V9GZVw1aW/be579629678a74e01792f6c6c6f56330/image4-2.png" />
          </figure><p>The visualization gives you a clear idea of what customers are going to see when they come to your site, and the <i>Critical Loading Times</i> provide the most important metrics to judge your performance. On top of seeing your site’s performance, we provide a list of recommendations for ways to even further increase your performance. If you are using WordPress, then we will test your site with Automatic Platform Optimizations to estimate the benefit you will get with the service.</p>
    <div>
      <h3>The Benefits of Automatic Platform Optimization</h3>
      <a href="#the-benefits-of-automatic-platform-optimization">
        
      </a>
    </div>
    <p>We tested APO on over 500 Cloudflare customer websites that run on WordPress to understand what the performance improvements would be. The results speak for themselves:</p><p><b>Test Results</b></p>
<table><thead>
  <tr>
    <th><span>Metric</span></th>
    <th><span>Percentiles</span></th>
    <th><span>Baseline Cloudflare</span></th>
    <th><span>APO Enabled</span></th>
    <th><span>Improvement (%)</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td rowspan="2"><span>Time to First Byte (TTFB)</span></td>
    <td>90th</td>
    <td>1252 ms</td>
    <td>351 ms</td>
    <td>71.96%</td>
  </tr>
  <tr>
    <td>10th</td>
    <td>254 ms</td>
    <td>261 ms</td>
    <td>-2.76%</td>
  </tr>
  <tr>
    <td rowspan="2"><a href="http://web.archive.org/web/20210503002739/https://web.dev/fcp/"><span>First Contentful Paint</span></a><br /><span>(FCP)</span></td>
    <td><span>90th</span></td>
    <td>2655 ms</td>
    <td>2056 ms</td>
    <td>22.55%</td>
  </tr>
  <tr>
    <td><span>10th</span></td>
    <td>894 ms</td>
    <td>783 ms</td>
    <td>12.46%</td>
  </tr>
  <tr>
    <td rowspan="2"><a href="http://web.archive.org/web/20210503002739/https://web.dev/speed-index/"><span>Speed Index</span></a><br /><span>(SI)</span></td>
    <td><span>90th</span></td>
    <td>6428</td>
    <td>5586</td>
    <td>13.11%</td>
  </tr>
  <tr>
    <td><span>10th</span></td>
    <td>1301</td>
    <td>1242</td>
    <td>4.52%</td>
  </tr>
</tbody></table><p>Note: Results are based on test results of 505 randomly selected websites that are cached by Cloudflare. Tests were run using WebPageTest from South Carolina, USA and the following environment: Chrome, Cable connection speed.</p><p>Most importantly, with APO, a site’s TTFB is made both fast and consistent. Because we now serve the HTML from Cloudflare’s edge with 0 origin processing time, getting the first byte to the eyeball is consistently fast. Under heavy load, a WordPress origin can suffer delays in building the HTML and returning it to visitors. APO removes the variance due to load resulting in consistent TTFB &lt;400 ms.</p><p>Additionally, between faster TTFB and additional caching of third party fonts, we see performance improvements in both FCP and SI for both the fastest and slowest of the sites we tested. Some of this comes naturally from reducing the TTFB, since every millisecond you shave off of TTFB is a potential millisecond gain for other metrics. Caching additional third party fonts allows us to reduce the time it takes to fetch that content. Given fonts can often block paints due to text rendering, this improves the rate at which the page paints, and improves the Speed Index.</p><p>We asked the folks at <a href="https://kinsta.com/">Kinsta</a> to try out APO, given their expertise on WordPress, and tell us what they think. <a href="https://brianli.com/">Brian Li</a>, a Website Content Manager at Kinsta, ran a set of tests from around the world on a website hosted in Tokyo. I’ll let him explain what they did and the results:</p><blockquote><p>At Kinsta, <a href="https://kinsta.com/blog/fastest-wordpress-hosting/">WordPress performance</a> is something that’s near and dear to our hearts. So, when Cloudflare reached out about testing their new Automatic Platform Optimization (APO) service for WordPress, we were all ears.
 
This is what we did to test out the new service:</p></blockquote><ol><li><p>We set up a test site in Tokyo, Japan – one of the 24 high-performance <a href="https://kinsta.com/knowledgebase/google-cloud-data-center-locations/">data center locations</a> available for Kinsta customers.</p></li><li><p>We ran several speed tests from six different locations around the world with and without Cloudflare’s APO.</p></li></ol><blockquote><p>The results were incredible!
 
By caching <a href="https://kinsta.com/blog/wordpress-vs-static-html/">static HTML</a> on Cloudflare’s edge network, we saw a 70-300% performance increase. As expected, the testing locations furthest away from Tokyo saw the biggest reduction in <a href="https://kinsta.com/blog/ttfb/">load time</a>.
 
If your WordPress site uses a traditional <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a> that only caches CSS, JS, and images, upgrading to Cloudflare’s WordPress APO is a no-brainer and will help you stay competitive with modern Jamstack and static sites that live on the edge by default.</p></blockquote><p>Brian’s test results are summarized in this image:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gDzHYWY75hKTwwHOoi6Xz/17e3e423683f032d12d22407bccf74a2/image1-7.png" />
          </figure><p><sub>Page Load Speeds for loading a website hosted in Tokyo from 6 locations worldwide - comparing Kinsta, Kinsta with KeyCDN, and Kinsta with Cloudflare APO.</sub></p><p>One of the clear benefits, from Kinsta’s testing of APO, is the consistency of performance for serving your site no matter where your visitors are in the world. The consistent sub-second performance shown with APO versus two or three second load times in other setups makes it clear that if you have a global customer base, APO delivers an improved experience for all visitors.</p>
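The "Improvement (%)" column in the test results table above reads as the relative reduction versus the Cloudflare baseline. A quick sketch of that arithmetic, with the function name ours; it reproduces the TTFB rows exactly, while a few other rows differ by a hundredth of a percent, presumably because the published figures were computed from unrounded measurements:

```python
def improvement_pct(baseline_ms, apo_ms):
    """Relative reduction versus the baseline, as a percentage."""
    return round((baseline_ms - apo_ms) / baseline_ms * 100, 2)

print(improvement_pct(1252, 351))  # 71.96 -> 90th-percentile TTFB row
print(improvement_pct(254, 261))   # -2.76 -> 10th percentile regressed slightly
```

The negative 10th-percentile TTFB value matches the table: for the already-fastest sites, APO adds a few milliseconds rather than saving any.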
    <div>
      <h3>How Automatic Platform Optimization Works</h3>
      <a href="#how-automatic-platform-optimization-works">
        
      </a>
    </div>
    <p>Automatic Platform Optimization is the result of being able to use the power of <a href="https://www.cloudflare.com/developer-platform/workers/">Cloudflare Workers</a> to intelligently cache dynamic content. By caching dynamic content, we can serve the entire website from our edge network. Think ‘static site’ but without any of the work of having to build or maintain a static site. Customers can keep managing and updating content on their website in the same way and leave the hard work for performance to us. Serving both static and dynamic content from our network generally results in no origin requests or origin processing time. This means all the communication occurs between the user’s device and our edge. Reducing the multitude of round trips typically required from our edge to the origin for dynamic content is what makes this service so effective. Let’s first see what it normally looks like to load a WordPress site for a visitor.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/nNaWWkM0X15Pp0jwnO0Hc/3c3b17b0325638bdb93e844654f2b92b/image3-3.png" />
          </figure><p><sub>A sequence diagram for a typical user visiting a site</sub></p><p>In a regular request flow, Cloudflare is able to cache some of the content, like images, CSS, or JS, while other requests go to either the origin or a third-party service in order to fetch the content. Most importantly, the first request to fetch the HTML for the site needs to go to the origin, which is a typical cause of a long TTFB, since no other requests are made until the client can receive the HTML and parse it to make subsequent requests.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/rbLsPASTNRFCNN5E1ySpm/edc2cc24486bc03730ebf0980654c834/image2-2.png" />
          </figure><p><sub>The same site visit but with APO enabled.</sub></p><p>Once APO is enabled, all those trips to the origin are removed. TTFB benefits greatly because the first hop starts and ends at Cloudflare’s network. This also means the browser starts working on fetching and painting the webpage sooner, meaning each paint event occurs earlier. Lastly, by caching third-party fonts, we remove additional requests that would need to leave Cloudflare’s network and extend the time to display text to the user. Often, websites use fonts hosted on third-party domains. While this saves the bandwidth costs that would be incurred from hosting them on the origin, depending on where those fonts are hosted, it can still be a costly operation to fetch them. By rehosting the fonts and serving them from our cache, we can remove one of the remaining costly round trips.</p><p>With APO for WordPress, you can say bye bye to database congestion or unwieldy plugins slowing down your customers’ experience. These benefits are stacked on top of our already fast <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/">TLS connection times</a> and industry-leading protocol support like HTTP/2 that ensure we are using the most efficient and fastest way to connect and deliver your website to your customers.</p><p>For customers with WordPress sites that support authenticated sessions, you do not have to worry about us caching content from authenticated users and serving it to others. We bypass the cache on standard WordPress and WooCommerce cookies for authenticated users. This ensures customized content for a specific user is only visible to that user. While this has been available to customers with our Business-level service, it is now available for any WordPress customer that enables APO.</p><p>You might be wondering: “This all sounds great, but what about when I change content on my site?” Because this service works in tandem with our <a href="https://www.cloudflare.com/integrations/wordpress/">WordPress plugin</a>, we are able to understand when you make changes and ensure we quickly purge the content in Cloudflare’s edge and refresh it with the new content. With the plugin installed, we detect content changes and update our edge network worldwide with automatic cache purges. As part of this release, we have updated our WordPress plugin, so whether or not you use APO, you should upgrade to the latest version today. If you do not or cannot use our WordPress plugin, then APO will still provide the same performance benefits, but it may serve stale content for up to 30 minutes, until the content is requested again and refreshed.</p><p>This service was built on the prototype work originally blogged about <a href="https://blog.cloudflare.com/improving-html-time-to-first-byte/">here</a> and <a href="https://blog.cloudflare.com/fast-google-fonts-with-cloudflare-workers/">here</a>. For a more in-depth look at the technical side of the service and how Cloudflare Workers allowed us to build the Automatic Platform Optimization service, see the <a href="https://blog.cloudflare.com/building-automatic-platform-optimization-for-wordpress-using-cloudflare-workers/">accompanying blog post</a>.</p>
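The cache-bypass rule for authenticated users can be sketched as a simple cookie check. The cookie-name prefixes below follow common WordPress/WooCommerce conventions but are an assumption on our part, not Cloudflare's published rule set, and Python is used here purely for illustration — the real service runs inside a Cloudflare Worker.

```python
# Sketch of the "bypass cache for authenticated users" rule.
# Prefixes are assumed from common WordPress/WooCommerce cookie naming.
BYPASS_COOKIE_PREFIXES = ("wordpress_logged_in_", "wp-", "woocommerce_")

def should_bypass_cache(cookie_header: str) -> bool:
    """True if any request cookie suggests an authenticated or personalized session."""
    for cookie in cookie_header.split(";"):
        name = cookie.strip().split("=", 1)[0]
        if name.startswith(BYPASS_COOKIE_PREFIXES):  # str.startswith accepts a tuple
            return True
    return False

# Anonymous visitor: safe to serve the cached HTML from the edge.
print(should_bypass_cache("theme=dark; _ga=GA1.2.123"))        # False
# Logged-in WordPress user: fetch from origin, never share their HTML.
print(should_bypass_cache("wordpress_logged_in_abc=user%7C1")) # True
```

The key property of a rule like this is that it errs toward bypassing: any cookie that might mark a personalized session sends the request to the origin, so one user's customized HTML is never cached and served to another.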
    <div>
      <h3>WordPress Today, Other Platforms Coming Soon</h3>
      <a href="#wordpress-today-other-platforms-coming-soon">
        
      </a>
    </div>
    <p>While today’s announcement is focused on supporting WordPress, this is just the start. We plan to bring these same capabilities to other popular platforms used for web hosting. If you operate a platform and are interested in how we may be able to work together to improve things for all your customers, <a href="https://www.cloudflare.com/partners/">please get in touch</a>. If you are running a website, let us know what platform you want to see us bring Automatic Platform Optimization to next.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[WordPress]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[TTFB]]></category>
            <category><![CDATA[Automatic Platform Optimization]]></category>
            <guid isPermaLink="false">wrkP5f4p5fxKa7hQACHv6</guid>
            <dc:creator>Garrett Galow</dc:creator>
        </item>
    </channel>
</rss>