
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 11 Apr 2026 14:17:55 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Google’s AI advantage: why crawler separation is the only path to a fair Internet]]></title>
            <link>https://blog.cloudflare.com/uk-google-ai-crawler-policy/</link>
            <pubDate>Fri, 30 Jan 2026 17:01:04 GMT</pubDate>
            <description><![CDATA[ Google's dual-purpose crawler creates an unfair AI advantage. To protect publishers and foster competition, the UK’s Competition and Markets Authority must mandate crawler separation for search and AI. ]]></description>
            <content:encoded><![CDATA[ <p>Earlier this week, the UK’s Competition and Markets Authority (CMA) <a href="https://www.gov.uk/government/news/cma-proposes-package-of-measures-to-improve-google-search-services-in-uk"><u>opened its consultation</u></a> on a package of proposed conduct requirements for Google. The consultation invites comments on the proposed requirements before the CMA imposes any final measures. These new rules aim to address the lack of choice and transparency that publishers (broadly defined as “any party that makes content available on the web”) face over how Google uses search to fuel its generative AI services and features. These are the first consultations on conduct requirements launched under the digital markets competition regime in the UK. </p><p>We welcome the CMA’s recognition that publishers need a fairer deal and believe the proposed rules are a step into the right direction. Publishers should be entitled to have access to tools that enable them to control the inclusion of their content in generative AI services, and AI companies should have a level playing field on which to compete. </p><p>But we believe the CMA has not gone far enough and should do more to safeguard the UK’s creative sector and foster healthy competition in the market for generative and agentic AI. </p>
    <div>
      <h2>CMA designation of Google as having Strategic Market Status </h2>
      <a href="#cma-designation-of-google-as-having-strategic-market-status">
        
      </a>
    </div>
    <p>In January 2025, the UK’s regulatory landscape underwent a significant legal shift with the implementation of the Digital Markets, Competition and Consumers Act 2024 (DMCC). Rather than relying on antitrust investigations to address risks to competition, the CMA can now designate firms as having Strategic Market Status (SMS) when they hold substantial, entrenched market power. This designation allows for targeted CMA interventions in digital markets, such as imposing detailed conduct requirements, to improve competition. </p><p>In October 2025, the CMA <a href="https://assets.publishing.service.gov.uk/media/68e8b643cf65bd04bad76724/Final_decision_-_strategic_market_status_investigation_into_Google_s_general_search_services.pdf"><u>designated Google</u></a> as having SMS in general search and search advertising, given its 90 percent share of the search market in the UK. Crucially, this designation encompasses AI Overviews and AI Mode, with the CMA now having the authority to impose conduct requirements on Google’s search ecosystem. Final requirements imposed by the CMA are not merely suggestions but legally enforceable rules that can relate specifically to AI crawling with significant sanctions to ensure Google operates fairly. </p>
    <div>
      <h2>Publishers need a meaningful way to opt out of Google’s use of their content for generative AI</h2>
      <a href="#publishers-need-a-meaningful-way-to-opt-out-of-googles-use-of-their-content-for-generative-ai">
        
      </a>
    </div>
    <p>The CMA’s designation could not be more timely. As we’ve <a href="https://blog.cloudflare.com/building-a-better-internet-with-responsible-ai-bot-principles/"><u>said before</u></a>, we are indisputably in a time when the Internet needs clear “rules of the road” for AI crawling behavior. </p><p>As the CMA rightly <a href="https://assets.publishing.service.gov.uk/media/6979d0bf75d44370965520a0/Publisher_conduct_requirement.pdf"><u>states</u></a>, “publishers have no realistic option but to allow their content to be crawled for Google’s general search because of the market power Google holds in general search. However, Google currently uses that content in both its search generative AI features and in its broader generative AI services.” </p><p>In other words: the same content that Google scrapes for search indexing is also used for inference/grounding purposes, like AI Overviews and AI Mode, which rely on fetching live information from the Internet in response to real-time user queries. And that creates a big problem for publishers—and for competition.</p><p>Because publishers cannot afford to disallow or block Googlebot, Google’s search crawler, on their website, they have to accept that their content will be used in generative AI applications such as AI Overviews and AI Mode within Google Search that <a href="https://blog.cloudflare.com/crawlers-click-ai-bots-training/"><u>return very little, if any, traffic to their websites</u></a>. This undermines the ad-supported business models that have sustained digital publishing for decades, given the critical role of Google Search in driving human traffic to online advertising. It also means that Google’s generative AI applications enter into direct competition with publishers by reproducing their content, most often without attribution or compensation. 
</p><p>Publishers’ reluctance to block Google because of its dominance in search gives Google an unfair competitive advantage in the market for generative and agentic AI. Unlike other AI bot operators, Google can use its search crawler to gather data for a variety of AI functions with little fear that its access will be restricted. It has minimal incentive to pay publishers for that data, which it is already getting for free. </p><p>This prevents the emergence of a well-functioning marketplace where AI developers negotiate fair value for content. Instead, other AI companies are disincentivized from coming to the table, as they are structurally disadvantaged by a system that allows one dominant player to bypass compensation entirely. As the CMA itself <a href="https://assets.publishing.service.gov.uk/media/6979d05275d443709655209f/Introduction_to_the_consultation.pdf"><u>recognizes</u></a>, "[b]y not providing sufficient control over how this content is used, Google can limit the ability of publishers to monetise their content, while accessing content for AI-generated results in a way that its competitors cannot match”. </p>
    <div>
      <h2>Google’s advantage</h2>
      <a href="#googles-advantage">
        
      </a>
    </div>
    <p>Cloudflare data validates the concern about Google’s competitive advantage. Based on our data, Googlebot sees significantly more Internet content than its closest peers. </p><p>Over an observed period of two months, Googlebot successfully accessed individual pages almost two times more than ClaudeBot and GPTBot, three times more than Meta-ExternalAgent, and more than three times more than Bingbot. The difference was even more extreme for other popular AI crawlers: for instance, Googlebot saw 167 times more unique pages than PerplexityBot. Out of the sampled unique URLs using our network that we observed over the last two months, Googlebot crawled roughly 8%.</p><p><b>In rounded multiple terms, Googlebot sees:</b></p><ul><li><p>vs. ~1.70x the amount of unique URLs seen by ClaudeBot;</p></li><li><p>vs. ~1.76x the amount of unique URLs seen by GPTBot;</p></li><li><p>vs. ~2.99x the amount of unique URLs by Meta-ExternalAgent;</p></li><li><p>vs. ~3.26x the amount of unique URLs seen by Bingbot;</p></li><li><p>vs. ~5.09x the amount of unique URLs seen by Amazonbot;</p></li><li><p>vs. ~14.87x the amount of unique URLs seen by Applebot;</p></li><li><p>vs. ~23.73x the amount of unique URLs seen by Bytespider;</p></li><li><p>vs. ~166.98x the amount of unique URLs seen by PerplexityBot;</p></li><li><p>vs. ~714.48x the amount of unique URLs seen by CCBot; and</p></li><li><p>vs: ~1801.97x the amount of unique URLs seen by archive.org_bot.</p></li></ul><p>Googlebot also stands out in other Cloudflare datasets.  </p><p>Even though it ranks as the most active bot by overall traffic, publishers are far less likely to disallow or block Googlebot in their <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><u>robots.txt file</u></a> compared to other crawlers. This is likely due to its importance in driving human traffic to their content—and, as a result, ad revenue—through search. 
</p><p>As shown below, almost no website explicitly disallows the dual-purpose Googlebot in full, reflecting how important this bot is to driving traffic via search referrals. (Note that partial disallows often impact certain parts of a website that are irrelevant for search engine optimization, or SEO, such as login endpoints.)</p>
<p>
Robots.txt merely allows the expression of crawling preferences; it is not an enforcement mechanism. Publishers rely on “good bots” to comply. To manage crawler access to their sites more effectively—and independently of a given bot’s compliance—publishers can set up a Web Application Firewall (WAF) with specific rules, technically preventing undesired crawlers from accessing their sites. Following the same logic as with robots.txt above, we would expect websites to block mostly other AI crawlers but not Googlebot. </p><p>Indeed, when comparing the numbers for customers using <a href="https://www.cloudflare.com/lp/pg-ai-crawl-control/"><u>AI Crawl Control</u></a>, Cloudflare’s own <a href="https://developers.cloudflare.com/ai-crawl-control/configuration/ai-crawl-control-with-waf/"><u>AI crawler blocking tool</u></a> that is integrated in our Application Security suite, between July 2025 and January 2026, one can see that the number of websites actively blocking other popular AI crawlers (e.g., GPTBot, Claudebot), was nearly seven times as high as the number of websites that blocked Googlebot and Bingbot. (Like Googlebot, Bingbot combines search and AI crawling and drives traffic to these sites, but given its small market share in search, its impact is less significant.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/344ATKpYmJHsSRlEtxQen5/2fc5da1211b4fd0189e026f0ec19548f/BLOG-3170_3.png" />
          </figure><p>So we agree with the CMA on the problem statement. But how can publishers be enabled to effectively opt out of Google using their content for its generative AI applications? We share the CMA’s conclusion that “in order to be able to make meaningful decisions about how Google uses their Search Content, (...) publishers need the ability effectively to opt their Search Content out of both Google’s search generative AI features and Google’s broader generative AI services.” </p><p>But we’re concerned that the CMA’s proposal is insufficient.</p>
    <div>
      <h2>CMA’s proposed publisher conduct requirements</h2>
      <a href="#cmas-proposed-publisher-conduct-requirements">
        
      </a>
    </div>
    <p>On January 28, 2026, the CMA published four sets of proposed conduct requirements for Google, including <a href="https://assets.publishing.service.gov.uk/media/6979ceae75d443709655209c/Publisher_conduct_requirement.pdf"><u>conduct requirements related to publishers</u></a>. According to the CMA, the proposed publisher rules are designed to address concerns that publishers (1) lack sufficient choice over how Google uses their content in its AI-generated responses, (2) have limited transparency into Google’s use of that content, and (3) do not get effective attribution for Google’s use of their content. The CMA recognized the importance of these concerns because of the role that Google search plays in finding content online. </p><p>The conduct requirements would mandate Google grant publishers <a href="https://assets.publishing.service.gov.uk/media/6979d05275d443709655209f/Introduction_to_the_consultation.pdf"><u>"meaningful and effective" </u></a>control over whether their content is used for AI features, like AI Overviews. Google would be prohibited from taking any action that negatively impacts the effectiveness of those control options, such as intentionally downranking the content in search. </p><p>To support informed decisionmaking, the CMA proposal also requires Google to increase transparency, by publishing clear documentation on how it uses crawled content for generative AI and on exactly what its various publisher controls cover in practice. Finally, the proposal would require Google to ensure effective attribution of publisher content and to provide publishers with detailed, disaggregated engagement data—including specific metrics for impressions, clicks, and "click quality"—to help them evaluate the commercial value of allowing their content to be used in AI-generated search summaries.</p>
    <div>
      <h2>The CMA’s proposed remedies are insufficient</h2>
      <a href="#the-cmas-proposed-remedies-are-insufficient">
        
      </a>
    </div>
    <p>Although we support the CMA’s efforts to improve options for publishers, we are concerned that the proposed requirements do not solve the underlying issue of promoting fair, transparent choice over how their content is used by Google. Publishers are effectively forced to use Google’s proprietary opt-out mechanisms, tied specifically to the Google platform and under the conditions set by Google, rather than granting them direct, autonomous control. <b>A framework where the platform dictates the rules, manages the technical controls, and defines the scope of application does not offer “effective control” to content creators or encourage competitive innovation in the market. Instead, it reinforces a state of permanent dependency.</b>  </p><p>Such a framework also reduces choice for publishers. Creating new opt-out controls makes it impossible for publishers to choose to use external tools to block Googlebot from accessing their content without jeopardizing their appearance in Search results. Instead, under the current proposal, content creators will still have to allow Googlebot to scrape their websites, with no enforcement mechanisms to deploy and limited visibility available if Google does not respect their signalled preferences. Enforcement of these requirements by the CMA, if done properly, will be very onerous, without guarantee that publishers will trust the solution.</p><p>In fact, Cloudflare has received feedback from its customers that Google’s current proprietary opt-out mechanisms, including Google-Extended and ‘nosnippet’, have failed to prevent content from being utilized in ways that publishers cannot control. These opt-out tools also do not enable mechanisms for fair compensation for publishers. 
</p><p>More broadly, as reflected in our proposed <a href="https://blog.cloudflare.com/building-a-better-internet-with-responsible-ai-bot-principles/"><u>responsible AI bot principles</u></a>, we believe that all AI bots should have one distinct purpose and declare it, so that website owners can make clear decisions over who can access their content and why. Unlike its leading competitors, such as OpenAI and Anthropic, Google does not comply with this principle for Googlebot, which is used for multiple purposes (search indexing, AI training, and inference/grounding). Simply requiring Google to develop a new opt-out mechanism would not allow publishers to achieve meaningful control over the use of their content. </p><p>The most effective way to give publishers that necessary control is to require Googlebot to be split up into separate crawlers. That way, publishers could allow crawling for traditional search indexing, which they need to attract traffic to their sites, but block access for unwanted use of their content in generative AI services and features. </p>
    <div>
      <h2>Requiring crawler separation is the only effective solution </h2>
      <a href="#requiring-crawler-separation-is-the-only-effective-solution">
        
      </a>
    </div>
    <p>To ensure a fair digital ecosystem, the CMA must instead empower content owners to prevent Google from accessing their data for particular purposes in the first place, rather than relying on Google-managed workarounds after the crawler has already accessed the content for other purposes. That approach also enables creators to set conditions for access to their content. </p><p>Although the CMA described crawler separation as an “equally effective intervention”, it ultimately rejected mandating separation based on Google’s input that it would be too onerous. We disagree.</p><p>Requiring Google to split up Googlebot by purpose — just like Google already does for its <a href="https://developers.google.com/crawling/docs/crawlers-fetchers/overview-google-crawlers"><u>nearly 20 other crawlers</u></a> — is not only technically feasible, but also a necessary and proportionate remedy that empowers website operators to have the granular control they currently lack, without increasing traffic load from crawlers to their websites (and in fact, perhaps even decreasing it, should they choose to block AI crawling).</p><p>To be clear, a crawler separation remedy benefits AI companies, by leveling the playing field between them and Google, in addition to giving UK-based publishers more control over their content. (There has been widespread public support for a crawler separation remedy by Daily Mail Group, the Guardian and the News Media Association.) Mandatory crawler separation is not a disadvantage to Google, nor does it undermine investment in AI. On the contrary, it is a pro-competitive safeguard that prevents Google from leveraging its search monopoly to gain an unfair advantage in the AI market. 
By decoupling these functions, we ensure that AI development is driven by fair-market competition rather than the exploitation of a single hyperscaler’s dominance.</p><p>******</p><p>The UK has a unique chance to lead the world in protecting the value of original and high-quality content on the Internet. However, we worry that the current proposals fall short. We would encourage rules that ensure that Google operates under the same conditions for content access as other AI developers, meaningfully restoring agency to publishers and paving the way for new business models promoting content monetization.</p><p>Cloudflare remains committed to engaging with the CMA and other partners during upcoming consultations to provide evidence-based data to help shape a final decision on conduct requirements that are targeted, proportional, and effective. The CMA still has an opportunity to ensure that the Internet becomes a fair marketplace for content creators and smaller AI players—not just a select few tech giants.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Legal]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1csdasmGFE5gWnYFDBbN9j</guid>
            <dc:creator>Maria Palmieri</dc:creator>
            <dc:creator>Sebastian Hufnagel</dc:creator>
        </item>
        <item>
            <title><![CDATA[Keeping the Internet fast and secure: introducing Merkle Tree Certificates]]></title>
            <link>https://blog.cloudflare.com/bootstrap-mtc/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is launching an experiment with Chrome to evaluate fast, scalable, and quantum-ready Merkle Tree Certificates, all without degrading performance or changing WebPKI trust relationships. ]]></description>
            <content:encoded><![CDATA[ <p>The world is in a race to build its first quantum computer capable of solving practical problems not feasible on even the largest conventional supercomputers. While the quantum computing paradigm promises many benefits, it also threatens the security of the Internet by breaking much of the cryptography we have come to rely on.</p><p>To mitigate this threat, Cloudflare is helping to migrate the Internet to Post-Quantum (PQ) cryptography. Today, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>about 50%</u></a> of traffic to Cloudflare's edge network is protected against the most urgent threat: an attacker who can intercept and store encrypted traffic today and then decrypt it in the future with the help of a quantum computer. This is referred to as the <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest now, decrypt later</u></a><i> </i>threat.</p><p>However, this is just one of the threats we need to address. A quantum computer can also be used to crack a server's <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificate</a>, allowing an attacker to impersonate the server to unsuspecting clients. The good news is that we already have PQ algorithms we can use for quantum-safe authentication. The bad news is that adoption of these algorithms in TLS will require significant changes to one of the most complex and security-critical systems on the Internet: the Web Public-Key Infrastructure (WebPKI).</p><p>The central problem is the sheer size of these new algorithms: signatures for ML-DSA-44, one of the most performant PQ algorithms standardized by NIST, are 2,420 bytes long, compared to just 64 bytes for ECDSA-P256, the most popular non-PQ signature in use today; and its public keys are 1,312 bytes long, compared to just 64 bytes for ECDSA. That's a roughly 20-fold increase in size. 
Worse yet, the average TLS handshake includes a number of public keys and signatures, adding up to 10s of kilobytes of overhead per handshake. This is enough to have a <a href="https://blog.cloudflare.com/another-look-at-pq-signatures/#how-many-added-bytes-are-too-many-for-tls"><u>noticeable impact</u></a> on the performance of TLS.</p><p>That makes drop-in PQ certificates a tough sell to enable today: they don’t bring any security benefit before Q-day — the day a cryptographically relevant quantum computer arrives — but they do degrade performance. We could sit and wait until Q-day is a year away, but that’s playing with fire. Migrations always take longer than expected, and by waiting we risk the security and privacy of the Internet, which is <a href="https://developers.cloudflare.com/ssl/edge-certificates/universal-ssl/"><u>dear to us</u></a>.</p><p>It's clear that we must find a way to make post-quantum certificates cheap enough to deploy today by default for everyone — not just those that can afford it. In this post, we'll introduce you to the plan we’ve brought together with industry partners to the <a href="https://datatracker.ietf.org/group/plants/about/"><u>IETF</u></a> to redesign the WebPKI in order to allow a smooth transition to PQ authentication with no performance impact (and perhaps a performance improvement!). We'll provide an overview of one concrete proposal, called <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a>, whose goal is to whittle down the number of public keys and signatures in the TLS handshake to the bare minimum required.</p><p>But talk is cheap. 
We <a href="https://blog.cloudflare.com/experiment-with-pq/"><u>know</u></a> <a href="https://blog.cloudflare.com/announcing-encrypted-client-hello/"><u>from</u></a> <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>experience</u></a> that, as with any change to the Internet, it's crucial to test early and often. <b>Today we're announcing our intent to deploy MTCs on an experimental basis in collaboration with Chrome Security.</b> In this post, we'll describe the scope of this experiment, what we hope to learn from it, and how we'll make sure it's done safely.</p>
    <div>
      <h2>The WebPKI today — an old system with many patches</h2>
      <a href="#the-webpki-today-an-old-system-with-many-patches">
        
      </a>
    </div>
    <p>Why does the TLS handshake have so many public keys and signatures?</p><p>Let's start with Cryptography 101. When your browser connects to a website, it asks the server to <b>authenticate</b> itself to make sure it's talking to the real server and not an impersonator. This is usually achieved with a cryptographic primitive known as a digital signature scheme (e.g., ECDSA or ML-DSA). In TLS, the server signs the messages exchanged between the client and server using its <b>secret key</b>, and the client verifies the signature using the server's <b>public key</b>. In this way, the server confirms to the client that they've had the same conversation, since only the server could have produced a valid signature.</p><p>If the client already knows the server's public key, then only <b>1 signature</b> is required to authenticate the server. In practice, however, this is not really an option. The web today is made up of around a billion TLS servers, so it would be unrealistic to provision every client with the public key of every server. What's more, the set of public keys will change over time as new servers come online and existing ones rotate their keys, so we would need some way of pushing these changes to clients.</p><p>This scaling problem is at the heart of the design of all PKIs.</p>
    <div>
      <h3>Trust is transitive</h3>
      <a href="#trust-is-transitive">
        
      </a>
    </div>
    <p>Instead of expecting the client to know the server's public key in advance, the server might just send its public key during the TLS handshake. But how does the client know that the public key actually belongs to the server? This is the job of a <b>certificate</b>.</p><p>A certificate binds a public key to the identity of the server — usually its DNS name, e.g., <code>cloudflareresearch.com</code>. The certificate is signed by a Certification Authority (CA) whose public key is known to the client. In addition to verifying the server's handshake signature, the client verifies the signature of this certificate. This establishes a chain of trust: by accepting the certificate, the client is trusting that the CA verified that the public key actually belongs to the server with that identity.</p><p>Clients are typically configured to trust many CAs and must be provisioned with a public key for each. Things are much easier however, since there are only 100s of CAs instead of billions. In addition, new certificates can be created without having to update clients.</p><p>These efficiencies come at a relatively low cost: for those counting at home, that's <b>+1</b> signature and <b>+1</b> public key, for a total of <b>2 signatures and 1 public key</b> per TLS handshake.</p><p>That's not the end of the story, however. As the WebPKI has evolved, so have these chains of trust grown a bit longer. These days it's common for a chain to consist of two or more certificates rather than just one. This is because CAs sometimes need to rotate<b> </b>their keys, just as servers do. But before they can start using the new key, they must distribute the corresponding public key to clients. This takes time, since it requires billions of clients to update their trust stores. 
To bridge the gap, the CA will sometimes use the old key to issue a certificate for the new one and append this certificate to the end of the chain.</p><p>That's<b> +1</b> signature and<b> +1</b> public key, which brings us to<b> 3 signatures and 2 public keys</b>. And we still have a little ways to go.</p>
    <div>
      <h3>Trust but verify</h3>
      <a href="#trust-but-verify">
        
      </a>
    </div>
    <p>The main job of a CA is to verify that a server has control over the domain for which it’s requesting a certificate. This process has evolved over the years from a high-touch, CA-specific process to a standardized, <a href="https://datatracker.ietf.org/doc/html/rfc8555/"><u>mostly automated process</u></a> used for issuing most certificates on the web. (Not all CAs fully support automation, however.) This evolution is marked by a number of security incidents in which a certificate was <b>mis-issued </b>to a party other than the server, allowing that party to impersonate the server to any client that trusts the CA.</p><p>Automation helps, but <a href="https://en.wikipedia.org/wiki/DigiNotar#Issuance_of_fraudulent_certificates"><u>attacks</u></a> are still possible, and mistakes are almost inevitable. <a href="https://blog.cloudflare.com/unauthorized-issuance-of-certificates-for-1-1-1-1/"><u>Earlier this year</u></a>, several certificates for Cloudflare's encrypted 1.1.1.1 resolver were issued without our involvement or authorization. This apparently occurred by accident, but it nonetheless put users of 1.1.1.1 at risk. (The mis-issued certificates have since been revoked.)</p><p>Ensuring mis-issuance is detectable is the job of the Certificate Transparency (CT) ecosystem. The basic idea is that each certificate issued by a CA gets added to a public <b>log</b>. Servers can audit these logs for certificates issued in their name. If ever a certificate is issued that they didn't request itself, the server operator can prove the issuance happened, and the PKI ecosystem can take action to prevent the certificate from being trusted by clients.</p><p>Major browsers, including Firefox and Chrome and its derivatives, require certificates to be logged before they can be trusted. For example, Chrome, Safari, and Firefox will only accept the server's certificate if it appears in at least two logs the browser is configured to trust. 
This policy is easy to state, but tricky to implement in practice:</p><ol><li><p>Operating a CT log has historically been fairly expensive. Logs ingest billions of certificates over their lifetimes: when an incident happens, or even just under high load, it can take some time for a log to make a new entry available for auditors.</p></li><li><p>Clients can't really audit logs themselves, since this would expose their browsing history (i.e., the servers they wanted to connect to) to the log operators.</p></li></ol><p>The solution to both problems is to include a signature from the CT log along with the certificate. The signature is produced immediately in response to a request to log a certificate, and attests to the log's intent to include the certificate in the log within 24 hours.</p><p>Per browser policy, certificate transparency adds <b>+2</b> signatures to the TLS handshake, one for each log. This brings us to a total of <b>5 signatures and 2 public keys</b> in a typical handshake on the public web.</p>
    <div>
      <h3>The future WebPKI</h3>
      <a href="#the-future-webpki">
        
      </a>
    </div>
    <p>The WebPKI is a living, breathing, and highly distributed system. We've had to patch it a number of times over the years to keep it going, but on balance it has served our needs quite well — until now.</p><p>Previously, whenever we needed to update something in the WebPKI, we would tack on another signature. This strategy has worked because conventional cryptography is so cheap. But <b>5 signatures and 2 public keys </b>on average for each TLS handshake is simply too much to cope with for the larger PQ signatures that are coming.</p><p>The good news is that by moving what we already have around in clever ways, we can drastically reduce the number of signatures we need.</p>
    <div>
      <h3>Crash course on Merkle Tree Certificates</h3>
      <a href="#crash-course-on-merkle-tree-certificates">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>Merkle Tree Certificates (MTCs)</u></a> is a proposal for the next generation of the WebPKI that we are implementing and plan to deploy on an experimental basis. Its key features are as follows:</p><ol><li><p>All the information a client needs to validate a Merkle Tree Certificate can be disseminated out-of-band. If the client is sufficiently up to date, then the TLS handshake needs just <b>1 signature, 1 public key, and 1 Merkle tree inclusion proof</b>. This is quite small, even if we use post-quantum algorithms.</p></li><li><p>The MTC specification makes certificate transparency a first-class feature of the PKI by having each CA run its own log of exactly the certificates it issues.</p></li></ol><p>Let's poke our head under the hood a little. Below is an MTC generated by one of our internal tests. This would be transmitted from the server to the client in the TLS handshake:</p>
            <pre><code>-----BEGIN CERTIFICATE-----
MIICSzCCAUGgAwIBAgICAhMwDAYKKwYBBAGC2ksvADAcMRowGAYKKwYBBAGC2ksv
AQwKNDQzNjMuNDguMzAeFw0yNTEwMjExNTMzMjZaFw0yNTEwMjgxNTMzMjZaMCEx
HzAdBgNVBAMTFmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wWTATBgcqhkjOPQIBBggq
hkjOPQMBBwNCAARw7eGWh7Qi7/vcqc2cXO8enqsbbdcRdHt2yDyhX5Q3RZnYgONc
JE8oRrW/hGDY/OuCWsROM5DHszZRDJJtv4gno2wwajAOBgNVHQ8BAf8EBAMCB4Aw
EwYDVR0lBAwwCgYIKwYBBQUHAwEwQwYDVR0RBDwwOoIWY2xvdWRmbGFyZXJlc2Vh
cmNoLmNvbYIgc3RhdGljLWN0LmNsb3VkZmxhcmVyZXNlYXJjaC5jb20wDAYKKwYB
BAGC2ksvAAOB9QAAAAAAAAACAAAAAAAAAAJYAOBEvgOlvWq38p45d0wWTPgG5eFV
wJMhxnmDPN1b5leJwHWzTOx1igtToMocBwwakt3HfKIjXYMO5CNDOK9DIKhmRDSV
h+or8A8WUrvqZ2ceiTZPkNQFVYlG8be2aITTVzGuK8N5MYaFnSTtzyWkXP2P9nYU
Vd1nLt/WjCUNUkjI4/75fOalMFKltcc6iaXB9ktble9wuJH8YQ9tFt456aBZSSs0
cXwqFtrHr973AZQQxGLR9QCHveii9N87NXknDvzMQ+dgWt/fBujTfuuzv3slQw80
mibA021dDCi8h1hYFQAA
-----END CERTIFICATE-----</code></pre>
            <p>This looks like your average PEM-encoded certificate. Let's decode it and look at the parameters:</p>
            <pre><code>$ openssl x509 -in merkle-tree-cert.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 531 (0x213)
        Signature Algorithm: 1.3.6.1.4.1.44363.47.0
        Issuer: 1.3.6.1.4.1.44363.47.1=44363.48.3
        Validity
            Not Before: Oct 21 15:33:26 2025 GMT
            Not After : Oct 28 15:33:26 2025 GMT
        Subject: CN=cloudflareresearch.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:70:ed:e1:96:87:b4:22:ef:fb:dc:a9:cd:9c:5c:
                    ef:1e:9e:ab:1b:6d:d7:11:74:7b:76:c8:3c:a1:5f:
                    94:37:45:99:d8:80:e3:5c:24:4f:28:46:b5:bf:84:
                    60:d8:fc:eb:82:5a:c4:4e:33:90:c7:b3:36:51:0c:
                    92:6d:bf:88:27
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Subject Alternative Name:
                DNS:cloudflareresearch.com, DNS:static-ct.cloudflareresearch.com
    Signature Algorithm: 1.3.6.1.4.1.44363.47.0
    Signature Value:
        00:00:00:00:00:00:02:00:00:00:00:00:00:00:02:58:00:e0:
        44:be:03:a5:bd:6a:b7:f2:9e:39:77:4c:16:4c:f8:06:e5:e1:
        55:c0:93:21:c6:79:83:3c:dd:5b:e6:57:89:c0:75:b3:4c:ec:
        75:8a:0b:53:a0:ca:1c:07:0c:1a:92:dd:c7:7c:a2:23:5d:83:
        0e:e4:23:43:38:af:43:20:a8:66:44:34:95:87:ea:2b:f0:0f:
        16:52:bb:ea:67:67:1e:89:36:4f:90:d4:05:55:89:46:f1:b7:
        b6:68:84:d3:57:31:ae:2b:c3:79:31:86:85:9d:24:ed:cf:25:
        a4:5c:fd:8f:f6:76:14:55:dd:67:2e:df:d6:8c:25:0d:52:48:
        c8:e3:fe:f9:7c:e6:a5:30:52:a5:b5:c7:3a:89:a5:c1:f6:4b:
        5b:95:ef:70:b8:91:fc:61:0f:6d:16:de:39:e9:a0:59:49:2b:
        34:71:7c:2a:16:da:c7:af:de:f7:01:94:10:c4:62:d1:f5:00:
        87:bd:e8:a2:f4:df:3b:35:79:27:0e:fc:cc:43:e7:60:5a:df:
        df:06:e8:d3:7e:eb:b3:bf:7b:25:43:0f:34:9a:26:c0:d3:6d:
        5d:0c:28:bc:87:58:58:15:00:00</code></pre>
            <p>While some of the parameters probably look familiar, others will look unusual. On the familiar side, the subject and public key are exactly what we might expect: the DNS name is <code>cloudflareresearch.com</code> and the public key is for a familiar signature algorithm, ECDSA-P256. This algorithm is not PQ, of course — in the future we would put ML-DSA-44 there instead.</p><p>On the unusual side, OpenSSL appears not to recognize the issuer's signature algorithm and just prints the raw OID and the bytes of the signature. There's a good reason for this: the MTC does not contain a signature at all! So what exactly are we looking at?</p><p>The trick that lets us leave out signatures is that a Merkle Tree Certification Authority (MTCA) produces its <i>signatureless</i> certificates <i>in batches</i> rather than individually. In place of a signature, the certificate carries an <b>inclusion proof</b> showing that it belongs to a batch of certificates signed by the MTCA.</p><p>To understand how inclusion proofs work, let's think about a slightly simplified version of the MTC specification. To issue a batch, the MTCA arranges the unsigned certificates into a data structure called a <b>Merkle tree</b> that looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LGhISsS07kbpSgDkqx8p2/68e3b36deeca7f97139654d2c769df68/image3.png" />
          </figure><p>Each leaf of the tree corresponds to a certificate, and each inner node is equal to the hash of its children. To sign the batch, the MTCA uses its secret key to sign the head of the tree. The structure of the tree guarantees that each certificate in the batch was signed by the MTCA: if we tried to tweak the bits of any one of the certificates, the treehead would end up with a different value, which would cause signature verification to fail.</p><p>An inclusion proof for a certificate consists of the hash of each sibling node along the path from the certificate to the treehead:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UZZHkRwsBLWXRYeop4rXv/8598cde48c27c112bc4992889f3d5799/image1.gif" />
          </figure><p>Given a validated treehead, this sequence of hashes is sufficient to prove inclusion of the certificate in the tree. This means that, in order to validate an MTC, the client also needs to obtain the signed treehead from the MTCA.</p><p>This is the key to MTC's efficiency:</p><ol><li><p>Signed treeheads can be disseminated to clients out-of-band and validated offline. Each validated treehead can then be used to validate any certificate in the corresponding batch, eliminating the need to obtain a signature for each server certificate.</p></li><li><p>During the TLS handshake, the client tells the server which treeheads it has. If the server has a signatureless certificate covered by one of those treeheads, then it can use that certificate to authenticate itself. That's <b>1 signature, 1 public key, and 1 inclusion proof</b> per handshake to authenticate the server.</p></li></ol><p>Now, that's the simplified version. MTC proper has some more bells and whistles. To start, it doesn’t create a separate Merkle tree for each batch, but instead grows a single large tree, which improves transparency. As this tree grows, (sub)tree heads are periodically selected to be shipped to browsers; we call these <b>landmarks</b>. In the common case, browsers will be able to fetch the most recent landmarks, and servers can wait for batch issuance, but we need a fallback: MTC also supports certificates that can be issued immediately and don’t require landmarks to be validated, though these are not as small. A server would provision both types of Merkle tree certificates, so that the common case is fast and the exceptional case is slow, but at least it’ll work.</p>
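<p>To make the batch-signing mechanics concrete, here is a toy Merkle tree in Python. This is for intuition only and is not the encoding from the MTC draft: the hash choice, domain-separation bytes, and power-of-two batch size are all simplifications of ours:</p>

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up; returns the levels, leaves first."""
    level = [h(b"\x00" + leaf) for leaf in leaves]  # hash each certificate
    levels = [level]
    while len(level) != 1:
        # Each inner node is the hash of its two children.
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes along the path from leaf `index` to the treehead."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])  # sibling of the current node
        index //= 2
    return proof

def verify(leaf, index, proof, treehead):
    """Recompute the path to the treehead and compare."""
    node = h(b"\x00" + leaf)
    for sibling in proof:
        if index % 2 == 0:
            node = h(b"\x01" + node + sibling)
        else:
            node = h(b"\x01" + sibling + node)
        index //= 2
    return node == treehead

batch = [f"certificate {i}".encode() for i in range(8)]  # a toy batch
levels = build_tree(batch)
treehead = levels[-1][0]  # the MTCA signs this single hash
proof = inclusion_proof(levels, 5)
assert verify(batch[5], 5, proof, treehead)
```

<p>Note how the proof grows only logarithmically: a batch of about a million certificates needs just 20 sibling hashes (640 bytes with SHA-256), which is what makes signatureless certificates so compact.</p>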
    <div>
      <h2>Experimental deployment</h2>
      <a href="#experimental-deployment">
        
      </a>
    </div>
    <p>Ever since early designs for MTCs emerged, we’ve been eager to experiment with the idea. In line with the IETF principle of “<a href="https://www.ietf.org/runningcode/"><u>running code</u></a>”, it often takes implementing a protocol to work out kinks in the design. At the same time, we cannot risk the security of users. In this section, we describe our approach to experimenting with aspects of the Merkle Tree Certificates design <i>without</i> changing any trust relationships.</p><p>Let’s start with what we hope to learn. We have lots of questions whose answers can help to either validate the approach, or uncover pitfalls that require reshaping the protocol — in fact, an implementation of an early MTC draft by <a href="https://www.cs.ru.nl/masters-theses/2025/M_Pohl___Implementation_and_Analysis_of_Merkle_Tree_Certificates_for_Post-Quantum_Secure_Authentication_in_TLS.pdf"><u>Maximilian Pohl</u></a> and <a href="https://www.ietf.org/archive/id/draft-davidben-tls-merkle-tree-certs-07.html#name-acknowledgements"><u>Mia Celeste</u></a> did exactly this. We’d like to know:</p><p><b>What breaks?</b> Protocol ossification (the tendency of implementation bugs to make it harder to change a protocol) is an ever-present issue with deploying protocol changes. For TLS in particular, despite having built-in flexibility, time after time we’ve found that if that flexibility is not regularly used, there will be buggy implementations and middleboxes that break when they see things they don’t recognize. TLS 1.3 deployment <a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/"><u>took years longer</u></a> than we hoped for this very reason. 
And more recently, the rollout of PQ key exchange in TLS caused the Client Hello to be split over multiple TCP packets, something that many middleboxes <a href="https://tldr.fail/"><u>weren't ready for</u></a>.</p><p><b>What is the performance impact?</b> In fact, we expect MTCs to <i>reduce</i> the size of the handshake, even compared to today's non-PQ certificates. They will also reduce CPU cost: ML-DSA signature verification is about as fast as ECDSA, and there will be far fewer signatures to verify. We therefore expect to see a <i>reduction in latency</i>, and would like to see whether the improvement is measurable.</p><p><b>What fraction of clients will stay up to date? </b>Getting the performance benefit of MTCs requires the clients and servers to be roughly in sync with one another. We expect MTCs to have fairly short lifetimes, a week or so. This means that if the client's latest landmark is older than a week, the server would have to fall back to a larger certificate. Knowing how often this fallback happens will help us tune the parameters of the protocol to make fallbacks less likely.</p><p>In order to answer these questions, we are implementing MTC support in our TLS stack and in our certificate issuance infrastructure. For their part, Chrome is implementing MTC support in their own TLS stack and will stand up infrastructure to disseminate landmarks to their users.</p><p>As we've done in past experiments, we plan to enable MTCs for a subset of our free customers with enough traffic that we will be able to get useful measurements. Chrome will control the experimental rollout: they can ramp up slowly, measuring as they go and rolling back if and when bugs are found.</p><p>Which leaves us with one last question: who will run the Merkle Tree CA?</p>
    <div>
      <h3>Bootstrapping trust from the existing WebPKI</h3>
      <a href="#bootstrapping-trust-from-the-existing-webpki">
        
      </a>
    </div>
    <p>Standing up a proper CA is no small task: it takes years to become trusted by major browsers. That’s why Cloudflare isn’t going to become a “real” CA for this experiment, and Chrome isn’t going to trust us directly.</p><p>Instead, to make progress in a reasonable timeframe without sacrificing due diligence, we plan to "mock" the role of the MTCA. We will run an MTCA (on <a href="https://github.com/cloudflare/azul/"><u>Workers</u></a>, based on our <a href="https://blog.cloudflare.com/azul-certificate-transparency-log/"><u>StaticCT logs</u></a>), but for each MTC we issue, we will also publish an existing certificate from a trusted CA that agrees with it. We call this the <b>bootstrap certificate</b>. When Chrome’s infrastructure pulls updates from our MTCA log, it will also pull these bootstrap certificates and check whether they agree. Only if they do will it push the corresponding landmarks to Chrome clients. In other words, Cloudflare is effectively just “re-encoding” an existing certificate (with domain validation performed by a trusted CA) as an MTC, and Chrome is using certificate transparency to keep us honest.</p>
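<p>Conceptually, the agreement check is a field-by-field comparison between the MTC and its bootstrap certificate. The sketch below is our own illustration, not Chrome's actual logic; the dict representation and field names are hypothetical stand-ins for parsed X.509 fields:</p>

```python
def mtc_agrees_with_bootstrap(mtc: dict, bootstrap: dict) -> bool:
    """Hypothetical check: the MTC must bind the same public key to the
    same DNS names, within the bootstrap certificate's validity window."""
    return (
        mtc["public_key"] == bootstrap["public_key"]
        and set(mtc["dns_names"]).issubset(bootstrap["dns_names"])
        and mtc["not_before"] >= bootstrap["not_before"]
        and bootstrap["not_after"] >= mtc["not_after"]
    )

# A short-lived MTC covered by a longer-lived bootstrap certificate.
mtc = {"public_key": "pk-A", "dns_names": ["cloudflareresearch.com"],
       "not_before": 100, "not_after": 107}
bootstrap = {"public_key": "pk-A",
             "dns_names": ["cloudflareresearch.com",
                           "static-ct.cloudflareresearch.com"],
             "not_before": 10, "not_after": 365}
assert mtc_agrees_with_bootstrap(mtc, bootstrap)
```

<p>If any field disagrees — a different key, an extra DNS name, a validity window that sticks out — the MTC would be rejected and its landmark would never be pushed to clients.</p>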
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>With almost 50% of our traffic already protected by post-quantum encryption, we’re halfway to a fully post-quantum secure Internet. The second part of our journey, post-quantum certificates, is the harder one, though. A simple drop-in upgrade has a noticeable performance impact and no security benefit before Q-day, which makes it a hard sell to enable by default today. But here we are playing with fire: migrations always take longer than expected. If we want to keep a ubiquitously private and secure Internet, we need a post-quantum solution that’s performant enough to be enabled by default <b>today</b>.</p><p>Merkle Tree Certificates solve this problem by reducing the number of signatures and public keys to the bare minimum while maintaining the WebPKI's essential properties. We plan to roll out MTCs to a fraction of free accounts by early next year. This does not affect any visitors that are not part of the Chrome experiment, and for those that are, thanks to the bootstrap certificates, there is no impact on security.</p><p>We’re excited to keep the Internet fast <i>and</i> secure, and will report back soon on the results of this experiment: watch this space! MTC is evolving as we speak; if you want to get involved, please join the IETF <a href="https://mailman3.ietf.org/mailman3/lists/plants@ietf.org/"><u>PLANTS mailing list</u></a>.</p>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4jURWdZzyjdrcurJ4LlJ1z</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
            <dc:creator>Vânia Gonçalves</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
        </item>
        <item>
            <title><![CDATA[First-party tags in seconds: Cloudflare integrates Google tag gateway for advertisers ]]></title>
            <link>https://blog.cloudflare.com/google-tag-gateway-for-advertisers/</link>
            <pubDate>Thu, 08 May 2025 18:15:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare introduces a one-click integration with Google tag gateway for advertisers. ]]></description>
            <content:encoded><![CDATA[ <p>If you’re a marketer, advertiser, or business owner who runs your own website, there’s a good chance you’ve used Google tags to collect analytics or measure conversions. A <a href="https://support.google.com/analytics/answer/11994839?hl=en"><u>Google tag</u></a> is a single piece of code you can use across your entire website to send events to multiple destinations like Google Analytics and Google Ads.</p><p>Historically, the common way to deploy a Google tag has been to serve the JavaScript payload directly from Google’s domain. This can work quite well, but it can sometimes impact performance and measurement accuracy. That’s why Google developed a way to deploy a Google tag from your own first-party infrastructure via <a href="https://developers.google.com/tag-platform/tag-manager/server-side"><u>server-side tagging</u></a>. However, server-side tagging requires deploying and maintaining a separate server, which adds both cost and operational overhead.</p><p>That’s why we’re excited to be Google’s launch partner and announce our direct integration of Google tag gateway for advertisers, providing many of the same performance and accuracy benefits of server-side tagging without the overhead of maintaining a separate server.</p><p>Any <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain</a> proxied through Cloudflare can now serve your Google tags directly from that domain. This allows you to get better measurement signals for your website and can enhance your campaign performance, with early testers seeing an average 11% uplift in data signals. The setup only requires a few clicks — if you already have a Google tag snippet on the page, no changes to that tag are required.</p><p>Oh, did we mention it’s free? We’ve heard great feedback from customers who participated in a closed beta, and we are excited to open it up to all customers on any <a href="https://www.cloudflare.com/plans/">Cloudflare plan</a> today.</p>
    <div>
      <h3>Combining Cloudflare’s security and performance infrastructure with Google tag’s ease of use </h3>
      <a href="#combining-cloudflares-security-and-performance-infrastructure-with-google-tags-ease-of-use">
        
      </a>
    </div>
    <p>Google Tag Manager is <a href="https://radar.cloudflare.com/year-in-review/2024#website-technologies"><u>the most used tag management solution</u></a>: it makes a complex tagging ecosystem easy to use and requires less effort from web developers. That’s why we’re collaborating with the Ads measurement and analytics teams at Google to make the integration with Google tag gateway for advertisers as seamless and accessible as possible.</p><p>Site owners can enable this feature in one of two places: the Google tag console or the Cloudflare dashboard. When logging into the Google tag console, you’ll see an option to enable Google tag gateway for advertisers in the Admin settings tab.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1QUzHjBrer762UOvypV2Fh/4695fb3996591f001bb02b1be88e41ad/image1.png" />
          </figure><p>Alternatively, if you already know your tag ID and have admin access to your site’s Cloudflare account, you can enable the feature and edit the measurement ID and path directly from the Cloudflare dashboard:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/amyXiwUzZ0X2V3BGzuOja/b4480e0fe1b420cf7942b0d0957fd6f5/image2.png" />
          </figure>
    <div>
      <h3>Improved performance and measurement accuracy  </h3>
      <a href="#improved-performance-and-measurement-accuracy">
        
      </a>
    </div>
    <p>Before, if site owners wanted to serve first-party tags from their own domain, they had to set up a complex configuration: create a <a href="https://www.cloudflare.com/learning/dns/dns-records/dns-cname-record/">CNAME</a> entry for a new subdomain, create an Origin Rule to forward requests, and create a Transform Rule to include geolocation information.</p><p>This new integration dramatically simplifies the setup, making it a one-click integration by leveraging Cloudflare's position as a <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxy</u></a> for your domain.</p><p>In Google Tag Manager’s Admin settings, you can now connect your Cloudflare account and configure your measurement ID directly in Google, and it will push your config to Cloudflare.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ToLjUY5vNVWV5AGjxeONF/b79b4b32c24080b2461860cea58232a3/image3.png" />
          </figure><p>When you enable the Google tag gateway for advertisers, specific calls to Google’s measurement servers from your website are intercepted and re-routed through your domain. The result: instead of the browser requesting the tag script directly from a Google domain (e.g., <code>www.googletagmanager.com</code>), the request is routed seamlessly through your own domain (e.g., <code>www.example.com/metrics</code>).</p><p>Cloudflare acts as an intermediary for these requests. It first securely fetches the necessary Google tag JavaScript files from Google's servers in the background, then serves these scripts back to the end user's browser from your domain. This makes the request appear as a first-party request.</p><p>A bit more on how this works: when a browser requests <code>https://example.com/gtag/js?id=G-XXXX</code>, Cloudflare intercepts the request and rewrites the path into the original Google endpoint, preserving all query-string parameters and normalizing the <b>Origin</b> and <b>Referer</b> headers to match Google’s expectations. It then fetches the script on your behalf, and routes all subsequent measurement payloads through the same first-party proxy to the appropriate Google collection endpoints.</p><p>This setup also affects how cookies are stored on your domain. A <a href="https://www.cloudflare.com/learning/privacy/what-are-cookies/"><u>cookie</u></a> is a small text file that a website asks your browser to store on your computer. When you visit other pages on that same website, or return later, your browser sends that cookie back to the website's server. This allows the site to remember information about you or your preferences, like whether a user is logged in, items in a shopping cart, or, in the case of analytics and advertising, an identifier to recognize your browser across visits.</p><p>With Cloudflare’s integration with Google tag gateway for advertisers, the tag script itself is delivered <i>from your own domain</i>. When this script instructs the browser to set a cookie, the cookie is created and stored under your website's domain.</p>
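<p>At its core, the first-party routing described above is a path rewrite plus a proxied fetch. The Python sketch below illustrates just the rewrite step; the path-to-host mapping is our own illustrative guess, not Cloudflare's or Google's actual routing table:</p>

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative mapping from first-party paths to Google measurement hosts.
UPSTREAM_HOSTS = {
    "/gtag/js": "www.googletagmanager.com",
    "/g/collect": "www.google-analytics.com",
}

def rewrite_to_upstream(first_party_url: str) -> str:
    """Rewrite a first-party tag URL to the upstream Google endpoint,
    preserving the query string (e.g. the measurement ID)."""
    parts = urlsplit(first_party_url)
    host = UPSTREAM_HOSTS.get(parts.path)
    if host is None:
        raise ValueError("not a tag-gateway path")
    return urlunsplit(("https", host, parts.path, parts.query, ""))

print(rewrite_to_upstream("https://www.example.com/gtag/js?id=G-XXXX"))
```

<p>A real deployment would also forward request bodies for collection endpoints and normalize the <b>Origin</b> and <b>Referer</b> headers, as described above.</p>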
    <div>
      <h3>How can I get started? </h3>
      <a href="#how-can-i-get-started">
        
      </a>
    </div>
    <p>Detailed instructions to get started can be found <a href="https://developers.cloudflare.com/google-tag-gateway/"><u>here</u></a>. You can also log in to your Cloudflare Dashboard, navigate to the Engagement Tab, and select Google tag gateway in the navigation to set it up directly in the Cloudflare dashboard.</p> ]]></content:encoded>
            <category><![CDATA[Advertising]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Google Analytics]]></category>
            <category><![CDATA[Google]]></category>
            <guid isPermaLink="false">3wpdZp6NrwT8NcND208zZT</guid>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Nikhil Kothari</dc:creator>
        </item>
        <item>
            <title><![CDATA[Creating a single pane of glass for your multi-cloud Kubernetes workloads with Cloudflare]]></title>
            <link>https://blog.cloudflare.com/creating-a-single-pane-of-glass-for-your-multi-cloud-kubernetes-workloads-with-cloudflare/</link>
            <pubDate>Fri, 23 Feb 2018 17:00:00 GMT</pubDate>
            <description><![CDATA[ One of the great things about container technology is that it delivers the same experience and functionality across different platforms. This frees you as a developer from having to rewrite or update your application to deploy it on a new cloud provider. ]]></description>
            <content:encoded><![CDATA[ <p><i>(This is a crosspost of a blog post </i><a href="https://cloudplatform.googleblog.com/2018/02/creating-a-single-pane-of-glass-for-your-multi-cloud-Kubernetes-workloads-with-Cloudflare.html"><i>originally published</i></a><i> on Google Cloud blog)</i></p><p>One of the great things about container technology is that it delivers the same experience and functionality across different platforms. This frees you as a developer from having to rewrite or update your application to deploy it on a new cloud provider—or lets you run it across multiple cloud providers. With a containerized application running on multiple clouds, you can avoid lock-in, run your application on the cloud for which it’s best suited, and lower your overall costs.</p><p>If you’re using Kubernetes, you probably manage traffic to clusters and services across multiple nodes using internal load-balancing services, which is the most common and practical approach. But if you’re running an application on multiple clouds, it can be hard to distribute traffic intelligently among them. In this blog post, we show you how to use Cloudflare Load Balancer in conjunction with Kubernetes so you can start to achieve the benefits of a multi-cloud configuration.</p><p>To continue reading follow the Google Cloud blog <a href="https://cloudplatform.googleblog.com/2018/02/creating-a-single-pane-of-glass-for-your-multi-cloud-Kubernetes-workloads-with-Cloudflare.html">here</a> or if you are ready to get started we created a <a href="https://support.cloudflare.com/hc/en-us/articles/115003384591-Using-Kubernetes-on-GKE-and-AWS-with-Cloudflare-Load-Balancer">guide</a> on how to deploy an application using Kubernetes on GCP and AWS along with our Cloudflare Load Balancer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BCYwEZkuZnTcf06JBjegX/176554a91047c6c57c4bd83b815dc08f/Single_Pane_ofglass_Cloudflare.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Google Cloud]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Kubernetes]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Serverless]]></category>
            <guid isPermaLink="false">4ZQwt7DyISJPGuH5oeauP7</guid>
            <dc:creator>Kamilla Amirova</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare and Google Offer App Developers $100,000 in Cloud Platform Credits]]></title>
            <link>https://blog.cloudflare.com/cloudflare-and-google-offer-app-developers-100k/</link>
            <pubDate>Tue, 19 Sep 2017 13:04:00 GMT</pubDate>
            <description><![CDATA[ When Cloudflare started, our company needed two things: an initial group of users, and the finances to fund our development. ]]></description>
            <content:encoded><![CDATA[ <p>When Cloudflare started, our company needed two things: an initial group of users, and the finances to fund our development. We know most developers face the same issues. The <a href="https://www.cloudflare.com/apps">Cloudflare Apps Platform</a> solves the first problem by allowing third parties to develop applications that can be delivered across Cloudflare's edge network to any of the six million sites powered by Cloudflare. The <a href="https://www.cloudflare.com/fund/">Cloudflare Developer Fund</a> alleviates the second by giving developers the financial support they need to fund their company. Today, we are excited to announce another initiative that will make it possible for developers to make their app dreams a reality.</p><p>Cloudflare and Google Cloud are working together to offer developers the resources needed to quickly launch and scale Cloudflare Apps. This partnership will give any Cloudflare Apps developer the chance to access a wide range of benefits, including $3k–$100k in Google Cloud Platform (GCP) credits for one year at no cost. Some startups will also be eligible for 24/7 technical support and access to GCP’s technical solutions team. This supports a core belief of the Cloudflare Apps initiative: we want developers to focus on building great Apps, not worry about paying for infrastructure. Hundreds of startups have already built successful applications on Cloudflare Apps, and those applications have grown to serve hundreds of thousands of users. This program with Google Cloud significantly decreases the friction of getting up and running on Cloudflare Apps, allowing the next generation of developers and startups to make their living by building Apps.</p>
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p><b>$100k for Exceptional Apps:</b> After an approval process, your App could be awarded $20k in Cloud credits, extendable to $100k based on usage in the first year.</p><p><b>Up to $3,000 for early-stage startups:</b> If you are an early-stage startup, you are entitled to a $3,000 Google Cloud credit. Even if you aren't quite a startup yet, you are entitled to $500 if you are a first-time Google Cloud Platform user, and $200 if you are an existing user.</p><p><i>This offer has since been discontinued.</i></p>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Cloudflare Apps]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">2ujNbMZI9PFsRMWwnLPKnp</guid>
            <dc:creator>Alonso Bustamante</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare at Google NEXT 2017]]></title>
            <link>https://blog.cloudflare.com/cloudflare-at-google-next-2017/</link>
            <pubDate>Wed, 08 Mar 2017 00:44:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare team is headed to Google NEXT 2017 from March 8th - 10th at Moscone Center in San Francisco, CA. We’re excited to meet with customers, partners, and new friends.

 ]]></description>
            <content:encoded><![CDATA[ <p>The Cloudflare team is headed to <a href="https://cloudnext.withgoogle.com/">Google NEXT 2017</a> from March 8th - 10th at Moscone Center in San Francisco, CA. We’re excited to meet with customers, partners, and new friends.</p><p>Come learn about Cloudflare’s recent partnership with Google Cloud Platform (GCP) through their <a href="https://cloud.google.com/interconnect/cdn-interconnect">CDN Interconnect Program</a>. Cloudflare offers performance and security to <b>over 25,000 Google Cloud Platform customers</b>. The CDN Interconnect program allows Cloudflare’s servers to establish high-speed interconnections with Google Cloud Platform at various locations around the world, accelerating dynamic content while reducing bandwidth and egress billing costs.</p><p>We’ll be at booth C7 discussing the benefits of Cloudflare, our partnership with Google Cloud Platform, and handing out Cloudflare SWAG. In addition, our Co-Founder, Michelle Zatlyn, will be presenting “<a href="https://cloudnext.withgoogle.com/schedule#target=a-cloud-networking-blueprint-for-securing-your-workloads-6c6de36a-59a5-4e6f-b434-f57653ffc997">A Cloud Networking Blueprint for Securing Your Workloads</a>” on Thursday, March 9th from 11:20 AM to 12:20 PM at Moscone West, Room 2005.</p>
    <div>
      <h3>What is Google Cloud Platform’s CDN Interconnect Program?</h3>
      <a href="#what-is-google-cloud-platforms-cdn-interconnect-program">
        
      </a>
    </div>
    <p>Google Cloud Platform’s CDN Interconnect program allows select <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN providers</a> to establish direct interconnect links with Google’s edge network at various locations. Customers egressing network traffic from Google Cloud Platform through one of these links will benefit from the direct connectivity to the CDN providers and will be billed according to the lower Google Cloud Interconnect pricing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4AFurp2fQQxJSJ3HTeoJCN/5760b3b4ba34052a5c975a6a3c77fe5f/Screen-Shot-2017-03-07-at-12.34.23-PM-1.png" />
            
            </figure><p>Joint customers of Cloudflare and Google Cloud Platform can <b>expect bandwidth savings of up to 75% and receive </b><a href="https://cloud.google.com/interconnect/cdn-interconnect#pricing"><b>discounted egress pricing</b></a>. Egress traffic is traffic flowing from Google Cloud Platform servers to Cloudflare’s servers. The high-speed interconnections between GCP and Cloudflare speed up the delivery of dynamic content for visitors.</p>
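The savings come from two compounding effects: caching cuts the volume of traffic that ever egresses the origin, and the interconnect rate discounts what remains. A back-of-the-envelope sketch (the rates and cache-hit ratio below are hypothetical placeholders, not GCP's published pricing):

```python
def monthly_egress_cost(total_gb, cache_hit_ratio, rate_per_gb):
    # Only cache misses egress from the origin; cache hits are served
    # from the CDN's edge and incur no origin bandwidth.
    origin_gb = total_gb * (1 - cache_hit_ratio)
    return origin_gb * rate_per_gb

# Hypothetical example: 100 TB/month with no CDN at a standard rate,
# vs. a 75% cache-hit ratio at a discounted interconnect rate.
standard = monthly_egress_cost(100_000, 0.0, 0.12)
interconnect = monthly_egress_cost(100_000, 0.75, 0.08)
print(f"standard: ${standard:,.0f}/mo, with CDN + interconnect: ${interconnect:,.0f}/mo")
```

The two effects multiply, which is how savings in the 75%-and-up range are possible even when the rate discount alone is modest.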
    <div>
      <h3>How does the CDN Interconnect program work?</h3>
      <a href="#how-does-the-cdn-interconnect-program-work">
        
      </a>
    </div>
    <p>As part of this program, 41 Cloudflare data centers are directly connected to Google Cloud Platform’s infrastructure. When one of these Cloudflare data centers requests content from a Google Cloud Platform origin, it’s routed through a high-performance interconnect instead of the public Internet. This dramatically reduces latency for origin requests, and it also enables discounted Google Cloud Platform egress pricing in the US, Europe and Asia regions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6sSNOri8N1xrlCbUVRTgLc/f807d5d8514dae8b1eac1e26846291a5/google-cloud-how-it-works-1.svg" />
            
            </figure>
    <div>
      <h3>Joint Customer Stories</h3>
      <a href="#joint-customer-stories">
        
      </a>
    </div>
    <p>Quizlet and Discord, two prominent joint customers of Cloudflare and Google Cloud Platform, have shared their performance, security, and cost-savings stories.</p>
    <div>
      <h4>Discord</h4>
      <a href="#discord">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/case-studies/discord">Discord</a> is a free voice and text chat app designed specifically for gaming. In one year, Discord grew from 25,000 concurrent users to 2.4 million, a 9,000 percent growth. Discord’s 25 million registered users send 100 million messages per day across the platform, requiring a global presence with tremendous amounts of network throughput. As Discord experiences explosive growth, they're thankful Cloudflare helps keep bandwidth &amp; hardware costs down and web performance high.</p><ul><li><p>Saving $100,000 on annual hardware costs</p></li><li><p>Saving $100,000 monthly on Google Cloud Network Egress bill</p></li><li><p>Secure traffic even with spikes of websockets events up to 2 million/second</p></li></ul><p>Learn more about Discord’s use of Cloudflare on Google Cloud Platform: <a href="https://www.cloudflare.com/case-studies/discord">https://www.cloudflare.com/case-studies/discord</a></p>
    <div>
      <h4>Quizlet</h4>
      <a href="#quizlet">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/case-studies/quizlet">Quizlet</a> is the world’s largest student and teacher online learning community. Every month, over 20 million active learners from 130 countries practice and master more than 140 million study sets of content on every conceivable subject and topic. Quizlet’s <a href="http://www.alexa.com/siteinfo">Alexa ranking</a> is 588 globally and 104 in the United States, ranking it as one of the most highly-trafficked websites.</p><p>Quizlet receives performance and security benefits, while saving more than 50 percent on their Google Cloud networking egress bill by using Cloudflare.</p><ul><li><p>Saved 50% on monthly Google Cloud Network Egress bill</p></li><li><p>Reduced daily bandwidth use by 76 percent (over 10 TB)</p></li></ul><p>Learn more about Quizlet’s use of Cloudflare on Google Cloud Platform: <a href="https://www.cloudflare.com/case-studies/quizlet">https://www.cloudflare.com/case-studies/quizlet</a></p>
    <div>
      <h3>Presentation by Cloudflare Co-Founder Michelle Zatlyn</h3>
      <a href="#presentation-by-cloudflare-co-founder-michelle-zatlyn">
        
      </a>
    </div>
    <p>Cloudflare’s Co-Founder, Michelle Zatlyn, will be presenting alongside Google and Palo Alto Networks, in a talk titled “<a href="https://cloudnext.withgoogle.com/schedule#target=a-cloud-networking-blueprint-for-securing-your-workloads-6c6de36a-59a5-4e6f-b434-f57653ffc997">A Cloud Networking Blueprint for Securing Your Workloads</a>”.</p>
    <div>
      <h4>Date &amp; Time</h4>
      <a href="#date-time">
        
      </a>
    </div>
    <p>Thursday, March 9th | 11:20 AM - 12:20 PM | Moscone West, Room 2005</p>
    <div>
      <h4>Abstract</h4>
      <a href="#abstract">
        
      </a>
    </div>
    <p>Securing your workloads in the cloud requires shifting away from the traditional “perimeter” security to a “pervasive, hierarchical, scalable” security model. In this session, we discuss cloud networking best practices for securing enterprise and cloud-native workloads on Google Cloud Platform. We describe a network security blueprint that covers securing your virtual networks (VPCs), DDoS protection, using third-party security appliances and services, and visibility and analytics for your deployments. We also highlight Google’s experiences in delivering its own services securely and future trends in cloud network security.</p> ]]></content:encoded>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[Life at Cloudflare]]></category>
            <category><![CDATA[Growth]]></category>
            <guid isPermaLink="false">eBF3xKxxekq2jCbfemoOq</guid>
            <dc:creator>Brady Gentile</dc:creator>
        </item>
        <item>
            <title><![CDATA[You can now use Google Authenticator and any TOTP app for Two-Factor Authentication]]></title>
            <link>https://blog.cloudflare.com/you-can-now-use-google-authenticator/</link>
            <pubDate>Thu, 16 Feb 2017 21:52:49 GMT</pubDate>
            <description><![CDATA[ Since the very beginning, Cloudflare has offered two-factor authentication with Authy, and starting today we are expanding your options to keep your account safe with Google Authenticator and any Time-based One Time Password (TOTP) app of your choice. ]]></description>
            <content:encoded><![CDATA[ <p>Since the very beginning, Cloudflare has offered <a href="/choosing-a-two-factor-authentication-system/">two-factor authentication with Authy</a>, and starting today we are expanding your options to keep your account safe with Google Authenticator and any Time-based One Time Password (TOTP) app of your choice.</p><p>If you want to get started right away, <a href="https://www.cloudflare.com/a/account/my-account">visit your account settings</a>. Setting up Two-Factor with Google Authenticator or with any TOTP app is easy: just use the app to scan the barcode you see in the Cloudflare dashboard, enter the code the app returns, and you’re good to go.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Zx79zxgeaggLrqgyfoILY/49d4594bad96d6a5a57825b5b264561b/IMG_3701_5.png" />
            
            </figure>
    <div>
      <h3>Importance of Two-Factor Authentication</h3>
      <a href="#importance-of-two-factor-authentication">
        
      </a>
    </div>
    <p>Often when you hear that an account was ‘hacked’, it really means that the password was stolen.</p><blockquote><p>If the media stopped saying 'hacking' and instead said 'figured out their password', people would take password security more seriously.</p><p>— Khalil Sehnaoui (@sehnaoui) <a href="https://twitter.com/sehnaoui/status/816861012016197632">January 5, 2017</a></p></blockquote><p>Two-Factor authentication is sometimes thought of as something that should be used to protect <i>important</i> accounts, but the best practice is to always enable it when it is available. Without a second factor, any mishap involving your password can lead to a compromise. Journalist Mat Honan’s <a href="https://www.wired.com/2012/08/apple-amazon-mat-honan-hacking/">high profile compromise</a> in 2012 is a great example of the importance of two-factor authentication. When he later <a href="https://www.wired.com/2012/08/apple-amazon-mat-honan-hacking/">wrote about the incident</a> he said, "Had I used two-factor authentication for my Google account, it’s possible that none of this would have happened."</p>
    <div>
      <h3>What is a TOTP app?</h3>
      <a href="#what-is-a-totp-app">
        
      </a>
    </div>
    <p><a href="https://tools.ietf.org/html/rfc6238">TOTP (Time-based One Time Password)</a> is the mechanism that Google Authenticator, Authy and other two-factor authentication apps use to generate short-lived authentication codes. <a href="/choosing-a-two-factor-authentication-system">We’ve written previously on the blog</a> about how TOTP works.</p><p>We didn’t want to limit you to only using two-factor providers that we'd built integrations with, so we built an open TOTP integration in the Cloudflare dashboard, allowing you to set up two-factor with any app that implements TOTP. That means you can choose from a wide array of apps for logging into Cloudflare securely with two-factor such as <a href="https://m.vip.symantec.com/home.v">Symantec</a>, <a href="https://duo.com/product/trusted-users/two-factor-authentication/duo-mobile">Duo Mobile</a> and <a href="https://support.1password.com/guides/ios/?q=totp">1Password</a>.</p>
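The TOTP mechanism itself is compact enough to sketch in a few lines: the shared secret from the QR code is run through HMAC-SHA-1 keyed on the current 30-second time step, and a few digits of the result become the one-time code. A minimal illustration using only the Python standard library (checked against the RFC 6238 test vectors; this is a sketch of the algorithm, not Cloudflare's implementation):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA-1 over the big-endian time-step counter,
    then RFC 4226 dynamic truncation to a short decimal code."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if timestamp is None else timestamp
    counter = struct.pack(">Q", int(now // step))
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890",
# T=59 seconds -> 8-digit code 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, timestamp=59, digits=8))  # -> 94287082
```

Because both sides derive the code from the same secret and clock, the server can verify it without any network round trip to the authenticator app.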
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2NRvcimd3ivJwDuKgGRIOC/600caf42c23fcb0c9921fc19a759b003/Screen-Shot-2017-02-15-at-3.59.18-PM_5.png" />
            
            </figure>
    <div>
      <h3>Get Started</h3>
      <a href="#get-started">
        
      </a>
    </div>
    <p>If you want to enable Two-Factor Authentication with Google Authenticator or any other TOTP provider, visit <a href="https://cloudflare.com/a/account/my-account">your account settings here</a>. It’s easy to set up and the best way to secure your account. We also have step by step instructions for you <a href="https://support.cloudflare.com/hc/en-us/articles/200167866">in our knowledge base</a>.</p> ]]></content:encoded>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Authy]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">7tfSOTb7M9IB9n4KkDOsv0</guid>
            <dc:creator>Evan Johnson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Accelerated Mobile Links: Making the Mobile Web App-Quick]]></title>
            <link>https://blog.cloudflare.com/accelerated-mobile/</link>
            <pubDate>Thu, 12 Jan 2017 06:00:00 GMT</pubDate>
            <description><![CDATA[ We've predicted that more than half of the traffic to Cloudflare's network will come from mobile devices. Even when pages are formatted to be displayed on a small screen, the mobile web is built on traditional web protocols and technologies that were designed for desktop. ]]></description>
            <content:encoded><![CDATA[ <p>In 2017, we predict that more than half of the traffic to Cloudflare's network will come from mobile devices. Even when pages are formatted to be displayed on a small screen, the mobile web is built on traditional web protocols and technologies that were designed for desktop CPUs, network connections, and displays. As a result, browsing the mobile web feels sluggish compared with using native mobile apps.</p><p>In October 2015, the team at Google announced <a href="http://ampproject.org">Accelerated Mobile Pages (AMP)</a>, a new, open technology to make the mobile web as fast as native apps. Since then, a large number of publishers have adopted AMP. Today, 600 million pages across 700,000 different domains are available in the AMP format.</p><p>The majority of traffic to this AMP content comes from people running searches on Google.com. If a visitor finds content through some source other than a Google search, even if the content can be served from AMP, it typically won't be. As a result, the mobile web continues to be slower than it needs to be.</p>
    <div>
      <h4>Making the Mobile Web App-Quick</h4>
      <a href="#making-the-mobile-web-app-quick">
        
      </a>
    </div>
    <p>Cloudflare's <a href="https://www.cloudflare.com/website-optimization/accelerated-mobile-links/">Accelerated Mobile Links</a> helps solve this problem, making content, regardless of how it's discovered, app-quick. Once enabled, Accelerated Mobile Links automatically identifies links on a Cloudflare customer's site to content with an <a href="https://www.cloudflare.com/website-optimization/accelerated-mobile-links/">AMP</a> version available. If a link is clicked from a mobile device, the AMP content will be loaded nearly instantly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ieCvJzcYWwJ2YOKbUrKFZ/b68843dc767ab04a1d6e1d5ccb08b44f/amp-configuration-1.png" />
            
            </figure><p>To see how it works, try viewing this post from your mobile device and clicking any of these links:</p><ul><li><p><b>TechCrunch:</b> <a href="https://techcrunch.com/2017/01/11/cloudflare-explains-how-fbi-gag-order-impacted-business/">Cloudflare explains how FBI gag order impacted business</a></p></li><li><p><b>ZDNet:</b> <a href="http://www.zdnet.com/article/cloudflare-offers-http2-server-push-to-boost-internet-speeds/">CloudFlare figured out how to make the Web one second faster</a></p></li><li><p><b>The Register:</b> <a href="http://www.theregister.co.uk/2016/06/21/cloudflare_apologizes_for_telia_screwing_you_over/">CloudFlare apologizes for Telia screwing you over</a></p></li><li><p><b>AMP 8 Ball:</b> <a href="https://amp8ball.com/">Ask The Amp Magic 8-Ball A Yes Or No Question.</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LVE0NPtwr0c0kluYKwhAR/506cdb4a92c3c32b632e5cfdfc03d429/AML_animated_demo.gif" />
            
            </figure></li></ul>
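The AMP version of a page is discoverable from the page itself: publishers advertise it with a `<link rel="amphtml">` tag in the HTML head. A sketch of that discovery step using the Python standard library (illustrative only, not Cloudflare's actual implementation):

```python
from html.parser import HTMLParser

class AmpLinkFinder(HTMLParser):
    """Collect the href of any <link rel="amphtml"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.amp_url = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "amphtml":
            self.amp_url = a.get("href")

def find_amp_version(html):
    # Returns the advertised AMP URL, or None if the page has no AMP version.
    finder = AmpLinkFinder()
    finder.feed(html)
    return finder.amp_url

page = '<html><head><link rel="amphtml" href="https://example.com/article/amp/"></head></html>'
print(find_amp_version(page))  # -> https://example.com/article/amp/
```

A feature like Accelerated Mobile Links can use this signal to decide, per link, whether an app-quick AMP rendition exists to serve to mobile visitors.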
    <div>
      <h4>Increasing User Engagement</h4>
      <a href="#increasing-user-engagement">
        
      </a>
    </div>
    <p>One of the benefits of Accelerated Mobile Links is that AMP content is loaded in a viewer directly on the site that linked to the content. As a result, when a reader is done consuming the AMP content, closing the viewer returns them to the original source of the link. In that way, every Cloudflare customer's site can be more like a native mobile app, with the corresponding increase in user engagement.</p><p>For large publishers that want an even more branded experience, Cloudflare will offer the ability to customize the domain of the viewer to match the publisher's domain. This, for the first time, provides a seamless experience where AMP content can be consumed without having to send visitors to a Google-owned domain. If you're a large publisher interested in customizing the Accelerated Mobile Links viewer, you can contact <a>Cloudflare's team</a>.</p>
    <div>
      <h4>Innovating on AMP</h4>
      <a href="#innovating-on-amp">
        
      </a>
    </div>
    <p>While Google was the initial champion of AMP, the technologies involved are <a href="https://github.com/ampproject/amphtml">open</a>. We worked closely with the Google team in developing Cloudflare's Accelerated Mobile Links as well as our own AMP cache. Malte Ubl, the technical lead for the AMP Project at Google said of our collaboration:</p><p><i>"Working with Cloudflare on its AMP caching solution was as seamless as open-source development can be. Cloudflare has become a regular contributor on the project and made the code base better for all users of AMP. It is always a big step for a software project to go from supporting specific caches to many, and it is awesome to see Cloudflare’s elegant solution for this."</i></p><p>Cloudflare now powers the only <a href="https://github.com/ampproject/amphtml/blob/master/spec/amp-cache-guidelines.md">compliant</a> non-Google AMP cache with all the same performance and security benefits as Google.</p><p>In the spirit of open source, we're working to help develop updates to the project to address some of publishers' and end users' concerns. Specifically, here are some features we're developing to address concerns that have been expressed about AMP:</p><ul><li><p>Easier ways to share AMP content using publisher's original domains</p></li><li><p>Automatically redirecting desktop visitors from the AMP version back to the original version of the content</p></li><li><p>A way for end users who would prefer not to be redirected to the AMP version of content to opt out</p></li><li><p>The ability for publishers to brand the AMP viewer and serve it from their own domain</p></li></ul><p>Cloudflare is committed to the AMP project. Accelerated Mobile Links is the first AMP feature we're releasing, but we'll be doing more over the months to come. As of today, Accelerated Mobile Links is available to all Cloudflare customers for free. 
You can enable it in your <a href="https://www.cloudflare.com/a/performance/">Cloudflare Performance dashboard</a>. Stay tuned for more AMP features that will continue to increase the speed of the mobile web.</p> ]]></content:encoded>
            <category><![CDATA[Mobile]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[AMP]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Optimization]]></category>
            <guid isPermaLink="false">7ztp6Ye5JzaL00PeVFH5va</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Tools for debugging, testing and using HTTP/2]]></title>
            <link>https://blog.cloudflare.com/tools-for-debugging-testing-and-using-http-2/</link>
            <pubDate>Fri, 04 Dec 2015 12:56:36 GMT</pubDate>
            <description><![CDATA[ With CloudFlare's release of HTTP/2 for all our customers, the web suddenly has a lot of HTTP/2 connections. To get the most out of HTTP/2 you'll want to be using an up-to-date web browser (all the major browsers support HTTP/2). ]]></description>
            <content:encoded><![CDATA[ <p>With CloudFlare's release of HTTP/2 for all our customers, the web suddenly has a lot of HTTP/2 connections. To get the most out of HTTP/2 you'll want to be using an up-to-date web browser (all the major browsers support HTTP/2).</p><p>But there are some non-browser tools that come in handy when working with HTTP/2. This blog post starts with a useful browser add-on, and then delves into command-line tools, load testing, conformance verification, development libraries and packet decoding for HTTP/2.</p><p>If you know of something that I've missed, please write a comment.</p>
    <div>
      <h3>Browser Indicators</h3>
      <a href="#browser-indicators">
        
      </a>
    </div>
    <p>For Google Chrome there's a handy <a href="https://chrome.google.com/webstore/detail/http2-and-spdy-indicator/mpbpobfflnpcgagjijhmgnchggcjblin?hl=en">HTTP/2 and SPDY Indicator</a> extension that adds a colored lightning bolt to the browser bar showing the protocol being used when a web page is viewed.</p><p>The blue lightning bolt shown here indicates that the CloudFlare home page was served using HTTP/2:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5oLd6mBoInw37IHY1A2EuQ/82331d2180273d64fa55a756b757c0f8/Screen-Shot-2015-12-03-at-12-08-25.png" />
            
            </figure><p>A green lightning bolt indicates the site was served using SPDY and gives the SPDY version number. In this case SPDY/3.1:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5mis7IT15cGk32zcQAl73M/9747e4c2afc6d1a891e9b75ee6249895/Screen-Shot-2015-12-03-at-12-15-10.png" />
            
            </figure><p>A grey lightning bolt indicates that neither HTTP/2 nor SPDY was used. Here the web page was served using HTTP/1.1.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30VuLV8Q9qqPC2hpvtkU3l/3440efe822df32649b80a2547de2252c/Screen-Shot-2015-12-03-at-12-10-18.png" />
            
            </figure><p>There's a similar extension for <a href="https://addons.mozilla.org/en-GB/firefox/addon/spdy-indicator/">Firefox</a>.</p>
    <div>
      <h4>Online testing</h4>
      <a href="#online-testing">
        
      </a>
    </div>
    <p>There's also a handy <a href="https://tools.keycdn.com/http2-test">online tool</a> to check any individual web site.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/DwWwyYZflqaees6W4aJaw/fea382962f1dbb50827f29b480cb72a8/Screen-Shot-2015-12-03-at-12-22-11.png" />
            
            </figure>
    <div>
      <h4>Claire</h4>
      <a href="#claire">
        
      </a>
    </div>
    <p>CloudFlare also has a Google Chrome extension called <a href="https://chrome.google.com/webstore/detail/claire/fgbpcgddpmjmamlibbaobboigaijnmkl">Claire</a> that gives information about how a web page was loaded. For example here's the information that Claire shows for a site using CloudFlare that uses <a href="https://www.cloudflare.com/ipv6">IPv6</a>, <a href="https://www.cloudflare.com/railgun">Railgun</a>, and HTTP/2.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xZKZgNYat4xEKVCyzYhn4/c4db4e2c32cccd4e78b8361fddb4ebcf/Screen-Shot-2015-12-03-at-17-12-50.png" />
            
            </figure>
    <div>
      <h3>Command-line Tools</h3>
      <a href="#command-line-tools">
        
      </a>
    </div>
    <p>There's a handy command-line tool called <a href="https://github.com/stefanjudis/is-http2-cli">is-http2</a>, which is installed using npm as follows:</p>
            <pre><code>npm install -g is-http2-cli</code></pre>
            <p>Once installed, you can check the HTTP/2 status of a web site on the command line:</p>
            <pre><code>$ is-http2 www.cloudflare.com
✓ HTTP/2 supported by www.cloudflare.com
Supported protocols: h2 spdy/3.1 http/1.1

$ is-http2 www.amazon.com
× HTTP/2 not supported by www.amazon.com
Supported protocols: http/1.1</code></pre>
            <p>The <code>is-http2</code> tool is also useful because it gives you a list of the protocols advertised by the server. As you can see, <a href="http://www.cloudflare.com">www.cloudflare.com</a> supports HTTP/2, HTTP/1.1 and SPDY/3.1.</p>
    <div>
      <h4>curl</h4>
      <a href="#curl">
        
      </a>
    </div>
    <p>In version 7.43.0 the venerable <code>curl</code> tool got HTTP/2 support when it's linked with the <a href="https://nghttp2.org/">nghttp2</a> library. To build <code>curl</code> from source you'll need OpenSSL, zlib, nghttp2 and libev. I used the following sequence of commands.</p>
            <pre><code>$ curl -LO http://dist.schmorp.de/libev/libev-4.20.tar.gz
$ tar zvxf libev-4.20.tar.gz
$ cd libev-4.20
$ ./configure
$ make
$ sudo make install

$ curl -LO https://www.openssl.org/source/openssl-1.0.2d.tar.gz
$ tar zxvf openssl-1.0.2d.tar.gz
$ cd openssl-1.0.2d
$ ./config shared zlib-dynamic
$ make &amp;&amp; make test
$ sudo make install

$ curl -LO http://zlib.net/zlib-1.2.8.tar.gz
$ tar zxvf zlib-1.2.8.tar.gz
$ cd zlib-1.2.8
$ ./configure
$ make &amp;&amp; make test
$ sudo make install

$ curl -LO https://github.com/tatsuhiro-t/nghttp2/releases/download/v1.5.0/nghttp2-1.5.0.tar.gz
$ tar zxvf nghttp2-1.5.0.tar.gz
$ cd nghttp2-1.5.0
$ OPENSSL_CFLAGS="-I/usr/local/ssl/include" OPENSSL_LIBS="-L/usr/local/ssl/lib -lssl -lcrypto -ldl" ./configure
$ make
$ sudo make install

$ curl -LO http://curl.haxx.se/download/curl-7.46.0.tar.gz
$ tar zxvf curl-7.46.0.tar.gz
$ cd curl-7.46.0
$ ./configure
$ make &amp;&amp; make test
$ sudo make install
$ sudo ldconfig</code></pre>
            <p>Once installed, <code>curl</code> has a new <code>--http2</code> option that causes it to use HTTP/2 (if it can). The <code>-v</code> verbose option will show information about the use of HTTP/2:</p>
            <pre><code>$ curl -vso /dev/null --http2 https://www.cloudflare.com/
[...]
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* TCP_NODELAY set
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0xc3dba0)
[...]</code></pre>
            
    <div>
      <h4>nghttp2</h4>
      <a href="#nghttp2">
        
      </a>
    </div>
    <p>If you built <code>curl</code> using my instructions above you will have built and installed some tools that come with the <a href="https://nghttp2.org/">nghttp2</a> library. One of those is a command-line client called <code>nghttp</code>. It can be used like <code>curl</code> to download from the web using HTTP/2, but it also has a handy verbose option that shows the actual HTTP/2 frames sent and received.</p><p>By running it with <code>-nv</code> you get verbose HTTP/2 output and throw away the actual downloaded page. Here's its output when downloading <a href="http://www.cloudflare.com">www.cloudflare.com</a> using HTTP/2. On a terminal with color support it uses coloring to highlight different parts of the log.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XhV2QcvnnoVSUERRpyGgP/d204e9ebeaff7da0cac4ead13aa053a5/Screen-Shot-2015-12-03-at-13-24-46.png" />
            
            </figure>
    <div>
      <h4>h2c</h4>
      <a href="#h2c">
        
      </a>
    </div>
    <p>Another curl-like command-line tool for HTTP/2 is <a href="https://github.com/fstab/h2c">h2c</a>. It also enables dumping of HTTP/2 frames, runs in the background to keep connections to servers alive, and has a useful 'wiretap' mode for intercepting an HTTP/2 connection for debugging.</p><p>If you have <a href="https://golang.org/dl/">Go 1.5.1</a> installed then you can download it as follows:</p>
            <pre><code>$ export GO15VENDOREXPERIMENT=1
$ go get github.com/fstab/h2c</code></pre>
            <p>You start <code>h2c</code> with <code>h2c start &amp;</code>, which sets it running in the background. You can then communicate with a web server like this:</p>
            <pre><code> $ h2c connect www.cloudflare.com
 $ h2c get /
 $ h2c disconnect</code></pre>
            <p>And it will perform the HTTP request. To see detailed output at the HTTP/2 level, start <code>h2c</code> with the <code>--dump</code> parameter:</p>
            <pre><code> $ h2c start --dump</code></pre>
            <p>You will then get detailed output dumped by that process, in color, of the HTTP/2 frames being used.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QvyA73OaYNPT3ZJSOHay/ca5212fabad813467a33235b4653affd/Screen-Shot-2015-12-03-at-13-11-19.png" />
            
            </figure><p>Details of the wiretap feature are in <a href="http://unrestful.io/2015/08/28/wiretap.html">this blog post</a>.</p>
    <div>
      <h4>openssl s_client</h4>
      <a href="#openssl-s_client">
        
      </a>
    </div>
    <p>If you just want to find out what protocols a web site supports, OpenSSL's <code>s_client</code> can be used. If you specify an empty <code>-nextprotoneg</code> option, OpenSSL sends an empty TLS option asking for negotiation of the next protocol, and the server responds with a complete list of protocols it supports.</p>
            <pre><code>$ openssl s_client -connect www.cloudflare.com:443 -nextprotoneg ''
CONNECTED(00000003)
Protocols advertised by server: h2, spdy/3.1, http/1.1</code></pre>
            <p>There you can see that <a href="http://www.cloudflare.com">www.cloudflare.com</a> supports HTTP/2 (the h2), SPDY/3.1 and HTTP/1.1.</p>
    <div>
      <h4>h2i</h4>
      <a href="#h2i">
        
      </a>
    </div>
    <p>If you want to do low-level HTTP/2 debugging there's an interactive client called <a href="https://github.com/golang/net/tree/master/http2/h2i">h2i</a>. Once again it requires that you have Go installed. To get it, run</p>
            <pre><code>$ go get github.com/golang/net/http2/h2i</code></pre>
            <p>You can then use <code>h2i</code> to connect to a site that uses HTTP/2 and send it individual HTTP/2 frames. For example, here's the start of a session connecting to <a href="http://www.cloudflare.com">www.cloudflare.com</a> and requesting the home page using the <code>headers</code> command, which allows you to type in a standard HTTP/1.1 request.</p>
            <pre><code>$ h2i www.cloudflare.com
Connecting to www.cloudflare.com:443 ...
Connected to 198.41.214.163:443
Negotiated protocol "h2"
[FrameHeader SETTINGS len=18]
 [MAX_CONCURRENT_STREAMS = 128]
 [INITIAL_WINDOW_SIZE = 2147483647]
 [MAX_FRAME_SIZE = 16777215]
[FrameHeader WINDOW_UPDATE len=4]
  Window-Increment = 2147418112

h2i&gt; headers
 (as HTTP/1.1)&gt; GET / HTTP/1.1
 (as HTTP/1.1)&gt; Host: www.cloudflare.com
 (as HTTP/1.1)&gt;
  Opening Stream-ID 1:
  :authority = www.cloudflare.com
  :method = GET
  :path = /
  :scheme = https
 [FrameHeader HEADERS flags=END_HEADERS stream=1 len=819]
   :status = "200"
   server = "cloudflare-nginx"
   date = "Fri, 04 Dec 2015 10:36:15 GMT"
   content-type = "text/html"
   last-modified = "Thu, 03 Dec 2015 22:27:41 GMT"
   strict-transport-security = "max-age=31536000"
   x-content-type-options = "nosniff"
   x-frame-options = "SAMEORIGIN"
   cf-cache-status = "HIT"
   expires = "Fri, 04 Dec 2015 14:36:15 GMT"
   cache-control = "public, max-age=14400"
 [FrameHeader DATA stream=1 len=7261]
   "&lt;!DOCTYPE html&gt;\n&lt;html&gt;\n&lt;head&gt;\n&lt;!</code></pre>
            
    <div>
      <h3>Load Testing</h3>
      <a href="#load-testing">
        
      </a>
    </div>
    <p>The <a href="https://nghttp2.org">nghttp2</a> library also includes a load testing tool called <a href="https://nghttp2.org/documentation/h2load.1.html">h2load</a>, which can be used a little like <a href="https://httpd.apache.org/docs/2.2/programs/ab.html">ab</a>. There's a useful <a href="https://nghttp2.org/documentation/h2load-howto.html">HOWTO</a> on using the tool.</p><p>I ran it against the CloudFlare test server like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/hWDbuK231mcZwO1JOKtvo/b012e7a9e6863bd360992858f0e24b8d/Screen-Shot-2015-12-03-at-16-02-50.png" />
            
            </figure>
    <div>
      <h3>Conformance</h3>
      <a href="#conformance">
        
      </a>
    </div>
    <p>If you are testing an HTTP/2 implementation, there's a useful tool called <code>h2spec</code>, which runs through a conformance test against a real HTTP/2 server. To use it, first install <a href="https://golang.org/dl/">Go 1.5.1</a> and then do</p>
            <pre><code>$ go get github.com/summerwind/h2spec/cmd/h2spec</code></pre>
            <p>I only recommend it for testing locally running HTTP/2 servers. If you built <code>nghttp2</code> above while building <code>curl</code>, you will also have the <code>nghttpd</code> HTTP/2 server available. You can run <code>h2spec</code> against it like this:</p>
            <pre><code>$ nghttpd --no-tls -D -d /tmp 8888
$ h2spec -p 8888</code></pre>
            <p><code>h2spec</code> will run a battery of tests against the server and output conformance information with a reference to the relevant part of <a href="https://tools.ietf.org/html/rfc7540">RFC7540</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1fHO4ABgKvK4LFEnRKf3sT/5613c3aafbdefdeae8584f09d476b9ae/Screen-Shot-2015-12-03-at-14-49-07.png" />
            
            </figure><p>If you need to test a number of web sites to see which support HTTP/2 there's a tool called <a href="https://github.com/jgrahamc/h2scan">h2scan</a>. Feed it a list of web sites and it will check to see if they support HTTPS, SPDY/3.1 and HTTP/2.</p>
            <pre><code>$ cat examples
www.cloudflare.com
www.amazon.com
www.yahoo.com
$ h2scan --fields &lt; examples
name,resolves,port443Open,tlsWorks,httpsWorks,cloudflare,spdyAnnounced,http2Announced,spdyWorks,http2Works,npn
www.cloudflare.com,t,t,t,t,t,t,t,t,t,h2 spdy/3.1 http/1.1
www.amazon.com,t,t,t,t,f,f,f,-,-,http/1.1
www.yahoo.com,t,t,t,t,f,t,f,t,-,spdy/3.1 spdy/3 http/1.1 http/1.0</code></pre>
            
    <div>
      <h3>Useful Libraries</h3>
      <a href="#useful-libraries">
        
      </a>
    </div>
<p>If you are working in C and need to add HTTP/2 support to a program there's the <a href="https://nghttp2.org/">nghttp2</a> library, a full implementation of HTTP/2 with a simple interface. Their <a href="https://nghttp2.org/documentation/tutorial-client.html#libevent-client-c">HTTP/2 client tutorial</a> explains how to use the library to add HTTP/2 client capabilities. nghttp2 can also be used for <a href="https://nghttp2.org/documentation/tutorial-server.html#libevent-server-c">servers</a>.</p><p>Go programmers will get full HTTP/2 support with Go 1.6. If you can't wait until then, there's an extension package for HTTP/2: <a href="https://godoc.org/golang.org/x/net/http2">golang.org/x/net/http2</a>. Details <a href="https://http2.golang.org/">here</a>.</p><p>There's a pure Ruby implementation of HTTP/2 available from <a href="https://github.com/igrigorik/http-2">Ilya Grigorik</a>.</p><p>Haskell programmers can use the <a href="https://hackage.haskell.org/package/http2">http2</a> package on Hackage.</p>
    <div>
      <h3>Packet Snooping</h3>
      <a href="#packet-snooping">
        
      </a>
    </div>
<p>The popular Wireshark packet analyzer added HTTP/2 decoding in version <a href="https://www.wireshark.org/docs/relnotes/wireshark-1.12.0.html#_new_protocol_support">1.12.0</a> and fully decodes HTTP/2 frames. Unfortunately, most HTTP/2 traffic is sent over TLS, which means that, by default, Wireshark cannot decrypt the packets to reach the HTTP/2 frames inside.</p><p>Fortunately, there is a workaround if you are using Google Chrome for testing. Chrome can be made to save the symmetric keys used for TLS connections to a file, and Wireshark can read that file to decrypt those connections.</p><p>This is done by setting the <code>SSLKEYLOGFILE</code> environment variable before running Chrome. I'm running on Mac OS X and use Google Chrome Canary for testing, so I run:</p>
            <pre><code>% export SSLKEYLOGFILE=`pwd`/sslkey.log
% /Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary</code></pre>
<p>Google Chrome will then write session keys to that file. In Wireshark I configure it to read the file by going to Preferences and expanding the Protocols list.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3VtO5XasCO2zeTqVfoTSPw/bc93d70e94487845510f5a8bbb4a0b06/Screen-Shot-2015-12-03-at-15-48-59.png" />
            
</figure><p>Then I find SSL and set the (Pre)-Master-Secret log filename to point to the same file.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MluGb7E88xSg352ummOYT/7bd9b00a4580287fe629502873e5c81e/Screen-Shot-2015-12-03-at-15-49-44.png" />
            
            </figure><p>Then Wireshark can decode the TLS connections made by that browser. Here's the beginning of a connection between Google Chrome Canary and the experimental server <a href="https://http2.cloudflare.com/">https://http2.cloudflare.com/</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3g7dIhFoqbTfoj24iX4Yu/535556886753cc531e686595f96004c4/Screen-Shot-2015-12-03-at-15-51-04.png" />
            
            </figure>
    <div>
      <h3>Chrome Developer View</h3>
      <a href="#chrome-developer-view">
        
      </a>
    </div>
<p>Finally, if you are looking at the performance of your own web site it can be handy to understand which parts of the page were downloaded using HTTP/2 and which were not. You can do that pretty easily using the Developer view in Google Chrome. Here's a shot of the CloudFlare blog loaded in Chrome. There's an additional Protocol field available on the pop-up menu.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3KIWDKeDrHOsJYLQxKDFcT/f28684e2e951ad03a2d3db3db5075ee6/Screen-Shot-2015-12-04-at-10-46-21.png" />
            
</figure><p>Once added, you can sort by protocol to see which parts were served over HTTP/2 (the h2) and which used other protocols.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4yftvuhcm97tnJvQJou28Z/53e7051e0112f8c01ac7edb1d6d0d96d/Screen-Shot-2015-12-04-at-10-46-29.png" />
            
            </figure>
    <div>
      <h3>Learning about HTTP/2</h3>
      <a href="#learning-about-http-2">
        
      </a>
    </div>
    <ol><li><p>Short <a href="https://www.cloudflare.com/http2/what-is-http2/">introduction to HTTP/2</a> from CloudFlare.</p></li><li><p><a href="http://daniel.haxx.se/http2/">http2 explained</a> from the creator of <code>curl</code>.</p></li><li><p>The <a href="http://http2.github.io/">home page</a> of the HTTP/2 working group with lots of information plus a list of useful <a href="https://github.com/http2/http2-spec/wiki/Tools">tools</a>.</p></li></ol> ]]></content:encoded>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Claire]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">4j93RdB2JSheBauKZqi3x2</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why we raised $110m from Fidelity, Google, Microsoft, Baidu and Qualcomm]]></title>
            <link>https://blog.cloudflare.com/why-we-raised-110m-from-fidelity-google-microsoft-baidu-and-qualcomm/</link>
            <pubDate>Tue, 22 Sep 2015 16:21:56 GMT</pubDate>
            <description><![CDATA[ The past few years have been marked by tremendous growth for CloudFlare. At the time of our last fundraising in December 2012, CloudFlare was a team of 37 operating a network in 23 cities and 15 countries—today we number over 200 with a presence in 62 cities and 33 countries. ]]></description>
<content:encoded><![CDATA[ <p>The past few years have been marked by tremendous growth for CloudFlare. At the time of our last fundraising in December 2012, CloudFlare was a team of 37 operating a network in 23 cities and 15 countries—today we number over 200 with a presence in 62 cities and 33 countries. We’ve grown from delivering 85 billion page views per month for 500 thousand customers to nearly 1 <i>trillion</i> each month across 4 million Internet properties, all the while protecting our customers from hundreds of billions of cyber threats. The growth and resonance of our service since CloudFlare’s founding 5 years ago is beyond our wildest expectations, but it is only in the coming years that our scale and efforts to build a better Internet will become visible.</p><p>In 2016 alone we will more than double our global presence, increase the size of our network by an order of magnitude, and with that allow millions of new businesses and online publishers to accelerate and secure their online applications and harness the growing power of the Internet economy. Our service is built on the simple premise that any individual or business should be able to quickly and easily ensure the global performance and availability of their websites and mobile apps, and withstand withering and sophisticated cyber attacks—all without a single piece of hardware, or the need to build a global network and the team of engineers to manage it all.</p><p>Sustaining this level of growth requires investment. In this pursuit we’re pleased to announce that we have raised $110 million from Fidelity, Google, Microsoft, Baidu, Qualcomm and our existing investors. Beyond the additional capital raised, the broad, strategic participation in the round from many of the world’s leading technology firms validates the opportunity ahead of us, and positions us for the next chapter of our growth.</p><p>But before we write this new chapter, a bit about how we got here.</p>
    <div>
      <h3>A seismic shift is underway</h3>
      <a href="#a-seismic-shift-is-underway">
        
      </a>
    </div>
<p>Seismic shifts are the sorts of groundbreaking developments that fundamentally alter the ways in which organizations and industries operate. We came upon a few such shifts as CloudFlare was founded in 2009. Although these shifts were already well underway, their strength has increased exponentially each year.</p><p>These shifts are the massive movement of commerce and communication online, and the rapidly evolving means by which organizations deploy applications, increasingly in the cloud. Had one started a business in 2009, their first call might very well have been to Dell or HP to purchase servers (ours was to HP!), followed by calls to Oracle and Microsoft for the databases and software necessary to deliver the applications across the final tier of Internet hierarchy, the network edge. It is in this last tier that companies like Cisco capitalized on the last seismic shift at the network edge—the need to process and manage an explosion of network traffic directed towards the applications, and storage and compute platforms beneath.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UleICftf8i3CuKtNZ6fUx/2e0119fb0f32aeb2b219be06e695bd86/HourglassCF-3.png" />
            
            </figure><p><i>Excerpt from our fundraising presentation titled "Scalability and elasticity of the cloud will prevail"</i></p><p>Today it is almost inconceivable that an upstart business would purchase a single piece of hardware, much less call anyone to procure it. In less time, with less cost, and more elasticity, an organization can turn up any number of storage and compute instances on cloud based platforms. The on-premise, perpetual license-based software and databases of yesteryear are now also available as services. Even complex applications such as customer relationship management (CRM) and enterprise resource management (ERP) systems are now largely dominated by software as a service (SaaS) offerings. CloudFlare is now pioneering this same shift at the network edge. In other words, it’s time to say farewell to hardware appliances.</p><p>The network edge can be broadly described as the “control plane” for all traffic directed to the layers below. This control plane ensures that traffic to the layers below consistently reaches its destination, that authorized users receive appropriate access, and that hackers are kept at bay. Each of these appliances (e.g., firewalls, load balancers, DDoS mitigation appliances, WAN optimizers, malware scanners, authentication devices, among many others) look for patterns in traffic, apply stored rules against the patterns, and make network I/O decisions to block, re-route, and compress, among many other functions. Now, CloudFlare is able to perform each of these functions without a single piece of customer hardware, and across a global network with immensely greater scale and elasticity, and a small fraction of the latency and cost.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/KNLdCWKUwfrzNQB5iSGyy/aa4dc63f1d47ecda01332b395c0d6119/NetworkEdge.png" />
            
</figure><p><i>Excerpt from our fundraising presentation titled "The network edge now expands beyond the data center"</i></p><p>This shift at the network edge is further driven by its expansion, as well as the blurring of its <a href="https://www.cloudflare.com/learning/access-management/what-is-the-network-perimeter/">perimeter</a> (security jargon for the line delineating assets to be protected). This expansion is occurring across multiple vectors. Organizations now deploy their applications across multiple geographies to reduce latency, across disparate physical and cloud environments for redundancy and elasticity, and adopt third-party SaaS applications to replace internal business systems. What used to be solved by hardware appliances is now far better solved by cloud services.</p><p>CloudFlare is the new control plane for the network edge. With a simple DNS change, traffic to an organization’s applications and storage and compute environments is directed through CloudFlare’s global network no matter where they reside geographically, logically or virtually. With one of the largest edge networks globally, we can perform each of the aforementioned edge functions to accelerate, secure and ensure the availability of the applications behind us with blazing fast speed, absolute elasticity and an enormous return on investment compared to hardware-based solutions.</p><p>This is our vision, and one that is now shared with many of the world’s leading technology firms (and investors). Each of Fidelity, Microsoft, Baidu, Qualcomm and Google is uniquely positioned to support CloudFlare’s evolution in a rapidly evolving environment, and to address key questions. <i>What is our mobile strategy? What is our China strategy? How are we going to succeed with large enterprises? Will other giants partner or compete with us? How does CloudFlare become a public company?</i></p><p>Our choice of investors should hopefully provide some insight into our answers to each of these questions. Moreover, each of the participants in our round agreed to invest in the company without any special control provisions, board representation, economic treatment or even information rights. It was important for us as a company to continue to execute against our vision without distraction.</p>
    <div>
      <h3>Where to from here?</h3>
      <a href="#where-to-from-here">
        
      </a>
    </div>
    <p>We don’t precisely know, but if the past is any indicator, it may continue to be beyond our expectations. In a fun bit of CloudFlare history, we unearthed an e-mail between two of CloudFlare’s founders from May 2009.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/50MAFR0fDIrDXIetO2QafR/ff4ee7ecd6c9d40af40c07a6d5961a2e/CFHistory-1.png" />
            
            </figure><p>What might have sounded delusional to many five years ago (and they probably would have been right at the time), is a reality today. Five years later, and built upon the contributions of a talented team, our steadfast customers, and an unparalleled group of investors, we are well on the path to building something big and unique.</p><p><i>We’re always looking for feedback.</i> You asked for an intuitive, easy-to-use interface, and we <a href="/redesigning-cloudflare/">delivered</a>. Encryption and <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificates</a>? <a href="/introducing-universal-ssl/">Now free, and included with all plans</a>. A larger network to serve your global visitor base? We’re <a href="https://www.cloudflare.com/network-map">62 deep</a>, and not stopping. This feedback has helped tremendously over the past 5 years, so please keep it coming as we tackle the next five.</p><p><i>We’re looking for new team members.</i> Great people who are passionate about building a better Internet, and willing to tackle big problems. We are building a service that allows anyone to run a website or mobile app as fast and secure as the Internet giants, improving the surfing experience for over 2 billion Internet users, and empowering millions of businesses to securely transact online each day. If that sounds interesting, check out our job postings <a href="https://www.cloudflare.com/join-our-team">here</a>.</p><p>Thanks for your support over the past five years.</p><p><i>—The CloudFlare team</i></p> ]]></content:encoded>
            <category><![CDATA[Milestones]]></category>
            <category><![CDATA[Baidu]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Investors]]></category>
            <category><![CDATA[Partners]]></category>
            <guid isPermaLink="false">6nwV6MFTliYcl6OfjstrBS</guid>
            <dc:creator>Joshua Motta</dc:creator>
        </item>
        <item>
            <title><![CDATA[Google PageSpeed Service customers: migrate to CloudFlare for acceleration]]></title>
            <link>https://blog.cloudflare.com/google-pagespeed-service-customers-migrate-to-cloudflare-for-acceleration/</link>
            <pubDate>Fri, 08 May 2015 10:02:55 GMT</pubDate>
<description><![CDATA[ This week, Google announced that its hosted PageSpeed Service will be shut down. Everyone using the hosted service needs to move their site elsewhere before August 3, 2015 to avoid breaking their website. ]]></description>
<content:encoded><![CDATA[ <p>This week, Google <a href="https://developers.google.com/speed/pagespeed/service/Deprecation">announced</a> that its hosted PageSpeed Service will be shut down. Everyone using the hosted service needs to move their site elsewhere before August 3, 2015 to avoid breaking their website.</p><p>We're inviting these hosted customers: don't wait...migrate your site to CloudFlare for global acceleration (and more) <a href="https://support.cloudflare.com/hc/en-us/articles/205227958-How-to-migrate-from-Google-s-hosted-PageSpeed-Service-to-CloudFlare">right now</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KVJd8Ch9FpV3dAxQTY1ib/ae8dfbc92e18d9888dc5241cfcace342/lifeboat-640-wide-1.jpg" />
            
</figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/24736216@N07/5750297661/in/photolist-9L8Ma8-9QZLTf-5rdJWU-rwcQr-6fVWHa-6G9rSh-7MWvj9-cT2sLQ-5XaYtU-pNgRbm-4bEjXh-qEpEFx-293kuW-4geSMX-q8frrm-s3Vgta-4geSUK-4giVMU-de52Pd-qaT6G3-ncr1By-rusfcC-dHf2BF-8fEMjj-qBmxFr-agjhEV-qChWS1-c6qmRW-kCzus2-c6qvZh-c6qvuo-c6qq9C-c6qozy-c6qnjs-aseaPP-7HxQFn-gv5pJC-6oGyoT-qoa4dZ-4bEjwb-4jKcE-5oGWYj-8rH9Px-5gtb1f-7AbnZs-8DBmXK-qEpDw8-aoESVc-9FKMWE-pvF8mv">image</a> by <a href="https://www.flickr.com/photos/24736216@N07/">Roger</a>. As TechCrunch <a href="http://techcrunch.com/2015/05/06/google-shuts-down-pagespeed-service-for-accelerating-websites/">wrote</a>: "In many ways, PageSpeed Service is similar to what CloudFlare does but without the focus on security."</p>
    <div>
      <h3>What is PageSpeed?</h3>
      <a href="#what-is-pagespeed">
        
      </a>
    </div>
<p>PageSpeed started as — and <a href="http://modpagespeed.com">continues</a> to be — a Google-created software module for the Apache webserver that rewrites webpages to reduce latency and bandwidth, helping make the web faster.</p><p>Google <a href="http://googlewebmastercentral.blogspot.com/2011/07/page-speed-service-web-performance.html">introduced</a> their hosted PageSpeed Service in July 2011, to save webmasters the hassle of installing the software module.</p><p>It's the hosted service that is being discontinued.</p>
    <div>
      <h3>CloudFlare performance</h3>
      <a href="#cloudflare-performance">
        
      </a>
    </div>
    <p>CloudFlare provides similar capabilities to PageSpeed, such as <a href="https://support.cloudflare.com/hc/en-us/articles/200167996-Does-CloudFlare-have-HTML-JavaScript-and-CSS-compression-features-">minification</a>, <a href="https://support.cloudflare.com/hc/en-us/articles/200065689-What-does-Polish-do-">image compression</a>, and <a href="https://support.cloudflare.com/hc/en-us/articles/200168056-What-does-Rocket-Loader-do-">asynchronous loading</a>.</p><p>Additionally, CloudFlare offers more performance gains through a <a href="https://www.cloudflare.com/network-map">global network footprint</a>, <a href="https://www.cloudflare.com/railgun">Railgun</a> for dynamic content acceleration, built-in <a href="/staying-up-to-date-with-the-latest-protocols-spdy-3-1/">SPDY</a> support, and more.</p>
    <div>
      <h3>Not just speed</h3>
      <a href="#not-just-speed">
        
      </a>
    </div>
    <p>PageSpeed Service customers care about performance, and CloudFlare delivers. CloudFlare also includes <a href="https://www.cloudflare.com/features-security">security</a>, <a href="https://www.cloudflare.com/ssl">SSL</a>, <a href="https://www.cloudflare.com/dns">DNS</a>, and more, at all <a href="https://www.cloudflare.com/plans">plans</a>, including Free.</p>
    <div>
      <h3>Setting up CloudFlare</h3>
      <a href="#setting-up-cloudflare">
        
      </a>
    </div>
<p>There's no need to install any software or change your host. CloudFlare works at the network level, with a five-minute sign-up process.</p><p>Here are simple instructions for <a href="https://support.cloudflare.com/hc/en-us/articles/205227958-How-to-migrate-from-Google-s-hosted-PageSpeed-Service-to-CloudFlare">migrating from the hosted PageSpeed Service to CloudFlare</a>.</p>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Speed]]></category>
            <guid isPermaLink="false">62RwrQgLrxOePLJzNBFfK</guid>
            <dc:creator>John Roberts</dc:creator>
        </item>
        <item>
            <title><![CDATA[CloudFlare is now a Google Cloud Platform Technology Partner]]></title>
            <link>https://blog.cloudflare.com/cloudflare-is-now-a-google-cloud-platform-technology-partner/</link>
            <pubDate>Mon, 13 Apr 2015 14:02:21 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce that CloudFlare has just been named a Google Cloud Platform Technology Partner. So what does this mean? ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We’re excited to announce that CloudFlare has <a href="http://www.marketwired.com/press-release/cloudflare-named-google-cloud-platform-technology-partner-2009073.htm">just been named</a> a Google Cloud Platform Technology Partner. So what does this mean? Now, Google Cloud Platform customers can experience the best of both worlds—the power and protection of the CloudFlare community along with the flexibility and scalability of Google’s infrastructure.</p><p>We share many mutual customers with Google, and this collaboration makes it even easier for Google Cloud Platform customers to get started with CloudFlare.</p>
    <div>
      <h4>How does it work?</h4>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>When CloudFlare is enabled, Google Cloud Platform customers have their infrastructure extended directly to the network edge, giving them faster content delivery as well as heightened optimization and security.</p>
    <div>
      <h4>Benefits Include:</h4>
      <a href="#benefits-include">
        
      </a>
    </div>
<ul><li><p><b>2x Web Performance Speed</b> - CloudFlare uses advanced caching and the SPDY protocol to double web content transfer speeds.</p></li><li><p><b>Datacenters at Your Customer’s Doorstep</b> - CloudFlare’s global edge network caches static files close to their destination, meaning that content always loads fast no matter where customers are located. Also, CloudFlare peers with Google in strategic locations globally, improving response times for Google Cloud Platform services.</p></li><li><p><b>Protection Against DDoS and SQL Injection Attacks</b> - Because CloudFlare sits at the network edge, customers are protected from malicious traffic before it reaches Google Cloud Platform. This keeps websites up and running without compromising page load times or performance.</p></li><li><p><b>...and more!</b> - Check out the full list of features <a href="https://www.cloudflare.com/google">here</a>.</p></li></ul>
    <div>
      <h4>Get Started!</h4>
      <a href="#get-started">
        
      </a>
    </div>
    <p>To get started and learn more about the partnership please visit: <a href="https://www.cloudflare.com/google">https://www.cloudflare.com/google</a>.</p> ]]></content:encoded>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Partners]]></category>
            <guid isPermaLink="false">7Miate8QlPNolIrIpmSzWR</guid>
            <dc:creator>Maria Karaivanova</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving compression with a preset DEFLATE dictionary]]></title>
            <link>https://blog.cloudflare.com/improving-compression-with-preset-deflate-dictionary/</link>
            <pubDate>Mon, 30 Mar 2015 09:21:21 GMT</pubDate>
            <description><![CDATA[ A few years ago Google made a proposal for a new HTTP compression method, called SDCH (SanDwiCH). The idea behind the method is to create a dictionary of long strings that appear throughout many pages of the same domain (or popular search results). ]]></description>
<content:encoded><![CDATA[ <p>A few years ago Google made a proposal for a new HTTP compression method, called SDCH (SanDwiCH). The idea behind the method is to create a dictionary of long strings that appear throughout many pages of the same domain (or popular search results). Compression then amounts to searching for occurrences of those long strings and replacing them with references to the dictionary. Afterwards the output is further compressed with <a href="https://en.wikipedia.org/wiki/DEFLATE">DEFLATE</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DlU37lpBE25TGdgfyM43y/495a90025471558e0ccb4978a8043ee1/15585984911_6a6bff8f30_z.jpg" />
            
</figure><p><a href="https://creativecommons.org/licenses/by-sa/2.0/"><sub>CC BY SA 2.0</sub></a><sub> </sub><a href="https://www.flickr.com/photos/quinnanya/15585984911/in/photolist-pKhfiM-52LCYm-abTcAV-9eBXNP-9EQsPE-k2zt7W-3wGhDE-8FHYSs-4H94jD-8FFL2G-4U8deD-8FHZ7h-9zxQ3Q-96AfdV-8F5h1w-8FjCSW-8FeYBM-RbMqS-8FgTbT-8FjUvd-8FgF7T-8FjM4E-8Fgwtt-8Fjvxf-8FjtpU-8FjpUm-8Fjn3u-8Fjiv9-8Ff5ex-8Fid39-8FeTN6-8Fi3KW-8F29dP-8FhaH2-8FkcNm-8Fk8Kw-6DASnR-3T6V24-7PMw1h-7PJhZ8-uMSAt-cUzN8q-4fjD2P-eRviNL-BZg6b-qz5e4C-cqkfc-4szYvT-u1RCZ-ctvmWW"><sub>image</sub></a><sub> by </sub><a href="https://www.flickr.com/photos/quinnanya/"><sub>Quinn Dombrowski</sub></a></p><p>With the right dictionary for the right page the savings can be spectacular, even 70% smaller than gzip alone. In theory, a whole file can be replaced by a single token.</p><p>The drawbacks of the method are twofold: first, the dictionary is fairly large and must be distributed as a separate file; in fact, it is often larger than the individual pages it compresses. Second, the dictionary is usually useless for a different set of pages.</p><p>For large domains that are visited repeatedly the advantage is huge: at the cost of a single dictionary download, all the following page views can be compressed with much higher efficiency. Currently we are aware of Google and LinkedIn compressing content with SDCH.</p>
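<p>The substitution step can be sketched in a few lines of Python. This is only a simplified illustration of the idea, not the real SDCH wire format (which encodes references using VCDIFF); the shared strings and page below are invented for the example:</p>

```python
import zlib

# Invented "shared dictionary": long strings that recur across a site's pages.
shared = [b'<nav class="site-navigation">', b'<footer class="site-footer">']

def tokenize(page: bytes) -> bytes:
    """Replace each shared string with a short token referencing the dictionary."""
    for i, s in enumerate(shared):
        page = page.replace(s, b"\x01" + bytes([i]))
    return page

page = (b'<nav class="site-navigation">Home | About</nav>'
        b'<footer class="site-footer">(c) 2015</footer>')

tokenized = tokenize(page)          # dictionary references instead of long strings
payload = zlib.compress(tokenized)  # then DEFLATE the result, as SDCH does

assert len(payload) < len(zlib.compress(page))
```

<p>A real client would need the same dictionary to expand the tokens back into the original strings, which is why the dictionary must be downloaded once up front.</p>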
    <div>
      <h3>SDCH for millions of domains</h3>
      <a href="#sdch-for-millions-of-domains">
        
      </a>
    </div>
<p>Here at CloudFlare our task is to support millions of domains, which have little in common, so creating a single SDCH dictionary is very difficult. Nevertheless better compression is important, because it produces smaller payloads, which result in content being delivered faster to our clients. That is why we set out to find that little something that is common to all the pages and to see if we could compress them further.</p><p>Besides SDCH (which is only supported by the Chrome browser), the common compression methods over HTTP are gzip and DEFLATE. It is not widely known, but the compression they perform is identical. The two formats differ in the content headers they use, with gzip having slightly larger headers, and in the error-detection function: gzip uses CRC32, whereas DEFLATE uses Adler32.</p><p>Servers usually opt to compress with gzip; however, its cousin DEFLATE supports a neat feature called a "preset dictionary". This dictionary is not like the dictionary used by SDCH; in fact, it is not a real dictionary at all. To understand how this "dictionary" can be used to our advantage, it is important to understand how the DEFLATE algorithm works.</p>
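<p>To make the preset dictionary concrete, here is a minimal sketch using Python's <code>zlib</code> module, whose <code>zdict</code> parameter exposes this DEFLATE feature. The dictionary and page below are invented for the example; the key point is that the same dictionary must be supplied to both the compressor and the decompressor:</p>

```python
import zlib

# Invented preset "dictionary": boilerplate bytes likely to recur in our pages.
dictionary = b"<html><head><title></title></head><body></body></html>"

page = b"<html><head><title>Hello</title></head><body>Hello, world</body></html>"

# With a preset dictionary, LZ77 can reference the dictionary bytes as if they
# immediately preceded the input, so boilerplate becomes short back-references.
comp = zlib.compressobj(9, zlib.DEFLATED, zdict=dictionary)
with_dict = comp.compress(page) + comp.flush()

# The same page compressed without a dictionary, for comparison.
without_dict = zlib.compress(page, 9)

# Decompression must be primed with the identical dictionary.
decomp = zlib.decompressobj(zdict=dictionary)
restored = decomp.decompress(with_dict)

assert restored == page
assert len(with_dict) < len(without_dict)
```

<p>Note that the gzip container has no field for signalling a preset dictionary; the zlib/DEFLATE container does, which is part of why DEFLATE is the interesting format here.</p>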
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15p0MSMrIz6i5jMxj6DmrP/191d330fe298e473be033432d5136dff/5510506796_dff8c07b64_z.jpg" />
            
            </figure><p><sub></sub><a href="https://creativecommons.org/licenses/by/2.0/"><sub>CC BY 2.0</sub></a><sub> </sub><a href="https://www.flickr.com/photos/crdot/5510506796/in/photolist-9oWMFS-5ZetrG-aB6EXd-7BFq6C-8bfuDz-8a7u13-8bfv1z-89PDmK-8a7uvj-8a4fbn-89STbj-5U3gbF-8a4fPH-8a7qPA-89PADp-7BFqLU-8a4fha-7A9GMJ-89SQaN-8biMCh-89PASH-89PCRT-8biMuU-8biM8j-89PCgx-8a7unY-89SSf3-49Vg4c-89PzPn-8a4e4B-89PziH-8biMRU-8a7uHq-ngPPhQ-nkCZTe-7TQ4Yj-8a7qhy-8biLH7-8a7rCW-8a4fp4-76ixPX-8a4fiH-7BFqow-eijkuy-8a7uDY-8a4aat-89Pzv8-89SQdU-nkDdYH-89Pzsp"><sub>image</sub></a><sub> by </sub><a href="https://www.flickr.com/photos/crdot/"><sub>Caleb Roenigk</sub></a></p><p>The DEFLATE algorithm consists of two stages, first it performs the LZ77 algorithm, where it simply goes over the input, and replaces occurrences of previously encountered strings with (short) "directions" where the same string can be found in the previous input. The directions are a tuple of (length, distance), where distance tells how far back in the input the string was and length tells how many bytes were matched. The minimal length deflate will match is 3 (4 in the highly optimized implementation CloudFlare uses), the maximal length is 258, and the farthest distance back is 32KB.</p><p>This is an illustration of the LZ77 algorithm:</p><p>Input:</p>
<div><table><thead>
  <tr>
    <th><span>L</span></th>
    <th><span>i</span></th>
    <th><span>t</span></th>
    <th><span>t</span></th>
    <th><span>l</span></th>
    <th><span>e</span></th>
    <th></th>
    <th><span>b</span></th>
    <th><span>u</span></th>
    <th><span>n</span></th>
    <th><span>n</span></th>
    <th><span>y</span></th>
    <th></th>
    <th><span>F</span></th>
    <th><span>o</span></th>
    <th><span>o</span></th>
    <th></th>
    <th><span>F</span></th>
    <th><span>o</span></th>
    <th><span>o</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td></td>
    <td><span>W</span></td>
    <td><span>e</span></td>
    <td><span>n</span></td>
    <td><span>t</span></td>
    <td></td>
    <td><span>h</span></td>
    <td><span>o</span></td>
    <td><span>p</span></td>
    <td><span>p</span></td>
    <td><span>i</span></td>
    <td><span>n</span></td>
    <td><span>g</span></td>
    <td></td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>r</span></td>
    <td><span>o</span></td>
    <td><span>u</span></td>
    <td><span>g</span></td>
  </tr>
  <tr>
    <td><span>h</span></td>
    <td></td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>f</span></td>
    <td><span>o</span></td>
    <td><span>r</span></td>
    <td><span>e</span></td>
    <td><span>s</span></td>
    <td><span>t</span></td>
    <td></td>
    <td><span>S</span></td>
    <td><span>c</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td><span>p</span></td>
    <td><span>i</span></td>
    <td><span>n</span></td>
  </tr>
  <tr>
    <td><span>g</span></td>
    <td></td>
    <td><span>u</span></td>
    <td><span>p</span></td>
    <td></td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>f</span></td>
    <td><span>i</span></td>
    <td><span>e</span></td>
    <td><span>l</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>m</span></td>
    <td><span>i</span></td>
    <td><span>c</span></td>
    <td><span>e</span></td>
    <td></td>
  </tr>
  <tr>
    <td><span>A</span></td>
    <td><span>n</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>b</span></td>
    <td><span>o</span></td>
    <td><span>p</span></td>
    <td><span>p</span></td>
    <td><span>i</span></td>
    <td><span>n</span></td>
    <td><span>g</span></td>
    <td></td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td><span>m</span></td>
    <td></td>
    <td><span>o</span></td>
    <td><span>n</span></td>
    <td></td>
  </tr>
  <tr>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td><span>a</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>D</span></td>
    <td><span>o</span></td>
    <td><span>w</span></td>
    <td><span>n</span></td>
    <td></td>
    <td><span>c</span></td>
    <td><span>a</span></td>
    <td><span>m</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>t</span></td>
  </tr>
  <tr>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>G</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>F</span></td>
    <td><span>a</span></td>
    <td><span>i</span></td>
    <td><span>r</span></td>
    <td><span>y</span></td>
    <td><span>,</span></td>
    <td></td>
    <td><span>a</span></td>
    <td><span>n</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>s</span></td>
  </tr>
  <tr>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>s</span></td>
    <td><span>a</span></td>
    <td><span>i</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>"</span></td>
    <td><span>L</span></td>
    <td><span>i</span></td>
    <td><span>t</span></td>
    <td><span>t</span></td>
    <td><span>l</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>b</span></td>
    <td><span>u</span></td>
    <td><span>n</span></td>
    <td><span>n</span></td>
  </tr>
  <tr>
    <td><span>y</span></td>
    <td></td>
    <td><span>F</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td></td>
    <td><span>F</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td></td>
    <td><span>I</span></td>
    <td></td>
    <td><span>d</span></td>
    <td><span>o</span></td>
    <td><span>n</span></td>
    <td><span>'</span></td>
    <td><span>t</span></td>
    <td></td>
    <td><span>w</span></td>
    <td><span>a</span></td>
  </tr>
  <tr>
    <td><span>n</span></td>
    <td><span>t</span></td>
    <td></td>
    <td><span>t</span></td>
    <td><span>o</span></td>
    <td></td>
    <td><span>s</span></td>
    <td><span>e</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>y</span></td>
    <td><span>o</span></td>
    <td><span>u</span></td>
    <td></td>
    <td><span>S</span></td>
    <td><span>c</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td><span>p</span></td>
    <td><span>i</span></td>
  </tr>
  <tr>
    <td><span>n</span></td>
    <td><span>g</span></td>
    <td></td>
    <td><span>u</span></td>
    <td><span>p</span></td>
    <td></td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>f</span></td>
    <td><span>i</span></td>
    <td><span>e</span></td>
    <td><span>l</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>m</span></td>
    <td><span>i</span></td>
    <td><span>c</span></td>
    <td><span>e</span></td>
  </tr>
  <tr>
    <td></td>
    <td><span>A</span></td>
    <td><span>n</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>b</span></td>
    <td><span>o</span></td>
    <td><span>p</span></td>
    <td><span>p</span></td>
    <td><span>i</span></td>
    <td><span>n</span></td>
    <td><span>g</span></td>
    <td></td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td><span>m</span></td>
    <td></td>
    <td><span>o</span></td>
    <td><span>n</span></td>
  </tr>
  <tr>
    <td></td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td><span>a</span></td>
    <td><span>d</span></td>
    <td><span>.</span></td>
    <td><span>"</span></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
  </tr>
</tbody></table></div><p>Output (length tokens are blue, distance tokens are red):</p>
<div><table><thead>
  <tr>
    <th><span>L</span></th>
    <th><span>i</span></th>
    <th><span>t</span></th>
    <th><span>t</span></th>
    <th><span>l</span></th>
    <th><span>e</span></th>
    <th></th>
    <th><span>b</span></th>
    <th><span>u</span></th>
    <th><span>n</span></th>
    <th><span>n</span></th>
    <th><span>y</span></th>
    <th></th>
    <th><span>F</span></th>
    <th><span>o</span></th>
    <th><span>o</span></th>
    <th>5</th>
    <th>4</th>
    <th><span>W</span></th>
    <th><span>e</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>n</span></td>
    <td><span>t</span></td>
    <td></td>
    <td><span>h</span></td>
    <td><span>o</span></td>
    <td><span>p</span></td>
    <td><span>p</span></td>
    <td><span>i</span></td>
    <td><span>n</span></td>
    <td><span>g</span></td>
    <td></td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td><span>r</span></td>
    <td><span>o</span></td>
    <td><span>u</span></td>
    <td><span>g</span></td>
    <td><span>h</span></td>
    <td>3</td>
    <td>8</td>
  </tr>
  <tr>
    <td><span>e</span></td>
    <td></td>
    <td><span>f</span></td>
    <td><span>o</span></td>
    <td><span>r</span></td>
    <td><span>e</span></td>
    <td><span>s</span></td>
    <td><span>t</span></td>
    <td></td>
    <td><span>S</span></td>
    <td><span>c</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td>5</td>
    <td>28</td>
    <td><span>u</span></td>
    <td><span>p</span></td>
    <td>6</td>
    <td>23</td>
    <td><span>i</span></td>
  </tr>
  <tr>
    <td><span>e</span></td>
    <td><span>l</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>m</span></td>
    <td><span>i</span></td>
    <td><span>c</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>A</span></td>
    <td><span>n</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>b</span></td>
    <td>9</td>
    <td>58</td>
    <td><span>e</span></td>
    <td><span>m</span></td>
    <td></td>
    <td><span>o</span></td>
  </tr>
  <tr>
    <td><span>n</span></td>
    <td>5</td>
    <td>35</td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td><span>a</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>D</span></td>
    <td><span>o</span></td>
    <td><span>w</span></td>
    <td><span>n</span></td>
    <td></td>
    <td><span>c</span></td>
    <td><span>a</span></td>
    <td><span>m</span></td>
    <td><span>e</span></td>
    <td>5</td>
    <td>19</td>
    <td><span>G</span></td>
  </tr>
  <tr>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>F</span></td>
    <td><span>a</span></td>
    <td><span>i</span></td>
    <td><span>r</span></td>
    <td><span>y</span></td>
    <td><span>,</span></td>
    <td></td>
    <td><span>a</span></td>
    <td>3</td>
    <td>55</td>
    <td><span>s</span></td>
    <td>3</td>
    <td>20</td>
    <td><span>s</span></td>
    <td><span>a</span></td>
    <td><span>i</span></td>
  </tr>
  <tr>
    <td><span>d</span></td>
    <td></td>
    <td><span>"</span></td>
    <td><span>L</span></td>
    <td>20</td>
    <td>149</td>
    <td><span>I</span></td>
    <td></td>
    <td><span>d</span></td>
    <td><span>o</span></td>
    <td><span>n</span></td>
    <td><span>'</span></td>
    <td><span>t</span></td>
    <td></td>
    <td><span>w</span></td>
    <td><span>a</span></td>
    <td>3</td>
    <td>157</td>
    <td><span>t</span></td>
    <td><span>o</span></td>
  </tr>
  <tr>
    <td></td>
    <td><span>s</span></td>
    <td><span>e</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>y</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td>56</td>
    <td>141</td>
    <td><span>.</span></td>
    <td><span>"</span></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
  </tr>
</tbody></table></div><p>DEFLATE managed to reduce the original text from 251 characters to just 152 tokens! Those tokens are then compressed further by Huffman coding, which is the second stage of the algorithm.</p><p>How hard the algorithm searches for a longer match before settling is determined by the compression level. For example, at compression level 4 the algorithm is happy with a match of 16 bytes, whereas at level 9 it will keep looking for the maximal 258-byte match. If no match is found, the algorithm emits the input as is, uncompressed.</p><p>Clearly, at the beginning of the input there can be no references to previous strings, so it is always emitted uncompressed. Likewise, the first occurrence of any string in the input is never compressed. For example, almost all HTML files start with the string "&lt;html ", yet only the second and later occurrences of that string in a file can be replaced with a match; the first must remain uncompressed.</p><p>To solve this problem, a DEFLATE dictionary effectively acts as initial input that back references can point into. If we add the aforementioned string "&lt;html " to the dictionary, the algorithm can match it right from the start, improving the compression ratio. And there are many more strings that appear in virtually every HTML page which we can put in the dictionary. In fact, the SPDY protocol uses this technique for HTTP header compression.</p><p>To illustrate, let's compress the children’s song with the help of a 42-byte dictionary containing the following: Little bunny Foo hopping forest Good Fairy. The compressed output will then be:</p>
<div><table><thead>
  <tr>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
    <th></th>
  </tr></thead>
<tbody>
  <tr>
    <td>17</td>
    <td>42</td>
    <td>4</td>
    <td>4</td>
    <td><span>W</span></td>
    <td><span>e</span></td>
    <td><span>n</span></td>
    <td><span>t</span></td>
    <td>9</td>
    <td>51</td>
    <td><span>t</span></td>
    <td><span>h</span></td>
    <td></td>
    <td><span>o</span></td>
    <td><span>u</span></td>
    <td><span>g</span></td>
    <td><span>h</span></td>
    <td>3</td>
    <td>8</td>
    <td><span>e</span></td>
  </tr>
  <tr>
    <td>8</td>
    <td>63</td>
    <td><span>S</span></td>
    <td><span>c</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td>5</td>
    <td>28</td>
    <td><span>u</span></td>
    <td><span>p</span></td>
    <td>6</td>
    <td>23</td>
    <td><span>i</span></td>
    <td><span>e</span></td>
    <td><span>l</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>m</span></td>
    <td><span>i</span></td>
    <td><span>c</span></td>
  </tr>
  <tr>
    <td><span>e</span></td>
    <td></td>
    <td><span>A</span></td>
    <td><span>n</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>b</span></td>
    <td>9</td>
    <td>58</td>
    <td><span>e</span></td>
    <td><span>m</span></td>
    <td></td>
    <td><span>o</span></td>
    <td><span>n</span></td>
    <td>5</td>
    <td>35</td>
    <td><span>h</span></td>
    <td><span>e</span></td>
    <td><span>a</span></td>
    <td><span>d</span></td>
  </tr>
  <tr>
    <td></td>
    <td><span>D</span></td>
    <td><span>o</span></td>
    <td><span>w</span></td>
    <td><span>n</span></td>
    <td></td>
    <td><span>c</span></td>
    <td><span>a</span></td>
    <td><span>m</span></td>
    <td><span>e</span></td>
    <td>5</td>
    <td>19</td>
    <td>10</td>
    <td>133</td>
    <td><span>,</span></td>
    <td><span>a</span></td>
    <td>3</td>
    <td>55</td>
    <td><span>s</span></td>
    <td>3</td>
  </tr>
  <tr>
    <td>20</td>
    <td><span>s</span></td>
    <td><span>a</span></td>
    <td><span>i</span></td>
    <td><span>d</span></td>
    <td></td>
    <td><span>"</span></td>
    <td>21</td>
    <td>149</td>
    <td><span>I</span></td>
    <td></td>
    <td><span>d</span></td>
    <td><span>o</span></td>
    <td><span>n</span></td>
    <td><span>'</span></td>
    <td><span>t</span></td>
    <td></td>
    <td><span>w</span></td>
    <td><span>a</span></td>
    <td>3</td>
  </tr>
  <tr>
    <td>157</td>
    <td><span>t</span></td>
    <td><span>o</span></td>
    <td></td>
    <td><span>s</span></td>
    <td><span>e</span></td>
    <td><span>e</span></td>
    <td></td>
    <td><span>y</span></td>
    <td><span>o</span></td>
    <td><span>o</span></td>
    <td>56</td>
    <td>141</td>
    <td><span>.</span></td>
    <td><span>"</span></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
    <td></td>
  </tr>
</tbody></table></div><p>Now even strings at the very beginning of the input are compressed, and so are strings that appear only once in the file. With the help of the dictionary we are down to 115 tokens, a roughly 25% better compression ratio.</p>
    <div>
      <h2>An experiment</h2>
      <a href="#an-experiment">
        
      </a>
    </div>
    <p>We wanted to see if we could make a dictionary that would benefit ALL the HTML pages we serve, not just those of a specific domain. To that end we scanned over 20,000 publicly available random HTML pages that passed through our servers on a random sunny day, took the first 16KB of each page, and used them to prepare two dictionaries, one of 16KB and one of 32KB. A larger dictionary would be useless, because it would exceed the 32KB LZ77 window used by DEFLATE.</p><p>To build a dictionary, I made a little Go program that takes a set of files and performs "pseudo" LZ77 over them, finding strings that DEFLATE would not compress in the first 16KB of each input file. It then performs a frequency count of the individual strings and scores them according to their length and frequency. Finally, the highest-scoring strings are saved into the dictionary file.</p><p>Our benchmark consists of another set of pages obtained in a similar manner: about 19,000 files with a total size of 563MB.</p>
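<p>The scoring idea can be sketched in a few lines of Go. This toy version skips the pseudo-LZ77 pass of the real tool and simply counts fixed-length substrings across the sample pages, ranking them by frequency times length; every name in it is ours, not the dictator tool's:</p>

```go
package main

import (
	"fmt"
	"sort"
)

// topStrings scores every substring of length n across the samples by
// frequency * length and returns the k best dictionary candidates.
func topStrings(samples []string, n, k int) []string {
	freq := map[string]int{}
	for _, s := range samples {
		for i := 0; i+n <= len(s); i++ {
			freq[s[i:i+n]]++
		}
	}
	type cand struct {
		str   string
		score int
	}
	var cands []cand
	for s, f := range freq {
		if f > 1 { // a string seen only once gains nothing in a shared dictionary
			cands = append(cands, cand{s, f * n})
		}
	}
	sort.Slice(cands, func(i, j int) bool { return cands[i].score > cands[j].score })
	var out []string
	for i := 0; i < k && i < len(cands); i++ {
		out = append(out, cands[i].str)
	}
	return out
}

func main() {
	pages := []string{
		"<html><head><title>a</title>",
		"<html><head><title>b</title>",
	}
	// "title>" occurs twice in every page, so it should score highest.
	fmt.Println(topStrings(pages, 6, 3))
}
```

<p>A real builder would also merge overlapping candidates and stop once the dictionary reaches 16KB or 32KB, but the frequency-times-length scoring is the core of it.</p>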
<div><table><thead>
  <tr>
    <th></th>
    <th><span>deflate -4</span></th>
    <th><span>deflate -9</span></th>
    <th><span>deflate -4 + 16K dict</span></th>
    <th><span>deflate -9 + 16K dict</span></th>
    <th><span>deflate -4 + 32K dict</span></th>
    <th><span>deflate -9 + 32K dict</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Size (KB)</span></td>
    <td><span>169,176</span></td>
    <td><span>166,012</span></td>
    <td><span>161,896</span></td>
    <td><span>158,352</span></td>
    <td><span>161,212</span></td>
    <td><span>157,444</span></td>
  </tr>
  <tr>
    <td><span>Time (sec)</span></td>
    <td><span>6.90</span></td>
    <td><span>11.56</span></td>
    <td><span>7.15</span></td>
    <td><span>11.80</span></td>
    <td><span>7.88</span></td>
    <td><span>11.82</span></td>
  </tr>
</tbody></table></div><p>We can see from the results that with the dictionary, level 4 compresses almost 5% better than without it, which is a bigger gain than switching to level 9 alone, while remaining substantially faster. For level 9 the gain also exceeds 5%, without a significant performance hit.</p><p>The results depend heavily on the dataset used to build the dictionary and on the pages being compressed. For example, when making a dictionary aimed at a specific web site, the compression ratio for that site improved by up to 30%.</p><p>For very small pages, such as error pages of less than 1KB, a DEFLATE dictionary produced output up to 50% smaller than DEFLATE alone.</p><p>Of course, different dictionaries may be used for different file types. In fact, we think it would make sense to create a standard set of dictionaries that could be used across the web.</p><p>The utility for making a DEFLATE dictionary can be found at <a href="https://github.com/vkrasnov/dictator">https://github.com/vkrasnov/dictator</a>. The optimized version of zlib used by CloudFlare can be found at <a href="https://github.com/cloudflare/zlib">https://github.com/cloudflare/zlib</a>.</p> ]]></content:encoded>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Optimization]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Compression]]></category>
            <guid isPermaLink="false">4bo94j5hDmZv1CmCO7udHU</guid>
            <dc:creator>Vlad Krasnov</dc:creator>
        </item>
        <item>
            <title><![CDATA[The little extra that comes with Universal SSL]]></title>
            <link>https://blog.cloudflare.com/the-little-extra-that-comes-with-universal-ssl/</link>
            <pubDate>Mon, 06 Oct 2014 21:35:38 GMT</pubDate>
            <description><![CDATA[ Last Monday we announced SSL for our Free plan users, called Universal SSL. Universal SSL means that any site running on CloudFlare gets a free SSL certificate and is automatically secured over HTTPS. ]]></description>
            <content:encoded><![CDATA[ <p>CC BY 2.0 by <a href="https://www.flickr.com/photos/jdhancock/">JD Hancock</a></p><p>Last Monday we announced SSL for our Free plan users, called <a href="/introducing-universal-ssl/">Universal SSL</a>. Universal SSL means that any site running on CloudFlare gets a <a href="https://www.cloudflare.com/application-services/products/ssl/">free SSL certificate</a> and is automatically secured over HTTPS.</p><p>Using SSL helps make a web site more secure, but there's another benefit: it can also make the site faster. That's because the <a href="http://en.wikipedia.org/wiki/SPDY">SPDY protocol</a>, created by Google to speed up the web, requires SSL: only web sites that support HTTPS can use SPDY.</p><p>CloudFlare has long supported <a href="/introducing-spdy/">SPDY</a> and kept up to date with improvements in the protocol. We currently support the most recent version of SPDY: <a href="/staying-up-to-date-with-the-latest-protocols-spdy-3-1/">3.1</a>.</p><p>CloudFlare's mission to bring the tools of the Internet giants to everyone is twofold: security and performance. As part of the Universal SSL launch, we also rolled out SPDY for everyone. Many of the web's largest sites use SPDY; now all sites that use CloudFlare are in the same league.</p><p>If your site is on CloudFlare and you use a modern browser that supports SPDY, you'll find that the HTTPS version of your site is now served over SPDY. SPDY allows the browser to request multiple objects over a single connection, and lets objects be sent down the wire as soon as they are ready to prevent hold-ups, which can be a big performance win.</p><p>Our goal is a faster, safer, more secure Internet. Universal SSL and SPDY for everyone are two big steps in that direction.</p> ]]></content:encoded>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[spdy]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4PvbgsCFQYYxbLxKwrmajz</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Google Now Factoring HTTPS Support Into Ranking; CloudFlare On Track to Make it Free and Easy]]></title>
            <link>https://blog.cloudflare.com/google-now-factoring-https-support-into-ranking-cloudflare-on-track-to-make-it-free-and-easy/</link>
            <pubDate>Wed, 06 Aug 2014 14:00:00 GMT</pubDate>
            <description><![CDATA[ As of today, there are only about 2 million websites that support HTTPS. That's a shamefully low number. Two things are about to happen that we at CloudFlare are hopeful will begin to change that and make everyone love locks (at least on the web!). ]]></description>
            <content:encoded><![CDATA[ <p>As of today, there are only about 2 million websites that support HTTPS. That's a shamefully low number. Two things are about to happen that we at CloudFlare are hopeful will begin to change that and make everyone love locks (at least on the web!).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68sVz4KPq4HjnBuy7onwfb/af37f57829e084a38f7c87ceb4e698c7/10290363093_9ff8f91c1c_z_1.jpg" />
            
            </figure><p>CC BY 2.0 by <a href="https://www.flickr.com/photos/greggman/">Gregg Tavares</a></p>
    <div>
      <h3>Google Ranks Crypto</h3>
      <a href="#google-ranks-crypto">
        
      </a>
    </div>
    <p>First, Google <a href="http://googleonlinesecurity.blogspot.co.uk/2014/08/https-as-ranking-signal_6.html">just announced</a> that they will begin taking into account whether a site supports HTTPS connections in their ranking algorithm. This means that if you care about SEO, ensuring your site supports HTTPS should be a top priority. Kudos to Google for giving webmasters a big incentive to add SSL to their sites.</p>
    <div>
      <h3>SSL All Things</h3>
      <a href="#ssl-all-things">
        
      </a>
    </div>
    <p>Second, at CloudFlare we've cleared one of the last major technical hurdles before making SSL available for every one of our customers -- even free customers. One of the challenges was retaining the flexibility to move site traffic dynamically between the servers that make up our network. We can do this easily when traffic travels over an HTTP connection, but when a connection uses HTTPS we need to ensure that the correct certificates are in place and loaded into memory before a server processes any requests.</p><p>To accomplish this, we needed to redesign how certificates are loaded into a server's memory. Previously, we'd load certificates into memory before traffic was directed to a server. That creates challenges when dealing with millions of domains and when shifting traffic to help isolate or mitigate an attack.</p>
    <div>
      <h3>Lazy Loading Certs</h3>
      <a href="#lazy-loading-certs">
        
      </a>
    </div>
    <p>Last week we pushed new code that allows us to "lazy load" <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificates</a> on demand. This means that a certificate only needs to be in a data center, not on a particular server, before HTTPS traffic needing the certificate is directed to that server. When a request is received, the server can now dynamically retrieve the correct certificate even if it hasn't been previously loaded into memory. This allows us to continue to shift traffic to manage our network even if we are managing SSL certificates for millions of domains.</p><p>We're on track to roll out SSL for all CloudFlare customers by mid-October. When we do, the number of sites that support HTTPS on the Internet will more than double. That they'll also rank a bit higher is pretty cool too.</p><p>In the meantime, if you want a quick way to boost your Google ranking, upgrading to any paid CloudFlare account will enable HTTPS by default. Even before we make it free, it's already the fastest, easiest way to get HTTPS support on any site.</p> ]]></content:encoded>
            <category><![CDATA[SEO]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Optimization]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">7uUcuydFEdXscBcjp7P4ex</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[It's Go Time on Linux]]></title>
            <link>https://blog.cloudflare.com/its-go-time-on-linux/</link>
            <pubDate>Wed, 05 Mar 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[ Some interesting changes related to timekeeping in the upcoming Go 1.3 release inspired us to take a closer look at how Go programs keep time with the help of the Linux kernel. Timekeeping is a complex topic and determining the current time isn’t as simple as it might seem at first glance. ]]></description>
            <content:encoded><![CDATA[ <p>Some interesting changes related to timekeeping in the upcoming Go 1.3 release inspired us to take a closer look at how Go programs keep time with the help of the Linux kernel. Timekeeping is a complex topic and determining the current time isn’t as simple as it might seem at first glance.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4VWH2QXVaqkSnUkzLetIdn/5bfae1adbf588346480801608a09be9e/The_Persistence_of_Memory.jpg" />
            
            </figure><p><a href="http://golang.org/">Go</a> running on the Linux kernel has been used to build <a href="/go-at-cloudflare/">many</a> <a href="/red-october-cloudflares-open-source-implementation-of-the-two-man-rule/">important</a> <a href="/what-weve-been-doing-with-go/">systems</a> like RRDNS (our DNS server) at CloudFlare. Accurately, precisely and efficiently determining the time is an important part of many of these systems.</p><p>To see why time is important, consider that humans have had some trouble convincing computers to keep time for them in the recent past. It has been a bit more than a decade since we had to dust off our best COBOL programmers to tackle <a href="https://en.wikipedia.org/wiki/Year_2000_problem">Y2K</a>.</p><p>More recently, a bug in the handling of a leap second, propagated through the Network Time Protocol (NTP), took many systems offline. As we've seen in recent days, NTP is very useful for synchronizing computer clocks and/or <a href="/technical-details-behind-a-400gbps-ntp-amplification-ddos-attack">DDoSing them</a>. The leap second bug <a href="http://www.somebits.com/weblog/tech/bad/leap-second-2012.html">received</a> <a href="http://www.datastax.com/dev/blog/linux-cassandra-and-saturdays-leap-second-problem">extensive</a> <a href="http://arstechnica.com/business/2012/07/one-day-later-the-leap-second-v-the-internet-scorecard/">coverage</a>. <a href="http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html">Google was ready</a>, but many other popular sites were taken offline.</p><p>We also have the <a href="https://en.wikipedia.org/wiki/Year_2038_problem">Year 2038 problem</a> to look forward to. Hopefully there will still be a few engineers around then who remember what this 32-bit thing was all about.</p>
    <div>
      <h2>Time in Go</h2>
      <a href="#time-in-go">
        
      </a>
    </div>
    <p>Everything starts with the <a href="http://golang.org/pkg/time/">time</a> package that is part of Go’s standard library. The time package provides types for <a href="http://golang.org/pkg/time/#Time">Time</a>, <a href="http://golang.org/pkg/time/#Duration">Duration</a>, <a href="http://golang.org/pkg/time/#Ticker">Ticker</a>, <a href="http://golang.org/pkg/time/#Timer">Timer</a> and various utility functions for manipulating these types.</p><p>The most commonly used function in this package is probably the <a href="http://golang.org/pkg/time/#Now">time.Now</a> function, which returns the current time as a Time struct. The Time struct has 3 fields:</p>
            <pre><code>type Time struct {
	sec  int64
	nsec uintptr
	loc  *Location
}</code></pre>
            <p><a href="http://golang.org/pkg/time/#Location">Location</a> contains the timezone information for the time.</p><p><a href="http://golang.org/pkg/time/#Duration">Duration</a> is used to express the difference between two Times and to configure timers and tickers.</p><p><a href="http://golang.org/pkg/time/#Timer">Timer</a> is useful for implementing a timeout, typically as part of a <a href="http://golang.org/ref/spec#Select_statements">select</a> statement. <a href="http://golang.org/pkg/time/#Ticker">Ticker</a> can be used to wake up periodically, usually when you are using select in a for loop.</p><p>Go’s time package is used in many other places in the Go standard library. When dealing with socket connections that may go slow or stop sending data completely, one uses the SetDeadline functions that are part of the <a href="http://golang.org/pkg/net/#Conn">net.Conn</a> interface.</p><p>We love writing tests at CloudFlare, and having unit tests that include some kind of random component can turn up interesting issues. You can use the current time to seed random number generators in tests, using:</p>
            <pre><code>rand.New(rand.NewSource(time.Now().UnixNano()))</code></pre>
            <p>If you’re generating random numbers for a secure application, you really want to be using the <a href="http://golang.org/pkg/crypto/rand/">crypto/rand</a> package. Interestingly, even the initialization of <a href="http://golang.org/pkg/crypto/rand/#pkg-variables">crypto/rand.Reader</a> <a href="http://golang.org/src/pkg/crypto/rand/rand_unix.go#L114">incorporates</a> the current time.</p><p>The current time also features when one <a href="http://golang.org/src/pkg/log/log.go#L131">logs</a> something using the <a href="http://golang.org/pkg/log">log</a> package.</p><p>A very useful service called <a href="https://sourcegraph.com/code.google.com/p/go/symbols/go/code.google.com/p/go/src/pkg/time/Now">Sourcegraph</a> turns up more than 6000 examples where time.Now is used. For example, the <a href="http://camlistore.org/">Camlistore</a> code base calls time.Now in about 130 different places.</p><p>With time.Now being as pervasive as it is, have you ever wondered how it works? Time to dive deeper.</p>
    <div>
      <h2>System calls</h2>
      <a href="#system-calls">
        
      </a>
    </div>
    <p>The most important change to the way Go programs keep time on Linux was committed on 8 November 2012 in <a href="https://code.google.com/p/go/source/detail?r=42c8d3aadc40">changeset 42c8d3aadc40</a>. Let’s analyse the commit message and the code for some clues:</p>
            <pre><code>runtime: use vDSO clock_gettime for time.now &amp; runtime.nanotime
on Linux/amd64. Performance improvement aside, time.Now() now 
gets real nanosecond resolution on supported systems.</code></pre>
            <p>To understand this commit message better, we first need to review the system calls available on Linux for obtaining the value of the clock.</p><p>In the beginning, there were <a href="http://man7.org/linux/man-pages/man2/time.2.html">time</a> and <a href="http://man7.org/linux/man-pages/man2/settimeofday.2.html">gettimeofday</a>, which existed in SVr4 and 4.3BSD and were described in POSIX.1-2001. time returns the number of seconds since the <a href="https://en.wikipedia.org/wiki/Unix_time">Unix epoch</a>, 1970-01-01 00:00:00 UTC, and is defined in C as:</p>
            <pre><code>time_t time(time_t *t)</code></pre>
            <p>time_t is 4 bytes on 32-bit platforms and 8 bytes on 64-bit platforms, hence the Y2038 problem mentioned above.</p><p>gettimeofday returns the number of seconds and microseconds since the epoch and is defined in C as:</p>
            <pre><code>int gettimeofday(struct timeval *tv, struct timezone *tz)</code></pre>
            <p>gettimeofday populates a <code>struct timeval</code>, which has the following fields:</p>
            <pre><code>struct timeval {
	time_t tv_sec; /* seconds */
	suseconds_t tv_usec; /* microseconds */
}</code></pre>
            <p>gettimeofday yields timestamps that only have microsecond precision. POSIX.1-2008 marks gettimeofday as obsolete, recommending the use of <a href="http://man7.org/linux/man-pages/man2/clock_gettime.2.html">clock_gettime</a> instead, which is defined in C as:</p>
            <pre><code>int clock_gettime(clockid_t clk_id, struct timespec *tp)</code></pre>
            <p>clock_gettime populates a <code>struct timespec</code>, which has the following fields:</p>
            <pre><code>struct timespec {
	time_t tv_sec; /* seconds */
	long tv_nsec; /* nanoseconds */
}</code></pre>
            <p>clock_gettime can yield timestamps that have nanosecond precision. The clock ID parameter determines the type of clock to use. Of interest to us are:</p><ul><li><p><code>CLOCK_REALTIME</code>: a system-wide clock that measures real time. This clock is affected by discontinuous jumps in the system time and by incremental adjustments made using the <a href="http://man7.org/linux/man-pages/man3/adjtime.3.html">adjtime</a> function or NTP.</p></li><li><p><code>CLOCK_MONOTONIC</code>: a clock that represents monotonic time since some unspecified starting point. This clock is not affected by discontinuous jumps in the system time, but is affected by adjtime and NTP.</p></li><li><p><code>CLOCK_MONOTONIC_RAW</code>: similar to <code>CLOCK_MONOTONIC</code>, but not subject to adjustment by adjtime or NTP.</p></li></ul>
    <div>
      <h2>time.Now</h2>
      <a href="#time-now">
        
      </a>
    </div>
    <p>With this background we can look at the code for time.Now. (Hint: click on the <a href="http://golang.org/pkg/time/#Now">function name</a> in godoc to look at the code yourself.)</p><p>The time.Now function is implemented using a function called now that is internal to package time, but is actually provided by the Go runtime. In other words, there is no code for the function in package time itself.</p><p>Let’s take a closer look at the Linux implementations for the <a href="http://golang.org/src/pkg/runtime/sys_linux_386.s#L107">386</a> and <a href="http://golang.org/src/pkg/runtime/sys_linux_amd64.s#L104">amd64</a> platforms. We see that these functions are implemented in assembler and call a function to retrieve the current time. You might have been expecting to see a system call, i.e. an <code>INT 0x80</code> instruction on 386 or the <code>SYSCALL</code> instruction on amd64, to the kernel at this point, but Go does something much more interesting on Linux.</p><p>The Linux kernel provides Virtual Dynamically linked Shared Objects (vDSO) as a way for user space applications to make low-overhead calls to functions that would normally involve a system call to the kernel.</p><p>If you’re writing your application in a language that uses glibc, you are probably already getting your time via vDSO. Go doesn’t use glibc, so it has to implement this functionality in its runtime. The relevant code is in <a href="http://golang.org/src/pkg/runtime/vdso_linux_amd64.c">vdso_linux_amd64.c</a> in the <a href="http://golang.org/pkg/runtime/">runtime package</a>.</p><p>Finally, if you’re the kind of person that likes to stare into the bowels of your operating system, here’s the <a href="http://lxr.free-electrons.com/source/arch/x86/vdso/vclock_gettime.c">kernel side of the vDSO</a>.</p><p>vDSO support for time functions is currently 64-bit only, but a <a href="http://lwn.net/Articles/583963/">kernel patch</a> is in the works to add them on 32-bit platforms. 
When this happens, the Go runtime code will have to be updated to take advantage of this.</p>
    <div>
      <h2>Benchmarks</h2>
      <a href="#benchmarks">
        
      </a>
    </div>
    <p>When frequently calling a function to determine the time, you may be interested to know how long it takes to return. In other words, how now is “now” really? For benchmarking purposes, you can make these system calls directly from Go code. We have prepared <a href="https://github.com/cloudflare/autobench/commit/7d1effaf1fe0669ac28ee7ebe67216ee95f8a1b5">a patch</a> to Dave Cheney’s excellent <a href="https://github.com/cloudflare/autobench">autobench</a> project so that you may benchmark these system calls and other time-related functions yourself.</p><p>Benchmarks can also help us measure the time saved by calling gettimeofday and clock_gettime via the vDSO mechanism instead of the traditional system call path.</p><p>We’ll also use autobench to compare the performance of different versions of Go for the same set of time functions.</p><p>All benchmark numbers below were obtained on an Intel Core i7-3540M CPU running at its maximum clock speed of 3 GHz. The CPU frequency scaling governor was set to performance mode to ensure reliable benchmark results.</p><p>We’ll use the Go 1.2 stable release as a baseline.</p><p>BenchmarkSyscallTime and BenchmarkVDSOTime measure the time it takes to make a time system call and vDSO call, respectively:</p>
            <pre><code>BenchmarkSyscallTime 38.2 ns/op
BenchmarkVDSOTime    3.85 ns/op</code></pre>
            <p>BenchmarkSyscallGettimeofday and BenchmarkVDSOGettimeofday measure the time it takes to make a gettimeofday system call and vDSO call, respectively:</p>
            <pre><code>BenchmarkSyscallGettimeofday 59.3 ns/op
BenchmarkVDSOGettimeofday    23.4 ns/op</code></pre>
            <p>BenchmarkTimeNow measures the time it takes to call time.Now, which makes an underlying vDSO call to <code>clock_gettime(CLOCK_REALTIME)</code> and converts the returned value to a time.Time struct:</p>
            <pre><code>BenchmarkTimeNow 23.6 ns/op</code></pre>
            <p>Using autobench, we can also compare different Go versions against each other. To see how far we've come in the last few years, we also compared Go 1.2 to Go 1.0.3, which was released on Sep 27, 2012. The major difference was in the benchmark for time.Now:</p>
            <pre><code>benchmark        old ns/op  new ns/op  delta
BenchmarkTimeNow 406        23         -94.19%</code></pre>
            <p>To repeat this test with Go 1.0.3, you'll need the fix in <a href="https://code.google.com/p/go/source/detail?r=419dcca62a3d">changeset 419dcca62a3d</a> to compile if you are using a recent version of GCC.</p>
    <div>
      <h2>Clock Sources</h2>
      <a href="#clock-sources">
        
      </a>
    </div>
    <p>The speed at which you can tell time also depends on the clock source being used by your kernel. To see which clock sources you have available, run the following command:</p>
            <pre><code>$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
tsc hpet acpi_pm</code></pre>
            <p>To see which clock source is currently in use, run this command:</p>
            <pre><code>$ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc</code></pre>
            <p><a href="https://en.wikipedia.org/wiki/Time_Stamp_Counter">Time Stamp Counter (TSC)</a> is a 64-bit register that can be read by the RDTSC instruction. It is faster to read than the <a href="https://en.wikipedia.org/wiki/High_Precision_Event_Timer">High Precision Event Timer (HPET)</a>. The ACPI Power Management Timer (ACPI PMT) is another timer that is found on many motherboards.</p><p>We can also use the same benchmarks above to compare the TSC clock source and the HPET clock source. Doing so requires booting Linux with the <code>clocksource=hpet</code> kernel command line parameter. Here are the results:</p>
            <pre><code>benchmark                              tsc ns/op   hpet ns/op  delta
BenchmarkSyscallGettimeofday              59          645      +987.69%
BenchmarkVDSOGettimeofday                 23          598      +2455.56%
BenchmarkSyscallClockGettimeRealtime      58          642      +995.56%
BenchmarkSyscallClockGettimeMonotonic     57          641      +1012.85%
BenchmarkTimeNow                          23          598      +2433.90%</code></pre>
            <p>As you can see, querying the HPET clock source takes significantly longer.</p><p>Not all CPUs are created equal. To see which TSC features your CPU supports, run the following command:</p>
            <pre><code>$ cat /proc/cpuinfo | grep tsc</code></pre>
            <p>You will see some or all of the following CPU flags related to the TSC:</p><ul><li><p>tsc</p></li><li><p>rdtscp</p></li><li><p>constant_tsc</p></li><li><p>nonstop_tsc</p></li></ul><p>The <i>tsc</i> flag indicates that your CPU has the 64-bit TSC register, which has been present since the Pentium.</p><p>The <i>rdtscp</i> flag indicates that your CPU supports the newer RDTSCP instruction, in addition to the RDTSC instruction. Intel has an interesting whitepaper on <a href="http://www.intel.com/content/www/us/en/intelligent-systems/embedded-systems-training/ia-32-ia-64-benchmark-code-execution-paper.html">How to Benchmark Code Execution Times on Intel IA-32 and IA-64 Instruction Set Architectures</a> with more details about the differences.</p><p>The <i>constant_tsc</i> flag indicates that the TSC runs at constant frequency irrespective of the current frequency, voltage or throttling state of the CPU, commonly referred to as its P- and T-state.</p><p>The <i>nonstop_tsc</i> flag indicates that TSC does not stop, irrespective of the CPU’s power saving mode, referred to as its C-state.</p><p>These features work in conjunction to provide an invariant TSC. There is more discussion over at the <a href="http://software.intel.com/en-us/forums/topic/280440">Intel Software forums</a> if you’re interested.</p>
    <div>
      <h2>Coming up in Go 1.3</h2>
      <a href="#coming-up-in-go-1-3">
        
      </a>
    </div>
    <p>There has also been some interesting work on the time front in the upcoming Go 1.3 release. The time.Sleep function, Ticker, and Timer now use <code>clock_gettime(CLOCK_MONOTONIC)</code> on Linux and other platforms. This work has been a good example of the broader community contributing to improve the core of Go, as can be seen in <a href="https://code.google.com/p/go/issues/detail?id=6007">issue 6007</a>, <a href="https://codereview.appspot.com/53010043/">CL 53010043</a> and <a href="https://groups.google.com/d/topic/golang-dev/gVFa7DC8UI0/discussion">discussion on golang-dev</a>.</p>
    <div>
      <h2>Further Reading</h2>
      <a href="#further-reading">
        
      </a>
    </div>
    <p>That’s it for today. If kernels and clocks excite you, CloudFlare is always looking to hire great Go and Linux kernel engineers. See our <a href="https://www.cloudflare.com/join-our-team">Careers</a> page.</p><p>It would not have been possible to write this article without help from some other sources. If you want to know more, here’s some recommended reading:</p><p>The book <a href="http://www.amazon.com/Understanding-Linux-Kernel-Third-Edition/dp/0596005652">Understanding the Linux Kernel</a> by Daniel Bovet and Marco Cesati has an entire chapter on the timekeeping architecture in the kernel.</p><p>More about timers <a href="http://stackoverflow.com/questions/10921210/cpu-tsc-fetch-operation-especially-in-multicore-multi-processor-environment">on StackOverflow</a>.</p><p>Read more about <a href="http://www.linuxjournal.com/content/creating-vdso-colonels-other-chicken">vDSOs</a>.</p><p>Read more about <a href="http://juliusdavies.ca/posix_clocks/clock_realtime_linux_faq.html">clock_gettime</a>.</p><p>Sometimes bug reports can be a great source of information on details of the kernel and your CPU. <a href="https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=474091">Red Hat Bugzilla #474091</a> was a gold mine of information on the CPU flags for the TSC.</p><p>More at Quora on the benefits of <a href="http://www.quora.com/Linux/What-are-the-advantages-and-disadvantages-of-TSC-and-HPET-as-a-clocksource">TSC vs HPET</a>.</p><p>StackOverflow also had some information on the <a href="https://stackoverflow.com/questions/7987671/what-is-the-acpi-pm-linux-clocksource-used-for-what-hardware-implements-it">ACPI PM clock source</a>.</p><p>Finally, some discussion of the <a href="https://twistedmatrix.com/trac/ticket/2424#comment:23">differences between CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW</a> as used in the Twisted framework for Python.</p>
            <category><![CDATA[Linux]]></category>
            <category><![CDATA[Go]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Attacks]]></category>
            <guid isPermaLink="false">7xktWOfH0yiQAj7YiLjfh1</guid>
            <dc:creator>Albert Strasheim</dc:creator>
        </item>
        <item>
            <title><![CDATA[Staying up to date with the latest protocols: SPDY/3.1]]></title>
            <link>https://blog.cloudflare.com/staying-up-to-date-with-the-latest-protocols-spdy-3-1/</link>
            <pubDate>Mon, 17 Feb 2014 16:00:00 GMT</pubDate>
            <description><![CDATA[ Back in June 2012 CloudFlare started a beta rollout of Google's then new SPDY protocol and we took a detailed look at how SPDY makes web sites faster. ]]></description>
            <content:encoded><![CDATA[ <p>Back in June 2012 CloudFlare started a <a href="/introducing-spdy">beta rollout</a> of Google's then new <a href="http://en.wikipedia.org/wiki/SPDY">SPDY</a> protocol and we took a <a href="/what-makes-spdy-speedy">detailed look</a> at how SPDY makes web sites faster.</p><p>Since then we've watched carefully as SPDY has evolved through different versions and have been keeping an eye on a new Google-driven protocol called <a href="http://en.wikipedia.org/wiki/QUIC">QUIC</a>. In August 2012 we <a href="/spdy-now-one-click-simple-for-any-website">rolled out SPDY</a> for all our customers by making it a simple (one click) configuration option.</p><p>As SPDY has progressed we've become more and more confident in the protocol and made it automatic for all our Pro, Business and Enterprise customers. No click needed. (Since SPDY runs over SSL/TLS, a web site must have an <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificate</a> to use it.)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PpchE7eOhRAjW1Oi7wL7c/ba229bb4051630fed5c4923065197e63/Screen_Shot_2014-02-18_at_10.00.09.png" />
            
            </figure><p>Last week we rolled out the most recent version of SPDY, 3.1, for all customers. SPDY/3.1 is supported by Google Chrome and Mozilla Firefox. As older versions of SPDY (particularly SPDY/2) are being deprecated it's vital for us to keep up to date.</p><p>To see whether a site is served over SPDY it's possible to use the CloudFlare <a href="https://chrome.google.com/webstore/detail/claire/fgbpcgddpmjmamlibbaobboigaijnmkl">Claire</a> extension for Google Chrome. Here it's showing that the popular <a href="https://news.ycombinator.com/">Hacker News</a> site is served over SPDY.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jsIJ2FU6I4dxoEVHtn4zj/908a09be427f3171d90a7b4223d4e2fa/Screen_Shot_2014-02-18_at_10.36.01.png" />
            
            </figure><p>To discover the exact version used you can dive into Chrome's <a>chrome://net-internals</a> where the version is shown for each connection. Here connections to Google, CloudFlare and Hacker News are all using the latest version.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3s8Y75eF7NDiyE2Un8PpCU/c0bf780210d14bf8844e6856e6a80da0/Screen_Shot_2014-02-18_at_07.48.33.png" />
            
            </figure>
    <div>
      <h3>What changed in SPDY/3 and SPDY/3.1</h3>
      <a href="#what-changed-in-spdy-3-and-spdy-3-1">
        
      </a>
    </div>
    <p>A key advantage of SPDY is its ability to multiplex many HTTP request streams onto a single TCP connection. In the past, various hacks (such as <a href="/using-cloudflare-to-mix-domain-sharding-and-spdy">domain sharding</a>) have been used to get around the fact that only sequential, synchronous requests were possible with HTTP over TCP. SPDY changed all that.</p><p><a href="http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3">SPDY/3</a> introduced <a href="http://en.wikipedia.org/wiki/Flow_control_(data)">flow control</a> so that SPDY clients (and servers) could control the amount of data they receive on a SPDY connection. <a href="http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1">SPDY/3.1</a> extended flow control to individual SPDY streams (each SPDY connection handles multiple simultaneous streams of data). Flow control is important because different clients (think of the differences in available memory in laptops, desktops and mobile phones) will have varying limitations on how much data they can receive at any one time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/65Q11v5zJ1nBrsBJAYYl7x/d19997a15bcbbfe80e5b7268aedb6a5a/9739472905_4dd2992b6c_c.jpg" />
            
            </figure>
    <div>
      <h3>Protocols coming QUIC and fast</h3>
      <a href="#protocols-coming-quic-and-fast">
        
      </a>
    </div>
    <p>Part of the service CloudFlare provides is being on top of the latest advances in Internet and web technologies. We've stayed on top of SPDY and will continue to roll out updates as the protocol evolves (and we'll support HTTP/2 just as soon as it is practical).</p><p>The most recent web protocol we've started experimenting with is QUIC. QUIC is radically different from HTTP or SPDY because it uses UDP as its underlying transport. Currently, we have an internal QUIC server running that serves a static version of the CloudFlare home page. As our experiments with QUIC progress we'll make an alpha site available for people who, like us, are interested in experimenting with bleeding edge technologies.</p><p>If you're interested in playing with QUIC today you'll need to build the test <a href="https://code.google.com/p/chromium/codesearch#chromium/src/net/tools/quic/&amp;ct=rc&amp;cd=2&amp;q=quic&amp;sq=package:chromium">QUIC server and client</a> that are part of the <a href="http://www.chromium.org/">Chromium</a> project, get <a href="https://www.google.co.uk/intl/en/chrome/browser/canary.html">Google Chrome Canary</a> and enable QUIC in <a>chrome://flags</a>. You'll probably also want the latest <a href="http://www.wireshark.org/download.html">Wireshark</a> (the development release) which is capable of decoding QUIC frames.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YkDhnZrbBRzcJDyG9jprc/487e5f62fc83d51cdf227ebd74d0f134/Screen_Shot_2014-02-17_at_13.01.26.png" />
            
            </figure><p>And once QUIC makes the move from experimental to beta we'll be sure to make it available for our customers.</p> ]]></content:encoded>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[spdy]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Claire]]></category>
            <category><![CDATA[QUIC]]></category>
            <guid isPermaLink="false">7vzWPpCYEy8qZe6HioMhFv</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Using CloudFlare to mix domain sharding and SPDY]]></title>
            <link>https://blog.cloudflare.com/using-cloudflare-to-mix-domain-sharding-and-spdy/</link>
            <pubDate>Thu, 26 Dec 2013 17:00:00 GMT</pubDate>
            <description><![CDATA[ It’s common knowledge that domain sharding, where the resources in a web page are shared across different domains (or subdomains), is a good thing.  ]]></description>
            <content:encoded><![CDATA[ <p><i>Note: this post originally appeared as part of the </i><a href="http://calendar.perfplanet.com/2013/"><i>2013 PerfPlanet Calendar</i></a></p><p>It’s common knowledge that domain sharding, where the resources in a web page are shared across different domains (or subdomains), is a good thing. It’s a good thing because browsers limit the number of connections per domain: splitting a web page across domains means more connections and hence faster page downloads. Overall domain sharding results in a better end-user experience, and can be a useful way of sharing load across web servers.</p><p>But with the adoption of Google’s SPDY protocol the domain sharding situation is totally different. In fact, domain sharding can hurt performance when SPDY is in use and isn’t <a href="http://www.chromium.org/spdy/spdy-best-practices">recommended</a>. To understand why, here’s the popular 4chan.org web site downloaded without SPDY but using SSL (it’s possible to do this comparison without SSL, but less interesting because the timings are very different).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ujkST6m884tzfzwb11KJx/1fb5571fe8c36739468aa4f3332374e3/jgc6.png" />
            
            </figure><p>You can see that there are three domains involved: <a href="http://www.4chan.org">www.4chan.org</a> (from which the initial HTML is downloaded), s.4cdn.org and t.4cdn.org. 4chan is using two domains to shard resources like JavaScript, CSS and images. After the initial HTML is downloaded on line 1, the browser (I used IE 10 here) looks up the DNS entry for s.4cdn.org and t.4cdn.org and opens three connections to each (lines 2 to 7).</p><p>In the diagram above the orange represents the TCP connection, and the purple the SSL negotiation. After using those 6 connections to download a resource, the same connections are reused (classic HTTP/1.1 Keep-Alive behaviour) to get further resources. Finally, line 16, there’s a separate connection to send Google Analytics information.</p><p>Now take a look at the same site downloaded using SPDY/2 via Google Chrome.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6quofHrSSDVAzXegbsAY5a/97b9764e6e898b42e0e57eb112de849f/jgc8.png" />
            
</figure><p>Line 1 shows the same sort of connection, SSL negotiation and download of the page (here it took 591 ms to complete vs. 549 ms above). But then the behaviour is totally different. Line 2 shows a single TCP connection and single SSL negotiation to s.4cdn.org. That connection is then used to download all the resources for s.4cdn.org and t.4cdn.org in parallel. Finally, there’s the same, separate Google Analytics connection. What you’re seeing there is SPDY in action.</p><p>The SPDY version was slightly faster: the page was visually complete at 1.1 s; with SSL without SPDY it was 1.3 s (although you’d have to account for differences in paint time between IE 10 and Chrome to really understand those values). There are two important things happening in the SPDY version:</p><p>Firstly, Chrome has noticed that s.4cdn.org and t.4cdn.org are the same site (they have the same IP addresses and the certificate for s.4cdn.org is valid for t.4cdn.org as well: it’s a wildcard certificate for 4cdn.org) and so it doesn’t bother with a separate SSL connection: one will do. It then requests resources from each of those domains across the same SPDY connection. To do that it simply specifies the correct Host in the SPDY request. These can be seen in the chrome://net-internals view. Here are two requests on the same SPDY connection for different Hosts.</p>
            <pre><code>t=1386958552557 [st=  1]    SPDY_SESSION_SYN_STREAM
--&gt; fin = true
--&gt; accept: */*
accept-encoding: gzip,deflate,sdch
accept-language: en-US,en;q=0.8,fr;q=0.6
cache-control: no-cache
host: s.4cdn.org
method: GET
pragma: no-cache
referer: https://www.4chan.org/
scheme: https
url: /js/fp-combined-compiled.7.js
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36
version: HTTP/1.1
--&gt; stream_id = 5
--&gt; unidirectional = false
t=1386958552577 [st= 21]    SPDY_SESSION_SYN_STREAM
--&gt; fin = true
--&gt; accept: image/webp,*/*;q=0.8
accept-encoding: gzip,deflate,sdch
accept-language: en-US,en;q=0.8,fr;q=0.6
cache-control: no-cache
host: t.4cdn.org
method: GET
pragma: no-cache
referer: https://www.4chan.org/
scheme: https
url: /cgl/thumb/1386957763148s.jpg
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36
version: HTTP/1.1
--&gt; stream_id = 7
--&gt; unidirectional = false</code></pre>
            <p>So, Chrome has detected that these domains are actually the same machine and made a single connection. That’s great, and doesn’t have a performance impact, but the actual decision to shard at all has had an impact.</p><p>Secondly, notice how in the SPDY case there are two TLS negotiations (one for <a href="http://www.4chan.org">www.4chan.org</a> and one for s.4cdn.org). The site could have loaded much faster if all the resources had been on <a href="http://www.4chan.org">www.4chan.org</a> (or on domains that shared a certificate; for this reason wildcard certificates work well with SPDY connections because the browser can use a single shared connection) because the entire download could have been done in a SPDY connection. Because 4chan uses a special domain (on a different IP with a different certificate) for resources it’s necessary to set up a new connection. In the example above, all the resources have to wait for a DNS lookup (27 ms), TCP connection (29 ms) and SSL negotiation (71 ms) before the SPDY connection can start requesting them. That’s a total of 127 ms. The page was visually complete in 1100 ms; if a single domain had been used then SPDY would have saved another 127 ms (almost 12% of the time).</p><p>So, for SPDY it’s actually better not to shard; for non-SPDY clients domain sharding remains a useful technique. (If you are interested in the actual test data the IE 10/SSL test is <a href="http://www.webpagetest.org/result/131213_FK_QC1/">http://www.webpagetest.org/result/131213_FK_QC1/</a> and the Chrome/SPDY test is <a href="http://www.webpagetest.org/result/131213_NE_QCQ/">http://www.webpagetest.org/result/131213_NE_QCQ/</a>.)</p>
    <div>
      <h3>The best of both worlds</h3>
      <a href="#the-best-of-both-worlds">
        
      </a>
    </div>
    <p>The question then becomes, can you have the best of both worlds? With a little DNS trickery it’s possible to set up a site that works well whether SPDY is available or not. Since I don’t have access to 4chan to do live experiments, I copied the <a href="http://www.4chan.org">www.4chan.org</a> home page and all the included resources to my own web server and set up three domains: r.jgc.org (the root domain), s.jgc.org (equivalent to s.4cdn.org) and t.jgc.org (equivalent to t.4cdn.org). I then manually edited the HTML and CSS so that all the linked resources pointed to either s.jgc.org or t.jgc.org in the same manner as the original 4chan site. But, critically, I used a single certificate for all three domains. Here’s the site being loaded using IE 10 over SSL.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36hLzFeZOJ8i7mdm7MP6Hf/33556c46922a0a2e7a649c6c64893fe7/jgc2.png" />
            
            </figure><p>And here’s the site loaded using Chrome with SPDY.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gD8cltZjtCKvomgdCgtF1/8e19d99bada976cf600b34cf849b641d/jgc4.png" />
            
</figure><p>As you can see, the domain sharding worked in IE 10. There are multiple connections to the s.jgc.org and t.jgc.org domains downloading resources in parallel. And the same configuration worked for Chrome with SPDY because it detected that these shared a certificate and used a single SPDY connection for everything (including the initial page download).</p><p>In Chrome there was only a single TCP connection and a single DNS lookup needed despite the presence of three domains. The IE 10/SSL version was visually complete in 1100 ms and used 9 TCP/SSL connections (plus one extra for Google Analytics). The Chrome/SPDY version was visually complete 200 ms earlier (at 900 ms) and used… a single SPDY connection (plus an extra connection for Google Analytics). If you’re interested, the IE 10/SSL test is <a href="http://www.webpagetest.org/result/131213_NP_TB3/">here</a> and the Chrome/SPDY test is <a href="http://www.webpagetest.org/result/131213_ZP_TBY/">here</a>.</p><p>For the best performance across both older browsers and the latest, shiny SPDY browsers, domain sharding should still be used; but using a certificate that covers all the sharded domains means that only a single SPDY connection will be needed.</p>
    <div>
      <h3>CloudFlare makes this easy</h3>
      <a href="#cloudflare-makes-this-easy">
        
      </a>
    </div>
    <p>CloudFlare has both <a href="https://www.cloudflare.com/features-security">simple SSL</a> options and <a href="https://www.cloudflare.com/features-cdn">push button SPDY</a> available. By setting up SSL on CloudFlare with subdomains you'll automatically get SPDY as well. In the test above it took me about 10 minutes to set up the test subdomains on jgc.org and enable both SSL and SPDY.</p>
    <div>
      <h3>Thanks</h3>
      <a href="#thanks">
        
      </a>
    </div>
    <p>Thanks to Andrew Galloni for his assistance reviewing and investigating the interaction between SPDY and domain sharding.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[spdy]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4DLn2kM9SaXhB138yI9EFd</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why some cryptographic keys are much smaller than others]]></title>
            <link>https://blog.cloudflare.com/why-are-some-keys-small/</link>
            <pubDate>Fri, 20 Sep 2013 06:00:00 GMT</pubDate>
            <description><![CDATA[ If you connect to CloudFlare's web site using HTTPS the connection will be secured using one of the many encryption schemes supported by SSL/TLS.  ]]></description>
            <content:encoded><![CDATA[ <p>If you connect to <a href="https://www.cloudflare.com/">CloudFlare's web site</a> using HTTPS the connection will be secured using one of the many encryption schemes supported by SSL/TLS. When I connect using Chrome I get an RC4_128 connection (with a 128-bit key) which used the ECDHE_RSA key exchange mechanism (with a 2,048-bit key) to set the connection up.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7G4da35qbg9auSgeT2YClg/94a14cd2d77227b4a74d532131b8b52c/cloudflare-tls.png" />
            
            </figure><p>If you're not familiar with the cryptographic protocols involved you might be wondering why one part uses a 128-bit key and another a 2,048-bit key. And you'd be forgiven for wondering why a large key wasn't used throughout and whether a 128-bit key is weaker than a 2,048-bit key. This blog post will explain why a 128-bit <i>symmetric</i> key is, in fact, a bit more secure than a 2,048-bit <i>asymmetric</i> key; you have to look at both the type of encryption being used (symmetric or asymmetric) and the key length to understand the strength of the encryption.</p><p>My connection above used a symmetric cipher (RC4_128) with a 128-bit key and an asymmetric cipher (ECDHE_RSA) with a 2,048-bit key.</p><p>You might also have seen other key lengths in use. For example, when I connect to the British Government portal <a href="https://www.gov.uk/">gov.uk</a> I get a TLS connection that uses AES_256_CBC (with a 256-bit key) set up using RSA with a 2,048-bit key. It's not uncommon to see RSA with a 1,024-bit key as well.</p><p>To understand these key lengths it's necessary to understand a little about the actual encryption schemes they are used with.</p>
    <div>
      <h3>Symmetric Cryptography</h3>
      <a href="#symmetric-cryptography">
        
      </a>
    </div>
    <p>The RC4_128 and AES_256_CBC schemes mentioned above are <a href="https://en.wikipedia.org/wiki/Symmetric-key_algorithm">symmetric cryptographic schemes</a>. Symmetric simply means that the same key is used to encipher and decipher the encrypted web traffic. In one case, a 128-bit key is used, in another a 256-bit key.</p><p>Symmetric cryptography is the oldest form there is. When children use a <a href="https://en.wikipedia.org/wiki/Caesar_cipher">Caesar Cipher</a> (shifting each letter in the alphabet some fixed number of places) they are performing symmetric cryptography. In that case, the key is the number of places to shift letters and there are 26 possible keys (which is roughly like saying the Caesar Cipher has a 5-bit key).</p><p>Here's a Caesar Shift with a key of 7 (each letter is moved up in the alphabet 7 places):</p>
            <pre><code>ISTHE BESTT HATCO RPORA LBOBB YSHAF TOECA NDOON SHORT NOTIC E
PZAOL ILZAA OHAJV YWVYH SIVII FZOHM AVLJH UKVVU ZOVYA UVAPJ L</code></pre>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4QAkwOHHM2r0OUPElBkBpv/ed81bf8342b3b74d2c701e8094b340ff/caesar-cipher.png" />
            
            </figure><p>There are lots of ways to break the Caesar Cipher, but one way is to try out all 26 possible keys. That's really not that hard since there are only 26 possible solutions:</p>
            <pre><code>PZAOL ILZAA OHAJV YWVYH SIVII FZOHM AVLJH UKVVU ZOVYA UVAPJ L
----- ----- ----- ----- ----- ----- ----- ----- ----- ----- -</code></pre>
            <pre><code>0: PZAOL ILZAA OHAJV YWVYH SIVII FZOHM AVLJH UKVVU ZOVYA UVAPJ L
1: OYZNK HKYZZ NGZIU XVUXG RHUHH EYNGL ZUKIG TJUUT YNUXZ TUZOI K
2: NXYMJ GJXYY MFYHT WUTWF QGTGG DXMFK YTJHF SITTS XMTWY STYNH J
3: MWXLI FIWXX LEXGS VTSVE PFSFF CWLEJ XSIGE RHSSR WLSVX RSXMG I
4: LVWKH EHVWW KDWFR USRUD OEREE BVKDI WRHFD QGRRQ VKRUW QRWLF H
5: KUVJG DGUVV JCVEQ TRQTC NDQDD AUJCH VQGEC PFQQP UJQTV PQVKE G
6: JTUIF CFTUU IBUDP SQPSB MCPCC ZTIBG UPFDB OEPPO TIPSU OPUJD F
7: ISTHE BESTT HATCO RPORA LBOBB YSHAF TOECA NDOON SHORT NOTIC E
8: HRSGD ADRSS GZSBN QONQZ KANAA XRGZE SNDBZ MCNNM RGNQS MNSHB D
9: GQRFC ZCQRR FYRAM PNMPY JZMZZ WQFYD RMCAY LBMML QFMPR LMRGA C
10: FPQEB YBPQQ EXQZL OMLOX IYLYY VPEXC QLBZX KALLK PELOQ KLQFZ B
11: EOPDA XAOPP DWPYK NLKNW HXKXX UODWB PKAYW JZKKJ ODKNP JKPEY A
12: DNOCZ WZNOO CVOXJ MKJMV GWJWW TNCVA OJZXV IYJJI NCJMO IJODX Z
13: CMNBY VYMNN BUNWI LJILU FVIVV SMBUZ NIYWU HXIIH MBILN HINCW Y
14: BLMAX UXLMM ATMVH KIHKT EUHUU RLATY MHXVT GWHHG LAHKM GHMBV X
15: AKLZW TWKLL ZSLUG JHGJS DTGTT QKZSX LGWUS FVGGF KZGJL FGLAU W
16: ZJKYV SVJKK YRKTF IGFIR CSFSS PJYRW KFVTR EUFFE JYFIK EFKZT V
17: YIJXU RUIJJ XQJSE HFEHQ BRERR OIXQV JEUSQ DTEED IXEHJ DEJYS U
18: XHIWT QTHII WPIRD GEDGP AQDQQ NHWPU IDTRP CSDDC HWDGI CDIXR T
19: WGHVS PSGHH VOHQC FDCFO ZPCPP MGVOT HCSQO BRCCB GVCFH BCHWQ S
20: VFGUR ORFGG UNGPB ECBEN YOBOO LFUNS GBRPN AQBBA FUBEG ABGVP R
21: UEFTQ NQEFF TMFOA DBADM XNANN KETMR FAQOM ZPAAZ ETADF ZAFUO Q
22: TDESP MPDEE SLENZ CAZCL WMZMM JDSLQ EZPNL YOZZY DSZCE YZETN P
23: SCDRO LOCDD RKDMY BZYBK VLYLL ICRKP DYOMK XNYYX CRYBD XYDSM O
24: RBCQN KNBCC QJCLX AYXAJ UKXKK HBQJO CXNLJ WMXXW BQXAC WXCRL N
25: QABPM JMABB PIBKW ZXWZI TJWJJ GAPIN BWMKI VLWWV APWZB VWBQK M</code></pre>
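The 'try all 26 keys' attack above is simple enough to script. Here's an illustrative Python sketch (not from the original post) that produces the same table of candidate decodes:

```python
# Brute-force a Caesar cipher by trying all 26 possible keys.
import string

def shift(text, k):
    """Shift each uppercase letter back k places in the alphabet."""
    return "".join(
        chr((ord(c) - ord("A") - k) % 26 + ord("A"))
        if c in string.ascii_uppercase else c
        for c in text
    )

ciphertext = "PZAOL ILZAA OHAJV YWVYH SIVII FZOHM AVLJH UKVVU ZOVYA UVAPJ L"
for k in range(26):
    print(f"{k}: {shift(ciphertext, k)}")
# Only k = 7 yields readable text: "ISTHE BESTT HATCO ..."
```

Only a human (or a frequency analysis) is needed to spot which of the 26 rows is English.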
    <div>
      <h3>How long?</h3>
      <a href="#how-long">
        
      </a>
    </div>
    <p>The goal of modern symmetric cryptography is to make this sort of 'trying out all the possible keys' the only approach to breaking a symmetric cipher. Algorithms like <a href="https://en.wikipedia.org/wiki/RC4">RC4</a> and <a href="https://en.wikipedia.org/wiki/Advanced_Encryption_Standard">AES</a> scramble data based on a key. The key itself is, ideally, randomly chosen from the set of all possible keys.</p><p>On a side note, there are now <a href="/staying-on-top-of-tls-attacks">serious problems with RC4</a> and as better replacements come along (such as ciphersuites based on AES) CloudFlare will update the ciphersuites it uses to provide the best level of protection.</p><p>That said, the basic idea is that the only way to break into a connection secured with a symmetric cipher is to try out all the keys. Which brings us to 128-bit and 256-bit keys.</p><p>A 128-bit key means there are 340,282,366,920,938,463,463,374,607,431,768,211,456 possible keys to try. A 256-bit key has the square of that many keys to try: a huge number.</p><p>To put that in context, imagine trying to test all the keys for 128-bit AES encryption using the <a href="http://en.wikipedia.org/wiki/AES_instruction_set">special AES instructions</a> added to the latest Intel microprocessors. These instructions are designed to be very fast and, according to Intel's own data, decrypting a block of AES-encrypted data would take 5.6 cycles on an <a href="http://ark.intel.com/products/47932">Intel i7 processor</a> with 4 cores.</p><p>Put another way, that processor could try out one key on one block of data in about 1.7 nanoseconds. At that speed it would take about <b>1.3 * 10^12 * the age of the universe</b> to check all the keys (you'd probably only have to check half before finding the right one, so divide that incredibly long time by two).</p><p>Since personal computers became available roughly 1 billion of them have been sold. Imagine that they all had the same top-of-the-line processor and were used to attack a 128-bit key. You'd manage to get the time down to <b>660 * the age of the universe</b>.</p>
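Those figures are easy to sanity-check. Here is a back-of-the-envelope calculation (the value used for the age of the universe, roughly 4.35 × 10^17 seconds, is an assumption, not from the original post):

```python
# Sanity-check the brute-force time estimates above.
SECONDS_PER_KEY = 1.7e-9     # one AES key trial, from the 5.6-cycle figure above
KEYS = 2 ** 128              # size of the 128-bit symmetric key space
AGE_OF_UNIVERSE = 4.35e17    # assumed: ~13.8 billion years in seconds

# One top-of-the-line processor checking every key:
one_cpu = KEYS * SECONDS_PER_KEY / AGE_OF_UNIVERSE
print(f"one CPU: {one_cpu:.1e} x the age of the universe")   # ~1.3e12

# A billion such machines, expecting to find the key halfway through:
billion_cpus = one_cpu / 1e9 / 2
print(f"a billion CPUs: {billion_cpus:.0f} x the age of the universe")
# ~665, matching the roughly 660 quoted above
```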
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1w5RgPJRr4Gnxk4E2rWqFu/cbc76d09e7671de7fe3731c38fcfdb8a/milliways.png" />
            
            </figure><p>Image (c) BBC TV</p><p>So, breaking 128-bit keys by brute force just isn't practical. And breaking 256-bit is even less possible. So, for symmetric ciphers, keys of these lengths make sense.</p><p>But that's not the case for asymmetric cryptography.</p>
    <div>
      <h3>Asymmetric Key Lengths</h3>
      <a href="#asymmetric-key-lengths">
        
      </a>
    </div>
    <p><a href="https://en.wikipedia.org/wiki/Public-key_cryptography">Asymmetric cryptography</a> works by having two different keys, one for encryption and one for decryption. It's also often called 'public key cryptography' because it's possible to make one key public (allowing someone to encrypt a message) while keeping the other private (only the holder of the private key can decrypt the message encrypted with its related public key).</p><p>In order to have these special properties the public and private keys are related by some mathematical process. Aside: in the symmetric examples there's only one key and it's just any value of the right number of bits. This randomness of a symmetric key means it can be relatively short as we saw.</p><p>For example, in the popular RSA scheme used with SSL/TLS the public and private keys consist in part of the product of two large prime numbers. Making an RSA key starts with picking two random prime numbers. The security of RSA relies (in part) on the fact that it's easy to choose two random prime numbers, but it's very hard to discover what they are when just given the product of them.</p><p>Suppose there are two prime numbers picked at random called p0 and p1. Part of the RSA public (and private) key is called the modulus and it is just p0*p1. If an attacker can decompose (or factor) the modulus into p0 and p1 they can break RSA because they can work out the private key. Mathematicians believe that it is very hard to factor a product of two primes and the security of web transactions relies, in part, on that belief.</p><p>Typical RSA key sizes are 1,024 or 2,048 or 4,096 bits. That number is the number of bits in the modulus. For each there will be a pair of primes of roughly 512 bits or 1,024 bits or 2,048 bits depending on the key size picked. 
Those primes are chosen by some random process (highlighting once again the <a href="/why-randomness-matters">importance of random number generators</a>).</p><p>But we still haven't answered the question of why these key sizes are so large. Just as in the symmetric key case, attacks on, say, 2,048-bit RSA are based on trying out all keys of a certain size, but unlike the symmetric key scheme not every 2,048-bit number is an RSA key (because it has to be the product of two primes).</p><p>So, although the key size is larger there are actually fewer possible RSA keys for any given number of bits than there are for the same symmetric key size. That's because there are only so many prime numbers of that size and below. The RSA scheme can only use pairs of prime numbers, whereas the symmetric schemes can use any number at all of the same size.</p><p>This diagram (called an <a href="https://en.wikipedia.org/wiki/Ulam_spiral">Ulam Spiral</a>) shows the numbers from 1 to 40,000 as black or white dots. The black dots are the primes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3VIXrzzZVLON430lnHxCPO/940bf5e06b1965d20e0f6371c695ca06/ulam-spiral.png" />
            
            </figure><p>Image from <a href="https://en.wikipedia.org/wiki/File:Ulam_1.png">Wikipedia</a></p><p>If you used a 256-bit RSA key (roughly consisting of two 128-bit prime numbers multiplied together) you'd quickly find that your encryption had been broken by someone using a fast home PC. There are only so many 128-bit prime numbers, and there are fast ways of attacking the factorization problem (such as the <a href="https://en.wikipedia.org/wiki/General_number_field_sieve">General Number Field Sieve</a>) that actually make breaking RSA keys a little easier than trying out every single key.</p><p>Any time there's a pattern in a cryptographic key it represents a chink in the cryptography's armor. For example, in a perfect world, people would pick completely random passwords. Because they don't there are patterns in the passwords and they can be guessed or broken without trying out every possible password.</p><p>RSA keys have a distinctive pattern: they are the product of two prime numbers. That provides the chink; today that chink is best exploited by the General Number Field Sieve. In the symmetric key case there are no such patterns: the keys are just large randomly-chosen numbers. (Of course, if you don't pick your symmetric key randomly you might actually be helping an attacker find a way to break your encrypted messages.)</p><p>A few years ago the 512-bit RSA key used to sign software for TI calculators was broken by <a href="https://en.wikipedia.org/wiki/Texas_Instruments_signing_key_controversy">an individual</a> using a PC that ran for 73 days using the open source <a href="http://www.boo.net/~jasonp/qs.html">msieve and ggnfs</a> programs.</p><p>So, asymmetric keys have to be much larger than symmetric keys because there are fewer of them for a given number of bits, and because there are patterns within the keys themselves.</p>
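To see why small asymmetric keys fall so quickly, here's a toy illustration (absurdly small primes, not real RSA key generation): a modulus built from two small primes is recovered instantly by plain trial division, the crudest factoring method there is.

```python
# Toy illustration only: factor a tiny 'RSA-style' modulus by trial division.
def factor(n):
    """Trial-divide n; returns (p, q) with p <= q for a product of two primes."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1   # n itself is prime

p0, p1 = 61, 53          # two small 'randomly chosen' primes
modulus = p0 * p1        # 3233: the public part of the toy key
print(factor(modulus))   # the 'private' primes fall out immediately
```

Real moduli resist this only because the primes are hundreds of bits long, and even then sieve algorithms do much better than trial division.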
    <div>
      <h3>Recommendations</h3>
      <a href="#recommendations">
        
      </a>
    </div>
    <p>The <a href="http://www.keylength.com/en/3/">ECRYPT II recommendations</a> on key length say that a 128-bit symmetric key provides the same strength of protection as a 3,248-bit asymmetric key, and that those key lengths provide long-term protection of data encrypted with them.</p><p>The length of time a key is good for is also important. Over time computers get faster and techniques for breaking encryption schemes (particularly techniques for breaking asymmetric encryption) get better. That 512-bit key used for TI calculators probably looked pretty safe when it was first chosen. And in 1999 a key of that length took a <a href="https://en.wikipedia.org/wiki/Cray_C90">supercomputer</a> to break.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/43FhANStq4MmuxKjgUXS6Y/70e7ba13b64f49f60eec15570a1f037a/sneakers.png" />
            
            </figure><p>We keep an eye on these reports when choosing ciphers and key lengths to secure our and our customers' communications.</p><p>Because of the importance of protecting our customers' communications CloudFlare has also opted to roll out <a href="/staying-on-top-of-tls-attacks">forward secrecy</a> for our SSL/TLS connections. That means that the public/private keys used for connections are generated freshly each time. That prevents bulk attacks where a single public/private key (such as the one used in simple RSA-based certificates) is broken revealing all of the symmetric keys used to secure HTTPS for a web site.</p>
    <div>
      <h3>Forward Secrecy and Elliptic Curves</h3>
      <a href="#forward-secrecy-and-elliptic-curves">
        
      </a>
    </div>
    <p>Returning to the HTTPS connection I made to CloudFlare at the start, the key negotiation was done using ECDHE_RSA. That's the ephemeral version of the Diffie-Hellman key exchange mechanism that uses elliptic curves, with RSA for authentication. That's quite a mouthful. It breaks down like this:</p><ol><li><p>The public/private key pair used for this connection was ephemeral: it was only created for this connection. That's what gives 'forward secrecy'.</p></li><li><p>The actual public-key encryption scheme used was <a href="https://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman">Elliptic Curve Diffie-Hellman</a>. Elliptic Curve Cryptography uses a different branch of mathematics than RSA. Looking at the ECRYPT II report shows that a 128-bit symmetric key is as strong as a 3,248-bit asymmetric key; to get the equivalent strength from an Elliptic Curve Cryptographic scheme requires a key with 256 bits.</p></li><li><p>So, Google Chrome set up an ephemeral <a href="https://www.imperialviolet.org/2011/11/22/forwardsecret.html">256-bit</a> Elliptic Curve Diffie-Hellman public/private key pair and used it to agree on a 128-bit symmetric key for the rest of the communication.</p></li><li><p>To prove that the web site really was <a href="http://www.cloudflare.com">www.cloudflare.com</a> the 2,048-bit RSA key was used along with the web site's certificate.</p></li></ol><p>So, three different key lengths were used: 128-bit (with RC4), 256-bit (with ECDHE) and 2,048-bit (with RSA). All three key lengths provide similar levels of security.</p>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[RSA]]></category>
            <guid isPermaLink="false">47tSgx3GbC9M36Jv0ftTSZ</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why Google Went Offline Today and a Bit about How the Internet Works]]></title>
            <link>https://blog.cloudflare.com/why-google-went-offline-today-and-a-bit-about/</link>
            <pubDate>Tue, 06 Nov 2012 09:09:00 GMT</pubDate>
            <description><![CDATA[ Today, Google's services experienced a limited outage for about 27 minutes over some portions of the Internet. The reason this happened dives into the deep, dark corners of networking.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, Google's services experienced a limited outage for about 27 minutes over some portions of the Internet. The reason this happened dives into the deep, dark corners of networking. I'm a network engineer at CloudFlare and I played a small part in helping ensure Google came back online. Here's a bit about what happened.</p><p>At around 6:24pm PST / 02:24 UTC (5 Nov. 2012 PST / 6 Nov. 2012 UTC), CloudFlare employees noticed that Google's services were offline. We use Google Apps for things like email so when we can't reach their servers the office notices quickly. I'm on the Network Engineering team so I jumped online to figure out if the problem was local to us or global.</p>
    <div>
      <h3>Troubleshooting</h3>
      <a href="#troubleshooting">
        
      </a>
    </div>
    <p>I quickly realised that we were unable to resolve any of Google's services — or even reach 8.8.8.8, Google's public DNS server — so I started troubleshooting DNS.</p>
            <pre><code>$ dig +trace google.com</code></pre>
            <p>Here's the response I got when I tried to reach any of Google.com's name servers:</p>
            <pre><code>google.com.                172800        IN        NS        ns2.google.com.
google.com.                172800        IN        NS        ns1.google.com.
google.com.                172800        IN        NS        ns3.google.com.
google.com.                172800        IN        NS        ns4.google.com.
;; Received 164 bytes from 192.12.94.30#53(e.gtld-servers.net) in 152 ms
;; connection timed out; no servers could be reached</code></pre>
            <p>The fact that no servers could be reached means something was wrong. Specifically, it meant that from our office network we were unable to reach any of Google's DNS servers.</p><p>I started to look at the network layer to see if that's where the problem lay.</p>
            <pre><code>PING 216.239.32.10 (216.239.32.10): 56 data bytes
Request timeout for icmp_seq 0
92 bytes from 1-1-15.edge2-eqx-sin.moratelindo.co.id (202.43.176.217): Time to live exceeded</code></pre>
            <p>That was curious. Normally, we shouldn't be seeing an Indonesian ISP (Moratel) in the path to Google. I jumped on one of CloudFlare's routers to check what was going on. Meanwhile, other reports from around the globe on Twitter suggested we weren't the only ones seeing the problem.</p>
    <div>
      <h3>Internet Routing</h3>
      <a href="#internet-routing">
        
      </a>
    </div>
    <p>To understand what went wrong you need to understand a bit about how networking on the Internet works. The Internet is a collection of networks, known as "Autonomous Systems" (AS). Each network has a unique number to identify it, known as an AS number. CloudFlare's AS number is 13335, Google's is 15169. The networks are connected together by what is known as the Border Gateway Protocol (BGP). BGP is the glue of the Internet — announcing what IP addresses belong to each network and establishing the routes from one AS to another. An Internet "route" is exactly what it sounds like: a path from an IP address on one AS to an IP address on another AS.</p><p>BGP is largely a trust-based system. Networks trust each other to say which IP addresses and other networks are behind them. When you send a packet or make a request across the network, your ISP connects to its upstream providers or peers and finds the shortest path from your ISP to the destination network.</p><p>Unfortunately, if a network starts to announce a particular IP address or network behind it, when in fact it is not behind it, and that network is trusted by its upstreams and peers, then packets can end up misrouted. That is what was happening here.</p><p>I looked at the BGP routes for a Google IP address. The route traversed Moratel (23947), an Indonesian ISP. Given that I was looking at the routing from California and Google operates data centers not far from our office, packets should never be routed via Indonesia. The most likely cause was that Moratel was announcing a network that wasn't actually behind them.</p><p>The BGP route I saw at the time was:</p>
            <pre><code>tom@edge01.sfo01&gt; show route 216.239.34.10

inet.0: 422168 destinations, 422168 routes (422154 active, 0 holddown, 14 hidden)
+ = Active Route, - = Last Active, * = Both

216.239.34.0/24    *[BGP/170] 00:15:47, MED 18, localpref 100
                      AS path: 4436 3491 23947 15169 I
                    &gt; to 69.22.153.1 via ge-1/0/9.0</code></pre>
            <p>Looking at other routes, for example to Google's Public DNS, it was also stuck routing down the same (incorrect) path:</p>
            <pre><code>tom@edge01.sfo01&gt; show route 8.8.8.8

inet.0: 422196 destinations, 422196 routes (422182 active, 0 holddown, 14 hidden)
+ = Active Route, - = Last Active, * = Both

8.8.8.0/24         *[BGP/170] 00:27:02, MED 18, localpref 100
                      AS path: 4436 3491 23947 15169 I
                    &gt; to 69.22.153.1 via ge-1/0/9.0</code></pre>
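The suspicious AS path in that router output can be checked mechanically. Here is a hypothetical Python sketch of the idea (the parsing helper is illustrative, not CloudFlare tooling; the AS numbers come from this post):

```python
# Flag an unexpected AS in a BGP AS path, as seen in the leaked routes above.
GOOGLE = 15169    # Google's AS number
MORATEL = 23947   # the Indonesian ISP that leaked the route

def path_ases(as_path):
    """Parse a 'show route' AS path string like '4436 3491 23947 15169 I'."""
    return [int(tok) for tok in as_path.split() if tok.isdigit()]

leaked = path_ases("4436 3491 23947 15169 I")
assert leaked[-1] == GOOGLE     # the route does terminate at Google...
print(MORATEL in leaked)        # ...but via an AS that shouldn't be in the path
```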
    <div>
      <h3>Route Leakage</h3>
      <a href="#route-leakage">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/22fmryxOS3Tm9oxu2HxywQ/316632dbcb404c1c21d9df929a7fe5fb/fingersyouhaveusedtodial.png.scaled500.png" />
            
            </figure><p>(Image Credit: The Simpsons)</p><p>Situations like this are referred to in the industry as "route leakage", as the route has "leaked" past its normal path. This isn't an unprecedented event. Google previously suffered a <a href="http://www.renesys.com/blog/2008/02/pakistan-hijacks-youtube-1.shtml">similar outage</a> when Pakistan was allegedly trying to censor a video on YouTube and the national ISP of Pakistan null routed the service's IP addresses. Unfortunately, they leaked the null route externally. Pakistan Telecom's upstream provider, PCCW, trusted what Pakistan Telecom was sending them and the routes spread across the Internet. The effect was that YouTube was knocked offline for around 2 hours.</p><p>The case today was similar. Someone at Moratel likely "fat fingered" an Internet route. PCCW, Moratel's upstream provider, trusted the routes Moratel was sending to them. And, quickly, the bad routes spread. It is unlikely this was malicious; rather, it was a misconfiguration or an error evidencing some of the failings in the BGP trust model.</p>
    <div>
      <h3>The Fix</h3>
      <a href="#the-fix">
        
      </a>
    </div>
    <p>The solution was to get Moratel to stop announcing the routes they shouldn't be. A large part of being a network engineer, especially working at a large network like CloudFlare's, is having relationships with other network engineers around the world. When I figured out the problem, I contacted a colleague at Moratel to let him know what was going on. He was able to fix the problem at around 2:50 UTC / 6:50pm PST. Around 3 minutes later, routing returned to normal and Google's services came back online.</p><p>Looking at peering maps, I'd estimate the outage impacted around 3–5% of the Internet's population. The heaviest impact will have been felt in Hong Kong, where PCCW is the incumbent provider. If you were in the area and unable to reach Google's services around that time, now you know why.</p>
    <div>
      <h3>Building a Better Internet</h3>
      <a href="#building-a-better-internet">
        
      </a>
    </div>
    <p>All this is a reminder that the Internet is a system built on trust. Today's incident shows that, even if you're as big as Google, factors outside of your direct control can impact your customers' ability to get to your site, so it's important to have a network engineering team that is watching routes and managing your connectivity around the clock. CloudFlare works every day to ensure our customers get the best possible routes. We look out for all the websites on our network to ensure that their traffic is always delivered as fast as possible. Just another day in our ongoing efforts to <a href="https://twitter.com/search?q=%23savetheweb">#savetheweb</a>.</p>
    <div>
      <h4>Update: Tuesday, November 6 11:00am PST</h4>
      <a href="#update-tuesday-november-6-11-00am-pst">
        
      </a>
    </div>
    <p>Moratel says the issue was caused by an unexpected hardware failure that led to this abnormal condition; it was not a malicious attempt. Moratel immediately shut down its BGP peering with Google after contact was made, while the hardware failure was investigated.</p><hr /><p><i>Thanks for reading all the way to the end. If you enjoyed this post, take a second to </i><a href="http://www.cloudflare.com/overview"><i>learn more about CloudFlare</i></a><i> or </i><a href="http://crunchies2012.techcrunch.com/nominate/?MTpDbG91ZEZsYXJl"><i>nominate us for the 2012 Crunchie Award for Best Technical Innovation</i></a><i>.</i></p>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Outage]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Tech Talks]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">4faLB1sIG2FgoyFXgvQ6Qr</guid>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
    </channel>
</rss>