
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 20:04:44 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Secondary DNS - Deep Dive]]></title>
            <link>https://blog.cloudflare.com/secondary-dns-deep-dive/</link>
            <pubDate>Tue, 15 Sep 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ The goal of Cloudflare-operated Secondary DNS is to allow customers with custom DNS solutions, whether on-premise or at another DNS provider, to take advantage of Cloudflare's DNS performance and, more recently, through Secondary Override, our proxying and security capabilities too. ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>How Does Secondary DNS Work?</h2>
    </div>
    <p>If you already understand how Secondary DNS works, please feel free to skip this section. It does not provide any Cloudflare-specific information.</p><p>Secondary DNS has many use cases across the Internet; however, traditionally, it was used as a synchronized backup for when the primary DNS server was unable to respond to queries. A more modern approach involves focusing on redundancy across many different nameservers, which in many cases announce the same anycast IP address.</p><p>Secondary DNS involves the unidirectional transfer of DNS zones from the primary to the Secondary DNS server(s). One primary can have any number of Secondary DNS servers that it must communicate with in order to keep them synchronized with any zone updates. A zone update is considered a change in the contents of a zone, which ultimately leads to a Start of Authority (SOA) serial number increase. The zone’s SOA serial is one of the key elements of Secondary DNS; it is how primary and secondary servers synchronize zones. Below is an example of what an SOA record might look like during a dig query.</p>
            <pre><code>example.com	3600	IN	SOA	ashley.ns.cloudflare.com. dns.cloudflare.com. 
2034097105  // Serial
10000 // Refresh
2400 // Retry
604800 // Expire
3600 // Minimum TTL</code></pre>
            <p>Each of the numbers is used in the following way:</p><ol><li><p>Serial - Used to keep track of the status of the zone; it must be incremented at every change.</p></li><li><p>Refresh - The maximum number of seconds that can elapse before a Secondary DNS server must check for a SOA serial change.</p></li><li><p>Retry - The maximum number of seconds that can elapse before a Secondary DNS server must check for a SOA serial change, after previously failing to contact the primary.</p></li><li><p>Expire - The maximum number of seconds that a Secondary DNS server can serve stale information, in the event the primary cannot be contacted.</p></li><li><p>Minimum TTL - Per <a href="https://tools.ietf.org/html/rfc2308">RFC 2308</a>, the number of seconds that a DNS negative response should be cached for.</p></li></ol><p>Using the above information, the Secondary DNS server stores an SOA record for each of the zones it is tracking. When the serial increases, it knows that the zone must have changed, and that a zone transfer must be initiated.</p>
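<p>Deciding whether "the serial increased" is subtler than a plain numeric comparison, because 32-bit serials eventually wrap around; SOA serials are compared using sequence-space (serial number) arithmetic as defined in RFC 1982. A minimal sketch in Python (function names are our own illustration, not Cloudflare's code):</p>

```python
# Sketch: RFC 1982 serial number arithmetic for 32-bit SOA serials.
# A zone has changed when the primary's serial is "greater" than the
# secondary's stored serial, accounting for wrap-around at 2**32.

SERIAL_BITS = 32
HALF = 2 ** (SERIAL_BITS - 1)   # 2**31
MOD = 2 ** SERIAL_BITS          # 2**32

def serial_gt(s1: int, s2: int) -> bool:
    """True if serial s1 is greater than s2 in sequence space."""
    return s1 != s2 and ((s1 - s2) % MOD) < HALF

def zone_changed(primary_serial: int, stored_serial: int) -> bool:
    """A transfer should be initiated when the primary moved ahead."""
    return serial_gt(primary_serial, stored_serial)
```

<p>Note that with this comparison a primary that wraps from 4294967295 back past 0 is still correctly seen as "newer" by the secondary.</p>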
    <div>
      <h2>Serial Tracking</h2>
    </div>
    <p>Serial increases can be detected in the following ways:</p><ol><li><p>The fastest way for the Secondary DNS server to keep track of a serial change is to have the primary server send a NOTIFY any time a zone has changed, using the DNS protocol as specified in <a href="https://www.ietf.org/rfc/rfc1996.txt">RFC 1996</a>. Secondary DNS servers will then instantly be able to initiate a zone transfer.</p></li><li><p>Another way is for the Secondary DNS server to simply poll the primary every “Refresh” seconds. This isn’t as fast as the NOTIFY approach, but it is a good fallback in case the NOTIFYs have failed.</p></li></ol><p>One of the issues with the basic NOTIFY protocol is that anyone on the Internet could potentially notify the Secondary DNS server of a zone update. If an initial SOA query is not performed by the Secondary DNS server before initiating a zone transfer, this is an easy way to perform an <a href="https://www.cloudflare.com/learning/ddos/dns-amplification-ddos-attack/">amplification attack</a>. There are two common ways to prevent anyone on the Internet from being able to NOTIFY Secondary DNS servers:</p><ol><li><p>Using transaction signatures (TSIG) as per <a href="https://tools.ietf.org/html/rfc2845">RFC 2845</a>. These are to be placed as the last record in the extra records section of the DNS message. Usually the number of extra records (or ARCOUNT) should be no more than two in this case.</p></li><li><p>Using IP-based access control lists (ACLs). This increases security but also reduces flexibility in server location and IP address allocation.</p></li></ol><p>Generally, NOTIFY messages are sent over UDP; however, TCP can be used in the event the primary server has reason to believe that TCP is necessary (e.g. firewall issues).</p>
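<p>The polling fallback is driven by the Refresh, Retry, and Expire timers from the SOA record shown earlier. A rough sketch of that decision logic, using the values from the example dig output (the function names are illustrative, not any real implementation):</p>

```python
# Sketch: SOA-timer-driven polling by a secondary (values from the
# example SOA above: Refresh=10000, Retry=2400, Expire=604800).

REFRESH, RETRY, EXPIRE = 10000, 2400, 604800

def next_check(now: int, last_attempt_failed: bool) -> int:
    """When (seconds since epoch) to next query the primary's SOA:
    after a failure, back off only RETRY seconds; otherwise wait
    the full REFRESH interval."""
    return now + (RETRY if last_attempt_failed else REFRESH)

def may_serve(now: int, last_success: int) -> bool:
    """A secondary must stop serving a zone once EXPIRE seconds
    have passed without a successful refresh from the primary."""
    return now - last_success <= EXPIRE
```
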
    <div>
      <h2>Zone Transfers</h2>
    </div>
    <p>In addition to serial tracking, it is important to ensure that a standard protocol is used between the primary and Secondary DNS server(s) to efficiently transfer the zone. DNS zone transfer protocols do not attempt to provide confidentiality, authentication, or integrity on their own; however, the use of TSIG on top of the basic zone transfer protocols can provide integrity and authentication. As a result of <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> being a public protocol, confidentiality during the zone transfer process is generally not a concern.</p>
    <div>
      <h3>Authoritative Zone Transfer (AXFR)</h3>
    </div>
    <p>AXFR is the original zone transfer protocol that was specified in <a href="https://tools.ietf.org/html/rfc1034">RFC 1034</a> and <a href="https://tools.ietf.org/html/rfc1035">RFC 1035</a> and later further explained in <a href="https://tools.ietf.org/html/rfc5936">RFC 5936</a>. AXFR is done over a TCP connection because a reliable protocol is needed to ensure packets are not lost during the transfer. Using this protocol, the primary DNS server will transfer all of the zone contents to the Secondary DNS server, in one connection, regardless of the serial number. AXFR is recommended for the first zone transfer, when none of the records have been propagated yet; IXFR is recommended after that.</p>
    <div>
      <h3>Incremental Zone Transfer (IXFR)</h3>
    </div>
    <p>IXFR is the more sophisticated zone transfer protocol that was specified in <a href="https://tools.ietf.org/html/rfc1995">RFC 1995</a>. Unlike the AXFR protocol, during an IXFR, the primary server will only send the secondary server the records that have changed since its current version of the zone (based on the serial number). This means that when a Secondary DNS server wants to initiate an IXFR, it sends its current serial number to the primary DNS server. The primary DNS server will then format its response based on previous versions of changes made to the zone. IXFR messages must obey the following pattern:</p><ol><li><p><b><i>Current latest SOA</i></b></p></li><li><p><b><i>Secondary server current SOA</i></b></p></li><li><p><b><i>DNS record deletions</i></b></p></li><li><p><b><i>Secondary server current SOA + changes</i></b></p></li><li><p><b><i>DNS record additions</i></b></p></li><li><p><b><i>Current latest SOA</i></b></p></li></ol><p>Steps 2, 3, 4, 5, and 6 can be repeated any number of times, as each repetition represents one change set of deletions and additions, ultimately leading to a new serial.</p><p>IXFR can be done over UDP or TCP, but again TCP is generally recommended to avoid packet loss.</p>
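<p>The change-set pattern above can be sketched as a small routine that applies each (deletions, additions) pair in order to a local copy of the zone. This is an illustration only: records are modeled as plain strings instead of full resource records, and the function name is our own.</p>

```python
# Sketch: applying ordered IXFR change sets to a local zone copy.
# Each change set is (new_serial, deletions, additions), mirroring
# the repeated steps 2-5 of the IXFR message pattern.

def apply_ixfr(zone: set, current_serial: int, change_sets) -> int:
    """Mutate `zone` in place and return the resulting serial."""
    serial = current_serial
    for new_serial, deletions, additions in change_sets:
        zone.difference_update(deletions)   # DNS record deletions
        zone.update(additions)              # DNS record additions
        serial = new_serial                 # advance to next version
    return serial
```

<p>For example, a single change set that replaces one A record moves the zone from one serial to the next without retransmitting the untouched records, which is the whole point of IXFR over AXFR.</p>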
    <div>
      <h2>How Does Secondary DNS Work at Cloudflare?</h2>
    </div>
    <p>The DNS team loves microservice architecture! When we initially implemented Secondary DNS at Cloudflare, it was done using <a href="https://mesosphere.github.io/marathon/">Mesos Marathon</a>. This allowed us to separate each of our services into several different Marathon apps, individually scaling apps as needed. All of these services live in our core data centers. The following services were created:</p><ol><li><p>Zone Transferer - responsible for attempting IXFR, followed by AXFR if IXFR fails.</p></li><li><p>Zone Transfer Scheduler - responsible for periodically checking zone SOA serials for changes.</p></li><li><p>REST API - responsible for registering new zones and primary nameservers.</p></li></ol><p>In addition to the Marathon apps, we also had an app external to the cluster:</p><ol><li><p>Notify Listener - responsible for listening for NOTIFYs from primary servers and telling the Zone Transferer to initiate an AXFR/IXFR.</p></li></ol><p>Each of these microservices communicates with the others through <a href="https://kafka.apache.org/">Kafka</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6cS9HY6kpiqdrdyD3tqQjI/55d2a781d351cc854f416dd9b1e7d4f3/image8-1.png" />
            
            </figure><p>Figure 1: Secondary DNS Microservice Architecture</p><p>Once the Zone Transferer completes the AXFR/IXFR, it passes the zone through to our Zone Builder, which finally pushes it out to our edge at each of our <a href="https://www.cloudflare.com/network/">200 locations</a>.</p><p>Although this architecture worked well in the beginning, it left us open to many vulnerabilities and scalability issues down the road. As our Secondary DNS product became more popular, it was important that we proactively scaled and reduced the technical debt as much as possible. As with many companies in the industry, Cloudflare has recently migrated all of our core data center services to <a href="https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/">Kubernetes</a>, moving away from individually managed apps and Marathon clusters.</p><p>What this meant for Secondary DNS is that all of our Marathon-based services, as well as our NOTIFY Listener, had to be migrated to Kubernetes. Although this long migration ended up paying off, many difficult challenges arose along the way that required us to come up with unique solutions in order to have a seamless, zero-downtime migration.</p>
    <div>
      <h2>Challenges When Migrating to Kubernetes</h2>
    </div>
    <p>Although the entire DNS team agreed that Kubernetes was the way forward for Secondary DNS, it also introduced several challenges. These challenges arose from a need to properly scale up across many distributed locations while also protecting each of our individual data centers. Since our core does not rely on anycast to automatically distribute requests, as we introduce more customers, it opens us up to denial-of-service attacks.</p><p>The two main issues we ran into during the migration were:</p><ol><li><p>How do we create a distributed and reliable system that makes use of Kubernetes principles while also making sure our customers know which IPs we will be communicating from?</p></li><li><p>When opening up a public-facing UDP socket to the Internet, how do we protect ourselves while also preventing unnecessary spam towards primary nameservers?</p></li></ol>
    <div>
      <h3>Issue 1:</h3>
    </div>
    <p>As was previously mentioned, one form of protection in the Secondary DNS protocol is to only allow certain IPs to initiate zone transfers. There is a fine line between primary servers allowlisting too many IPs and having to frequently update their IP ACLs. We considered several solutions:</p><ol><li><p><a href="https://github.com/nirmata/kube-static-egress-ip">Open source k8s controllers</a></p></li><li><p>Altering <a href="https://en.wikipedia.org/wiki/Network_address_translation">Network Address Translation (NAT)</a> entries</p></li><li><p>Not using k8s for zone transfers</p></li><li><p>Allowlisting all Cloudflare IPs and dynamically updating</p></li><li><p>Proxying egress traffic</p></li></ol><p>Ultimately we decided to proxy our egress traffic from k8s to the DNS primary servers using static proxy addresses. <a href="https://github.com/shadowsocks/shadowsocks-libev">Shadowsocks-libev</a> was chosen as the <a href="https://en.wikipedia.org/wiki/SOCKS">SOCKS5</a> implementation because it is fast, secure, and known to scale. In addition, it can handle both UDP/TCP and IPv4/IPv6.</p>
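<p>For context on what a SOCKS5 proxy actually speaks on the wire, here is a minimal sketch of the two client messages defined by RFC 1928: the method-selection greeting and the CONNECT request a client (such as a zone transferer) would send to reach a primary nameserver. Shadowsocks layers its own encryption on top of this idea, so this is not its exact wire format, just the plain SOCKS5 framing:</p>

```python
# Sketch: the two client-side SOCKS5 messages from RFC 1928.
import socket
import struct

def socks5_greeting() -> bytes:
    # VER=0x05, NMETHODS=1, METHODS=[0x00] (no authentication)
    return b"\x05\x01\x00"

def socks5_connect(host: str, port: int) -> bytes:
    # VER=0x05, CMD=0x01 (CONNECT), RSV=0x00, ATYP=0x01 (IPv4),
    # followed by the 4-byte address and 2-byte big-endian port.
    return (b"\x05\x01\x00\x01"
            + socket.inet_aton(host)
            + struct.pack(">H", port))
```

<p>A client sends the greeting, reads the server's chosen method, then sends the CONNECT request; after the server's reply, the TCP stream is relayed to the target (here, port 53 of a primary).</p>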
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2JcJoxa44FoxHtMrWBXCP5/62805a1a8091ba1597f33614bece420b/image1-8.png" />
            
            </figure><p>Figure 2: Shadowsocks Proxy Setup</p><p>The partnership of k8s and Shadowsocks, combined with a large enough IP range, brings many benefits:</p><ol><li><p>Horizontal scaling</p></li><li><p>Efficient load balancing</p></li><li><p>Primary server ACLs only need to be updated once</p></li><li><p>It allows us to make use of Kubernetes for both the Zone Transferer and the local Shadowsocks proxy</p></li><li><p>The Shadowsocks proxy can be reused by many different Cloudflare services</p></li></ol>
    <div>
      <h3>Issue 2:</h3>
    </div>
    <p>The Notify Listener requires listening on static IPs for NOTIFY messages coming from primary DNS servers. This is mostly a solved problem through the use of <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer">k8s services of type loadbalancer</a>; however, exposing this service directly to the Internet makes us uneasy because of its susceptibility to <a href="https://www.cloudflare.com/learning/ddos/dns-flood-ddos-attack/">attacks</a>. Fortunately, <a href="https://www.cloudflare.com/ddos/">DDoS protection</a> is one of Cloudflare's strengths, which led us to the likely solution of <a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food">dogfooding</a> one of our own products, <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a>.</p><p>Spectrum provides the following features to our service:</p><ol><li><p>Reverse proxy TCP/UDP traffic</p></li><li><p>Filter out malicious traffic</p></li><li><p>Optimal routing from edge to core data centers</p></li><li><p><a href="https://www.cisco.com/c/dam/en_us/solutions/industries/docs/gov/IPV6at_a_glance_c45-625859.pdf">Dual Stack</a> technology</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1S3Ha1uEWxV2bbrzSJY9y2/84a6c819cf54b3ade09d9e36ecad8ac2/image5-3.png" />
            
            </figure><p>Figure 3: Spectrum interaction with Notify Listener</p><p>Figure 3 shows two interesting attributes of the system:</p><ol><li><p><b>Spectrum &lt;-&gt; k8s IPv4 only:</b> This is because our custom k8s load balancer currently only supports IPv4; however, Spectrum has no issue terminating the IPv6 connection and establishing a new IPv4 connection.</p></li><li><p><b>Spectrum &lt;-&gt; k8s routing decisions based on L4 protocol:</b> This is because k8s only supports one of TCP/UDP/SCTP per service of type load balancer. Once again, Spectrum has no issues proxying this correctly.</p></li></ol><p>One of the problems with using an L4 proxy in between services is that source IP addresses get changed to the source IP address of the proxy (Spectrum in this case). Not knowing the source IP address means we have no idea who sent the NOTIFY message, opening us up to attack vectors. Fortunately, Spectrum’s <a href="https://developers.cloudflare.com/spectrum/getting-started/proxy-protocol/">proxy protocol</a> feature is capable of adding custom headers to TCP/UDP packets which contain source IP/port information.</p><p>As we are using <a href="https://github.com/miekg/dns">miekg/dns</a> for our Notify Listener, adding proxy headers to the DNS NOTIFY messages would cause failures in validation at the DNS server level. Instead, we implemented custom <a href="https://github.com/miekg/dns/blob/master/server.go#L156-L162">read and write decorators</a> that do the following:</p><ol><li><p><b>Reader:</b> Extract source address information from inbound NOTIFY messages, and place the extracted information into new DNS records located in the additional section of the message.</p></li><li><p><b>Writer:</b> Remove the additional records from the DNS message on outbound NOTIFY replies, and generate a new reply using the proxy protocol headers.</p></li></ol><p>There is no way to spoof these records, because the server only permits two extra records, one of which is the optional TSIG. Any other records will be overwritten.</p>
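<p>The reader/writer decorator idea can be illustrated with a toy model. The real implementation is Go code built on miekg/dns; here the DNS message is just a dict, and the marker record name is purely hypothetical:</p>

```python
# Sketch: stashing the proxied source address in the additional
# section on the way in, and recovering it on the way out.
# PROXY_RR_NAME is an illustrative placeholder, not a real record.

PROXY_RR_NAME = "proxy-source.invalid."

def reader_decorator(msg: dict, src_ip: str, src_port: int) -> dict:
    """Inbound: record the real source in the additional section."""
    msg.setdefault("additional", []).append(
        {"name": PROXY_RR_NAME, "type": "TXT", "data": f"{src_ip}:{src_port}"}
    )
    return msg

def writer_decorator(msg: dict):
    """Outbound: strip the marker record and recover the address
    the reply must actually be sent to."""
    extras = msg.get("additional", [])
    marker = next((r for r in extras if r["name"] == PROXY_RR_NAME), None)
    if marker is None:
        return msg, None
    msg["additional"] = [r for r in extras if r["name"] != PROXY_RR_NAME]
    ip, port = marker["data"].rsplit(":", 1)
    return msg, (ip, int(port))
```
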
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5pg2MCnr4MUQapXAtQRLeK/365f1bdaec1c60c88ae408b5c0dea128/image4-6.png" />
            
            </figure><p>Figure 4: Proxying Records Between Notifier and Spectrum</p><p>This custom decorator approach abstracts the proxying away from the Notify Listener through the use of the DNS protocol.</p><p>Although knowing the source IP blocks a significant amount of bad traffic, NOTIFY messages can arrive over both UDP and TCP, and UDP is prone to <a href="/the-root-cause-of-large-ddos-ip-spoofing/">IP spoofing</a>. To ensure that the primary servers do not get spammed, we have made the following additions to the Zone Transferer:</p><ol><li><p>Always ensure that the SOA has actually been updated before initiating a zone transfer.</p></li><li><p>Only allow at most one working transfer and one scheduled transfer per zone.</p></li></ol>
    <div>
      <h2>Additional Technical Challenges</h2>
    </div>
    
    <div>
      <h3>Zone Transferer Scheduling</h3>
    </div>
    <p>As shown in figure 1, there are several ways of sending Kafka messages to the Zone Transferer in order to initiate a zone transfer. There is no benefit in having a large backlog of zone transfers for the same zone. Once a zone has been transferred, assuming no more changes, it does not need to be transferred again. This means that we should only have at most one transfer ongoing, and one scheduled transfer at the same time, for any zone.</p><p>If we want to limit our number of scheduled messages to one per zone, this involves ignoring Kafka messages that get sent to the Zone Transferer. This is not as simple as ignoring specific messages in any random order. One of the benefits of Kafka is that it holds on to messages until the user actually decides to acknowledge them, by committing that message’s offset. Since Kafka is just a queue of messages, it has no concept of order other than first in first out (FIFO). If a user is capable of reading from the Kafka topic concurrently, it is entirely possible that a message in the middle of the queue will be committed before a message at the end of the queue.</p><p>Most of the time this isn’t an issue, because we know that one of the concurrent readers has read the message from the end of the queue and is processing it. There is one Kubernetes-related catch to this issue, though: pods are ephemeral. The kube master doesn’t care what your concurrent reader is doing; it will kill the pod, and it’s up to your application to handle it.</p><p>Consider the following problem:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LbPUGXSdf0uzNeI68U06Q/13295276ff697029d06c58b2e4a9df81/image2-2.png" />
            
            </figure><p>Figure 5: Kafka Partition</p><ol><li><p>Read offset 1. Start transferring zone 1.</p></li><li><p>Read offset 2. Start transferring zone 2.</p></li><li><p>Zone 2 transfer finishes. Commit offset 2, essentially also marking offset 1 as committed.</p></li><li><p>Restart pod.</p></li><li><p>Read offset 3. Start transferring zone 3.</p></li></ol><p>If these events happen, zone 1 will never be transferred. It is important that zones stay up to date with the primary servers, otherwise stale data will be served from the Secondary DNS server. The solution to this problem involves the use of a list to track which messages have been read and completely processed. In this case, when a zone transfer has finished, it does not necessarily mean that the Kafka message should be immediately committed. The solution is as follows:</p><ol><li><p>Keep a list of Kafka messages, sorted by offset.</p></li><li><p>When a transfer finishes, remove its message from the list.</p></li><li><p>If that message was the oldest in the list, commit its offset.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1g6Ku6Is2fixeHyiTIoEwC/0bb4026298db401018aefc89b027a717/image9-2.png" />
            
            </figure><p>Figure 6: Kafka Algorithm to Solve Message Loss</p><p>This solution essentially soft commits Kafka messages until we can confidently say that all older messages have been acknowledged. It’s important to note that this only truly works in a distributed manner if the Kafka messages are keyed by zone ID; this ensures that the same zone will always be processed by the same Kafka consumer.</p>
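<p>The soft-commit algorithm can be sketched as a small offset tracker. This is an illustrative model, not the actual Zone Transferer code: it only commits an offset once every earlier in-flight offset for the partition has also finished.</p>

```python
# Sketch: soft-committing Kafka offsets so that committing offset N
# never implicitly acknowledges an unfinished earlier message.
import heapq

class OffsetTracker:
    def __init__(self):
        self.in_flight = []   # min-heap of offsets still processing
        self.done = set()     # finished but not yet committable

    def start(self, offset: int) -> None:
        """A zone transfer for this offset has begun."""
        heapq.heappush(self.in_flight, offset)

    def finish(self, offset: int):
        """A transfer finished; return the highest offset that is now
        safe to commit, or None if an older transfer is still running."""
        self.done.add(offset)
        commit = None
        # Drain from the front while the oldest in-flight offset is done.
        while self.in_flight and self.in_flight[0] in self.done:
            commit = heapq.heappop(self.in_flight)
            self.done.remove(commit)
        return commit
```

<p>Replaying the scenario from figure 5 with this tracker: when zone 2 finishes before zone 1, no offset is committed yet, so a pod restart re-delivers both messages and zone 1 is never lost.</p>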
    <div>
      <h2>Life of a Secondary DNS Request</h2>
    </div>
    <p>Although Cloudflare has a <a href="https://www.cloudflare.com/network/">large global network</a>, as shown above, the zone transferring process does not take place at each of the edge datacenter locations (which would surely overwhelm many primary servers), but rather in our core data centers. In this case, how do we propagate to our edge in seconds? After transferring the zone, there are a couple more steps that need to be taken before the change can be seen at the edge.</p><ol><li><p>Zone Builder - This interacts with the Zone Transferer to build the zone according to what Cloudflare edge understands. This then writes to <a href="/introducing-quicksilver-configuration-distribution-at-internet-scale/">Quicksilver</a>, our super fast, distributed KV store.</p></li><li><p>Authoritative Server - This reads from Quicksilver and serves the built zone.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3INig0h7w1oSks6FwirZls/c99581c1c0019f5fc5dd2a80884dc943/image3-6.png" />
            
            </figure><p>Figure 7: End to End Secondary DNS‌‌</p>
    <div>
      <h2>What About Performance?</h2>
    </div>
    <p>At the time of writing this post, according to <a href="http://dnsperf.com">dnsperf.com</a>, Cloudflare leads in global performance for both <a href="https://www.dnsperf.com/">Authoritative</a> and <a href="https://www.dnsperf.com/#!dns-resolvers">Resolver</a> DNS. Secondary DNS falls under the authoritative DNS category. Let’s break down the performance of each of the different parts of the Secondary DNS pipeline, from the primary server updating its records, to those records being present at the Cloudflare edge.</p><ol><li><p>Primary Server to Notify Listener - Our most accurate measurement is only precise to the second, but we know UDP/TCP communication is likely much faster than that.</p></li><li><p>NOTIFY to Zone Transferer - This is negligible.</p></li><li><p>Zone Transferer to Primary Server - 99% of the time we see ~800ms as the average latency for a zone transfer.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/70uQkuuFtfY5UPVrza1ZpK/ccfa5d2a7c522369d34c8b0671056167/image7-2.png" />
            
            </figure><p>Figure 8: Zone XFR latency</p><p>4. Zone Transferer to Zone Builder - 99% of the time we see ~10ms to build a zone.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3cfNN2ilrDOES8QwG5ok4I/ab8e3bafc66ec05e55530d997f628a68/image11-1.png" />
            
            </figure><p>Figure 9: Zone Build time</p><p>5. Zone Builder to Quicksilver edge: 95% of the time we see less than 1s propagation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1imVuigo0GB7gLug2ISqL7/2ad64a24445b9aa75094d78437710d16/image6-2.png" />
            
            </figure><p>Figure 10: Quicksilver propagation time</p><p>End to End latency: less than 5 seconds on average. Although we have several external probes running around the world to test propagation latencies, they lack precision due to their sleep intervals, location, provider and number of zones that need to run. The actual propagation latency is likely much lower than what is shown in figure 10. Each of the different colored dots is a separate data center location around the world.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5BjcchQWzBqyYbX6ksIYMj/0f167f037efdeaf5cece6a054b3095ab/image10.png" />
            
            </figure><p>Figure 11: End to End Latency</p><p>An additional test was performed manually to get a real-world estimate. The test had the following attributes:</p><ul><li><p>Primary server: NS1</p></li><li><p>Number of records changed: 1</p></li><li><p>Start test timer event: Change record on NS1</p></li><li><p>Stop test timer event: Observe record change at Cloudflare edge using dig</p></li><li><p>Recorded timer value: 6 seconds</p></li></ul>
    <div>
      <h2>Conclusion</h2>
    </div>
    <p>Cloudflare serves 15.8 trillion DNS queries per month, operating within 100ms of 99% of the Internet-connected population. The goal of Cloudflare-operated Secondary DNS is to allow customers with custom DNS solutions, whether on-premise or at another DNS provider, to take advantage of Cloudflare's DNS performance and, more recently, through <a href="/orange-clouding-with-secondary-dns/">Secondary Override</a>, our proxying and security capabilities too. Secondary DNS is currently available on the Enterprise plan; if you’d like to take advantage of it, please let your account team know. For additional documentation on Secondary DNS, please refer to our <a href="https://support.cloudflare.com/hc/en-us/articles/360001356152-How-do-I-setup-and-manage-Secondary-DNS-">support article</a>.</p> ]]></content:encoded>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Kubernetes]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">2W2no7YfwWXkJXtgYtK4CU</guid>
            <dc:creator>Alex Fattouche</dc:creator>
        </item>
        <item>
            <title><![CDATA[Ulaanbaatar, Mongolia]]></title>
            <link>https://blog.cloudflare.com/mongolia/</link>
            <pubDate>Tue, 02 Oct 2018 23:59:00 GMT</pubDate>
            <description><![CDATA[ Whenever you get into a conversation about exotic travel or ponder visiting the four corners of the globe, inevitably you end up discussing Ulaanbaatar in Mongolia.  ]]></description>
            <content:encoded><![CDATA[ <p>Whenever you get into a conversation about exotic travel or ponder visiting the four corners of the globe, inevitably you end up discussing Ulaanbaatar in Mongolia. Travelers want to experience the rich culture and vivid blue skies of Mongolia; a feature which gives the country its nickname of <a href="https://www.travelweekly.com/Asia-Travel/Mongolia-Pristine-beauty-in-the-Land-of-the-Eternal-Blue-Sky">“Land of the Eternal Blue Sky”</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6RKZanh8oMvKp79ISqf2dm/ddc68cb7236d18605143b38389647e41/Ulaanbaatar-Mongolia.png" />
            
            </figure><p>Ulaanbaatar (or Ulan Bator; but shortened to UB by many) is the capital of Mongolia and located nearly a mile above sea level just outside the <a href="https://en.wikipedia.org/wiki/Gobi_Desert">Gobi Desert</a> - a desert that spans a good percentage of Central Asia’s Mongolia. (The rest of the Gobi Desert extends into China.) The country is nestled squarely between Russia to the north and China to the south. It’s also home to some of the richest and most ancient customs and festivals around. It’s those festivals that successfully draw in the tourists who want to experience something quite unique. Luckily, even with all the tourists, Mongolia has managed to keep its local customs, both in the cities and within its nomadic tribes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1mbsB7HMMapvzAaAk7Aajk/6eab4afd146f4b23d0e689c4e625659e/Mongolia_1996_CIA_map.jpg" />
            
            </figure><p><i>via </i><a href="https://commons.wikimedia.org/wiki/File:Mongolia_1996_CIA_map.jpg"><i>Wikipedia</i></a></p><p>History also has drawn explorers and conquerors to and from the region; but more on that later.</p>
    <div>
      <h3>Cloudflare is also drawn into Mongolia</h3>
    </div>
    <p>Any avid reader of our blogs will know that we frequently explain that the expansion of our network provides customers and end-users with both more capacity and less latency. That goal (covering 95% of the Internet population with 10 milliseconds or less of latency) means that Mongolia was seriously on our radar.</p><p>Now that we have a data center in Ulaanbaatar, Mongolia, latency into that blue-sky country is significantly reduced. Prior to this data center going live, we were shipping bits into the country via Hong Kong, a whopping 1,800 miles away (or 50 milliseconds if we talk latency). That's far! We know this new data center is a win-win for both mobile and broadband customers within the country and for Cloudflare customers as a whole.</p>
    <div>
      <h3>Just how did we get Cloudflare into Mongolia?</h3>
    </div>
    <p>Ulaanbaatar is city number 154 on Cloudflare’s network. Our expansion into cities like Ulaanbaatar doesn’t just happen instantly; it takes many teams within Cloudflare to deploy successfully in a place like this.</p><p>However, before deploying, we need to negotiate a place to deploy into. A new city requires a secure data center for us to build into. A bandwidth partner is also required. We need access to the local networks, and we also need to acquire cache-fill bandwidth in order to operate our CDN. Once we have those items negotiated, we can focus on the next steps. Any site we build has to match our own stringent security standards (we are <a href="https://www.cloudflare.com/learning/privacy/what-is-pci-dss-compliance/">PCI DSS compliant</a> – hence all our data centers also need to be PCI DSS compliant). That’s a paperwork process, which surprisingly takes longer than most other stages (because we care about security).</p><p>Then logistics kicks in. A BOM (Bill of Materials) is created. Serial numbers are recorded. Power plugs are chosen. Fun fact: Cloudflare data centers are nearly all identical, except for the power cables. While we live in a world where fiber optic cables and connectors are universal, independent of location (or speed in some cases), the power connections we receive for our data centers vary widely as we expand around the globe.</p><p>The actual shipping is possibly the most interesting part of the logistics process. Getting a few pallets of hardware strapped up and ready to move is only a small part of the process. Paperwork again becomes the all-powerful issue. Each country has its own customs and import process, each shipping company has its own twist on how to process things, and each shipment needs to be coordinated with a receiving party. Our logistics team pulls off this miracle for new sites, upgrades for existing sites, and replacement parts for existing sites, all while sometimes calmly listening to mundane on-hold music from around the globe.</p><p>Then our hardware arrives! Seriously, this is a biggie, and those around the office who follow these new city builds always celebrate on those days. The logistics team has done their part; now it’s time for the deployment team to kick in.</p><p>The deployment team’s main goal is to get hardware racked and stacked in a site where (in most cases) we contract out the remote hands to do this work. Sometimes we send people to a site to build it; however, that’s not a scalable process, and hence we use local remote-hands contractors to do this heavy lifting and cabling. There are diagrams, there are instructions, there are color-coded cables (because the right cable needs to go into the right socket). Depending on the size of the build, it can be just a day’s work or up to a week’s worth. We vary our data center sizes based on the capacity needs of each city. Once everything is racked and stacked, there is one more job for the Infrastructure team: getting the initial private network connection enabled and secured. That single connection provides us with the ability to continue to the next step: actually setting up the network and servers at the new site.</p><p>Every new data center site is shipped with zero configuration loaded into its network hardware and compute servers. They all ship raw, with no Cloudflare knowledge embedded in them. This is by design. The network team’s first goal is to configure the routers and switches, which is mainly a bootstrap process that lets the hardware phone home and securely request its full configuration. We have previously written about our extensive <a href="https://www.cloudflare.com/network-automation-at-scale-ebook/">network automation</a> methods. In the case of a new site, it’s not that different. Once the site can communicate back home, it’s clear to the software that its configuration is out of date. Updates are pushed to the site and <a href="https://www.cloudflare.com/network-services/solutions/network-monitoring-tools/">network monitoring</a> is automatically enabled.</p><p>But let's not paint a perfectly rosy picture. There can still be networking issues. Just one is worth pointing out, as it’s a recurring issue that plagues the industry globally: fiber optic cables can sometimes be plugged in with their receive and transmit sides reversed. It’s a 50:50 chance of being right. Sometimes it just amazes us that we can’t get this fixed; but a quick swap of the two connectors and we are back in business!</p>
    <div>
      <h3>Those explorers and conquerors</h3>
      <a href="#those-explorers-and-conquerors">
        
      </a>
    </div>
    <p>No conversation about Mongolia would be valid unless we discuss Genghis Khan. Back in the 13th century, Genghis Khan founded the Mongol Empire. He unified the tribes of what is now Mongolia (and beyond). He established the largest contiguous land empire in history and is well described both online and via various TV documentaries (The History Channel doesn’t skimp when it comes to <a href="https://www.history.com/topics/china/genghis-khan">covering him</a>). Genghis Khan was a name of honor that he didn’t receive until 1206. Before that, he was simply named “Temujin”.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cvUCdM6c6ZEFTMwfsXzZc/5d751c72f8d759499f619c88273239ba/35367135654_1251f9fbf8_o.jpg" />
            
            </figure><p><a href="https://www.flickr.com/photos/120420083@N05/35367135654/"><i>Photo</i></a><i> by </i><a href="https://www.flickr.com/photos/120420083@N05/"><i>SarahTz</i></a><i> CC by/</i><a href="https://creativecommons.org/licenses/by/2.0/"><i>2.0</i></a></p><p>Around 30 miles outside Ulaanbaatar is the equestrian statue of Genghis Khan on the banks of the Tuul River in <a href="https://en.wikipedia.org/wiki/Gorkhi-Terelj_National_Park">Gorkhi Terelj National Park</a>. Pictured above, this statue is 131 feet tall and built from 250 tons of steel.</p>
    <div>
      <h3>Meanwhile back in Mongolia in present time</h3>
      <a href="#meanwhile-back-in-mongolia-in-present-time">
        
      </a>
    </div>
    <p>We get to announce our new city (and country) data center during a very special time. The Golden Eagle Festival takes place during the first weekend of October (that’s October 6 and 7 this year). It’s a test of a hunter’s speed and agility. In this case the hunters (nomads of Mongolia) are practicing an ancient technique of using Golden Eagles to hunt. It takes place in the far west of the country in the Altai Mountains.</p><p>The most famous festival in Mongolia is the Naadam Festival in mid-July. There is so much going on within that festival, all of it a celebration of Mongolian and nomadic culture. The festival celebrates the local sports (wrestling, archery, and horse racing) along with plenty of arts. The opening ceremony can be quite elaborate.</p><p>When discussing Mongolia, travelers nearly always want to make sure their itineraries overlap with at least one of these festivals!</p>
    <div>
      <h3>Cloudflare continues to expand globally</h3>
      <a href="#cloudflare-continues-to-expand-globally">
        
      </a>
    </div>
    <p>Last week was Cloudflare’s Birthday Week and we announced many services and products. Rest assured that everything we announced is instantly available in Ulaanbaatar, Mongolia (data center city 154), just like it’s available everywhere else on Cloudflare’s network.</p><p>One final piece of trivia regarding our Ulaanbaatar data center: with Ulaanbaatar live, we now have data centers covering the letters <b>A</b> through <b>Z</b> (from Amsterdam to Zurich); with <b>U</b> added, every letter is now represented.</p><p>If you like what you’ve read and want to come join our infrastructure team, our logistics team, our network team, our SRE team, or any of the teams that help with these global expansions, then we would love to see your resume or CV. Look <a href="https://www.cloudflare.com/careers">here</a> for job openings.</p> ]]></content:encoded>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[Asia]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <guid isPermaLink="false">ek9tlQYUye5NB9Iy0IvoI</guid>
            <dc:creator>Martin J Levy</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we made our DNS stack 3x faster]]></title>
            <link>https://blog.cloudflare.com/how-we-made-our-dns-stack-3x-faster/</link>
            <pubDate>Tue, 11 Apr 2017 09:28:45 GMT</pubDate>
            <description><![CDATA[ Cloudflare is now well into its 6th year and providing authoritative DNS has been a core part of infrastructure from the start. We’ve since grown to be the largest and one of the fastest managed DNS services on the Internet, hosting DNS for nearly 100,000 of the Alexa top 1M sites. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare is now well into its 6th year, and providing authoritative DNS has been a core part of our infrastructure from the start. We’ve since grown to be the largest and one of the fastest managed DNS services on the Internet, hosting DNS for nearly 100,000 of the <a href="https://www.datanyze.com/market-share/dns/Alexa%20top%201M/">Alexa top 1M sites</a> and over 6 million other web properties – or DNS zones.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6k94yrsuPlbsjgsmuf9tVF/e6f7de37d5e1a03b66333b72cd60092e/8159769501_c2026331b8_k.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC-BY 2.0</a> <a href="https://flic.kr/p/dr3Xc4">image</a> by <a href="https://www.flickr.com/photos/jurvetson/">Steve Jurvetson</a></p><p>Today Cloudflare’s DNS service answers around 1 million queries per second – not including attack traffic – via a global anycast network. Naturally, as a growing startup, we found that the technology we used to handle tens or hundreds of thousands of zones a few years ago had become outdated over time and couldn't keep up with the millions we have today. Last year we decided to replace two core elements of our DNS infrastructure: the part of our DNS server that answers authoritative queries and the data pipeline which takes changes made by our customers to DNS records and distributes them to our edge machines across the globe.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9rXGrQRlGmIWAfKr8oM2D/f85999d6398fba0c3627c9ea7c6746a6/data-flow-3.png" />
            
            </figure><p>The rough architecture of the system can be seen above. We store customer DNS records and other origin server information in a central database, convert the raw data into a format usable by our edge in the middle, and then distribute it to our <a href="https://www.cloudflare.com/network/">&gt;100 data centers</a> (we call them PoPs - Points of Presence) using a KV (key/value) store.</p><p>The queries are served by a custom DNS server, rrDNS, that we’ve been using and developing for several years. In the early days of Cloudflare, our DNS service was built on top of PowerDNS, but that was phased out and replaced by rrDNS in 2013.</p><p>The Cloudflare DNS team owns two elements of the data flow: the data pipeline itself and rrDNS. The first goal was to replace the data pipeline with something entirely new as the current software was starting to show its age; as any &gt;5 year old infrastructure would. The existing data pipeline was originally built for use with PowerDNS, and slowly evolved over time. It contained many warts and obscure features because it was built to translate our DNS records into the PowerDNS format.</p>
    <div>
      <h3>A New Data Model</h3>
      <a href="#a-new-data-model">
        
      </a>
    </div>
    <p>In the old system, the data model was fairly simple. We’d store the DNS records roughly in the same structure that they are represented in our UI or API: one entry per resource record (RR). This meant that the data pipeline only had to perform fairly rudimentary encoding tasks when generating the zone data to be distributed to the edge.</p><p>Zone metadata and RRs were encoded using a mix of JSON and Protocol Buffers, though we weren’t making particularly good use of the schematized nature of the protocols, so the schemas were very bloated and the resulting data ended up being larger than necessary. Not to mention that as the number of total RRs in our database headed north of 100 million, these small differences in encoding made a significant difference in aggregate.</p><p>It’s worth remembering here that <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> doesn’t really operate on a per-RR basis when responding to queries. You query for a name and a type (e.g. <code>example.com</code> and <code>AAAA</code>) and you’ll be given an RRSet, which is a <i>collection</i> of RRs. The old data format had RRSets broken out into multiple RR entries (one key per record), which typically meant multiple roundtrips to our KV store to answer a single query. We wanted to change this and group data by RRSet so that a single request could be made to the KV store to retrieve all the data needed to answer a query. Because Cloudflare optimizes heavily for DNS performance, multiple KV lookups were limiting our ability to make rrDNS go as fast as possible.</p><p>In a similar vein, for lookups like A/AAAA/CNAME we decided to group the values into a single “address” key instead of one key per RRSet. This further avoids having to perform extra lookups in the most common cases. 
Squishing keys together also helps reduce memory usage of the cache we use in front of the KV store, since we’re storing more information against a single cache key.</p><p>After settling on this new data model, we needed to figure out how to serialize the data and pass it to the edge. As mentioned, we were previously using a mix of JSON and Protocol Buffers, and we decided to replace this with a purely <a href="http://msgpack.org/">MessagePack</a>-based implementation.</p>
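<p>To make the single-roundtrip idea concrete, here is a minimal Go sketch of grouping a whole RRSet (and a combined “address” entry) under one key. The key names and the map standing in for the KV store are invented for illustration; this is not Cloudflare’s actual key schema.</p>

```go
package main

import "fmt"

// A hypothetical KV layout: the old model stored one key per RR,
// so answering a query could mean several roundtrips; grouping a
// whole RRSet (or the combined address data for A/AAAA/CNAME)
// under a single key means one request retrieves everything.
var kv = map[string][]string{
	// one key holds the entire AAAA RRSet for the name
	"example.com.:AAAA": {"2001:db8::1", "2001:db8::2"},
	// A/AAAA/CNAME values squashed into a single "address" key
	"example.com.:addr": {"192.0.2.1", "2001:db8::1"},
}

// lookupRRSet fetches everything needed to answer a query
// for (name, type) in a single KV request.
func lookupRRSet(name, rrtype string) []string {
	return kv[name+":"+rrtype]
}

func main() {
	fmt.Println(lookupRRSet("example.com.", "AAAA"))
}
```

<p>The same grouping also means fewer, larger cache entries in front of the KV store, which is where the memory savings mentioned above come from.</p>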
    <div>
      <h4>Why MessagePack?</h4>
      <a href="#why-messagepack">
        
      </a>
    </div>
    <p>MessagePack is a binary serialization format that is typed, but does not have a strict schema built into the format. In this regard, it can be considered a little like JSON. For both the reader and the writer, extra fields can be present or absent and it’s up to your application code to compensate.</p><p>In contrast, Protocol Buffers (or other formats like <a href="https://capnproto.org/">Cap’n Proto</a>) require a schema for data structures defined in a language agnostic format, and then generate code for the specific implementation. Since DNS already has a large structured schema, we didn’t want to have to duplicate all of this schema in another language and then maintain it. In the old implementation with Protocol Buffers, we’d not properly defined schemas for all DNS types – to avoid this maintenance overhead – which resulted in a very confusing data model for rrDNS.</p><p>When looking for new formats we wanted something that would be fast, easy to use and that could integrate easily into the code base and libraries we were already using. rrDNS makes heavy use of the <a href="https://github.com/miekg/dns">miekg/dns</a> Go library which uses a large collection of structs to represent each RR type, for example:</p>
            <pre><code>type SRV struct {
	Hdr      RR_Header
	Priority uint16
	Weight   uint16
	Port     uint16
	Target   string `dns:"domain-name"`
}</code></pre>
            <p>When decoding the data written by our pipeline in rrDNS we need to convert the RRs into these structs. As it turns out, the <a href="https://github.com/tinylib/msgp">tinylib/msgp</a> library we had been investigating has a rather nice set of code generation tools. This would allow us to auto-generate efficient Go code from the struct definitions without having to maintain another schema definition in another format.</p><p>This meant we could work with the miekg RR structs (which we were already familiar with from rrDNS) in the data pipeline, serialize them straight into binary data, and then deserialize them again at the edge straight into a struct we could use. We didn't need to worry about mapping from one set of structures to another using this technique, which simplified things greatly.</p><p>MessagePack also performs incredibly well compared to other formats on the market. Here’s an excerpt <a href="https://github.com/alecthomas/go_serialization_benchmarks#results">from a Go serialization benchmarking test</a>; we can see that on top of the other reasons MessagePack benefits our stack, it outperforms pretty much every other viable cross-platform option.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3AlursaqXqiLWWVYqmCZV/7b618379c494fd0ec48f5b8dc1de3e7c/unmarshal.png" />
            
            </figure><p>One pleasant surprise after switching to this new model was that we actually reduced the space required to store the data at the edge by around 9x, which was a significantly higher saving compared to our initial estimates. It just goes to show how much impact a bloated data model can have on a system.</p>
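<p>For a feel of why the binary encoding is so compact, here is a toy Go sketch that hand-encodes two of MessagePack's simplest types, following the published format specification. This is purely illustrative: the real pipeline uses code generated by tinylib/msgp, not hand-rolled encoders like these.</p>

```go
package main

import "fmt"

// encodeFixStr encodes a string shorter than 32 bytes as a
// MessagePack fixstr: one header byte (0xa0 | length) followed
// by the raw bytes. "example.com" takes 12 bytes here versus
// 13 for the JSON string "example.com" with its quotes.
func encodeFixStr(s string) []byte {
	return append([]byte{0xa0 | byte(len(s))}, s...)
}

// encodeUint16 encodes a uint16 as marker byte 0xcd followed by
// the value in big-endian order: 3 bytes total, versus 4 ASCII
// digits in JSON for a value like 8080.
func encodeUint16(v uint16) []byte {
	return []byte{0xcd, byte(v >> 8), byte(v)}
}

func main() {
	fmt.Println(len(encodeFixStr("example.com")), len(encodeUint16(8080)))
}
```

<p>The savings per field are small, but multiplied across hundreds of millions of RRs they add up, which is consistent with the aggregate reduction described above.</p>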
    <div>
      <h3>A New Data Pipeline</h3>
      <a href="#a-new-data-pipeline">
        
      </a>
    </div>
    <p>Another very important feature of Cloudflare’s DNS is our ability to propagate zone changes around the globe in a matter of seconds, not minutes or hours. Our existing pipeline was struggling to keep up with the growing number of zones, and with changes to at least 5 zones each second even at the quietest of times, we needed something new.</p>
    <div>
      <h4>Global distribution is hard</h4>
      <a href="#global-distribution-is-hard">
        
      </a>
    </div>
    <p>For a while now we’ve had end-to-end monitoring in place that lets us visualize propagation times across the globe. The graph below is taken from that monitoring: it makes changes to DNS via our API and watches for the change from various probes around the world. Each dot on the graph represents a particular probe talking to one of our PoPs, and the delay is tracked as the time it took for a change made via our API to be visible externally.</p><p>Due to various layers of caches – both inside and outside of our control – we see some banding on 10s intervals under 1 minute, and it fluctuates all the time. For monitoring and alerting of this nature, the granularity we have here is sufficient, but it’s something we’d definitely like to improve. In normal operation, new DNS data is actually available to 99% of our global PoPs in under 5s.</p><p>In this time frame we can see there were a couple of incidents where delays of a few minutes were visible for a small number of PoPs due to network connectivity, but generally all probes reported stable propagation times.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3b39QD2BNytbKhSVw2eFVp/f9e4a87160f66595c3b2de54984d0ef5/drift-ok-2.png" />
            
            </figure><p>In contrast, here’s a graph of the old data pipeline for the same period. We can see how the graph represents the growing delay in visible changes for all PoPs at any given time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XyhiYnT2kz0pM2PJygck4/8f4a2a772597b3a64dd04110e1e1611d/drift-delayed-1.png" />
            
            </figure><p>With a new data model designed and ready to go, one that better matched our query patterns, we set about implementing a new service to pick up changes to our zones in the central data store, do any needed processing, and send the resulting output to our KV store.</p><p>The new service (written in our favourite language Go) has been running in production since July 2016, and we’ve now migrated over <b>99%</b> of Cloudflare customer zones to it. If we exclude incidents where congestion across the Internet affects connectivity to or from a particular location, the new pipeline itself has experienced zero delays thus far.</p>
    <div>
      <h4>Authoritative rrDNS v2</h4>
      <a href="#authoritative-rrdns-v2">
        
      </a>
    </div>
    <p>rrDNS is a modular application, which allows us to write different “filters” that can hand off processing of different types of queries to different code. The Authoritative filter is responsible for taking an incoming DNS query, looking up the zone the query name belongs to, and performing all relevant logic to find the RRSet to send back to the client.</p><p>Since we’ve completely revised the underlying DNS data model at our edge, we needed to make significant changes to the “Authoritative Filter” in rrDNS. This too is an old area of the code base that hasn’t significantly changed in a number of years. As with any ageing code base, this brings a number of challenges, so we opted to re-write the filter completely. This allowed us to redesign it from the ground up on our new data model, keeping a keen eye on performance and better suiting the scale and shape of our DNS traffic today. Starting fresh also made it much easier to build in good development practices, such as high test coverage and better documentation.</p><p>We’ve been running the v2 version of the authoritative filter in production alongside the existing code since the latter months of 2016, and it has already played a key role in the DNS aspects of our new <a href="https://www.cloudflare.com/load-balancing/">load balancing product</a>.</p><p>The results with the new filter have been great: we’re able to respond to DNS queries on average 3x faster than before, which is excellent news for our customers and improves our ability to mitigate large DNS attacks. We can see here that as the percentage of zones migrated increased, we saw a significant improvement in our average response time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/m0Z5sDnulTMQHBtjLDLm7/89c4749f7a7855ecf1ad8330957b4869/grafana-rrdns-response.png" />
            
            </figure>
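<p>The zone lookup step described above (finding which zone a query name belongs to) can be sketched as a longest-suffix match over the labels of the query name. This is a simplified illustration with a plain map standing in for the real zone set, not rrDNS's actual implementation:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// findZone walks the query name label by label, returning the
// longest suffix that matches a zone we are authoritative for,
// or "" if none matches.
func findZone(qname string, zones map[string]bool) string {
	labels := strings.Split(strings.TrimSuffix(qname, "."), ".")
	for i := range labels {
		// candidate suffixes: www.example.com. -> example.com. -> com.
		candidate := strings.Join(labels[i:], ".") + "."
		if zones[candidate] {
			return candidate
		}
	}
	return ""
}

func main() {
	zones := map[string]bool{"example.com.": true}
	fmt.Println(findZone("www.example.com.", zones))
}
```

<p>Once the owning zone is found, the filter can issue the single RRSet lookup for the query name and type within that zone.</p>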
    <div>
      <h4>Replacing the wings while flying</h4>
      <a href="#replacing-the-wings-while-flying">
        
      </a>
    </div>
    <p>The most time consuming part of the project was migrating customers from the old system to something entirely new, without impacting customers or anybody noticing what we were doing. Achieving this involved a significant effort from a variety of people in our customer facing, support and operations teams. Cloudflare has many offices in different time zones – London, San Francisco, Singapore and Austin – so keeping everyone in sync was key to our success.</p><p>Already, as a part of the release process for rrDNS, we automatically sample and replay production queries against existing and upcoming code to detect unexpected differences, so naturally we decided to extend this idea for our migration. For any zone to pass the migration test, we compared the possible answers for the entire zone from the old system and the new system. Just one failure would result in the tool skipping the zone.</p><p>This allowed us to iteratively test the migration of zones and fix issues as they arose, keeping releases simple and regular. We chose not to do a single – and very scary – switch away from the old system, but to run both in parallel and slowly move zones over while keeping them in sync. This meant we could quickly migrate zones back if something unexpected happened.</p><p>Once we got going we were safely migrating zones at several hundred thousand per day, and we kept a close eye on how far we were from our initial goal of 99%. The last mile is still in progress, as there is often an element of customer engagement for some complex configurations that need attention.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/54cH2meY3F08ohfUhEtcp0/0c9b43f28c798738b67d49ca2b2f7a11/migrated-zones-1.png" />
            
            </figure>
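<p>The per-zone comparison can be sketched as below. The function names and the stand-ins for the two systems are hypothetical; the sketch only illustrates the rule described above, where a single differing answer fails the whole zone and the tool skips it.</p>

```go
package main

import "fmt"

// answerFunc stands in for querying either the old or the new
// system: it takes a (name, type) pair and returns the answer.
type answerFunc func(name, rrtype string) string

// zoneMatches replays every (name, type) pair in a zone against
// both systems; one differing answer fails the entire zone.
func zoneMatches(queries [][2]string, oldSys, newSys answerFunc) bool {
	for _, q := range queries {
		if oldSys(q[0], q[1]) != newSys(q[0], q[1]) {
			return false
		}
	}
	return true
}

func main() {
	same := func(n, t string) string { return n + "/" + t + "=192.0.2.1" }
	queries := [][2]string{{"example.com.", "A"}, {"www.example.com.", "A"}}
	fmt.Println(zoneMatches(queries, same, same))
}
```

<p>Running both systems in parallel and gating each zone on a comparison like this is what made it safe to migrate zones back at any point.</p>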
    <div>
      <h4>What did we gain?</h4>
      <a href="#what-did-we-gain">
        
      </a>
    </div>
    <p>Replacing a piece of infrastructure this core to Cloudflare took significant effort from a large variety of teams. So what did we gain?</p><ul><li><p>Average of 3x performance boost in code handling DNS queries</p></li><li><p>Faster and more consistent updates to DNS data around the globe</p></li><li><p>A much more robust system for SREs to operate and engineers to maintain</p></li><li><p>Consolidated feature-set based on today’s requirements, and better documentation of edge case behaviours</p></li><li><p>More test coverage, better metrics and higher confidence in our code, making it safer to make changes and develop our DNS products</p></li></ul><p>Now that we’re able to process our customers’ DNS more quickly, we’ll soon be rolling out support for a few new RR types and some other exciting new things in the coming months.</p><p><b>Does solving these kinds of technical and operational challenges excite you? Cloudflare is always hiring for talented specialists and generalists within our </b><a href="https://www.cloudflare.com/careers/jobs/?department=Engineering"><b>Engineering</b></a><b>, </b><a href="https://www.cloudflare.com/careers/jobs/"><b>Technical Operations</b></a><b> and </b><a href="https://www.cloudflare.com/careers"><b>other teams</b></a><b>.</b></p> ]]></content:encoded>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <guid isPermaLink="false">2sobSP2PzNwwQzN32CQsML</guid>
            <dc:creator>Tom Arnfeld</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare at Google NEXT 2017]]></title>
            <link>https://blog.cloudflare.com/cloudflare-at-google-next-2017/</link>
            <pubDate>Wed, 08 Mar 2017 00:44:00 GMT</pubDate>
            <description><![CDATA[ The Cloudflare team is headed to Google NEXT 2017 from March 8th - 10th at Moscone Center in San Francisco, CA. We’re excited to meet with customers, partners, and new friends.

 ]]></description>
            <content:encoded><![CDATA[ <p>The Cloudflare team is headed to <a href="https://cloudnext.withgoogle.com/">Google NEXT 2017</a> from March 8th - 10th at Moscone Center in San Francisco, CA. We’re excited to meet with customers, partners, and new friends.</p><p>Come learn about Cloudflare’s recent partnership with Google Cloud Platform (GCP) through their <a href="https://cloud.google.com/interconnect/cdn-interconnect">CDN Interconnect Program</a>. Cloudflare offers performance and security to <b>over 25,000 Google Cloud Platform customers</b>. The CDN Interconnect program allows Cloudflare’s servers to establish high-speed interconnections with Google Cloud Platform at various locations around the world, accelerating dynamic content while reducing bandwidth and egress billing costs.</p><p>We’ll be at booth C7 discussing the benefits of Cloudflare, our partnership with Google Cloud Platform, and handing out Cloudflare SWAG. In addition, our Co-Founder, Michelle Zatlyn, will be presenting “<a href="https://cloudnext.withgoogle.com/schedule#target=a-cloud-networking-blueprint-for-securing-your-workloads-6c6de36a-59a5-4e6f-b434-f57653ffc997">A Cloud Networking Blueprint for Securing Your Workloads</a>” on Thursday, March 9th from 11:20 AM to 12:20 PM at Moscone West, Room 2005.</p>
    <div>
      <h3>What is Google Cloud Platform’s CDN Interconnect Program?</h3>
      <a href="#what-is-google-cloud-platforms-cdn-interconnect-program">
        
      </a>
    </div>
    <p>Google Cloud Platform’s CDN Interconnect program allows select <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN providers</a> to establish direct interconnect links with Google’s edge network at various locations. Customers egressing network traffic from Google Cloud Platform through one of these links will benefit from the direct connectivity to the CDN providers and will be billed according to the lower Google Cloud Interconnect pricing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4AFurp2fQQxJSJ3HTeoJCN/5760b3b4ba34052a5c975a6a3c77fe5f/Screen-Shot-2017-03-07-at-12.34.23-PM-1.png" />
            
            </figure><p>Joint customers of Cloudflare and Google Cloud Platform can <b>expect bandwidth savings of up to 75% and receive </b><a href="https://cloud.google.com/interconnect/cdn-interconnect#pricing"><b>discounted egress pricing</b></a>. Egress traffic is traffic flowing from Google Cloud Platform servers to Cloudflare’s servers. The high-speed interconnections between GCP and Cloudflare speed up the delivery of dynamic content for visitors.</p>
    <div>
      <h3>How does the CDN Interconnect program work?</h3>
      <a href="#how-does-the-cdn-interconnect-program-work">
        
      </a>
    </div>
    <p>As part of this program, 41 Cloudflare data centers are directly connected to Google Cloud Platform’s infrastructure. When one of these Cloudflare data centers requests content from a Google Cloud Platform origin, it’s routed through a high-performance interconnect instead of the public Internet. This dramatically reduces latency for origin requests, and it also enables discounted Google Cloud Platform egress pricing in the US, Europe and Asia regions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6sSNOri8N1xrlCbUVRTgLc/f807d5d8514dae8b1eac1e26846291a5/google-cloud-how-it-works-1.svg" />
            
            </figure>
    <div>
      <h3>Joint Customer Stories</h3>
      <a href="#joint-customer-stories">
        
      </a>
    </div>
    <p>Quizlet and Discord, two prominent joint customers of Cloudflare and Google Cloud Platform, have shared their performance, security, and cost-savings stories.</p>
    <div>
      <h4>Discord</h4>
      <a href="#discord">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/case-studies/discord">Discord</a> is a free voice and text chat app designed specifically for gaming. In one year, Discord grew from 25,000 concurrent users to 2.4 million, a 9,000 percent growth. Discord’s 25 million registered users send 100 million messages per day across the platform, requiring a global presence with tremendous amounts of network throughput. As Discord experiences explosive growth, they're thankful Cloudflare helps keep bandwidth &amp; hardware costs down and web performance high.</p><ul><li><p>Saving $100,000 on annual hardware costs</p></li><li><p>Saving $100,000 monthly on Google Cloud Network Egress bill</p></li><li><p>Secure traffic even with spikes of websockets events up to 2 million/second</p></li></ul><p>Learn more about Discord’s use of Cloudflare on Google Cloud Platform: <a href="https://www.cloudflare.com/case-studies/discord">https://www.cloudflare.com/case-studies/discord</a></p>
    <div>
      <h4>Quizlet</h4>
      <a href="#quizlet">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/case-studies/quizlet">Quizlet</a> is the world’s largest student and teacher online learning community. Every month, over 20 million active learners from 130 countries practice and master more than 140 million study sets of content on every conceivable subject and topic. Quizlet’s <a href="http://www.alexa.com/siteinfo">Alexa ranking</a> is 588 globally, and 104 in the United States, making it one of the most highly trafficked websites.</p><p>Quizlet receives performance and security benefits, while saving more than 50 percent on their Google Cloud networking egress bill by using Cloudflare.</p><ul><li><p>Save 50% on monthly Google Cloud Network Egress Bill</p></li><li><p>Reduced daily bandwidth use by 76 percent (or over 10 Tb)</p></li></ul><p>Learn more about Quizlet’s use of Cloudflare on Google Cloud Platform: <a href="https://www.cloudflare.com/case-studies/quizlet">https://www.cloudflare.com/case-studies/quizlet</a></p>
    <div>
      <h3>Presentation by Cloudflare Co-Founder Michelle Zatlyn</h3>
      <a href="#presentation-by-cloudflare-co-founder-michelle-zatlyn">
        
      </a>
    </div>
    <p>Cloudflare’s Co-Founder, Michelle Zatlyn, will be presenting alongside Google and Palo Alto Networks, in a talk titled “<a href="https://cloudnext.withgoogle.com/schedule#target=a-cloud-networking-blueprint-for-securing-your-workloads-6c6de36a-59a5-4e6f-b434-f57653ffc997">A Cloud Networking Blueprint for Securing Your Workloads</a>”.</p>
    <div>
      <h4>Date &amp; Time</h4>
      <a href="#date-time">
        
      </a>
    </div>
    <p>Thursday, March 9th | 11:20 AM - 12:20 PM | Moscone West, Room 2005</p>
    <div>
      <h4>Abstract</h4>
      <a href="#abstract">
        
      </a>
    </div>
    <p>Securing your workloads in the cloud requires shifting away from the traditional “perimeter” security to a “pervasive, hierarchical, scalable” security model. In this session, we discuss cloud networking best practices for securing enterprise and cloud-native workloads on Google Cloud Platform. We describe a network security blueprint that covers securing your virtual networks (VPCs), DDoS protection, using third-party security appliances and services, and visibility and analytics for your deployments. We also highlight Google’s experiences in delivering its own services securely and future trends in cloud network security.</p> ]]></content:encoded>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[Life at Cloudflare]]></category>
            <category><![CDATA[Growth]]></category>
            <guid isPermaLink="false">eBF3xKxxekq2jCbfemoOq</guid>
            <dc:creator>Brady Gentile</dc:creator>
        </item>
        <item>
            <title><![CDATA[CloudFlare's 2012: Happy New Year!]]></title>
            <link>https://blog.cloudflare.com/cloudflares-2012/</link>
            <pubDate>Tue, 01 Jan 2013 02:14:00 GMT</pubDate>
            <description><![CDATA[ For about half the world (and about half of CloudFlare's data centers) it's already 2013. As our team (most of whom are in San Francisco) gets ready to celebrate New Year's Eve, we wanted to quickly look back on CloudFlare's 2012. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>For about half the world (and about half of CloudFlare's data centers) it's already 2013. As our team (most of whom are in San Francisco) gets ready to celebrate New Year's Eve, we wanted to quickly look back on CloudFlare's 2012. Here are some stats that tell the story of our last year:</p><ul><li><p>Page views served by CloudFlare in 2012: 679,237,127,874</p></li><li><p>Hits served via CloudFlare's network in 2012: 3,691,532,490,107</p></li><li><p>Bandwidth served from CloudFlare's network in 2012: 765 Petabytes</p></li><li><p>Bandwidth we saved our customers in 2012: 436 Petabytes</p></li><li><p>New sites that signed up for CloudFlare in 2012: 573,177</p></li><li><p>Threats stopped by CloudFlare in 2012: 281,701,624,076</p></li><li><p>New CloudFlare data centers added in 2012: 10</p></li></ul><p>Over 2012, we saw more than 720 million unique IPs connect to CloudFlare's network. Our best estimate is that behind each of those IPs there are 1.8 Internet users. In other words, we saw approximately 1.3 billion Internet users pass through CloudFlare's network in 2012. That's well over half of the Internet's total population of users.</p><p>We also saved a ton of time that those Internet users would have otherwise spent waiting for websites to load. If you add up all the time that people would have spent waiting for websites to load had CloudFlare not existed in 2012, you get more than 891 lifetimes worth of time saved. We're really proud of that.</p><p>We have a number of improvements, new features, new data centers, and other surprises lined up for 2013. From everyone at CloudFlare, Happy New Year! Here's to an even faster, safer Internet in the year ahead.</p> ]]></content:encoded>
            <category><![CDATA[Save The Web]]></category>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <guid isPermaLink="false">DDNMBi0oCTv6rDyAH5QdV</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[CloudFlare Is a Community]]></title>
            <link>https://blog.cloudflare.com/cloudflare-is-a-community/</link>
            <pubDate>Fri, 17 Feb 2012 22:46:00 GMT</pubDate>
            <description><![CDATA[ Today, CloudFlare adds more than 250 customers every ~6 hours, but getting our first 250 took several months and a lot of faith. When we started working on CloudFlare, an employee at a major CDN company warned us that we had no idea all the crazy things people did with their websites. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, CloudFlare adds more than 250 new customers every six hours or so, but getting our first 250 took several months and a lot of faith from our earliest adopters. When we started working on CloudFlare, an employee at a major CDN company warned us that we had no idea all the crazy things people did with their websites. He wasn't kidding. For the first sites that signed up, we usually made them slower and offered little additional protection. But, over time, and with the patience of our first users, we incorporated everything we learned from each new site and built something great.</p>
    <div>
      <h3>Together We Grow Stronger</h3>
      <a href="#together-we-grow-stronger">
        
      </a>
    </div>
    <p>The core value proposition of CloudFlare has always been that the system gets smarter and faster with each new website that joins. In that sense, CloudFlare is a community. When an attack is launched against any one site, knowledge about that attack is immediately shared across the rest of the network. Similarly, we use data from the performance of sites on CloudFlare to help tune optimizations for each new site that joins.</p>
    <div>
      <h3>Onward and Upward</h3>
      <a href="#onward-and-upward">
        
      </a>
    </div>
    <p>Today CloudFlare's community is made up of hundreds of thousands of sites, and each new site that joins continues to make the system better. Together we have brought the resources previously reserved only for the Internet giants to the rest of the web, and we've grown into a giant ourselves. We now power more page views per month than Twitter, Amazon.com, Wikipedia, Zynga, AOL, Apple, and Bing — <i>combined</i>. We have big plans for tomorrow and ways we are continuing to work to save the web, but we'll always remember that we couldn't have done it without you.</p><p>From the whole CloudFlare team, thank you!</p><p>P.S. - Want to have an even bigger impact? <a href="http://www.cloudflare.com/join-our-team.html">We're always hiring</a> and we usually get our best candidates from our existing users.</p> ]]></content:encoded>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Community]]></category>
            <category><![CDATA[Milestones]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <guid isPermaLink="false">4ngjwLrxXeQIyQ23f8CtQE</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Receiving the WSJ Award for Most Innovative Internet Technology Company]]></title>
            <link>https://blog.cloudflare.com/receiving-wall-street-journal-award/</link>
            <pubDate>Sat, 12 Nov 2011 02:56:00 GMT</pubDate>
            <description><![CDATA[ A few weeks ago the Wall Street Journal named CloudFlare the most innovative Networking & Internet Technology of 2011. We were pretty flattered given that winners from other categories included IBM for Watson... ]]></description>
            <content:encoded><![CDATA[ <p></p><p>A few weeks ago the <i>Wall Street Journal</i> <a href="/wall-street-journal-cloudflare-the-most-innov">named CloudFlare the most innovative Networking &amp; Internet Technology of 2011</a>. We were pretty flattered given that winners from other categories included IBM for Watson (the Jeopardy-playing computer), a company that figured out how to create stem cells from any cell in your body, and another that used algae to produce oil and diesel fuels cheaper than drilling for fossil fuels.</p><p>On Tuesday, I traveled down to Redwood City to receive the award at the FASTech conference and pose for this picture which, I'm sure, will cause my mom to write in (probably to customer support) and tell me I need a haircut.</p><p>Every day Ian on our team pulls a "stat of the day." He asks the rest of us to guess the answer. Today's was: "How much has CloudFlare grown since January 1, 2011?" It's pretty incredible to realize the answer: our traffic has <b>grown by more than 3,200% since January</b>. And we're not slowing down. In the next few days, we'll cross having saved our users more than <i>2 petabytes</i> of bandwidth. And, as we just got finished discussing during our Friday all-team B.E.E.R. meeting, if you knew what we had planned in 2012 you'd realize: we're just getting started.</p><p>Want to be a part of the ride? <a href="http://www.cloudflare.com/join-our-team.html">We're hiring</a>.</p> ]]></content:encoded>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Awards]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <guid isPermaLink="false">406PtkCBMYuTZwjE2I1r2Y</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome Citizens of CloudFland!]]></title>
            <link>https://blog.cloudflare.com/welcome-to-cloudfland/</link>
            <pubDate>Mon, 22 Aug 2011 22:11:00 GMT</pubDate>
            <description><![CDATA[ We crossed a milestone today: 312M people passed through CloudFlare's network in the last 30 days. That may seem like a strange number for a milestone, but it also happens to be the total population of the United States (the third most populous nation in the world). ]]></description>
            <content:encoded><![CDATA[ <p>We crossed a milestone today: 312M people passed through CloudFlare's network in the last 30 days. That may seem like a strange number for a milestone, but it also happens to be the total population of the United States (the third most populous nation in the world). Kevin put together the following infographic to put the milestone in perspective.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Y5ZfIJ6QeuwOLLKzB9bkt/20b8b934bc2fad1cea8158bf7928455b/cloudfland-population-infographic.png.scaled500.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Community]]></category>
            <category><![CDATA[Milestones]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <guid isPermaLink="false">1czjneih5Cb1kc3avuEj7c</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[10 Billion Page Views]]></title>
            <link>https://blog.cloudflare.com/10-billion-page-views/</link>
            <pubDate>Fri, 29 Jul 2011 21:25:00 GMT</pubDate>
            <description><![CDATA[ About an hour ago we crossed 10 billion page views served through CloudFlare over the last 30 days. That means more than 13% of worldwide Internet visitors passed through our network at least once in the last month.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We're ending the week on a high. About an hour ago we crossed 10 billion page views served through CloudFlare over the last 30 days. Those pages were served to about 250 million unique visitors. To put it in perspective, that means more than 13% of worldwide Internet visitors passed through our network at least once in the last month. That's almost 100 million more unique visitors than Twitter, and more than 3 billion more page views than Wikipedia, over the same period. It's also a lot more than we were doing just a month ago ourselves. We're now averaging over 25,000 requests per second. All I can say is: Wow. Great job team!</p> ]]></content:encoded>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Milestones]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <guid isPermaLink="false">5n4YuObINZ0P9XckCuhct4</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[At the Risk of Tooting Our Own Horn...]]></title>
            <link>https://blog.cloudflare.com/at-the-risk-of-tooting-our-own-horn/</link>
            <pubDate>Wed, 27 Jul 2011 00:35:00 GMT</pubDate>
            <description><![CDATA[ Kevin, the visual designer who recently joined CloudFlare's team, pulled together this graph the other day. Even having lived through the last 10 months, it's still incredible to see them represented here.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Kevin, the visual designer who recently joined CloudFlare's team, pulled together this graph the other day. Even having lived through the last 10 months, it's still incredible to see them represented here. We reached 100 million people experiencing the benefits of CloudFlare's network about seven months after our launch. After that, it took us only three more months to cross 200 million. And our growth has continued to accelerate: as of this morning, just a couple weeks after crossing 200 million, more than 230 million people every month now experience a faster, safer Internet because of CloudFlare.</p><p>One thing that I find particularly interesting about this graph is a similarity between most of these fast-growing companies. With the notable exception of Zynga (and to a lesser extent Yahoo) they have all built platforms that enhance other people's content. YouTube allowed anyone to publish their own videos. Similarly, Twitter, LinkedIn, and Facebook let anyone share their own personal and professional details with a broad audience. In some ways, Google's platform works in the opposite direction of the others -- taking other people's content and overlaying an index to make it searchable -- but the core idea of making other people's content better is the same.</p><p>Several months ago, I was talking with one of the famous, gray-haired engineers of Silicon Valley and he said, "What CloudFlare really is is the YouTube for websites." I have to confess that didn't entirely click for me until Kevin showed me this graph. There were ways to publish videos online prior to YouTube, but the company made doing so faster, easier, and better. There were ways to publish websites prior to CloudFlare, but what we've built is a transparent platform that makes any website faster, safer, and better.</p><p>This is an incredibly rewarding job, but today was particularly fun. 
We sat in one of the conference rooms at CFHQ and planned the rollout of what we're launching next. The Internet continues to suffer many challenges, but we think we've built a platform that can continue to help solve many of them. Stay tuned...</p> ]]></content:encoded>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Milestones]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <guid isPermaLink="false">7v9lUjDUGiUyIHmj3Q9w5R</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Growing Cloudflare]]></title>
            <link>https://blog.cloudflare.com/growing-cloudflare/</link>
            <pubDate>Tue, 12 Jul 2011 16:59:00 GMT</pubDate>
            <description><![CDATA[ Last September, CloudFlare launched and our world turned upside down. It turns out that webmasters worldwide had been waiting for a way to make their sites twice as fast and protect them from online threats. And, it turned out, if you give web admins a simple, easy-to-use interface... ]]></description>
            <content:encoded><![CDATA[ <p>Last September, CloudFlare launched and our world turned upside down. It turns out that webmasters worldwide had been waiting for a way to make their sites twice as fast and protect them from online threats. And, it turned out, if you give web admins a simple, easy-to-use interface, and make the product something anyone could afford, then literally thousands of new websites would sign up every day.</p><p>Today, just 10 months after our public launch, more than 10% of the Internet's visitors experience a faster, safer web because of CloudFlare every month. That's hard for us to imagine, even as we've watched the logs relentlessly grow. Needless to say, we are excited about where we are heading!</p><p>Today we announced that we'd raised more capital in order to help grow CloudFlare and it got picked up by a <a href="http://allthingsd.com/20110712/web-security-startup-cloudflare-lands-20-million-funding-round/?refcat=news">few</a> <a href="http://gigaom.com/2011/07/12/cloudflare-funding/">media</a> <a href="http://techcrunch.com/2011/07/12/oh-by-the-way-cloudflare-raised-20-million-last-november/">outlets</a>. I wanted to give a bit of background on how we got here.</p><p>It takes some capital in order to start a company like CloudFlare. While others will undoubtedly try to build a network like ours on top of EC2 or someone else's cloud, we believe you have to build the network from the ground up in order for it to really scale. We had originally raised a little over $2 million to build CloudFlare back in November 2009. We raised this initial Series A from Ray Rothrock at <a href="http://www.venrock.com/">Venrock</a>, and Carl Ledbetter at <a href="http://www.pelionvp.com">Pelion Venture Partners</a>. Ray has been a leading security investor for the last 20 years, and Carl was CTO at Novell and knows everything there is to know about computer networking. 
Beyond their industry expertise, they're also both great people who have been a pleasure to work with.</p><p>Immediately after our launch at TechCrunch Disrupt in late September 2010, CloudFlare started to grow quickly. Traffic through our network was doubling week over week. It was an exciting time for our team of eight. The quick adoption validated for us that CloudFlare could be a big, meaningful business.</p><p>Our public launch also introduced us to many incredible people, including partners, future employees and investors. One of the individuals that we met was Scott Sandell at <a href="http://www.nea.com/">NEA</a> in November 2010. He was immediately excited about CloudFlare's goal to make the Internet better for every website and web visitor. He was so excited that he became the lead investor for our Series B round where we <a href="http://www.marketwire.com/press-release/cloudflare-raises-20-million-to-bring-performance-and-security-to-every-website-1536886.htm">raised $20M</a>.</p><p>Before we took an investment from NEA, we asked two things of Scott. First, we asked that we not announce the new funding until we actually started to spend it. Second, we asked that we continue to stick to our original plan of building CloudFlare at the rate we'd originally intended. Raising money is a means to an end, it is not an end in and of itself. We wanted CloudFlare to focus on our plan and proceed at our own pace.</p><p>Ray, Carl, and Scott have been force multipliers to CloudFlare. They have quietly advised as we continued to execute on CloudFlare's original plan and lay the groundwork for the developments we have in store. While VCs can get a bad rap from entrepreneurs, if you find the right ones then they can be terrific at helping a disruptive new business like ours flourish.</p><p>Today we are already more than 20 times the size that we were in November 2010 when we closed the funding round, and our pace of growth is only accelerating. 
We have just begun to deploy that capital to accomplish two things: 1) further expanding our network to provide better service to our users; and 2) hiring the best employees to build a <a href="http://www.cloudflare.com/people.html">world class team</a> capable of helping solve the Internet's toughest challenges.</p><p>We have great things in store, and we're just getting started. As I've become fond of saying: stay tuned....</p> ]]></content:encoded>
            <category><![CDATA[Growth]]></category>
            <category><![CDATA[Cloudflare History]]></category>
            <guid isPermaLink="false">668EoYlgsxeqCwJr3hTE7U</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
    </channel>
</rss>