
The Technical Challenges of Building Cloudflare WARP

2019-09-25

18 min read
This post is also available in 简体中文, Deutsch, 日本語, Español and Français.

If you have seen our other post, you know that we released WARP to the last members of our waiting list today. With WARP our goal was to secure and improve the connection between your mobile devices and the Internet. Along the way we ran into problems with phone and operating system versions, diverse networks, and our own infrastructure, all while working to meet the pent-up demand of a waiting list nearly two million people long.

To understand all these problems and how we solved them we first need to give you some background on how the Cloudflare network works:

How Our Network Works

The Cloudflare network is composed of data centers located in 194 cities across more than 90 countries. Every Cloudflare data center is composed of many servers; each data center receives a continual flood of requests and has to distribute those requests among the servers that handle them. We use a set of routers to perform that operation:

Our routers listen on Anycast IP addresses which are advertised over the public Internet. If you have a site on Cloudflare, your site is available via two of these addresses. In this case, I am doing a DNS query for “workers.dev”, a site which is powered by Cloudflare:

➜ dig workers.dev

;; QUESTION SECTION:
;workers.dev.      IN  A

;; ANSWER SECTION:
workers.dev.    161  IN  A  198.41.215.162
workers.dev.    161  IN  A  198.41.214.162

;; SERVER: 1.1.1.1#53(1.1.1.1)

workers.dev is available at two addresses, 198.41.215.162 and 198.41.214.162 (along with two IPv6 addresses available via the AAAA DNS query). Those two addresses are advertised from every one of our data centers around the world. When someone connects to any Internet property on Cloudflare, each networking device their packets pass through will choose the shortest path from their computer or phone to the nearest Cloudflare data center.

Once the packets hit our data center, we send them to one of the many servers which operate there. Traditionally, one might use a load balancer to do that type of traffic distribution across multiple machines. Unfortunately putting a set of load balancers capable of handling our volume of traffic in every data center would be exceptionally expensive, and wouldn’t scale as easily as our servers do. Instead, we use devices built for operating on exceptional volumes of traffic: network routers.

Once a packet hits our data center it is processed by a router. That router sends the traffic to one of a set of servers responsible for handling that address, using a routing strategy called ECMP (Equal-Cost Multi-Path). ECMP refers to the situation where the router doesn’t have a clear ‘winner’ between multiple routes: it has multiple good next hops, all to the same ultimate destination. In our case we hack that concept a bit; rather than using ECMP to balance across multiple intermediary links, we make the intermediary link addresses the final destination of our traffic: our servers.

Here's the configuration of a Juniper-brand router of the type which might be in one of our data centers, and which is configured to balance traffic across three destinations:

user@host# show routing-options

static {
  route 172.16.1.0/24 next-hop [ 172.16.2.1 172.16.2.2 172.16.2.3 ];
}
forwarding-table {
  export load-balancing-policy;
}

Since each ‘next-hop’ is one of our servers, traffic will be split across multiple machines very efficiently.

TCP, IP, and ECMP

IP is responsible for sending packets of data from addresses like 93.184.216.34 to 208.80.153.224 (or [2606:2800:220:1:248:1893:25c8:1946] to [2620:0:860:ed1a::1] in the case of IPv6) across the Internet. It's the "Internet Protocol".

TCP (Transmission Control Protocol) operates on top of a protocol like IP which can send a packet from one place to another, and makes data transmission reliable and useful for more than one process at a time. It is responsible for taking the unreliable and misordered packets that might arrive over a protocol like IP and delivering them reliably, in the correct order. It also introduces the concept of a ‘port’, a number from 1-65535 which helps route traffic on a computer or phone to a specific service (such as the web or email). Each TCP connection has a source and destination port which are included in the header TCP adds to the beginning of each packet. Without the idea of ports it would not be easy to figure out which messages were destined for which program. For example, both Google Chrome and Mail might wish to send messages over your Wi-Fi connection at the same time, so they will each use their own port.

Here's an example of making a request for https://cloudflare.com/ at 198.41.215.162, on the default port for HTTPS: 443. My computer has randomly assigned the source port 51602, which it will listen on for a response that will (hopefully) contain the contents of the site:

Internet Protocol Version 4, Src: 19.5.7.21, Dst: 198.41.215.162
    Protocol: TCP (6)
    Source: 19.5.7.21
    Destination: 198.41.215.162
Transmission Control Protocol, Src Port: 51602, Dst Port: 443, Seq: 0, Len: 0
    Source Port: 51602
    Destination Port: 443

Looking at the same exchange from the Cloudflare side, the response is a mirror image: it originates from port 443 at our address and is destined for my public IP address and the source port I chose (I’m ignoring NAT for the moment, more on that later):

Internet Protocol Version 4, Src: 198.41.215.162, Dst: 19.5.7.21
    Protocol: TCP (6)
    Source: 198.41.215.162
    Destination: 19.5.7.21
Transmission Control Protocol, Src Port: 443, Dst Port: 51602, Seq: 0, Len: 0
    Source Port: 443
    Destination Port: 51602

We can now return to ECMP! It would theoretically be possible to use ECMP to balance packets between servers randomly, but you would almost never want to do that. A message over the Internet is generally composed of multiple TCP packets. If each packet were sent to a different server it would be impossible to reconstruct the original message in any one place and act on it. Even beyond that, it would be terrible for performance: we rely on being able to maintain long-lived TCP and TLS sessions which require a persistent connection to a single server. To provide that persistence, our routers don’t balance traffic randomly; they use a combination of four values: the source address, the source port, the destination address, and the destination port. Traffic with the same combination of those four values will always make it to the same server. In the case of my example above, all of my messages destined to cloudflare.com will make it to a single server which can reconstruct the TCP packets into my request and return packets in a response.
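As a rough illustration (this is a hypothetical sketch, not any router's actual algorithm), the pinning behaviour boils down to hashing the four-tuple deterministically and using the result to pick a server:

func ecmpNextHop(srcIP: String, srcPort: UInt16, dstIP: String, dstPort: UInt16, servers: [String]) -> String {
    // A stable FNV-1a hash over the four-tuple. Real routers use their own
    // (often hardware) hash functions, but the property that matters is
    // determinism: the same four values always select the same server.
    var hash: UInt64 = 0xcbf29ce484222325
    for byte in "\(srcIP):\(srcPort)->\(dstIP):\(dstPort)".utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    return servers[Int(hash % UInt64(servers.count))]
}

As long as none of the four inputs changes, every packet of the connection lands on the same machine; change any one of them and the traffic may move.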

Enter WARP

For a conventional request it is very important that our ECMP routing sends all of your packets to the same server for the duration of your request. Over the web a request commonly lasts less than ten seconds and the system works well. Unfortunately we quickly ran into issues with WARP.

WARP uses a session key negotiated with public-key encryption to secure packets. For a successful connection, both sides must negotiate a connection which is then only valid for that particular client and the specific server they are talking to. This negotiation takes time and has to be completed any time a client talks to a new server. Even worse, if packets encrypted for one server end up at another, they can’t be decrypted, breaking the connection. Detecting those failed packets and restarting the connection from scratch takes so much time that our alpha testers experienced it as a complete loss of their Internet connection. As you can imagine, testers don’t leave WARP on very long when it prevents them from using the Internet.

WARP was experiencing so many failures because devices were switching servers much more often than we expected. If you recall, our ECMP router configuration uses a combination of (Source IP, Source Port, Destination IP, Destination Port) to match a packet to a server. The destination IP doesn’t generally change: WARP clients are always connecting to the same Anycast addresses. Similarly, the destination port doesn’t change: we always listen on the same port for WARP traffic. The other two values, source IP and source port, were changing much more frequently than we had planned for.

One source of these changes was expected. WARP runs on cell phones, and cell phones commonly switch between cellular and Wi-Fi connections. When you make that switch you suddenly go from communicating over the Internet via your cellular carrier’s IP address space (AT&T’s or Verizon’s, say) to that of the Internet Service Provider your Wi-Fi connection uses (like Comcast or Google Fiber). Your IP address will essentially always change when you move between connections.

The port changes, however, occurred even more frequently than network switches could explain. To understand why, we need to introduce one more piece of Internet lore: Network Address Translation.

NAT

An IPv4 address is composed of 32 bits (often written as four eight-bit numbers). If you exclude the reserved addresses which can’t be used, you are left with 3,706,452,992 possible addresses. This number has remained essentially constant since IPv4 was deployed on the ARPANET in 1983 (although it might go up a bit if the 0.0.0.0/8 block ever becomes available), even as the number of Internet-connected devices has exploded into the billions, per Gartner Research predictions and estimates.

IPv6 is the definitive solution to this problem. It expands the length of an address from 32 to 128 bits, with 125 of them usable in a public Internet address at the moment (all public IPv6 addresses currently have the first three bits set to 001; the remaining 87.5% of the IPv6 address space is not considered necessary yet). 2^125 is an almost unimaginably large number and would be more than enough for every device on Earth to have its own address. Unfortunately, 21 years after it was published, IPv6 remains unsupported on many networks. Much of the Internet still relies on IPv4, and as we saw above, there aren’t enough IPv4 addresses for every device to have its own.

To solve this problem, many devices are commonly put behind a single Internet-addressable IP address. A router is used to perform Network Address Translation: it takes messages which arrive on that single public IP and forwards them to the appropriate device on its local network. In effect it’s as if everyone in your apartment building shared the same street address, and the postal worker was responsible for sorting out which mail was meant for which person.

When your device sends a packet destined for the Internet, your router intercepts it. The router then rewrites the source address to the single public Internet address allocated to you, and the source port to one which is unique across all the messages being sent by all the Internet-connected devices on your network. Just as your computer chooses a random source port which is unique among all the different processes on your computer, your router chooses a random source port which is unique among all the Internet connections across your entire network. It remembers the port it selected as belonging to your connection, and allows the message to continue over the Internet.

When a response arrives destined for the port allocated to you, the router matches it to your connection and rewrites it again, this time replacing the destination address with your address on the local network, and the destination port with the original source port you specified. It has transparently allowed all the devices on your network to act as if they were one big computer with a single Internet-connected IP address.

This process works very well for the duration of a common request over the Internet. Your router only has so much space, however, so it helpfully deletes old port assignments, freeing up space for new ones. It generally waits for a connection to go without any messages for thirty seconds or more before deleting an assignment, making it unlikely that a response will arrive which it can no longer direct to the appropriate source. Unfortunately, WARP sessions need to last much longer than thirty seconds.
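A toy sketch of the bookkeeping involved (the type names, the random port selection, and the thirty-second figure are illustrative assumptions, not any particular router's implementation):

import Foundation

struct PrivateEndpoint: Hashable {
    let address: String   // the device's address on the local network
    let port: UInt16      // the source port the device chose
}

final class NATTable {
    private var byPublicPort: [UInt16: (endpoint: PrivateEndpoint, lastSeen: Date)] = [:]
    private var byEndpoint: [PrivateEndpoint: UInt16] = [:]
    let idleTimeout: TimeInterval = 30   // idle entries are evicted after this long

    // Outbound packet: reuse the existing public source port or allocate a new one.
    func publicPort(for endpoint: PrivateEndpoint) -> UInt16 {
        if let port = byEndpoint[endpoint] {
            byPublicPort[port]?.lastSeen = Date()
            return port
        }
        let port = UInt16.random(in: 1024...65535)   // collisions ignored for brevity
        byEndpoint[endpoint] = port
        byPublicPort[port] = (endpoint, Date())
        return port
    }

    // Inbound packet: find the device a response belongs to, if the mapping survived.
    func endpoint(forPublicPort port: UInt16) -> PrivateEndpoint? {
        guard let entry = byPublicPort[port],
              Date().timeIntervalSince(entry.lastSeen) < idleTimeout else {
            return nil   // the assignment expired or never existed
        }
        return entry.endpoint
    }
}

Once an entry is evicted, the next outbound packet gets a brand-new public port, which is exactly the behaviour that breaks the ECMP mapping described below.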

When you next send a message after your NAT session has expired, you are given a new source port. That new port causes your ECMP mapping (based on source IP, source port, destination IP, destination port) to change, causing us to route your requests to a new machine within the Cloudflare data center your messages are arriving at. This breaks your WARP session, and your Internet connection.

We experimented extensively with methods of keeping your NAT session fresh by periodically sending keep-alive messages which would prevent routers and mobile carriers from evicting mappings. Unfortunately, waking your device’s radio every thirty seconds is bad for battery life, and it was not entirely successful at preventing port and address changes anyway. We needed a way to always map sessions to the same machine, even as their source port (and even source address) changed.

Fortunately, we had a solution which came from elsewhere at Cloudflare. We don’t use dedicated load balancers, but we do have many of the same problems load balancers solve. We have long needed to map traffic to Cloudflare servers with more control than ECMP alone allows. Rather than deploying an entire tier of load balancers, we use every server in our network as a load balancer, forwarding packets first to an arbitrary machine and then relying on that machine to forward the packet to the appropriate host. This consumes minimal resources and allows us to scale our load balancing infrastructure with each new machine we add. We have a lot more to share on how this infrastructure works and what makes it unique; subscribe to this blog to be notified when that post is released.

To make our load balancing technique work, though, we needed a way to identify which client a WARP packet belonged to before it could be decrypted. To understand how we did that it’s helpful to understand how WARP encrypts your messages. The industry-standard way of connecting a device to a remote network is a VPN. VPNs use a protocol like IPsec to allow your device to send messages securely to a remote network. Unfortunately, VPNs are generally rather disliked. They slow down connections, eat battery life, and their complexity makes them a frequent source of security vulnerabilities. Users of corporate networks which mandate VPNs often hate them, and the idea that we would convince millions of consumers to install one voluntarily seemed ridiculous.

After considering and testing several more modern options, we landed on WireGuard®. WireGuard is a modern, high-performance, and most importantly, simple protocol created by Jason Donenfeld to solve the same problem. Its original code base is less than 1% the size of a popular IPsec implementation, making it easy for us to understand and secure. We chose Rust as the language most likely to give us the performance and safety we needed, implemented WireGuard in it while optimizing the code heavily to run quickly on the platforms we were targeting, and then open sourced the project.

WireGuard changes two very relevant things about the traffic you send over the Internet. The first is that it uses UDP, not TCP. The second is that it uses a session key negotiated with public-key encryption to secure the contents of each UDP packet.

TCP is the conventional protocol used for loading a website over the Internet. It combines the ability to address ports (which we talked about previously) with reliable delivery and flow control. Reliable delivery ensures that if a message is dropped, TCP will eventually resend the missing data. Flow control gives TCP the tools it needs to handle many clients all sharing the same link and exceeding its capacity. UDP is a much simpler protocol which trades those capabilities for simplicity: it makes a best-effort attempt to send a message, and if the message is dropped or there is too much data for the links, it is simply never heard from again.

UDP’s lack of reliability would normally be a problem while browsing the Internet, but we are not simply sending UDP, we are sending a complete TCP packet _inside_ our UDP packets.

Inside the payload encrypted by WireGuard we have a complete TCP header which contains all the information necessary to ensure reliable delivery. We then wrap it with WireGuard’s encryption and use UDP to (less-than-reliably) send it over the Internet. Should it be dropped, TCP will do its job just as if a network link had lost the message, and resend it. If we instead wrapped our inner, encrypted, TCP session in another TCP connection, as some other protocols do, we would dramatically increase the number of network messages required, destroying performance.
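A deliberately simplified sketch of that layering (the helper name is an assumption, and the real WireGuard transport message also carries a small header with a receiver index and counter, with the nonce derived from that counter rather than chosen at random):

import CryptoKit
import Foundation
import NetworkExtension

// The inner packet is a complete IP packet (IP header + TCP header + payload),
// exactly what the OS would have written to a physical link. The whole thing is
// encrypted with the WireGuard session key and sent as one UDP datagram.
func sendEncapsulated(innerIPPacket: Data, sessionKey: SymmetricKey, over session: NWUDPSession) throws {
    let sealed = try ChaChaPoly.seal(innerIPPacket, using: sessionKey)
    session.writeDatagram(sealed.combined) { error in
        // Best effort only: if this datagram is lost, the TCP stack that built
        // the inner packet notices the missing data and retransmits it, just as
        // it would after loss on an ordinary network link.
    }
}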

The second interesting component of WireGuard relevant to our discussion is public-key encryption. WireGuard allows you to secure each message you send such that only the specific destination you are sending it to can decrypt it. That is a powerful way of ensuring your security as you browse the Internet, but it means it is impossible to read anything inside the encrypted payload until the message has reached the server which is responsible for your session.

Returning to our load balancing issue, you can see that only three things are accessible to us before we can decrypt the message: the IP header, the UDP header, and the WireGuard header. Neither the IP header nor the UDP header includes the information we need, as the four pieces of information they contain (source IP, source port, destination IP, destination port) have already failed us. That leaves the WireGuard header as the one location where we could find an identifier to keep track of which client a message belongs to before decrypting it. Unfortunately, there isn’t one. This is the format of the message used to initiate a connection:
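(Shown here as a rough field listing, per the WireGuard protocol definition; sizes are in bytes.)

handshake_initiation
    type (1)                     always 1 for a handshake initiation
    reserved (3)                 zeroed by standard WireGuard clients
    sender (4)                   a random index chosen for this handshake
    ephemeral (32)               the sender's ephemeral public key
    encrypted static (32+16)     the sender's static public key, encrypted
    encrypted timestamp (12+16)
    mac1 (16)
    mac2 (16)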

sender looks temptingly like a client id, but it’s randomly assigned for every handshake. Handshakes have to be performed every two minutes to rotate keys, making the value insufficiently persistent. We could have forked the protocol to add any number of additional fields, but it is important to us to remain wire-compatible with other WireGuard clients. Fortunately, WireGuard has a three-byte block in its header which is not currently used by other clients. We decided to put our identifier in this region and still support messages from other WireGuard clients (albeit with less reliable routing than we can offer). If this reserved section is ever used for other purposes we can ignore those bits or work with the WireGuard team to extend the protocol in another suitable way.

When we begin a WireGuard session we include our clientid field, which is provided by the authentication server that has to be contacted to begin a WARP session. Data messages similarly include the same field:
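(A sketch of where it lives on the wire: the clientid occupies the three bytes that standard WireGuard clients leave reserved and zeroed, in both message types.)

handshake_initiation
    type (1)
    clientid (3)                 the otherwise-reserved bytes
    sender (4)
    ... remaining handshake fields unchanged ...

data_packet
    type (1)                     always 4 for a data packet
    clientid (3)                 the otherwise-reserved bytes
    receiver (4)
    counter (8)
    encrypted encapsulated packet (variable)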

It's important to note that the clientid is only 24 bits long. That means there are fewer possible clientid values than the current number of users waiting to use WARP. This suits us well, as we don't need or want the ability to track individual WARP users. The clientid is only necessary for load balancing; once it has served its purpose we expunge it from our systems as quickly as we can.

The load balancing system now uses a hash of the clientid to identify which machine a packet should be routed to, meaning WARP messages always arrive at the same machine even as you change networks or move from Wi-Fi to cellular, and the problem was eliminated.
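In spirit (a hypothetical sketch, not our load balancer's code), the selection inside the data center becomes a function of the clientid alone:

func machine(forClientID clientID: UInt32, machines: [String]) -> String {
    // The clientid is only 24 bits; any stable hash of those bits will do here.
    // Because the source address and port no longer participate, a device that
    // roams between Wi-Fi and cellular keeps landing on the same machine.
    var hash: UInt64 = 0xcbf29ce484222325
    for shift in stride(from: 0, to: 24, by: 8) {
        hash ^= UInt64((clientID >> shift) & 0xff)
        hash = hash &* 0x100000001b3
    }
    return machines[Int(hash % UInt64(machines.count))]
}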

Client Software

Cloudflare has never developed client software before. We take pride in selling a service anyone can use without needing to buy hardware or provision infrastructure. To make WARP work, however, we needed to deploy our code onto one of the most ubiquitous hardware platforms on Earth: smartphones.

While developing software for mobile devices has gotten steadily easier over the past decade, developing low-level networking software unfortunately remains rather difficult. To consider one example: we began the project using the latest iOS connection API, called Network, introduced in iOS 12. Apple strongly recommends the use of Network; in their words, “Your customers are going to appreciate how much better your connections, how much more reliable your connections are established, and they'll appreciate the longer battery life from the better performance.”

The Network framework provides a pleasantly high-level API which, as they say, integrates well with the native performance features built into iOS. Creating a UDP connection (‘connection’ is a bit of a misnomer; there are no connections in UDP, just packets) is as simple as:

self.connection = NWConnection(host: hostUDP, port: portUDP, using: .udp)

And sending a message can be as easy as:

self.connection?.send(content: content,
    completion: .contentProcessed({ error in /* handle any send error */ }))

Unfortunately, at a certain point code actually gets deployed and bug reports begin flowing in. The first issue was that the simplicity of the API made it impossible for us to process more than a single UDP packet at a time. We commonly use packets of up to 1500 bytes; running a speed test on my Google Fiber connection currently results in a speed of 370 Mbps, or almost thirty-one thousand packets per second. Attempting to process each packet individually was slowing down connections by as much as 40%. According to Apple, the best solution to get the performance we needed was to fall back to the older NWUDPSession API, introduced in iOS 9.
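NWUDPSession exposes batched reads and writes, which is what made the fallback attractive. A minimal sketch (assuming `session` is an already-established NWUDPSession and `outbound` is a batch of datagrams we want to send):

import NetworkExtension

func pumpPackets(over session: NWUDPSession, outbound: [Data]) {
    // Receive up to 32 datagrams per callback instead of one at a time.
    session.setReadHandler({ datagrams, error in
        // process the whole batch of datagrams here
    }, maxDatagrams: 32)

    // Hand the system a whole batch of outbound datagrams in a single call.
    session.writeMultipleDatagrams(outbound) { error in
        // handle any send error here
    }
}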

IPv6

If we compare the code required to create an NWUDPSession to the example above, you will notice that we suddenly care which protocol, IPv4 or IPv6, we are using:

let v4Session = NWUDPSession(upgradeFor: self.ipv4Session)
v4Session.setReadHandler(self.filteringReadHandler, maxDatagrams: 32)

In fact, NWUDPSession does not handle many of the trickier elements of creating connections over the Internet. For example, the Network framework will automatically determine whether a connection should be made over IPv4 or IPv6.

NWUDPSession does not do this for you, so we began creating our own logic to determine which type of connection should be used. Once we began to experiment, it quickly became clear that IPv4 and IPv6 routes are not created equal. It’s fairly common for routes to the same destination to have very different performance depending on whether you use its IPv4 or IPv6 address. Often this is because IPv4 addresses have been around longer and there are fewer of them, making it possible for those routes to be better optimized by the Internet’s infrastructure.

Every Cloudflare product has to support IPv6 as a rule. In 2016, we enabled IPv6 for over 98% of our network, over four million sites, and made a pretty big dent in IPv6 adoption on the web.

We couldn’t release WARP without IPv6 support. We needed to ensure that we were always using the fastest possible connection while still supporting both protocols in equal measure. To solve that we turned to a technology we have used with DNS for years: Happy Eyeballs. As codified in RFC 6555, Happy Eyeballs is the idea that you should look for both an IPv4 and an IPv6 address when doing a DNS lookup. Whichever returns first wins. That way you can allow IPv6 websites to load quickly even in a world which does not fully support it.

As an example, I am loading the website http://zack.is/. My web browser makes a DNS request for both the IPv4 address (an “A” record) and the IPv6 address (an “AAAA” record) at the same time:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 1.1.1.1
User Datagram Protocol, Src Port: 47447, Dst Port: 53
Domain Name System (query)
    Queries
        zack.is: type A, class IN

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 1.1.1.1
User Datagram Protocol, Src Port: 49946, Dst Port: 53
Domain Name System (query)
    Queries
        zack.is: type AAAA, class IN

In this case the response to the A query returned more quickly, and the connection is begun using that protocol:

Internet Protocol Version 4, Src: 1.1.1.1, Dst: 192.168.7.21
User Datagram Protocol, Src Port: 53, Dst Port: 47447
Domain Name System (response)
    Queries
        zack.is: type A, class IN
    Answers
        zack.is: type A, class IN, addr 104.24.101.191
       
Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.24.101.191
Transmission Control Protocol, Src Port: 55244, Dst Port: 80, Seq: 0, Len: 0
    Source Port: 55244
    Destination Port: 80
    Flags: 0x002 (SYN)

We don’t need to do DNS queries to make WARP connections since we already know the IP addresses of our data centers, but we do want to know which of the IPv4 and IPv6 addresses will lead to a faster route over the Internet. To accomplish that we apply the same technique, but at the network level: we send a packet over each protocol and use whichever protocol returns first for subsequent messages. With some error handling and logging removed for brevity, it appears as:

let raceFinished = Atomic<Bool>(false)

let happyEyeballsRacer: (NWUDPSession, NWUDPSession, String) -> Void = {
    (session, otherSession, name) in
    // Session is the session the racer runs for, otherSession is a session we race against

    let handleMessage: ([Data]) -> Void = { datagrams in
        // This handler will be executed twice, once for the winner, again for the loser.
        // It does not matter what reply we received. Any reply means this connection is working.

        if raceFinished.swap(true) {
            // This racer lost
            return self.filteringReadHandler(data: datagrams, error: nil)
        }

        // The winner becomes the current session
        self.wireguardServerUDPSession = session

        session.setReadHandler(self.readHandler, maxDatagrams: 32)
        otherSession.setReadHandler(self.filteringReadHandler, maxDatagrams: 32)
    }

    session.setReadHandler({ (datagrams) in
        handleMessage(datagrams)
    }, maxDatagrams: 1)

    if !raceFinished.value {
        // Send a handshake message
        session.writeDatagram(onViable())
    }
}

This technique successfully allows us to support IPv6 addressing. In fact, every device which uses WARP instantly gains IPv6 support, even on networks which don’t offer it. WARP takes the 34% of Comcast’s network which doesn’t support IPv6 and the 69% of Charter’s network which doesn’t (as of 2018), and allows those users to communicate with IPv6 servers successfully.

This test shows my phone’s IPv6 support before and after enabling WARP.

Dying Connections

Nothing is simple, however: with iOS 12.2, NWUDPSession began to trigger errors which terminated connections. These errors were identified only by the code ‘55’. After some research, it appears 55 has referred to the same error since the early foundations of FreeBSD, the operating system OS X was originally built upon. In FreeBSD it’s commonly referred to as ENOBUFS, and it’s returned when the operating system does not have sufficient BUFfer Space to handle the operation being completed. For example, in the IPv6 implementation of a present-day FreeBSD, if enough memory cannot be allocated to accommodate the size of an IPv6 and ICMP6 header, the error ENOBUFS (which is mapped to the number 55) is returned.

Unfortunately, Apple’s take on FreeBSD is not open source: how, when, and why they might be returning the error is a mystery. This error has been experienced by other UDP-based projects, but a resolution is not forthcoming.

What is clear is that once error 55 begins occurring, the connection is no longer usable. To handle this case we need to reconnect, but repeating the same Happy Eyeballs mechanic we use on initial connection is both unnecessary (we were already talking over the fastest connection) and would consume valuable time. Instead, we added a second connection method which is only used to recreate an already-working session:

/**
Create a new UDP connection to the server using a Happy Eyeballs-like heuristic.

This function should be called when first establishing a connection to the edge server.

It will initiate a new connection over IPv4 and IPv6 in parallel, keeping the connection that receives the first response.
*/

func connect(onViable: @escaping () -> Data, onReply: @escaping () -> Void, onFailure: @escaping () -> Void, onDisconnect: @escaping () -> Void)

/**
Recreate the current connections.

This function should be called as a response to error code 55, when a quick connection is required.

Unlike `happyEyeballs`, this function will use viability as its only success criterion.
*/

func reconnect(onViable: @escaping () -> Void, onFailure: @escaping () -> Void, onDisconnect: @escaping () -> Void)

Using reconnect we are able to recreate sessions broken by code 55 errors, but it still adds a latency hit which is not ideal. As with all client software development on a closed-source platform, however, we are dependent on the platform vendor to identify and fix platform-level bugs.

Truthfully, this is just one of a long list of platform-specific bugs we ran into building WARP. We hope to continue working with device vendors to get them fixed. There is an unimaginable number of device and connection combinations, and each one doesn’t just exist at a single moment in time; connections are always changing, entering and leaving broken states almost faster than we can track. Even now, getting WARP to work on every device and connection on Earth is not a solved problem; we still get daily bug reports which we work to triage and resolve.

WARP+

WARP is meant to be a place where we can apply optimizations which make the Internet better. We have a lot of experience making websites more performant; WARP is our opportunity to experiment with doing the same for all Internet traffic.

At Cloudflare we have a product called Argo. Argo makes websites’ time to first byte more than 30% faster on average by continually monitoring thousands of routes over the Internet between our data centers. That data builds a database which maps every IP address range to the fastest possible route to every destination. When a packet arrives it first reaches the data center closest to the client, then that data center uses data from our tests to discover the route which will get the packet to its destination with the lowest possible latency. You can think of it like a traffic-aware GPS for the Internet.

Argo has historically only operated on HTTP packets. HTTP is the protocol which powers the web, sending messages which load websites on top of TCP and IP. For example, if I load http://zack.is/, an HTTP message is sent inside a TCP packet:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.24.101.191
Transmission Control Protocol, Src Port: 55244, Dst Port: 80
    Source Port: 55244
    Destination Port: 80
    TCP payload (414 bytes)
Hypertext Transfer Protocol
    GET / HTTP/1.1\r\n
    Host: zack.is\r\n
    Connection: keep-alive\r\n
    Accept-Encoding: gzip, deflate\r\n
    Accept-Language: en-US,en;q=0.9\r\n
    \r\n

The modern and secure web presents a problem for us however: When I make the same request over HTTPS (https://zack.is) rather than just HTTP (http://zack.is), I see a very different result over the wire:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.25.151.102
Transmission Control Protocol, Src Port: 55983, Dst Port: 443
    Source Port: 55983
    Destination Port: 443
    Transport Layer Security
    TCP payload (54 bytes)
Transport Layer Security
    TLSv1.2 Record Layer: Application Data Protocol: http-over-tls
        Encrypted Application Data: 82b6dd7be8c5758ad012649fae4f469c2d9e68fe15c17297…

My request has been encrypted! It’s no longer possible for WARP (or anyone but the destination) to tell what is in the payload. It might be HTTP, but it might also be any other protocol. If my site is one of the twenty million which already use Cloudflare, we can decrypt the traffic and accelerate it (along with a long list of other optimizations). But for encrypted traffic destined somewhere else, the existing HTTP-only Argo technology was not going to work.

Fortunately we now have a good amount of experience working with non-HTTP traffic through our Spectrum and Magic Transit products. To solve our problem the Argo team turned to the CONNECT protocol.

As we now know, when a WARP request is made it first communicates over the WireGuard protocol to a server running in one of our 194 data centers around the world. Once the WireGuard message has been decrypted, we examine the destination IP address to see if it is an HTTP request destined for a Cloudflare-powered site, or a request destined elsewhere. If it’s destined for us it enters our standard HTTP serving path; often we can reply to the request directly from our cache in the very same data center.

If it’s not destined for a Cloudflare-powered site we instead forward the packet to a proxy process which runs on each machine. This proxy is responsible for loading the fastest path from our Argo database and beginning an HTTP session with a machine in the data center this traffic should be forwarded to. It uses the CONNECT command to both transmit metadata (as headers) and turn the HTTP session into a connection which can transmit the raw bytes of the payload:

CONNECT 8.54.232.11:5564 HTTP/1.1\r\n
Exit-Tcp-Keepalive-Duration: 15\r\n
Application: warp\r\n
\r\n
<data to send to origin>

Once the message arrives at the destination data center it is either forwarded to yet another data center (if that is best for performance), or sent directly to the origin which is awaiting the traffic.

Smart routing is just the beginning of WARP+. We have a long list of projects and plans, all aimed at making your Internet faster, and we couldn’t be more thrilled to finally have a platform to test them with.

Our Mission

Today, after well over a year of development, WARP is available to you and to your friends and family. For us though, this is just the beginning. With the ability to improve the full network connection for all traffic, we unlock a whole new world of optimizations and security improvements which were simply impossible before. We couldn't be more excited to experiment, play, and eventually release all sorts of new WARP and WARP+ features.

Cloudflare’s mission is to help build a better Internet. If we are willing to experiment and solve hard technical problems together we believe we can help make the future of the Internet better than the Internet of today, and we are all grateful to play a part in that. Thank you for trusting us with your Internet connection.

WARP was built by Oli Yu, Vlad Krasnov, Chris Branch, Dane Knecht, Naga Tripirineni, Andrew Plunk, Adam Schwartz, Irtefa, and intern Michelle Chen with support from members of our Austin, San Francisco, Champaign, London, Warsaw, and Lisbon offices.
