
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 16:51:44 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How Workers VPC Services connects to your regional private networks from anywhere in the world]]></title>
            <link>https://blog.cloudflare.com/workers-vpc-open-beta/</link>
            <pubDate>Wed, 05 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Workers VPC Services enter open beta today. We look under the hood to see how Workers VPC connects your globally deployed Workers to your regional private networks using Cloudflare's global network, while abstracting cross-cloud networking complexity. ]]></description>
            <content:encoded><![CDATA[ <p>In April, we shared our vision for a <a href="https://blog.cloudflare.com/workers-virtual-private-cloud/"><u>global virtual private cloud on Cloudflare</u></a>, a way to unlock your applications from regionally constrained clouds and on-premise networks, enabling you to build truly cross-cloud applications.</p><p>Today, we’re announcing the first milestone of our Workers VPC initiative: VPC Services. VPC Services allow you to connect to your APIs, containers, virtual machines, serverless functions, databases and other services in regional private networks via <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/"><u>Cloudflare Tunnels</u></a> from your <a href="https://workers.cloudflare.com/"><u>Workers</u></a> running anywhere in the world.</p><p>Once you set up a Tunnel in your desired network, you can register each service that you want to expose to Workers by configuring its host or IP address. Then, you can access the VPC Service as you would any other <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/"><u>Workers service binding</u></a> — requests will automatically route to the VPC Service over Cloudflare’s network, regardless of where your Worker is executing:</p>
            <pre><code>export default {
  async fetch(request, env, ctx) {
    // Perform application logic in Workers here

    // Call an external API running on ECS in AWS using the binding
    const response = await env.AWS_VPC_ECS_API.fetch("http://internal-host.com");

    // Additional application logic in Workers
    return new Response();
  },
};</code></pre>
            <p>Workers VPC is now available to everyone using Workers, at no additional cost during the beta, as is Cloudflare Tunnels. <a href="https://dash.cloudflare.com/?to=/:account/workers/vpc/services"><u>Try it out now.</u></a> And read on to learn more about how it works under the hood.</p>
    <div>
      <h2>Connecting the networks you trust, securely</h2>
    </div>
    <p>Your applications span multiple networks, whether they are on-premise or in external clouds. But it’s been difficult to connect from Workers to your APIs and databases locked behind private networks. </p><p>We have <a href="https://blog.cloudflare.com/workers-virtual-private-cloud/"><u>previously described</u></a> how traditional virtual private clouds and networks entrench you into traditional clouds. While they provide you with workload isolation and security, traditional virtual private clouds make it difficult to build across clouds, access your own applications, and choose the right technology for your stack.</p><p>A significant part of the cloud lock-in is the inherent complexity of building secure, distributed workloads. VPC peering requires you to configure routing tables, security groups and network access-control lists, since it relies on networking across clouds to ensure connectivity. In many organizations, this means weeks of discussions and many teams involved to get approvals. This lock-in is also reflected in the solutions invented to wrangle this complexity: Each cloud provider has their own bespoke version of a “Private Link” to facilitate cross-network connectivity, further restricting you to that cloud and the vendors that have integrated with it.</p><p>With Workers VPC, we’re simplifying that dramatically. You set up your Cloudflare Tunnel once, with the necessary permissions to access your private network. Then, you can configure Workers VPC Services, with the tunnel and hostname (or IP address and port) of the service you want to expose to Workers. Any request made to that VPC Service will use this configuration to route to the given service within the network.</p>
            <pre><code>{
  "type": "http",
  "name": "vpc-service-name",
  "http_port": 80,
  "https_port": 443,
  "host": {
    "hostname": "internally-resolvable-hostname.com",
    "resolver_network": {
      "tunnel_id": "0191dce4-9ab4-7fce-b660-8e5dec5172da"
    }
  }
}</code></pre>
            <p>This ensures that, once represented as a Workers VPC Service, a service in your private network is secured in the same way other Cloudflare bindings are, using the Workers binding model. Let’s take a look at a simple VPC Service binding example:</p>
            <pre><code>{
  "name": "WORKER-NAME",
  "main": "./src/index.js",
  "vpc_services": [
    {
      "binding": "AWS_VPC2_ECS_API",
      "service_id": "5634563546"
    }
  ]
}</code></pre>
            <p>Like other Workers bindings, when you deploy a Worker project that tries to connect to a VPC Service, the access permissions are verified at deploy time to ensure that the Worker has access to the service in question. And once deployed, the Worker can use the VPC Service binding to make requests to that VPC Service — and only that service within the network.</p><p>That’s significant: Instead of exposing the entire network to the Worker, only the specific VPC Service can be accessed by the Worker. This access is verified at deploy time to provide more explicit and transparent service access control than traditional networks and access-control lists do.</p><p>This is a key factor in the design of Workers bindings: security by default, simpler management, and immunity to Server-Side Request Forgery (SSRF) attacks. <a href="https://blog.cloudflare.com/workers-environment-live-object-bindings/#security"><u>We’ve gone deep on the binding security model in the past</u></a>, and it becomes that much more critical when accessing your private networks.</p><p>Notably, the binding model is also important when considering what Workers are: scripts running on Cloudflare’s global network. They are not, in contrast to traditional clouds, individual machines with IP addresses, and do not exist within networks. Bindings provide secure access to other resources within your Cloudflare account – and the same applies to Workers VPC Services.</p>
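<p>To make the SSRF point concrete, here is a minimal model of the binding behavior described above (illustrative only: <code>makeBinding</code> and its return shape are invented for this sketch, not a real Workers API). The destination is fixed when the binding is configured, so the hostname in the URL passed to <code>fetch()</code> cannot redirect the request:</p>

```javascript
// Toy model of a VPC Service binding: the target host is fixed at
// configuration (deploy) time, not derived from the caller's URL.
function makeBinding(configuredHost) {
  return {
    fetch(url) {
      const u = new URL(url);
      // Only the path and query of the caller's URL are honored; the
      // destination host is always the configured service.
      return { routedTo: configuredHost, path: u.pathname };
    },
  };
}

const env = { AWS_VPC_ECS_API: makeBinding("internal-host.com") };

// Even an attacker-controlled URL cannot reach another host:
const result = env.AWS_VPC_ECS_API.fetch("http://evil.example/admin");
console.log(result.routedTo); // "internal-host.com"
```

<p>The same property holds for every request made through the binding, which is why a Worker holding only this binding cannot be coerced into probing other hosts on the network.</p>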
    <div>
      <h2>A peek under the hood</h2>
    </div>
    <p>So how do VPC Services and their bindings route network requests from Workers anywhere on Cloudflare’s global network to regional networks using tunnels? Let’s look at the lifecycle of a sample HTTP request made through a VPC Service’s dedicated <b>fetch()</b> method, represented here:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4iUTiZjmbm2ujppLugfxJo/4db92fdf8549c239f52d8636e2589baf/image4.png" />
          </figure><p>It all starts in the Worker code, where the <b>.fetch() </b>function of the desired VPC Service is called with a standard JavaScript <a href="https://developer.mozilla.org/en-US/docs/Web/API/Request"><u>Request</u></a> (as represented with Step 1). The Workers runtime will use a <a href="https://capnproto.org/"><u>Cap’n Proto</u></a> remote-procedure-call to send the original HTTP request alongside additional context, as it does for many other Workers bindings. </p><p>The Binding Worker of the VPC Service System receives the HTTP request along with the binding context, in this case, the Service ID of the VPC Service being invoked. The Binding Worker will proxy this information to the Iris Service within an HTTP CONNECT connection, a standard pattern across Cloudflare’s bindings to place connection logic to Cloudflare’s edge services within Worker code rather than the Workers runtime itself (Step 2). </p><p>The Iris Service is the main service for Workers VPC. Its responsibility is to accept requests for a VPC Service and route them to the network in which your VPC Service is located. It does this by integrating with <a href="https://blog.cloudflare.com/extending-local-traffic-management-load-balancing-to-layer-4-with-spectrum/#how-we-enabled-spectrum-to-support-private-networks"><u>Apollo</u></a>, an internal service of <a href="https://developers.cloudflare.com/cloudflare-one/?cf_target_id=2026081E85C775AF31266A26CE7F3D4D"><u>Cloudflare One</u></a>. Apollo provides a unified interface that abstracts away the complexity of securely connecting to networks and tunnels, <a href="https://blog.cloudflare.com/from-ip-packets-to-http-the-many-faces-of-our-oxy-framework/"><u>across various layers of networking</u></a>. </p><p>To integrate with Apollo, Iris must complete two tasks. First, Iris will parse the VPC Service ID from the metadata and fetch the information of the tunnel associated with it from our configuration store. 
This includes the tunnel ID and type (Step 3), the information Iris needs to send the original request to the right tunnel.</p><p>Second, Iris will create the UDP datagrams containing DNS questions for the A and AAAA records of the VPC Service’s hostname. These datagrams will be sent first, via Apollo. Once DNS resolution is completed, the original request is sent along, with the resolved IP address and port (Step 4). That means that steps 4 through 7 happen in sequence twice for the first request: once for DNS resolution and a second time for the original HTTP request. Subsequent requests benefit from Iris’ caching of DNS resolution information, minimizing request latency.</p><p>Next, Apollo receives the metadata of the Cloudflare Tunnel that needs to be accessed, along with the DNS resolution UDP datagrams or the HTTP request TCP packets. Using the tunnel ID, it determines which datacenter is connected to the Cloudflare Tunnel. This datacenter is in a region close to the Cloudflare Tunnel, and as such, Apollo will route the DNS resolution messages and the original request to the Tunnel Connector Service running in that datacenter (Step 5).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6eXnv33qvTvGRRNGqS9ywj/99e57beeaa32de0724c6c9f396ab3b17/image3.png" />
          </figure><p>The Tunnel Connector Service is responsible for providing access to the Cloudflare Tunnel to the rest of Cloudflare’s network. It will relay the DNS resolution questions, and subsequently the original request, to the tunnel over the QUIC protocol (Step 6).</p><p>Finally, the Cloudflare Tunnel will send the DNS resolution questions to the DNS resolver of the network it belongs to. It will then send the original HTTP request from its own IP address to the destination IP and port (Step 7). The results of the request are then relayed back to the original Worker, from the datacenter closest to the tunnel to the Cloudflare datacenter executing the Worker request.</p>
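<p>The resolve-then-forward sequence above can be modeled in a few lines. This is a simplification for intuition only (the real path involves Cap’n Proto RPC, HTTP CONNECT, and QUIC), and every name below is illustrative rather than an actual internal API:</p>

```javascript
// Simplified model of the flow: resolve the hostname through the tunnel
// once, cache the answer, then forward subsequent requests directly.
const dnsCache = new Map();

async function resolveViaTunnel(hostname, tunnel) {
  if (dnsCache.has(hostname)) return dnsCache.get(hostname); // warm path
  const addr = await tunnel.resolve(hostname); // first pass of steps 4-7: DNS
  dnsCache.set(hostname, addr);
  return addr;
}

async function forwardRequest(service, request, tunnel) {
  const addr = await resolveViaTunnel(service.hostname, tunnel);
  // second pass of steps 4-7: the original request to the resolved address
  return tunnel.send(addr, service.httpPort, request);
}
```

<p>With a stub tunnel, two calls to <code>forwardRequest</code> trigger only one DNS resolution, matching the caching behavior described above.</p>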
    <div>
      <h2>What VPC Services allow you to build</h2>
    </div>
    <p>This unlocks a whole new tranche of applications you can build on Cloudflare. For years, Workers have excelled at the edge, but they've largely been kept "outside" your core infrastructure. They could only call public endpoints, limiting their ability to interact with the most critical parts of your stack—like a private accounts API or an internal inventory database. Now, with VPC Services, Workers can securely access those private APIs, databases, and services, fundamentally changing what's possible.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/DDDzgVtHtK92DZ4LwKhLI/904fc30fcab4797fd6ee263f09b85ab1/image2.png" />
          </figure><p>This immediately enables true cross-cloud applications that span Cloudflare Workers and any other cloud like AWS, GCP or Azure. We’ve seen many customers adopt this pattern over the course of our private beta, establishing private connectivity between their external clouds and Cloudflare Workers. We’ve even done so ourselves, connecting our Workers to Kubernetes services in our core datacenters to power the control plane APIs for many of our services. Now, you can build the same powerful, distributed architectures, using Workers for global scale while keeping stateful backends in the network you already trust.</p><p>It also means you can connect to your on-premise networks from Workers, allowing you to modernize legacy applications with the performance and infinite scale of Workers. More interesting still are some emerging use cases for developer workflows. We’ve seen developers run <a href="https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/"><code><u>cloudflared</u></code></a> on their laptops to connect a deployed Worker back to their local machine for real-time debugging. The full flexibility of Cloudflare Tunnels is now a programmable primitive accessible directly from your Worker, opening up a world of possibilities.</p>
    <div>
      <h2>The path ahead of us</h2>
    </div>
    <p>VPC Services is the first milestone within the larger Workers VPC initiative, but we’re just getting started. Our goal is to make connecting to any service and any network, anywhere in the world, a seamless part of the Workers experience. Here’s what we’re working on next:</p><p><b>Deeper network integration</b>. Starting with Cloudflare Tunnels was a deliberate choice. It's a highly available, flexible, and familiar solution, making it the perfect foundation to build upon. To provide more options for enterprise networking, we're going to be adding support for standard IPsec tunnels, Cloudflare Network Interconnect (CNI), and AWS Transit Gateway, giving you and your teams more choices and potential optimizations. Crucially, these connections will also become truly bidirectional, allowing your private services to initiate connections back to Cloudflare resources such as pushing events to Queues or fetching from R2.</p><p><b>Expanded protocol and service support. </b>The next step beyond HTTP is enabling access to TCP services. This will first be achieved by integrating with Hyperdrive. We're evolving the previous Hyperdrive support for private databases to be simplified with VPC Services configuration, avoiding the need to add Cloudflare Access and manage security tokens. This creates a more native experience, complete with Hyperdrive's powerful connection pooling. Following this, we will add broader support for raw TCP connections, unlocking direct connectivity to services like Redis caches and message queues from <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><code><u>Workers ‘connect()’</u></code></a>.</p><p><b>Ecosystem compatibility. </b>We want to make connecting to a private service feel as natural as connecting to a public one. 
To do so, we will be providing a unique autogenerated hostname for each Workers VPC Service, similar to <a href="https://developers.cloudflare.com/hyperdrive/get-started/#write-a-worker"><u>Hyperdrive’s connection strings</u></a>. This will make it easier to use Workers VPC with existing libraries and object-relational mappers that may require a hostname (e.g., in a global ‘<code>fetch()</code>’ call or a MongoDB connection string). The Workers VPC Service hostname will automatically resolve and route to the correct VPC Service, just as the binding’s ‘<code>fetch()</code>’ call does.</p>
    <div>
      <h2>Get started with Workers VPC</h2>
    </div>
    <p>We’re excited to release Workers VPC Services into open beta today. We’ve spent months building out and testing our first milestone for connecting Workers to private networks. And we’ve refined it further based on feedback from both internal teams and customers during the closed beta.</p><p><b>Now, we’re looking forward to enabling everyone to build cross-cloud apps on Workers with Workers VPC, available for free during the open beta.</b> With Workers VPC, you can bring your apps on private networks to Region Earth, closer to your users and available to Workers across the globe.</p><p><a href="https://dash.cloudflare.com/?to=/:account/workers/vpc/services"><b><u>Get started with Workers VPC Services for free now.</u></b></a></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Workers VPC]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[Hybrid Cloud]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[VPC]]></category>
            <category><![CDATA[Private Network]]></category>
            <guid isPermaLink="false">3nRyPdIVogbDGSeUZgRY41</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Matt Alonso</dc:creator>
            <dc:creator>Eric Falcão</dc:creator>
        </item>
        <item>
            <title><![CDATA[Connect and secure any private or public app by hostname, not IP — free for everyone in Cloudflare One]]></title>
            <link>https://blog.cloudflare.com/tunnel-hostname-routing/</link>
            <pubDate>Thu, 18 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Tired of IP Lists? Securely connect private networks to any app by its hostname, not its IP address. This routing is now built into Cloudflare Tunnel and is free for all Cloudflare One customers. ]]></description>
            <content:encoded><![CDATA[ <p>Connecting to an application should be as simple as knowing its name. Yet, many security models still force us to rely on brittle, ever-changing IP addresses. And we heard from many of you that managing those ever-changing IP lists was a constant struggle. </p><p>Today, we’re taking a major step toward making that a relic of the past.</p><p>We're excited to announce that you can now route traffic to <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> based on a hostname or a domain. This allows you to use Cloudflare Tunnel to build simple zero-trust and egress policies for your private and public web applications without ever needing to know their underlying IP. This is one more step on our <a href="https://blog.cloudflare.com/egress-policies-by-hostname/"><u>mission</u></a> to strengthen platform-wide support for hostname- and domain-based policies in the <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Cloudflare One</u></a> <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/">SASE</a> platform, simplifying complexity and improving security for our customers and end users. </p>
    <div>
      <h2>Grant access to applications, not networks</h2>
    </div>
    <p>In August 2020, the National Institute of Standards and Technology (NIST) published <a href="https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-207.pdf"><u>Special Publication 800-207</u></a>, encouraging organizations to abandon the "castle-and-moat" model of security (where trust is established on the basis of network location) and move to a <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust model </a>(where we “<a href="https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf"><u>verify anything and everything attempting to establish access</u></a>”).</p><p>Now, instead of granting broad network permissions, you grant specific access to individual resources. This concept, known as per-resource authorization, is a cornerstone of the Zero Trust framework, and it represents a major change in how organizations have traditionally run networks. It requires that access policies be configured for each individual resource. By applying the principle of least privilege, you give users access only to the resources they absolutely need to do their job. This tightens security and shrinks the potential attack surface for any given resource.</p><p>Instead of allowing your users to access an entire network segment, like <code><b>10.131.0.0/24</b></code>, your security policies become much more precise. For example:</p><ul><li><p>Only employees in the "SRE" group running a managed device can access <code><b>admin.core-router3-sjc.acme.local</b></code>.</p></li><li><p>Only employees in the "finance" group located in Canada can access <code><b>canada-payroll-server.acme.local</b></code>.</p></li><li><p>All employees located in New York can access <code><b>printer1.nyc.acme.local</b></code>.</p></li></ul><p>Notice what these powerful, granular rules have in common? They’re all based on the resource’s private <b>hostname</b>, not its IP address. That’s exactly what our new hostname routing enables. 
We’ve made it dramatically easier to write effective zero trust policies using stable hostnames, without ever needing to know the underlying IP address.</p>
    <div>
      <h2>Why IP-based rules break</h2>
    </div>
    <p>Let's imagine you need to secure an internal server, <code><b>canada-payroll-server.acme.local</b></code>. It’s hosted on internal IP <code><b>10.4.4.4</b></code> and its hostname is available in internal private DNS, but not in public DNS. In a modern cloud environment, its IP address is often the least stable thing about it. If your security policy is tied to that IP, it's built on a shaky foundation.</p><p>This happens for a few common reasons:</p><ul><li><p><b>Cloud instances</b>: When you launch a compute instance in a cloud environment like AWS, you're responsible for its <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hostname-types.html"><u>hostname</u></a>, but not always its IP address. As a result, you might only be tracking the hostname and may not even know the server's IP.</p></li><li><p><b>Load Balancers</b>: If the server is behind a load balancer in a cloud environment (like AWS ELB), its IP address could be changing dynamically in response to <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html"><u>changes in traffic</u></a>.</p></li><li><p><b>Ephemeral infrastructure</b>: This is the "<a href="https://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/"><u>cattle, not pets</u></a>" world of modern infrastructure. Resources like servers in an autoscaling group, containers in a Kubernetes cluster, or applications that spin down overnight are created and destroyed as needed. They keep a persistent hostname so users can find them, but their IP is ephemeral and changes every time they spin up.</p></li></ul><p>To cope with this, we've seen customers build complex scripts to maintain dynamic "IP Lists" — mappings from a hostname to its IPs that are updated every time the address changes. While this approach is clever, maintaining IP Lists is a chore. 
They are brittle, and a single error could cause employees to lose access to vital resources.</p><p>Fortunately, hostname-based routing makes this IP List workaround obsolete.</p>
    <div>
      <h2>How it works: secure a private server by hostname using the Cloudflare One SASE platform</h2>
    </div>
    <p>To see this in action, let's create a policy from our earlier example: we want to grant employees in the "finance" group located in Canada access to <code><b>canada-payroll-server.acme.local</b></code>. Here’s how you do it, without ever touching an IP address.</p><p><b>Step 1: Connect your private network</b></p><p>First, the server's network needs a secure connection to Cloudflare's global network. You do this by installing our lightweight agent, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>cloudflared</u></a>, in the same local area network as the server, which creates a secure Cloudflare Tunnel. You can create a new tunnel directly from cloudflared by running <code><b>cloudflared tunnel create &lt;TUNNEL-NAME&gt;</b></code> or using your Zero Trust dashboard.</p><div>
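<p>For completeness, the connector side of Step 1 can be driven by a <code>cloudflared</code> configuration file on locally managed tunnels. A minimal sketch might look like the following (the tunnel ID and credentials path are placeholders, and newer remotely managed tunnels configure this from the dashboard instead):</p>

```yaml
# config.yml for cloudflared (values are placeholders)
tunnel: <TUNNEL-UUID>
credentials-file: /etc/cloudflared/<TUNNEL-UUID>.json
warp-routing:
  enabled: true   # allow private-network traffic to route through this tunnel
```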
  
</div><p>
<b>Step 2: Route the hostname to the tunnel</b></p><p>This is where the new capability comes into play. In your Zero Trust dashboard, you now establish a route that binds the <i>hostname</i> <code>canada-payroll-server.acme.local</code> directly to that tunnel. In the past, you could only route an IP address (<code>10.4.4.4</code>) or its subnet (<code>10.4.4.0/24</code>). That old method required you to create and manage those brittle IP Lists we talked about. Now, you can even route entire domains, like <code>*.acme.local</code>, directly to the tunnel, simply by creating a hostname route to <code>acme.local</code>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3mcoBAILYENIP6kGW4tw96/bb7ec6571ae7b4f04b5dc0456f694d59/1.png" />
          </figure><p>For this to work, you must delete your private network’s subnet (in this case <code>10.0.0.0/8</code>) and <code>100.64.0.0/10</code> from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/split-tunnels/"><u>Split Tunnels Exclude</u></a> list. You also need to remove <code>.local</code> from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/local-domains/"><u>Local Domain Fallback</u></a>.</p><p><b>Step 3: Write your zero trust policy</b></p><p>Now that Cloudflare knows <i>how</i> to reach your server by its name, you can write a policy to control <i>who</i> can access it. You have a couple of options:</p><ul><li><p><b>In Cloudflare Access (for HTTPS applications):</b> Write an <a href="https://developers.cloudflare.com/cloudflare-one/applications/non-http/self-hosted-private-app/"><u>Access policy</u></a> that grants employees in the “finance” group access to the private hostname <code>canada-payroll-server.acme.local</code>. This is ideal for applications accessible over HTTPS on port 443.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lIZI9ThsAWtxFZZis3HtZ/08451586dbe373ff137bd9e91d23dea6/2.png" />
          </figure><p></p></li><li><p><b>In Cloudflare Gateway (for HTTPS applications):</b> Alternatively, write a <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Gateway policy</u></a> that grants employees in the “finance” group access to the <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/#sni"><u>SNI</u></a> <code>canada-payroll-server.acme.local</code>. This <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/protocol-detection/"><u>works</u></a> for services accessible over HTTPS on any port.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5GpwDZNmdzapOyjOgFFlKD/50e2d0df64d2230479ad8d0a013de24b/3.png" />
          </figure><p></p></li><li><p><b>In Cloudflare Gateway (for non-HTTP applications):</b> You can also write a <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Gateway policy</u></a> that blocks DNS resolution of <code>canada-payroll-server.acme.local</code> for all employees except the “finance” group.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3na5Mf6UMpBcKYm6JWmnzd/5791054c944300e667c3829e9bd8c6ec/4.png" />
          </figure><p>The principle of "trust nothing" means your security posture should start by denying traffic by default. For this setup to work in a true Zero Trust model, it should be paired with a default Gateway policy that blocks all access to your internal IP ranges. Think of this as ensuring all doors to your private network are locked by default. The specific <code>allow</code> policies you create for hostnames then act as the keycard, unlocking one specific door only for authorized users.</p><p>Without that foundational "deny" policy, creating a route to a private resource would make it accessible to everyone in your organization, defeating the purpose of a least-privilege model and creating significant security risks. This step ensures that only the traffic you explicitly permit can ever reach your corporate resources.</p><p>And there you have it. We’ve walked through the entire process of writing a per-resource policy using only the server’s private hostname. No IP Lists to be seen anywhere, simplifying life for your administrators.</p>
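<p>The interplay between the foundational deny and the specific allow can be modeled as ordered policy evaluation, where the first matching rule wins (a toy model for illustration; Gateway’s actual evaluation is richer than this):</p>

```javascript
// Toy model of ordered policy evaluation: the lowest priority number is
// checked first, and the first matching policy decides the verdict.
function evaluate(policies, conn) {
  const ordered = [...policies].sort((a, b) => a.priority - b.priority);
  for (const p of ordered) {
    if (p.match(conn)) return p.action;
  }
  return "block"; // deny by default when nothing matches
}

const policies = [
  {
    // Higher-priority "keycard": finance can reach the payroll server.
    priority: 1,
    match: (c) => c.sni === "canada-payroll-server.acme.local" && c.group === "finance",
    action: "allow",
  },
  {
    // Foundational deny: lock every door to the private network.
    priority: 100,
    match: (c) => c.destinationIsPrivate,
    action: "block",
  },
];
```

<p>With these two rules, a finance user reaching the payroll server is allowed, while any other user hitting the same hostname falls through to the default block.</p>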
    <div>
      <h2>Secure egress traffic to third-party applications</h2>
    </div>
    <p>Here's another powerful use case for hostname routing: controlling outbound connections from your users to the public Internet. Some third-party services, such as banking portals or partner APIs, use an IP allowlist for security. They will only accept connections that originate from a specific, dedicated public source IP address that belongs to your company.</p><p>This common practice creates a challenge. Let's say your banking portal at <code>bank.example.com</code> requires all traffic to come from a dedicated source IP <code>203.0.113.9</code> owned by your company. At the same time, you want to enforce a zero trust policy that <i>only</i> allows your finance team to access that portal. You can't build your policy based on the bank's destination IP — you don't control it, and it could change at any moment. You have to use its hostname.</p><p>There are two ways to solve this problem. First, if your dedicated source IP is purchased from Cloudflare, you can use the <a href="https://blog.cloudflare.com/egress-policies-by-hostname/"><u>“egress policy by hostname” feature</u></a> that we announced previously. Alternatively, if your dedicated source IP belongs to your organization, or is leased from a cloud provider, then you can solve this problem with hostname-based routing, as shown in the figure below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6wXu6FMiiVz4lXsESFrBTg/e1bb13e8eef0653ab311d0800d95f391/5.png" />
          </figure><p>Here’s how this works:</p><ol><li><p><b>Force traffic through your dedicated IP.</b> First, you deploy a <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> in the network that owns your dedicated IP (for example, your primary VPC in a cloud provider). All traffic you send through this tunnel will exit to the Internet with <code>203.0.113.9</code> as its source IP.</p></li><li><p><b>Route the banking app to that tunnel.</b> Next, you create a hostname route in your Zero Trust dashboard. This rule tells Cloudflare: "Any traffic destined for <code>bank.example.com</code> must be sent through this specific tunnel."</p></li><li><p><b>Apply your user policies.</b> Finally, in Cloudflare Gateway, you create your granular access rules. A low-priority <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/"><u>network policy</u></a> blocks access to the <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/#sni"><u>SNI</u></a> <code>bank.example.com</code> for everyone. Then, a second, higher-priority policy explicitly allows users in the "finance" group to access the <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/network-policies/#sni"><u>SNI</u></a> <code>bank.example.com</code>.</p></li></ol><p>Now, when a finance team member accesses the portal, their traffic is correctly routed through the tunnel and arrives with the source IP the bank expects. An employee from any other department is blocked by Gateway before their traffic even enters the tunnel. You've enforced a precise, user-based zero trust policy for a third-party service, all by using its public hostname.</p>
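The three steps above boil down to two small lookups: a priority-ordered user policy check on the SNI, then a hostname route that picks the egress tunnel. A sketch of that decision flow, with hypothetical names (this is not the Gateway API):

```python
# Illustrative sketch of the decision flow described above; the route table
# and function names are hypothetical, not Cloudflare's API.
HOSTNAME_ROUTES = {"bank.example.com": "egress-vpc-tunnel"}  # step 2

def gateway_decision(user_groups, sni):
    # Step 3: a higher-priority policy allows the "finance" group; the
    # low-priority policy blocks everyone else for this SNI.
    if sni == "bank.example.com" and "finance" not in user_groups:
        return ("block", None)
    # Allowed traffic for a routed hostname egresses via the tunnel (step 1),
    # so it reaches the Internet with the dedicated source IP.
    return ("allow", HOSTNAME_ROUTES.get(sni))

print(gateway_decision({"finance"}, "bank.example.com"))
# ('allow', 'egress-vpc-tunnel')
print(gateway_decision({"marketing"}, "bank.example.com"))
# ('block', None)
```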
    <div>
      <h2>Under the hood: how hostname routing works</h2>
      <a href="#under-the-hood-how-hostname-routing-works">
        
      </a>
    </div>
    <p>To build this feature, we needed to solve a classic networking challenge. The routing mechanism for Cloudflare Tunnel is a core part of Cloudflare Gateway, which operates at both Layer 4 (TCP/UDP) and Layer 7 (HTTP/S) of the network stack.</p><p>Cloudflare Gateway must decide which Cloudflare Tunnel to send traffic through upon receipt of the very first IP packet in the connection. This means the decision must necessarily be made at Layer 4, where Gateway only sees the IP and TCP/UDP headers of a packet. IP and TCP/UDP headers contain the destination IP address, but not the destination <i>hostname</i>. The hostname is only found in Layer 7 data (like a TLS SNI field or an HTTP Host header), which isn't even available until after the Layer 4 connection is already established.</p><p>This creates a dilemma: how can we route traffic based on a hostname before we've even seen the hostname?</p>
    <div>
      <h3>Synthetic IPs to the rescue</h3>
      <a href="#synthetic-ips-to-the-rescue">
        
      </a>
    </div>
    <p>The solution lies in the fact that Cloudflare Gateway also acts as a DNS resolver. This means we see the user's <i>intent </i>— the DNS query for a hostname — <i>before</i> we see the actual application traffic. We use this foresight to "tag" the traffic using a <a href="https://blog.cloudflare.com/egress-policies-by-hostname/"><u>synthetic IP address</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Kd3x5SppGp8G4KZeO34n/67b338ca8e81db63e110dc89c7596bf6/6.png" />
          </figure><p>Let’s walk through the flow:</p><ol><li><p><b>DNS Query</b>. A user's device sends a DNS query for
 <code>canada-payroll-server.acme.local</code> to the Gateway resolver.</p></li><li><p><b>Private Resolution</b>. Gateway asks the <code>cloudflared</code> agent running in your private network to resolve the real IP for that hostname. Since <code>cloudflared</code> has access to your internal DNS, it finds the real private IP <code>10.4.4.4</code> and sends it back to the Gateway resolver.</p></li><li><p><b>Synthetic Response</b>. Here's the key step. The Gateway resolver <b>does not</b> send the real IP (<code>10.4.4.4</code>) back to the user. Instead, it temporarily assigns an <i>initial resolved IP</i> from a reserved Carrier-Grade NAT (CGNAT) address space (e.g., <code>100.80.10.10</code>) and sends that back to the user's device. The initial resolved IP acts as a tag that allows Gateway to identify network traffic destined for <code>canada-payroll-server.acme.local</code>. It is randomly selected and temporarily assigned from one of two reserved ranges:</p><ul><li><p>IPv4: <code>100.80.0.0/16</code></p></li><li><p>IPv6: <code>2606:4700:0cf1:4000::/64</code></p></li></ul></li><li><p><b>Traffic Arrives</b>. The user's device sends its application traffic (e.g., an HTTPS request) to the destination IP it received from the Gateway resolver: the initial resolved IP <code>100.80.10.10</code>.</p></li><li><p><b>Routing and Rewriting</b>. When Gateway sees an incoming packet destined for <code>100.80.10.10</code>, it knows this traffic is for <code>canada-payroll-server.acme.local</code> and must be sent through a specific Cloudflare Tunnel. It then rewrites the destination IP on the packet back to the <i>real</i> private destination IP (<code>10.4.4.4</code>) and sends it down the correct tunnel.</p></li></ol><p>The traffic goes down the tunnel and arrives at <code>canada-payroll-server.acme.local</code> at IP <code>10.4.4.4</code>, and the user is connected to the server without noticing any of these mechanisms. By intercepting the DNS query, we effectively tag the network traffic stream, allowing our Layer 4 router to make the right decision without needing to see Layer 7 data.</p>
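The five steps can be condensed into a toy model: intercept the DNS answer, hand the client a temporary address from the CGNAT pool, and later use that address as the lookup key for tunnel selection and destination rewriting. This is a conceptual sketch, not Cloudflare's code; the tunnel-naming scheme is invented for illustration.

```python
# Conceptual sketch of the synthetic-IP flow above (not Cloudflare's code).
import ipaddress
import random

SYNTHETIC_POOL = ipaddress.ip_network("100.80.0.0/16")
mappings = {}  # synthetic IP -> (hostname, real private IP)

def resolve(hostname, real_ip):
    """Steps 2-3: cloudflared resolved real_ip; return a synthetic IP tag instead."""
    synthetic = str(SYNTHETIC_POOL[random.randrange(SYNTHETIC_POOL.num_addresses)])
    mappings[synthetic] = (hostname, real_ip)
    return synthetic

def route_packet(dst_ip):
    """Step 5: recognize the tag, pick a tunnel, rewrite the destination."""
    hostname, real_ip = mappings[dst_ip]
    tunnel = f"tunnel-for-{hostname}"  # placeholder for the hostname route lookup
    return tunnel, real_ip             # packet is re-addressed to real_ip

tag = resolve("canada-payroll-server.acme.local", "10.4.4.4")
tunnel, rewritten = route_packet(tag)
print(rewritten)  # 10.4.4.4
```

The client only ever sees the tag; the Layer 4 router never needs the hostname on the wire, because the mapping created at DNS time already carries the intent.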
    <div>
      <h2>Using Gateway Resolver Policies for fine-grained control</h2>
      <a href="#using-gateway-resolver-policies-for-fine-grained-control">
        
      </a>
    </div>
    <p>The routing capabilities we've discussed provide simple, powerful ways to connect to private resources. But what happens when your network architecture is more complex? For example, what if your private DNS servers are in one part of your network, but the application itself is in another?</p><p>With Cloudflare One, you can solve this by creating policies that separate the path for DNS resolution from the path for application traffic for the very same hostname using <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/resolver-policies"><u>Gateway Resolver Policies</u></a>. This gives you fine-grained control to match complex network topologies.</p><p>Let's walk through a scenario:</p><ul><li><p>Your private DNS resolvers, which can resolve <code><b>acme.local</b></code>, are located in your core datacenter, accessible only via <code><b>tunnel-1</b></code>.</p></li><li><p>The webserver for <code><b>canada-payroll-server.acme.local</b></code><b> </b>is hosted in a specific cloud VPC, accessible only via <code><b>tunnel-2</b></code>.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2sVMsS4DhuN2yoTlGWTK5X/e5a66330c951e7b65428f5c76b5c7b0a/7.png" />
          </figure><p>Here’s how to configure this split-path routing.</p><p><b>Step 1: Route DNS Queries via </b><code><b>tunnel-1</b></code></p><p>First, we need to tell Cloudflare Gateway how to reach your private DNS server.</p><ol><li><p><b>Create an IP Route:</b> In the Networks &gt; Tunnels area of your Zero Trust dashboard, create a route for the IP address of your private DNS server (e.g., <code><b>10.131.0.5/32</b></code>) and point it to <code><b>tunnel-1</b></code>. This ensures any traffic destined for that specific IP goes through the correct tunnel to your datacenter.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/32JcjFZXGuhDEHHlWJoF1C/4223a6f2e5b7b49015abfbfd9b4fd20f/8.png" />
          </figure><p></p></li><li><p><b>Create a Resolver Policy:</b> Go to <b>Gateway -&gt; Resolver Policies</b> and create a new policy with the following logic:</p><ul><li><p><b>If</b> the query is for the domain <code><b>acme.local</b></code> …</p></li><li><p><b>Then</b>... resolve it using a designated DNS server with the IP <code><b>10.131.0.5</b></code>.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2j8kYsD692tCRYcDKoDXvb/7dbb20f426ba47350fb0b2906046d5f0/9.png" />
          </figure><p></p></li></ul></li></ol><p>With these two rules, any DNS lookup for <code><b>acme.local</b></code> from a user's device will be sent through <code>tunnel-1</code> to your private DNS server for resolution.</p><p><b>Step 2: Route Application Traffic via </b><code><b>tunnel-2</b></code></p><p>Next, we'll tell Gateway where to send the actual traffic (for example, HTTP/S) for the application.</p><p><b>Create a Hostname Route:</b> In your Zero Trust dashboard, create a <b>hostname route</b> that binds <code><b>canada-payroll-server.acme.local </b></code>to <code><b>tunnel-2</b></code>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ufzpsb1FUYrM39gMiyovs/c5d10828f58b0e7c854ff9fa721e1757/10.png" />
          </figure><p>This rule instructs Gateway that any application traffic (like HTTP, SSH, or any TCP/UDP traffic) for <code><b>canada-payroll-server.acme.local</b></code> must be sent through <code><b>tunnel-2</b></code>, leading to your cloud VPC.</p><p>As in a setup without a Gateway Resolver Policy, for this to work you must delete your private network’s subnet (in this case <code>10.0.0.0/8</code>) and <code>100.64.0.0/10</code> from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/split-tunnels/"><u>Split Tunnels Exclude</u></a> list. You also need to remove <code>.local</code> from the <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/configure-warp/route-traffic/local-domains/"><u>Local Domain Fallback</u></a> list.</p><p><b>Putting It All Together</b></p><p>With these two sets of policies, the "synthetic IP" mechanism handles the complex flow:</p><ol><li><p>A user tries to access <code>canada-payroll-server.acme.local</code>. Their device sends a DNS query to Cloudflare Gateway Resolver.</p></li><li><p>This DNS query matches a Gateway Resolver Policy, causing Gateway Resolver to forward the DNS query through <code>tunnel-1</code> to your private DNS server (<code>10.131.0.5</code>).</p></li><li><p>Your DNS server responds with the server’s actual private destination IP (<code>10.4.4.4</code>).</p></li><li><p>Gateway receives this IP and generates a “synthetic” initial resolved IP (<code>100.80.10.10</code>) which it sends back to the user's device.</p></li><li><p>The user's device now sends the HTTP/S request to the initial resolved IP (<code>100.80.10.10</code>).</p></li><li><p>Gateway sees the network traffic destined for the initial resolved IP (<code>100.80.10.10</code>) and, using the mapping, knows it's for <code>canada-payroll-server.acme.local</code>.</p></li><li><p>The Hostname Route now matches. 
Gateway sends the application traffic through <code>tunnel-2</code> and rewrites its destination IP to the webserver’s actual private IP (<code>10.4.4.4</code>).</p></li><li><p>The <code>cloudflared</code> agent at the end of <code>tunnel-2</code> forwards the traffic to the application's destination IP (<code>10.4.4.4</code>), which is on the same local network.</p></li></ol><p>The user is connected, without noticing that DNS and application traffic have been routed over totally separate private network paths. This approach allows you to support sophisticated split-horizon DNS environments and other advanced network architectures with simple, declarative policies.</p>
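The split-path configuration above amounts to two independent tables: a resolver policy that decides where DNS queries go, and a hostname route that decides where the application traffic goes. A sketch under the blog's example values (the dict structures and function names are hypothetical, not the Zero Trust API):

```python
# Illustrative sketch of split-path routing; structures are hypothetical.
IP_ROUTES = {"10.131.0.5/32": "tunnel-1"}          # Step 1: reach the private resolver
RESOLVER_POLICIES = {"acme.local": "10.131.0.5"}   # resolve acme.local names there
HOSTNAME_ROUTES = {"canada-payroll-server.acme.local": "tunnel-2"}  # Step 2

def dns_path(hostname):
    """Which resolver answers the query, and which tunnel carries the query."""
    for domain, resolver_ip in RESOLVER_POLICIES.items():
        if hostname == domain or hostname.endswith("." + domain):
            return resolver_ip, IP_ROUTES.get(resolver_ip + "/32")
    return None, None  # no policy matched: public resolution

def app_path(hostname):
    """Which tunnel carries the application traffic itself."""
    return HOSTNAME_ROUTES.get(hostname)

print(dns_path("canada-payroll-server.acme.local"))  # ('10.131.0.5', 'tunnel-1')
print(app_path("canada-payroll-server.acme.local"))  # tunnel-2
```

Because the two lookups are independent, the DNS query and the HTTP/S traffic for the same hostname can legitimately take different private paths, which is exactly what a split-horizon environment needs.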
    <div>
      <h2>What onramps does this support?</h2>
      <a href="#what-onramps-does-this-support">
        
      </a>
    </div>
    <p>Our hostname routing capability is built on the "synthetic IP" (also known as the <i>initial resolved IP</i>) mechanism detailed earlier, which requires specific Cloudflare One products to correctly handle both the DNS resolution and the subsequent application traffic. Here’s a breakdown of what’s currently supported for connecting your users (on-ramps) and your private applications (off-ramps).</p>
    <div>
      <h4><b>Connecting Your Users (On-Ramps)</b></h4>
      <a href="#connecting-your-users-on-ramps">
        
      </a>
    </div>
    <p>For end users to connect to private hostnames, the feature currently works with <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><b><u>WARP Client</u></b></a>, agentless <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/agentless/pac-files/"><b><u>PAC files</u></b></a>, and <a href="https://developers.cloudflare.com/cloudflare-one/policies/browser-isolation/"><b><u>Browser Isolation</u></b></a>.</p><p>Connectivity is also possible when users are behind <a href="https://developers.cloudflare.com/magic-wan/"><b><u>Magic WAN</u></b></a> (in active-passive mode) or <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/warp-connector/"><b><u>WARP Connector</u></b></a>, but it requires some additional configuration. To ensure traffic is routed correctly, you must update the routing table on your device or router to send traffic for the following destinations through Gateway:</p><ul><li><p>The initial resolved IP ranges: <code>100.80.0.0/16</code> (IPv4) and <code>2606:4700:0cf1:4000::/64</code> (IPv6).</p></li><li><p>The private network CIDR where your application is located (e.g., <code>10.0.0.0/8</code>).</p></li><li><p>The IP address of your internal DNS resolver.</p></li><li><p>The Gateway DNS resolver IPs: <code>172.64.36.1</code> and <code>172.64.36.2</code>.</p></li></ul><p>Magic WAN customers will also need to point their DNS resolver to these Gateway resolver IPs and ensure they are running Magic WAN tunnels in active-passive mode: for hostname routing to work, DNS queries and the resulting network traffic must reach Cloudflare over the same Magic WAN tunnel. Currently, hostname routing will not work if your end users are at a site that has more than one Magic WAN tunnel actively transiting traffic at the same time.</p>
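A quick way to sanity-check the routing-table requirement above is to test whether a given destination falls inside the ranges that must be sent through Gateway. In the sketch below, the synthetic-IP ranges and Gateway resolver IPs come from the list above; the private network CIDR and internal resolver address are example values you would substitute with your own.

```python
# Helper to check whether a destination should be routed through Gateway.
# The private CIDR (10.0.0.0/8) and internal resolver (10.131.0.5) are
# example values; the other ranges are from the list above.
import ipaddress

GATEWAY_BOUND = [
    ipaddress.ip_network("100.80.0.0/16"),             # initial resolved IPs (IPv4)
    ipaddress.ip_network("2606:4700:0cf1:4000::/64"),  # initial resolved IPs (IPv6)
    ipaddress.ip_network("10.0.0.0/8"),                # example private network CIDR
    ipaddress.ip_network("10.131.0.5/32"),             # example internal DNS resolver
    ipaddress.ip_network("172.64.36.1/32"),            # Gateway DNS resolver
    ipaddress.ip_network("172.64.36.2/32"),            # Gateway DNS resolver
]

def routed_through_gateway(ip):
    addr = ipaddress.ip_address(ip)
    # Membership only matches networks of the same IP version.
    return any(addr in net for net in GATEWAY_BOUND if net.version == addr.version)

print(routed_through_gateway("100.80.10.10"))  # True: a synthetic IP
print(routed_through_gateway("8.8.8.8"))       # False: ordinary Internet traffic
```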
    <div>
      <h4><b>Connecting Your Private Network (Off-Ramps)</b></h4>
      <a href="#connecting-your-private-network-off-ramps">
        
      </a>
    </div>
    <p>On the other side of the connection, hostname-based routing is designed specifically for applications connected via <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><b><u>Cloudflare Tunnel</u></b></a> (<code>cloudflared</code>). This is currently the only supported off-ramp for routing by hostname.</p><p>Other traffic off-ramps, while fully supported for IP-based routing, are not yet compatible with this specific hostname-based feature. This includes using Magic WAN, WARP Connector, or WARP-to-WARP connections as the off-ramp to your private network. We are actively working to expand support for more on-ramps and off-ramps in the future, so stay tuned for more updates.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>By enabling routing by hostname directly within Cloudflare Tunnel, we’re making security policies simpler, more resilient, and more aligned with how modern applications are built. You no longer need to track ever-changing IP addresses. You can now build precise, per-resource authorization policies for HTTPS applications based on the one thing that should matter: the name of the service you want to connect to. This is a fundamental step in making a zero trust architecture intuitive and achievable for everyone.</p><p>This powerful capability is available today, built directly into Cloudflare Tunnel and free for all Cloudflare One customers.</p><p>Ready to leave IP Lists behind for good? Get started by exploring our <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/cloudflared/connect-private-hostname/"><u>developer documentation</u></a> to configure your first hostname route. If you're new to <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Cloudflare One</u></a>, you can sign up today and begin securing your applications and networks in minutes.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Egress]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Access Control Lists (ACLs)]]></category>
            <category><![CDATA[Hostnames]]></category>
            <guid isPermaLink="false">gnroEH7P2oE00Ba0wJLHT</guid>
            <dc:creator>Nikita Cano</dc:creator>
            <dc:creator>Sharon Goldberg</dc:creator>
        </item>
        <item>
            <title><![CDATA[Conventional cryptography is under threat. Upgrade to post-quantum cryptography with Cloudflare Zero Trust]]></title>
            <link>https://blog.cloudflare.com/post-quantum-zero-trust/</link>
            <pubDate>Mon, 17 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We’re thrilled to announce that organizations can now protect their sensitive corporate network traffic against quantum threats by tunneling it through Cloudflare’s Zero Trust platform. ]]></description>
            <content:encoded><![CDATA[ <p>Quantum computers are actively being developed that will eventually have the ability to break the cryptography we rely on for securing modern communications. Recent <a href="https://blog.google/technology/research/google-willow-quantum-chip/"><u>breakthroughs</u></a> in quantum computing have underscored the vulnerability of conventional cryptography to these attacks. Since 2017, Cloudflare has <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>been at the forefront</u></a> of developing, standardizing, and implementing <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography</u></a> to withstand attacks by quantum computers. </p><p>Our mission is simple: we want every Cloudflare customer to have a clear path to quantum safety. Cloudflare recognizes the urgency, so we’re committed to managing the complex process of upgrading cryptographic algorithms, so that you don’t have to worry about it. We're not just talking about doing it. <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>Over 35% of the non-bot HTTPS traffic that touches Cloudflare today is post-quantum secure.</u></a> </p><p>The <a href="https://www.nist.gov/"><u>National Institute of Standards and Technology (NIST)</u></a> also recognizes the urgency of this transition. On November 15, 2024, NIST made a landmark <a href="https://nvlpubs.nist.gov/nistpubs/ir/2024/NIST.IR.8547.ipd.pdf"><u>announcement</u></a> by setting a timeline to phase out <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)"><u>RSA</u></a> and <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>Elliptic Curve Cryptography (ECC)</u></a>, the conventional cryptographic algorithms that underpin nearly every part of the Internet today. 
According to NIST’s announcement, these algorithms will be deprecated by 2030 and completely disallowed by 2035.</p><p>At Cloudflare, we aren’t waiting until 2035 or even 2030. We believe privacy is a fundamental human right, and advanced cryptography should be <a href="https://blog.cloudflare.com/post-quantum-crypto-should-be-free/"><u>accessible to everyone</u></a> without compromise. No one should be required to pay extra for post-quantum security. That’s why any visitor accessing a <a href="https://blog.cloudflare.com/pq-2024/"><u>website protected by Cloudflare today</u></a> benefits from post-quantum cryptography, when using a major browser like <a href="https://blog.chromium.org/2024/05/advancing-our-amazing-bet-on-asymmetric.html"><u>Chrome, Edge</u></a>, or <a href="https://www.mozilla.org/en-US/firefox/135.0/releasenotes/"><u>Firefox</u></a>. (And, we are excited to see a <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;groupBy=post_quantum&amp;filters=botClass%253DLikely_Human%252Cos%253DiOS"><u>small percentage of (mobile) Safari traffic</u></a> in our Radar data.) Well over a third of the human traffic passing through Cloudflare today already enjoys this enhanced security, and we expect this share to increase as more browsers and clients are upgraded to support post-quantum cryptography. </p><p>While great strides have been made to protect human web traffic, not every application is a web application. And every organization has internal applications (both web and otherwise) that do not support post-quantum cryptography.  </p><p>How should organizations go about upgrading their sensitive corporate network traffic to support post-quantum cryptography?</p><p>That’s where today’s announcement comes in. 
We’re thrilled to announce the first phase of end-to-end quantum readiness of our <a href="https://www.cloudflare.com/zero-trust/">Zero Trust platform</a>, allowing customers to protect their corporate network traffic with post-quantum cryptography.<b> Organizations can tunnel their corporate network traffic through Cloudflare’s Zero Trust platform, protecting it against quantum adversaries without the hassle of individually upgrading each and every corporate application, system, or network connection.</b> </p><p>More specifically, organizations can use our Zero Trust platform to route communications from end-user devices (via web browser or Cloudflare’s <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>WARP device client</u></a>) to secure applications connected with <a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>Cloudflare Tunnel</u></a>, to gain end-to-end quantum safety, in the following use cases: </p><ul><li><p><b>Cloudflare’s clientless </b><a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><b><u>Access</u></b></a><b>: </b>Our clientless <a href="https://www.cloudflare.com/learning/access-management/what-is-ztna/">Zero Trust Network Access (ZTNA)</a> solution verifies user identity and device context for every HTTPS request to corporate applications from a web browser. Clientless Access is now protected end-to-end with post-quantum cryptography.</p></li><li><p><b>Cloudflare’s </b><a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><b><u>WARP device client</u></b></a><b>:</b> By mid-2025, customers using the WARP device client will have all of their traffic (regardless of protocol) tunneled over a connection protected by post-quantum cryptography. 
The WARP client secures corporate devices by privately routing their traffic to Cloudflare's global network, where Gateway applies advanced web filtering and Access enforces policies for secure access to applications. </p></li><li><p><b>Cloudflare </b><a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/http-policies/"><b><u>Gateway</u></b></a>: Our <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">Secure Web Gateway (SWG) </a>— designed to inspect and filter TLS traffic in order to block threats and unauthorized communications — now supports TLS with post-quantum cryptography. </p></li></ul><p>In the remaining sections of this post, we’ll explore the threat that quantum computing poses and the challenges organizations face in transitioning to post-quantum cryptography. We’ll also dive into the technical details of how our Zero Trust platform supports post-quantum cryptography today and share some plans for the future.</p>
    <div>
      <h3>Why transition to post-quantum cryptography and why now? </h3>
      <a href="#why-transition-to-post-quantum-cryptography-and-why-now">
        
      </a>
    </div>
    <p>There are two key reasons to adopt post-quantum cryptography now:</p>
    <div>
      <h4>1. The challenge of deprecating cryptography</h4>
      <a href="#1-the-challenge-of-deprecating-cryptography">
        
      </a>
    </div>
    <p>History shows that updating or removing outdated cryptographic algorithms from live systems is extremely difficult. For example, although the MD5 hash function was <a href="https://iacr.org/archive/eurocrypt2005/34940019/34940019.pdf"><u>deemed insecure in 2004</u></a> and long since deprecated, it was still in use with the RADIUS enterprise authentication protocol as recently as 2024. In July 2024, Cloudflare contributed to research revealing an <a href="https://blog.cloudflare.com/radius-udp-vulnerable-md5-attack/"><u>attack on RADIUS</u></a> that exploited its reliance on MD5. This example underscores the enormous challenge of updating legacy systems — this difficulty in achieving <a href="https://en.wikipedia.org/wiki/Cryptographic_agility"><i><u>crypto-agility</u></i></a> — which will be just as demanding when it’s time to transition to post-quantum cryptography. So it makes sense to start this process now.</p>
    <div>
      <h4>2. The “harvest now, decrypt later” threat</h4>
      <a href="#2-the-harvest-now-decrypt-later-threat">
        
      </a>
    </div>
    <p>Even though quantum computers lack enough qubits to break conventional cryptography today, adversaries can harvest and store encrypted communications or steal datasets with the intent of decrypting them once quantum technology matures. If your encrypted data today could become a liability in 10 to 15 years, planning for a post-quantum future is essential. For this reason, we have already started working with some of the most innovative <a href="https://www.cloudflare.com/banking-and-financial-services/">banks</a>, ISPs, and <a href="https://www.cloudflare.com/public-sector/">governments</a> around the world as they begin their journeys to quantum safety. </p><p>The U.S. government is already addressing these risks. On January 16, 2025, the White House issued <a href="https://www.federalregister.gov/documents/2025/01/17/2025-01470/strengthening-and-promoting-innovation-in-the-nations-cybersecurity"><u>Executive Order 14144</u></a> on Strengthening and Promoting Innovation in the Nation’s Cybersecurity. This order requires government agencies to “<i>regularly update a list of product categories in which products that support post-quantum cryptography (PQC) are widely available…. Within 90 days of a product category being placed on the list … agencies shall take steps to include in any solicitations for products in that category a requirement that products support PQC.</i>”</p><p>At Cloudflare, we’ve been <a href="https://blog.cloudflare.com/the-tls-post-quantum-experiment/"><u>researching</u></a>, <a href="https://blog.cloudflare.com/securing-the-post-quantum-world/"><u>developing</u></a>, and <a href="https://www.ietf.org/archive/id/draft-kwiatkowski-tls-ecdhe-mlkem-02.html"><u>standardizing</u></a> post-quantum cryptography <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>since 2017</u></a>. 
Our strategy is simple:</p><p><b>Simply tunnel your traffic through Cloudflare’s quantum-safe connections to immediately protect against harvest-now-decrypt-later attacks, without the burden of upgrading every cryptographic library yourself.</b></p><p>Let’s take a closer look at how the migration to post-quantum cryptography is taking shape at Cloudflare.</p>
    <div>
      <h3>A two-phase migration to post-quantum cryptography</h3>
      <a href="#a-two-phase-migration-to-post-quantum-cryptography">
        
      </a>
    </div>
    <p>At Cloudflare, we’ve largely focused on migrating the <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS (Transport Layer Security) 1.3</u></a> protocol to post-quantum cryptography.   TLS primarily secures the communications for web applications, but it is also widely used to secure email, messaging, <a href="https://blog.cloudflare.com/zero-trust-warp-with-a-masque/"><u>VPN connections</u></a>, <a href="https://www.cloudflare.com/learning/dns/dns-over-tls/"><u>DNS</u></a>, and many other protocols.  This makes TLS an ideal protocol to focus on when migrating to post-quantum cryptography.</p><p>The migration involves updating two critical components of TLS 1.3: <a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/"><u>digital signatures used in certificates</u></a> and <a href="https://blog.cloudflare.com/post-quantum-key-encapsulation/"><u>key agreement mechanisms</u></a>. We’ve made significant progress on key agreement, but the migration to post-quantum digital signatures is still in its early stages.</p>
    <div>
      <h4>Phase 1: Migrating key agreement</h4>
      <a href="#phase-1-migrating-key-agreement">
        
      </a>
    </div>
    <p>Key agreement protocols enable two parties to securely establish a shared secret key that they can use to secure and encrypt their communications. Today, vendors have largely converged on transitioning TLS 1.3 to support a post-quantum key exchange protocol known as <a href="https://blog.cloudflare.com/nists-first-post-quantum-standards/"><u>ML-KEM</u></a> (Module-lattice based Key-Encapsulation Mechanism Standard). There are two main reasons for prioritizing migration of key agreement:</p><ul><li><p><b>Performance:</b> ML-KEM <a href="https://blog.cloudflare.com/pq-2024/"><u>performs</u></a> well with the <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">TLS 1.3 protocol,</a> even for short-lived network connections.</p></li><li><p><b>Security</b>: Conventional cryptography is vulnerable to “harvest now, decrypt later” attacks. In this threat model, an adversary intercepts and stores encrypted communications today and later (in the future) uses a quantum computer to derive the secret key, compromising the communication. As of March 2025, well over a third of the human web traffic reaching the Cloudflare network is protected against these attacks by TLS 1.3 with hybrid ML-KEM key exchange.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Tgfy0HYHA5MM6JjaNP2Z1/b601d2938be3c52decf1f3cec7313c6e/image6.png" />
          </figure><p><sup><i>Post-quantum encrypted share of human HTTPS request traffic seen by Cloudflare per </i></sup><a href="https://radar.cloudflare.com/adoption-and-usage?dateRange=52w"><sup><i><u>Cloudflare Radar</u></i></sup></a><sup><i> from March 1, 2024 to March 1, 2025. (Captured on March 13, 2025.)</i></sup></p><p>Here’s how to check if your Chrome browser is using ML-KEM for key agreement when visiting a website: First, <a href="https://developer.chrome.com/docs/devtools/inspect-mode#:~:text=Open%20DevTools,The%20element's%20margin%2C%20in%20pixels."><u>Inspect the page</u></a>, then open the <a href="https://developer.chrome.com/docs/devtools/security"><u>Security tab</u></a>, and finally look for <a href="https://www.ietf.org/archive/id/draft-kwiatkowski-tls-ecdhe-mlkem-02.html"><u>X25519MLKEM768</u></a> as shown here:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EoD5jFMXJeWFeRtG9w6Uy/85aa13123d64f21ea93313f674d4378f/image1.png" />
          </figure><p>This indicates that your browser is using key-agreement protocol ML-KEM <i>in combination with</i> conventional <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>elliptic curve cryptography</u></a> on curve <a href="https://en.wikipedia.org/wiki/Curve25519"><u>X25519</u></a>. This provides the protection of the tried-and-true conventional cryptography (<a href="https://en.wikipedia.org/wiki/Curve25519"><u>X25519</u></a>) alongside the new post-quantum key agreement (<a href="https://blog.cloudflare.com/nists-first-post-quantum-standards/"><u>ML-KEM</u></a>).</p>
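The “hybrid” in X25519MLKEM768 means the session key depends on both exchanges. The sketch below is a toy Python model, not the actual TLS 1.3 key schedule (which runs the concatenated shared secrets through HKDF); it only illustrates why an attacker must break both X25519 and ML-KEM to recover the derived key:

```python
import hashlib
import secrets

def hybrid_shared_secret(mlkem_ss: bytes, x25519_ss: bytes) -> bytes:
    # In X25519MLKEM768 the two shared secrets are concatenated and fed
    # into the TLS 1.3 key schedule; this single hash stands in for that
    # derivation step, for illustration only.
    return hashlib.sha256(mlkem_ss + x25519_ss).digest()

# Stand-ins for the real outputs (32 bytes each in TLS):
mlkem_ss = secrets.token_bytes(32)   # would come from ML-KEM-768 decapsulation
x25519_ss = secrets.token_bytes(32)  # would come from an X25519 exchange

key = hybrid_shared_secret(mlkem_ss, x25519_ss)
# Knowing only one of the two inputs is not enough to reconstruct `key`,
# which is the point of running both algorithms side by side.
```

If a flaw were ever found in ML-KEM, the connection would still be as strong as X25519, and vice versa.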
    <div>
      <h4>Phase 2: Migrating digital signatures</h4>
      <a href="#phase-2-migrating-digital-signatures">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/"><u>Digital signatures are used in TLS certificates</u></a> to validate the authenticity of connections — allowing the client to be sure that it is really communicating with the server, and not with an adversary that is impersonating the server. </p><p>Post-quantum digital signatures, however, are significantly larger, and thus slower, than their current counterparts. This performance impact has slowed their adoption, particularly because they slow down short-lived TLS connections. </p><p>Fortunately, post-quantum signatures are not needed to prevent harvest-now-decrypt-later attacks. Instead, they primarily protect against attacks by an adversary that is actively using a quantum computer to tamper with a live TLS connection. We still have some time before quantum computers are able to do this, making the migration of digital signatures a lower priority.</p><p>Nevertheless, Cloudflare is actively <a href="https://datatracker.ietf.org/doc/draft-ietf-lamps-dilithium-certificates/07/"><u>involved in standardizing</u></a> post-quantum signatures for TLS certificates. We are also experimenting with their deployment on long-lived TLS connections and exploring <a href="https://datatracker.ietf.org/doc/draft-davidben-tls-merkle-tree-certs/"><u>new approaches</u></a> to achieve post-quantum authentication without sacrificing performance. Our goal is to ensure that post-quantum digital signatures are ready for widespread use when quantum computers are able to actively attack live TLS connections.</p>
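For a rough sense of the size gap, the back-of-envelope sketch below compares Ed25519 (RFC 8032) with ML-DSA-44, the smallest ML-DSA parameter set in FIPS 204. The public-key and signature byte counts are the standard sizes; the two-certificate chain and the per-handshake arithmetic are illustrative assumptions, not measurements of real TLS handshakes:

```python
# Public-key and signature sizes in bytes (RFC 8032 for Ed25519,
# FIPS 204 for ML-DSA-44); the 2-certificate chain is an assumption.
SIZES = {
    "Ed25519":   {"public_key": 32,   "signature": 64},
    "ML-DSA-44": {"public_key": 1312, "signature": 2420},
}

def handshake_auth_bytes(alg: str, chain_len: int = 2) -> int:
    # Rough bytes a TLS handshake carries for certificate public keys
    # and signatures alone, ignoring certificate metadata and the
    # CertificateVerify transcript signature.
    s = SIZES[alg]
    return chain_len * (s["public_key"] + s["signature"])

for alg in SIZES:
    print(f"{alg}: ~{handshake_auth_bytes(alg)} bytes")
```

Even at the smallest parameter set, the authentication-related bytes grow by well over an order of magnitude, which is why short-lived connections feel the impact most.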
    <div>
      <h3>Cloudflare Zero Trust + PQC: future-proofing security</h3>
      <a href="#cloudflare-zero-trust-pqc-future-proofing-security">
        
      </a>
    </div>
    <p>The Cloudflare Zero Trust platform replaces legacy corporate security perimeters with Cloudflare's global network, making access to the Internet and to corporate resources faster and safer for teams around the world. Today, we’re thrilled to announce that Cloudflare's Zero Trust platform protects your data from quantum threats as it travels over the public Internet.  There are three key quantum-safe use cases supported by our Zero Trust platform in this first phase of quantum readiness.</p>
    <div>
      <h4>Quantum-safe clientless Access</h4>
      <a href="#quantum-safe-clientless-access">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/agentless/"><u>Clientless</u></a> <a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-public-app/"><u>Cloudflare Access</u></a> now protects an organization’s Internet traffic to internal web applications against quantum threats, even if the applications themselves have not yet migrated to post-quantum cryptography. ("Clientless access" is a method of accessing network resources without installing a dedicated client application on the user's device. Instead, users connect and access information through a web browser.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5mKiboLMsIEuNt1MaXlWsy/dad0956066e97db69401757b18e8ce5f/image4.png" />
          </figure><p>Here’s how it works today:</p><ul><li><p><b>PQ connection via browser: </b>(Labeled (1) in the figure)
As long as the user’s web browser supports post-quantum key agreement, the connection from the device to Cloudflare’s network is secured via TLS 1.3 with post-quantum key agreement.</p></li><li><p><b>PQ within Cloudflare’s global network: </b>(Labeled (2) in the figure)
If the user and origin server are geographically distant, then the user’s traffic will enter Cloudflare’s global network in one geographic location (e.g. Frankfurt), and exit at another (e.g. San Francisco). As this traffic moves from one datacenter to another inside Cloudflare’s global network, these hops through the network are secured via TLS 1.3 with post-quantum key agreement.</p></li><li><p><b>PQ Cloudflare Tunnel: </b>(Labeled (3) in the figure)
Customers establish a Cloudflare Tunnel from their datacenter or public cloud — where their corporate web application is hosted — to Cloudflare’s network. This tunnel is secured using TLS 1.3 with post-quantum key agreement, safeguarding it from harvest-now-decrypt-later attacks.</p></li></ul><p>Putting it together, clientless Access provides <b>end-to-end</b> quantum safety for accessing corporate HTTPS applications, without requiring customers to upgrade the security of corporate web applications.</p>
    <div>
      <h4>Quantum-safe Zero Trust with Cloudflare’s WARP Client-to-Tunnel configuration (as a VPN replacement)</h4>
      <a href="#quantum-safe-zero-trust-with-cloudflares-warp-client-to-tunnel-configuration-as-a-vpn-replacement">
        
      </a>
    </div>
    <p>By mid-2025, organizations will be able to protect <b>any protocol</b>, not just HTTPS, by tunneling it through Cloudflare's Zero Trust platform with post-quantum cryptography, thus providing quantum safety as traffic travels across the Internet from the end-user’s device to the corporate office/data center/cloud environment.</p><p>Cloudflare’s Zero Trust platform is ideal for replacing traditional VPNs and enabling <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/"><u>Zero Trust architectures</u></a> with modern authentication and authorization policies. Cloudflare’s WARP client-to-tunnel is a popular network configuration for our Zero Trust platform: organizations deploy Cloudflare’s <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>WARP device client</u></a> on their end users’ devices, and then use <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> to connect to their corporate office, cloud, or data center environments.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xovIIyVOO32xrXBs0ZFcf/110928926b86f12777f16518b1313875/image3.png" />
          </figure><p> Here are the details:  </p><ul><li><p><b>PQ connection via WARP client (coming in mid-2025): </b>(Labeled (1) in the figure)
The WARP client uses the <a href="https://blog.cloudflare.com/zero-trust-warp-with-a-masque/"><u>MASQUE protocol</u></a> to connect from the device to Cloudflare’s global network. We are working to add support for establishing this MASQUE connection with TLS 1.3 with post-quantum key agreement, with a target completion date of mid-2025.  </p></li><li><p><b>PQ within Cloudflare’s global network:  </b>(Labeled (2) in the figure) 
As traffic moves from one datacenter to another inside Cloudflare’s global network, each hop it takes through Cloudflare’s network is already secured with TLS 1.3 with post-quantum key agreement.</p></li><li><p><b>PQ Cloudflare Tunnel: </b>(Labeled (3) in the figure)
As mentioned above, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> already supports post-quantum key agreement. </p></li></ul><p>Once the upcoming post-quantum enhancements to the WARP device client are complete, customers can encapsulate their traffic in quantum-safe tunnels, effectively mitigating the risk of harvest-now-decrypt-later attacks without any heavy lifting to individually upgrade their networks or applications.  And this provides comprehensive protection for any protocol that can be sent through these tunnels, not just for HTTPS!</p>
    <div>
      <h4>Quantum-safe SWG (end-to-end PQC for access to third-party web applications)</h4>
      <a href="#quantum-safe-swg-end-to-end-pqc-for-access-to-third-party-web-applications">
        
      </a>
    </div>
    <p>A <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/http-policies/"><u>Secure Web Gateway</u></a> (SWG) is used to secure access to third-party websites on the public Internet by intercepting and inspecting TLS traffic. </p><p>Cloudflare Gateway is now a quantum-safe SWG for HTTPS traffic. As long as the third-party website that is being inspected supports post-quantum key agreement, then Cloudflare’s SWG also supports post-quantum key agreement. This holds regardless of the onramp that the customer uses to get to Cloudflare's network (i.e. <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/agentless/"><u>web browser</u></a>, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-devices/warp/"><u>WARP device client</u></a>, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/warp-connector/"><u>WARP Connector</u></a>, <a href="https://developers.cloudflare.com/magic-wan/"><u>Magic WAN</u></a>), and only requires the use of a browser that supports post-quantum key agreement.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vnkEFkvKbhSAxp33GmRk7/c58d00a14767a03b2422af1c48a53ba9/image5.png" />
          </figure><p>Cloudflare Gateway's HTTPS SWG feature involves two post-quantum TLS connections, as follows:</p><ul><li><p><b>PQ connection via browser: </b>(Labeled (1) in the figure)  
A TLS connection is initiated from the user's browser to a data center in Cloudflare's network that performs the TLS inspection. As long as the user's web browser supports post-quantum key agreement, this connection is secured by TLS 1.3 with post-quantum key agreement.  </p></li><li><p><b>PQ connection to the origin server: </b>(Labeled (2) in the figure)  
A TLS connection is initiated from a datacenter in Cloudflare's network to the origin server, which is typically controlled by a third party. The connection from Cloudflare’s SWG currently supports post-quantum key agreement, as long as the third party’s origin server also already supports post-quantum key agreement.  You can test this out today by using <a href="https://pq.cloudflareresearch.com/"><u>https://pq.cloudflareresearch.com/</u></a> as your third-party origin server. </p></li></ul><p>Put together, Cloudflare’s SWG is quantum-ready to support secure access to any third-party website that is quantum ready today or in the future. And this is true regardless of the onramp used to get end users' traffic into Cloudflare's global network!</p>
    <div>
      <h3>The post-quantum future: Cloudflare’s Zero Trust platform leads the way</h3>
      <a href="#the-post-quantum-future-cloudflares-zero-trust-platform-leads-the-way">
        
      </a>
    </div>
    <p>Protecting our customers from emerging quantum threats isn't just a priority — it's our responsibility. Since 2017, Cloudflare has been pioneering post-quantum cryptography through research, standardization, and strategic implementation across our product ecosystem.</p><p><b>Today marks a milestone: </b>We're launching the first phase of quantum-safe protection for our Zero Trust platform. Quantum-safe clientless Access and Secure Web Gateway are available immediately, with WARP client-to-tunnel network configurations coming by mid-2025. As we continue to advance the state of the art in post-quantum cryptography, our commitment to continuous innovation ensures that your organization stays ahead of tomorrow's threats.  Let us worry about crypto-agility so that you don’t have to.</p><p>To learn more about how Cloudflare’s built-in crypto-agility can future-proof your business, visit our <a href="http://cloudflare.com/pqc"><u>Post-Quantum Cryptography</u></a> webpage.</p>
 ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Clientless]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">18HFPrh07hn9Zqp8kaonRp</guid>
            <dc:creator>Sharon Goldberg</dc:creator>
            <dc:creator>Wesley Evans</dc:creator>
            <dc:creator>Bas Westerbaan</dc:creator>
            <dc:creator>John Engates</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing WARP Connector: paving the path to any-to-any connectivity]]></title>
            <link>https://blog.cloudflare.com/introducing-warp-connector-paving-the-path-to-any-to-any-connectivity-2/</link>
            <pubDate>Wed, 20 Mar 2024 13:00:05 GMT</pubDate>
            <description><![CDATA[ Starting today, Zero Trust administrators can deploy our new WARP Connector for simplified any-to-any connectivity ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EJrRp1522sGWgJ2FbWds2/6df257860be57516553e791ef6c28917/image3-30.png" />
            
            </figure><p>In the ever-evolving domain of enterprise security, <a href="https://www.cloudflare.com/ciso/">CISOs</a> and CIOs have to tirelessly build new enterprise networks and maintain old ones to achieve performant any-to-any connectivity. For their team of network architects, surveying their own environment to keep up with changing needs is half the job. The other is often unearthing new, innovative solutions which integrate seamlessly into the existing landscape. This continuous cycle of construction and fortification in the pursuit of secure, flexible infrastructure is exactly what Cloudflare’s SASE offering, Cloudflare One, was built for.</p><p>Cloudflare One has progressively evolved based on feedback from customers and analysts. Today, we are thrilled to introduce the public availability of the Cloudflare WARP Connector, a new tool that makes bidirectional, site-to-site, and mesh-like connectivity even easier to secure without the need to make any disruptive changes to <a href="https://www.cloudflare.com/the-net/network-infrastructure/">existing network infrastructure</a>.</p>
    <div>
      <h2>Bridging a gap in Cloudflare's Zero Trust story</h2>
      <a href="#bridging-a-gap-in-cloudflares-zero-trust-story">
        
      </a>
    </div>
    <p>Cloudflare's approach has always been focused on offering a breadth of products, acknowledging that there is no one-size-fits-all solution for network connectivity. Our vision is simple: any-to-any connectivity, any way you want it.</p><p>Prior to the WARP Connector, one of the easiest ways to connect your infrastructure to Cloudflare, whether that be a local HTTP server, web services served by a Kubernetes cluster, or a private network segment, was through the <a href="https://www.cloudflare.com/products/tunnel/">Cloudflare Tunnel</a> app connector, <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><i>cloudflared</i></a>. In many cases this works great, but over time customers began to surface a long tail of use cases which could not be supported based on the underlying architecture of cloudflared. This includes situations where customers utilize VOIP phones, necessitating a SIP server to establish outgoing connections to users’ softphones, or a CI/CD server sending notifications to relevant stakeholders for each stage of the <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD pipelines</a>. Later in this blog post, we explore these use cases in detail.</p><p>As <i>cloudflared</i> proxies at Layer 4 of the <a href="https://www.cloudflare.com/learning/ddos/glossary/open-systems-interconnection-model-osi/">OSI model</a>, its design was optimized specifically to proxy requests to origin services — it was not designed to be an active listener to handle requests from origin services. This design trade-off means that cloudflared needs to source NAT all requests it proxies to the application server. This setup is convenient for scenarios where customers don't need to update routing tables to deploy cloudflared in front of their origin services. However, it also means that customers can’t see the true source IP of the client sending the requests. This matters in scenarios where a network firewall is logging all the network traffic, as the source IP of all the requests will be <i>cloudflared’s</i> IP address, causing the customer to lose visibility into the true client source.</p>
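The loss of the true source IP can be modeled in a few lines. The addresses and the `Flow`/`source_nat` helpers below are hypothetical, purely to illustrate what a Layer 4 source-NATing proxy does to a flow:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str

def source_nat(flow: Flow, proxy_ip: str) -> Flow:
    # A Layer 4 proxy re-originates the connection from its own address,
    # so the origin-side flow no longer carries the client's IP.
    return replace(flow, src_ip=proxy_ip)

client_flow = Flow(src_ip="203.0.113.7", dst_ip="10.0.0.5")
at_origin = source_nat(client_flow, proxy_ip="10.0.0.9")

# The origin's firewall logs the proxy's address, not the real client's.
assert at_origin.src_ip == "10.0.0.9"
assert at_origin.src_ip != client_flow.src_ip
```

Every connection arriving at the origin looks the same to a firewall, which is exactly the visibility gap described above.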
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2nMu5Ecf8e72QaIbf6eyiI/2e3bc3445611bd6cf0a6fa1fee96e0af/image6-10.png" />
            
            </figure>
    <div>
      <h2>Build or borrow</h2>
      <a href="#build-or-borrow">
        
      </a>
    </div>
    <p>To solve this problem, we identified two potential solutions: start from scratch by building a new connector, or borrow from an existing connector, likely in either cloudflared or WARP.</p><p>The following table provides an overview of the tradeoffs of the two approaches:</p><table><colgroup><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Features</span></p></td><td><p><span>Build in </span><span>cloudflared</span></p></td><td><p><span>Borrow from WARP</span></p></td></tr><tr><td><p><span>Bidirectional traffic flows</span></p></td><td><p><span>Limited by Layer 4 proxying, as described in the earlier section.</span></p></td><td><p><span>WARP proxies at Layer 3, so it can act as the default gateway for a subnet, supporting traffic flows in both directions.</span></p></td></tr><tr><td><p><span>User experience</span></p></td><td><p><span>Cloudflare One customers have to work with two distinct products (cloudflared and WARP) to connect their services and users.</span></p></td><td><p><span>Cloudflare One customers only have to get familiar with a single product to connect their users as well as their networks.</span></p></td></tr><tr><td><p><span>Site-to-site connectivity between branches, data centers (on-premise and cloud) and headquarters</span></p></td><td><p><span>Not recommended.</span></p></td><td><p><span>For sites where running agents on each device is not feasible, this can easily connect sites to users running WARP clients in other sites, branches, or data centers. It works seamlessly where the underlying tunnels are all the same.</span></p></td></tr><tr><td><p><span>Visibility into true source IP</span></p></td><td><p><span>Lost, because cloudflared source NATs all proxied traffic.</span></p></td><td><p><span>Preserved for any traffic flow, since WARP acts as the default gateway.</span></p></td></tr><tr><td><p><span>High availability</span></p></td><td><p><span>Inherently reliable by </span><a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/deploy-tunnels/deploy-cloudflared-replicas/"><span>design</span></a><span>, and supports replicas for failover scenarios.</span></p></td><td><p><span>Reliability requirements for a default gateway are very different from those for an endpoint device agent, so there is opportunity to innovate here.</span></p></td></tr></tbody></table>
    <div>
      <h2>Introducing WARP Connector</h2>
      <a href="#introducing-warp-connector">
        
      </a>
    </div>
    <p>Starting today, the introduction of WARP Connector opens up new <a href="https://developers.cloudflare.com/reference-architecture/sase-reference-architecture/#connecting-networks">possibilities</a>: server initiated (SIP/VOIP) flows; site-to-site connectivity, connecting branches, headquarters, and cloud platforms; and even mesh-like networking with WARP-to-WARP. Under the hood, this new connector is an extension of warp-client that can act as a virtual router for any subnet within the network to on/off-ramp traffic through Cloudflare.</p><p>By building on WARP, we were able to take advantage of its design, where it creates a virtual network interface on the host to logically subdivide the physical interface (NIC) for the purpose of routing IP traffic. This enables us to send bidirectional traffic through the WireGuard/<a href="/zero-trust-warp-with-a-masque">MASQUE</a> tunnel that’s maintained between the host and Cloudflare edge. By virtue of this architecture, customers also get the added benefit of visibility into the true source IP of the client.</p><p>WARP Connector can be easily deployed on the default gateway without any additional routing changes. Alternatively, static routes can be configured for specific CIDRs that need to be routed via WARP Connector, and the static routes can be configured on the default gateway or on every host in that subnet.</p>
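Conceptually, routing via the WARP Connector is ordinary longest-prefix matching. The sketch below uses hypothetical CIDRs and a made-up `next_hop` helper — it is not the actual WARP implementation — to show how a host or gateway would decide which traffic to send through the connector:

```python
import ipaddress

# Hypothetical CIDRs configured to be reached via the WARP Connector.
WARP_ROUTES = [ipaddress.ip_network(c) for c in ("10.1.0.0/16", "10.1.2.0/24")]

def next_hop(dst: str) -> str:
    # Longest-prefix match: send traffic via the WARP Connector when a
    # configured CIDR contains the destination, else use the default gateway.
    addr = ipaddress.ip_address(dst)
    matches = [net for net in WARP_ROUTES if addr in net]
    if not matches:
        return "default-gateway"
    best = max(matches, key=lambda net: net.prefixlen)
    return f"warp-connector ({best})"

print(next_hop("10.1.2.14"))   # matches both routes; the /24 is more specific
print(next_hop("192.0.2.1"))   # no configured route: default gateway
```

Deploying the connector on the default gateway makes it the implicit `next_hop` for the whole subnet; static routes narrow that to specific CIDRs.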
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/WD2pig7ka0aWKGTL8EBJ0/91cedc19d8eda4f402b336e8219c958e/image2-31.png" />
            
            </figure>
    <div>
      <h2>Private network use cases</h2>
      <a href="#private-network-use-cases">
        
      </a>
    </div>
    <p>Here we’ll walk through a couple of key reasons why you may want to deploy our new connector, but remember that this solution can support numerous services, such as Microsoft’s System Center Configuration Manager (SCCM), Active Directory server updates, VOIP and SIP traffic, and developer workflows with complex CI/CD pipeline interaction. It’s also important to note this connector can either be run alongside cloudflared and Magic WAN, or can be a standalone remote access and site-to-site connector to the Cloudflare Global network.</p>
    <div>
      <h3>Softphone and VOIP servers</h3>
      <a href="#softphone-and-voip-servers">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1aRUqSm8U71JrJlAjpaR85/097753cc28df73f7d5719633343b18ca/image5-18.png" />
            
            </figure><p>For users to establish a voice or video call over a VOIP software service, typically a SIP server within the private network brokers the connection using the last known IP address of the end-user. However, if traffic is proxied anywhere along the path, this often results in participants only receiving partial voice or data signals. With the WARP Connector, customers can now apply granular policies to these services for secure access, fortifying VOIP infrastructure within their <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust framework</a>.</p>
    <div>
      <h3>Securing access to CI/CD pipeline</h3>
      <a href="#securing-access-to-ci-cd-pipeline">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Yolk1Mb2eQqmkibRapZzU/2748774f23b3df87d395a11b5d6c8281/image4-29.png" />
            
</figure><p>An organization’s DevOps ecosystem is generally built out of many parts, but a CI/CD server such as Jenkins or TeamCity is the epicenter of all development activities. Hence, securing that CI/CD server is critical. With the WARP Connector and WARP Client, organizations can secure and streamline the entire CI/CD pipeline.</p><p>Let's look at a typical CI/CD pipeline for a Kubernetes application. The environment is set up as depicted in the diagram above, with WARP clients on the developer and QA laptops and a WARP Connector securely connecting the CI/CD server and staging servers on different networks:</p><ol><li><p>Typically, the CI/CD pipeline is triggered when a developer commits their code change, invoking a webhook on the CI/CD server.</p></li><li><p>Once the images are built, it's time to deploy the code, which is typically done in stages: test, staging and production.</p></li><li><p>Notifications are sent to the developer and QA engineer when the images are ready in the test/staging environments.</p></li><li><p>QA engineers receive the notifications via webhook from the CI/CD servers to kick-start their monitoring and troubleshooting workflow.</p></li></ol><p>With WARP Connector, customers can easily connect their developers to the tools in the DevOps ecosystem by keeping the ecosystem private and not exposing it to the public. Once the DevOps ecosystem is securely connected to Cloudflare, granular security policies can be easily applied to secure access to the CI/CD pipeline.</p>
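As one concrete piece of this picture, the webhooks in steps 1 and 4 are commonly authenticated with an HMAC signature over the payload body. This is a generic sketch of that pattern, not a Cloudflare API; the secret and payload below are hypothetical:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    # Constant-time comparison of an HMAC-SHA256 signature, the scheme
    # many source-control and CI/CD webhooks use to authenticate payloads.
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"shared-webhook-secret"  # hypothetical shared secret
payload = b'{"event": "push", "branch": "main"}'
signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()

assert verify_webhook(secret, payload, signature)           # genuine payload
assert not verify_webhook(secret, b'{"event":"x"}', signature)  # tampered
```

Keeping the CI/CD server reachable only over the WARP Connector means even a leaked webhook URL is useless to anyone outside the private network.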
    <div>
      <h3>True source IP address preservation</h3>
      <a href="#true-source-ip-address-preservation">
        
      </a>
    </div>
    <p>Organizations running Microsoft AD Servers or non-web application servers often need to identify the true source IP address for auditing or policy application. If these requirements exist, WARP Connector simplifies this, offering solutions without adding NAT boundaries. This can be useful to <a href="https://www.cloudflare.com/learning/bots/what-is-rate-limiting/">rate-limit</a> unhealthy source IP addresses, for ACL-based policies within the perimeter, or to collect additional diagnostics from end-users.</p>
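With the true source IP preserved, per-client rate limiting becomes a simple bookkeeping exercise. This toy sliding-window limiter — hypothetical addresses and thresholds, not Cloudflare's rate-limiting product — shows the idea:

```python
from collections import defaultdict, deque

class PerIpRateLimiter:
    # Sliding-window limiter keyed by the true source IP: at most
    # `limit` requests per `window` seconds for each client address.
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)

    def allow(self, src_ip, now):
        q = self.hits[src_ip]
        while q and now - q[0] >= self.window:
            q.popleft()  # drop requests that fell outside the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

rl = PerIpRateLimiter(limit=3, window=60.0)
print([rl.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)])  # 4th denied
print(rl.allow("198.51.100.2", now=3))  # a different client is unaffected
```

Behind a source-NATing proxy, every client would share one key and one bucket; preserving the source IP is what makes this policy meaningful.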
    <div>
      <h2>Getting started with WARP Connector</h2>
      <a href="#getting-started-with-warp-connector">
        
      </a>
    </div>
    <p>As part of this launch, we’re making some changes to the Cloudflare One Dashboard to better highlight our different network on/off-ramp options. As of today, a new “Network” tab will appear on your dashboard. This will be the new home for the Cloudflare Tunnel UI.</p><p>We are also introducing the new “Routes” tab next to “Tunnels”. This page presents an organizational view of customers’ virtual networks, Cloudflare Tunnels, and the routes associated with them. It helps answer customers’ questions about their network configurations, such as: “Which Cloudflare Tunnel has the route to my host 192.168.1.2?”, “If a route for CIDR 192.168.2.1/28 exists, how can it be accessed?”, or “What are the overlapping CIDRs in my environment, and which VNETs do they belong to?”. This is extremely useful for customers with very complex enterprise networks who use the Cloudflare dashboard to troubleshoot connectivity issues.</p>
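The overlapping-CIDR question in particular is easy to reason about with standard tooling. This hypothetical sketch (made-up VNET names and routes) shows the kind of check the Routes tab surfaces at a glance:

```python
import ipaddress
from itertools import combinations

# Hypothetical (virtual network, CIDR) route entries.
routes = [
    ("vnet-a", "192.168.1.0/24"),
    ("vnet-b", "192.168.1.128/25"),
    ("vnet-c", "10.0.0.0/16"),
]

def overlapping_routes(entries):
    # Report every pair of routes whose CIDRs overlap, along with the
    # virtual networks they belong to.
    nets = [(vnet, ipaddress.ip_network(cidr)) for vnet, cidr in entries]
    return [
        (v1, str(n1), v2, str(n2))
        for (v1, n1), (v2, n2) in combinations(nets, 2)
        if n1.overlaps(n2)
    ]

print(overlapping_routes(routes))  # vnet-a and vnet-b share address space
```

Overlaps across virtual networks are legitimate (that is what VNETs are for), but they need to be visible to debug “why is my traffic going there” questions.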
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/454aKuxMVd93ZAmtubFPl/ba84f1e86a2c1b0ebaaa7e6e36f29199/image1-32.png" />
            
</figure><p>Embarking on your WARP Connector journey is straightforward. The WARP Connector is currently deployable on Linux hosts: select “create a Tunnel” in the dashboard and pick either cloudflared or WARP to deploy. Follow our <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/warp-connector/#set-up-warp-connector">developer documentation</a> to get started in a few easy steps. In the near future we will be adding support for more platforms where WARP Connectors can be deployed.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Thank you to all of our private beta customers for their invaluable feedback. Moving forward, our immediate focus in the coming quarters is on simplifying deployment, mirroring that of cloudflared, and enhancing high availability through redundancy and failover mechanisms.</p><p>Stay tuned for more updates as we continue our journey in innovating and enhancing the Cloudflare One platform. We're excited to see how our customers leverage WARP Connector to transform their connectivity and security landscape.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[WARP]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[SASE]]></category>
            <guid isPermaLink="false">64DSFDvFcQHNrtAi6A7jze</guid>
            <dc:creator>Abe Carryl</dc:creator>
            <dc:creator>Janani Rajendiran</dc:creator>
        </item>
        <item>
            <title><![CDATA[Elevate load balancing with Private IPs and Cloudflare Tunnels: a secure path to efficient traffic distribution]]></title>
            <link>https://blog.cloudflare.com/elevate-load-balancing-with-private-ips-and-cloudflare-tunnels-a-secure-path-to-efficient-traffic-distribution/</link>
            <pubDate>Fri, 08 Sep 2023 13:00:01 GMT</pubDate>
            <description><![CDATA[ We are extremely excited to announce a new addition to our Load Balancing solution, Private Network Load Balancing with deep integrations with Zero Trust!
 ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ILMvjlajN04XDxywfXL1a/fcaf05a762de5ac61baa3b001fd4edc7/image7-1.png" />
            
</figure><p>In the dynamic world of modern applications, efficient load balancing plays a pivotal role in delivering exceptional user experiences. Customers commonly leverage load balancing so they can use their existing infrastructure resources as efficiently as possible. However, load balancing is not a ‘one-size-fits-all, out-of-the-box’ solution for everyone. As you go deeper into the details of your traffic shaping requirements and as your architecture becomes more complex, different flavors of load balancing are usually required to achieve these varying goals, such as steering between datacenters for public traffic, creating high availability for critical internal services with private IPs, applying steering between servers in a single datacenter, and more. We are extremely excited to announce a new addition to our Load Balancing solution: Private Network Load Balancing, with deep integrations with Zero Trust!</p><p>A common problem businesses run into is that almost no provider can satisfy all these requirements. The result is a growing list of vendors to manage, disparate data sources to reconcile for a clear view of your traffic pipeline, and investment in incredibly expensive hardware that is complicated to set up and maintain. Not having a single source of truth to drive down ‘time to resolution’, and a single partner to work with when things are not operating on the ideal path, can be the difference between a proactive, healthy, growing business and one that is reactive and constantly putting out fires. The latter can mean extreme slowdowns in developing amazing features and services, reduced revenue, tarnished brand trust, decreased adoption - the list goes on!</p><p>For eight years, we have provided top-tier global traffic load balancing (GTM) capabilities to thousands of customers across the globe.
But why should the steering intelligence, failover, and reliability we guarantee stop at the front door of the selected datacenter and only operate with public traffic? We came to the conclusion that we should go even further. Today is the start of a long series of new features that allow traffic steering, failover, session persistence, SSL/TLS offloading and much more to take place between servers after datacenter selection has occurred! Instead of relying <i>only</i> on the relative weight to determine which server traffic should be sent to, you can now bring the same intelligent steering policies, such as <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/least-outstanding-requests-pools/"><u>least outstanding requests steering</u></a> or <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/hash-origin-steering/"><u>hash steering</u></a>, to any of your many data centers. This also means you have a single partner for <b>all</b> of your load balancing initiatives and a single pane of glass to inform business decisions! Cloudflare is thrilled to introduce the powerful combination of private IP support for Load Balancing with Cloudflare Tunnels and Private Network Load Balancing, offering customers a solution that blends unparalleled efficiency, security, flexibility, and privacy.</p>
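<p>As a concrete illustration of one such policy, least outstanding requests steering favors whichever origin has the fewest requests currently in flight, adjusted by weight. The sketch below is a minimal toy version, not Cloudflare's implementation, and the names and addresses are illustrative:</p>

```python
class Origin:
    def __init__(self, name, weight=1.0):
        self.name = name
        self.weight = weight
        self.outstanding = 0  # requests currently in flight to this origin

def pick_least_outstanding(origins):
    # Bias each origin's in-flight count by its weight so that heavier
    # origins absorb proportionally more concurrent requests.
    return min(origins, key=lambda o: (o.outstanding + 1) / o.weight)

origins = [Origin("10.0.0.1"), Origin("10.0.0.2", weight=2.0)]
origins[0].outstanding = 4
origins[1].outstanding = 6

chosen = pick_least_outstanding(origins)
print(chosen.name)  # the heavier origin wins despite more requests in flight
```

<p>The key property is that slow or overloaded servers accumulate outstanding requests and automatically receive less new traffic, without any explicit health signal.</p>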
    <div>
      <h3>What is a load balancer?</h3>
      <a href="#what-is-a-load-balancer">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2batHEyi8vzCMEjgy2E2rD/601bc0a81e751b02dc332cc531623546/pasted-image-0.png" />
            
            </figure><p>A Cloudflare load balancer directs a request from a user to the appropriate origin pool within a data center</p><p>Load balancing is functionality that has been around for the last 30 years to help businesses leverage their existing infrastructure resources. <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">Load balancing</a> works by proactively steering traffic away from unhealthy origin servers and, for more advanced solutions, intelligently distributing traffic load based on different steering <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/">algorithms</a>. This process ensures that errors aren’t served to end users and empowers businesses to tightly couple overall business objectives to their traffic behavior. Cloudflare Load Balancing makes it simple to securely and reliably manage your traffic across multiple data centers around the world. With Cloudflare Load Balancing, your traffic is directed reliably regardless of its scale or where it originates, with customizable steering, affinity, and failover. This is a clear advantage over a physical load balancer, which requires traffic to reach one of your data centers before it can be routed to another location, introducing a single point of failure and significant <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">latency</a>. When compared with other global traffic management load balancers, Cloudflare’s Load Balancing offering is easier to set up, simpler to understand, and fully integrated with the Cloudflare platform as one single product for all load balancing needs.</p>
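<p>The failover behavior described above can be sketched as a small routing function, a simplified stand-in for what a load balancer does with health-check results. The pool names and addresses below are hypothetical:</p>

```python
def route(pools):
    """Return (pool_name, healthy_origins) for the first pool that still
    has a healthy origin. Failover order here is simply list order; real
    steering policies (geo, dynamic, least outstanding requests, ...)
    would choose the pool instead."""
    for pool in pools:
        healthy = [o for o in pool["origins"] if o["healthy"]]
        if healthy:
            return pool["name"], healthy
    raise RuntimeError("all pools unhealthy: serve a fallback or an error")

pools = [
    {"name": "us-east", "origins": [{"address": "203.0.113.10", "healthy": False}]},
    {"name": "eu-west", "origins": [{"address": "203.0.113.20", "healthy": True}]},
]
name, healthy = route(pools)
print(name)  # the unhealthy pool is skipped
```

<p>Health monitors continuously update the <code>healthy</code> flags, so traffic moves away from failing origins without any operator action.</p>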
    <div>
      <h3>What are Cloudflare Tunnels?</h3>
      <a href="#what-are-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Pn6iWjweXKPpg0Kl00s0W/6a0ef1886f7ce5ea114e010e9b5a32eb/Group-3345--1-.png" />
            
            </figure><p>Origins and servers of various types can be connected to Cloudflare using Cloudflare Tunnel. Users can also secure their traffic using WARP, allowing traffic to be secured and managed end to end through Cloudflare.</p><p>In 2018, Cloudflare introduced <a href="https://www.cloudflare.com/products/tunnel">Cloudflare Tunnels</a>, a private, secure connection between your data center and Cloudflare. Traditionally, from the moment an Internet property is deployed, developers spend an exhaustive amount of time and energy locking it down through access control lists, rotating IP addresses, or more complex solutions like <a href="https://www.cloudflare.com/learning/network-layer/what-is-gre-tunneling/">GRE tunnels</a>. We built Tunnel to help alleviate that burden. With Tunnel, you can create a private link from your origin server directly to Cloudflare without exposing your services to the public Internet or allowing incoming connections through your data center’s firewall. Instead, this private connection is established by running a lightweight daemon, <code>cloudflared</code>, in your data center, which creates a secure, outbound-only connection. This means that only traffic that you’ve configured to pass through Cloudflare can reach your private origin.</p>
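<p>The outbound-only pattern is worth sketching. In the toy Python below, a "connector" inside the private network dials out to an "edge" process; because the connector initiated the connection, the edge can push requests back down it without any inbound firewall rule. This illustrates only the connection direction, not <code>cloudflared</code>'s actual protocol:</p>

```python
import socket
import threading

def edge(server_sock, request, result):
    conn, _ = server_sock.accept()           # the connector dialed out to us
    conn.sendall(request)                    # push a request down the tunnel
    result.append(conn.recv(1024))           # read the origin's response
    conn.close()

def connector(edge_addr):
    s = socket.create_connection(edge_addr)  # outbound-only: we dial the edge
    req = s.recv(1024)                       # the edge forwards a request to us
    s.sendall(b"response to " + req)         # reply from the private origin
    s.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))                # stand-in for a Cloudflare edge
server.listen(1)
addr = server.getsockname()

result = []
t = threading.Thread(target=edge, args=(server, b"GET /", result))
t.start()
connector(addr)
t.join()
server.close()
print(result[0])  # b'response to GET /'
```

<p>Note that the private side never listens on a public port: it only ever opens outbound connections, which is why no ingress firewall change is needed.</p>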
    <div>
      <h3>Unleashing the potential of Cloudflare Load Balancing with Cloudflare Tunnels</h3>
      <a href="#unleashing-the-potential-of-cloudflare-load-balancing-with-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4fxiGzB4slJt02ZgQppT1d/1eadd4e57c66458024c2247080c7a07d/After-Elevate-Load-Balancing-with-Private-IP-Support-and-Cloudflare-Tunnels.png" />
            
            </figure><p>Cloudflare Load Balancing can easily and securely direct a user’s request to a specific origin within your private data center or public cloud using Cloudflare Tunnels</p><p>Combining Cloudflare Tunnels with Cloudflare Load Balancing allows you to remove your physical load balancers from your data center and have your Cloudflare load balancer reach out to your servers directly via their private IP addresses with health checks, steering, and all other Load Balancing features currently available. Instead of configuring your on-premise load balancer to expose each service and then updating your Cloudflare load balancer, you can configure everything, from the end user to the server handling the request, in a single place: the Cloudflare dashboard. On top of this, you can say goodbye to the multi-hundred-thousand-dollar price tag of hardware appliances, the incredible management overhead, and investment in a solution whose delivered value has a built-in time limit.</p><p>Load Balancing serves as the backbone for online services, ensuring seamless traffic distribution across servers or data centers. Traditional load balancing techniques often require exposing services on a data center’s public IP addresses, forcing organizations to create complex configurations vulnerable to security risks and potential data exposure. By harnessing the power of private IP support for Load Balancing in conjunction with Cloudflare Tunnels, Cloudflare is revolutionizing the way businesses <a href="https://www.cloudflare.com/application-services/solutions/">protect and optimize their applications</a>. 
With clear steps to <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/install-and-setup/tunnel-guide/">install</a> the cloudflared agent and connect your private network to Cloudflare’s network via Cloudflare Tunnels, directly and securely routing traffic into your data centers becomes easier than ever before!</p>
    <div>
      <h3>Publicly exposing services in private data centers is complicated</h3>
      <a href="#publicly-exposing-services-in-private-data-centers-is-complicated">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fGhHqn2IZj4gMUL24c7lb/16ec2f39fbb2fe37fd26eff7f6d2479f/Before-Elevate-Load-Balancing-with-Private-IP-Support-and-Cloudflare-Tunnels_-A-Secure-Path-to-Efficient-Traffic-Distributio.png" />
            
            </figure><p><sup>A visitor’s request hits a global traffic management (GTM) load balancer directing the request to a data center, then a firewall, then a local load balancer and then an origin</sup></p><p>Load balancing within a private data center can be expensive and difficult to manage. Keeping security first while ensuring ease of use and flexibility for your internal workforce is a tricky balance to strike. The challenge is not only how to securely expose internal services, but also how to best balance traffic between servers at a single location within your private network!</p><p>In a private data center, even a very simple website can be fairly complex in terms of networking and configuration. Let’s walk through a simple example of a customer device connecting to a website. The device performs a DNS lookup for the business’s website and receives a public IP address corresponding to a load balancer in one of the business’s data centers. The device then makes an HTTPS request to that IP address, passing the original hostname via <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">Server Name Indication</a> (SNI). The load balancer forwards the request to the corresponding origin server and returns the response to the customer device.</p><p>This example doesn’t include any advanced functionality, yet the stack is already difficult to configure:</p><ul><li><p>Expose the service or server on a private IP.</p></li><li><p>Configure your data center’s networking to expose the load balancer on a public IP or IP range.</p></li><li><p>Configure your load balancer to forward requests for that hostname and/or public IP to your server’s private IP.</p></li><li><p>Configure a DNS record for your domain to point to your load balancer’s public IP.</p></li></ul><p>In large enterprises, each of these configuration changes likely requires approval from several stakeholders and must be made through different repositories, websites, and/or private web interfaces. 
Load balancer and networking configurations are often maintained as complex configuration files for Terraform, Chef, Puppet, Ansible, or a similar infrastructure-as-code service. These configuration files can be syntax checked, but they are rarely tested thoroughly prior to deployment: each deployment environment is unique enough that thorough testing is usually not feasible given the time and hardware required. This means that changes to these files can negatively affect other services within the data center. In addition, opening up an ingress to your data center <a href="https://www.cloudflare.com/learning/security/what-is-an-attack-surface/">widens the attack surface</a> for security risks such as <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a> or catastrophic data breaches. To make things worse, each vendor has a different interface or API for configuring their devices or services. For example, some <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a> only have XML APIs while others have JSON REST APIs. Each device may have different Terraform providers or Ansible playbooks. The result is complex configuration that accumulates over time, is difficult to consolidate or standardize, and inevitably becomes technical debt.</p><p>Now let’s add additional origins. For each additional origin for our service, we have to set up and expose that origin and configure the physical load balancer to use it. Now let’s add another data center. We need yet another solution to distribute traffic across our data centers. This leaves one system for global traffic management and another, separate system for local traffic. These solutions have historically come from different vendors and must be configured in different ways, even though they serve the same purpose: load balancing. 
This makes managing your web traffic unnecessarily difficult. Why should you have to configure your origins in two different load balancers? Why can’t you manage all the traffic for all the origins of a service in the same place?</p>
    <div>
      <h3>Simpler and better: Load Balancing with Tunnels</h3>
      <a href="#simpler-and-better-load-balancing-with-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uBuu8as4xTsf4QLZtpjt7/b361efe686342832c1c1de254e22cccf/pasted-image-0--1-.png" />
            
            </figure><p>Cloudflare Load Balancing can manage traffic for all your offices, data centers, remote users, public clouds, private clouds and hybrid clouds in one place</p><p>With Cloudflare Load Balancing and Cloudflare Tunnel, you can manage all your public and private origins in one place: the Cloudflare dashboard. Cloudflare load balancers can be configured through either the dashboard UI or the Cloudflare API, with full parity between the two. There’s no need to <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a> or open a remote desktop to modify load balancer configurations for your public or private servers.</p><p>With Cloudflare Tunnel <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/install-and-setup/tunnel-guide/">set up and running</a> in your data center, everything is ready to connect your origin servers to Cloudflare’s network and load balancers. You do not need to configure any ingress to your data center, since Cloudflare Tunnel operates only over outbound connections and can securely reach privately addressed services inside your data center. To expose your service to Cloudflare, you just <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/private-net/tunnel-virtual-networks/#route-ips-over-virtual-networks">set up your private IP range to be routed over that tunnel</a>. Then, you can create a Cloudflare load balancer and input the corresponding private IP address and <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-management/">virtual network ID into your origin pool</a>. After that, Cloudflare manages the DNS and load balancing across your private servers. 
Now your origin receives traffic exclusively via Cloudflare Tunnel, and your physical load balancer is no longer needed!</p><p>This groundbreaking integration enables organizations to deploy load balancers while keeping their applications securely shielded from the public Internet. The customer’s traffic passes through Cloudflare’s data centers, allowing customers to continue to take full advantage of Cloudflare’s security and performance services. And by leveraging Cloudflare Tunnels, traffic between Cloudflare and customer origins remains isolated within trusted networks, bolstering privacy, security, and peace of mind.</p>
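<p>The origin pool step can also be driven via the API. The sketch below only builds the JSON body for creating a pool whose origin is a private IP on a tunnel's virtual network; treat the exact field names as an assumption to verify against the current Load Balancing API docs, and the IDs here are placeholders:</p>

```python
import json

def private_pool_payload(name, private_ip, virtual_network_id):
    # Origin pool with a single private-IP origin. The virtual network ID
    # ties the address to the Cloudflare Tunnel virtual network that can
    # route it, since 10.x addresses are only meaningful per network.
    return {
        "name": name,
        "origins": [
            {
                "name": f"{name}-origin-1",
                "address": private_ip,
                "enabled": True,
                "virtual_network_id": virtual_network_id,  # placeholder ID
            }
        ],
    }

payload = private_pool_payload("datacenter-a", "10.0.0.5", "vnet-placeholder-id")
# POST this to /accounts/{account_id}/load_balancers/pools with your API token.
print(json.dumps(payload, indent=2))
```

<p>From there, the pool is attached to a load balancer exactly like one with public origins, which is the point: private and public infrastructure share one configuration model.</p>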
    <div>
      <h3>The advantages of Private IP support with Cloudflare Tunnels</h3>
      <a href="#the-advantages-of-private-ip-support-with-cloudflare-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zxhWEuT3ZCugzEXSDSHiL/ce99377d9cdbd29aebe95130ac8dd6d1/pasted-image-0--2-.png" />
            
            </figure><p><sup>Cloudflare Load Balancing works in conjunction with all the security and privacy products that Cloudflare has to offer including DDoS protection, Web Application Firewall and Bot Management</sup></p><p><b>Unified Traffic Management</b>: All the features and ease of use that were part of Cloudflare Load Balancing for Global Traffic Management are also available with Private Network Load Balancing. You can configure your public and private origins in one dashboard as opposed to several services and vendors. Now, all your private origins can benefit from the features that Cloudflare Load Balancing is known for: instant failover, customizable steering between data centers, ease of use, custom rules, and configuration updates in a matter of seconds. They will also benefit from our newer features including least connection steering, least outstanding request steering, and session affinity by header. This is just a small subset of the expansive feature set for Load Balancing. See our <a href="https://developers.cloudflare.com/load-balancing/"><u>dev docs</u></a> for more features and details on the offering.</p><p><b>Enhanced Security</b>: By combining private IP support with Cloudflare Tunnels, organizations can fortify their security posture and protect sensitive data. With private IP addresses and encrypted connections via Cloudflare Tunnel, the risk of unauthorized access and potential attacks is significantly reduced: traffic remains within trusted networks. You can also configure <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/">Cloudflare Access</a> to add single sign-on support for your application and restrict your application to a subset of authorized users. 
In addition, you still benefit from Firewall rules, Rate Limiting rules, Bot Management, DDoS protection, and all the other Cloudflare products available today, enabling comprehensive security configurations.</p><p><b>Uncompromising Privacy</b>: As data privacy continues to take center stage, businesses must ensure the confidentiality of user information. Cloudflare's private IP support with Cloudflare Tunnels enables organizations to segregate applications and keep sensitive data within their private network boundaries. Custom rules also allow you to direct traffic for specific devices to specific data centers. For example, you can use custom rules to direct traffic from Eastern and Western Europe to your European data centers, so you can easily keep those users’ data within Europe. This minimizes the exposure of data to external entities, preserving user privacy and complying with strict privacy regulations across different geographies.</p><p><b>Flexibility &amp; Reliability</b>: Scale and adaptability are major foundations of a well-operating business. <a href="https://www.cloudflare.com/learning/access-management/how-to-implement-zero-trust/">Implementing solutions</a> that fit your business’ needs today is not enough. Customers must find solutions that meet their needs for the next three or more years. The blend of Load Balancing with Cloudflare Tunnels within our <a href="https://www.cloudflare.com/zero-trust/solutions/">Zero Trust solution</a> is the very definition of flexibility and reliability! Changes to load balancer configurations propagate around the world in a matter of seconds, making load balancers an effective way to respond to incidents. Also, instant failover, health monitoring, and steering policies all help to maintain high availability for your applications, so you can deliver the reliability that your users expect. 
This is all in addition to deeply integrated, best-in-class <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> capabilities such as, but not limited to, Secure Web Gateway (SWG), remote browser isolation, network logs, and data loss prevention.</p><p><b>Streamlined Infrastructure</b>: Organizations can consolidate their network architecture and establish secure connections across distributed environments. This unification reduces complexity, lowers operational overhead, and facilitates efficient resource allocation. Whether you need a global traffic manager to intelligently direct traffic between data centers within your private network, or steering between specific servers after data center selection has taken place, there is now a clear, single lens to manage your global and local traffic, regardless of whether the source or destination of the traffic is public or private. Complexity can be a large hurdle in achieving and maintaining fast, agile business units. Consolidating into a single provider, like Cloudflare, that provides security, reliability, and observability will not only save significant cost but also allow your teams to move faster and focus on growing the business, enhancing critical services, and developing incredible features, rather than patching together infrastructure that may not work in a few years. Leave the heavy lifting to us, and let us empower you and your team to focus on creating amazing experiences for your employees and end users.</p><p>The lack of agility, flexibility, and lean operations in hardware appliances for local traffic does not justify the hundreds of thousands of dollars spent on them, nor the huge overhead of managing CPU, memory, power, cooling, and more. 
Instead, we want to help businesses move this logic to the cloud by abstracting away the needless overhead and bringing the focus back to teams doing what they do best, building amazing experiences, while allowing Cloudflare to do what we do best: protecting, accelerating, and delivering heightened reliability. Stay tuned for more updates on Cloudflare's Private Network Load Balancing and how it can reduce architecture complexity while bringing more insight, security, and control to your teams. In the meantime, check out our new <a href="https://cf-assets.www.cloudflare.com/slt3lc6tev37/7siMQh0goJJnH4PYbAzOxC/f4a66ebdf20cca2ec85c2b9261fb8a38/Optimize-Web-Performance.pdf"><u>whitepaper</u></a>!</p>
    <div>
      <h3>Looking to the future</h3>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>Private IP support for Load Balancing with Cloudflare Tunnels, part of our Zero Trust solution, reaffirms our commitment to providing cutting-edge tools that prioritize security, privacy, and performance. By leveraging private IP addresses and secure tunnels, Cloudflare empowers businesses to fortify their network infrastructure while ensuring compliance with regulatory requirements. With enhanced security, uncompromising privacy, and streamlined infrastructure, load balancing becomes a powerful driver of efficient and secure public or private services.</p><p>As a business grows and its systems scale up, it will need the features that Cloudflare Load Balancing is known for: health monitoring, steering, and failover. As availability requirements increase due to growing demands and standards from end users, customers can add health checks, enabling automatic failover to healthy servers when a server begins to fail. When the business begins to receive more traffic from around the world, it can create new pools for different regions and use dynamic steering to reduce latency between the user and the server. For intensive or long-running requests, such as complex datastore queries, customers can benefit from leveraging <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/origin-level-steering/least-outstanding-requests-pools/"><u>least outstanding requests steering</u></a> to reduce the number of concurrent requests per server. Previously, all of this was possible only with publicly addressable IPs; now it is available for pools with public IPs, private servers, or combinations of the two. Private Network Load Balancing is live and ready to use today! 
Check out our <a href="https://developers.cloudflare.com/load-balancing/understand-basics/traffic-management/"><u>dev docs for instructions on how to get started</u></a>.</p><p>Stay tuned for our next addition: new Load Balancing onramp support for Spectrum and WARP with Cloudflare Tunnels and private IPs for your <a href="https://developers.cloudflare.com/fundamentals/get-started/concepts/network-layers/">Layer 4</a> traffic, allowing us to support TCP and UDP applications in your private data centers!</p>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Traffic]]></category>
            <guid isPermaLink="false">6WBtRI0c6K4SqCsCAd3hgn</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Mathew Jacob</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare is deprecating Railgun]]></title>
            <link>https://blog.cloudflare.com/deprecating-railgun/</link>
            <pubDate>Thu, 01 Jun 2023 13:00:39 GMT</pubDate>
            <description><![CDATA[ Cloudflare will deprecate Railgun on January 31, 2024 ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1IBFHWjg3uScvelj1OHZ75/c644e331d55785b32401b1ca68fbda82/image1-63.png" />
            
            </figure><p>Cloudflare will deprecate the <a href="https://www.cloudflare.com/website-optimization/railgun/">Railgun product</a> on January 31, 2024. At that time, existing Railgun deployments and connections will stop functioning. Customers have the next eight months to migrate to a supported Cloudflare alternative, which will vary based on use case.</p><p>Cloudflare first launched Railgun more than ten years ago. Since then, we have released several products in different areas that better address the problems that Railgun set out to solve. However, we shied away from the work to formally deprecate Railgun.</p><p>That reluctance led to Railgun stagnating, and customers suffered the consequences. We did not invest time in better support for Railgun. Feature requests never moved. Maintenance work still needed to occur, and it stole resources away from improving the Railgun replacements. We allowed customers to deploy a zombie product and, starting with this deprecation, we are excited to correct that by helping teams move to significantly better alternatives that are now available in Cloudflare’s network.</p><p>We know that this will require migration effort from Railgun customers over the next eight months. We want to make that as smooth as possible. Today’s announcement features recommendations on how to choose a replacement, how to get started, and guidance on where you can reach us for help.</p>
    <div>
      <h3>What is Railgun?</h3>
      <a href="#what-is-railgun">
        
      </a>
    </div>
    <p>Cloudflare’s reverse proxy <a href="https://www.cloudflare.com/application-services/solutions/">secures and accelerates your applications</a> by placing a Cloudflare data center in over 285 cities between your infrastructure and your audience. Bad actors attempting to attack your applications hit our network first, where products like our WAF and DDoS mitigation service stop them. Your visitors and users connect to our data centers, where our cache can serve them content without the need to reach all the way back to your origin server.</p><p>For some customers, your infrastructure also runs on Cloudflare’s network in the form of Cloudflare Workers. Others maintain origin servers running on anything from a Raspberry Pi to a hyperscale public cloud. In those cases, Cloudflare needs to connect to that infrastructure to grab new content that our network can serve from our cache to your audience.</p><p>However, some content cannot be cached. Dynamically-generated or personalized pages can change for every visitor and every session. Cloudflare Railgun <a href="/railgun-in-the-real-world/">aimed to solve</a> that by determining the minimum amount of content that changed and sending only that difference in an efficient transfer, a form of <a href="/efficiently-compressing-dynamically-generated-53805/">delta compression</a>. By reducing the amount of content that needed to be sent to Cloudflare’s network, we could accelerate page loads for end users.</p><p>Railgun accomplishes this goal by running a piece of software inside the customer’s environment, the Railgun listener, and a corresponding service running in Cloudflare’s network, the Railgun sender. The pair establish a permanent TCP connection. The listener keeps track of the most recent version of a page that was requested. 
When a request arrives for a known page, the listener sends an HTTP request to the origin server, determines what content changed, and then compresses and sends only the delta to the sender in Cloudflare’s network.</p>
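<p>The "send only what changed" idea can be sketched with Python's <code>difflib</code>. This is a toy stand-in; Railgun's real delta encoding and compression were far more sophisticated:</p>

```python
import difflib

def make_delta(old: str, new: str):
    # Encode `new` as edit operations against `old`: "copy" ranges reuse
    # the version the other side already has; "insert" carries only the
    # bytes that actually changed.
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=old, b=new).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))
        else:
            ops.append(("insert", new[j1:j2]))
    return ops

def apply_delta(old: str, ops):
    # Rebuild the new page from the cached old page plus the delta.
    return "".join(old[op[1]:op[2]] if op[0] == "copy" else op[1] for op in ops)

old = "<html><body>Hello, Alice! Total: $10</body></html>"
new = "<html><body>Hello, Bob! Total: $12</body></html>"
delta = make_delta(old, new)
assert apply_delta(old, delta) == new

# Only the inserted bytes (plus small copy instructions) need to travel:
changed = sum(len(op[1]) for op in delta if op[0] == "insert")
print(changed, "of", len(new), "bytes changed")
```

<p>For a largely static page with a small personalized region, the delta is a tiny fraction of the full response, which is exactly the saving Railgun chased for uncacheable content.</p>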
    <div>
      <h3>Why deprecate a product?</h3>
      <a href="#why-deprecate-a-product">
        
      </a>
    </div>
    <p>The last major release of Railgun took place eight years ago in 2015. However, products should not be deprecated just because active development stops. We believe that a company should retire a product only when:</p><ul><li><p>maintenance impacts the ability to focus on solving new problems for customers, and</p></li><li><p>improved alternatives exist for customers to adopt as a replacement.</p></li></ul><p>Hundreds of customers still use Railgun today, and the service has continued to run over the last decade without too much involvement from our team. That relative stability deterred us from pushing customers to adopt newer technologies that solved the same problems. As a result, we kept Railgun in a sort of maintenance mode for the last few years.</p>
    <div>
      <h3>Why deprecate Railgun now?</h3>
      <a href="#why-deprecate-railgun-now">
        
      </a>
    </div>
    <p>Cloudflare’s network has evolved in the eight years since the last Railgun release. We deploy hardware and run services in more than 285 cities around the world, nearly <a href="/panama-expands-cloudflare-network-to-50-countries/">tripling</a> the number of cities since Railgun was last updated. The hardware itself also advanced, becoming more <a href="/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/">efficient and capable</a>.</p><p>The software platform of Cloudflare’s network developed just as fast. Every data center in Cloudflare’s network can run every service that we provide to our customers. These services range from our traditional reverse proxy products to forward proxy services like Zero Trust to our compute and storage platform Cloudflare Workers. Supporting such a broad range of services requires a platform that can adapt to the evolving requirements of these products.</p><p>Maintaining Railgun, despite having better alternatives, creates a burden on our ability to continue investing in new solutions. Some of the tools that power Railgun are themselves approaching end of life. Others will likely present security risks that we are not comfortable accepting in the next few years.</p><p>We considered several options before deciding on deprecation. First, we could accept the consequences of inaction, leaving our network in a worse state and our Railgun customers in purgatory. Second, we could run Railgun on dedicated infrastructure and silo it from the rest of our network. However, that would violate our principle that every piece of hardware in Cloudflare runs every service.</p><p>Third, we could spin up a new engineering team and rebuild Railgun from scratch in a modern way. Doing so would take away from resources we could otherwise invest in newer technologies. We also believe that existing, newer products from Cloudflare solve the same problems that Railgun set out to address. 
Rebuilding Railgun would take away from our ability to keep shipping and would duplicate better features already released in other products. As a result, we have decided to deprecate Railgun.</p>
    <div>
      <h3>What alternatives are available?</h3>
      <a href="#what-alternatives-are-available">
        
      </a>
    </div>
    <p>Railgun addressed a number of problems for our customers at launch. Today, we have solutions available that solve the same range of challenges in significantly improved ways.</p><p>We do not have an exact like-for-like successor for Railgun. The solutions that solve the same set of problems have also evolved with our customers. Different use cases that customers deploy Railgun to address will map to different solutions available in Cloudflare today. We have broken out some of the most common reasons that customers used Railgun and where we recommend they consider migrating.</p><p><b>“I use Railgun to maintain a persistent, secure connection to Cloudflare’s network without the need for a static publicly available IP address.”</b> Customers can deploy <a href="https://www.cloudflare.com/products/tunnel/">Cloudflare Tunnel</a> to connect their infrastructure to Cloudflare’s network without the need to expose a public IP address. Cloudflare Tunnel software runs in your environment, similar to the Railgun listener, and creates an outbound-only connection to Cloudflare’s network. Cloudflare Tunnel is available at no cost.</p><p><b>“I use Railgun to front multiple services running in my infrastructure.”</b> Cloudflare Tunnel <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-guide/remote/">can be deployed</a> in this type of bastion mode to support multiple services running behind it in your infrastructure. You can use Tunnel to support services beyond just HTTP servers, and you can deploy replicas of the Cloudflare Tunnel connector for high availability.</p><p><b>“I use Railgun for performance improvements.”</b> Cloudflare has invested significantly in performance upgrades in the eight years since the last release of Railgun. 
This list is not comprehensive, but highlights some areas where performance can be significantly improved by adopting newer services relative to using Railgun.</p><ul><li><p>Cloudflare Tunnel features Cloudflare’s <a href="https://www.cloudflare.com/pg-lp/argo-smart-routing/">Argo Smart Routing</a> technology, a service that delivers both “middle mile” and last mile optimization, <a href="/argo-v2/">reducing round trip</a> time by up to 40%. Web assets using Argo perform, on average, 30% faster overall.</p></li><li><p><a href="/cloudflare-network-interconnect/">Cloudflare Network Interconnect</a> (CNI) gives customers the ability to directly connect to our network, either virtually or physically, to improve the reliability and performance of the connection between Cloudflare’s network and your infrastructure. CNI customers have a dedicated on-ramp to Cloudflare for their origins.</p></li></ul><p><b>“I use Railgun to reduce the amount of data that egresses from my infrastructure to Cloudflare.”</b> Certain public cloud providers <a href="/aws-egregious-egress/">charge egregious egress</a> fees for you to move your own data outside their environment. We believe that degrades an open Internet and locks in customers. We have spent the last several years investing in ways to reduce or eliminate these fees altogether.</p><ul><li><p>Members of the <a href="https://www.cloudflare.com/bandwidth-alliance/">Bandwidth Alliance</a> mutually agree to waive transfer fees. If your infrastructure runs in Oracle Cloud, Microsoft Azure, Google Cloud, Backblaze and more than a dozen other providers you pay zero cost to send data to Cloudflare.</p></li><li><p>Cloudflare’s <a href="https://www.cloudflare.com/products/r2/">R2 storage product</a> charges customers zero egress fees as well. 
R2 provides global object storage with an <a href="https://www.cloudflare.com/developer-platform/solutions/s3-compatible-object-storage/">S3-compatible</a> API and easy migration to give customers the ability to build multi-cloud architectures.</p></li></ul>
    <div>
      <h3>What is the timeline?</h3>
      <a href="#what-is-the-timeline">
        
      </a>
    </div>
    <p>From the time of this announcement, customers have eight months available to migrate away from Railgun. January 31, 2024, will be the last day that Railgun connections will be supported. Starting on February 1, 2024, existing Railgun connections will stop functioning.</p><p>Over the next few days we will prevent new Railgun deployments from being created. Zones with Railgun connections already established will continue to function during the migration window.</p>
    <div>
      <h3>How can I get help?</h3>
      <a href="#how-can-i-get-help">
        
      </a>
    </div>
    <p>Contract customers can reach out to their Customer Success team to discuss additional questions or migration plans. Each of Cloudflare’s regions has a specialist available to guide teams that need additional help during the migration.</p><p>Customers can also raise questions and provide commentary in <a href="https://community.cloudflare.com/t/cloudflare-is-deprecating-railgun/516753">this dedicated forum room</a>. We will continue to staff that discussion and respond to questions as customers share them.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Railgun customers will also receive an email notice later today about the deprecation plan and timeline. We will continue sending email notices multiple times over the next eight months leading up to the deprecation.</p><p>We are grateful to the Railgun customers who first selected Cloudflare to accelerate the applications and websites that power their business. We are excited to share with them the latest Cloudflare features that will continue to make their applications faster as they reach their audiences.</p> ]]></content:encoded>
            <category><![CDATA[Railgun]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">7m4ljf07IEVPS8sEowilh0</guid>
            <dc:creator>Sam Rhea</dc:creator>
        </item>
        <item>
            <title><![CDATA[Protect your key server with Keyless SSL and Cloudflare Tunnel integration]]></title>
            <link>https://blog.cloudflare.com/protect-your-key-server-with-keyless-ssl-and-cloudflare-tunnel-integration/</link>
            <pubDate>Thu, 16 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Now, customers will be able to use our Cloudflare Tunnels product to send traffic to the key server through a secure channel, without publicly exposing it to the rest of the Internet ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re excited to announce a big security enhancement to our Keyless SSL offering. Keyless SSL allows customers to store their private keys on their own hardware, while continuing to use Cloudflare’s proxy services. In the past, the configuration required customers to expose the location of their key server through a DNS record, something that is publicly queryable. Now, customers will be able to use our Cloudflare Tunnel product to send traffic to the key server through a secure channel, without publicly exposing it to the rest of the Internet.</p>
    <div>
      <h3>A primer on Keyless SSL</h3>
      <a href="#a-primer-on-keyless-ssl">
        
      </a>
    </div>
    <p>Security has always been a critical aspect of online communication, especially when it comes to protecting sensitive information. Today, Cloudflare manages private keys for millions of domains, which allows the data communicated by a client to stay secure and encrypted. While Cloudflare adopts the strictest controls to secure these keys, certain industries such as financial or medical services may have compliance requirements that prohibit the sharing of private keys. In the past, Cloudflare required customers to upload their private key in order for us to provide our L7 services. That was, until we built out Keyless SSL in 2014, a feature that allows customers to keep their private keys stored on their own infrastructure while continuing to make use of Cloudflare’s services.</p><p>While Keyless SSL is compatible with any hardware that supports the PKCS#11 standard, Keyless SSL users frequently opt to secure their private keys within HSMs (Hardware Security Modules), which are specialized machines designed to be tamper-proof, resistant to unauthorized access or manipulation, secure against attacks, and optimized to efficiently execute cryptographic operations such as signing and decryption. To make it easy for customers to set this up, during Security Week in 2021, we <a href="/keyless-ssl-supports-fips-140-2-l3-hsm/">launched</a> integrations between Keyless SSL and HSM offerings from all major cloud providers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wuHcCOkcDcvmXTGLrFQho/2b984dacb313cd7e85da2fb2e57e3321/image1-36.png" />
            
            </figure>
    <div>
      <h3>Strengthening the security of key servers even further</h3>
      <a href="#strengthening-the-security-of-key-servers-even-further">
        
      </a>
    </div>
    <p>In order for Cloudflare to communicate with a customer’s key server, we have to know the IP address associated with it. To configure Keyless SSL, we ask customers to create a DNS record that indicates the IP address of their key server. As a security measure, we ask customers to keep this record under a long, random hostname such as “11aa40b4a5db06d4889e48e2f738950ddfa50b7349d09b5f.example.com”. While this adds a layer of obfuscation to the location of the key server, it does expose the IP address of the key server to the public Internet, allowing anyone to send requests to that server. We lock down the connection between Cloudflare and the Keyless server with mutual TLS, so that the Keyless server only accepts a request if it is served the Cloudflare client certificate associated with the Keyless client. While this allows the key server to drop any requests with an invalid or missing client certificate, the key server is still publicly exposed, making it susceptible to attacks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jRWZW8bTeCVotnsB8nbwI/1b83910f4917f4555abd27992a3378db/image6-8.png" />
            
            </figure><p>Instead, Cloudflare should be the only party that knows about this key server’s location, as it should be the only party making requests to it.</p>
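<p>At the TLS layer, the behavior described above (dropping any request that lacks a valid Cloudflare client certificate) corresponds to requiring and verifying client certificates on the server side. Here is a rough Go sketch, with a placeholder CA pool rather than Cloudflare’s actual client-certificate CA:</p>

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

// keylessTLSConfig returns a server-side TLS config that refuses any
// connection whose client certificate does not chain to the given CA.
// In a real deployment the CA would be Cloudflare's client-certificate
// CA; here the PEM input is a placeholder.
func keylessTLSConfig(clientCAPEM []byte) (*tls.Config, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(clientCAPEM) {
		return nil, fmt.Errorf("no valid CA certificates found")
	}
	return &tls.Config{
		// Reject handshakes without a valid, verifiable client certificate.
		ClientAuth: tls.RequireAndVerifyClientCert,
		ClientCAs:  pool,
		MinVersion: tls.VersionTLS12,
	}, nil
}

func main() {
	_, err := keylessTLSConfig([]byte("not a certificate"))
	fmt.Println("rejects invalid CA input:", err != nil)
}
```

With a config like this, the handshake itself fails for unauthorized clients; the key server never has to process an untrusted request body.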
    <div>
      <h3>Enter: Cloudflare Tunnel</h3>
      <a href="#enter-cloudflare-tunnel">
        
      </a>
    </div>
    <p>Instead of re-inventing the wheel, we decided to make use of an existing Cloudflare product that our customers use to protect the connections between Cloudflare and their origin servers — Cloudflare Tunnels!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/efEdmsMhMKZINZeiqaEbj/19f607592f8a07ae67ab9ac9aad574b6/image4-11.png" />
            
            </figure><p>Cloudflare Tunnel gives customers the tools to connect incoming traffic to their private networks without exposing those networks to the Internet through a public hostname. It works by having customers install a Cloudflare daemon, called “cloudflared”, which Cloudflare’s client will then connect to.</p><p>Now, customers will be able to use the same functionality, but for connections made to their key server.</p>
    <div>
      <h3>Getting started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7cwgNSFiwyruWuHOlNDBKO/ff04a1fc33f4de532cea36570e7fa712/image2-20.png" />
            
            </figure><p>To set this up, customers will need to configure a virtual network on Cloudflare: this is where customers will tell us the IP address or hostname of their key server. Then, when uploading a Keyless certificate, instead of telling us the public hostname associated with the key server, customers will be able to tell us the virtual network that resolves to it. When making requests to the key server, Cloudflare’s gokeyless client will automatically connect to the “cloudflared” server and will continue to use Mutual TLS as an additional security layer on top of that connection. For more instructions on how to set this up, check out our <a href="https://developers.cloudflare.com/ssl/keyless-ssl/configuration/">Developer Docs</a>.</p><p>If you’re an Enterprise customer and are interested in using Keyless SSL in conjunction with Cloudflare Tunnels, reach out to your account team today to get set up.</p>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">1FZfLi9GFmCGG0PEQwLhHw</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Give us a ping. (Cloudflare) One ping only.]]></title>
            <link>https://blog.cloudflare.com/the-most-exciting-ping-release/</link>
            <pubDate>Fri, 13 Jan 2023 14:00:00 GMT</pubDate>
            <description><![CDATA[ Now Zero Trust administrators can use the familiar debugging tools that we all know and love like ping, traceroute, and MTR to test connectivity to private network destinations running behind their Tunnels ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nZa6ahqyj7z2sii9QERbV/2c9ee66f5628c47da9a20fab9c85516e/image1-35.png" />
            
            </figure><p>Ping was born in 1983 when the Internet needed a simple, effective way to measure reachability and distance. In short, ping (and subsequent utilities like traceroute and MTR)  provides users with a quick way to validate whether one machine can communicate with another. Fast-forward to today and these network utility tools have become ubiquitous. Not only are they now the de facto standard for troubleshooting connectivity and network performance issues, but they also improve our overall quality of life by acting as a common suite of tools almost all Internet users are comfortable employing in their day-to-day roles and responsibilities.</p><p>Making network utility tools work as expected is very important to us, especially now as more and more customers are building their private networks on Cloudflare. Over 10,000 teams now run a private network on Cloudflare. Some of these teams are among the world's largest enterprises, some are small crews, and yet others are hobbyists, but they all want to know - can I reach that?</p><p>That’s why today we’re excited to incorporate support for these utilities into our already expansive troubleshooting toolkit for Cloudflare Zero Trust. To get started, <a href="https://forms.gle/gpfGAJW2jsxykC6y9">sign up</a> to receive beta access and start using the familiar debugging tools that we all know and love like ping, traceroute, and MTR to test connectivity to private network destinations running behind Tunnel.</p>
    <div>
      <h2>Cloudflare Zero Trust</h2>
      <a href="#cloudflare-zero-trust">
        
      </a>
    </div>
    <p>With Cloudflare Zero Trust, we’ve made it <a href="/ridiculously-easy-to-use-tunnels/">ridiculously easy</a> to build your private network on Cloudflare. In fact, it takes just three steps to get started. First, download Cloudflare’s device client, WARP, to connect your users to Cloudflare. Then, create identity and device aware policies to determine who can reach what within your network. And finally, connect your network to Cloudflare with Tunnel directly from the Zero Trust dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Fn9l1D4DFiBYv2JSmpT1Z/c8566a62163b04b8dafb8752f1dd7104/Untitled-1.png" />
            
            </figure><p>We’ve designed Cloudflare Zero Trust to act as a single pane of glass for your organization. This means that after you’ve deployed <i>any</i> part of our <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> solution, whether that be <a href="https://www.cloudflare.com/learning/access-management/what-is-ztna/">ZTNA</a> or <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">SWG</a>, you are clicks, not months, away from deploying <a href="https://www.cloudflare.com/products/zero-trust/browser-isolation/">Browser Isolation</a>, <a href="https://www.cloudflare.com/products/zero-trust/dlp/">Data Loss Prevention</a>, <a href="https://www.cloudflare.com/products/zero-trust/casb/">Cloud Access Security Broker</a>, and <a href="https://www.cloudflare.com/products/zero-trust/email-security/">Email Security</a>. This is a stark contrast from other solutions on the market which may require distinct implementations or have limited interoperability across their portfolio of services.</p><p>It’s that simple, but if you’re looking for more prescriptive guidance watch our <a href="https://www.cloudflare.com/products/zero-trust/interactive-demo/">demo</a> below to get started:</p><div></div>
<p>To get started, sign up for early access to the closed beta. If you’re interested in learning more about how it works and what else we will be launching in the future, keep scrolling.</p>
    <div>
      <h2>So, how do these network utilities actually work?</h2>
      <a href="#so-how-do-these-network-utilities-actually-work">
        
      </a>
    </div>
    <p>Ping, traceroute and MTR are all powered by the same underlying <a href="https://www.cloudflare.com/learning/network-layer/what-is-a-protocol/">protocol</a>, ICMP. Every <a href="https://www.cloudflare.com/learning/ddos/glossary/internet-control-message-protocol-icmp/">ICMP</a> message has 8-bit type and code fields, which define the purpose and semantics of the message. While ICMP has many types of messages, the network diagnostic tools mentioned above make specific use of the echo request and echo reply message types.</p><p>In addition to its type and code, every ICMP message carries a checksum, and echo messages also carry an identifier and a sequence number. As you may have guessed from the name, an echo reply is generated in response to the receipt of an echo request, and critically, the request and reply have matching identifiers and sequence numbers. Make a mental note of this fact as it will be useful context later in this blog post.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7D6dGG8IM5rnQXjS4easil/c691a4f6500fe4fd901e6fa33d0377a5/ICMP-header-format.png" />
            
            </figure>
    <div>
      <h2>A crash course in ping, traceroute, and MTR</h2>
      <a href="#a-crash-course-in-ping-traceroute-and-mtr">
        
      </a>
    </div>
    <p>As you may expect, each one of these utilities comes with its own unique nuances, but don’t worry. We’re going to provide a quick refresher on each before getting into the nitty-gritty details.</p>
    <div>
      <h3>Ping</h3>
      <a href="#ping">
        
      </a>
    </div>
    <p>Ping works by sending a sequence of echo request packets to the destination. Each router hop between the sender and destination decrements the TTL field of the IP packet containing the ICMP message and forwards the packet to the next hop. If a hop decrements the TTL to 0 before reaching the destination, or doesn’t have a next hop to forward to, it will return an ICMP error message – “TTL exceeded” or “Destination host unreachable” respectively – to the sender. A destination which speaks ICMP will receive these echo request packets and return matching echo replies to the sender. The same process of traversing routers and TTL decrementing takes place on the return trip. On the sender’s machine, ping reports the final TTL of these replies, as well as the roundtrip latency of sending and receiving the ICMP messages to the destination. From this information a user can determine the distance between themselves and the origin server, both in terms of number of network hops and time.</p>
    <div>
      <h3>Traceroute and MTR</h3>
      <a href="#traceroute-and-mtr">
        
      </a>
    </div>
    <p>As we’ve just outlined, while helpful, the output provided by ping is relatively simple. It does provide some useful information, but we will generally want to follow up this request with a traceroute to learn more about the specific path to a given destination. Similar to ping, traceroute starts by sending an ICMP echo request. However, it handles the TTL a bit differently. You can <a href="https://www.cloudflare.com/learning/network-layer/what-is-mtr/">learn more</a> about why that is the case in our <a href="https://www.cloudflare.com/learning/">Learning Center</a>, but the important takeaway is that this is how traceroute is able to map and capture the IP address of each unique hop on the network path. This output makes traceroute an incredibly powerful tool for understanding not only <i>if</i> a machine can connect to another, but also <i>how</i> it will get there! And finally, we’ll cover MTR. We’ve grouped traceroute and MTR together for now as they operate in an extremely similar fashion. In short, the output of an MTR will provide everything traceroute can, but with some additional, aggregate statistics for each unique hop. MTR will also run until explicitly stopped, allowing users to receive a statistical average for each hop on the path.</p>
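<p>The TTL mechanic that lets traceroute map each hop can be illustrated with a toy simulation. This is not how real traceroute is implemented (it sends actual probes with increasing TTLs and listens for “TTL exceeded” errors); the path and addresses here are invented purely for illustration:</p>

```go
package main

import "fmt"

// probe models a single probe with the given TTL: each router on the path
// decrements the TTL, and whichever hop sees it reach zero responds with
// "TTL exceeded", revealing its address. The final hop responds with an
// echo reply instead.
func probe(path []string, ttl int) string {
	for i, hop := range path {
		ttl--
		if ttl == 0 || i == len(path)-1 {
			return hop
		}
	}
	return "*" // no response within this TTL
}

// traceroute sends probes with TTL 1, 2, 3, ... collecting one responding
// hop per TTL, and stops once the destination itself replies.
func traceroute(path []string) []string {
	if len(path) == 0 {
		return nil
	}
	var hops []string
	for ttl := 1; ; ttl++ {
		hop := probe(path, ttl)
		hops = append(hops, hop)
		if hop == path[len(path)-1] {
			return hops
		}
	}
}

func main() {
	// Hypothetical three-hop path: a Cloudflare edge address, the
	// cloudflared host, then the application server.
	path := []string{"172.68.101.57", "172.16.10.100", "172.16.10.120"}
	fmt.Println(traceroute(path))
}
```

Each increment of the TTL exposes exactly one more router on the path, which is why traceroute output reads as an ordered list of hops.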
    <div>
      <h2>Checking connectivity to the origin</h2>
      <a href="#checking-connectivity-to-the-origin">
        
      </a>
    </div>
    <p>Now that we’ve had a quick refresher, let’s say I cannot connect to my private application server. With ICMP support enabled on my Zero Trust account, I could run a traceroute to see if the server is online.</p><p>Here is a simple example from one of our lab environments:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7auWBc7axco0ez11m2sOSd/e4c1fa9c86f91efe2282dc7800887cbc/ICMP-support-for-Warp-to-Tunnel_d.png" />
            
            </figure><p>Then, if my server is online, traceroute should output something like the following:</p>
            <pre><code>traceroute -I 172.16.10.120
traceroute to 172.16.10.120 (172.16.10.120), 64 hops max, 72 byte packets
 1  172.68.101.57 (172.68.101.57)  20.782 ms  12.070 ms  15.888 ms
 2  172.16.10.100 (172.16.10.100)  31.508 ms  30.657 ms  29.478 ms
 3  172.16.10.120 (172.16.10.120)  40.158 ms  55.719 ms  27.603 ms</code></pre>
            <p>Let’s examine this a bit deeper. Here, the first hop is the Cloudflare data center where my Cloudflare WARP device is connected via our <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/">Anycast</a> network. Keep in mind this IP may look different depending on your location. The second hop will be the server running cloudflared. And finally, the last hop is my application server.</p><p>Conversely, if I could not connect to my app server I would expect traceroute to output the following:</p>
            <pre><code>traceroute -I 172.16.10.120
traceroute to 172.16.10.120 (172.16.10.120), 64 hops max, 72 byte packets
 1  172.68.101.57 (172.68.101.57)  20.782 ms  12.070 ms  15.888 ms
 2  * * *
 3  * * *</code></pre>
            <p>In the example above, this means the ICMP echo requests are not reaching cloudflared. To troubleshoot, first I will make sure cloudflared is running by checking the status of the Tunnel in the <a href="https://dash.teams.cloudflare.com/">Zero Trust dashboard</a>. Then I will check if the Tunnel has a route to the destination IP. This can be found in the Routes column of the Tunnels table in the dashboard. If it does not, I will add a route to my Tunnel to see if this changes the output of my traceroute.</p><p>Once I have confirmed that cloudflared is running and the Tunnel has a route to my app server, traceroute will show the following:</p>
            <pre><code>traceroute -I 172.16.10.120
traceroute to 172.16.10.120 (172.16.10.120), 64 hops max, 72 byte packets
 1  172.68.101.57 (172.68.101.57)  20.782 ms  12.070 ms  15.888 ms
 2  172.16.10.100 (172.16.10.100)  31.508 ms  30.657 ms  29.478 ms
 3  * * *</code></pre>
            <p>However, it looks like we still can’t quite reach the application server. This means the ICMP echo requests reached cloudflared, but my application server isn’t returning echo replies. Now, I can narrow down the problem to my application server, or communication between cloudflared and the app server. Perhaps the machine needs to be rebooted or there is a firewall rule in place, but either way we have what we need to start troubleshooting the last hop. With ICMP support, we now have many network tools at our disposal to troubleshoot connectivity end-to-end.</p><p>Note that the route from cloudflared to the origin is always shown as a single hop, even if there are one or more routers between the two. This is because cloudflared creates its own echo request to the origin, instead of forwarding the original packets. In the next section we will explain the technical reason behind it.</p>
    <div>
      <h2>What makes ICMP traffic unique?</h2>
      <a href="#what-makes-icmp-traffic-unique">
        
      </a>
    </div>
    <p>A few quarters ago, Cloudflare Zero Trust <a href="/extending-cloudflares-zero-trust-platform-to-support-udp-and-internal-dns/">extended support for UDP</a> end-to-end as well. Since UDP and ICMP are both datagram-based protocols, within the Cloudflare network we can reuse the same infrastructure to proxy both UDP and ICMP traffic. To do this, we send the individual datagrams for either protocol over a QUIC connection using <a href="https://datatracker.ietf.org/doc/html/rfc9221">QUIC datagrams</a> between Cloudflare and the cloudflared instances within your network.</p><p>With UDP, we establish and maintain a <i>session</i> per client/destination pair, such that we are able to send <b>only</b> the UDP payload and a session identifier in datagrams. In this way, we don’t need to send the IP and port to which the UDP payload should be forwarded with every single packet.</p><p>However, with ICMP we decided that establishing a session like this is far too much overhead, given that typically only a handful of ICMP packets are exchanged between endpoints. Instead, we send the entire IP packet (with the ICMP payload inside) as a single datagram.</p><p>What this means is that cloudflared can read the destination of the ICMP packet from the IP header it receives. While this conveys the eventual destination of the packet to cloudflared, there is still work to be done to actually send the packet. Cloudflared cannot simply send out the IP packet it receives without modification, because the source IP in the packet is still the <i>original</i> client IP, and not a source that is routable to the cloudflared instance itself.</p><p>To receive ICMP echo replies in response to the ICMP packets it forwards, cloudflared must apply a source NAT to the packet. 
This means that when cloudflared receives an IP packet, it must complete the following:</p><ul><li><p>Read the destination IP address of the packet</p></li><li><p>Strip off the IP header to get the ICMP payload</p></li><li><p>Send the ICMP payload to the destination, meaning the source address of the ICMP packet will be the IP of a network interface to which cloudflared can bind</p></li><li><p>When cloudflared receives replies on this address, it must rewrite the destination address of the received packet (destination because the direction of the packet is reversed) to the original client source address</p></li></ul><p>Network Address Translation like this is done all the time for <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">TCP</a> and UDP, but is much easier in those cases because ports can be used to disambiguate cases where the source and destination IPs are the same. Since ICMP packets do not have ports associated with them, we needed to find a way to map packets received from the upstream back to the original source which sent cloudflared those packets.</p><p>For example, imagine that two clients 192.0.2.1 and 192.0.2.2 both send an ICMP echo request to a destination 10.0.0.8. As we previously outlined, cloudflared must rewrite the source IPs of these packets to a source address to which it can bind. In this scenario, when the echo replies come back, the IP headers will be identical: source=10.0.0.8 destination=&lt;cloudflared’s IP&gt;. So, how can cloudflared determine which packet needs to have its destination rewritten to 192.0.2.1 and which to 192.0.2.2?</p><p>To solve this problem, we use fields of the ICMP packet to track packet flows, in the same way that ports are used in TCP/UDP NAT. The field we’ll use for this purpose is the Echo ID. When an echo request is received, conformant ICMP endpoints will return an echo reply with the same identifier as was received in the request. 
This means we can send the packet from 192.0.2.1 with ID 23 and the one from 192.0.2.2 with ID 45, and when we receive replies with IDs 23 and 45, we know which one corresponds to each original source.</p><p>Of course this strategy only works for ICMP echo requests, which make up a relatively small percentage of the available ICMP message types. For security reasons, however, and owing to the fact that these message types are sufficient to implement the ubiquitous ping and traceroute functionality that we’re after, these are the only message types we currently support. We’ll talk through the security reasons for this choice in the next section.</p>
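<p>As an illustrative sketch (not cloudflared’s actual code), the flow tracking just described amounts to a table keyed on (chosen echo ID, destination) that maps back to (original echo ID, original source):</p>

```go
package main

import "fmt"

// flowKey identifies a forwarded echo request by the echo ID we chose
// and the destination we sent it to.
type flowKey struct {
	echoID uint16
	dest   string
}

// origin records what must be restored when the matching reply returns.
type origin struct {
	echoID uint16 // the ID the client originally used
	source string // the client address to rewrite the reply's destination to
}

// natTable tracks in-flight echo requests so replies can be routed back.
type natTable struct {
	flows  map[flowKey]origin
	nextID uint16
}

func newNATTable() *natTable {
	return &natTable{flows: make(map[flowKey]origin)}
}

// forward records a client's echo request and returns the rewritten
// echo ID to use toward the destination.
func (t *natTable) forward(src string, origID uint16, dest string) uint16 {
	t.nextID++ // naive ID allocation; real code must handle reuse and expiry
	t.flows[flowKey{t.nextID, dest}] = origin{origID, src}
	return t.nextID
}

// reply looks up an incoming echo reply by (echo ID, replying host) and
// returns the original source and echo ID to restore.
func (t *natTable) reply(echoID uint16, from string) (origin, bool) {
	o, ok := t.flows[flowKey{echoID, from}]
	return o, ok
}

func main() {
	tbl := newNATTable()
	// Two clients ping the same destination, as in the example above.
	idA := tbl.forward("192.0.2.1", 23, "10.0.0.8")
	idB := tbl.forward("192.0.2.2", 45, "10.0.0.8")
	a, _ := tbl.reply(idA, "10.0.0.8")
	b, _ := tbl.reply(idB, "10.0.0.8")
	fmt.Println(a.source, a.echoID, b.source, b.echoID)
}
```

Because the two forwarded requests carry distinct chosen IDs, the otherwise identical reply headers disambiguate themselves: the ID in each reply selects exactly one table entry.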
    <div>
      <h2>How to proxy ICMP without elevated permissions</h2>
      <a href="#how-to-proxy-icmp-without-elevated-permissions">
        
      </a>
    </div>
    <p>Generally, applications need to send ICMP packets through raw sockets. A raw socket gives the application control of the IP header, so it requires elevated privileges to open; by contrast, the IP headers of TCP and UDP packets are added on send and removed on receive by the operating system. To adhere to security best practices, we don’t really want to run cloudflared with additional privileges. We needed a better solution. To solve this, we found inspiration in the ping utility, which you’ll note can be run by <i>any</i> user, <i>without</i> elevated permissions. So then, how does ping send ICMP echo requests and listen for echo replies as a normal user program? Well, the answer is less satisfying: it depends (on the platform). And as cloudflared supports all the following platforms, we needed to answer this question for each.</p>
    <div>
      <h3>Linux</h3>
      <a href="#linux">
        
      </a>
    </div>
    <p>On Linux, ping opens a datagram socket for the ICMP protocol with the syscall <b><i>socket(PF_INET, SOCK_DGRAM, IPPROTO_ICMP)</i></b>. This type of socket can only be opened if the group ID of the user running the program is in <b><i>/proc/sys/net/ipv4/ping_group_range</i></b>, but critically, the user does not need to be root. This socket is “special” in that it can only send ICMP echo requests and receive echo replies. Great! It also has a conceptual “port” associated with it, despite the fact that ICMP does not use ports. In this case, the identifier field of echo requests sent through this socket is rewritten to the “port” assigned to the socket. Reciprocally, echo replies received by the kernel which have the same identifier are sent to the socket which sent the request.</p><p>Therefore, on Linux cloudflared is able to perform source NAT for ICMP packets simply by opening a unique socket per source IP address. This rewrites the identifier field and source address of the request. Replies are delivered to this same socket, meaning that cloudflared can easily rewrite the destination IP address (destination because the packets are flowing <i>to</i> the client) and echo identifier back to the original values received from the client.</p>
    <div>
      <h3>Darwin</h3>
      <a href="#darwin">
        
      </a>
    </div>
    <p>On Darwin (the UNIX-based core of macOS), things are similar in that we can open an unprivileged ICMP socket with the same syscall, <i><b>socket(PF_INET, SOCK_DGRAM, IPPROTO_ICMP)</b></i>. However, there is an important difference: the Darwin kernel does not allocate a conceptual “port” for this socket, and thus does not rewrite the echo ID of outgoing echo requests as Linux does. More importantly for our purposes, the kernel also does not use the echo identifier to demultiplex echo replies to the socket which sent the corresponding request. This means that on macOS we effectively need to perform the echo ID rewriting manually. In practice, when cloudflared receives an echo request on macOS, it chooses an echo ID which is unique for the destination. It then adds the key (chosen echo ID, destination IP) to a mapping it maintains, with the value (original echo ID, original source IP). Cloudflared rewrites the echo ID in the request packet to the one it chose and forwards it to the destination. When it receives a reply, it uses the source IP address and echo ID to look up the client address and original echo ID, then rewrites the echo ID and destination address in the reply packet before forwarding it back to the client.</p>
    <div>
      <h3>Windows</h3>
      <a href="#windows">
        
      </a>
    </div>
    <p>Finally, we arrived at Windows, which conveniently provides the Win32 API IcmpSendEcho: it sends an echo request and returns the echo reply, a timeout, or an error. For ICMPv6 we just had to use Icmp6SendEcho. The APIs are in C, but cloudflared can call them through CGO without a problem. If you also need to call these APIs in a Go program, <a href="https://github.com/cloudflare/cloudflared/blob/master/ingress/icmp_windows.go">check out our wrapper</a> for inspiration.</p><p>And there you have it! That’s how we built the most exciting ping release since 1983. Overall, we’re thrilled to announce this new feature, and we can’t wait to get your feedback on ways we can continue improving our implementation moving forward.</p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Support for these ICMP-based utilities is just the beginning of how we’re thinking about improving our Zero Trust administrator experience. Our goal is to continue providing tools which make it easy to identify issues within the network that impact connectivity and performance.</p><p>Looking forward, we plan to add more dials and knobs for <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> with announcements like <a href="/introducing-digital-experience-monitoring/">Digital Experience Monitoring</a> across our Zero Trust platform to help users <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">proactively monitor</a> and stay alert to changing network conditions. In the meantime, try applying Zero Trust controls to your private network for free by <a href="https://dash.cloudflare.com/sign-up">signing up</a> today.</p> ]]></content:encoded>
            <category><![CDATA[CIO Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Private Network]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">6GPeSDV02jXldOr3L43yxx</guid>
            <dc:creator>Abe Carryl</dc:creator>
            <dc:creator>Chung-Ting Huang</dc:creator>
            <dc:creator>John Norwood</dc:creator>
        </item>
        <item>
            <title><![CDATA[Using Cloudflare R2 as an apt/yum repository]]></title>
            <link>https://blog.cloudflare.com/using-cloudflare-r2-as-an-apt-yum-repository/</link>
            <pubDate>Thu, 15 Sep 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Host your own apt/yum repositories the way Cloudflare Tunnel does. Here’s how. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sQTPP1Uhcxo0gXUo6NIKI/dc6d5bfe03b47d171a7f56edefa763b6/R2-as-an-apt-yum-repository.png" />
            
            </figure><p>In this blog post, we’re going to talk about how we use Cloudflare R2 as an <i>apt/yum</i> repository to bring <i>cloudflared</i> (the Cloudflare Tunnel daemon) to your Debian/Ubuntu and CentOS/RHEL systems and how you can do it for your own distributable in a few easy steps!</p><p>I work on <a href="https://www.cloudflare.com/en-gb/products/tunnel/"><i>Cloudflare Tunnel</i></a>, a product which enables customers to quickly connect their private networks and services through the Cloudflare global network without needing to expose any public IPs or ports through their firewall. Cloudflare Tunnel is managed for users by <i>cloudflared</i>, a tool that runs on the same network as the private services. It proxies traffic for these services via Cloudflare, and users can then access these services securely through the Cloudflare network.</p><p>Our connector, <i>cloudflared,</i> was designed to be lightweight and flexible enough to be effectively deployed on a Raspberry Pi, a router, your laptop, or a server running on a data center with applications ranging from IoT control to private networking. Naturally, this means <i>cloudflared</i> comes built for a myriad of operating systems, architectures and package distributions: You could download the appropriate package from our <a href="https://github.com/cloudflare/cloudflared/releases">GitHub releases</a>, <i>brew install</i> it or <i>apt/yum install</i> it (<a href="https://pkg.cloudflare.com">https://pkg.cloudflare.com</a>).</p><p>In the rest of this blog post, I’ll use cloudflared as an example of how to create an apt/yum repository backed by Cloudflare’s <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> service R2. Note that this can be any binary/distributable. I simply use cloudflared as an example because this is something we recently did and actually use in production.</p>
    <div>
      <h2>How apt-get works</h2>
      <a href="#how-apt-get-works">
        
      </a>
    </div>
    <p>Let’s see what happens when you run something like this on your terminal.</p>
            <pre><code>$ apt-get install cloudflared</code></pre>
            <p>Let’s also assume that apt sources were already added like so:</p>
            <pre><code>$ echo 'deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared buster main' | sudo tee /etc/apt/sources.list.d/cloudflared.list

$ apt-get update</code></pre>
            <p>From the sources.list above, apt first looks up the <a href="https://pkg.cloudflare.com/cloudflared/dists/buster/Release">Release</a> file (or <a href="https://pkg.cloudflare.com/cloudflared/dists/buster/InRelease">InRelease</a> for a signed repository like cloudflared’s, but we will ignore this for the sake of simplicity).</p><p>A Release file contains the list of supported architectures and the md5, sha1 and sha256 checksums of the repository’s index files. It looks something like this:</p>
            <pre><code>$ curl https://pkg.cloudflare.com/cloudflared/dists/buster/Release
Origin: cloudflared
Label: cloudflared
Codename: buster
Date: Thu, 11 Aug 2022 08:40:18 UTC
Architectures: amd64 386 arm64 arm armhf
Components: main
Description: apt repository for cloudflared - buster
MD5Sum:
 c14a4a1cbe9437d6575ae788008a1ef4 549 main/binary-amd64/Packages
 6165bff172dd91fa658ca17a9556f3c8 374 main/binary-amd64/Packages.gz
 9cd622402eabed0b1b83f086976a8e01 128 main/binary-amd64/Release
 5d2929c46648ea8dbeb1c5f695d2ef6b 545 main/binary-386/Packages
 7419d40e4c22feb19937dce49b0b5a3d 371 main/binary-386/Packages.gz
 1770db5634dddaea0a5fedb4b078e7ef 126 main/binary-386/Release
 b0f5ccfe3c3acee38ba058d9d78a8f5f 549 main/binary-arm64/Packages
 48ba719b3b7127de21807f0dfc02cc19 376 main/binary-arm64/Packages.gz
 4f95fe1d9afd0124a32923ddb9187104 128 main/binary-arm64/Release
 8c50559a267962a7da631f000afc6e20 545 main/binary-arm/Packages
 4baed71e49ae3a5d895822837bead606 372 main/binary-arm/Packages.gz
 e472c3517a0091b30ab27926587c92f9 126 main/binary-arm/Release
 bb6d18be81e52e57bc3b729cbc80c1b5 549 main/binary-armhf/Packages
 31fd71fec8acc969a6128ac1489bd8ee 371 main/binary-armhf/Packages.gz
 8fbe2ff17eb40eacd64a82c46114d9e4 128 main/binary-armhf/Release
SHA1:
…
SHA256:
…</code></pre>
            <p>Depending on your system’s architecture, apt picks the appropriate Packages location. A Packages file contains metadata about the binary apt wants to install, including its location and checksum. Let’s say it’s an amd64 machine. This means we’ll go here next:</p>
            <pre><code>$ curl https://pkg.cloudflare.com/cloudflared/dists/buster/main/binary-amd64/Packages
Package: cloudflared
Version: 2022.8.0
License: Apache License Version 2.0
Vendor: Cloudflare
Architecture: amd64
Maintainer: Cloudflare &lt;support@cloudflare.com&gt;
Installed-Size: 30736
Homepage: https://github.com/cloudflare/cloudflared
Priority: extra
Section: default
Filename: pool/main/c/cloudflared/cloudflared_2022.8.0_amd64.deb
Size: 15286808
SHA256: c47ca10a3c60ccbc34aa5750ad49f9207f855032eb1034a4de2d26916258ccc3
SHA1: 1655dd22fb069b8438b88b24cb2a80d03e31baea
MD5sum: 3aca53ccf2f9b2f584f066080557c01e
Description: Cloudflare Tunnel daemon</code></pre>
            <p>Notice the Filename field. This is where apt gets the deb from before running a dpkg command on it. What all of this means is that an apt repository (and likewise a yum repository) is basically a structured file tree of mostly plaintext index files plus the deb itself.</p><p>The deb file is a Debian software package that contains two things: installable data (in our case, the <i>cloudflared</i> binary) and metadata about the installable.</p>
    <div>
      <h2>Building our own apt repository</h2>
      <a href="#building-our-own-apt-repository">
        
      </a>
    </div>
    <p>Now that we know what happens when an apt-get install runs, let’s work our way backwards to construct the repository.</p>
    <div>
      <h3>Create a deb file out of the binary</h3>
      <a href="#create-a-deb-file-out-of-the-binary">
        
      </a>
    </div>
    <p><b>Note:</b> Signing your packages is optional but recommended. See the section about how apt verifies packages <a href="/dont-use-apt-key/">here</a>.</p><p>Debian packages can be built with the <a href="https://man7.org/linux/man-pages/man1/dpkg-buildpackage.1.html">dpkg-buildpackage</a> tool in a Linux or Debian environment. We use a handy command line tool called fpm (<a href="https://fpm.readthedocs.io/en/v1.13.1/">https://fpm.readthedocs.io/en/v1.13.1/</a>) to do this because it works for both rpm and deb.</p>
            <pre><code>$ fpm -s dir -t deb -C /path/to/project --name &lt;project_name&gt; --version &lt;version&gt;</code></pre>
            <p>This yields a .deb file.</p>
    <div>
      <h3>Create plaintext files needed by apt to lookup downloads</h3>
      <a href="#create-plaintext-files-needed-by-apt-to-lookup-downloads">
        
      </a>
    </div>
    <p>Again, these files can be built by hand, but there are multiple <a href="https://wiki.debian.org/DebianRepository/Setup?action=show&amp;redirect=HowToSetupADebianRepository#Debian_Repository_Generation_Tools.">tools</a> available to generate them. We use <a href="https://wiki.debian.org/DebianRepository/Setup?action=show&amp;redirect=HowToSetupADebianRepository#reprepro.">reprepro</a>, and using it is as simple as:</p>
            <pre><code>$ reprepro buster includedeb &lt;path/to/the/deb&gt;</code></pre>
            <p>reprepro neatly creates a set of folders in the structure we looked at above.</p>
    <div>
      <h3>Upload them to Cloudflare R2</h3>
      <a href="#upload-them-to-cloudflare-r2">
        
      </a>
    </div>
    <p>Cloudflare R2 now acts as the host for this structured file tree. R2 lets us upload and serve objects under arbitrary keys, which means we just need to upload the files under the same paths reprepro created them in.</p><p><a href="https://github.com/cloudflare/cloudflared/blob/135c8e6d13663d2aa2d3c9169cde0cfc1e6e2062/release_pkgs.py#L36">Here</a> is a copyable example of how we do it for cloudflared.</p>
    <div>
      <h3>Serve them from an R2 worker</h3>
      <a href="#serve-them-from-an-r2-worker">
        
      </a>
    </div>
    <p>For fine-grained control, we’ll write a very lightweight Cloudflare Worker to serve as the front-end API that apt talks to. For an apt repository, it only needs to handle GET requests.</p><p>Here’s an example of how to do this: <a href="https://developers.cloudflare.com/r2/examples/demo-worker/">https://developers.cloudflare.com/r2/examples/demo-worker/</a></p>
    <div>
      <h2>Putting it all together</h2>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p><a href="https://github.com/cloudflare/cloudflared/blob/master/release_pkgs.py">Here</a> is a handy script we use to push cloudflared to R2 ready for apt/yum downloads; it also handles signing and publishing the pubkey.</p><p>And that’s it! You now have your own apt/yum repo without a server to maintain, with no egress fees for downloads, and it sits on the Cloudflare global network, which protects it against high request volumes. You can automate many of these steps to make them part of a release process.</p><p>Today, this is how cloudflared is distributed on the apt and yum repositories: <a href="https://pkg.cloudflare.com/">https://pkg.cloudflare.com/</a></p>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">1wS0SQnobVypSO6YoT2RnL</guid>
            <dc:creator>Sudarsan Reddy</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Private Network Discovery]]></title>
            <link>https://blog.cloudflare.com/introducing-network-discovery/</link>
            <pubDate>Wed, 22 Jun 2022 13:14:46 GMT</pubDate>
            <description><![CDATA[ Rest easy knowing exactly who and what is being accessed within your private network. Introducing Private Network Discovery ]]></description>
            <content:encoded><![CDATA[ <p>With Cloudflare One, building your private network on Cloudflare is easy. What is not so easy is maintaining the security of your private network over time. Resources are constantly being spun up and down, and users are added and removed on a daily basis, which makes a network painful to manage over time.</p><p>That’s why today we’re opening a closed beta for our new Zero Trust network discovery tool. With Private Network Discovery, our Zero Trust platform will now start passively cataloging both the resources being accessed and the users who are accessing them, without any additional configuration required. No third-party tools, commands, or clicks necessary.</p><p>To get started, <a href="http://www.cloudflare.com/zero-trust/lp/private-network-discovery">sign up</a> for early access to the closed beta and gain instant visibility into your network today. If you’re interested in learning more about how it works and what else we will be launching for general availability, keep scrolling.</p><p>One of the most laborious aspects of migrating to <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> is replicating the security policies which are active within your network today. Even if you do have a point-in-time understanding of your environment, networks are constantly evolving, with new resources being spun up dynamically for various operations. The result is a constant cycle of discovering and securing applications, which creates an endless backlog of due diligence for security teams.</p><p>That’s why we built Private Network Discovery. With Private Network Discovery, organizations can easily gain complete visibility into the users and applications that live on their network without any additional effort on their part. Simply connect your private network to Cloudflare, and we will surface any unique traffic we discover on your network to allow you to seamlessly translate it into Cloudflare Access applications.</p>
    <div>
      <h3>Building your private network on Cloudflare</h3>
      <a href="#building-your-private-network-on-cloudflare">
        
      </a>
    </div>
    <p>Building out a private network has two primary components: the infrastructure side, and the client side.</p><p>The infrastructure side of the equation is powered by Cloudflare Tunnel, which simply connects your infrastructure (whether that be a single application, many applications, or an entire <a href="https://www.cloudflare.com/learning/access-management/what-is-network-segmentation/">network segment</a>) to Cloudflare. This is made possible by running a simple command-line daemon in your environment to establish multiple secure, outbound-only links to Cloudflare. Simply put, Tunnel is what connects your network to Cloudflare.</p><p>On the other side of this equation, you need your end users to be able to easily connect to Cloudflare and, more importantly, your network. This connection is handled by our robust device agent, Cloudflare WARP. This agent can be rolled out to your entire organization in just a few minutes using your in-house MDM tooling, and it establishes a secure connection from your users’ devices to the Cloudflare network.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZSuHpn9OVszm7xyyWkPjj/e0fea4c23b93c94c8ea941e4b5711134/image2-29.png" />
            
            </figure><p>Now that we have your infrastructure and your users connected to Cloudflare, it becomes easy to tag your applications and layer on Zero Trust security controls to verify both identity and device-centric rules for each and every request on your network.</p>
    <div>
      <h3>How it works</h3>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>As we mentioned earlier, we built this feature to help your team gain visibility into your network by passively cataloging unique traffic destined for an RFC 1918 or RFC 4193 address space. By design, this tool operates in an <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> mode whereby all applications are surfaced, but are tagged with a base state of “Unreviewed.”</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7knrqbccpRyMSlXEJmjpe2/83f5716fa3d90ce3baa61e7b2eaad6b7/image3-21.png" />
            
            </figure><p>The Network Discovery tool surfaces all origins within your network, defined as any unique IP address, port, or protocol. You can review the details of any given origin and then create a Cloudflare Access application to control access to that origin. It’s also worth noting that Access applications may be composed of more than one origin.</p><p>Let’s take, for example, a privately hosted video conferencing service, Jitsi. I’m using this example as our team actually uses this service internally to test our new features before pushing them into production. In this scenario, we know that our self-hosted instance of Jitsi lives at 10.0.0.1:443. However, as this is a video conferencing application, it communicates on both tcp:10.0.0.1:443 and udp:10.0.0.1:10000. Here we would select one origin and assign it an application name.</p><p>As a note, during the closed beta you will not be able to view this application in the Cloudflare Access application table. For now, these application names will only be reflected in the discovered origins table of the Private Network Discovery report. You will see them reflected in the Application name column exclusively. However, when this feature goes into general availability you’ll find all the applications you have created under Zero Trust &gt; Access &gt; Applications as well.</p><p>After you have assigned an application name and added your first origin, tcp:10.0.0.1:443, you can then follow the same pattern to add the other origin, udp:10.0.0.1:10000, as well. This allows you to create logical groupings of origins to create a more accurate representation of the resources on your network.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3HyitCjObLQBZkGmAVJZBU/95575ed3139f725f1a2a10081e50c86c/image1-29.png" />
            
            </figure><p>By creating an application, our Network Discovery tool will automatically update the status of these individual origins from “Unreviewed” to “In-Review.” This will allow your team to easily track the origin’s status. From there, you can drill further down to review the number of unique users accessing a particular origin as well as the total number of requests each user has made. This will help equip your team with the information it needs to create identity and device-driven Zero Trust policies. Once your team is comfortable with a given application’s usage, you can then manually update the status of a given application to be either “Approved” or “Unapproved.”</p>
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Our closed beta launch is just the beginning. While the closed beta release supports creating friendly names for your private network applications, those names do not currently appear in the Cloudflare Zero Trust policy builder.</p><p>As we move towards general availability, our top priority will be making it easier to secure your private network based on what is surfaced by the Private Network Discovery tool. With the general availability launch, you will be able to create Access applications directly from your Private Network Discovery report, reference your private network applications in Cloudflare Access and create Zero Trust security policies for those applications, all in a single workflow.</p><p>As you can see, we have exciting plans for this tool and will continue investing in Private Network Discovery in the future. If you’re interested in gaining access to the closed beta, sign up <a href="http://www.cloudflare.com/zero-trust/lp/private-network-discovery">here</a> and be among the first users to try it out!</p>
            <category><![CDATA[Cloudflare One Week]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Private Network]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">2DaQsZHBxps61psm4DQ5jB</guid>
            <dc:creator>Abe Carryl</dc:creator>
        </item>
        <item>
            <title><![CDATA[Ridiculously easy to use Tunnels]]></title>
            <link>https://blog.cloudflare.com/ridiculously-easy-to-use-tunnels/</link>
            <pubDate>Fri, 25 Mar 2022 12:58:59 GMT</pubDate>
            <description><![CDATA[ Today, we’re thrilled to announce that we have launched a new solution to remotely create, deploy, and manage Tunnels and their configuration directly from the Zero Trust dashboard. This new solution allows our customers to provide their workforce with Zero Trust network access in 15 minutes or less. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>A little over a decade ago, Cloudflare launched at <a href="https://youtu.be/XeKWeBw1R5A?t=264">TechCrunch Disrupt</a>. At the time, we talked about three core principles that differentiated Cloudflare from traditional security vendors: be more secure, more performant, and ridiculously easy to use. Ease of use is at the heart of every decision we make, and this is no different for Cloudflare Tunnel.</p><p>That’s why we’re thrilled to announce today that creating tunnels, which previously required up to 14 commands in the terminal, can now be accomplished in <b>just</b> <b>three simple steps</b> directly from the Zero Trust dashboard.</p><p>If you’ve heard enough, jump over to <a href="http://dash.cloudflare.com/sign-up/teams">sign-up/teams</a> to unplug your VPN and start building your private network with Cloudflare. If you’re interested in learning more about our motivations for this release and what we’re building next, keep scrolling.</p>
    <div>
      <h2>Our connector</h2>
      <a href="#our-connector">
        
      </a>
    </div>
    <p>Cloudflare Tunnel is the easiest way to connect your infrastructure to Cloudflare, whether that be a local HTTP server, web services served by a Kubernetes cluster, or a private <a href="https://www.cloudflare.com/learning/access-management/what-is-network-segmentation/">network segment</a>. This connectivity is made possible through our lightweight, <a href="https://github.com/cloudflare/cloudflared/blob/master/LICENSE">open-source connector</a>, <code>cloudflared</code>. Our connector offers high-availability by design, creating four long-lived connections to two distinct data centers within Cloudflare’s network. This means that whether an individual connection, server, or data center goes down, your network remains up. Users can also maintain redundancy within their own environment by deploying <a href="/highly-available-and-highly-scalable-cloudflare-tunnels/">multiple instances</a> of the connector in the event a single host goes down for one reason or another.</p><p>Historically, the best way to deploy our connector has been through the <code>cloudflared</code> CLI. Today, we’re thrilled to announce that we have launched a new solution to remotely create, deploy, and manage tunnels and their configuration directly from the Zero Trust dashboard. This new solution allows our customers to provide their workforce with <a href="https://www.cloudflare.com/learning/access-management/what-is-ztna/">Zero Trust network access</a> in <b>15 minutes or less</b>.</p>
    <div>
      <h2>CLI? GUI? Why not both</h2>
      <a href="#cli-gui-why-not-both">
        
      </a>
    </div>
    <p>Command line interfaces are exceptional at what they do. They let users pass commands at a console or terminal and interact directly with the operating system. This precision grants users exact control over their interactions with a given program or service where that level of control is required.</p><p>However, they also have a steeper learning curve and can be less intuitive for new users. Users need to carefully research the tools they wish to use before trying them out, and many don’t have the luxury of performing this level of research only to test a program and find it’s not a great fit for their problem.</p><p>Conversely, GUIs, like our Zero Trust dashboard, have the flexibility to teach by doing. Little to no prior knowledge is required to get started. Users can be intuitively led to their desired result and only need to research how and why they completed certain steps <i>after</i> they know the solution fits their problem.</p><p>When we first released Cloudflare Tunnel, it had fewer than ten distinct commands to get started. We now have far more than this, as well as a myriad of new use cases to invoke them. This has turned what used to be an easy-to-navigate CLI library into something more cumbersome for users just discovering our product.</p><p>Simple typos led to immense frustration for some users. Imagine, for example, a user needs to advertise IP routes for their private network tunnel. It can be burdensome to remember <code>cloudflared tunnel route ip add &lt;IP/CIDR&gt;</code>. Through the Zero Trust dashboard, you can forget all about the semantics of the CLI library. All you need to know is the name of your tunnel and the network range you wish to connect through Cloudflare. Simply enter <code>my-private-net</code> (or whatever you want to call it), copy the installation script, and input your network range. It’s that simple. If you accidentally type an invalid IP or CIDR block, the dashboard will provide an actionable, human-readable error and get you back on track.</p><p>Whether you prefer the CLI or GUI, they ultimately achieve the same outcome through different means. Each has merit, and ideally users get the best of both worlds in one solution.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Y7ixZLiZYgPcxtwT7yf3T/30f43c9e7063db4c103a9e4e85f46d82/image2-92.png" />
            
            </figure>
    <div>
      <h2>Eliminating points of friction</h2>
      <a href="#eliminating-points-of-friction">
        
      </a>
    </div>
    <p>Tunnels have typically required a locally managed configuration file to route requests to their appropriate destinations. This configuration file was never created by default, but was required for almost every use case. This meant that users needed to use the command line to create and populate their configuration file using examples from <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/configuration-file/">developer documentation</a>. As functionality has been added into <code>cloudflared</code>, configuration files have become unwieldy to manage. Understanding which parameters and values to include, as well as where to include them, has become a burden for users. These issues were often difficult to catch with the naked eye and painful for users to troubleshoot.</p><p>We also wanted to improve the concept of tunnel permissions with our latest release. Previously, users were required to manage <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-permissions/">two distinct tokens</a>: the <code>cert.pem</code> and the <code>Tunnel_UUID.json</code> file. In short, <code>cert.pem</code>, issued during the <code>cloudflared tunnel login</code> command, granted the ability to create, delete, and list tunnels for their Cloudflare account through the CLI. <code>Tunnel_UUID.json</code>, issued during the <code>cloudflared tunnel create &lt;NAME&gt;</code> command, granted the ability to run a specified tunnel. However, since tunnels can now be created directly from your Cloudflare account in the Zero Trust dashboard, there is no longer a requirement to authenticate your origin prior to creating a tunnel. This action is already performed during the initial Cloudflare login event.</p><p>With today’s release, users no longer need to manage configuration files or tokens locally. Instead, Cloudflare will manage this for you based on the inputs you provide in the Zero Trust dashboard. If users mistype a hostname or service, they’ll know well before attempting to run their tunnel, saving time and hassle. We’ll also manage your tokens for you, and if you need to refresh your tokens at some point in the future, we’ll rotate the token on your behalf as well.</p>
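For context, this is the shape of the locally managed configuration file the dashboard now replaces — a minimal cloudflared config with ingress rules, where the tunnel UUID, paths, and hostname below are placeholders, not real values:

```yaml
# ~/.cloudflared/config.yml — no longer needed for remotely managed tunnels
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef        # placeholder tunnel UUID
credentials-file: /root/.cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json
ingress:
  - hostname: app.example.com      # public hostname to expose
    service: http://localhost:8000 # local service behind the tunnel
  - service: http_status:404       # required catch-all rule
```

Every field here is something the dashboard now collects for you; previously, a typo in any of them tended to surface only when the tunnel failed to run.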
    <div>
      <h2>Client or clientless Zero Trust</h2>
      <a href="#client-or-clientless-zero-trust">
        
      </a>
    </div>
    <p>We commonly refer to Cloudflare Tunnel as an “on-ramp” to our Zero Trust platform. Once connected, you can seamlessly pair it with WARP, Gateway, or Access to protect your resources with <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> security policies, so that each request is validated against your organization's device and identity based rules.</p>
    <div>
      <h3>Clientless Zero Trust</h3>
      <a href="#clientless-zero-trust">
        
      </a>
    </div>
    <p>Users can achieve a clientless Zero Trust deployment by pairing Cloudflare Tunnel with Access. In this model, users follow the flow laid out in the Zero Trust dashboard. First, they name their tunnel. Next, they’re provided a single installation script tailored to the origin’s operating system and system architecture. Finally, they create either public hostnames or private network routes for their tunnel. As outlined earlier, this step eliminates the need for a configuration file: public hostname values now replace ingress rules for remotely managed tunnels. Simply add the public hostname through which you’d like to access your private resource, then map the hostname value to a service behind your origin server. Last, create a Cloudflare Access policy to ensure only those users who meet your requirements are able to access this resource.</p>
    <div>
      <h3>Client-based Zero Trust</h3>
      <a href="#client-based-zero-trust">
        
      </a>
    </div>
    <p>Alternatively, users can pair Cloudflare Tunnel with WARP and Gateway if they prefer a client-based approach to Zero Trust. Here, they’ll follow the same flow outlined above, but instead of creating a public hostname, they’ll add a private network. This step replaces the <code>cloudflared tunnel route ip add &lt;IP/CIDR&gt;</code> step from the CLI. Then, users can navigate to the Cloudflare Gateway section of the Zero Trust dashboard and create two rules to test private network connectivity and get started:</p><ol><li><p>Name: Allow access for &lt;IP/CIDR&gt;<br>Policy: Destination IP in &lt;IP/CIDR&gt; AND User Email is &lt;user email&gt;<br>Action: Allow</p></li><li><p>Name: Default deny for &lt;IP/CIDR&gt;<br>Policy: Destination IP in &lt;IP/CIDR&gt;<br>Action: Block</p></li></ol><p>It’s important to note that, with either approach, most use cases will only require a single tunnel. A tunnel can advertise both public hostnames and private networks without conflicts, which keeps orchestration simple. In fact, we suggest starting with the fewest tunnels possible and using replicas to handle redundancy rather than additional tunnels. This, of course, depends on each user's environment, but generally it’s smart to start with a single tunnel and create more only when there is a need to keep networks or services logically separated.</p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Since we launched Cloudflare Tunnel, hundreds of thousands of tunnels have been created. That’s a lot of tunnels that need to be migrated over to our new orchestration method. We want to make this process frictionless, which is why we’re currently building out tooling to seamlessly migrate locally managed configurations to Cloudflare-managed configurations. This will be available in a few weeks.</p><p>At launch, we also will not support the global configuration options listed in our <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/arguments/">developer documentation</a>. These parameters require case-by-case support, and we’ll be adding them incrementally over time. Most notably, this means the best way to adjust your <code>cloudflared</code> logging levels will still be to modify the Cloudflare Tunnel service start command and append the <code>--loglevel</code> flag to your service run command. This will become a priority after releasing the migration wizard.</p><p>As you can see, we have exciting plans for the future of remote tunnel management and will continue investing in this as we move forward. Check it out today and <a href="http://dash.cloudflare.com/sign-up/teams">deploy your first Cloudflare Tunnel</a> from the dashboard in three simple steps.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Private Network]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <guid isPermaLink="false">4JtS3oJoPpEJ6kfbl3Cn9P</guid>
            <dc:creator>Abe Carryl</dc:creator>
        </item>
        <item>
            <title><![CDATA[Guest Blog: k8s tunnels with Kudelski Security]]></title>
            <link>https://blog.cloudflare.com/guest-blog-zero-trust-access-kubernetes/</link>
            <pubDate>Wed, 08 Dec 2021 13:59:22 GMT</pubDate>
            <description><![CDATA[ At Kudelski Security, we've been working on implementing our Zero Trust strategy for the last two years. In many aspects, it's been an incredible journey, and although we're not quite finished yet, we're excited by the progress made so far with Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p><i>Today, we’re excited to publish a blog post written by our friends at Kudelski Security, a managed security services provider. A few weeks back, Romain Aviolat, the Principal Cloud and Security Engineer at Kudelski Security, approached our Zero Trust team with a unique solution to a difficult problem that was powered by Cloudflare’s Identity-aware Proxy, which we call Cloudflare Tunnel, to ensure secure application access in remote working environments.</i></p><p><i>We enjoyed learning about their solution so much that we wanted to amplify their story. In particular, we appreciated how Kudelski Security’s engineers took full advantage of the flexibility and scalability of our technology to automate workflows for their end users. If you’re interested in learning more about Kudelski Security, check out their work below or their</i> <a href="https://research.kudelskisecurity.com/"><i>research blog</i></a><i>.</i></p>
    <div>
      <h3>Zero Trust Access to Kubernetes</h3>
      <a href="#zero-trust-access-to-kubernetes">
        
      </a>
    </div>
    <p>Over the past few years, Kudelski Security’s engineering team has prioritized migrating our infrastructure to multi-cloud environments. Our internal cloud migration mirrors what our end clients are pursuing and has equipped us with expertise and tooling to enhance our services for them. Moreover, this transition has provided us an opportunity to reimagine our own security approach and embrace the best practices of <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a>.</p><p>So far, one of the most challenging facets of our Zero Trust adoption has been securing access to our different Kubernetes (K8s) control-plane (APIs) across multiple cloud environments. Initially, our infrastructure team struggled to gain visibility and apply consistent, identity-based controls to the different APIs associated with different K8s clusters. Additionally, when interacting with these APIs, our developers were often left blind as to which clusters they needed to access and how to do so.</p><p>To address these frictions, we designed an in-house solution leveraging Cloudflare to automate how developers could securely authenticate to K8s clusters sitting across public cloud and on-premise environments. Specifically, for a given developer, we can now surface all the K8s services they have access to in a given cloud environment, authenticate an access request using Cloudflare’s Zero Trust rules, and establish a connection to that cluster via Cloudflare’s Identity-aware proxy, Cloudflare Tunnel.</p><p>Most importantly, this automation tool has enabled Kudelski Security as an organization to enhance our security posture and improve our developer experience at the same time. 
We estimate that this tool saves a new developer at least two hours of time otherwise spent reading documentation, submitting IT service tickets, and manually deploying and configuring the different tools needed to access different K8s clusters.</p><p>In this blog, we detail the specific pain points we addressed, how we designed our automation tool, and how Cloudflare helped us progress on our Zero Trust journey in a work-from-home friendly way.</p>
    <div>
      <h3>Challenges securing multi-cloud environments</h3>
      <a href="#challenges-securing-multi-cloud-environments">
        
      </a>
    </div>
    <p>As Kudelski Security has expanded our client services and internal development teams, we have inherently expanded our footprint of applications within multiple K8s clusters and multiple cloud providers. For our infrastructure engineers and developers, the K8s cluster API is a crucial entry point for troubleshooting. We work in GitOps and all our application deployments are automated, but we still frequently need to connect to a cluster to pull logs or debug an issue.</p><p>However, maintaining this diversity creates complexity and pressure for infrastructure administrators. For end users, sprawling infrastructure can translate to different credentials, different access tools for each cluster, and different configuration files to keep track of.</p><p>Such a complex access experience can make real-time troubleshooting particularly painful. For example, on-call engineers trying to make sense of an unfamiliar K8s environment may dig through dense documentation or be forced to wake up other colleagues to ask a simple question. All this is error-prone and a waste of precious time.</p><p>Common, traditional approaches to securing access to K8s APIs presented challenges we knew we wanted to avoid. For example, we felt that exposing the API to the public internet would inherently increase our attack surface, which is a risk we couldn’t afford. Moreover, we did not want to provide broad-based access to our clusters’ APIs via our internal networks and accept the risks of lateral movement. As Kudelski continues to grow, the operational costs and complexity of deploying VPNs across our workforce and different cloud environments would lead to scaling challenges as well.</p><p>Instead, we wanted an approach that would allow us to maintain small, micro-segmented environments, small failure domains, and no more than one way to give access to a service.</p>
    <div>
      <h3>Leveraging Cloudflare’s Identity-aware Proxy for Zero Trust access</h3>
      <a href="#leveraging-cloudflares-identity-aware-proxy-for-zero-trust-access">
        
      </a>
    </div>
    <p>To do this, Kudelski Security’s engineering team opted for a more modern approach: creating connections between users and each of our K8s clusters via an Identity-aware proxy (IAP). IAPs are flexible to deploy and add an additional layer of security in front of our applications by verifying the identity of a user when an access request is made. Further, they support our Zero Trust approach by creating connections from users to individual applications — not entire networks.</p><p>Each cluster has its own IAP and its own set of policies, which check for identity (via our corporate SSO) and other contextual factors like the device posture of a developer’s laptop. The IAP doesn't replace the K8s cluster authentication mechanism; it adds a new one on top of it, and thanks to identity federation and SSO this process is completely transparent for our end users.</p><p>In our setup, Kudelski Security is using Cloudflare’s IAPs as a component of Cloudflare Access, a ZTNA solution and one of several security services unified by Cloudflare’s Zero Trust platform.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qcVhWCQ9738SLDgAWt1fL/0e6e328791b8d4c0c51deecd4a76ba43/image1-39.png" />
            
            </figure><p>For many web-based apps, IAPs help create a frictionless experience for end users requesting access via a browser. Users authenticate via their corporate SSO or identity provider before reaching the secured app, while the IAP works in the background.</p><p>That user flow looks different for CLI-based applications because we cannot redirect CLI network flows like we do in a browser. In our case, our engineers want to use their favorite CLI-based K8s clients, like <a href="https://kubernetes.io/docs/reference/kubectl/overview/">kubectl</a> or <a href="https://github.com/derailed/k9s">k9s</a>. This means our Cloudflare IAP needs to act as a SOCKS5 proxy between the CLI client and each K8s cluster.</p><p>To create this IAP connection, Cloudflare provides a lightweight server-side daemon called <i>cloudflared</i> that connects infrastructure with applications. This encrypted connection runs on Cloudflare’s global network where Zero Trust policies are applied with single-pass inspection.</p><p>Without any automation, however, Kudelski Security’s infrastructure team would need to distribute the daemon to end user devices, provide guidance on how to set up those encrypted connections, and take care of other manual, hands-on configuration steps, maintaining them over time. Plus, developers would still lack a single pane of visibility across the different K8s clusters that they would need to access in their regular work.</p>
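One way that client-side SOCKS5 plumbing can be expressed (not necessarily how k8s-tunnels does it internally) is through kubectl's per-cluster `proxy-url` kubeconfig field. Cluster name, server address, and proxy port below are placeholders:

```yaml
# Excerpt of a kubeconfig entry that routes all API traffic for one
# cluster through a local SOCKS5 proxy (all values are placeholders)
clusters:
  - name: prod-cluster
    cluster:
      server: https://k8s-api.example.com
      proxy-url: socks5://localhost:1234   # local end of the tunnel
```

With an entry like this in place, ordinary clients such as kubectl or k9s transparently send their requests through the proxy.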
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GgyiIhSs1biHl2F16tpVf/8290e0e8c79254bc85493ec24ff88ad6/image3-14.png" />
            
            </figure>
    <div>
      <h3>Our automated solution: k8s-tunnels!</h3>
      <a href="#our-automated-solution-k8s-tunnels">
        
      </a>
    </div>
    <p>To solve these challenges, our infrastructure engineering team developed an internal tool — called ‘k8s-tunnels’ — that embeds complex configuration steps which make life easier for our developers. Moreover, this tool automatically discovers all the K8s clusters that a given user has access to based on the Zero Trust policies created. To enable this functionality, we embedded the SDKs of some major public cloud providers that Kudelski Security uses. The tool also embeds the <i>cloudflared</i> daemon, meaning that we only need to distribute a single tool to our users.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZQvU7tCb7pVwuA4XxkEFs/2f73cfaf44a385c1ec12f8042292ddb4/image7-3.png" />
            
            </figure><p>Altogether, a developer who launches the tool goes through the following workflow (we assume that the user already has valid credentials; otherwise, the tool would open a browser on our IdP to obtain them):</p><p>1. The user selects one or more clusters to connect to</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7FRjPPfs08sQT8liAnUYJx/6985e5ee3b7c6f44de8911f9cd000ffe/image2-22.png" />
            
            </figure><p>2. k8s-tunnel will automatically open the connection with Cloudflare and expose a local SOCKS5 proxy on the developer machine</p><p>3. k8s-tunnel amends the user local kubernetes client configuration by pushing the necessary information to go through the local SOCKS5 proxy</p><p>4. k8s-tunnel switches the Kubernetes client context to the current connection</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5c4A7fNyziO8je6kIFL2yl/f89cd117fc0ac71a2d21a59faca9e44c/image6-8.png" />
            
            </figure><p>5. The user can now use their favorite CLI client to access the K8s cluster</p><p>The whole process is really straightforward and is being used on a daily basis by our engineering team. And, of course, all this magic is made possible through the auto-discovery mechanism we’ve built into k8s-tunnels. Whenever new engineers join our team, we simply ask them to launch the auto-discovery process and get started.</p><p>Here is an example of the auto-discovery process in action.</p><ol><li><p>k8s-tunnels will connect to our different cloud providers’ APIs and list the K8s clusters the user has access to</p></li><li><p>k8s-tunnels will maintain a local config file of those clusters on the user’s machine so this process doesn’t need to be run more than once</p></li></ol>
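The caching behavior in step 2 can be sketched in a few lines. This is a minimal illustration only; the file name and metadata fields are hypothetical, and the real k8s-tunnels tool is internal to Kudelski Security and calls the actual cloud provider SDKs:

```python
import json
import pathlib

CACHE = pathlib.Path("clusters.json")  # hypothetical local cache file


def discover_clusters():
    """Stand-in for the cloud provider API calls that list reachable clusters."""
    return [{"name": "prod-gke", "provider": "gcp", "host": "k8s-prod.example.com"}]


def load_or_discover():
    """Return the cluster list, hitting provider APIs only on the first run."""
    if CACHE.exists():
        return json.loads(CACHE.read_text())
    clusters = discover_clusters()
    CACHE.write_text(json.dumps(clusters))  # persist so discovery runs once
    return clusters


first = load_or_discover()   # queries the provider "APIs" and writes the cache
second = load_or_discover()  # served from the local file, no API calls
```

Subsequent launches read the cached file, which is why new engineers only run auto-discovery once.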
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cjUsp5ExD6qFFwAKHfpF6/b3a91c280d4cdfa0782e4d90ef133d59/image5-6.png" />
            
            </figure>
    <div>
      <h3>Automation enhancements</h3>
      <a href="#automation-enhancements">
        
      </a>
    </div>
    <p>For on-premises deployments, it was a bit trickier, as we didn't have a simple way to store the K8s clusters' metadata like we do with resource tags with public cloud providers. We decided to use <a href="https://www.vaultproject.io/">Vault</a> as a key-value store to mimic public-cloud resource tags for on-prem. This way we can achieve auto-discovery of on-prem clusters following the same process as with a public-cloud provider.</p><p>You may have noticed in the previous CLI screenshot that the user can select multiple clusters at the same time! We quickly realized that our developers often needed to access multiple environments at the same time to compare a workload running in production and in staging. So instead of opening and closing tunnels every time they needed to switch clusters, we designed our tool such that they can now simply open multiple tunnels in parallel within a single k8s-tunnels instance and just switch the destination K8s cluster on their laptop.</p><p>Last but not least, we've also added support for favorites and notifications on new releases, leveraging Cloudflare Workers, but that's for another blog post.</p>
    <div>
      <h3>What’s Next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>In designing this tool, we’ve identified a couple of issues inside Kubernetes client libraries when used in conjunction with SOCKS5 proxies, and we’re <a href="https://github.com/kubernetes/kubernetes/pull/105632">working with the Kubernetes community</a> to fix those issues, so everybody should benefit from those patches in the near future.</p><p>With this blog post, we wanted to highlight how it is possible to apply Zero Trust security for complex workloads running on multi-cloud environments, while simultaneously improving the end user experience.</p><p>Although today our ‘k8s-tunnels’ code is too specific to Kudelski Security, our goal is to share what we’ve created back with the Kubernetes community, so that other organizations and Cloudflare customers can benefit from it.</p> ]]></content:encoded>
            <category><![CDATA[CIO Week]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Guest Post]]></category>
            <guid isPermaLink="false">5vGKajT907o4VLlWFduSua</guid>
            <dc:creator>Romain Aviolat (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Extending Cloudflare’s Zero Trust platform to support UDP and Internal DNS]]></title>
            <link>https://blog.cloudflare.com/extending-cloudflares-zero-trust-platform-to-support-udp-and-internal-dns/</link>
            <pubDate>Wed, 08 Dec 2021 13:59:15 GMT</pubDate>
            <description><![CDATA[ Last year, we launched a new feature which empowered users to begin building a private network on Cloudflare. Today, we’re excited to announce even more features which make your Zero Trust migration easier than ever.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>At the end of 2020, Cloudflare empowered organizations to start <a href="/build-your-own-private-network-on-cloudflare/">building a private network</a> on top of our network. Using Cloudflare Tunnel on the server side, and Cloudflare WARP on the client side, the need for a legacy VPN was eliminated. Fast-forward to today, and thousands of organizations have gone on this journey with us — unplugging their legacy VPN concentrators, internal firewalls, and load balancers. They’ve eliminated the need to maintain all this legacy hardware; they’ve dramatically improved speeds for end users; and they’re able to maintain Zero Trust rules organization-wide.</p><p>We started with TCP, which is powerful because it enables an important range of use cases. However, to truly replace a VPN, you need to be able to cover UDP, too. Starting today, we’re excited to provide early access to UDP on Cloudflare’s Zero Trust platform. And even better: as a result of supporting UDP, we can offer Internal DNS — so there’s no need to migrate thousands of private hostnames by hand to override DNS rules. You can get started with Cloudflare for Teams for free today by signing up <a href="https://dash.cloudflare.com/sign-up/teams">here</a>; and if you’d like to join the waitlist to gain early access to UDP and Internal DNS, please visit <a href="https://cloudflare.com/zero-trust/lp/private-dns-waitlist">here</a>.</p>
    <div>
      <h2>The topology of a private network on Cloudflare</h2>
      <a href="#the-topology-of-a-private-network-on-cloudflare">
        
      </a>
    </div>
    <p>Building out a private network has two primary components: the infrastructure side, and the client side.</p><p>The infrastructure side of the equation is powered by Cloudflare Tunnel, which simply connects your infrastructure (whether that be a singular application, many applications, or an entire <a href="https://www.cloudflare.com/learning/access-management/what-is-network-segmentation/">network segment</a>) to Cloudflare. This is made possible by running a simple command-line daemon in your environment to establish multiple secure, outbound-only, load-balanced links to Cloudflare. Simply put, Tunnel is what connects your network to Cloudflare.</p><p>On the other side of this equation, we need your end users to be able to easily connect to Cloudflare and, more importantly, your network. This connection is handled by our robust device client, <a href="/warp-for-desktop/">Cloudflare WARP</a>. This client can be rolled out to your entire organization in just a few minutes using your in-house MDM tooling, and it establishes a secure, WireGuard-based connection from your users’ devices to the Cloudflare network.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2m1z6HnFMDxGRFpphnS5kB/6a2cf02939d4a1e9b3f8829c9fcc656f/image1-36.png" />
            
            </figure><p>Now that we have your infrastructure and your users connected to Cloudflare, it becomes easy to tag your applications and layer on <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust security controls</a> to verify both identity and device-centric rules for each and every request on your network.</p><p>Up until now though, only TCP was supported.</p>
    <div>
      <h2>Extending Cloudflare Zero Trust to support UDP</h2>
      <a href="#extending-cloudflare-zero-trust-to-support-udp">
        
      </a>
    </div>
    <p>Over the past year, with more and more users adopting Cloudflare’s Zero Trust platform, we have gathered data surrounding all the use cases that are keeping VPNs plugged in. Of those, the most common need has been blanket support for UDP-based traffic. Modern protocols like QUIC take advantage of UDP’s lightweight architecture — and at Cloudflare, we believe it is part of our mission to advance these new standards to help build a better Internet.</p><p>Today, we’re excited to open an official waitlist for those who would like early access to Cloudflare for Teams with UDP support.</p>
    <div>
      <h3>What is UDP and why does it matter?</h3>
      <a href="#what-is-udp-and-why-does-it-matter">
        
      </a>
    </div>
    <p>UDP is a vital component of the Internet. Without it, many applications would be rendered woefully inadequate for modern use. Applications which depend on near real-time communication, such as <a href="https://www.cloudflare.com/developer-platform/solutions/live-streaming/">video streaming</a> or VoIP services, are prime examples of why we need UDP and the role it fills for the Internet. At their core, however, TCP and UDP achieve the same results — just through vastly different means. Each has its own unique benefits and drawbacks, which are always felt downstream by the applications that utilize them.</p><p>As a metaphor, here’s a quick example of how they both work if you were to ask somebody a question. TCP should look pretty familiar: you would typically say hi, wait for them to say hi back, ask how they are, wait for their response, and then ask them what you want.</p><p>UDP, on the other hand, is the equivalent of just walking up to someone and asking what you want without checking to make sure that they're listening. With this approach, some of your question may be missed, but that's fine as long as you get an answer.</p><p>Like the conversation above, with UDP many applications actually don’t care if some data gets lost; video streaming or game servers are good examples here. If you were to lose a packet in transit while streaming, you wouldn’t want the entire stream to be interrupted until this packet is received — you’d rather just drop the packet and move on. Another reason application developers may utilize UDP is because they’d prefer to develop their own controls around connection, transmission, and quality control rather than use TCP’s standardized ones.</p><p>For Cloudflare, end-to-end support for UDP-based traffic will unlock a number of new use cases. Here are a few we think you’ll agree are pretty exciting.</p>
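The handshake difference described above is easy to see with a few lines of raw socket code. In this sketch, the port number is an arbitrary one assumed to have no listener: the UDP socket hands its datagram to the network with no prior exchange, while the TCP socket can send nothing because the handshake to a closed port fails:

```python
import socket

# UDP: connectionless. sendto() hands the datagram to the network even
# though nothing is listening on the destination port.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"what time is it?", ("127.0.0.1", 49999))
udp.close()

# TCP: connection-oriented. The three-way handshake must complete before
# any data can flow, so connecting to a closed port returns an error.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(2)
err = tcp.connect_ex(("127.0.0.1", 49999))
tcp.close()

print(sent)      # 16: every byte was handed off, no connection needed
print(err != 0)  # the handshake never completed, so a nonzero error code
```

This is exactly why UDP suits fire-and-forget workloads: the sender never waits on the receiver before transmitting.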
    <div>
      <h3>Internal DNS Resolvers</h3>
      <a href="#internal-dns-resolvers">
        
      </a>
    </div>
    <p>Most corporate networks require an internal DNS resolver to provide access to resources made available over their intranet. Your intranet needs an internal DNS resolver for many of the same reasons the Internet needs public DNS resolvers. In short, humans are good at many things, but remembering long strings of numbers (in this case IP addresses) is not one of them. Both public and internal DNS resolvers were designed to solve this problem (and <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">much more</a>) for us.</p><p>In the corporate world, it would be needlessly painful to ask internal users to navigate to, say, 192.168.0.1 to simply reach SharePoint or OneDrive. Instead, it’s much easier to create DNS entries for each resource and let your internal resolver handle all the mapping for your users, as this is something humans are actually quite good at.</p><p>Under the hood, DNS queries generally consist of a single UDP request from the client. The server can then return a single reply to the client. Since DNS requests are not very large, they can often be sent and received in a single packet. This makes support for UDP across our Zero Trust platform a key enabler to pulling the plug on your VPN.</p>
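To make the single-packet point concrete, here is a sketch that hand-assembles a DNS A-record query in the standard wire format (RFC 1035). The hostname is a placeholder, and the whole question comes to a few dozen bytes, far below the classic 512-byte UDP DNS limit:

```python
import struct

def build_dns_query(name: str) -> bytes:
    """Build a minimal DNS A-record query in RFC 1035 wire format."""
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    qname += b"\x00"
    # Question footer: QTYPE=1 (A record), QCLASS=1 (IN)
    return header + qname + struct.pack(">HH", 1, 1)

packet = build_dns_query("wiki.corp.example.com")
print(len(packet))  # 39 bytes: the entire query fits in one small UDP datagram
```

A single `sendto()` of those bytes to port 53 is a complete query, which is why blanket UDP support is the missing piece for internal DNS.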
    <div>
      <h3>Thick Client Applications</h3>
      <a href="#thick-client-applications">
        
      </a>
    </div>
    <p>Another common use case for UDP is thick client applications. One benefit of UDP we have discussed so far is that it is a lean protocol. It’s lean because the <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">three-way handshake</a> of TCP and other measures for reliability have been stripped out by design. In many cases, application developers still want these reliability controls, but are intimately familiar with their applications and know these controls could be better handled by tailoring them to their application. These thick client applications often perform critical business functions and must be supported end-to-end to migrate. As an example, legacy versions of Outlook may be implemented through thick clients where most of the operations are performed by the local machine, and only the sync interactions with Exchange servers occur over UDP.</p><p>Again, UDP support on our Zero Trust platform now means these types of applications are no reason to remain on your legacy VPN.</p>
    <div>
      <h3>And more…</h3>
      <a href="#and-more">
        
      </a>
    </div>
    <p>A huge portion of the world's Internet traffic is transported over UDP. Often, people equate time-sensitive applications with UDP, where occasionally dropping packets would be better than waiting — but there are a number of other use cases, and we’re excited to be able to provide sweeping support.</p>
    <div>
      <h2>How can I get started today?</h2>
      <a href="#how-can-i-get-started-today">
        
      </a>
    </div>
    <p>You can already get started building your private network on Cloudflare with our tutorials and guides in our developer documentation. Below is the critical path. And if you’re already a customer, and you’re interested in joining the waitlist for UDP and Internal DNS access, please skip ahead to the end of this post!</p>
    <div>
      <h3>Connecting your network to Cloudflare</h3>
      <a href="#connecting-your-network-to-cloudflare">
        
      </a>
    </div>
    <p>First, you need to <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/installation">install cloudflared</a> on your network and authenticate it with the command below:</p>
            <pre><code>cloudflared tunnel login</code></pre>
            <p>Next, you’ll create a tunnel with a user-friendly name to identify your network or environment.</p>
            <pre><code>cloudflared tunnel create acme-network</code></pre>
            <p>Finally, you’ll want to configure your tunnel with the IP/CIDR range of your private network. By doing this, you’re making the Cloudflare WARP agent aware that any requests to this IP range need to be routed to our new tunnel.</p>
            <pre><code>cloudflared tunnel route ip add 192.168.0.1/32 acme-network</code></pre>
            <p>Then, all you need to do is run your tunnel!</p>
    <div>
      <h3>Connecting your users to your network</h3>
      <a href="#connecting-your-users-to-your-network">
        
      </a>
    </div>
    <p>To connect your first user, start by downloading the Cloudflare WARP agent on the device they’ll be connecting from, then follow the steps in our installer.</p><p>Next, you’ll visit the <a href="https://dash.teams.cloudflare.com">Teams Dashboard</a> and define who is allowed to access our network by creating an enrollment policy. This policy can be created under Settings &gt; Devices &gt; Device Enrollment. In the example below, you can see that we’re requiring users to be located in Canada and have an email address ending @cloudflare.com.</p><p>Once you’ve created this policy, you can enroll your first device by clicking the WARP desktop icon on your machine and navigating to preferences &gt; Account &gt; Login with Teams.</p><p>Last, we’ll remove the IP range we added to our Tunnel from the Exclude list in Settings &gt; Network &gt; Split Tunnels. This will ensure this traffic is, in fact, routed to Cloudflare and then sent to our private network Tunnel as intended.</p><p>In addition to the tutorial above, we also have in-product guides in the Teams Dashboard which go into more detail about each step and provide validation along the way.</p><p>To create your first Tunnel, navigate to the <a href="https://dash.teams.cloudflare.com/access/tunnels">Access &gt; Tunnels</a>.</p><p>To enroll your first device into WARP, navigate to <a href="https://dash.teams.cloudflare.com/team/devices">My Team &gt; Devices</a>.</p>
    <div>
      <h2>What’s Next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’re incredibly excited to open our <a href="https://cloudflare.com/zero-trust/lp/private-dns-waitlist">waitlist</a> today and even more excited to launch this feature in the coming weeks. We’re just getting started with private network Tunnels and plan to continue adding support for Zero Trust access rules for each request to each internal DNS hostname after launch. We’re also working on a number of efforts to measure performance and to ensure we remain the fastest Zero Trust platform — making us a delight for your users, compared to the pain of using a legacy VPN.</p> ]]></content:encoded>
            <category><![CDATA[CIO Week]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[UDP]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">1lvtfumva5EYQOVoDyU1fm</guid>
            <dc:creator>Abe Carryl</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Tunnel for Content Teams]]></title>
            <link>https://blog.cloudflare.com/cloudflare-tunnel-for-content-teams/</link>
            <pubDate>Mon, 25 Oct 2021 12:59:23 GMT</pubDate>
            <description><![CDATA[ See how we’re using Cloudflare Tunnel to share our technical writing with internal stakeholders for a faster, seamless feedback process. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>A big part of the job of a technical writer is getting feedback on the content you produce. Writing and maintaining product documentation is a deeply collaborative and cyclical effort — through constant conversation with product managers and engineers, technical writers ensure the content is clear and serves the user in the most effective way. Collaboration with other technical writers is also important to keep the documentation consistent with Cloudflare’s content strategy.</p><p>So whether we’re documenting a new feature or overhauling a big portion of existing documentation, sharing our writing with stakeholders before it’s published is quite literally half the work.</p><p>In my experience as a technical writer, the feedback I’ve received has been exponentially more impactful when stakeholders could see my changes in context. This is especially true for bigger and more strategic changes. Imagine I’m changing the structure of an entire section of a product’s documentation, or shuffling the order of pages in the navigation bar. It’s hard to guess the impact of those changes just by looking at the markdown files.</p><p>We writers check those changes in context by building a development server on our local machines. But sharing what we see locally with our stakeholders has always been a pain point for us. We’ve sent screenshots (hardly a good idea). We’ve recorded our screens. We’ve asked stakeholders to check out our branches locally and build a development server on their own. Lately, we’ve added a GitHub action to our open-source <a href="https://github.com/cloudflare/cloudflare-docs">cloudflare-docs</a> repo that allows us to generate a preview link for all pull requests with a certain label. 
However, that requires us to open a pull request with our changes, and that is not ideal if we’re documenting a feature that’s yet to be announced, or if our work is still in its early stages.</p><p>So the question has always been: could there be a way for someone else to see what we see, as easily as we see it?</p>
    <div>
      <h3>Enter Cloudflare Tunnel</h3>
      <a href="#enter-cloudflare-tunnel">
        
      </a>
    </div>
    <p>I was working on a complete refresh of <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps">Cloudflare Tunnel’s documentation</a> when I realized the product could very well answer that question for us as a technical writing team.</p><p>If you’re not familiar with the product, Cloudflare Tunnel provides a secure way to connect your local resources to the Cloudflare network without poking holes in your firewall. By running <code>cloudflared</code> in your environment, you can create outbound-only connections to Cloudflare’s edge, and ensure all traffic to your origins goes through Cloudflare and is protected from outside interference.</p><p>For our team, Cloudflare Tunnel could offer a way for our stakeholders to interact with what’s on our local environments in real-time, just like a customer would if the changes were published. To do that, we could expose our local environment to the edge through a tunnel, assign a DNS record to that tunnel, and then share that URL with our stakeholders.</p><p>So if each member in the technical writing team had their own tunnel that they could spin up every time they needed to get feedback, that would pretty much solve our long-standing problem.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JgLe3LRObYVyy2ippRXmi/d8f23009b78005588039a581e48d8710/image2-29.png" />
            
            </figure>
    <div>
      <h3>Setting up the tunnel</h3>
      <a href="#setting-up-the-tunnel">
        
      </a>
    </div>
    <p>To test that this would work, I tried it out myself.</p><p>First, I created a local branch of the cloudflare-docs repo, made some local changes, and ran a development server locally on port 8000.</p><p>Since I already had <code>cloudflared</code> installed on my machine, the next thing I needed to do was log into my team’s Cloudflare account, pick the zone I wanted to create tunnels for (I picked <code>developers.cloudflare.com</code>), and authorize Cloudflare Tunnel for that zone.</p>
            <pre><code>$ cloudflared login</code></pre>
            <p>Next, it was time to create the Named Tunnel.</p>
            <pre><code>$ cloudflared tunnel create alice
Tunnel credentials written to /Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json. cloudflared chose this file based on where your origin certificate was found. Keep this file secret. To revoke these credentials, delete the tunnel.

Created tunnel alice with id 0e025819-6f12-4f49-8183-c678273feef4</code></pre>
            <p>Alright, tunnel created. Next, I needed to assign a DNS record to it. I wanted it to be something readable and easily shareable with stakeholders (like <code>abracchi.developers.cloudflare.com</code>), so I ran the following command and specified the tunnel name first and then the desired subdomain:</p>
            <pre><code>$ cloudflared tunnel route dns alice abracchi</code></pre>
            <p>Next, I needed a way to tell the tunnel to serve traffic to my localhost:8000 port. For that, I created a configuration file in my default <code>cloudflared</code> directory and specified the following fields:</p>
            <pre><code>url: http://localhost:8000
tunnel: 0e025819-6f12-4f49-8183-c678273feef4
credentials-file: /Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json</code></pre>
            <p>Time to run the tunnel. The following command established connections between my origin and the Cloudflare edge, telling the tunnel to serve traffic to my origin according to the parameters I’d specified in the config file:</p>
            <pre><code>$ cloudflared tunnel --config /Users/alicebracchi/.cloudflared/config.yml run alice
2021-10-18T09:39:54Z INF Starting tunnel tunnelID=0e025819-6f12-4f49-8183-c678273feef4
2021-10-18T09:39:54Z INF Version 2021.9.2
2021-10-18T09:39:54Z INF GOOS: darwin, GOVersion: go1.16.5, GoArch: amd64
2021-10-18T09:39:54Z INF Settings: map[cred-file:/Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json credentials-file:/Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json url:http://localhost:8000]
2021-10-18T09:39:54Z INF Generated Connector ID: 90a7e3a9-9d59-4d26-9b87-4b94ebf4d2a0
2021-10-18T09:39:54Z INF cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
2021-10-18T09:39:54Z INF Initial protocol http2
2021-10-18T09:39:54Z INF Starting metrics server on 127.0.0.1:64193/metrics
2021-10-18T09:39:55Z INF Connection 13bf4c0c-b35b-4f9a-b6fa-f0a3dd001951 registered connIndex=0 location=MAD
2021-10-18T09:39:56Z INF Connection 38510c22-5256-45f2-abf8-72f1207ca242 registered connIndex=1 location=LIS
2021-10-18T09:39:57Z INF Connection 9ab0ea06-b1cf-483c-bd48-64a067a87c39 registered connIndex=2 location=MAD
2021-10-18T09:39:58Z INF Connection df079efe-8246-4e93-85f5-10caf8b7c354 registered connIndex=3 location=LIS</code></pre>
            <p>And sure enough, at <code>abracchi.developers.cloudflare.com</code>, my teammates could see what I was seeing on localhost:8000.</p>
    <div>
      <h3>Securing the tunnel</h3>
      <a href="#securing-the-tunnel">
        
      </a>
    </div>
    <p>After creating the tunnel, I needed to make sure only people within Cloudflare could access it. As it was, anyone who visited <code>abracchi.developers.cloudflare.com</code> could see what was in my local environment. To fix this, I set up <a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-apps">an Access self-hosted application</a> by navigating to Access &gt; Applications on the Teams Dashboard. For this application, I then created a policy that restricts access to the tunnel to a <a href="https://developers.cloudflare.com/cloudflare-one/identity/users/groups">user group</a> that includes only Cloudflare employees and requires authentication via Google or One-time PIN (OTP).</p><p>This makes applications like my tunnel easy to share with colleagues while keeping them protected from unauthorized access.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qVnJE4AvU28h35W4KnA6m/17790ff65b5dc719d01b9895f1eda24f/image3-27.png" />
            
            </figure>
    <div>
      <h3>Et voilà!</h3>
      <a href="#et-voila">
        
      </a>
    </div>
    <p>Back on the Tunnels page, this is what the content team’s Cloudflare Tunnel setup looks like now that each writer has completed the process outlined above. Every writer has their personal tunnel set up and their local environment exposed to the Cloudflare edge:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2eanjbRUrV21ztn6kd7VZP/2ca4a9b9e971ffa18754fb7839d33eab/Screenshot-2021-10-25-at-10.18.31.png" />
            
            </figure>
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>The team is now seamlessly sharing visual content with their stakeholders, but there’s still room for improvement. Cloudflare Tunnel is just the first step towards making the feedback loop easier for everyone involved. We’re currently exploring ways we can capture integrated feedback directly at the URL that’s shared with the stakeholders, to avoid back-and-forth on separate channels.</p><p>We’re also looking into bringing in <a href="https://developers.cloudflare.com/pages/">Cloudflare Pages</a> to make the entire deployment process faster. Stay tuned for future updates, and in the meantime, check out our <a href="https://developers.cloudflare.com/">developer docs</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Technical Writing]]></category>
            <category><![CDATA[Device Security]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">2zEZFkyANgfRYIcR0rDIjZ</guid>
            <dc:creator>Alice Bracchi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Getting Cloudflare Tunnels to connect to the Cloudflare Network with QUIC]]></title>
            <link>https://blog.cloudflare.com/getting-cloudflare-tunnels-to-connect-to-the-cloudflare-network-with-quic/</link>
            <pubDate>Wed, 20 Oct 2021 13:00:53 GMT</pubDate>
            <description><![CDATA[  It is now possible to connect a Cloudflare Tunnel to the Cloudflare network with QUIC. While doing this, we ran into an interesting connectivity problem unique to UDP.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>I work on <i>Cloudflare Tunnel</i>, which lets customers quickly connect their private services and networks through the Cloudflare network without having to expose their public IPs or ports through their firewall. Tunnel is managed for users by <i>cloudflared</i>, a tool that runs on the same network as the private services. It proxies traffic for these services via Cloudflare, and users can then access these services securely through the Cloudflare network.</p><p>Recently, I was trying to get <i>Cloudflare Tunnel</i> to connect to the Cloudflare network using a UDP protocol, QUIC. While doing this, I ran into an interesting connectivity problem unique to UDP. In this post I will talk about how I went about debugging this connectivity issue beyond the land of firewalls, and how some interesting differences between UDP and TCP came into play when sending network packets.</p>
    <div>
      <h3>How does Cloudflare Tunnel work?</h3>
      <a href="#how-does-cloudflare-tunnel-work">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NEssmu5aijDShf2V3IWjw/cfd10b15ec2b255155f13cc2899d7b69/2-17.png" />
            
            </figure><p><i>cloudflared</i> works by opening several connections to different servers on the Cloudflare edge. Currently, these are long-lived TCP-based connections proxied over HTTP/2 frames. When Cloudflare receives a request to a hostname, it is proxied through these connections to the local service behind <i>cloudflared</i>.</p><p>While our HTTP/2 protocol mode works great, we’d like to improve a few things. First, TCP traffic sent over HTTP/2 is susceptible to <a href="https://en.wikipedia.org/wiki/Head-of-line_blocking">Head of Line (HoL) blocking</a> — this affects both HTTP traffic and traffic from <a href="https://developers.cloudflare.com/cloudflare-one/tutorials/warp-to-tunnel">WARP routing</a>. Additionally, it is currently not possible to initiate communication from <i>cloudflared’s</i> HTTP/2 server in an efficient way. With the current Go implementation of HTTP/2, we could use <a href="https://en.wikipedia.org/wiki/Server-sent_events#:~:text=Server%2DSent%20Events%20(SSE),client%20connection%20has%20been%20established.">Server-Sent Events</a>, but this is not very useful in the scheme of proxying L4 traffic.</p><p>The upgrade to QUIC solves possible HoL blocking issues and opens up avenues that allow us to initiate communication from <i>cloudflared</i> to a different <i>cloudflared</i> in the future.</p><p>Naturally, QUIC required a UDP-based listener on our edge servers which <i>cloudflared</i> could connect to. We already connect to a TCP-based listener for the existing protocols, so this should be nice and easy, right?</p>
    <div>
      <h3>Failed to dial to the edge</h3>
      <a href="#failed-to-dial-to-the-edge">
        
      </a>
    </div>
    <p>Things weren’t as straightforward as they first looked. I added a QUIC listener on the edge, and the ability for <i>cloudflared</i> to connect to this new UDP-based listener. I tried to run my brand new QUIC tunnel and this happened.</p>
            <pre><code>$  cloudflared tunnel run --protocol quic my-tunnel
2021-09-17T18:44:11Z ERR Failed to create new quic connection, err: failed to dial to edge: timeout: no recent network activity</code></pre>
            <p><i>cloudflared</i> wasn’t even establishing a connection to the edge. I started looking at the obvious places first. <i>Did I add a firewall rule allowing traffic to this port?</i> Check. <i>Did I have iptables rules ACCEPTing or DROPping appropriate traffic for this port?</i> Check. They seemed to be in order. So what else could I do?</p>
    <div>
      <h3>tcpdump all the packets</h3>
      <a href="#tcpdump-all-the-packets">
        
      </a>
    </div>
    <p>I started by capturing UDP traffic on the machine my server was running on to see what could be happening.</p>
            <pre><code>$  sudo tcpdump -n -i eth0 port 7844 and udp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:44:27.742629 IP 173.13.13.10.50152 &gt; 198.41.200.200.7844: UDP, length 1252
14:44:27.743298 IP 203.0.113.0.7844 &gt; 173.13.13.10.50152: UDP, length 37</code></pre>
            <p>Looking at this <i>tcpdump</i> helped me understand why I had no connectivity! Not only was this port getting UDP traffic, but I was also seeing traffic flow out. But there seemed to be something strange afoot. Incoming packets were being sent to 198.41.200.200:7844, while responses were being sent back from 203.0.113.0:7844 instead (an <a href="https://datatracker.ietf.org/doc/html/rfc5737">example IP</a> used here for illustration purposes).</p><p>Why is this a problem? If a host (in this case, the server) chooses an address from a network unable to communicate with a public Internet host, it is likely that the return half of the communication will never arrive. But wait a minute. Why is some other IP getting prioritized over a source address my packets were already being sent to? Let’s take a deeper look at some IP addresses. (Note that I’ve deliberately oversimplified and scrambled the results to minimally illustrate the problem.)</p>
            <pre><code>$  ip addr list
eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1600 qdisc noqueue state UP group default qlen 1000
inet 203.0.113.0/32 scope global eth0
inet 198.41.200.200/32 scope global eth0 </code></pre>
            
            <pre><code>$ ip route show
default via 203.0.113.0 dev eth0</code></pre>
            <p>So this was clearly why the server was working fine on my machine but not on the Cloudflare edge servers: the interface my service was bound to had multiple IPs, and the IP of the default route was being sent back as the source address of the packet.</p>
    <div>
      <h3>Why does this work for TCP but not UDP?</h3>
      <a href="#why-does-this-work-for-tcp-but-not-udp">
        
      </a>
    </div>
    <p>Connection-oriented protocols, like TCP, initiate a connection (<a href="https://man7.org/linux/man-pages/man2/connect.2.html">connect()</a>) with a <a href="https://en.wikipedia.org/wiki/Handshaking#TCP_three-way_handshake">three-way handshake</a>. The kernel therefore maintains state about ongoing connections and uses it to determine the source IP address at the time of a response.</p><p>Because UDP (unless SOCK_SEQPACKET is involved) is connectionless, the kernel cannot maintain state like TCP does. The <a href="https://man7.org/linux/man-pages/man3/recvfrom.3p.html"><i>recvfrom</i></a> system call is invoked from the server side and tells us who the data comes from. Unfortunately, <i>recvfrom</i> does not tell us which IP this data was addressed to. Therefore, when the UDP server invokes the <a href="https://man7.org/linux/man-pages/man3/sendto.3p.html"><i>sendto</i></a> system call to respond to the client, we can only tell it which address to send the data to. The responsibility of determining the source IP address then falls to the kernel. The kernel has certain <a href="http://linux-ip.net/html/routing-selection.html">heuristics</a> that it uses to determine the source address. These heuristics may or may not work, and in the <i>ip routes</i> example above, they did not. The kernel naturally (and wrongly) picked the address of the default route to respond with.</p>
    <div>
      <h3>Telling the kernel what to do</h3>
      <a href="#telling-the-kernel-what-to-do">
        
      </a>
    </div>
    <p>I had to rely on my application to set the source address explicitly rather than rely on the kernel’s heuristics.</p><p>Linux has some generic I/O system calls, namely <a href="https://man7.org/linux/man-pages/man3/recvmsg.3p.html"><i>recvmsg</i></a> and <a href="https://man7.org/linux/man-pages/man3/sendmsg.3p.html"><i>sendmsg</i></a>. Their function signatures allow us to read or write additional <a href="http://www.gnu.org/software/libc/manual/html_node/Out_002dof_002dBand-Data.html">out-of-band data</a>, in which we can pass the source address. This control information is passed via the <i>msghdr</i> struct’s <i>msg_control</i> field.</p>
            <pre><code>ssize_t sendmsg(int socket, const struct msghdr *message, int flags)
ssize_t recvmsg(int socket, struct msghdr *message, int flags);
 
struct msghdr {
     void    *   msg_name;   /* Socket name          */
     int     msg_namelen;    /* Length of name       */
     struct iovec *  msg_iov;    /* Data blocks          */
     __kernel_size_t msg_iovlen; /* Number of blocks     */
     void    *   msg_control;    /* Per protocol magic (eg BSD file descriptor passing) */
    __kernel_size_t msg_controllen; /* Length of cmsg list */
     unsigned int    msg_flags;
};</code></pre>
            <p>We can now copy the control information we’ve gotten from <i>recvmsg</i> back when calling <i>sendmsg</i>, providing the kernel with information about the source address. The library I used (<a href="https://github.com/lucas-clemente/quic-go">https://github.com/lucas-clemente/quic-go</a>) had a recent update that did exactly this! I pulled the changes into my service and gave it a spin.</p><p>But alas, it did not work! A quick <i>tcpdump</i> showed that the same source address was being sent back. It seemed clear from reading the source code that <i>recvmsg</i> and <i>sendmsg</i> were being called with the right values. It did not make sense.</p><p>So I had to see for myself whether these system calls were being made.</p>
    <div>
      <h3>strace all the system calls</h3>
      <a href="#strace-all-the-system-calls">
        
      </a>
    </div>
    <p><a href="https://man7.org/linux/man-pages/man1/strace.1.html"><i>strace</i></a> is an extremely useful tool that tracks all system calls and signals sent/received by a process. Here’s what it had to say. I've removed all the information not relevant to this specific issue.</p>
            <pre><code>17:39:09.130346 recvmsg(3, {msg_name={sa_family=AF_INET6,
sin6_port=htons(35224), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&amp;sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=112-&gt;28, msg_iov=
[{iov_base="_\5S\30\273]\275@\34\24\322\243{2\361\312|\325\n\1\314\316`\3
03\250\301X\20", iov_len=1452}], msg_iovlen=1, msg_control=[{cmsg_len=36, 
cmsg_level=SOL_IPV6, cmsg_type=0x32}, {cmsg_len=28, cmsg_level=SOL_IP, 
cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=if_nametoindex("eth0"),
ipi_spec_dst=inet_addr("198.41.200.200"),ipi_addr=inet_addr("198.41.200.200")}},
{cmsg_len=17, cmsg_level=SOL_IP, 
cmsg_type=IP_TOS, cmsg_data=[0]}], msg_controllen=96, msg_flags=0}, 0) = 28 &lt;0.000007&gt;</code></pre>
            
            <pre><code>17:39:09.165160 sendmsg(3, {msg_name={sa_family=AF_INET6, 
sin6_port=htons(35224), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&amp;sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=28, 
msg_iov=[{iov_base="Oe4\37:3\344 &amp;\243W\10~c\\\316\2640\255*\231 
OY\326b\26\300\264&amp;\33\""..., iov_len=1302}], msg_iovlen=1, msg_control=
[{cmsg_len=28, cmsg_level=SOL_TCP, cmsg_type=0x8}], msg_controllen=28, 
msg_flags=0}, 0) = 1302 &lt;0.000054&gt;</code></pre>
            <p>Let's start with <i>recvmsg</i>. We can clearly see that the ipi_addr for the source is being passed correctly: <i>ipi_addr=inet_addr("198.41.200.200")</i>. This part works as expected. Looking at <i>sendmsg</i> almost instantly tells us where the problem is. The field we want, <i>ipi_spec_dst</i>, is not being set as we make this system call. So the kernel continues to make wrong guesses as to what the source address may be.</p><p>This turned out to be a <a href="https://github.com/lucas-clemente/quic-go/blob/3b46d7402c8436c38ca0d07a1ab4b4251acfd794/conn_oob.go#L249">bug</a> where the library was using <i>IPPROTO_TCP</i> instead of <i>IPPROTO_IP</i> as the control message level while making the <i>sendmsg</i> call. Was that it? It seemed a little anticlimactic. I submitted a slightly more typesafe <a href="https://github.com/lucas-clemente/quic-go/pull/3278">fix</a> and sure enough, straces now showed me what I was expecting to see.</p>
            <pre><code>18:22:08.334755 sendmsg(3, {msg_name={sa_family=AF_INET6, 
sin6_port=htons(37783), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&amp;sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=28, 
msg_iov=
[{iov_base="Ki\20NU\242\211Y\254\337\3107\224\201\233\242\2647\245}6jlE\2
70\227\3023_\353n\364"..., iov_len=33}], msg_iovlen=1, msg_control=
[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data=
{ipi_ifindex=if_nametoindex("eth0"), 
ipi_spec_dst=inet_addr("198.41.200.200"),ipi_addr=inet_addr("0.0.0.0")}}
], msg_controllen=32, msg_flags=0}, 0) =
33 &lt;0.000049&gt;</code></pre>
            <p><i>cloudflared</i> is now able to connect with UDP (QUIC) to the Cloudflare network from anywhere in the world!</p>
            <pre><code>$  cloudflared tunnel --protocol quic run sudarsans-tunnel
2021-09-21T11:37:30Z INF Starting tunnel tunnelID=a72e9cb7-90dc-499b-b9a0-04ee70f4ed78
2021-09-21T11:37:30Z INF Version 2021.9.1
2021-09-21T11:37:30Z INF GOOS: darwin, GOVersion: go1.16.5, GoArch: amd64
2021-09-21T11:37:30Z INF Settings: map[p:quic protocol:quic]
2021-09-21T11:37:30Z INF Initial protocol quic
2021-09-21T11:37:32Z INF Connection 3ade6501-4706-433e-a960-c793bc2eecd4 registered connIndex=0 location=AMS</code></pre>
            <p>While the programmatic bug causing this issue was a trivial one, the journey into systematically discovering the issue and understanding how Linux internals worked for UDP along the way turned out to be very rewarding for me. It also reiterated my belief that <i>tcpdump</i> and <i>strace</i> are indeed invaluable tools in anybody’s arsenal when debugging network problems.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>You can give this a try with the latest <i>cloudflared</i> release at <a href="https://github.com/cloudflare/cloudflared/releases/latest">https://github.com/cloudflare/cloudflared/releases/latest</a>. Just remember to set the <i>protocol</i> flag to <i>quic</i>. We plan to leverage this new mode to roll out some exciting new features for <i>Cloudflare Tunnel</i>. So upgrade away and keep watching this space for more information on how you can take advantage of this.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[QUIC]]></category>
            <guid isPermaLink="false">1HDlvSKaYHiPl8cXFEdNV6</guid>
            <dc:creator>Sudarsan Reddy</dc:creator>
        </item>
        <item>
            <title><![CDATA[Tunnel: Cloudflare’s Newest Homeowner]]></title>
            <link>https://blog.cloudflare.com/observe-and-manage-cloudflare-tunnel/</link>
            <pubDate>Mon, 18 Oct 2021 13:46:00 GMT</pubDate>
            <description><![CDATA[ Starting today, users who deploy and manage Cloudflare Tunnel at scale now have easier visibility into their Tunnel’s respective status, routes, uptime, connectors, cloudflared version, and much more through our new UI in the Cloudflare for Teams Dashboard.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare Tunnel connects your infrastructure to Cloudflare. Your team runs a lightweight connector in your environment, <code>cloudflared</code>, and services can reach Cloudflare and your audience through an outbound-only connection without the need for opening up holes in your firewall.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BqXIo11SFcAQALHZuWya8/3c5a93448085f6c7bf704305fbca5212/image4-27.png" />
            
            </figure><p>Whether the services are internal apps protected with Zero Trust policies, websites running in Kubernetes clusters in a public cloud environment, or a <a href="/building-a-pet-cam-using-a-raspberry-pi-cloudflare-tunnels-and-teams/">hobbyist project on a Raspberry Pi</a> — Cloudflare Tunnel provides a stable, secure, and highly performant way to serve traffic.</p><p>Starting today, with our new UI in the Cloudflare for Teams Dashboard, users who deploy and manage Cloudflare Tunnel at scale now have easier visibility into their tunnels’ status, routes, uptime, connectors, <code>cloudflared</code> version, and much more. On the Teams Dashboard you will also find an interactive guide that walks you through setting up your first tunnel.  </p>
    <div>
      <h3>Getting Started with Tunnel</h3>
      <a href="#getting-started-with-tunnel">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40CrkppZAhLfvEEiP8fCaZ/c74128b9d29be3b0b832615dec26d995/image3-26.png" />
            
            </figure><p>We wanted to start by making the tunnel onboarding process more transparent for users. We understand that not all users are intimately familiar with the command line, nor are they always deploying Tunnel in an environment or OS they’re most comfortable with. To alleviate that burden, we designed a comprehensive onboarding guide with pathways for macOS, Windows, and Linux for our two primary onboarding flows:</p><ol><li><p>Connecting an <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps">origin to Cloudflare</a></p></li><li><p>Connecting a private network via <a href="https://developers.cloudflare.com/cloudflare-one/tutorials/warp-to-tunnel">WARP to Tunnel</a></p></li></ol><p>Our new onboarding guide walks through each command required to create, route, and run your tunnel successfully while also highlighting relevant validation commands to serve as guardrails along the way. Once completed, you’ll be able to view and manage your newly established tunnels.</p>
    <div>
      <h3>Managing your tunnels</h3>
      <a href="#managing-your-tunnels">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6574ihZ3Lv32PE6Ycfrrkp/161e9103ec7187c00bbb4060cb322340/image1-43.png" />
            
            </figure><p>When thinking about the new user interface for Tunnel, we wanted to concentrate our efforts on how users gain visibility into their tunnels today. It was important that we provide the same level of <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a>, but through the lens of a visual, interactive dashboard. Specifically, we strove to build a familiar experience like the one a user may see if they were to run <code>cloudflared tunnel list</code> to show all of their tunnels, or <code>cloudflared tunnel info</code> if they wanted to better understand the connection status of a specific tunnel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bI6i82oBS4t9ccPrfpdj7/5dec94b82e7c99dddbe7e91bfc34a54d/Screen-Shot-2021-10-14-at-1.07.21-PM.png" />
            
            </figure><p>In the interface, you can quickly search for tunnels by name, or filter by status, uptime, or creation date. This allows users to easily identify and manage the tunnels they need, when they need them. We also included other key metrics such as <b>Status</b> and <b>Uptime</b>.</p><p>A tunnel's status depends on the health of its connections:</p><ul><li><p><b>Active</b>: This means your tunnel is running and has a healthy connection to the Cloudflare network.</p></li><li><p><b>Inactive</b>: This means your tunnel is not running and is not connected to Cloudflare.</p></li><li><p><b>Degraded</b>: This means one or more of your <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps">four long-lived TCP connections</a> to Cloudflare have been disconnected, but traffic is still being served to your origin.</p></li></ul><p>A tunnel’s uptime is also calculated from the health of its connections. We perform this calculation by recording the UTC timestamp at which the first (of four) long-lived TCP connections is established with the Cloudflare edge. If that connection is terminated, we will continue tracking uptime as long as one of the other three connections continues to serve traffic. If no connections are active, uptime resets to zero.</p>
    <div>
      <h3>Tunnel Routes and Connectors</h3>
      <a href="#tunnel-routes-and-connectors">
        
      </a>
    </div>
    <p>Last year, shortly after the announcement of Named Tunnels, we released a new feature that allowed users to utilize the same Named Tunnel to serve traffic to <a href="/many-services-one-cloudflared/">many different services</a> through the use of <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/configuration-file/ingress">Ingress Rules</a>. In the new UI, if you’re running your tunnels in this manner, you’ll be able to see these various services reflected by hovering over the route's value in the dashboard. Today, this includes routes for DNS records, Load Balancers, and Private IP ranges.</p><p>Even more recently, we announced highly available and highly scalable instances of cloudflared, known more commonly as “<a href="/highly-available-and-highly-scalable-cloudflare-tunnels/">cloudflared replicas</a>.” To view your <code>cloudflared</code> replicas, select and expand a tunnel. Then you can see how many <code>cloudflared</code> replicas you’re running for a given tunnel, as well as the corresponding connection status, data center, IP address, and version. And ultimately, when you’re ready to delete a tunnel, you can do so directly from the dashboard as well.</p>
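    <p>For reference, Ingress Rules let one Named Tunnel's config file fan traffic out to several services. A minimal sketch (the hostnames, ports, and paths here are placeholders, not from this post):</p>

```yaml
tunnel: <tunnel-uuid>
credentials-file: /home/user/.cloudflared/<tunnel-uuid>.json

ingress:
  # Each rule maps a hostname to a local service.
  - hostname: app.example.com
    service: http://localhost:8000
  - hostname: ssh.example.com
    service: ssh://localhost:22
  # Catch-all rule: anything else returns a 404.
  - service: http_status:404
```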
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Moving forward, we’re excited to begin incorporating more Cloudflare Tunnel analytics into our dashboard. We also want to continue making Cloudflare Tunnel the easiest way to connect to Cloudflare. In order to do that, we will focus on improving our onboarding experience for new users and look forward to bringing more of that functionality into the Teams Dashboard. If there are things you’d like more visibility into in the future, let us know below!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">fZKkFBvkuw1hQPmWCDkG0</guid>
            <dc:creator>Abe Carryl</dc:creator>
        </item>
        <item>
            <title><![CDATA[Quick Tunnels: Anytime, Anywhere]]></title>
            <link>https://blog.cloudflare.com/quick-tunnels-anytime-anywhere/</link>
            <pubDate>Thu, 02 Sep 2021 13:00:03 GMT</pubDate>
            <description><![CDATA[ Cloudflare Tunnel now supports a free version that includes all the latest features and does not require any onboarding to Cloudflare. With today’s change, you can begin experimenting with Tunnel in five minutes or less. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6aq4Df0NwD6Jx4gVeluk4N/3b98261144f2f97f0903627cb3e50493/image2-28.png" />
            
            </figure><p>My name is Rishabh Bector, and this summer, I worked as a software engineering intern on the Cloudflare Tunnel team. One of the things I built was quick Tunnels, and before departing for the summer, I wanted to write a blog post on how I developed this feature.</p><p>Over the years, our engineering team has worked hard to continually improve the underlying architecture through which we serve our Tunnels. However, the core use case has stayed largely the same. Users can implement Tunnel to establish an encrypted connection between their origin server and Cloudflare’s edge.</p><p>This connection is initiated by installing a lightweight daemon on your origin, to serve your traffic to the Internet without the need to poke holes in your firewall or create intricate access control lists. Though we’ve always centered around the idea of being a <code>connector</code> to Cloudflare, we’ve also made many enhancements behind the scenes to the way in which our connector operates.</p><p>Typically, users run into a few speed bumps before being able to use Cloudflare Tunnel. Before they can create or route a tunnel, users need to authenticate their unique token against a zone on their account. This means in order to simply spin up a Tunnel testing environment, users need to first create an account, add a website, change their nameservers, and wait for DNS propagation.</p><p>Starting today, we’re excited to fix that. Cloudflare Tunnel now supports a free version that includes all the latest features and does not require any onboarding to Cloudflare. With today’s change, you can begin experimenting with Tunnel in five minutes or less.</p>
    <div>
      <h3>Introducing Quick Tunnels</h3>
      <a href="#introducing-quick-tunnels">
        
      </a>
    </div>
    <p>When administrators start using Cloudflare Tunnel, they need to perform four specific steps:</p><ol><li><p>Create the Tunnel</p></li><li><p>Configure the Tunnel and what services it will represent</p></li><li><p>Route traffic to the Tunnel</p></li><li><p>And finally… run the Tunnel!</p></li></ol><p>These steps give you control over how your services connect to Cloudflare, but they are also a chore. Today’s change, which we are calling quick Tunnels, not only removes some onboarding requirements, it also condenses these four steps into a single one.</p><p>If you have a service running locally that you want to share with teammates or an audience, you can use this single command to connect your service to Cloudflare’s edge. First, you need to install the Cloudflare connector, a lightweight daemon called <code>cloudflared</code>. Once installed, you can run the command below.</p><p><code>cloudflared tunnel</code></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/52RN2zWOVsS0PKx2CJRIUx/44b28bb92b76ba31a346675877de428b/image1-36.png" />
            
            </figure><p>When run, <code>cloudflared</code> will generate a URL that consists of a random subdomain of the website <code>trycloudflare.com</code> and point traffic to localhost port 8080. If you have a web service running at that address, users who visit the subdomain generated will be able to visit your web service through Cloudflare’s network.</p>
    <div>
      <h3>Configuring Quick Tunnels</h3>
      <a href="#configuring-quick-tunnels">
        
      </a>
    </div>
    <p>We built this feature with the single command in mind, but if you have services that are running at different default locations, you can optionally configure your quick Tunnel to support that.</p><p>One example is if you’re building a multiplayer game that you want to share with friends. If that game is available locally on your origin, or even your laptop, at localhost:3000, you can run the command below.</p><p><code>cloudflared tunnel --url localhost:3000</code></p><p>You can do this with IP addresses or URLs, as well. Anything that <code>cloudflared</code> can reach can be made available through this service.</p>
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>Cloudflare quick Tunnels is powered by Cloudflare <a href="https://workers.cloudflare.com/">Workers</a>, giving us a serverless compute deployment that puts Tunnel management in a Cloudflare data center closer to you instead of a centralized location.</p><p>When you run the command <code>cloudflared tunnel</code>, your instance of <code>cloudflared</code> initiates an outbound-only connection to Cloudflare. Since that connection was initiated without any account details, we treat it as a quick Tunnel.</p><p>A Cloudflare Worker, which we call the quick Tunnel Worker, receives a request that a new quick Tunnel should be created. The Worker generates the random subdomain and returns that to the instance of <code>cloudflared</code>. That instance of <code>cloudflared</code> can now establish a connection for that subdomain.</p><p>Meanwhile, a complementary service running on Cloudflare’s edge receives that subdomain and the identification number of the instance of <code>cloudflared</code>. That service uses that information to create a DNS record in Cloudflare’s authoritative DNS which maps the randomly-generated hostname to the specific Tunnel you created.</p><p>The deployment also relies on the <a href="/introducing-cron-triggers-for-cloudflare-workers/">Workers Cron Trigger</a> feature to perform clean-up operations. On a regular interval, the Worker looks for quick Tunnels which have been disconnected for more than five minutes. Our Worker classifies these Tunnels as abandoned and proceeds to delete them and their associated DNS records.</p>
    <div>
      <h3>What about Zero Trust policies?</h3>
      <a href="#what-about-zero-trust-policies">
        
      </a>
    </div>
    <p>By default, all the quick Tunnels that you create are available on the public Internet at the randomly generated URL. While this might be fine for some projects and tests, other use cases require more security.</p>
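    <p>The clean-up pass described above can be sketched roughly as follows. This is a hypothetical illustration, not Cloudflare's actual Worker, and the helpers on <code>env</code> are invented placeholders for the real tunnel-registry and DNS APIs:</p>

```javascript
// A quick Tunnel disconnected for more than five minutes is classified
// as abandoned.
const ABANDON_AFTER_MS = 5 * 60 * 1000;

function isAbandoned(disconnectedAtMs, nowMs) {
  return nowMs - disconnectedAtMs > ABANDON_AFTER_MS;
}

// A Workers Cron Trigger invokes a `scheduled` handler on a regular
// interval; the handler deletes abandoned tunnels and their DNS records.
const quickTunnelWorker = {
  async scheduled(event, env, ctx) {
    for (const tunnel of await env.listDisconnectedTunnels()) {
      if (isAbandoned(tunnel.disconnectedAt, event.scheduledTime)) {
        await env.deleteTunnelAndDnsRecord(tunnel.id);
      }
    }
  },
};
```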
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nQnLMDD856i6qX9MjHMFD/e384f7683da597a8c440b682b8e64edd/image3-23.png" />
            
            </figure><p>If you need to add additional Zero Trust rules to control who can reach your services, you can use Cloudflare Access alongside Cloudflare Tunnel. That use case does require creating a Cloudflare account and adding a zone to Cloudflare, but we’re working on ideas to make that easier too.</p>
    <div>
      <h3>Where should I notice improvements?</h3>
      <a href="#where-should-i-notice-improvements">
        
      </a>
    </div>
    <p>We first launched a version of Cloudflare Tunnel that did not require accounts over two years ago. While we’ve been thrilled that customers have used this for their projects, Cloudflare Tunnel has evolved significantly since then. Specifically, Cloudflare Tunnel now relies on a new architecture that is more redundant and stable than the one used by that older launch. While all Tunnels that migrated to this new architecture, which we call <a href="/argo-tunnels-that-live-forever/">Named Tunnels</a>, enjoyed those benefits, users of the option that did not require an account were left behind.</p><p>Today’s announcement brings that stability to quick Tunnels. Tunnels are now designed to be long-lived, persistent objects. Unless you delete them, Tunnels can live for months, an improvement over the older architecture, where the average lifespan before connectivity issues disrupted a Tunnel was measured in hours.</p><p>These quick Tunnels run on that same resilient architecture, not only expediting time-to-value but also improving overall tunnel quality of life.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Today’s quick Tunnels add a powerful feature to Cloudflare Tunnels: the ability to create a reliable, resilient tunnel in a single command, without the hassle of creating an account first. We’re excited to help your team build and connect services to Cloudflare’s network and on to your audience or teammates. If you have additional questions, please share them in this community post here.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Road to Zero Trust]]></category>
            <guid isPermaLink="false">2roWhXiAJnt7fEoqoxfAXt</guid>
            <dc:creator>Rishabh Bector</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams]]></title>
            <link>https://blog.cloudflare.com/building-a-pet-cam-using-a-raspberry-pi-cloudflare-tunnels-and-teams/</link>
            <pubDate>Thu, 19 Aug 2021 12:59:03 GMT</pubDate>
            <description><![CDATA[ This was a perfect weekend project: I would set up my own pet cam, connect it to the Internet, and make it available for me to check from anywhere in the world. ]]></description>
            <content:encoded><![CDATA[ <p>I adopted Ziggy in late 2020. It took me quite a while to get used to his routine and mix it with mine. He consistently jumped on the kitchen counter in search of food, albeit only when no one was around. And I only found out when he tossed the ceramic butter box. It shattered and made a loud bang in the late hours of the night. Thankfully, no one was asleep yet.</p><p>This got me thinking that I should keep an eye on his mischievous behaviour, even when I'm not physically at home. I briefly considered buying a pet cam, but I remembered I had bought a Raspberry Pi a few months before. It was hardly being used, and it had a case (like <a href="https://thepihut.com/collections/raspberry-pi-4-cases/products/pir-camera-case-for-raspberry-pi-4-3">this</a>) allowing a camera module to be added. I hadn’t found a use for the camera module — until now.</p><p>This was a perfect weekend project: I would set up my own pet cam, connect it to the Internet, and make it available for me to check from anywhere in the world. I also wanted to ensure that only I could access it and that it had some easy way to login, possibly using my Google account. The solution? Cloudflare Tunnel and Teams. Cloudflare would help me expose a service running in an internal network using <a href="https://www.cloudflare.com/products/tunnel/">Tunnel</a> while providing a security solution on top of it to keep it secure. <a href="https://www.cloudflare.com/teams/">Teams</a> on the other hand, would help me by adding access control in the form of Google authentication.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5bWBM8Z87zIOnvBG8TYXJ3/54946b52cfe1989747b9400f2c2f2b66/image10.jpg" />
            
            </figure><p>So all that was left was to configure my Raspberry Pi to run a camera as a web service. That weekend, I started researching and made a list of things I needed:</p><ul><li><p>A Raspberry Pi with a compatible camera module. I used a <a href="https://www.raspberrypi.org/products/raspberry-pi-4-model-b/">Raspberry Pi 4 model B</a> with <a href="https://www.raspberrypi.org/products/camera-module-v2/">camera module v2</a>.</p></li><li><p>Linux knowledge.</p></li><li><p>A domain name I could make changes to.</p></li><li><p>Understanding of how DNS works.</p></li><li><p>A Cloudflare account with Cloudflare for Teams+Tunnel access.</p></li><li><p>Internet connection.</p></li></ul><p>In this blog post, I’ll walk you through the process I followed to set everything up for the pet cam. To keep things simple and succinct, I will not cover how to set up your Raspberry Pi, but you should make sure it has Internet access and that you can run shell commands on it, either via <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a> or using a <a href="https://www.realvnc.com/en/">VNC connection</a>.</p>
    <div>
      <h3>Setup</h3>
      <a href="#setup">
        
      </a>
    </div>
    <p>The first thing we need to do is connect the camera module to the Raspberry Pi. For more detailed instructions, follow the <a href="https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/1">official guide</a>, steps 1 to 3.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4nMbgxZI3HylSD6wKx5ImA/cef37fd42e0ea3f4338658d08f45f168/image2.jpg" />
            
            </figure><p>After setting up the camera and testing that it works, we need to set it up to serve its feed over a web server. This is so we can access it at a URL such as <a href="https://192.168.0.2:8080">https://192.168.0.2:8080</a> within the local network to which the Raspberry Pi is also connected. To do that, we will use <a href="https://motion-project.github.io/">Motion</a>, a program for setting up the camera module v2 as a web server.</p><p>To install Motion, input these commands:</p>
            <pre><code>$ sudo apt-get update &amp;&amp; sudo apt-get upgrade
$ sudo apt install autoconf automake build-essential pkgconf libtool git libzip-dev libjpeg-dev gettext libmicrohttpd-dev libavformat-dev libavcodec-dev libavutil-dev libswscale-dev libavdevice-dev default-libmysqlclient-dev libpq-dev libsqlite3-dev libwebp-dev
$ sudo wget https://github.com/Motion-Project/motion/releases/download/release-4.3.1/pi_buster_motion_4.3.1-1_armhf.deb
$ sudo dpkg -i pi_buster_motion_4.3.1-1_armhf.deb</code></pre>
            <p>The above commands will update the local packages with new versions from the repositories and then install that version of Motion from Motion’s GitHub project.</p><p>Next, we need to configure Motion using:</p>
            <pre><code>$ sudo vim /etc/motion/motion.conf
# Find the following lines and update them to following:
# daemon on
# stream_localhost off
# save and exit</code></pre>
            <p>After that, we need to set Motion up as a daemon, so it runs whenever the system is restarted:</p>
            <pre><code>$ sudo vim /etc/default/motion
# and change the following line 
# start_motion_daemon=yes
# save and exit and run the next command
$ sudo service motion start</code></pre>
            <p>Great. Now that we have Motion set up, we can see the live feed from our camera in a browser on the Raspberry Pi at the default URL: <a href="http://0.0.0.0:8081"><b>http://localhost:8081</b></a> (the port can be changed in the config edit step above). Alternatively, we can open it on another machine within the same network by replacing localhost with the IP of the Raspberry Pi on the network.</p><p>For now, the camera web server is available only within our local network. However, I wanted to keep an eye on Ziggy no matter where I am, as long as I have Internet access and a browser. This is perfect for <a href="https://www.cloudflare.com/products/tunnel/">Cloudflare Tunnel</a>. An alternative would be to open a port in the firewall on the router in my home network, but I hate the idea of having to mess with the router configuration. I am not really an expert at that, and if I leave a backdoor open to my internal network, it can get scary quickly!</p><p>The Cloudflare Tunnel documentation takes us through <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/installation">its installation</a>. The only issue is that the architecture of the Raspberry Pi is based on armv7l (32-bit) and there is no package for it in the remote repositories. We could build cloudflared from source if we wanted, as it’s an <a href="https://github.com/cloudflare/cloudflared">open source project</a>, but an easier route is to <code>wget</code> it.</p>
            <pre><code>$ wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz
# a quick check of version shall confirm if it installed correctly
$ cloudflared -v 
cloudflared version 2021.5.10 (built 2021-05-26-1355 UTC)</code></pre>
            <p>Let’s set up our Tunnel now:</p>
            <pre><code>$ cloudflared tunnel create camera
Tunnel credentials written to /home/pi/.cloudflared/5f8182ba-906c-4910-98c3-7d042bda0594.json. cloudflared chose this file based on where your origin certificate was found. Keep this file secret. To revoke these credentials, delete the tunnel.

Created tunnel camera with id 5f8182ba-906c-4910-98c3-7d042bda0594</code></pre>
            <p>Now we need to configure the Tunnel to forward the traffic to the Motion webcam server:</p>
            <pre><code>$ vim /home/pi/.cloudflared/config.yaml 
# And add the following.
tunnel: 5f8182ba-906c-4910-98c3-7d042bda0594
credentials-file: /home/pi/.cloudflared/5f8182ba-906c-4910-98c3-7d042bda0594.json 

ingress:
  - hostname: camera.imohak.com
    service: http://0.0.0.0:9095
  - service: http_status:404</code></pre>
            <p>The tunnel UUID should be the one created with the command above, as should the path of the credentials file. The ingress should have the domain we want to use. In my case, I have set up camera.imohak.com as my domain and 404 as the fallback rule.</p><p>Next, we need to route the DNS to this Tunnel:</p>
            <pre><code>$ cloudflared tunnel route dns 5f8182ba-906c-4910-98c3-7d042bda0594 camera.imohak.com</code></pre>
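            <p>The record this creates points the hostname at the tunnel's <code>cfargotunnel.com</code> address (shown here as a zone-file-style sketch, using the tunnel UUID from above):</p>

```text
camera.imohak.com.  CNAME  5f8182ba-906c-4910-98c3-7d042bda0594.cfargotunnel.com.
```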
            <p>This adds a DNS CNAME record, which can be verified from the Cloudflare dashboard as shown here:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72ToAYL4KvcUUsN9mLC4bY/54dcf24add84b971a02085ecb3b1e708/image9.png" />
            
            </figure><p>Let’s test the Tunnel!</p>
            <pre><code>$ cloudflared tunnel run camera
2021-06-15T21:44:41Z INF Starting tunnel tunnelID=5f8182ba-906c-4910-98c3-7d042bda0594
2021-06-15T21:44:41Z INF Version 2021.5.10
2021-06-15T21:44:41Z INF GOOS: linux, GOVersion: go1.16.3, GoArch: arm
2021-06-15T21:44:41Z INF Settings: map[cred-file:/home/pi/.cloudflared/5f8182ba-906c-4910-98c3-7d042bda0594.json credentials-file:/home/pi/.cloudflared/5f8182ba-906c-4910-98c3-7d042bda0594.json]
2021-06-15T21:44:41Z INF cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
2021-06-15T21:44:41Z INF Generated Connector ID: 7e38566e-0d33-426d-b64d-326d0592486a
2021-06-15T21:44:41Z INF Initial protocol http2
2021-06-15T21:44:41Z INF Starting metrics server on 127.0.0.1:43327/metrics
2021-06-15T21:44:42Z INF Connection 6e7e0168-22a4-4804-968d-0674e4c3b4b1 registered connIndex=0 location=DUB
2021-06-15T21:44:43Z INF Connection fc83017d-46f9-4cee-8fc6-e4ee75c973f5 registered connIndex=1 location=LHR
2021-06-15T21:44:44Z INF Connection 62d28eee-3a1e-46ef-a4ba-050ae6e80aba registered connIndex=2 location=DUB
2021-06-15T21:44:44Z INF Connection 564164b1-7d8b-4c83-a920-79b279659491 registered connIndex=3 location=LHR</code></pre>
            <p>Next, we go to the browser and open the URL camera.imohak.com.</p><p><b>Voilà.</b> Or, not quite yet.</p>
    <div>
      <h3>Locking it Down</h3>
      <a href="#locking-it-down">
        
      </a>
    </div>
    <p>We still haven’t put any requirement for authentication on top of the server. Right now, anyone who knows about the domain can just open it and look at what is happening inside my house. Frightening, right? Thankfully we have two options now:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ZyzqSJev8PbTkyn3qFn8n/d37bc2f7ced529309f7f173485256d05/image7-1.png" />
            
            </figure><ol><li><p><b>Use </b><a href="https://motion-project.github.io/motion_config.html#webcontrol_authentication"><b>Motion’s inbuilt authentication mechanisms</b></a>. However, we won’t choose this option: it’s just another username and password to remember (and easily forget), and if a vulnerability were ever found in the way Motion authenticates, my credentials could be leaked. We are looking for SSO using Google, which is quick and easy to use and gives us a secure login based on Google credentials.</p></li><li><p><b>Use Cloudflare Access</b>. Access gives us the ability to create policies based on IP addresses and email addresses, and it lets us integrate different <a href="https://developers.cloudflare.com/cloudflare-one/identity">types of authentication methods</a>, such as OTP or <a href="https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/google">Google</a>. In our case, we require authentication through Google.</p></li></ol><p>To take advantage of this Cloudflare Access functionality, the first step is to set up Cloudflare for Teams. Visit <a href="https://dash.teams.cloudflare.com/">https://dash.teams.cloudflare.com/</a>, follow the <a href="https://developers.cloudflare.com/cloudflare-one/setup">setup guide</a> and choose a team name (imohak in my case).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Ckc0vrJf3mhwd00R4EPuO/43a6df1837a7eaf0fffd2128e8b467f1/image6-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ECuaRhkiYYhB0PAn5wb66/495aced4d3bd416324e365018e0da159/image13.png" />
            
            </figure><p>After this, we have two things left to do: add a login method and add an application. Let’s cover how we add a login method first. Navigate to <b>Configuration</b> &gt; <b>Authentication</b> and click on <b>+Add</b>, under the Login tab. The Dashboard will show us a list of identity providers to choose from. Select <b>Google</b> — follow <a href="https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/google">this guide</a> for a walkthrough of how to set up a Google Cloud application, get a ClientID and Client Secret, and use them to configure the identity provider in Teams.</p><p>After adding a login method and testing it, we should see a confirmation page like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4P79mOvcSRa4tG1MQZlV4b/aa025e7c0fcc069fbcba15750602f4c7/image4-2.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1mrPcowDoLHrYOgO0m5TJZ/56caf36503399da9978180448a6393ec/image11.png" />
            
            </figure><p>The last thing we need to do is to add the pet-cam subdomain as an application protected behind Teams. This enables us to enforce the Google authentication requirement we have configured before. To do that, navigate to <b>Access</b> &gt; <b>Applications</b>, click on <b>Add an application</b>, and select <b>Self-hosted.</b></p><p>On the next page, we specify a name, session duration and also the URL at which the application should be accessible. We add the subdomain <b>camera.imohak.com</b> and also name the app ‘camera’ to keep it simple.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Fw04ynQQofuRrzAEDtNrf/9c9f06d76be3fff0dab3a377b6f1b5bf/image5-6.png" />
            
            </figure><p>Next, we select Google as an identity provider for this application. Given that we are not choosing multiple authentication methods, I can also enable Instant Auth — this means we don’t need to select Google when we open the URL.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4afTGRdT9HqB27ijtHV1sz/899a61e1508a2b83cfa8542c6ebe5754/image3-7.png" />
            
            </figure><p>Now we add policies to the application. Here, we add an email check so that, after Google authentication, only the specified email address is able to access the URL. If needed, we can choose to configure other, more <a href="https://developers.cloudflare.com/cloudflare-one/policies/zero-trust">complex rules</a>. At this point, we click on <b>Next</b> and finish the setup.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1geMYdeb3TxwfOdNX2EbXc/ea5c14428ed5cdcdc83ec87dfa246c2c/image12.png" />
            
            </figure>
    <div>
      <h3>The Result</h3>
      <a href="#the-result">
        
      </a>
    </div>
    <p>The setup is now complete. Time to test everything! After opening the browser and entering my URL, <b>voilà.</b> Now, when I visit this URL, I see a Google authentication page and, after logging in, Ziggy eating his dinner.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/22S1tv39WLxSU92UNgnaPW/a54c9c1de414af27c0f68d9afaba55b2/image8.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Raspberry Pi]]></category>
            <category><![CDATA[Road to Zero Trust]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">5zgPI338Em7XJ667F2OIi8</guid>
            <dc:creator>Mohak Kataria</dc:creator>
        </item>
        <item>
            <title><![CDATA[Modernizing a familiar approach to REST APIs, with PostgreSQL and Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/modernizing-a-familiar-approach-to-rest-apis-with-postgresql-and-cloudflare-workers/</link>
            <pubDate>Wed, 04 Aug 2021 12:56:38 GMT</pubDate>
            <description><![CDATA[ By using PostgREST with Postgres, we can build REST API-based applications. In particular, it's an excellent fit for Cloudflare Workers, our serverless function platform. Workers is a great place to build REST APIs. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.postgresql.org/">Postgres</a> is a ubiquitous open-source database technology. It contains a vast number of features and offers rock-solid reliability. It's also one of the most popular <a href="https://www.cloudflare.com/developer-platform/products/d1/">SQL database tools</a> in the industry. As the industry builds “modern” developer experience tools—real-time and highly interactive—Postgres has also served as a great foundation. Projects like <a href="https://hasura.io/">Hasura</a>, which offers a real-time GraphQL engine, and <a href="https://supabase.io/">Supabase</a>, an open-source Firebase alternative, use Postgres under the hood. This makes Postgres a technology that every developer should know, and consider using in their applications.</p><p>For many developers, REST APIs serve as the primary way we interact with our data. Language-specific libraries like <a href="https://node-postgres.com"><code>pg</code></a> allow developers to connect with Postgres in their code, and directly interact with their databases. Yet in almost every case, developers reinvent the wheel, building the same connection logic on an app-by-app basis.</p><p>Many developers building applications with <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>, our serverless functions platform, have asked how they can use Postgres in Workers functions. Today, we're releasing <a href="https://developers.cloudflare.com/workers/tutorials/postgres">a new tutorial for Workers</a> that shows how to connect to Postgres inside Workers functions. 
Using <a href="https://postgrest.org/">PostgREST</a>, you'll write a REST API that communicates directly with your database, on the edge.</p><p>This means that you can build applications entirely on Cloudflare’s edge — using Workers as a performant and globally-distributed API, and <a href="https://pages.cloudflare.com/">Cloudflare Pages</a>, our Jamstack deployment platform, as the <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">host for your frontend user interface</a>. With Workers, you can add new API endpoints and handle authentication <i>in front</i> of your database without needing to alter your Postgres configuration. With features like Workers KV and Durable Objects, Workers can provide globally-distributed caching in front of your Postgres database. <a href="/introducing-websockets-in-workers/">Features like WebSockets</a> can be used to build real-time interactions for your applications, without having to migrate from Postgres to a new database-as-a-service platform.</p><p>PostgREST is an open-source tool that generates a standards-compliant REST API for your Postgres databases. Many growing database-as-a-service startups like <a href="https://retool.com/">Retool</a> and <a href="http://supabase.com/">Supabase</a> use PostgREST under the hood. PostgREST is fast and has great defaults, allowing you to access your Postgres data using standard REST conventions.</p><p>It’s great to be able to access your database directly from Workers, but do you really want to expose your database directly to the public Internet? Luckily, Cloudflare has a solution for this, and it works great with PostgREST: <a href="https://www.cloudflare.com/products/tunnel/">Cloudflare Tunnel</a>. Cloudflare Tunnel is one of my personal favorite products at Cloudflare. It creates a secure tunnel between your local server and the Cloudflare network. We want to expose our PostgREST endpoint without making our entire database available on the public internet. 
Cloudflare Tunnel allows us to do that securely.</p>
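            <p>As a sketch of what handling authentication <i>in front</i> of your database can look like, here is a minimal Workers-style handler that rejects requests before they ever reach PostgREST. The endpoint URL and token check below are illustrative placeholders, not part of the tutorial:</p>
            <pre><code>// Hypothetical PostgREST endpoint, reachable via Cloudflare Tunnel
const POSTGREST_URL = 'https://postgrest.example.com'

async function handleRequest(request) {
  // Gate every request on an (illustrative) bearer token
  const auth = request.headers.get('Authorization')
  if (auth !== 'Bearer my-secret-token') {
    return new Response('Unauthorized', { status: 401 })
  }
  // Forward the path and query string to PostgREST
  const url = new URL(request.url)
  return fetch(POSTGREST_URL + url.pathname + url.search)
}

// In a Worker module: export default { fetch: handleRequest }</code></pre>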
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yA7ys92hJceHCMsd04rkT/edb36c84a2ea43d56c4f3374c60a3a0b/image1-4.png" />
            
            </figure><p>By using PostgREST with Postgres, we can build REST API-based applications. In particular, it's an excellent fit for Cloudflare Workers, our serverless function platform. Workers is a great place to build REST APIs. With the open-source JavaScript library <a href="https://github.com/supabase/postgrest-js"><code>postgrest-js</code></a>, we can interact with a PostgREST endpoint from inside our Workers function, using simple JS-based primitives.</p><p><i>By the way — if you haven't built a REST API with Workers yet, </i><a href="https://egghead.io/courses/build-a-serverless-api-with-cloudflare-workers-d67ca551?af=a54gwi"><i>check out our free video course with Egghead: "Building a Serverless API with Cloudflare Workers"</i></a><i>.</i></p><p>Scaling applications built on Postgres is an incredibly common problem that developers face. Often, this means duplicating your Postgres database and distributing reads between your primary database, and a fleet of “read replicas”. With PostgREST and Workers, we can begin to explore a different approach to solving the scaling problem. <a href="https://developers.cloudflare.com/workers/learning/how-workers-works">Workers' unique architecture</a> allows us to deploy hyper-performant functions <i>in front</i> of Postgres databases. With tools like Workers KV and Durable Objects, exposed in Workers as basic JavaScript APIs, we can build intelligent caches for our databases, without sacrificing performance or developer experience.</p><p>If you'd like to learn more about building REST APIs in Cloudflare Workers using PostgREST, <a href="https://developers.cloudflare.com/workers/tutorials/postgres">check out our new tutorial</a>! We've also provided two open-source libraries to help you get started. 
<a href="https://github.com/cloudflare/postgres-postgrest-cloudflared-example"><code>cloudflare/postgres-postgrest-cloudflared-example</code></a> helps you set up a Cloudflare Tunnel-backed Postgres + PostgREST endpoint. <a href="https://github.com/cloudflare/postgrest-worker-example"><code>postgrest-worker-example</code></a> is an example of using postgrest-js inside of Cloudflare Workers, to build REST APIs with your Postgres databases.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qDgnCYYBrkCji8WvUgM33/09114cef001625e6e56236d2a3575cc0/image2-4.png" />
            
            </figure><p>With <code>postgrest-js</code>, you can build dynamic queries and request data from your database using the JS primitives you know and love:</p>
            <pre><code>// Import and configure the postgrest-js client
import { PostgrestClient } from '@supabase/postgrest-js'
const client = new PostgrestClient('https://your-postgrest-endpoint.example.com')

// Get all users with at least 100 followers
const { data: users, error } = await client
  .from('users')
  .select('*')
  .gte('followers', 100)</code></pre>
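            <p>Under the hood, those chained calls resolve to PostgREST’s query-string conventions (<code>select=*</code>, <code>column=operator.value</code>). A simplified sketch of that mapping (not the library’s actual implementation) looks like this:</p>
            <pre><code>// Simplified sketch of how chained filters map to a PostgREST URL.
// The real postgrest-js client supports many more operators and options.
function buildQuery(base, table, { select = '*', filters = [] } = {}) {
  const params = new URLSearchParams({ select })
  for (const [column, op, value] of filters) {
    params.append(column, `${op}.${value}`)
  }
  return `${base}/${table}?${params}`
}

buildQuery('https://postgrest.example.com', 'users', {
  filters: [['followers', 'gte', 100]],
})
// -&gt; https://postgrest.example.com/users?select=*&amp;followers=gte.100</code></pre>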
            <p>You can also join our Cloudflare Developers Discord community! Learn more about what you can build with Cloudflare Workers, and meet our wonderful community of developers from around the world. <a href="https://discord.gg/cloudflaredev">Get your invite link here.</a></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Postgres]]></category>
            <guid isPermaLink="false">8tTIUN8pM2HWOKhmrZfSG</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automating Cloudflare Tunnel with Terraform]]></title>
            <link>https://blog.cloudflare.com/automating-cloudflare-tunnel-with-terraform/</link>
            <pubDate>Fri, 14 May 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ An overview of how to use Terraform to automatically deploy Named Tunnels into your infrastructure with Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare Tunnel allows you to connect applications securely and quickly to Cloudflare’s edge. With Cloudflare Tunnel, teams can expose anything to the world, from internal subnets to containers, in a secure and fast way. Thanks to recent developments with our <a href="https://github.com/cloudflare/terraform-provider-cloudflare/issues/603">Terraform provider</a> and the advent of <a href="/argo-tunnels-that-live-forever/">Named Tunnels</a>, it’s never been easier to spin one up.</p>
    <div>
      <h3>Classic Tunnels to Named Tunnels</h3>
    </div>
    <p>Historically, the biggest limitation to using Cloudflare Tunnel at scale was that the process to create a tunnel was manual. A user needed to download the binary for their OS, install or compile it, and then run the command <code>cloudflared tunnel login</code>. This would open a browser to their Cloudflare account so they could download a <code>cert.pem</code> file to authenticate their tunnel against Cloudflare’s edge with their account.</p><p>With the jump to Named Tunnels and a supported <a href="https://api.cloudflare.com/#argo-tunnel-create-argo-tunnel">API endpoint</a>, Cloudflare users can automate this manual process. Named Tunnels also allow a <code>.json</code> file for the origin-side tunnel credentials instead of (or alongside) the <code>cert.pem</code> file. It has been a dream of mine since joining Cloudflare to define a Cloudflare Tunnel as code, along with my instance/application, and deploy it while I go walk my dog. Tooling should be easy to deploy and robust to use. That dream is now a reality and my dog could not be happier.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gSV6JJ9YIcaT8Mb0Jd3M3/7903db595fad3dcf6d88732542562f99/image3-2.png" />
            
            </figure>
    <div>
      <h3>Okay, so what?</h3>
    </div>
    <p>The ability to dynamically generate a tunnel and tie it into one or more backend applications brings several benefits to users, including: putting more of their Cloudflare config in code, auto-scaling resources, dynamically spinning up infrastructure such as bastion servers for secure logins, and saving the time otherwise spent manually generating and maintaining tunnels.</p><p>Tunnels also allow traffic to connect securely into Cloudflare’s edge for <i>only</i> the particular account they are affiliated with. In a world where IPs are increasingly ephemeral, tunnels allow for a modern approach to tying your application(s) into Cloudflare. Putting automation around tunnels allows teams to incorporate them into their existing <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD (continuous integration/continuous delivery) pipelines</a>.</p><p>Most importantly, the spin-up of an environment securely tied into Cloudflare can be achieved with some Terraform config and a run of <code>terraform apply</code>. I can then go take my pup on an adventure while my environment kicks off.</p>
    <div>
      <h3>Why Terraform?</h3>
    </div>
    <p>While there are numerous Infrastructure as Code tools out there, Terraform has an actively maintained <a href="https://registry.terraform.io/providers/cloudflare/cloudflare/latest/docs">Cloudflare provider</a>. This is not to say that this same functionality cannot be re-created by making use of the API endpoint with a tool of your choice. The overarching concepts here should translate quite nicely. Using Terraform we can deploy Cloudflare resources, origin resources, and configure our server all with one tool. Let’s see what setting that up looks like.</p>
    <div>
      <h3>Terraform Config</h3>
    </div>
    <p>The technical bits of this will cover how to set up an automated Named Tunnel that proxies traffic to a Google Cloud Platform (GCP) compute instance, which is my backend for this example. These concepts should be the same regardless of where you host your applications, from an on-prem location to a multi-cloud solution.</p><p>With <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/ingress">Cloudflare Tunnel’s Ingress Rules</a>, we can use a single tunnel to proxy traffic to a number of local services. In our case we will tie into a Docker container running <a href="https://httpbin.org/">HTTPbin</a> and the local SSH daemon. These endpoints represent a standard login protocol (such as SSH or RDP) and an example web application (HTTPbin). We can even take it a step further by applying a <a href="https://www.cloudflare.com/teams/access/">Zero Trust framework with Cloudflare Access</a> over the SSH hostname.</p><p>The version of Terraform used in this example is 0.15.0. Please refer to the provider documentation when using the Cloudflare Terraform provider. Tunnels are compatible with Terraform version 0.13+.</p>
            <pre><code>cdlg at cloudflare in ~/Documents/terraform/blog on master
$ terraform --version
Terraform v0.15.0
on darwin_amd64
+ provider registry.terraform.io/cloudflare/cloudflare v2.18.0
+ provider registry.terraform.io/hashicorp/google v3.56.0
+ provider registry.terraform.io/hashicorp/random v3.0.1
+ provider registry.terraform.io/hashicorp/template v2.2.0</code></pre>
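            <p>These constraints are typically pinned in a file like <code>versions.tf</code> (one of the files in the tree below). As a rough sketch, and assuming the provider versions shown above, such a file might look like this:</p>
            <pre><code># Sketch of a versions.tf pinning the providers shown above
terraform {
  required_version = "&gt;= 0.13"

  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~&gt; 2.18"
    }
    google = {
      source  = "hashicorp/google"
      version = "~&gt; 3.56"
    }
  }
}</code></pre>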
            <p>Here is what the Terraform hierarchy looks like for this setup.</p>
            <pre><code>cdlg at cloudflare in ~/Documents/terraform/blog on master
$ tree .
.
├── README.md
├── access.tf
├── argo.tf
├── bootstrap.tf
├── instance.tf
├── server.tpl
├── terraform.tfstate
├── terraform.tfstate.backup
├── terraform.tfvars
├── terraform.tfvars.example
├── test.plan
└── versions.tf

0 directories, 12 files</code></pre>
            <p>We can ignore the files <code>README.md</code> and <code>terraform.tfvars.example</code> for now. The files ending in <code>.tf</code> are where our Terraform configuration lives, and each file serves a specific purpose. For example, the <code>instance.tf</code> file only contains the GCP server resources used with this deployment and the affiliated DNS records pointing to the tunnel running on it.</p>
            <pre><code>cdlg at cloudflare in ~/Documents/terraform/blog on master
$ cat instance.tf
# Instance information
data "google_compute_image" "image" {
  family  = "ubuntu-minimal-1804-lts"
  project = "ubuntu-os-cloud"
}

resource "google_compute_instance" "origin" {
  name         = "test"
  machine_type = var.machine_type
  zone         = var.zone
  tags         = ["no-ssh"]

  boot_disk {
    initialize_params {
      image = data.google_compute_image.image.self_link
    }
  }

  network_interface {
    network = "default"
    access_config {
      // Ephemeral IP
    }
  }
  // Optional config to make instance ephemeral
  scheduling {
    preemptible       = true
    automatic_restart = false
  }

  metadata_startup_script = templatefile("./server.tpl",
    {
      web_zone    = var.cloudflare_zone,
      account     = var.cloudflare_account_id,
      tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
      tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
      secret      = random_id.argo_secret.b64_std
    })
}



# DNS settings to CNAME to tunnel target
resource "cloudflare_record" "http_app" {
  zone_id = var.cloudflare_zone_id
  name    = var.cloudflare_zone
  value   = "${cloudflare_argo_tunnel.auto_tunnel.id}.cfargotunnel.com"
  type    = "CNAME"
  proxied = true
}

resource "cloudflare_record" "ssh_app" {
  zone_id = var.cloudflare_zone_id
  name    = "ssh"
  value   = "${cloudflare_argo_tunnel.auto_tunnel.id}.cfargotunnel.com"
  type    = "CNAME"
  proxied = true
}</code></pre>
            <p>This is a personal preference; if desired, the entire Terraform config could be put into one file. One thing to note is the usage of variables throughout the files. For example, the value of <code>var.cloudflare_zone</code> is populated with the value provided to it from the <code>terraform.tfvars</code> file. This allows the configuration to be used as a template with other deployments. The only change that would be necessary is updating the relevant variables, such as in the <code>terraform.tfvars</code> file, when re-using the configuration.</p><p>When using a variables file such as <code>terraform.tfvars</code> (rather than environment variables), it is very important that the file is exempted from version tracking. With git this is accomplished with a <code>.gitignore</code> file. Before running this example, the <code>terraform.tfvars.example</code> file is copied to <code>terraform.tfvars</code> within the same directory and filled in as needed. The <code>.gitignore</code> file is told to ignore any file named <code>terraform.tfvars</code>, exempting the actual variable values from version tracking.</p>
            <pre><code>cdlg at cloudflare in ~/Documents/terraform/blog on master
$ cat .gitignore
# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log

# Ignore any .tfvars files that are generated automatically for each Terraform run. Most
# .tfvars files are managed as part of configuration and so should be included in
# version control.
#
# example.tfvars
terraform.tfvars

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using negated pattern
#
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*
*tfplan*
*.plan*
*lock*</code></pre>
            <p>Now to the fun stuff! To create a Cloudflare Tunnel in Terraform we only need to set the following resources in our Terraform config (this is what populates the <code>argo.tf</code> file).</p>
            <pre><code>resource "random_id" "argo_secret" {
  byte_length = 35
}

resource "cloudflare_argo_tunnel" "auto_tunnel" {
  account_id = var.cloudflare_account_id
  name       = "zero_trust_ssh_http"
  secret     = random_id.argo_secret.b64_std
}</code></pre>
            <p>That’s it.</p><p>Technically you could get away with just the <code>cloudflare_argo_tunnel</code> resource, but using the <code>random_id</code> resource saves us from hard-coding the secret for the tunnel. Instead, we can dynamically generate a secret for our tunnel each time we run Terraform.</p><p>Let’s break down what is happening in the <code>cloudflare_argo_tunnel</code> resource: we are passing the Cloudflare account ID (via the <code>var.cloudflare_account_id</code> variable), a name for our tunnel, and the dynamically generated secret for the tunnel, which is pulled from the <code>random_id</code> resource. Tunnels expect the secret to be encoded in standard base64 and at least 32 bytes long.</p><p>Using Named Tunnels now gives customers a UUID (universally unique identifier) target to tie their applications to. These endpoints are routed off an internal domain to Cloudflare and can only be used with zones in your account, as mentioned earlier. This means that one tunnel can proxy multiple applications for various zones in your account, thanks to Cloudflare Tunnel Ingress Rules.</p><p>Now that we have a target for our services, we can set up the tunnel and applications on the GCP instance. Terraform has a <a href="https://www.terraform.io/docs/language/functions/templatefile.html">templatefile function</a> that allows you to pass input variables as local variables (i.e., values the server can use to configure things) to an argument called <code>metadata_startup_script</code>.</p>
            <pre><code>resource "google_compute_instance" "origin" {
...
  metadata_startup_script = templatefile("./server.tpl", 
    {
      web_zone    = var.cloudflare_zone,
      account     = var.cloudflare_account_id,
      tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
      tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
      secret      = random_id.argo_secret.b64_std
    })
}</code></pre>
            <p>This abbreviated section of the <code>google_compute_instance</code> resource shows a templatefile using five variables passed to the file located at <code>./server.tpl</code>. The file <code>server.tpl</code> is a bash script within the local directory that will configure the newly created GCP instance.</p><p>As indicated earlier, Named Tunnels can make use of a JSON credentials file instead of the historic <code>cert.pem</code> file. By using a templatefile function pointing to a bash script (or cloud-init, etc.) we can dynamically generate the fields that populate both the <code>cert.json</code> file and the <code>config.yml</code> file used for Ingress Rules on the server/host. The bash script can then install <code>cloudflared</code> as <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/run-tunnel/run-as-service">a system service</a>, so it is persistent (i.e., it comes back up after the machine is rebooted). Here is an example of this.</p>
            <pre><code>wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
sudo dpkg -i cloudflared-stable-linux-amd64.deb
mkdir ~/.cloudflared
touch ~/.cloudflared/cert.json
touch ~/.cloudflared/config.yml
cat &gt; ~/.cloudflared/cert.json &lt;&lt; "EOF"
{
    "AccountTag"   : "${account}",
    "TunnelID"     : "${tunnel_id}",
    "TunnelName"   : "${tunnel_name}",
    "TunnelSecret" : "${secret}"
}
EOF
cat &gt; ~/.cloudflared/config.yml &lt;&lt; "EOF"
tunnel: ${tunnel_id}
credentials-file: /etc/cloudflared/cert.json
logfile: /var/log/cloudflared.log
loglevel: info

ingress:
  - hostname: ${web_zone}
    service: http://localhost:8080
  - hostname: ssh.${web_zone}
    service: ssh://localhost:22
  - hostname: "*"
    service: hello-world
EOF

sudo cloudflared service install
sudo cp -via ~/.cloudflared/cert.json /etc/cloudflared/

cd /tmp
sudo docker-compose up -d &amp;&amp; sudo service cloudflared start</code></pre>
            <p>In this example, a <a href="https://tldp.org/LDP/abs/html/here-docs.html">heredoc</a> is used to fill in the variable fields for the <code>cert.json</code> file, and another heredoc is used to fill in the <code>config.yml</code> (Ingress Rules) file with the variables we set in Terraform. Taking a quick look at the <code>cert.json</code> file, we can see that the Account ID is provided to it, which secures the tunnel to your specific account. The UUID of the tunnel is then passed in along with the name that was assigned in the tunnel’s name argument. Lastly, the 35-byte secret is passed to the tunnel. These are the necessary parameters to get our tunnel spun up against Cloudflare’s edge.</p><p>The <code>config.yml</code> file is where we set up the Ingress Rules for the Cloudflare Tunnel. The first few lines tell the tunnel which UUID to attach to, where the credentials are on the OS, and where the tunnel should write logs. The log level of <code>info</code> is good for general use, but for troubleshooting <code>debug</code> may be needed.</p><p>Next, the first <code>hostname:</code> rule says that any requests bound for that particular hostname should be proxied to the service (HTTPbin) running at <code>localhost</code> port 8080. Following that, the SSH target is defined and will proxy requests to the local SSH port. The next hostname is interesting in that it uses a wildcard character. This allows other zones or hostnames on the account to point to the tunnel without being explicitly defined in Ingress Rules. The service that responds to these requests is a built-in <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/ingress#supported-protocols">hello world service</a> the tunnel provides.</p><p>Pretty neat, but what else can we do? We can block all <a href="https://developers.cloudflare.com/cloudflare-one/faq/tunnel/#how-can-origin-servers-be-secured-when-using-tunnel">inbound networking</a> to the server and instead use Cloudflare Tunnel to proxy the connections to Cloudflare’s edge. To safeguard the SSH hostname, <a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-apps/">an Access policy</a> can be applied to it.</p>
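            <p>This is why the instance in <code>instance.tf</code> carries the <code>no-ssh</code> tag. As a rough sketch (the rule name and scope here are illustrative, not part of the example repository), a GCP firewall rule denying inbound SSH to instances with that tag could look like this:</p>
            <pre><code># Hypothetical firewall rule: deny inbound SSH to instances tagged "no-ssh".
# cloudflared only makes outbound connections, so the tunnel keeps working.
resource "google_compute_firewall" "no_ssh" {
  name      = "deny-ssh-no-ssh-tag"
  network   = "default"
  direction = "INGRESS"

  deny {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["no-ssh"]
}</code></pre>
            <p>With a rule like that in place, SSH reaches the server only through the tunnel, which connects via <code>localhost</code>.</p>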
    <div>
      <h3>SSH and Zero Trust</h3>
    </div>
    <p>The Access team has several tutorials on how to <a href="https://developers.cloudflare.com/cloudflare-one/api-terraform/access-with-terraform">tie your policies into Terraform</a>. Using these as a guide, we can create the Access-related Terraform resources for the SSH endpoint.</p>
            <pre><code># Access policy to apply zero trust policy over SSH endpoint
resource "cloudflare_access_application" "ssh_app" {
  zone_id          = var.cloudflare_zone_id
  name             = "Access protection for ssh.${var.cloudflare_zone}"
  domain           = "ssh.${var.cloudflare_zone}"
  session_duration = "1h"
}

resource "cloudflare_access_policy" "ssh_policy" {
  application_id = cloudflare_access_application.ssh_app.id
  zone_id        = var.cloudflare_zone_id
  name           = "Example Policy for ssh.${var.cloudflare_zone}"
  precedence     = "1"
  decision       = "allow"

  include {
    email = [var.cloudflare_email]
  }
}</code></pre>
            <p>In the above <code>cloudflare_access_application</code> resource, a variable, <code>var.cloudflare_zone_id</code>, is used to pull in the Cloudflare Zone’s ID based on the value of the variable provided in the <code>terraform.tfvars</code> file. The Zone Name is also dynamically populated at runtime in the <code>var.cloudflare_zone</code> fields based on the value provided in the <code>terraform.tfvars</code> file. We also limit the scope of this access policy to <code>ssh.targetdomain.com</code> using the <code>domain</code> argument in the <code>cloudflare_access_application</code> resource.</p><p>In the <code>cloudflare_access_policy</code> resource, we take the information provided by the <code>cloudflare_access_application</code> resource called <code>ssh_app</code> and apply it as an active policy. The scope of who is allowed to log into this endpoint is the user’s email as provided by the <code>var.cloudflare_email</code> variable.</p>
    <div>
      <h3>Terraform Spin up and SSH Connection</h3>
    </div>
    <p>Now to connect to this SSH endpoint. First we need to spin up our environment. This can be done with <code>terraform plan</code> and then <code>terraform apply</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7t9hVHmNwlWQASoIFnboLt/9efaaf728a3a432e73afff6c666544f6/image5-2.png" />
            
            </figure><p>On my workstation I have <code>cloudflared</code> installed, and I have updated my SSH config to proxy traffic for this SSH endpoint through <code>cloudflared</code>.</p>
            <pre><code>cdlg at cloudflare in ~
$ cloudflared --version
cloudflared version 2021.4.0 (built 2021-04-07-2111 UTC)

cdlg at cloudflare in ~
$ grep -A2 'ssh.chrisdlg.com' ~/.ssh/config
Host ssh.chrisdlg.com
    IdentityFile /Users/cdlg/.ssh/google_compute_engine
    ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h</code></pre>
            <p>I can then SSH as my local user on the remote machine (cdlg) to the SSH hostname (ssh.chrisdlg.com). The instance of <code>cloudflared</code> running on my workstation will then proxy this request.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BcyIgXbVE2wu6rRq6pZgJ/8690b09b9e60f28a7fac1377942b7bc4/image4-3.png" />
            
            </figure><p>This will open a new tab in my current browser and direct me to the Cloudflare Access application we just created with Terraform. Earlier, in the Access resources, we set the Cloudflare user denoted by the <code>var.cloudflare_email</code> variable as the criterion for the Access policy. If the correct email address is provided, the user will receive an email similar to the following.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Z4k965ZDENHc6gTFX0AZx/6914c9f98fdab2960ce873491966471d/image1-3.png" />
            
            </figure><p>Following the link or providing the pin on the previously opened tab will complete the authentication. Hitting ‘approve’ tells Cloudflare Access that the user should be allowed through for the length of the <code>session_duration</code> argument in the <code>cloudflare_access_application</code> resource. Navigating back to the terminal, we can see that we are now on the server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zogkUR1oIkrVNTAXAMlnl/35dfd66180b79030974a4df03154978a/image6.png" />
            
            </figure><p>If we check the server’s authentication log we can see that connections from the tunnel are coming in via <code>localhost (127.0.0.1)</code>. This allows us to lock down external network access on the SSH port of the server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1HQdBMj9vWK7HiKP6VXwcI/383ba96ae8d91fd2eec69bf43a41e895/image2-2.png" />
            
            </figure><p>The full config of this deployment can be viewed <a href="https://github.com/cloudflare/argo-tunnel-examples/tree/master/terraform-zerotrust-ssh-http-gcp">here</a>.</p><p>The roadmap for Cloudflare Tunnel is bright. Hopefully this walkthrough provided some quick context on what you can achieve with Cloudflare Tunnel and the rest of Cloudflare. Personally, my dog is quite happy that I have more time to take him on walks. We’re very excited to see what you build!</p> ]]></content:encoded>
            <category><![CDATA[Terraform]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">62yhRjZfOrEwEMIQAf61Am</guid>
            <dc:creator>Chris De La Garza</dc:creator>
        </item>
    </channel>
</rss>