
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 21:09:00 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Announcing Custom DLP profiles]]></title>
            <link>https://blog.cloudflare.com/custom-dlp-profiles/</link>
            <pubDate>Tue, 10 Jan 2023 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Data Loss Prevention now offers the ability to create custom detections. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qJZd1dizqFIdaPcbY7Xxo/c285a34ed84c8120f86801f516037e27/image5-4.png" />
            
            </figure>
    <div>
      <h3>Introduction</h3>
      <a href="#introduction">
        
      </a>
    </div>
    <p>Where does sensitive data live? Who has access to that data? How do I know if that data has been improperly shared or leaked? These questions keep many IT and security administrators up at night. The goal of <a href="https://www.cloudflare.com/learning/access-management/what-is-dlp/">data loss prevention (DLP)</a> is to give administrators the desired visibility and control over their sensitive data.</p><p>We shipped the <a href="/inline-dlp-ga/">general availability of DLP</a> in September 2022, offering Cloudflare One customers better protection of their sensitive data. With DLP, customers can identify sensitive data in their corporate traffic, evaluate the intended destination of the data, and then allow or block it accordingly -- with details logged as permitted by your privacy and sovereignty requirements. We began by offering customers predefined detections for identifier numbers (e.g. Social Security #s) and financial information (e.g. credit card #s). Since then, nearly every customer has asked:</p><blockquote><p>“When can I build my own detections?”</p></blockquote><p>Most organizations care about credit card numbers, which use standard patterns that are easily detectable. But the data patterns of intellectual property or trade secrets vary widely between industries and companies, so customers need a way to detect the loss of their unique data. This can include internal project names, unreleased product names, or unannounced partner names.</p><p>As of today, your organization can build custom detections to identify these types of sensitive data using Cloudflare One. That’s right: you can now build Custom DLP Profiles using the same regular expression approach that is used in policy building across our platform.</p>
    <div>
      <h3>How to use it</h3>
      <a href="#how-to-use-it">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/products/zero-trust/dlp/">Cloudflare’s DLP</a> is embedded in our <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">secure web gateway (SWG)</a> product, <a href="https://www.cloudflare.com/products/zero-trust/gateway/">Cloudflare Gateway</a>, which routes your corporate traffic through Cloudflare for fast, safe Internet browsing. As your traffic passes through Cloudflare, you can inspect that HTTP traffic for sensitive data and apply DLP policies.</p><p>Building DLP custom profiles follows the same intuitive approach you’ve come to expect from Cloudflare.</p><p>First, in the Zero Trust dashboard, navigate to the DLP Profiles tab under Gateway:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kfzsPRGyGAS35yu7A0uC9/e02371cee6289fe92c38b0dacbbccef1/image2-13.png" />
            
            </figure><p>Here you will find any available DLP profiles, either predefined or custom:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/199ahBcyglz52Lpsofvl1F/0ef3b17dd0a17ffc290db20d735bdc98/image1-20.png" />
            
            </figure><p>Select <b>Create Profile</b> to begin a new one. After providing a name and description, select <b>Add detection entry</b> to add a custom regular expression. A <a href="https://en.wikipedia.org/wiki/Regular_expression">regular expression</a>, or regex, is a sequence of characters that specifies a search pattern in text, and is a standard way for administrators to achieve the flexibility and granularity they need in policy building.</p><p>Cloudflare Gateway currently supports regexes in HTTP policies using the <a href="https://docs.rs/regex/latest/regex/#syntax">Rust regex crate</a>. For consistency, we used the same crate to offer custom DLP detections. For details on our regex support, see <a href="https://developers.cloudflare.com/cloudflare-one/policies/filtering/http-policies/data-loss-prevention/#build-a-custom-profile">our documentation</a>.</p><p>Regular expressions can be used to build custom PII detections of your choosing, such as email addresses, or to detect keywords for sensitive intellectual property.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KclkjaZOxF6PMVkY0L2yM/9b327971802a650d2ffc7977fcd712c0/image3-9.png" />
            
            </figure><p>Provide a name and a regex of your choosing. Every entry in a DLP profile is a new detection that you can scan for in your corporate traffic. Our <a href="https://developers.cloudflare.com/cloudflare-one/policies/filtering/http-policies/data-loss-prevention/#build-a-custom-profile">documentation</a> provides resources to help you create and test Rust regexes.</p><p>Below is an example of regex to detect a simple email address:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4QUMf9IdSNh6i81pYWdzUb/b4295e07168c644dce339715c493bb43/image7-1.png" />
            
            </figure><p>When you are done, you will see the entry in your profile.  You can turn entries on and off in the <b>Status</b> field for easier testing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3XCjno9mGXiEPz3oCv8BwL/96c7075d1516f4e8fb6484ba5f01fbfc/image4-5.png" />
            
            </figure><p>The custom profile can then be applied to traffic using an HTTP policy, just like a predefined profile. Here both a predefined and custom profile are used in the same policy, blocking sensitive traffic to dlptest.com:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6cUsE6rB7AcS5oQMQVicXs/b5858d4f1aff7a7295d1c04edbe9000c/image6.png" />
            
            </figure>
    <div>
      <h3>Our DLP roadmap</h3>
      <a href="#our-dlp-roadmap">
        
      </a>
    </div>
    <p>This is just the start of our DLP journey, and we aim to grow the product exponentially in the coming quarters. In Q4 we delivered:</p><ul><li><p>Expanded Predefined DLP Profiles</p></li><li><p>Custom DLP Profiles</p></li><li><p>PDF scanning support</p></li><li><p>Upgraded file name logging</p></li></ul><p>Over the next quarters, we will add a number of features, including:</p><ul><li><p>Data at rest scanning with Cloudflare CASB</p></li><li><p>Minimum DLP match counts</p></li><li><p>Microsoft Sensitivity Label support</p></li><li><p>Exact Data Match (EDM)</p></li><li><p>Context analysis</p></li><li><p>Optical Character Recognition (OCR)</p></li><li><p>Even more predefined DLP detections</p></li><li><p>DLP analytics</p></li><li><p>Many more!</p></li></ul><p>Each of these features will offer you new data visibility and control solutions, and we are excited to bring these features to customers very soon.</p>
    <div>
      <h3>How do I get started?</h3>
      <a href="#how-do-i-get-started">
        
      </a>
    </div>
    <p>DLP is part of Cloudflare One, our <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> network-as-a-service platform that connects users to enterprise resources. Our <a href="/inline-dlp-ga/">GA blog announcement</a> provides more detail about using Cloudflare One to onboard traffic to DLP.</p><p>To get access to DLP via Cloudflare One, <a href="https://www.cloudflare.com/lp/cio-week-2023-cloudflare-one-contact-us/">reach out for a consultation</a>, or contact your account manager.</p>
            <category><![CDATA[CIO Week]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[Data Loss Prevention]]></category>
            <category><![CDATA[DLP]]></category>
            <category><![CDATA[Secure Web Gateway]]></category>
            <guid isPermaLink="false">YVG5VxqbfYehlg0rqucXP</guid>
            <dc:creator>Adam Chalmers</dc:creator>
            <dc:creator>Noelle Kagan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Pin, Unpin, and why Rust needs them]]></title>
            <link>https://blog.cloudflare.com/pin-and-unpin-in-rust/</link>
            <pubDate>Thu, 26 Aug 2021 15:04:04 GMT</pubDate>
            <description><![CDATA[ Using async Rust libraries is usually easy. It's just like using normal Rust code, with a little async or .await here and there. But writing your own async libraries can be hard.  ]]></description>
            <content:encoded><![CDATA[ <p>Using async Rust libraries is usually easy. It's just like using normal Rust code, with a little <code>async</code> or <code>.await</code> here and there. But writing your own async libraries can be hard. The first time I tried this, I got really confused by arcane, esoteric syntax like <code>T: ?Unpin</code> and <code>Pin&lt;&amp;mut Self&gt;</code>. I had never seen these types before, and I didn't understand what they were doing. Now that I understand them, I've written the explainer I wish I could have read back then. In this post, we're gonna learn</p><ul><li><p>What Futures are</p></li><li><p>What self-referential types are</p></li><li><p>Why they were unsafe</p></li><li><p>How Pin/Unpin made them safe</p></li><li><p>Using Pin/Unpin to write tricky nested futures</p></li></ul>
    <div>
      <h3>What are Futures?</h3>
      <a href="#what-are-futures">
        
      </a>
    </div>
    <p>A few years ago, I needed to write some code which would take some async function, run it and collect some metrics about it, e.g. how long it took to resolve. I wanted to write a type <code>TimedWrapper</code> that would work like this:</p>
            <pre><code>// Some async function, e.g. polling a URL with [https://docs.rs/reqwest]
// Remember, Rust futures do nothing until you .await them, so this isn't
// actually making an HTTP request yet.
let async_fn = reqwest::get("http://adamchalmers.com");

// Wrap the async function in my hypothetical wrapper.
let timed_async_fn = TimedWrapper::new(async_fn);

// Call the async function, which will send an HTTP request and time it.
let (resp, time) = timed_async_fn.await;
println!("Got a HTTP {} in {}ms", resp.unwrap().status(), time.as_millis())</code></pre>
            <p></p><p>I like this interface, it's simple and should be easy for the other programmers on my team to use. OK, let's implement it! I know that, under the hood, Rust's async functions are just regular functions that return a <a href="https://doc.rust-lang.org/stable/std/future/trait.Future.html"><code>Future</code></a>. The Future trait is pretty simple. It just means a type which:</p><ul><li><p>Can be polled</p></li><li><p>When it's polled, it might return "Pending" or "Ready"</p></li><li><p>If it's pending, you should poll it again later</p></li><li><p>If it's ready, it responds with a value. We call this "resolving".</p></li></ul><p>Here's a really easy example of implementing a Future. Let's make a Future that returns a random <code>u16</code>.</p>
            <pre><code>use std::{future::Future, pin::Pin, task::{Context, Poll}};

/// A future which returns a random number when it resolves.
#[derive(Default)]
struct RandFuture;

impl Future for RandFuture {
	// Every future has to specify what type of value it returns when it resolves.
	// This particular future will return a u16.
	type Output = u16;

	// The `Future` trait has only one method, named "poll".
	fn poll(self: Pin&lt;&amp;mut Self&gt;, _cx: &amp;mut Context) -&gt; Poll&lt;Self::Output&gt; {
		Poll::Ready(rand::random())
	}
}</code></pre>
            <p></p><p>Not too hard! I think we're ready to implement <code>TimedWrapper</code>.</p>
    <div>
      <h3>Trying and failing to use nested Futures</h3>
      <a href="#trying-and-failing-to-use-nested-futures">
        
      </a>
    </div>
    <p>Let's start by defining the type.</p>
            <pre><code>pub struct TimedWrapper&lt;Fut: Future&gt; {
	start: Option&lt;Instant&gt;,
	future: Fut,
}</code></pre>
            <p></p><p>OK, so a <code>TimedWrapper</code> is generic over a type <code>Fut</code>, which must be a <code>Future</code>. And it will store a future of that type as a field. It'll also have a <code>start</code> field which will record when it was first polled. Let's write a constructor:</p>
            <pre><code>impl&lt;Fut: Future&gt; TimedWrapper&lt;Fut&gt; {
	pub fn new(future: Fut) -&gt; Self {
		Self { future, start: None }
	}
}</code></pre>
            <p></p><p>Nothing too complicated here. The <code>new</code> function takes a future and wraps it in the <code>TimedWrapper</code>. Of course, we have to set start to None, because it hasn't been polled yet. So, let's implement the <code>poll</code> method, which is the only thing we need to implement <code>Future</code> and make it <code>.await</code>able.</p>
            <pre><code>impl&lt;Fut: Future&gt; Future for TimedWrapper&lt;Fut&gt; {
	// This future will output a pair of values:
	// 1. The value from the inner future
	// 2. How long it took for the inner future to resolve
	type Output = (Fut::Output, Duration);

	fn poll(self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut Context) -&gt; Poll&lt;Self::Output&gt; {
		// Call the inner poll, measuring how long it took.
		let start = self.start.get_or_insert_with(Instant::now);
		let inner_poll = self.future.poll(cx);
		let elapsed = start.elapsed();

		match inner_poll {
			// The inner future needs more time, so this future needs more time too
			Poll::Pending =&gt; Poll::Pending,
			// Success!
			Poll::Ready(output) =&gt; Poll::Ready((output, elapsed)),
		}
	}
}</code></pre>
            <p></p><p>OK, that wasn't too hard. There's just one problem: this doesn't work.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vQYwC1N4iXqv6DzpxWJDq/ce5125480ef4ad35b8eb0d4882a408bf/Screen-Shot-2021-08-25-at-11.15.17-PM.png" />
            
            </figure><p></p><p>So, the Rust compiler reports an error on <code>self.future.poll(cx)</code>, which is "no method named <code>poll</code> found for type parameter <code>Fut</code> in the current scope". This is confusing, because we know <code>Fut</code> is a <code>Future</code>, so surely it has a poll method? OK, but Rust continues: <code>Fut</code> doesn't have a poll method, but <code>Pin&lt;&amp;mut Fut&gt;</code> has one. What is this weird type?</p><p>Well, we know that methods have a "receiver", which is some way it can access <code>self</code>. The receiver might be <code>self, &amp;self or &amp;mut self</code>, which mean "take ownership of self," "borrow self," and "mutably borrow self" respectively. So this is just a new, unfamiliar kind of receiver. Rust is complaining because we have Fut and we really need a <code>Pin&lt;&amp;mut Fut&gt;</code>. At this point I have two questions:</p><ol><li><p>What is <code>Pin</code>?</p></li><li><p>If I have a T value, how do I get a <code>Pin&lt;&amp;mut T&gt;</code>?</p></li></ol><p>The rest of this post is going to be answering those questions. I'll explain some problems in Rust that could lead to unsafe code, and why Pin safely solves them.</p>
    <div>
      <h3>Self-reference is unsafe</h3>
      <a href="#self-reference-is-unsafe">
        
      </a>
    </div>
    <p>Pin exists to solve a very specific problem: self-referential datatypes, i.e. data structures which have pointers into themselves. For example, a binary search tree might have self-referential pointers, which point to other nodes in the same struct.</p><p>Self-referential types can be really useful, but they're also hard to make memory-safe. To see why, let's use this example type with two fields, an i32 called <code>val</code> and a pointer to an i32 called <code>pointer</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DejT3ngy1ze7AWuwy37Zq/6b9d388bb27d37a9d06f1e8ef2898615/memory_before.png" />
            
            </figure><p></p><p>So far, everything is OK. The <code>pointer</code> field points to the val field in memory address A, which contains a valid i32. All the pointers are <i>valid</i>, i.e. they point to memory that does indeed encode a value of the right type (in this case, an i32). But the Rust compiler often moves values around in memory. For example, if we pass this struct into another function, it might get moved to a different memory address. Or we might Box it and put it on the heap. Or if this struct was in a <code>Vec&lt;MyStruct&gt;</code>, and we pushed more values in, the Vec might outgrow its capacity and need to move its elements into a new, larger buffer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WF4QNkOCFQiAIsmpbUaQK/fb8d03bc290071fb69a6d35399bec85f/memory_after.png" />
            
            </figure><p></p><p>When we move it, the struct's fields change their address, but not their value. So the <code>pointer</code> field is still pointing at address A, but address A now doesn't have a valid i32. The data that was there was moved to address B, and some other value might have been written there instead! So now the pointer is invalid. This is bad -- at best, invalid pointers cause crashes, at worst they cause hackable vulnerabilities. We only want to allow memory-unsafe behaviour in unsafe blocks, and we should be very careful to document this type and tell users to update the pointers after moves.</p>
    <div>
      <h3>Unpin and !Unpin</h3>
      <a href="#unpin-and-unpin">
        
      </a>
    </div>
    <p>To recap, all Rust types fall into two categories.</p><ol><li><p>Types that are safe to move around in memory. This is the default, the norm. For example, this includes primitives like numbers, strings, bools, as well as structs or enums entirely made of them. Most types fall into this category!</p></li><li><p>Self-referential types, which are <i>not</i> safe to move around in memory. These are pretty rare. An example is the <a href="https://docs.rs/tokio/1.10.0/src/tokio/util/linked_list.rs.html">intrusive linked list inside some Tokio internals</a>. Another example is most types which implement Future and also borrow data, for reasons <a href="https://rust-lang.github.io/async-book/04_pinning/01_chapter.html">explained in the Rust async book</a>.</p></li></ol><p>Types in category (1) are totally safe to move around in memory. You won't invalidate any pointers by moving them around. But if you move a type in (2), then you invalidate pointers and can get undefined behaviour, as we saw before. In earlier versions of Rust, you had to be really careful using these types to not move them, or if you moved them, to use unsafe and update all the pointers. But since Rust 1.33, the compiler can automatically figure out which category any type is in, and make sure you only use it safely.</p><p>Any type in (1) implements a special auto trait called <a href="https://doc.rust-lang.org/stable/std/marker/trait.Unpin.html"><code>Unpin</code></a>. Weird name, but its meaning will become clear soon. Again, most "normal" types implement Unpin, and because it's an auto trait (like Send or Sync or <code>Sized</code><a href="https://blog.adamchalmers.com/pin-unpin/#1">1</a>), you don't have to worry about implementing it yourself. 
If you're unsure if a type can be safely moved, just check it on <a href="https://docs.rs">docs.rs</a> and see if it impls <code>Unpin</code>!</p><p>Types in (2) are creatively named <code>!Unpin</code> (the <code>!</code> in a trait means "does not implement"). To use these types safely, we can't use regular pointers for self-reference. Instead, we use special pointers that "pin" their values into place, ensuring they can't be moved. This is exactly what the <a href="https://doc.rust-lang.org/stable/std/pin/struct.Pin.html"><code>Pin</code></a> type does.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/19Usw25JStox7edODSK287/8d0f6e45aa88340d6f34bcde72b31980/pin_diagram.png" />
            
            </figure><p></p><p>Pin wraps a pointer and stops its value from moving. The only exception is if the value impls <code>Unpin</code> -- then we know it's safe to move. Voila! Now we can write self-referential structs safely! This is really important, because as discussed above, many Futures are self-referential, and we need them for async/await.</p>
    <div>
      <h3>Using Pin</h3>
      <a href="#using-pin">
        
      </a>
    </div>
    <p>So now we understand why Pin exists, and why our Future poll method takes a pinned reference to self, <code>Pin&lt;&amp;mut Self&gt;</code>, instead of a regular <code>&amp;mut self</code>. So let's get back to the problem we had before: I need a pinned reference to the inner future. More generally: given a pinned struct, how do we access its fields?</p><p>The solution is to write helper functions which give you references to the fields. These references might be normal Rust references like &amp;mut, or they might <i>also</i> be pinned. You can choose whichever one you need. This is called <i>projection</i>: if you have a pinned struct, you can write a projection method that gives you access to all its fields.</p><p>Projecting is really just getting data into and out of Pins. For example, we get the <code>start: Option&lt;Instant&gt;</code> field from the <code>Pin&lt;&amp;mut Self&gt;</code>, and we need to put the <code>future: Fut</code> into a Pin so we can call its <code>poll</code> method. If you read the <a href="https://doc.rust-lang.org/stable/std/pin/struct.Pin.html"><code>Pin</code> methods</a> you'll see this is always safe if it points to an <code>Unpin</code> value, but requires unsafe otherwise.</p>
            <pre><code>// Putting data into Pin
pub        fn new          &lt;P: Deref&lt;Target: Unpin&gt;&gt;(pointer: P) -&gt; Pin&lt;P&gt;;
pub unsafe fn new_unchecked&lt;P&gt;                     (pointer: P) -&gt; Pin&lt;P&gt;;

// Getting data from Pin
pub        fn into_inner          &lt;P: Deref&lt;Target: Unpin&gt;&gt;(pin: Pin&lt;P&gt;) -&gt; P;
pub unsafe fn into_inner_unchecked&lt;P&gt;                      (pin: Pin&lt;P&gt;) -&gt; P;</code></pre>
            <p></p><p>I know <code>unsafe</code> can be a bit scary, but it's OK to write unsafe code! I think of unsafe as the compiler saying "hey, I can't tell if this code follows the rules here, so I'm going to rely on you to check for me." The Rust compiler does so much work for us, it's only fair that we do some of the work every now and then. If you want to learn how to write your own projection methods, I can highly recommend <a href="https://fasterthanli.me/articles/pin-and-suffering">this fasterthanli.me blog post</a> on the topic. But we're going to take a little shortcut.</p>
    <div>
      <h3>Using pin-project instead</h3>
      <a href="#using-pin-project-instead">
        
      </a>
    </div>
    <p>So, OK, look, it's time for a confession: I don't like using <code>unsafe</code>. I know I just explained why it's OK, but still, given the option, I would rather not.</p><p>I didn't start writing Rust because I wanted to carefully think about the consequences of my actions, damnit, I just want to go fast and not break things. Luckily, someone sympathized with me and made a crate which generates totally safe projections! It's called <a href="https://docs.rs/pin-project">pin-project</a> and it's <i>awesome</i>. All we need to do is change our definition:</p>
            <pre><code>#[pin_project::pin_project] // This generates a `project` method
pub struct TimedWrapper&lt;Fut: Future&gt; {
	// For each field, we need to choose whether `project` returns an
	// unpinned (&amp;mut T) or pinned (Pin&lt;&amp;mut T&gt;) reference to the field.
	// By default, it assumes unpinned:
	start: Option&lt;Instant&gt;,
	// Opt into pinned references with this attribute:
	#[pin]
	future: Fut,
}</code></pre>
            <p></p><p>For each field, you have to choose whether its projection should be pinned or not. By default, you should use a normal reference, just because they're easier and simpler. But if you know you need a pinned reference -- for example, because you want to call <code>.poll()</code>, whose receiver is <code>Pin&lt;&amp;mut Self&gt;</code> -- then you can do that with <code>#[pin]</code>.</p><p>Now we can finally poll the inner future!</p>
            <pre><code>fn poll(self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut Context) -&gt; Poll&lt;Self::Output&gt; {
	// This returns a type with all the same fields, with all the same types,
	// except that the fields defined with #[pin] will be pinned.
	let mut this = self.project();
	
    // Call the inner poll, measuring how long it took.
	let start = this.start.get_or_insert_with(Instant::now);
	let inner_poll = this.future.as_mut().poll(cx);
	let elapsed = start.elapsed();

	match inner_poll {
		// The inner future needs more time, so this future needs more time too
		Poll::Pending =&gt; Poll::Pending,
		// Success!
		Poll::Ready(output) =&gt; Poll::Ready((output, elapsed)),
	}
}</code></pre>
            <p></p><p>Finally, our goal is complete -- and we did it all without any unsafe code.</p>
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>If a Rust type has self-referential pointers, it can't be moved safely. After all, moving doesn't update the pointers, so they'll still be pointing at the old memory address, so they're now invalid. Rust can automatically tell which types are safe to move (and will auto impl the <code>Unpin</code> trait for them). If you have a <code>Pin</code>-ned pointer to some data, Rust can guarantee that nothing unsafe will happen (if it's safe to move, you can move it, if it's unsafe to move, then you can't). This is important because many Future types are self-referential, so we need <code>Pin</code> to safely poll a Future. You probably won't have to poll a future yourself (just use async/await instead), but if you do, use the <a href="https://docs.rs/pin-project">pin-project</a> crate to simplify things.</p><p>I hope this helped -- if you have any questions, please <a href="https://twitter.com/adam_chal">ask me on Twitter</a>. And if you want to get paid to talk to me about Rust and networking protocols, my team at Cloudflare is hiring, so be sure to visit <a href="https://www.cloudflare.com/careers/">careers.cloudflare.com</a>.</p>
    <div>
      <h3>References</h3>
      <a href="#references">
        
      </a>
    </div>
    <ul><li><p>Complete TimedWrapper example code on <a href="https://github.com/adamchalmers/nested-future-example/blob/master/src/main.rs">GitHub</a></p></li><li><p>This post is based on a <a href="https://cloudflare.tv/event/2F1zRnM58eBCSHP2VEd74x">presentation</a> I gave at a Rust Bay Area meetup a few weeks ago. My talk starts around 40 minutes in.</p></li><li><p>The <a href="https://doc.rust-lang.org/stable/std/pin/index.html">std::pin docs</a> have a pretty good explanation of Pin's details.</p></li><li><p>The <a href="https://rust-lang.github.io/async-book/04_pinning/01_chapter.html">Rust async book</a> explains why Futures often need self-referential pointers.</p></li><li><p>Comprehensive article on <a href="https://fasterthanli.me/articles/pin-and-suffering">how pin projection actually works</a> by <a href="https://twitter.com/fasterthanlime/">@fasterthanlime</a></p></li><li><p>Great article explaining when and how Rust <a href="https://hashrust.com/blog/moves-copies-and-clones-in-rust/">moves values to different memory addresses</a>, by <a href="https://twitter.com/hashrust">@HashRust</a></p></li></ul><p><i>Thanks to Nick Vollmar for feedback and to <a href="https://stackoverflow.com/users/155423/shepmaster">Shepmaster</a> for helping me use pin-project when I first needed to write a nested Future.</i></p> ]]></content:encoded>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">10AlR3uJVQ5N1QSlisjauG</guid>
            <dc:creator>Adam Chalmers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Highly available and highly scalable Cloudflare tunnels]]></title>
            <link>https://blog.cloudflare.com/highly-available-and-highly-scalable-cloudflare-tunnels/</link>
            <pubDate>Wed, 12 May 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ Starting today, we’re thrilled to announce you can run the same tunnel from multiple different cloudflareds simultaneously. This enables graceful restarts, elastic auto-scaling, easier Kubernetes integration, and more reliable tunnels. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Starting today, we’re thrilled to announce you can run the same tunnel from multiple instances of cloudflared simultaneously. This enables graceful restarts, elastic auto-scaling, easier Kubernetes integration, and more reliable tunnels.</p>
    <div>
      <h2>What is Cloudflare Tunnel?</h2>
      <a href="#what-is-cloudflare-tunnel">
        
      </a>
    </div>
    <p>I work on Cloudflare Tunnel, a product our customers use to connect their services and private networks to Cloudflare without poking holes in their firewall. Tunnel connections are managed by <code>cloudflared</code>, a tool that runs in your environment and connects your services to the Internet while ensuring that all its traffic goes through Cloudflare.</p><p>Say you have some local service (a website, an API, or a TCP server), and you want to securely expose it to the Internet using a Cloudflare Tunnel. First, download cloudflared, which is a “connector” that connects your local service to the Internet through Cloudflare. You can then connect that service to Cloudflare and generate a DNS entry with a single command:</p>
            <pre><code>cloudflared tunnel create --name mytunnel --url http://localhost:8080 --hostname example.com</code></pre>
            <p>This creates a tunnel called “mytunnel”, and configures your DNS to map <i>example.com</i> to that tunnel. Then cloudflared connects to the Cloudflare network. When the Cloudflare network receives an incoming request for example.com, it looks up the cloudflared running <i>mytunnel</i> and proxies the request there. Then cloudflared proxies those requests to localhost:8080.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6erzY2OSqCDSQsWzUnPaDy/9e9fa48823ff4cf42f7e65e3375e234c/image1.png" />
            
            </figure><p>With Tunnel, your origin server no longer needs to allow any incoming traffic. In fact, it doesn’t even need a publicly reachable IP address. This is significant because it means no one can simply bypass Cloudflare to reach your resource either.</p><p>Traditionally, Cloudflare customers onboard their sites to our platform with a simple nameserver change. By changing your nameserver, Cloudflare receives any queries to your resource first and leverages this as an opportunity to block malicious traffic and enforce policies and rules you define for your resource within the Cloudflare dashboard. However, if attackers discover your origin IP, they could bypass Cloudflare and your policies and rules.</p><p>Instead, with Tunnel, requests for your Internet property are proxied through the already-established outgoing connections from cloudflared to the Cloudflare network. This way, any traffic entering your site will have to go through Cloudflare, where you can enforce more granular control with policies for caching, page rewrites, or Zero Trust security (e.g. only users with an @example.com email can view the page).</p>
    <div>
      <h2>Scaling cloudflared</h2>
      <a href="#scaling-cloudflared">
        
      </a>
    </div>
    <p>One feature request we’ve heard quite often is that our users need their software systems to scale. Their database must be scalable. Their web servers must be scalable. And of course, in turn, cloudflared must be scalable, because without cloudflared, our users can’t receive traffic.</p><p>For reliability purposes, cloudflared opens connections to four different Cloudflare servers (two in each of two different data centers, for redundancy in case a data center goes down). This way if one goes down, the other three will serve traffic while it reconnects. But what if the cloudflared process itself goes down?</p><p>Well, ideally, we would be able to scale or replicate cloudflared itself.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3HbFUcYomom2pfzgsUceHS/9244a5ee9c45629988b9cfc33dac7041/image2.png" />
            
            </figure><p>Previously, scaling cloudflared required using Cloudflare Load Balancer to spread traffic across multiple unique tunnels. Each tunnel then had to be manually authenticated, configured, and connected. That poses a challenge for teams who need to autoscale instances, like the resources in a Kubernetes cluster, without manual intervention.</p><p>Starting today, you can now create and configure an instance of <code>cloudflared</code> once and run it as multiple different processes in a replica model.</p><p>You can still point a DNS record or Cloudflare Load Balancer to a tunnel using its unique ID, but that tunnel is now represented by one or more identical instances of <code>cloudflared</code> - each with a unique connector ID.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1XV8XW1BSnFxCuU3wiUt7P/4e5cdc3693fcb9f1562ec19ae13a9aa8/image3.png" />
            
            </figure><p>When you run your tunnel, cloudflared will log its <i>connector ID</i>.</p>
            <pre><code>2021-03-29T18:40:17Z INF Starting tunnel tunnelID=610a53bd-ed0c-4afe-92b5-ca0238153410
2021-03-29T18:40:17Z INF Version 2021.3.5
2021-03-29T18:40:17Z INF GOOS: darwin, GOVersion: go1.16.2, GoArch: amd64
2021-03-29T18:40:17Z INF Generated Connector ID: 14e2e624-0d32-4a21-a88c-64acf9484dac</code></pre>
            <p>There’s a new command, <i>cloudflared tunnel info</i>, to show you each instance of cloudflared running your tunnel.</p>
            <pre><code>$ cloudflared tunnel info mytunnel
NAME:     mytunnel
ID:       610a53bd-ed0c-4afe-92b5-ca0238153410
CREATED:  2021-03-26 19:29:34.291328 +0000 UTC

CONNECTOR ID                         CREATED              ARCHITECTURE VERSION  ORIGIN IP     EDGE         
71490dec-190f-4652-a70a-cd001fe6fdcf 2021-03-26T19:29:47Z darwin_amd64 2021.3.3 104.13.170.35 2xDFW, 2xMCI 
a0737d55-51f5-4fe0-8b53-c25989453c43 2021-03-26T19:29:58Z darwin_amd64 2021.3.3 104.13.170.35 2xDFW, 2xMCI </code></pre>
            <p>At the moment, there’s a limit of 100 simultaneous connectors per tunnel. We think that should be enough for any customer, but if your use case requires more, please reach out to support and we can raise it for your account.</p>
    <div>
      <h2>Cloudflare Tunnel’s new use cases</h2>
      <a href="#cloudflare-tunnels-new-use-cases">
        
      </a>
    </div>
    
    <div>
      <h3>Elastic auto-scaling</h3>
      <a href="#elastic-auto-scaling">
        
      </a>
    </div>
    <p>Don’t let cloudflared become a bottleneck in your system. When your incoming traffic spikes, your origin servers should scale up, and your cloudflareds should scale up too.</p><p>With this launch, your team can dynamically start more instances of <code>cloudflared</code> without changing your DNS or Load Balancer configuration. The tunnel will distribute traffic between instances of <code>cloudflared</code> without the need to manually create and enroll new instances.</p>
    <div>
      <h3>Graceful restarts</h3>
      <a href="#graceful-restarts">
        
      </a>
    </div>
    <p>Right now it can be painful to change your cloudflared configuration. You’ll change the config file (or environment variables), then restart cloudflared. The problem is, this restart causes downtime while cloudflared stops accepting requests, restarts, reconnects to the edge, and starts accepting requests again.</p><p>Today’s announcement enables zero-downtime config changes. Instead of restarting cloudflared, simply start a second instance. The new instance will read the new configuration from the file. Once it’s connected to the edge and accepting traffic, you can stop the old cloudflared instance. The old instance will stop accepting new connections, wait for existing connections to finish, then terminate. Now 100% of your traffic is going through the new configuration, with zero downtime.</p>
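    <p>As a rough sketch of that sequence in a shell script (the config path, tunnel name, and wait time are illustrative, and we assume <code>OLD_PID</code> holds the PID of the currently running instance; in practice your process manager would track this):</p>

```sh
# Start a second instance that reads the updated config file.
cloudflared tunnel --config /etc/cloudflared/config-new.yml run mytunnel &

# Give the new instance time to connect to the edge and start
# accepting traffic, then signal the old one. On SIGTERM, cloudflared
# stops accepting new connections, drains in-flight ones, and exits.
sleep 15
kill -TERM "$OLD_PID"
```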
    <div>
      <h3>Easier Kubernetes integration</h3>
      <a href="#easier-kubernetes-integration">
        
      </a>
    </div>
    <p>Cloudflared tunnels were previously incompatible with two of the most common Kubernetes scenarios:</p><ol><li><p>Scaling up a service by adding another pod with identical configuration</p></li><li><p>Gracefully upgrading a service by adding another pod with the new version/configuration, waiting for it to become healthy, then removing the old pod</p></li></ol><p>Unfortunately, neither of these worked with cloudflared, because the new cloudflared pod would fail to start. Instead, it would output an error message, saying it couldn’t run because its tunnel was already running somewhere else.</p><p>But now you can run many cloudflared pods, each running the same tunnel. We suggest the easiest way to use cloudflared with Kubernetes is to have your origin server encapsulated in a Kubernetes Service, and then use a separate Kubernetes Deployment for cloudflared. Configure cloudflared’s <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/ingress">ingress rules</a> to point at the origin Service. Now you can scale cloudflared and your origin service up or down, independently of each other.</p>
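    <p>A minimal sketch of that layout (all names, the image tag, and the Secret are illustrative assumptions, not a maintained example):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  replicas: 2                  # scale cloudflared independently of the origin
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args: ["tunnel", "--config", "/etc/cloudflared/config.yaml", "run"]
          volumeMounts:
            - name: config
              mountPath: /etc/cloudflared
              readOnly: true
      volumes:
        - name: config
          secret:
            secretName: cloudflared-config   # config.yaml + tunnel credentials
```

    <p>The config file’s ingress rules would point at the origin Service’s cluster DNS name (for instance, a hypothetical <code>http://origin-service:8080</code>), so the Deployment and the Service scale up or down independently.</p>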
    <div>
      <h3>More reliable tunnels</h3>
      <a href="#more-reliable-tunnels">
        
      </a>
    </div>
    <p>In a modern distributed system, it’s important to avoid having a bottleneck or a single point of failure. Unfortunately, cloudflared sometimes became a single point of failure. Previously, you could mitigate this by running two cloudflareds, with different tunnels but otherwise identical configuration, and load balancing across these tunnels.</p><p>However, today you can simply run the same cloudflared multiple times -- whether in the same data center or two different continents -- and avoid the anxiety that comes from relying on a single program to keep your traffic flowing in.</p><p>If you’re interested in trying it out for yourself, check out our <a href="https://developers.cloudflare.com/cloudflare-one/tutorials/many-cfd-one-tunnel">tutorial</a> to get started today!</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4qaHizN4qXwdL5GXvysvM0</guid>
            <dc:creator>Adam Chalmers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Many services, one cloudflared]]></title>
            <link>https://blog.cloudflare.com/many-services-one-cloudflared/</link>
            <pubDate>Thu, 19 Nov 2020 12:00:00 GMT</pubDate>
            <description><![CDATA[ Previously, if you wanted to proxy 100 services through Argo Tunnel, you needed 100 instances of cloudflared running on your server. Today, we’re thrilled to announce our most-requested feature: you can now expose unlimited services using one cloudflared. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/YuYC8rMeikhlUWZDeYgHb/708857f8e911c3ea6735faf1b5d9f4f1/Multiple-proxy-services-in-one-cloudflared-1.png" />
            
            </figure><p><i>Route many different local services through many different URLs, with only one cloudflared</i></p><p>I work on the <a href="https://www.cloudflare.com/products/argo-tunnel/">Argo Tunnel</a> team, and we make a program called <a href="https://github.com/cloudflare/cloudflared">cloudflared</a>, which lets you securely expose your web service to the Internet while ensuring that <i>all</i> its traffic goes through Cloudflare.</p><p>Say you have some local service (a website, an API, a TCP server, etc.), and you want to securely expose it to the Internet using Argo Tunnel. First, you run cloudflared, which establishes some long-lived TCP connections to the Cloudflare edge. Then, when Cloudflare receives a request for your chosen hostname, it proxies the request through those connections to cloudflared, which in turn proxies the request to your local service. This means anyone accessing your service has to go through Cloudflare, and Cloudflare can do caching, rewrite parts of the page, block attackers, or build <a href="https://www.cloudflare.com/teams/access/">Zero Trust rules</a> to control who can reach your application (e.g. users with a @corp.com email). Previously, companies had to use VPNs or firewalls to achieve this, but Argo Tunnel aims to be more flexible, more secure, and more scalable than the alternatives.</p><p>Some of our larger customers have deployed hundreds of services with Argo Tunnel, but they’re consistently experiencing a pain point with these larger deployments. Each instance of <i>cloudflared</i> can only proxy a single service. This means if you want to put, say, 100 services on the Internet, you’ll need 100 instances of cloudflared running on your server. This is inefficient (because you’re using 100x as many system resources) and, even worse, it’s a pain to manage 100 long-lived services!</p><p>Today, we’re thrilled to announce our most-requested feature: you can now expose unlimited services using one cloudflared. Any customer can start using this today, at no extra cost, using the Named Tunnels we released a few months ago.</p>
    <div>
      <h3>Named Tunnels</h3>
      <a href="#named-tunnels">
        
      </a>
    </div>
    <p>Earlier this year, we announced <a href="/argo-tunnels-that-live-forever/">Named Tunnels</a>—tunnels with immutable IDs that you can run and stop as you please. You can route traffic into the tunnel by adding a DNS or Cloudflare Load Balancer record, and you can route traffic from the tunnel into your local services by running <i>cloudflared</i>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/640RbNyTpOVzJ2YIenEh4S/f44194c5729857698baa96862c771627/BLOG-314-Multi-Service-Support-for-Cloudflared.png" />
            
            </figure><p>You can <a href="https://developers.cloudflare.com/argo-tunnel/create-tunnel">create a tunnel</a> by running <code>cloudflared tunnel create my_tunnel_name</code>. Once you’ve got a tunnel, you can use <a href="https://developers.cloudflare.com/argo-tunnel/routing-to-tunnel/dns">DNS records</a> or <a href="https://developers.cloudflare.com/argo-tunnel/routing-to-tunnel/lb">Cloudflare Load Balancers</a> to route traffic into the tunnel. Once traffic is routed into the tunnel, you can use our new ingress rules to map traffic to local services.</p>
    <div>
      <h3>Map traffic with ingress rules</h3>
      <a href="#map-traffic-with-ingress-rules">
        
      </a>
    </div>
    <p>An ingress rule basically says “send traffic for <i>this</i> internet URL to <i>this</i> local service.” When you invoke <i>cloudflared</i>, it’ll read these ingress rules from the configuration file. You write ingress rules under the <code>ingress</code> key of your config file, like this:</p>
            <pre><code>$ cat ~/cloudflared_config.yaml

tunnel: my_tunnel_name
credentials-file: .cloudflared/e0000000-e650-4190-0000-19c97abb503b.json
ingress:
 # Rules map traffic from a hostname to a local service:
 - hostname: example.com
   service: https://localhost:8000
 # Rules can match the request's path to a regular expression:
 - hostname: static.example.com
   path: /images/.*\.(jpg|png|gif)
   service: https://machine1.local:3000
 # Rules can match the request's hostname to a wildcard character:
 - hostname: "*.ssh.foo.com"
   service: ssh://localhost:2222
 # You can map traffic to the built-in "Hello World" test server:
 - hostname: foo.com
   service: hello_world
 # This "catch-all" rule doesn't have a hostname/path, so it matches everything
 - service: http_status:404</code></pre>
            <p>This example maps traffic to three different local services. But cloudflared can map traffic to more than just addresses: it can respond with a given HTTP status (as in the last rule) or with the built-in Hello World test server (as in the second-last rule). See <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/ingress">the docs</a> for a full list of supported services.</p><p>You can match traffic using the hostname, a path regex, or both. If you don’t use any filters, the ingress rule will match everything (so if you have DNS records from different zones routing into the tunnel, the rule will match all their URLs). Traffic is matched to rules from top to bottom, so in this example, the last rule will match anything that wasn’t matched by an earlier rule. We actually require the last rule to match everything; otherwise, cloudflared could receive a request and not know what to respond with.</p>
    <div>
      <h3>Testing your rules</h3>
      <a href="#testing-your-rules">
        
      </a>
    </div>
    <p>To make sure all your rules are valid, you can run</p>
            <pre><code>$ cat ~/cloudflared_config_invalid.yaml

ingress:
 - hostname: example.com
   service: https://localhost:8000

$ cloudflared tunnel ingress validate
Validating rules from /usr/local/etc/cloudflared/config.yml
Validation failed: The last ingress rule must match all URLs (i.e. it should not have a hostname or path filter)</code></pre>
            <p>This will check that all your ingress rules use valid regexes and map to valid services, and it’ll ensure that your last rule (and only your last rule) matches all traffic. To make sure your ingress rules do what you expect them to do, you can run</p>
            <pre><code>$ cloudflared tunnel ingress rule https://static.example.com/images/dog.gif
Using rules from ~/cloudflared_config.yaml
Matched rule #2
        Hostname: static.example.com
        path: /images/.*\.(jpg|png|gif)
            <p>This will check which rule matches the given URL, almost like a dry run for the ingress rules (no tunnels are run and no requests are actually sent). It’s helpful for making sure you’re routing the right URLs to the right services!</p>
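    <p>If you just want to poke at a path regex on its own, outside cloudflared, you can also do that with <code>grep -E</code>, which accepts the same extended-regex syntax for a simple pattern like this (a sketch; the pattern and paths are illustrative):</p>

```shell
# An image-path filter similar to the rule above
pattern='/images/.*\.(jpg|png|gif)'

echo "/images/dog.gif"  | grep -E "$pattern" || echo "no match"   # prints the path
echo "/images/dog.webp" | grep -E "$pattern" || echo "no match"   # prints "no match"
```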
    <div>
      <h3>Per-rule configuration</h3>
      <a href="#per-rule-configuration">
        
      </a>
    </div>
    <p>Whenever cloudflared gets a request from the internet, it proxies that request to the matching local service on your origin. Different services might need different configurations for this request; for example, you might want to tweak the timeout or HTTP headers for a certain origin. You can set a default configuration for all your local services, and then override it for specific ones, e.g.</p>
            <pre><code># Set configuration for all services
originRequest:
  connectTimeout: 30s

ingress:
  # This service inherits all the default (root-level) configuration
  - hostname: example.com
    service: https://localhost:8000
  # This service overrides the default configuration
  - service: https://localhost:8001
    originRequest:
      connectTimeout: 10s
      disableChunkedEncoding: true
  # The catch-all rule doesn't use any of this configuration
  - service: http_status:404</code></pre>
            <p>For a full list of configuration options, check out the <a href="https://developers.cloudflare.com/argo-tunnel/configuration/ingress">docs</a>.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We really hope this makes Argo Tunnel an even easier way to deploy services onto the Internet. If you have any questions, file an issue on our <a href="https://github.com/cloudflare/cloudflared">GitHub</a>. Happy developing!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">7xshQjXfZVHI2RUzluS5e3</guid>
            <dc:creator>Adam Chalmers</dc:creator>
        </item>
    </channel>
</rss>