
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 10:07:47 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Making magic: Reimagining Developer Experience for the World of Serverless]]></title>
            <link>https://blog.cloudflare.com/making-magic-reimagining-developer-experiences-for-the-world-of-serverless/</link>
            <pubDate>Fri, 31 Jul 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today I’m going to talk about TTFD, or time to first dopamine, and announce a huge improvement to the Workers development experience — wrangler dev. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hqL6o1Avk7HktlswszMWS/811f87770295baaf507519ad39b2f647/Serverless-Week-Finale_2x.png" />
            
            </figure><p>This week we’ve talked about how Workers provides a step function improvement in the TTFB (time to first byte) of applications, by running lightweight isolates in over 200 cities around the world, free of cold starts. Today I’m going to talk about another metric, one that’s arguably even more important: TTFD, or time to first dopamine, and announce a huge improvement to the Workers development experience — <code>wrangler dev</code>, our edge-based development environment with all the perks of a local environment.</p><p>There’s nothing quite like the rush of getting your first few lines of code to work — no matter how many times you’ve done it before, there's something so magical about the computer understanding exactly what you wanted it to do and doing it!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7s3nM2t1g4MczbszsAf2Ns/59baf4f1ea8c4b36775f06cc3752dec1/Hello-World--_2x--1-.png" />
            
            </figure><p>This is the kind of magic I expected of “serverless”, and while it’s true that most serverless offerings today get you to that feeling faster than setting up a virtual server ever would, I still can’t help but be disappointed with how lackluster developing with most serverless platforms is today.</p><p>Some of my disappointment can be attributed to the leaky nature of the abstraction: the journey to the point of writing code is drawn out by forced decision-making about servers (regions, memory allocation, etc.). Servers, however, are not the only thing holding developers back from that delightful, magical feeling in the serverless world today.</p><p>Here’s what the “serverless” experience on AWS Lambda looked like for me: between configuring the right access policy to invoke my own test application and deciding whether an HTTP or REST API was better suited for my needs, 30 minutes had easily passed, and I still didn’t have a URL I could call to invoke my application. I did, however, spin up five different services, and was already worrying about cleaning them up lest I be charged for them.</p><p>That doesn’t feel like magic!</p><p>In building what we believe to be the serverless platform of the future — a promise that feels very magical — we wanted to bring that magical feeling back to every step of the development journey. If serverless is about empowering developers, then they should be empowered every step of the way: from proof of concept to MVP and beyond.</p><p>We’re excited to share our approach to making our developer experience delightful — we recognize we still have plenty of room to grow and innovate (and we can’t wait to tell you about everything we currently have in the works!), but we’re proud of all the progress we’ve made in making Workers the easiest development platform for developers to use.</p>
    <div>
      <h2>Defining “developer experience”</h2>
      <a href="#defining-developer-experience">
        
      </a>
    </div>
    <p>To get us started, let’s look at what the journey of a developer entails. Today, we’ll be defining the developer experience as the following four stages:</p><ul><li><p>Getting started: All the steps we have to take before putting in some keystrokes</p></li><li><p>Iteration: Does my code do what I expect it to do? What do I need to do to get it there?</p></li><li><p>Release: I’ve tested what I can — time to hit the big red button!</p></li><li><p>Observe: Is anything broken? And how do I fix it?</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4QjV6ToidrLkex8MumePvs/410a3bb672697fc68dea3b1121c58b81/developer-Experience-0_2x.png" />
            
            </figure><p>When approaching each stage of development, we wanted to reimagine the experience the way we’ve always wished our development flow worked, and to fix the places where existing platforms have let us down.</p>
    <div>
      <h2>Zero to Hello World</h2>
      <a href="#zero-to-hello-world">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/48eF9ftp53E8JpSrf97HLh/764b9547374a5f0ce78fdee9911d99e8/developer-Experience-2_2x.png" />
            
            </figure><p>With Workers, we want to get you to that aforementioned delightful feeling as quickly as possible, and remove every obstacle in the way of writing and deploying your code. The first deployment experience is really important — if you’ve done it once and haven’t given up along the way, you can do it again.</p><p>We’re very proud to say our TTFD — even for a new user without a Cloudflare account — is as low as three minutes. If you’re an existing customer, you can have your first Worker running in <i>seconds.</i> No regions to choose, no <a href="https://www.cloudflare.com/learning/access-management/what-is-identity-and-access-management/">IAM</a> rules to configure, and no <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api-gateway/">API Gateways</a> to set up or worry about paying for.</p><p>If you’re new to Workers and still trying to get a feel for it, you can deploy your Worker to 200 cities around the world within seconds, with the simple click of a button.</p><p>If you’ve already decided on Workers as the choice for building your next application, we want to make you feel at home by letting you use your favorite IDE, be it vim or emacs or VSCode (we don’t care!).</p><p>With <code>wrangler</code>, the official command-line tool for Workers, getting started is just as easy as:</p>
            <pre><code>wrangler generate hello
cd hello
wrangler publish</code></pre>
            <p>Again, in seconds your code is up and running, and easily accessible all over the world.</p><p>“Hello, World!”, of course, doesn’t have to be quite so literal. We provide a <a href="https://developers.cloudflare.com/workers/tutorials">range of tutorials</a> to help you get started and get familiar with developing with Workers.</p><p>To save you that last bit of time in getting started, our <a href="https://developers.cloudflare.com/workers/templates/">template gallery</a> provides starter templates so you can dive straight into building the products you’re excited about — whether it’s a new <a href="https://developers.cloudflare.com/workers/templates/pages/graphql_server">GraphQL server</a> or a brand new <a href="https://developers.cloudflare.com/workers/templates/pages/sites/">static site</a>, we’ve got you covered.</p>
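            <p>For the curious, the Worker behind that kind of “Hello, World!” template is only a few lines of JavaScript. Here’s a rough sketch (the exact template contents may differ; the <code>typeof</code> guard is only there so the snippet also loads outside the Workers runtime):</p>
            <pre><code>// A sketch of the classic generated Worker (exact template contents may vary).
// The Worker intercepts each request and responds with "Hello, World!".
async function handleRequest(request) {
  return new Response('Hello, World!');
}

// The Workers runtime provides addEventListener; the guard simply lets this
// file also load outside that runtime.
if (typeof addEventListener === 'function') {
  addEventListener('fetch', event => {
    event.respondWith(handleRequest(event.request));
  });
}</code></pre>
            <p>Run <code>wrangler publish</code> in the generated project and this handler answers every request with “Hello, World!”.</p>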
    <div>
      <h2>Local(ish) development: code, test, repeat</h2>
      <a href="#local-ish-development-code-test-repeat">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36wFlFKfRAQAGczwKjw6hV/18d96635f2b711c0ec78ea4100f69e40/developer-Experience-3_2x.png" />
            
            </figure><p>We can’t promise to get the code right on your behalf, but we can promise to do everything we can to get you the feedback you need to help you get your code right.</p><p>The development journey requires lots of experimentation, trial and error, and debugging. If my Computer Science degree came with instructions on the back of the bottle, they would read: “code, print, repeat.”</p><p>Getting code right is an extremely iterative, feedback-driven process. We would all love to get code right the first time around and move on, but the reality is, computers are bad mind-readers: you’ve ended up with an extraneous parenthesis or a stray comma in your JSON, so your code is not going to run. Found where the loose parenthesis was introduced? Great! Now your code is running, but the output is not right — time to go find that off-by-one error.</p><p>Local development has traditionally been the way for developers to get a tight feedback loop during the development process. The crucial components that make an effective local development environment a great testing ground are a fast feedback loop, sandboxing (the ability to develop without affecting production), and accuracy.</p><p>As we started thinking about accomplishing all three of those goals, we realized that being local wasn’t itself the requirement — <i>speed</i> was, and running on the client had simply been the only way to achieve the speed a good feedback loop demands.</p><p>One option was to provide a traditional local development environment, but one thing didn’t sit well with us: a local environment could cover the Workers runtime, yet there is more to handling a request than just the runtime, and that gap could compromise accuracy. We didn’t want to set our users up to fail with code that works on their machine but not ours.</p><p>Shipping the rest of our edge infrastructure to the user would pose its own challenges of keeping it up to date, and it would require the user to install hundreds of unnecessary dependencies, all potentially to end up with the most frustrating experience of all: running into some installation bug with no explanation to be found on StackOverflow. That wasn’t acceptable either.</p><p>As it turns out, this is a very similar problem to one we commonly solve for our customers: running code on the client is fast, but it doesn’t give me the control I need; running code on the server gives me the control I need, but it requires a slow round-trip to the origin. All we had to do was take our own advice and run it on the edge! It’s the best of both worlds: your code runs so close to your end user that you get the same performance as running it on the client, without having to lose control.</p><p>To provide developers access to this tight feedback loop, we introduced <code>wrangler dev</code> earlier this year!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/63ZEFNhi4T3P4r3hanzTkg/c66f1f0d8397790067db52f310740c85/wrangler-dev.gif" />
            
            </figure><p><code>wrangler dev</code> has the look and feel of a local development environment: it runs on localhost but tunnels to the edge, and provides output directly to the IDE of your choice. Since <code>wrangler dev</code> now runs on the edge, it works on your machine and ours exactly the same!</p><p>Our <a href="https://github.com/cloudflare/wrangler/releases/tag/v1.11.0-rc.0">release candidate</a> for <code>wrangler dev</code> is live and waiting for you to take it for a test drive, as easily as:</p>
            <pre><code>npm i @cloudflare/wrangler@beta -g</code></pre>
            <p>Let us know what you think.</p>
    <div>
      <h2>Release</h2>
      <a href="#release">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/VNXRWramkTL1TrOgVYFWP/3f3c5b7f9acb4b631bbb1ba656123a39/developer-Experience-4_2x.png" />
            
            </figure><p>After writing all the code, testing every edge case imaginable, and going through code review, at some point the code needs to be released for the rest of the world to reap the fruits of your hard labor and enjoy the features you’ve built.</p><p>For smaller, quick applications, it’s exciting to hit the “Save &amp; deploy” button and let fate take the wheel.</p><p>For production-level projects, however, the process of deploying to production may be a bit different. Different organizations adopt different processes for code release. For those using GitHub, last year we introduced our <a href="https://github.com/marketplace/actions/deploy-to-cloudflare-workers-with-wrangler">GitHub Action</a>, to make it easy to configure an integrated release process.</p><p>With Wrangler, you can configure Workers to deploy from your existing CI, automating deployments and minimizing human intervention.</p><p>When deploying to production, again, feedback becomes extremely important. Some platforms today still take as long as a few minutes to deploy your code. A few minutes may seem trivial, but a few minutes of nervously refreshing, wondering whether your code is live yet and which version of it your users are seeing, is stressful. This is especially true in a rollback or bug-fix situation, where you want the new version live ASAP.</p><p>New Workers are deployed globally in less than five seconds, which means new changes are live almost instantly. Better yet, since Workers runs on lightweight isolates, newly deployed Workers don’t experience dreaded <a href="/eliminating-cold-starts-with-cloudflare-workers/">cold starts</a>, which means you can release code as frequently as you’re able to ship it, without having to invest additional time in auxiliary gadgets to pre-warm your Worker — more time for you to start working on your next feature!</p>
    <div>
      <h2>Observe &amp; Resolve</h2>
      <a href="#observe-resolve">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ow9aadSDFxT6fwaqORsAM/7c9b365d8ae5beabf42ed355c278c34a/developer-Experience-5_2x.png" />
            
            </figure><p>The big red button has been pushed. Dopamine has been replaced with adrenaline, and the instant questions on your mind are: “Did I break anything? And if so, what, and how do I fix it?” These questions are at the core of what the industry calls “<a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a>”.</p><p>There are different ways things can break and incidents can manifest themselves: increases in errors, drops in traffic; even a drop in performance could be considered a regression.</p><p>To identify these kinds of issues, you need to be able to spot a trend. Raw data, however, is not a very useful medium for spotting trends — humans simply cannot parse raw lines of logs to identify a subtle increase in errors.</p><p>This is why we took a two-pronged approach to helping developers identify and fix issues: exposing trend data through analytics, while also providing the ability to tail production logs for forensics and investigation.</p><p>Earlier this year, we introduced <a href="https://developers.cloudflare.com/workers/about/metrics/">Workers Metrics</a>: an easy way for developers to identify trends in their production traffic.</p><p>With requests metrics, you can easily spot any increases in errors, or drastic changes in traffic patterns after a given release:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LPRX4qmXAzXvSQ3hElhHl/043fed1fe08685aa06ebaa39bdf99e6e/Workers___Account___Cloudflare_-_Web_Performance___Security.png" />
            
            </figure><p>Additionally, new code can sometimes introduce unforeseen regressions in the overall performance of the application. With CPU time metrics, developers are now able to spot changes in the performance of their Worker, as well as use that information to guide and optimize their code.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1AV2DfVc7351f7eVZnYHuB/3f3f919b19d6dbf6ab438d5f86688d04/Workers___Account___Cloudflare_-_Web_Performance___Security--1-.png" />
            
            </figure><p>Once you’ve identified a regression, we wanted to provide the tools needed to find your bug and fix it, which is why we also recently launched <code>wrangler tail</code>: production logs in a single command.</p><p><code>wrangler tail</code> can help diagnose where code is failing or why certain customers are getting unexpected outcomes, because it exposes <code>console.log()</code> output and exceptions. By having access to this output, developers can immediately diagnose, fix, and resolve any issues occurring in production.</p>
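            <p>As a hypothetical example of the kind of code that makes <code>wrangler tail</code> useful (the names and structure here are ours, not a prescribed pattern), a Worker can log request context and catch unexpected exceptions so that both appear in the tailed output:</p>
            <pre><code>// Hypothetical example (our names, not a prescribed pattern): log request
// context and catch unexpected exceptions so both appear in `wrangler tail`.
async function handle(request) {
  try {
    console.log('handling ' + request.method + ' ' + request.url);
    // ...application logic goes here...
    return new Response('ok');
  } catch (err) {
    // This log line, like uncaught exceptions, shows up in the tailed output.
    console.log('unexpected error: ' + err.message);
    return new Response('internal error', { status: 500 });
  }
}

// The Workers runtime provides addEventListener; the guard simply lets this
// file also load outside that runtime.
if (typeof addEventListener === 'function') {
  addEventListener('fetch', event => event.respondWith(handle(event.request)));
}</code></pre>
            <p>With this in place, <code>wrangler tail</code> shows the log line for every request, and the error line whenever the application logic throws.</p>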
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nsi5idfA7FmrGn8ayOk8s/751943d98c54b14571e20e9b83afe0f4/output2--1-.gif" />
            
            </figure><p>We know how precious every moment can be when a bad code deploy impacts customer traffic. Luckily, once you’ve found and fixed your bug, it’s only a matter of seconds for users to start benefiting from the fix — unlike other platforms, which can make you wait as long as five minutes, Workers get deployed globally within five seconds.</p>
    <div>
      <h2>Repeat</h2>
      <a href="#repeat">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5GG9OYdYbFgVK607XchFDd/c8e52493e7ea78be7375a5da0f1a4bd1/workers-repeat_2x.png" />
            
            </figure><p>As you’re thinking about your next feature, you check out a new branch, and the cycle begins all over again. We’re excited for you to check out all the improvements we’ve made to the development experience with Workers, all to reduce your time to first dopamine (TTFD).</p><p>We are always working on improving it further, looking for every additional bit of friction we can remove, and we’d love to hear <a href="https://docs.google.com/forms/d/e/1FAIpQLSccrSqTlFMw8l46ihVjmMsl0UjkZ_EsMkls_MmHOoKODJm5cw/viewform?usp=sf_link">your feedback</a> as we do so.</p> ]]></content:encoded>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7a2XZfdXiDJOm1CXKJD0Rx</guid>
            <dc:creator>Rita Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Serverlist: Workers Unbound, 0ms Cold Start Time, and more!]]></title>
            <link>https://blog.cloudflare.com/serverlist-18th-edition/</link>
            <pubDate>Fri, 31 Jul 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Check out our eighteenth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. ]]></description>
            <content:encoded><![CDATA[ <p>Check out our eighteenth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.</p><p>Sign up below to have The Serverlist sent directly to your mailbox.</p>

 ]]></content:encoded>
            <category><![CDATA[The Serverlist Newsletter]]></category>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">IX7E35Xpr0C505rXujmcJ</guid>
            <dc:creator>Connor Peshek</dc:creator>
        </item>
        <item>
            <title><![CDATA[Eliminating cold starts with Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/eliminating-cold-starts-with-cloudflare-workers/</link>
            <pubDate>Thu, 30 Jul 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ A “cold start” is the time it takes to load and execute a new copy of a serverless function for the first time. It’s a problem that’s both complicated to solve and costly to fix. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1oYOYNxDVXur9l8h5l6ZHe/39237ee0a2d46e8e7b7562d087bb8c40/Serverless-Week-Cold-Starts_2x-3.png" />
            
            </figure><p>A “cold start” is the time it takes to load and execute a new copy of a serverless function for the first time. It’s a problem that’s both complicated to solve and costly to fix. Other serverless platforms make you choose between suffering random increases in execution time and paying your way out with synthetic requests to keep your function warm. Cold starts are a horrible experience, especially when serverless containers can take full <i>seconds</i> to warm up.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7I5ZmdHAJG6sALtXOcCJvD/095f74d232b47b895ff0fd957a67086b/cold-start-clock_2x.png" />
            
            </figure><p>Unlike containers, Cloudflare Workers use isolate technology, which keeps cold starts in the single-digit milliseconds. Well, at least it <i>did</i>. Today, we’re removing the need to worry about cold starts entirely, by introducing support for Workers that have no cold starts at all – that’s right, zero. Forget about cold starts, warm starts, or... any starts: with Cloudflare Workers you get always-hot, raw performance in more than 200 cities worldwide.</p>
    <div>
      <h3>Why is there a cold start problem?</h3>
      <a href="#why-is-there-a-cold-start-problem">
        
      </a>
    </div>
    <p>It’s impractical to keep everyone’s functions warm in memory <i>all</i> the time. Instead, serverless providers only warm up a function after the first request is received. Then, after a period of inactivity, the function becomes cold again and the cycle continues.</p><p>For Workers, this has never been much of a problem. In contrast to containers, which can spend full seconds spinning up a new containerized process for each function, the isolate technology behind Workers allows the runtime to warm up a function in under 5 milliseconds.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CyIMgGPifYO2zfgd8Y2XV/a7e33fef97b56afc626870d09704ceed/isolates-model_2x-1.png" />
            
            </figure><p><i>Learn more about how isolates enable Cloudflare Workers to be performant and secure</i> <a href="/cloud-computing-without-containers/"><i>here.</i></a></p><p>Cold starts are ugly. They’re unexpected, unavoidable, and cause unpredictable code execution times. You shouldn’t have to compromise your customers’ experience to enjoy <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/">the benefits of serverless</a>. In a collaborative effort between our Workers and Protocols teams, we set out to create a solution where you never have to worry about cold starts, warm starts, or pre-warming ever again.</p>
    <div>
      <h3>How is a zero cold start even possible?</h3>
      <a href="#how-is-a-zero-cold-start-even-possible">
        
      </a>
    </div>
    <p>As with many features at Cloudflare, we use security and encryption to make our network more intelligent. Since 95% of Worker requests are securely handled over HTTPS, we engineered a solution that uses the Internet’s encryption protocols to our advantage.</p><p>Before a client can send an HTTPS request, it needs to establish a secure channel with the server. This process is known as “handshaking” in the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">TLS</a>, or Transport Layer Security, protocol. Most clients also send a hostname (e.g. cloudflare.com) in that handshake, which is referred to as the <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/">SNI</a>, or Server Name Indication. The server receives the handshake, sends back a certificate, and now the client is allowed to send its original request, encrypted.</p><p>Previously, Workers would only load and compile after the <i>entire</i> handshake process was complete, which involves two round-trips between the client and server. But wait, we thought: if the hostname is present in the handshake, why wait until the entire process is done to start loading the Worker? Since the handshake takes some time, there is an opportunity to warm up resources during the wait before the request arrives.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20vUSqi8x0NQ2FPLeQ807y/ce0d66bee2ffdae836174d2649a0f65f/Workers-handshake-after_2x.png" />
            
            </figure><p>With our newest optimization, when Cloudflare receives the first packet during TLS negotiation, the “ClientHello,” we hint to the Workers runtime that it should eagerly load that hostname’s Worker. After the handshake is done, the Worker is warm and ready to receive requests. Since it only takes 5 milliseconds to load a Worker, and the average latency between a client and Cloudflare is more than that, the cold start is zero. The Worker starts executing code the moment the request is received from the client.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7G67cFXFAeoyLca4YxgeqC/c33e1c41c983c8480af36e6418676ae7/Workers-handshake-before-_2x.png" />
            
            </figure>
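            <p>The preloading idea can be sketched in a few lines of JavaScript. This is a conceptual illustration with simulated timings and hypothetical function names (not the actual runtime code, which lives much lower in the stack):</p>
            <pre><code>// Conceptual sketch with simulated timings and hypothetical names; the real
// optimization happens inside the runtime, not in user code.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

// Simulated isolate load: about 5 ms.
async function loadWorker(hostname) {
  await delay(5);
  return { hostname, warm: true };
}

// Simulated remainder of the TLS handshake: a client round-trip, usually
// longer than the 5 ms it takes to load a Worker.
async function completeHandshake() {
  await delay(30);
}

async function handleConnection(clientHello) {
  // Kick off the load as soon as the SNI hostname is known (no await yet)...
  const workerReady = loadWorker(clientHello.serverName);
  // ...while the handshake completes in parallel.
  await completeHandshake();
  // By now the load has usually finished: effectively zero cold start.
  return workerReady;
}</code></pre>
            <p>Because the load is kicked off without being awaited, the 5 milliseconds of work overlaps the handshake round-trip, and by the time the client’s request arrives the Worker is already warm.</p>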
    <div>
      <h3>When are zero cold starts available?</h3>
      <a href="#when-are-zero-cold-starts-available">
        
      </a>
    </div>
    <p>Now, and for everyone! We’ve rolled out this optimization to all Workers customers and it is in production today. There’s no extra fee and no configuration change required. When you build on Cloudflare Workers, you build on an intelligent, distributed network that is constantly pushing the bounds of what's possible in terms of performance.</p><p>For now, this is only available for Workers that are deployed to a “root” hostname like “example.com” and not specific paths like “example.com/path/to/something.” We plan to introduce more optimizations in the future that can preload specific paths.</p>
    <div>
      <h3>What about performance beyond cold starts?</h3>
      <a href="#what-about-performance-beyond-cold-starts">
        
      </a>
    </div>
    <p>We also recognize that performance is more than just zero cold starts. That’s why we announced the beta of <a href="https://www.cloudflare.com/workers-unbound-beta/">Workers Unbound</a> earlier this week. Workers Unbound has the simplicity and performance of Workers with no limits: just raw performance.</p><p>Workers, equipped with zero cold starts, no CPU limits, and a network that spans over 200 cities, is primed and ready to take on any serious workload. Now that’s performance.</p>
    <div>
      <h3>Interested in deploying with Workers?</h3>
      <a href="#interested-in-deploying-with-workers">
        
      </a>
    </div>
    <ul><li><p>Learn more about <a href="https://workers.dev">Cloudflare Workers</a></p></li><li><p>Join the Workers Unbound <a href="https://www.cloudflare.com/workers-unbound-beta/">Beta</a></p></li><li><p>Try our new language support for <a href="https://github.com/cloudflare/python-worker-hello-world">Python</a> and <a href="https://github.com/cloudflare/kotlin-worker-hello-world">Kotlin</a></p></li></ul> ]]></content:encoded>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">7FBdTQH1q8YprEn2w4ea41</guid>
            <dc:creator>Ashcon Partovi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Mitigating Spectre and Other Security Threats: The Cloudflare Workers Security Model]]></title>
            <link>https://blog.cloudflare.com/mitigating-spectre-and-other-security-threats-the-cloudflare-workers-security-model/</link>
            <pubDate>Wed, 29 Jul 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is a security company, and the heart of Workers is, in my view, a security project. Running code written by third parties is always a scary proposition, and the primary concern of the Workers team is to make that safe. ]]></description>
            <content:encoded><![CDATA[ <p>Hello, I'm an engineer on the Workers team, and today I want to talk to you about security.</p><p>Cloudflare is a security company, and the heart of Workers is, in my view, a security project. Running code written by third parties is always a scary proposition, and the primary concern of the Workers team is to make that safe.</p><p>For a project like this, it is not enough to pass a security review, say "ok, we're secure", and move on. It's not even enough to consider security at every stage of design and implementation. For Workers, security in and of itself is an ongoing project, and that work is never done. There are always things we can do to reduce the risk and impact of future vulnerabilities.</p><p>Today, I want to give you an overview of our security architecture, and then address two specific issues that we are frequently asked about: V8 bugs, and Spectre.</p>
    <div>
      <h2>Architectural Overview</h2>
      <a href="#architectural-overview">
        
      </a>
    </div>
    <p>Let's start with a quick overview of the Workers Runtime architecture.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/j8LEoe7npsKBqp8pPJnQe/79e962ae5dfc8aa5ab8a41d45f6982c7/Workers-architecture.svg" />
            
            </figure><p>There are two fundamental parts of designing a code sandbox: secure isolation and API design.</p>
    <div>
      <h3>Isolation</h3>
      <a href="#isolation">
        
      </a>
    </div>
    <p>First, we need to create an execution environment where code can't access anything it's not supposed to.</p><p>For this, our primary tool is V8, the JavaScript engine developed by Google for use in Chrome. V8 executes code inside "isolates", which prevent that code from accessing memory outside the isolate -- even within the same process. Importantly, this means we can run many isolates within a single process. This is essential for an edge compute platform like Workers where we must host many thousands of guest apps on every machine, and rapidly switch between these guests thousands of times per second with minimal overhead. If we had to run a separate process for every guest, the number of tenants we could support would be drastically reduced, and we'd have to limit edge compute to a small number of big enterprise customers who could pay a lot of money. With isolate technology, we can make edge compute available to everyone.</p><p>Sometimes, though, we do decide to schedule a worker in its own private process. We do this if it uses certain features that we feel need an extra layer of isolation. For example, when a developer uses the devtools debugger to inspect their worker, we run that worker in a separate process. This is because historically, in the browser, the inspector protocol has only been usable by the browser's trusted operator, and therefore has not received as much security scrutiny as the rest of V8. In order to hedge against the increased risk of bugs in the inspector protocol, we move inspected workers into a separate process with a process-level sandbox. We also use process isolation as an extra defense against Spectre, which I'll describe later in this post.</p><p>Additionally, even for isolates that run in a shared process with other isolates, we run multiple instances of the whole runtime on each machine, which we call "cordons". 
Workers are distributed among cordons by assigning each worker a level of trust and separating low-trusted workers from those we trust more highly. As one example of this in operation: a customer who signs up for our <a href="https://www.cloudflare.com/plans/free/">free plan</a> will not be scheduled in the same process as an enterprise customer. This provides some defense-in-depth in the case a zero-day security vulnerability is found in V8. But I'll talk more about V8 bugs, and how we address them, later in this post.</p><p>At the whole-process level, we apply another layer of sandboxing for defense in depth. The "layer 2" sandbox uses Linux namespaces and seccomp to prohibit all access to the filesystem and network. Namespaces and seccomp are commonly used to implement containers. However, our use of these technologies is much stricter than what is usually possible in container engines, because we configure namespaces and seccomp after the process has started (but before any isolates have been loaded). This means, for example, we can (and do) use a totally empty filesystem (mount namespace) and use seccomp to block absolutely all filesystem-related system calls. Container engines can't normally prohibit all filesystem access because doing so would make it impossible to use <code>exec()</code> to start the guest program from disk; in our case, our guest programs are not native binaries, and the Workers runtime itself has already finished loading before we block filesystem access.</p><p>The layer 2 sandbox also totally prohibits network access. Instead, the process is limited to communicating only over local Unix domain sockets, to talk to other processes on the same system. Any communication to the outside world must be mediated by some other local process outside the sandbox.</p><p>One such process in particular, which we call the "supervisor", is responsible for fetching worker code and configuration from disk or from other internal services. 
The supervisor ensures that the sandbox process cannot read any configuration except that which is relevant to the workers that it should be running.</p><p>For example, when the sandbox process receives a request for a worker it hasn't seen before, that request includes the encryption key for that worker's code (including attached secrets). The sandbox can then pass that key to the supervisor in order to request the code. The sandbox cannot request any worker for which it has not received the appropriate key. It cannot enumerate known workers. It also cannot request configuration it doesn't need; for example, it cannot request the TLS key used for HTTPS traffic to the worker.</p><p>Aside from reading configuration, the other reason for the sandbox to talk to other processes on the system is to implement APIs exposed to Workers. Which brings us to API design.</p>
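To make this concrete, here is a toy sketch of that key-gated code fetch. Everything here -- class names, method names, plain-string keys -- is illustrative only; the real system speaks Cap'n Proto RPC between processes and stores encrypted code blobs:

```javascript
// Toy model of the supervisor/sandbox split described above.
// All names are illustrative; this is a sketch of the access rules,
// not the actual internal API.

class Supervisor {
  constructor() {
    this.workers = new Map(); // workerId -> { key, code }
  }
  register(id, key, code) {
    this.workers.set(id, { key, code });
  }
  // The sandbox must present the per-worker key it received with the
  // incoming request; without it, the code cannot be fetched.
  fetchWorkerCode(id, key) {
    const entry = this.workers.get(id);
    if (!entry || entry.key !== key) {
      throw new Error("unknown worker or bad key");
    }
    return entry.code;
  }
  // Deliberately absent: no listWorkers(), no fetchTlsKey() -- the
  // sandbox cannot enumerate workers or read unrelated configuration.
}

const supervisor = new Supervisor();
supervisor.register("worker-a", "key-a", "addEventListener(...)");

// A sandbox holding the right key gets the code...
console.log(supervisor.fetchWorkerCode("worker-a", "key-a"));
// ...but cannot request a worker it has no key for.
try {
  supervisor.fetchWorkerCode("worker-a", "wrong-key");
} catch (e) {
  console.log("denied:", e.message);
}
```

The point of the design is that the sandbox's possible requests are bounded by the keys it has been handed, not by a deny-list the supervisor has to maintain.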
    <div>
      <h3>API Design</h3>
      <a href="#api-design">
        
      </a>
    </div>
    <p>There is a saying: "If a tree falls in the forest, but no one is there to hear it, does it make a sound?" I have a related saying: "If a Worker executes in a fully-isolated environment in which it is totally prevented from communicating with the outside world, does it actually run?"</p><p>Complete code isolation is, in fact, useless. In order for Workers to do anything useful, they have to be allowed to communicate with users. At the very least, a Worker needs to be able to receive requests and respond to them. It would also be nice if it could send requests to the world, safely. For that, we need <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a>.</p><p>In the context of sandboxing, API design takes on a new level of responsibility. Our APIs define exactly what a Worker can and cannot do. We must be very careful to design each API so that it can only express operations which we want to allow, and no more. For example, we want to allow Workers to make and receive HTTP requests, while we do not want them to be able to access the local filesystem or internal network services.</p><p>Let's dig into the easier example first. Currently, Workers does not allow any access to the local filesystem. Therefore, we do not expose a filesystem API at all. No API means no access.</p><p>But, imagine if we did want to support local filesystem access in the future. How would we do that? We obviously wouldn't want Workers to see the whole filesystem. Imagine, though, that we wanted each Worker to have its own private directory on the filesystem where it can store whatever it wants.</p><p>To do this, we would use a design based on <a href="https://en.wikipedia.org/wiki/Capability-based_security">capability-based security</a>. Capabilities are a big topic, but in this case, what it would mean is that we would give the worker an object of type <code>Directory</code>, representing a directory on the filesystem. 
This object would have an API that allows creating and opening files and subdirectories, but does not permit traversing "up" the tree to the parent directory. Effectively, each worker would see its private <code>Directory</code> as if it were the root of their own filesystem.</p><p>How would such an API be implemented? As described above, the sandbox process cannot access the real filesystem, and we'd prefer to keep it that way. Instead, file access would be mediated by the supervisor process. The sandbox talks to the supervisor using <a href="https://capnproto.org/rpc.html">Cap'n Proto RPC</a>, a capability-based RPC protocol. (Cap'n Proto is an open source project currently maintained by the Cloudflare Workers team.) This protocol makes it very easy to implement capability-based APIs, so that we can strictly limit the sandbox to accessing only the files that belong to the Workers it is running.</p><p>Now what about network access? Today, Workers are allowed to talk to the rest of the world only via HTTP -- both incoming and outgoing. There is no API for other forms of network access, therefore it is prohibited (though we plan to support other protocols in the future).</p><p>As mentioned before, the sandbox process cannot connect directly to the network. Instead, all outbound HTTP requests are sent over a Unix domain socket to a local proxy service. That service implements restrictions on the request. For example, it verifies that the request is either addressed to a public Internet service, or to the Worker's zone's own origin server, not to internal services that might be visible on the local machine or network. It also adds a header to every request identifying the worker from which it originates, so that abusive requests can be traced and blocked. Once everything is in order, the request is sent on to our HTTP caching layer, and then out to the Internet.</p><p>Similarly, inbound HTTP requests do not go directly to the Workers Runtime. 
They are first received by an inbound proxy service. That service is responsible for TLS termination (the Workers Runtime never sees TLS keys), as well as identifying the correct Worker script to run for a particular request URL. Once everything is in order, the request is passed over a Unix domain socket to the sandbox process.</p>
    <div>
      <h2>V8 bugs and the "patch gap"</h2>
      <a href="#v8-bugs-and-the-patch-gap">
        
      </a>
    </div>
    <p>Every non-trivial piece of software has bugs, and sandboxing technologies are no exception. Virtual machines have bugs, containers have bugs, and yes, isolates (which we use) also have bugs. We can't live life pretending that no further bugs will ever be discovered; instead, we must assume they will and plan accordingly.</p><p>We rely heavily on isolation provided by V8, the JavaScript engine built by Google for use in Chrome. This has good sides and bad sides. On one hand, V8 is an extraordinarily complicated piece of technology, creating a wider "attack surface" than virtual machines. More complexity means more opportunities for something to go wrong. On the bright side, though, an extraordinary amount of effort goes into finding and fixing V8 bugs, owing to its position as arguably the most popular sandboxing technology in the world. Google regularly pays out 5-figure bounties to anyone finding a V8 sandbox escape. Google also operates fuzzing infrastructure that automatically finds bugs faster than most humans can. Google's investment does a lot to minimize the danger of V8 "zero-days" -- bugs that are found by the bad guys and not known to Google.</p><p>But, what happens after a bug is found and reported by the good guys? V8 is open source, so fixes for security bugs are developed in the open and released to everyone at the same time -- good guys and bad guys. It's important that any patch be rolled out to production as fast as possible, before the bad guys can develop an exploit.</p><p>The time between publishing the fix and deploying it is known as the "patch gap". Earlier this year, <a href="https://www.zdnet.com/article/google-cuts-chrome-patch-gap-in-half-from-33-to-15-days/">Google announced that Chrome's patch gap had been reduced from 33 days to 15 days</a>.</p><p>Fortunately, we have an advantage here, in that we directly control the machines on which our system runs. 
We have automated almost our entire build and release process, so the moment a V8 patch is published, our systems automatically build a new release of the Workers Runtime and, after one-click sign-off from the necessary (human) reviewers, automatically push that release out to production.</p><p>As a result, our patch gap is now under 24 hours. A patch published by V8's team in Munich during their work day will usually be in production before the end of our work day in the US.</p>
    <div>
      <h2>Spectre: Introduction</h2>
      <a href="#spectre-introduction">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4WEQXH8Xp38VMUTyfFF5LW/7d527aa17b4aa1f5a638dae83abcde43/Spectre-vulnerability-_2x.png" />
            
            </figure><p>We get a lot of questions about Spectre. The V8 team at Google has stated in no uncertain terms that <a href="https://arxiv.org/abs/1902.05178">V8 itself cannot defend against Spectre</a>. Since Workers relies on V8 for sandboxing, many have asked if that leaves Workers vulnerable. However, we do not need to depend on V8 for this; the Workers environment presents many alternative approaches to mitigating Spectre.</p><p>Spectre is complicated and nuanced, and there's no way I can cover everything there is to know about it or how Workers addresses it in a single blog post. But, hopefully I can clear up some of the confusion and concern.</p>
    <div>
      <h3>What is it?</h3>
      <a href="#what-is-it">
        
      </a>
    </div>
    <p>Spectre is a class of attacks in which a malicious program can trick the CPU into "speculatively" performing computation using data that the program is not supposed to have access to. The CPU eventually realizes the problem and does not allow the program to see the results of the speculative computation. However, the program may be able to derive bits of the secret data by looking at subtle side effects of the computation, such as the effects on cache.</p><p><a href="https://www.cloudflare.com/learning/security/threats/meltdown-spectre/">For more background about Spectre, check out our Learning Center page on the topic.</a></p>
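For readers who like to see shapes in code: the canonical "variant 1" (bounds check bypass) gadget looks something like the snippet below. To be clear, this is purely an illustration of the structure -- run as ordinary JavaScript it leaks nothing, and on Workers (with timers frozen, as described later) there is no way to read the cache footprint back out:

```javascript
// Illustrative shape of a bounds-check-bypass gadget. In a real attack,
// the CPU *speculatively* executes the body with an out-of-bounds index
// before the bounds check resolves; the secret-dependent access to
// probeArray then leaves a footprint in the cache, which the attacker
// tries to detect by timing later accesses.

const data = new Uint8Array([1, 2, 3, 4]);
const probeArray = new Uint8Array(256 * 4096); // one cache line per byte value

function gadget(index) {
  if (index < data.length) {              // branch the CPU is trained to mispredict
    const secretByte = data[index];       // speculative out-of-bounds read
    return probeArray[secretByte * 4096]; // access pattern encodes the byte
  }
  return 0;
}

// Architecturally, an out-of-bounds index simply takes the else branch;
// the speculative side effects are invisible to the program itself.
console.log(gadget(1000)); // 0
```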
    <div>
      <h3>Why does it matter for Workers?</h3>
      <a href="#why-does-it-matter-for-workers">
        
      </a>
    </div>
    <p>Spectre encompasses a wide variety of vulnerabilities present in modern CPUs. The specific vulnerabilities vary by architecture and model, and it is likely that many vulnerabilities exist which haven't yet been discovered.</p><p>These vulnerabilities are a problem for every cloud compute platform. Any time you have more than one tenant running code on the same machine, Spectre attacks come into play. However, the "closer together" the tenants are, the more difficult it can be to mitigate specific vulnerabilities. Many of the known issues can be mitigated at the kernel level (protecting processes from each other) or at the hypervisor level (protecting VMs), often with the help of CPU microcode updates and various tricks (many of which, unfortunately, come with serious performance impact).</p><p>In Cloudflare Workers, we isolate tenants from each other using V8 isolates -- not processes nor VMs. This means that we cannot necessarily rely on OS or hypervisor patches to "solve" Spectre for us. We need our own strategy.</p>
    <div>
      <h3>Why not use process isolation?</h3>
      <a href="#why-not-use-process-isolation">
        
      </a>
    </div>
    <p>Cloudflare Workers is designed to run your code in every single Cloudflare location, of which there are currently 200 worldwide and growing.</p><p>We wanted Workers to be a platform that is accessible to everyone -- not just big enterprise customers who can pay megabucks for it. We need to handle a huge number of tenants, where many tenants get very little traffic.</p><p>Combine these two points, and things get tricky.</p><p>A typical, non-edge serverless provider could handle a low-traffic tenant by sending all of that tenant's traffic to a single machine, so that only one copy of the application needs to be loaded. If the machine can handle, say, a dozen tenants, that's plenty. That machine can be hosted in a mega-datacenter with literally millions of machines, achieving economies of scale. However, this centralization incurs latency and worldwide bandwidth costs when the users don't happen to be nearby.</p><p>With Workers, on the other hand, every tenant, regardless of traffic level, currently runs in every Cloudflare location. And in our quest to get as close to the end user as possible, we sometimes choose locations that only have space for a limited number of machines. The net result is that we need to be able to host thousands of active tenants per machine, with the ability to rapidly spin up inactive ones on-demand. That means that each guest cannot take more than a couple megabytes of memory -- hardly enough space for a call stack, much less everything else that a process needs.</p><p>Moreover, we need context switching to be extremely cheap. Many Workers resident in memory will only handle an event every now and then, and many Workers spend less than a fraction of a millisecond on any particular event. In this environment, a single core can easily find itself switching between thousands of different tenants every second. 
Moreover, to handle one event, a significant amount of communication needs to happen between the guest application and its host, meaning still more switching and communications overhead. If each tenant lives in its own process, all this overhead is orders of magnitude larger than if many tenants live in a single process. When using strict process isolation in Workers, we find the CPU cost can easily be 10x what it is with a shared process.</p><p>In order to keep Workers inexpensive, fast, and accessible to everyone, we must solve these issues, and that means we must find a way to host multiple tenants in a single process.</p>
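A back-of-envelope calculation shows why the memory side of this matters. The numbers below are assumptions I've picked for illustration, not Cloudflare's actual figures:

```javascript
// Illustrative density math: how many tenants fit in memory if each one
// costs a few megabytes as an isolate vs. tens of megabytes as its own
// process (own heap, stacks, guard pages, runtime overhead, ...).
// All numbers are assumptions for illustration.

const machineMemoryMB = 64 * 1024; // assumed RAM available on an edge machine
const perIsolateMB = 3;            // assumed footprint of a resident isolate
const perProcessMB = 30;           // assumed footprint of a full process

const tenantsAsIsolates = Math.floor(machineMemoryMB / perIsolateMB);
const tenantsAsProcesses = Math.floor(machineMemoryMB / perProcessMB);

console.log(tenantsAsIsolates);  // ~21,000 tenants resident in memory
console.log(tenantsAsProcesses); // ~2,100 -- an order of magnitude fewer
```

And that's before counting the context-switching cost, which, as noted above, can add another order of magnitude in CPU overhead.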
    <div>
      <h3>There is no "fix" for Spectre</h3>
      <a href="#there-is-no-fix-for-spectre">
        
      </a>
    </div>
    <p>A dirty secret that the industry doesn't like to admit: no one has "fixed" Spectre. Not even when using heavyweight virtual machines. Everyone is still vulnerable.</p><p>The current approach being taken by most of the industry is essentially a game of whack-a-mole. Every couple months, researchers uncover a new Spectre vulnerability. CPU vendors release some new microcode, OS vendors release kernel patches, and everyone has to update.</p><p>But is it enough to merely deploy the latest patches?</p><p>It is abundantly clear that many more vulnerabilities exist, but haven't yet been publicized. Who might know about those vulnerabilities? Most of the bugs being published are being found by (very smart) graduate students on a shoestring budget. Imagine, for a minute, how many more bugs a well-funded government agency, able to buy the very best talent in the world, could be uncovering.</p><p>To truly defend against Spectre, we need to take a different approach. It's not enough to block individual known vulnerabilities. We must address the entire class of vulnerabilities at once.</p>
    <div>
      <h3>We can't stop it, but we can slow it down</h3>
      <a href="#we-cant-stop-it-but-we-can-slow-it-down">
        
      </a>
    </div>
    <p>Unfortunately, it's unlikely that any catch-all "fix" for Spectre will be found. But for the sake of argument, let's try.</p><p>Fundamentally, all Spectre vulnerabilities use side channels to detect hidden processor state. Side channels, by definition, involve observing some non-deterministic behavior of a system. Conveniently, most software execution environments try hard to eliminate non-determinism, because non-deterministic execution makes applications unreliable.</p><p>However, there are a few sorts of non-determinism that are still common. The most obvious among these is timing. The industry long ago gave up on the idea that a program should take the same amount of time every time it runs, because deterministic timing is fundamentally at odds with heuristic performance optimization. Sure enough, most Spectre attacks focus on timing as a way to detect the hidden microarchitectural state of the CPU.</p><p>Some have proposed that we can solve this by making timers inaccurate or adding random noise. However, it turns out that this does not stop attacks; it only makes them slower. If the timer tracks real time at all, then anything you can do to make it inaccurate can be overcome by running an attack multiple times and using statistics to filter out the noise.</p><p>Many security researchers see this as the end of the story. What good is slowing down an attack, if the attack is still possible? Once the attacker gets your private key, it's game over, right? What difference does it make if it takes them a minute or a month?</p>
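Here's a quick simulation of that "averaging out the noise" point. We give the attacker a timer with ±100 units of jitter and ask them to distinguish two operations whose true durations differ by just 1 unit; with enough samples, the difference comes right back out. (`mulberry32` is just a small deterministic PRNG so the sketch is reproducible; all the numbers are arbitrary.)

```javascript
// Simulate a deliberately noisy timer and show that averaging many
// samples recovers a 1-unit timing difference despite +/-100 units of
// uniform jitter per measurement.

function mulberry32(seed) {
  // Tiny deterministic PRNG returning values in [0, 1).
  return function () {
    let t = (seed += 0x6D2B79F5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const rand = mulberry32(42);
// A "timer" that adds uniform noise in [-100, 100] to the true duration.
const noisyTimer = (trueDuration) => trueDuration + (rand() - 0.5) * 200;

function averageMeasurement(trueDuration, samples) {
  let total = 0;
  for (let i = 0; i < samples; i++) total += noisyTimer(trueDuration);
  return total / samples;
}

const N = 1000000;
const diff = averageMeasurement(101, N) - averageMeasurement(100, N);
console.log(diff.toFixed(2)); // close to 1: the noise averages out
```

The attacker pays a cost of a million measurements instead of one -- slower, but not stopped. That's exactly the argument above.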
    <div>
      <h3>Cascading Slow-downs</h3>
      <a href="#cascading-slow-downs">
        
      </a>
    </div>
    <p>We find that, actually, measures that slow down an attack can be powerful.</p><p>Our key insight is this: as an attack becomes slower, new techniques become practical to make it even slower still. The goal, then, is to chain together enough techniques that an attack becomes so slow as to be uninteresting.</p><p>Much of cryptography, after all, is technically vulnerable to "brute force" attacks -- technically, with enough time, you can break it. But when the time required is thousands (or even billions) of years, we decide that this is good enough.</p><p>So, what do we do to slow down Spectre attacks to the point of meaninglessness?</p>
    <div>
      <h2>Freezing a Spectre Attack</h2>
      <a href="#freezing-a-spectre-attack">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5y7HqWBzP2bdM5HCvoJ2Mq/782bdcf77c38675687244b292d4c02a7/freeze-Spectre_2x.png" />
            
            </figure>
    <div>
      <h3>Step 0: Don't allow native code</h3>
      <a href="#step-0-dont-allow-native-code">
        
      </a>
    </div>
    <p>We do not allow our customers to upload native-code binaries to run on our network. We only accept JavaScript and WebAssembly. Of course, many other languages, like Python, Rust, or even COBOL, can be compiled or transpiled to one of these two formats; the important point is that we do another pass on our end, using V8, to convert these formats into true native code.</p><p>This, in itself, doesn't necessarily make Spectre attacks harder. However, I present this as step 0 because it is fundamental to enabling everything else below.</p><p>Accepting native code programs implies being beholden to an existing CPU architecture (typically, x86). In order to execute code with reasonable performance, it is usually necessary to run the code directly on real hardware, severely limiting the host's control over how that execution plays out. For example, a kernel or hypervisor has no ability to prohibit applications from invoking the CLFLUSH instruction, an instruction <a href="https://gruss.cc/files/flushflush.pdf">which is very useful in side channel attacks</a> and almost nothing else.</p><p>Moreover, supporting native code typically implies supporting whole existing operating systems and software stacks, which bring with them decades of expectations about how the architecture works under them. For example, x86 CPUs allow a kernel or hypervisor to disable the RDTSC instruction, which reads a high-precision timer. Realistically, though, disabling it will break many programs because they are implemented to use RDTSC any time they want to know the current time.</p><p>Supporting native code would bind our hands in terms of mitigation techniques. By using an abstract intermediate format, we have much greater freedom.</p>
    <div>
      <h3>Step 1: Disallow timers and multi-threading</h3>
      <a href="#step-1-disallow-timers-and-multi-threading">
        
      </a>
    </div>
    <p>In Workers, you can get the current time using the JavaScript Date API, for example by calling <code>Date.now()</code>. However, the time value returned by this is not really the current time. Instead, it is the time at which the network message was received which caused the application to begin executing. While the application executes, time is locked in place. For example, say an attacker writes:</p>
            <pre><code>let start = Date.now();
for (let i = 0; i &lt; 1e6; i++) {
  doSpectreAttack();
}
let end = Date.now();</code></pre>
            <p>The values of <code>start</code> and <code>end</code> will always be exactly the same. The attacker cannot use <code>Date</code> to measure the execution time of their code, which they would need to do to carry out an attack.</p><blockquote><p><b>As an aside:</b> This is a measure we actually implemented in mid-2017, long before Spectre was announced (and before we knew about it). We implemented this measure because we were worried about timing side channels in general. Side channels have been a concern of the Workers team from day one, and we have designed our system from the ground up with this concern in mind.</p></blockquote><p>Related to our taming of <code>Date</code>, we also do not permit multi-threading or shared memory in Workers. Everything related to the processing of one event happens on the same thread -- otherwise, it would be possible to "race" threads in order to "MacGyver" an implicit timer. We don't even allow multiple Workers operating on the same request to run concurrently. For example, if you have installed a Cloudflare App on your zone which is implemented using Workers, and your zone itself also uses Workers, then a request to your zone may actually be processed by two Workers in sequence. These run in the same thread.</p><p>So, we have prevented code execution time from being measured <i>locally</i>. However, that doesn't actually prevent it from being measured: it can still be measured remotely. For example, the HTTP client that is sending a request to trigger the execution of the Worker can measure how long it takes for the Worker to respond. Of course, such a measurement is likely to be very noisy, since it would have to traverse the Internet. 
Such noise can be overcome, in theory, by executing the attack many times and taking an average.</p><blockquote><p><b>Another aside:</b> Some people have suggested that if a serverless platform like Workers were to completely reset an application's state between requests, so that every request "starts fresh", this would make attacks harder. That is, imagine that a Worker's global variables were reset after every request, meaning you cannot store state in globals in one request and then read that state in the next. Then, doesn't that mean the attack has to start over from scratch for every request? If each request is limited to, say, 50ms of CPU time, does that mean that a Spectre attack isn't possible, because there's not enough time to carry it out? Unfortunately, it's not so simple. State doesn't have to be stored in the Worker; it could instead be stored in a conspiring client. The server can return its state to the client in each response, and the client can send it back to the server in the next request.</p></blockquote><p>But is an attack based on remote timers really feasible in practice? <b>In adversarial testing, with help from leading Spectre experts, we have not been able to develop an attack that actually works in production.</b></p><p>However, we don't feel the lack of a working attack means we should stop building defenses. Instead, we're currently testing some more advanced measures, which we plan to roll out in the coming weeks.</p>
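The frozen-clock behavior can be modeled in a few lines. This is a sketch of the *observable* behavior only, not how the runtime actually implements it:

```javascript
// Toy model of the frozen clock: capture the time when the event
// arrives, and make Date.now() return that captured value for the
// duration of the event.

function runWithFrozenClock(handler) {
  const frozen = Date.now(); // time the event was received
  const realNow = Date.now;
  Date.now = () => frozen;   // freeze the clock while guest code runs
  try {
    return handler();
  } finally {
    Date.now = realNow;      // restore afterwards
  }
}

const { start, end } = runWithFrozenClock(() => {
  const start = Date.now();
  let x = 0;
  for (let i = 0; i < 1e6; i++) x += i; // stand-in for attack code
  const end = Date.now();
  return { start, end };
});

console.log(end - start); // 0 -- elapsed time is unobservable from inside
```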
    <div>
      <h3>Step 2: Dynamic Process Isolation</h3>
      <a href="#step-2-dynamic-process-isolation">
        
      </a>
    </div>
    <p>We know that if an attack is possible at all, it would take a very long time to run -- hours at the very least, maybe as long as weeks. But once an attack has been running even for a second, we have a huge amount of new data that we can use to trigger further measures.</p><p>Spectre attacks, you see, do a lot of "weird stuff" that you wouldn't usually expect to see in a normal program. These attacks intentionally try to create pathological performance scenarios in order to amplify microarchitectural effects. This is especially true when the attack has already been forced to run billions of times in a loop in order to overcome other mitigations, like those discussed above. This tends to show up in metrics like CPU performance counters.</p><p>Now, the usual problem with using performance metrics to detect Spectre attacks is that sometimes you get false positives. Sometimes, a legitimate program behaves really badly. You can't go around shutting down every app that has bad performance.</p><p>Luckily, we don't have to. Instead, we can choose to reschedule any Worker with suspicious performance metrics into its own process. As I described above, we can't do this with every Worker, because the overhead would be too high. But, it's totally fine to process-isolate just a few Workers, defensively. If the Worker is legitimate, it will keep operating just fine, albeit with a little more overhead. Fortunately for us, the nature of our platform is such that we can reschedule a Worker into its own process at basically any time.</p><p>In fact, fancy performance-counter based triggering may not even be necessary here. If a Worker merely uses a large amount of CPU time per event, then the overhead of isolating it in its own process is relatively less, because it switches context less often. 
So, we might as well use process isolation for any Worker that is CPU-hungry.</p><p>Once a Worker is isolated, we can rely on the operating system's Spectre defenses, just as, for example, most desktop web browsers now do.</p><p>Over the past year we've been working with the experts at Graz University of Technology to develop this approach. (TU Graz's team co-discovered Spectre itself, and has been responsible for a huge number of the follow-on discoveries since then.) We have developed the ability to dynamically isolate workers, and we have identified metrics which reliably detect attacks. The whole system is currently undergoing testing to work out any remaining bugs, and we expect to roll it out fully within the next several weeks.</p><p>But wait, didn't I say earlier that even process isolation isn't a complete defense, because it only addresses known vulnerabilities? Yes, this is still true. However, the trend over time is that new Spectre attacks tend to be slower and slower to carry out, and hence we can reasonably guess that by imposing process isolation we have further slowed down even attacks that we don't know about yet.</p>
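The rescheduling policy can be sketched as a few lines of bookkeeping. The threshold, the method names, and the "suspicious counters" flag are all illustrative stand-ins for the real heuristics:

```javascript
// Sketch of a dynamic-isolation policy: workers that burn a lot of CPU
// per event, or that trip anomaly heuristics, get rescheduled into
// their own process. Names and thresholds are illustrative only.

const CPU_MS_THRESHOLD = 10; // assumed per-event CPU budget before isolating

class Scheduler {
  constructor() {
    this.placement = new Map(); // workerId -> "shared" | "isolated"
  }
  recordEvent(workerId, cpuMs, suspiciousCounters = false) {
    const current = this.placement.get(workerId) || "shared";
    if (current === "shared" && (cpuMs > CPU_MS_THRESHOLD || suspiciousCounters)) {
      // In the real system this would move the worker's isolate into a
      // dedicated sandboxed process; here we just record the decision.
      this.placement.set(workerId, "isolated");
    } else {
      this.placement.set(workerId, current); // once isolated, stays isolated
    }
  }
  placementOf(workerId) {
    return this.placement.get(workerId) || "shared";
  }
}

const sched = new Scheduler();
sched.recordEvent("tiny-worker", 0.3);      // cheap event: stays shared
sched.recordEvent("busy-worker", 45);       // CPU-hungry: isolate it
sched.recordEvent("weird-worker", 1, true); // suspicious counters: isolate it
console.log(sched.placementOf("tiny-worker")); // "shared"
console.log(sched.placementOf("busy-worker")); // "isolated"
```

Note that a false positive here is cheap: a legitimate worker that gets isolated just runs with a bit more overhead, which is what makes this defense practical.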
    <div>
      <h3>Step 3: Periodic Whole-Memory Shuffling</h3>
      <a href="#step-3-periodic-whole-memory-shuffling">
        
      </a>
    </div>
    <p>After Step 2, we already think we've prevented all known attacks, and we're only worried about hypothetical unknown attacks. How long does a hypothetical unknown attack take to carry out? Well, obviously, nobody knows. But with all the mitigations in place so far, and considering that new attacks have generally been slower than older ones, we think it's reasonable to guess attacks will take days or longer.</p><p>On a time scale of a day, we have new things we can do. In particular, it's totally reasonable to restart the entire Workers runtime on a daily basis, which resets the locations of everything in memory, forcing attacks to restart the process of discovering the locations of secrets.</p><p>We can also reschedule Workers across physical machines or cordons, so that the window to attack any particular neighbor is limited.</p><p>In general, because Workers are fundamentally preemptible (unlike containers or VMs), we have a lot of freedom to frustrate attacks.</p><p>Once we have dynamic process isolation fully deployed, we plan to develop these ideas next. We see this as an ongoing investment, not something that will ever be "done".</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Phew. You just read <i>twelve pages</i> about Workers security. Hopefully I've convinced you that designing a secure sandbox is only the beginning of building a secure compute platform, and the real work is never done. Popular security culture often dwells on clever hacks and clean fixes. But for the difficult real-world problems, often there is no right answer or simple fix, only the hard work of building defenses thicker and thicker.</p> ]]></content:encoded>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[DNS Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4OiEUD4StStS2HaMVdWSIZ</guid>
            <dc:creator>Kenton Varda</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Workers Announces Broad Language Support]]></title>
            <link>https://blog.cloudflare.com/cloudflare-workers-announces-broad-language-support/</link>
            <pubDate>Tue, 28 Jul 2020 13:01:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce support for Python, Scala, Kotlin, Reason and Dart. You can build applications on Cloudflare Workers using your favorite language starting today. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6MiZPpYQdktFLhRS4f5ZrH/244a344224171d551ce65bb5eb0848d8/Serverless-Week-Day-2_2x.png" />
            
            </figure><p>We initially launched <a href="https://workers.cloudflare.com/">Cloudflare Workers</a> with support for JavaScript and languages that compile to WebAssembly, such as Rust, C, and C++. Since then, Cloudflare and the community have improved the usability of <a href="https://github.com/cloudflare/workers-types">TypeScript on Workers</a>. But we haven't talked much about the many other <a href="https://github.com/jashkenas/coffeescript/wiki/List-of-languages-that-compile-to-JS">popular languages that compile to JavaScript</a>. Today, we’re excited to announce support for Python, Scala, Kotlin, Reason and Dart.</p><p>You can build applications on Cloudflare Workers using your favorite language starting today.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5gnOXAk9K4vWzDnK70ndDP/983cedb85960785c9b389ef3059c12f0/Workers-Languages_2x.png" />
            
            </figure>
    <div>
      <h2><b>Getting Started</b></h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Getting started is as simple as installing <a href="https://github.com/cloudflare/wrangler">Wrangler</a>, then running <code>wrangler generate</code> with the template for your chosen language: <a href="https://github.com/cloudflare/python-worker-hello-world">Python</a>, <a href="https://github.com/cloudflare/scala-worker-hello-world">Scala</a>, <a href="https://github.com/cloudflare/kotlin-worker-hello-world">Kotlin</a>, <a href="https://github.com/cloudflare/dart-worker-hello-world">Dart</a>, or <a href="https://github.com/cloudflare/reason-worker-hello-world">Reason</a>. For Python, this looks like:</p><p><code>wrangler generate my-python-project https://github.com/cloudflare/python-worker-hello-world</code></p><p>Follow the installation instructions in the README inside the generated project directory, then run <code>wrangler publish</code>. You can see the output of your Worker at your workers.dev subdomain, e.g. <a href="https://my-python-project.cody.workers.dev/">https://my-python-project.cody.workers.dev/</a>. You can sign up for a <a href="https://dash.cloudflare.com/sign-up/workers">free Workers account</a> if you don't have one yet.</p><p>That’s it. It is really easy to write in your favorite languages. But this wouldn’t be a very compelling blog post if we left it at that. Now, I’ll shift the focus to how we added support for these languages and how you can add support for others.</p>
    <div>
      <h3>How it all works under the hood</h3>
      <a href="#how-it-all-works-under-the-hood">
        
      </a>
    </div>
    <p>Language features are important. For instance, it's hard to give up the safety and expressiveness of <a href="https://reasonml.github.io/docs/en/pattern-matching">pattern matching</a> once you've used it. Familiar syntax matters to us as programmers.</p><p>You may also have existing code in your preferred language that you'd like to reuse. Just keep in mind that the <a href="https://www.infoq.com/presentations/cloudflare-v8/">advantages of running on V8</a> come with the limitation that if you use libraries that depend on native code or language-specific VM features, they may not translate to JavaScript. WebAssembly may be an option in that case. But for memory-managed languages you're usually better off compiling to JavaScript, at least until the story around <a href="https://github.com/WebAssembly/gc/issues/44">garbage collection for WebAssembly</a> stabilizes.</p><p>I'll walk through how the Worker language templates are made using a representative example of a dynamically typed language, Python, and a statically typed language, Scala. If you want to follow along, you'll need to have <a href="https://github.com/cloudflare/wrangler">Wrangler</a> installed and configured with your Workers account. If it's your first time using Workers it's a good idea to go through the <a href="https://developers.cloudflare.com/workers/quickstart">quickstart</a>.</p>
    <div>
      <h4>Dynamically typed languages: Python</h4>
      <a href="#dynamically-typed-languages-python">
        
      </a>
    </div>
    <p>You can generate a starter "hello world" Python project for Workers by running</p><p><code>wrangler generate my-python-project https://github.com/cloudflare/python-worker-hello-world</code></p><p>Wrangler will create a <code>my-python-project</code> directory and helpfully remind you to configure your account_id in the wrangler.toml file inside it.  The <a href="https://github.com/cloudflare/python-worker-hello-world/blob/master/README.md">README.md</a> file in the directory links to instructions on setting up <a href="http://www.transcrypt.org/docs/html/installation_use.html">Transcrypt</a>, the Python to JavaScript compiler we're using. If you already have Python 3.7 and virtualenv installed, this just requires running</p>
            <pre><code>cd my-python-project
virtualenv env
source env/bin/activate
pip install transcrypt
wrangler publish</code></pre>
            <p>The main requirement for compiling to JavaScript on Workers is the ability to produce a single js file that fits in our <a href="https://developers.cloudflare.com/workers/about/limits/#script-size">bundle size limit</a> of 1MB. Transcrypt adds about 70k for its Python runtime in this case, which is well within that limit. But by default running Transcrypt on a Python file will produce multiple JS and source map files in a <code>__target__</code> directory. Thankfully Wrangler has <a href="https://developers.cloudflare.com/workers/tooling/wrangler/webpack/">built in support for webpack</a>. There's a <a href="https://www.npmjs.com/package/transcrypt-loader">webpack loader</a> for Transcrypt, making it easy to produce a single file. See the <a href="https://github.com/cloudflare/python-worker-hello-world/blob/master/webpack.config.js">webpack.config.js</a> file for the setup.</p><p>The point of all this is to run some Python code, so let's take a look at <a href="https://github.com/cloudflare/python-worker-hello-world/blob/master/index.py">index.py</a>:</p>
            <pre><code>def handleRequest(request):
    return __new__(Response('Python Worker hello world!', {
        'headers' : { 'content-type' : 'text/plain' }
    }))

addEventListener('fetch', (lambda event: event.respondWith(handleRequest(event.request))))</code></pre>
            <p>In most respects this is very similar to any other <a href="https://developers.cloudflare.com/workers/quickstart#writing-code">Worker hello world</a>, just in Python syntax. Dictionary literals take the place of JavaScript objects, <code>lambda</code> is used instead of an anonymous arrow function, and so on. If using <code>__new__</code> to create instances of JavaScript classes seems awkward, the <a href="http://www.transcrypt.org/docs/html/special_facilities.html#creating-javascript-objects-with-new-constructor-call">Transcrypt docs</a> discuss an alternative.</p><p>Clearly, <code>addEventListener</code> is not a built-in Python function, it's part of the Workers runtime. Because Python is dynamically typed, you don't have to worry about providing type signatures for JavaScript APIs. The downside is that mistakes will result in failures when your Worker runs, rather than when Transcrypt compiles your code. Transcrypt does have experimental support for some degree of static checking using <a href="http://www.transcrypt.org/docs/html/installation_use.html#static-type-validation">mypy</a>.</p>
    <div>
      <h4>Statically typed languages: Scala</h4>
      <a href="#statically-typed-languages-scala">
        
      </a>
    </div>
    <p>You can generate a starter "hello world" Scala project for Workers by running</p><p><code>wrangler generate my-scala-project https://github.com/cloudflare/scala-worker-hello-world</code></p><p>The Scala to JavaScript compiler we're using is <a href="https://www.scala-js.org/doc/">Scala.js</a>. It has a plugin for the Scala build tool, so <a href="https://www.scala-sbt.org/1.x/docs/Setup.html">installing sbt</a> and a JDK is all you'll need.</p><p>Running <code>sbt fullOptJS</code> in the project directory will compile your Scala code to a single index.js file. The build configuration in build.sbt is set up to output to the root of the project, where Wrangler expects to find an index.js file. After that you can run <code>wrangler publish</code> as normal.</p><p>Scala.js uses the Google Closure Compiler to optimize for size when running <code>fullOptJS</code>. For the hello world, the file size is 14k. A more realistic project involving async fetch weighs in around 100k, still well within Workers limits.</p><p>In order to take advantage of static type checking, you're going to need type signatures for the JavaScript APIs you use. There are existing Scala signatures for fetch and service worker related APIs. You can see those being imported in the entry point for the Worker, <a href="https://github.com/cloudflare/scala-worker-hello-world/blob/master/src/main/scala/Main.scala">Main.scala</a>:</p>
            <pre><code>import org.scalajs.dom.experimental.serviceworkers.{FetchEvent}
import org.scalajs.dom.experimental.{Request, Response, ResponseInit}
import scala.scalajs.js</code></pre>
            <p>The import of scala.scalajs.js allows easy access to <a href="https://www.scala-js.org/doc/interoperability/types.html">Scala equivalents of JavaScript types</a>, such as <code>js.Array</code> or <code>js.Dictionary</code>. The remainder of Main looks fairly similar to a <a href="https://github.com/EverlastingBugstopper/worker-typescript-template/blob/master/src/handler.ts">Typescript Worker hello world</a>, with syntactic differences such as Unit instead of Void and square brackets instead of angle brackets for type parameters:</p>
            <pre><code>object Main {
  def main(args: Array[String]): Unit = {
    Globals.addEventListener("fetch", (event: FetchEvent) =&gt; {
      event.respondWith(handleRequest(event.request))
    })
  }

  def handleRequest(request: Request): Response = {
    new Response("Scala Worker hello world", ResponseInit(
        _headers = js.Dictionary("content-type" -&gt; "text/plain")))
  }
}  </code></pre>
            <p>Request, Response and FetchEvent are defined by the previously mentioned imports. But what's this Globals object? There are some Worker-specific extensions to JavaScript APIs. You can handle these in a statically typed language by either <a href="https://github.com/sjrd/scala-js-ts-importer">automatically converting</a> existing Typescript <a href="https://github.com/cloudflare/workers-types">type definitions for Workers</a> or by writing type signatures for the features you want to use. Writing the type signatures isn't hard, and it's good to know how to do it, so I included an example in <a href="https://github.com/cloudflare/scala-worker-hello-world/blob/master/src/main/scala/Globals.scala">Globals.scala</a>:</p>
            <pre><code>import scalajs.js
import js.annotation._

@js.native
@JSGlobalScope
object Globals extends js.Object {
  def addEventListener(`type`: String, f: js.Function): Unit = js.native
}</code></pre>
            <p>The annotation <code>@js.native</code> indicates that the implementation is in existing JavaScript code, not in Scala. That's why the body of the <code>addEventListener</code> definition is just <code>js.native</code>. In a JavaScript Worker you'd call <code>addEventListener</code> as a top-level function in global scope. Here, the <code>@JSGlobalScope</code> annotation indicates that the function signatures we're defining are available in the JavaScript global scope.</p><p>You may notice that the type of the function passed to <code>addEventListener</code> is just <code>js.Function</code>, rather than specifying the argument and return types. If you want more type safety, this could be done as <code>js.Function1[FetchEvent, Unit]</code>.  If you're trying to work quickly at the expense of safety, you could use <code>def addEventListener(any: Any*): Any</code> to allow anything.</p><p>For more information on defining types for JavaScript interfaces, see the <a href="https://www.scala-js.org/doc/interoperability/facade-types.html">Scala.js docs</a>.</p>
    <div>
      <h4>Using Workers KV and async Promises</h4>
      <a href="#using-workers-kv-and-async-promises">
        
      </a>
    </div>
    <p>Let's take a look at a more realistic example using Workers KV and asynchronous calls. The idea for the project is our own HTTP API to store and retrieve text values. For simplicity's sake I'm using the first slash-separated component of the path for the key, and the second for the value. Usage of the finished project will look like <code>PUT /meaning of life/42</code> or <code>GET /meaning of life/</code>.</p><p>The first thing I need is to add type signatures for the parts of the <a href="https://developers.cloudflare.com/workers/reference/apis/kv/">KV API</a> that I'm using, in Globals.scala. My KV namespace binding in wrangler.toml is just going to be named KV, resulting in a corresponding global object:</p>
            <pre><code>object Globals extends js.Object {
  def addEventListener(`type`: String, f: js.Function): Unit = js.native
  
  val KV: KVNamespace = js.native
}</code></pre>
            
            <p>Once the binding is in place and the Worker is published, using the finished API looks like this:</p>
            <pre><code>bash$ curl -w "\n" -X PUT 'https://scala-kv-example.cody.workers.dev/meaning of life/42'

bash$ curl -w "\n" -X GET 'https://scala-kv-example.cody.workers.dev/meaning of life/'
42</code></pre>
            <p>So what's the definition of the KVNamespace type? It's an interface, so it becomes a Scala trait with a <code>@js.native</code> annotation. The only methods I need to add right now are the simple versions of KV.get and KV.put that take and return strings. The return values are asynchronous, so they're wrapped in a <a href="https://www.scala-js.org/api/scalajs-library/1.1.0/scala/scalajs/js/Promise.html">js.Promise</a>. I'll make that wrapped string a type alias, KVValue, just in case we want to deal with the array or stream return types in the future:</p>
            <pre><code>object KVNamespace {
  type KVValue = js.Promise[String]
}

@js.native
trait KVNamespace extends js.Object {
  import KVNamespace._
  
  def get(key: String): KVValue = js.native
  
  def put(key: String, value: String): js.Promise[Unit] = js.native
}</code></pre>
            <p>With type signatures complete, I'll move on to Main.scala and how to handle interaction with JavaScript Promises. It's possible to use <code>js.Promise</code> directly, but I'd prefer to use Scala semantics for asynchronous Futures. The methods <code>toJSPromise</code> and <code>toFuture</code> from <code>js.JSConverters</code> can be used to convert back and forth:</p>
            <pre><code>  def get(key: String): Future[Response] = {
    Globals.KV.get(key).toFuture.map { (value: String) =&gt;
        new Response(value, okInit)
    } recover {
      case err =&gt;
        new Response(s"error getting a value for '$key': $err", errInit)
    }
  }</code></pre>
            <p>The function for putting values makes similar use of <code>toFuture</code> to convert the return value from KV into a Future. I use <code>map</code> to transform the value into a Response, and <code>recover</code> to handle failures. If you prefer async / await syntax instead of using combinators, you can use <a href="https://github.com/scala/scala-async">scala-async</a>.</p><p>Finally, the new definition for <code>handleRequest</code> is a good example of how pattern matching makes code more concise and less error-prone at the same time. We match on exactly the combinations of HTTP method and path components that we want, and default to an informative error for any other case:</p>
            <pre><code>  def handleRequest(request: Request): Future[Response] = {
    (request.method, request.url.split("/")) match {
      case (HttpMethod.GET, Array(_, _, _, key)) =&gt;
        get(key)
      case (HttpMethod.PUT, Array(_, _, _, key, value)) =&gt;
        put(key, value)
      case _ =&gt;
        Future.successful(
          new Response("expected GET /key or PUT /key/value", errInit))
    }
  }</code></pre>
            <p>You can get the complete code for this example by running</p>
            <pre><code>wrangler generate projectname https://github.com/cloudflare/scala-worker-kv</code></pre>
            
    <div>
      <h2>How to contribute</h2>
      <a href="#how-to-contribute">
        
      </a>
    </div>
    <p>I'm a fan of programming languages, and will continue to add more <a href="https://developers.cloudflare.com/workers/templates/">Workers templates</a>. You probably know your favorite language better than I do, so <a href="https://github.com/cloudflare/template-registry/blob/master/CONTRIBUTING.md">pull requests are welcome</a> for a simple hello world or more complex example.</p><p>And if you're into programming languages check out the <a href="https://redmonk.com/sogrady/2020/07/27/language-rankings-6-20/?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=language-rankings-6-20">latest language rankings</a> from RedMonk where Python is the first non-Java or JavaScript language ever to place in the top two of these rankings.</p><p>Stay tuned for the rest of <a href="/tag/serverless-week/">Serverless Week</a>!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Python]]></category>
            <category><![CDATA[Wrangler]]></category>
            <guid isPermaLink="false">75ucduojp4RdGzqCmeqmdX</guid>
            <dc:creator>Cody Koeninger</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Migration of Legacy Applications to Workers]]></title>
            <link>https://blog.cloudflare.com/the-migration-of-legacy-applications-to-workers/</link>
            <pubDate>Tue, 28 Jul 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ As Cloudflare Workers, and other Serverless platforms, continue to drive down costs while making it easier for developers to stand up globally scaled applications, the migration of legacy applications is becoming increasingly common. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>As Cloudflare Workers, and other Serverless platforms, continue to drive down costs while making it easier for developers to stand up globally scaled applications, the migration of legacy applications is becoming increasingly common. In this post, I want to show how easy it is to migrate such an application onto Workers. To demonstrate, I’m going to use a common migration scenario: moving a legacy application — on an old compute platform behind a VPN or in a private cloud — to a serverless compute platform behind zero-trust security.</p>
    <div>
      <h3>Wait but why?</h3>
      <a href="#wait-but-why">
        
      </a>
    </div>
    <p>Before we dive further into the technical work, however, let me just address up front: why would someone want to do this? What benefits would they get from such a migration? In my experience, there are two sets of reasons: (1) factors that are “pushing” off legacy platforms, or the constraints and problems of the legacy approach; and (2) factors that are “pulling” onto serverless platforms like Workers, which speak to the many benefits of this new approach. In terms of the push factors, we often see three core ones:</p><ul><li><p>Legacy compute resources are not flexible and must be constantly maintained, often leading to capacity constraints or excess cost;</p></li><li><p>Maintaining VPN credentials is cumbersome, and can introduce security risks if not done properly;</p></li><li><p>VPN client software can be challenging for non-technical users to operate.</p></li></ul><p>Similarly, there are some key benefits “pulling” folks onto Serverless applications and zero-trust security:</p><ul><li><p>Instant scaling, up or down, depending on usage. No capacity constraints, and no excess cost;</p></li><li><p>No separate credentials to maintain; users can use Single Sign On (SSO) across many applications;</p></li><li><p>VPN hardware, private clouds, and existing compute can be retired, simplifying operations and reducing cost.</p></li></ul><p>While the benefits to this more modern end-state are clear, there’s one other thing that causes organizations to pause: the costs in time and migration effort seem daunting. Often what organizations find is that migration is not as difficult as they fear. In the rest of this post, I will show you how Cloudflare Workers, and the rest of the Cloudflare platform, can greatly simplify migrations and help you modernize all of your applications.</p>
    <div>
      <h3>Getting Started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>To take you through this, we will use a contrived application I’ve written in Node.js to illustrate the steps we would take with a real, and far more complex, example. The goal is to show the different tools and features you can use at each step, and how our platform design supports development and cutover of an application. We’ll use four key Cloudflare technologies, as we see how to move this application off of my laptop and into the cloud:</p><ol><li><p><b>Serverless Compute through Workers</b></p></li><li><p><b>Robust Developer-focused Tooling for Workers via Wrangler</b></p></li><li><p><b>Zero-Trust security through Access</b></p></li><li><p><b>Instant, Secure Origin Tunnels through Argo Tunnels</b></p></li></ol><p>Our example application for today is called Post Process, and it performs business logic on input provided in an HTTP POST body. It takes the input data from authenticated clients, performs a processing task, and responds with the result in the body of an HTTP response. The server runs in Node.js on my laptop.</p><p>Since the example application is written in Node.js, we will be able to directly copy some of the JavaScript assets for our new application. You could follow this “direct port” method not only for JavaScript applications, <a href="https://github.com/cloudflare/worker-emscripten-template">but even applications in our other WASM-supported languages.</a> For other languages, you first need to rewrite or transpile into one with WASM support.</p><p><b>Into our Application</b></p><p>Our basic example will perform only simple text processing, so that we can focus on the broad features of the migration. I’ve set up an unauthenticated copy (using Workers, to give us a <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">scalable and reliable place to host</a> it) at <a href="https://postprocess-workers.kirk.workers.dev/postprocess">https://postprocess-workers.kirk.workers.dev/postprocess</a> where you can see how it operates. Here is an example cURL:</p>
            <pre><code>curl -X POST https://postprocess-workers.kirk.workers.dev/postprocess --data '{"operation":"2","data":"Data-Gram!"}'</code></pre>
    <p>The relevant takeaways from the code itself are pretty simple:</p><ul><li><p>There are two code modules, which conveniently split the application logic completely from the Preprocessing / HTTP interface.</p></li><li><p>The application logic module exposes one function <i>postProcess(object)</i> where <i>object</i> is the parsed JSON of the POST body. It returns a JavaScript object, ready to be encoded into a string in the JSON HTTP response. <b>This module can be run on Workers JavaScript, with no changes. It only needs a new preprocessing / HTTP interface</b>.</p></li><li><p>The Preprocessing / HTTP interface runs on raw Node.js and exposes a local HTTP server. The server does not directly take inbound traffic from the Internet, but sits behind a gateway that controls access to the service.</p></li></ul>
    <div>
      <h4>Code snippet from Node.js HTTP module</h4>
      <a href="#code-snippet-from-node-js-http-module">
        
      </a>
    </div>
    
            <pre><code>const server = http.createServer((req, res) =&gt; {
    if (req.url == '/postprocess') {
        if (req.method == 'POST') {
            gatherPost(req, data =&gt; {
                try {
                    jsonData = JSON.parse(data)
                } catch (e) {
                    res.end('Invalid JSON payload! \n')
                    return
                }
                result = postProcess(jsonData)
                res.write(JSON.stringify(result) + '\n');
                res.end();
            })
        } else {
            res.end('Invalid Method, only POST is supported! \nPlease send a POST with data in format {"operation":"1","data":"Data-Gram!"} \n')
        }
    } else {
        res.end('Invalid request. Did you mean to POST to /postprocess? \n');
    }
});</code></pre>
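            <p>The snippet above relies on a <code>gatherPost</code> helper whose implementation isn’t shown. Based only on the call site, a minimal version might look like the sketch below; the body is my assumption, not the original code:</p>

```javascript
// Hypothetical gatherPost helper, inferred from the call site above:
// buffer the POST body chunks as they arrive, then invoke the callback
// with the complete body text once the request stream ends.
function gatherPost(req, callback) {
  let body = '';
  req.on('data', chunk => { body += chunk; });
  req.on('end', () => callback(body));
}
```

Whatever its exact form, this is the piece of Node.js-specific plumbing that the Workers version gets for free via <code>await request.json()</code>.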
            
    <div>
      <h4>Code snippet from Node.js logic module</h4>
      <a href="#code-snippet-from-node-js-logic-module">
        
      </a>
    </div>
    
            <pre><code>function postProcess (postJson) {
        const ServerVersion = "2.5.17"
        if(postJson != null &amp;&amp; 'operation' in postJson &amp;&amp; 'data' in postJson){
                var output
                var operation = postJson['operation']
                var data = postJson['data']
                switch(operation){
                        case "1":
                              output = String(data).toLowerCase()
                              break
                        case "2":
                              var d = data + "\n"
                              output = d + d + d
                              break
                        case "3":
                              output = ServerVersion
                              break
                        default:
                              output = "Invalid Operation"
                }
                return {'Output': output}
        }
        else{
                return {'Error':'Invalid request, invalid JSON format'}
        }
}</code></pre>
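            <p>To make the module’s contract concrete, here is a quick sketch exercising that logic (the function body is condensed from the module above; the calls and results follow directly from its switch statement):</p>

```javascript
// Condensed copy of the legacy postProcess logic, used here to show the
// input/output contract the new Workers front end must preserve.
function postProcess(postJson) {
  const ServerVersion = "2.5.17";
  if (postJson != null && 'operation' in postJson && 'data' in postJson) {
    let output;
    switch (postJson['operation']) {
      case "1": output = String(postJson['data']).toLowerCase(); break;
      case "2": { const d = postJson['data'] + "\n"; output = d + d + d; break; }
      case "3": output = ServerVersion; break;
      default:  output = "Invalid Operation";
    }
    return { 'Output': output };
  }
  return { 'Error': 'Invalid request, invalid JSON format' };
}

console.log(postProcess({ operation: "1", data: "Data-Gram!" }).Output); // data-gram!
console.log(postProcess({ operation: "3", data: "" }).Output);           // 2.5.17
```

This is exactly the behavior exercised by the earlier cURL example: operation "2" echoes the input three times, and operation "3" reports the server version.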
            
    <div>
      <h4>Current State Application Architecture</h4>
      <a href="#current-state-application-architecture">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1d6MEhUqu77DDapE41clZn/dea420cdf5233fa5afd3f9b7aaa22280/image4-9.png" />
            
            </figure>
    <div>
      <h3>Design Decisions</h3>
      <a href="#design-decisions">
        
      </a>
    </div>
    <p>With all this information in hand, we can arrive at the details of our new Cloudflare-based design:</p><ol><li><p>Keep the business logic completely intact, and specifically use the same .js asset</p></li><li><p>Build a new preprocessing layer in Workers to replace the Node.js module</p></li><li><p>Use Cloudflare Access to authenticate users to our application</p></li></ol>
    <div>
      <h4>Target State Application Architecture</h4>
      <a href="#target-state-application-architecture">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3XTQxjhc7V7BCNLD6OYtNu/eec3260400209afbf1d7e56f0d753bc4/image3-15.png" />
            
            </figure>
    <div>
      <h3>Finding the first win</h3>
      <a href="#finding-the-first-win">
        
      </a>
    </div>
    <p>One good way to make a migration successful is to find a quick win early on: a useful task which can be executed while other work is still ongoing. It is even better if the quick win also benefits the eventual cutover. We can find a quick win here if we solve the zero-trust security problem ahead of the compute problem by putting Cloudflare’s security in front of the existing application.</p><p>We will do this by using Cloudflare’s <a href="https://developers.cloudflare.com/argo-tunnel/">Argo Tunnel</a> feature to securely connect to the existing application, and <a href="https://developers.cloudflare.com/access/">Access</a> for zero-trust authentication. Below, you can see how easy this process is for any command-line user, with our cloudflared tool.</p><p>I open up a terminal and use <code>cloudflared tunnel login</code>, which presents me with an authentication flow. I then use the <code>cloudflared tunnel --hostname postprocess.kschwenkler.com --url localhost:8080</code> command to connect an Argo Tunnel between the “url” (my local server) and the “hostname” (the new, public address we will use on my Cloudflare zone).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6HmHbX1sc9KoQHRihx36SP/e1aba58bbdba9d7afe35a330c219cb5c/2.gif" />
            
            </figure><p>Next I flip over to my Cloudflare dashboard and attach an Access Policy to the “hostname” I specified before. We will be using the Service Token mode of Access, which generates a client-specific security token that the client can attach to each HTTP POST. Other modes are better suited to interactive browser use cases.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/343nkuLHV6Hro1OpA9ae6g/8ff8045b063ecf3d5cc479782afbd819/3.gif" />
            
            </figure><p>Now, without using the VPN, I can send a POST to the service, still running on Node.js on my laptop, from any Internet-connected device that has the correct token! It has taken only a few minutes to add zero-trust security to this application and safely expose it to the Internet, while still running on a legacy compute platform (my laptop!).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7BYW2YOuW6EuHNgzM92W7i/2de262828b7b10718f813274a15e46fc/4.gif" />
            
            </figure>
    <div>
      <h3>“Quick Win” Architecture</h3>
      <a href="#quick-win-architecture">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qYhPJkmdsepqfjMCytdnS/6181df1c9920da0109d37cf3cc62049b/image13.png" />
            
            </figure><p>Beyond the direct benefit of a huge security upgrade, we’ve also made our eventual application migration much easier by putting the traffic through the target-state API gateway already. We will see later how, in this state, we can surgically move traffic to the new application for testing.</p>
    <div>
      <h3>Lift to the Cloud</h3>
      <a href="#lift-to-the-cloud">
        
      </a>
    </div>
    <p>With our zero-trust security benefits in hand, and our traffic running through Cloudflare, we can now proceed with the migration of the application itself to Workers. We’ll be using the <a href="https://developers.cloudflare.com/workers/tooling/wrangler">Wrangler</a> tooling to make this process very easy.</p><p>As noted when we first looked at the code, this contrived application exposes a very clean interface between the Node.js-specific HTTP module, which we need to replace, and the business logic <i>postprocess</i> module, which we can use as is with Workers. We’ll first need to re-write the HTTP module, and then bundle it with the existing business logic into a new Workers application.</p><p>Here is a handwritten example of the basic pattern we’ll use for the HTTP module. We can see how the Service Workers API makes it very easy to grab the POST body with <i>await</i>, and how the JSON interface lets us easily pass the data to the <i>postprocess</i> module we took directly from the initial Node.js app.</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  let requestData
  try {
    requestData = await request.json()
  } catch (e) {
    return new Response("Invalid JSON", {status: 500})
  }
  const response = new Response(JSON.stringify(postProcess(requestData)))
  return response
}</code></pre>
            <p>For our work on the mock application, we will go a slightly different route, more in line with a real application, which would be more complex. Instead of writing this by hand, we will use <a href="https://developers.cloudflare.com/workers/quickstart">Wrangler</a> and our <a href="https://developers.cloudflare.com/workers/templates/pages/router/">Router template</a> to build the new front end from a robust framework.</p><p>We’ll run <code>wrangler generate post-process-workers https://github.com/cloudflare/worker-template-router</code> to initialize a new Wrangler project with the Router template. Most of the configurations for this template will work as is; we just have to update <code>account_id</code> in our <code>wrangler.toml</code> and make a few small edits to the code in <code>index.js</code>.</p><p>Below is our <code>index.js</code> after my edits. Note the line <code>const postProcess = require('./postProcess.js')</code> at the start of the new HTTP module: it tells Wrangler to include the original business logic from the legacy app’s <code>postProcess.js</code> module, which I will copy to our working directory.</p>
            <pre><code>const Router = require('./router')
const postProcess = require('./postProcess.js')

addEventListener('fetch', event =&gt; {
    event.respondWith(handleRequest(event.request))
})

async function handler(request) {
    const init = {
        headers: { 'content-type': 'application/json' },
    }
    const body = JSON.stringify(postProcess(await request.json()))
    return new Response(body, init)
}

async function handleRequest(request) {
    const r = new Router()
    r.post('.*/postprocess*', request =&gt; handler(request))
    r.get('/', () =&gt; new Response('Hello worker!')) // return a default message for the root route

    const resp = await r.route(request)
    return resp
}</code></pre>
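<p>The <code>postProcess.js</code> module itself is never shown in this post, so here is a hypothetical sketch of the shape such a module might take. The transformation inside is purely illustrative; the point is that a pure-JavaScript CommonJS module like this uses no Node.js-specific APIs, which is why it runs unchanged under Node.js and, once bundled by Wrangler, under Workers.</p>

```javascript
// Hypothetical stand-in for the legacy app's postProcess.js business logic.
// It takes plain JSON-compatible data in and returns plain data out, with no
// dependency on Node.js runtime APIs -- so it can be reused as is on Workers.
function postProcess(data) {
  // Illustrative transformation: lowercase all keys and tag the result.
  const result = {}
  for (const [key, value] of Object.entries(data)) {
    result[key.toLowerCase()] = value
  }
  result.processed = true
  return result
}

module.exports = postProcess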
            <p>Now we can simply run <code>wrangler publish</code> to put our application on <a href="https://workers.dev/">workers.dev</a> for testing! The Router template’s defaults and the small edits made above are all we need. Since Wrangler automatically exposes the test application to the Internet (note that we could <i>also</i> put the test application behind Access, with a slightly modified method), we can easily send test traffic from any device.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ikUB5oJkvyjkScPyyJ8PT/3d4bad9f0b3fc2b9cb1d8bf115f900aa/5.gif" />
            
            </figure>
    <div>
      <h4>Shift, Safely!</h4>
      <a href="#shift-safely">
        
      </a>
    </div>
    <p>With our application up for testing on workers.dev, we finally come to the last and most daunting migration step: cutting over traffic from the legacy application to the new one without any service interruption.</p><p>Luckily, we had our quick win earlier and are already routing our production traffic through the Cloudflare network (to the legacy application via Argo Tunnel). This provides huge benefits now that we are at the cutover step. Without changing our IP address, SSL configuration, or any other client-facing properties, we can route traffic to the new application with just one Wrangler command.</p>
    <div>
      <h4>Seamless cutover from Transition to Target state</h4>
      <a href="#seamless-cutover-from-transition-to-target-state">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7rYqzkjG8iwCxKPNYSbvTH/fd7135b9281f04ae799205a7becb86e6/image14.png" />
            
            </figure><p>We simply modify <code>wrangler.toml</code> to indicate the production domain and route we’d like the application to operate on, and run <code>wrangler publish</code>. As soon as Cloudflare receives this update, it will send production traffic to our new application instead of the Argo Tunnel. We have configured the application to send a ‘version’ header, which lets us verify the cutover easily using curl.</p>
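<p>The <code>wrangler.toml</code> change is small. A sketch of what it might look like for a Wrangler 1.x project follows; the zone ID and route below are placeholders for this example, not values from the post:</p>

```toml
name = "post-process-workers"
type = "webpack"
account_id = "<your account id>"

# Stop publishing to the workers.dev test subdomain...
workers_dev = false

# ...and attach the Worker to the production zone and route instead.
zone_id = "<your zone id>"
route = "app.example.com/postprocess*"
```

<p>Publishing with this configuration takes over the matching route; reverting these lines, or deleting the route manually, is the rollback path.</p>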
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1NNWzJ0Lf1qPBPQVGv6SpK/8365b2d8c6ff73ed042416398445fb7d/6.gif" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1S3FRDRzxAL0M2JciODIkA/76b7c2c9935629bee8c177d268d6237a/7.gif" />
            
            </figure><p>Rollback, if needed, is also easy. We can either set <code>wrangler.toml</code> back to workers.dev-only mode and run <code>wrangler publish</code> again, or delete our route manually. Either will send traffic back to the Argo Tunnel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7zkTHE7QsTycEDpCzOjCWL/3522bdaf3498d5ab7fadc493fa7b7075/8.gif" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qYUySUW2sUik0WhhsOFiP/bf1e89e2a3c46fae2d9578ad1489882a/9.gif" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wlX569YmzRMhxjjgy7ZLm/a199cd40004b09aa3734229bffd1012c/10.png" />
            
            </figure>
    <div>
      <h3>In Conclusion</h3>
      <a href="#in-conclusion">
        
      </a>
    </div>
    <p>Clearly, a real application will be more complex than our example above. It may have multiple components, with complex interactions, which must each be handled in turn. Argo Tunnel might remain in use, to connect to a data store or other application outside of our network. We might use WASM to support modules written in other languages. In any of these scenarios, Cloudflare’s Wrangler tooling and serverless capabilities will help us work through the complexities and achieve success.</p><p>I hope that this simple example has helped you to see how Wrangler, cloudflared, Workers, and our entire global network can work together to make migrations as quick and hassle-free as possible. Whether for this case of an old application behind a VPN, or another application that has outgrown its current home, our Workers platform, Wrangler tooling, and underlying network will scale to meet your business needs.</p> ]]></content:encoded>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">3vNFOI5bc36SSeB0SaAr1s</guid>
            <dc:creator>Kirk Schwenkler</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Workers Unbound]]></title>
            <link>https://blog.cloudflare.com/introducing-workers-unbound/</link>
            <pubDate>Mon, 27 Jul 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we are excited to announce the next phase of this with the launch of our new platform, Workers Unbound, without restrictive CPU limits in a private beta. ]]></description>
            <content:encoded><![CDATA[ <p>We launched Cloudflare Workers® in 2017 with the goal of building the development platform that we wished we had. We want to enable developers to build great software while Cloudflare manages the overhead of configuring and maintaining the infrastructure. Workers are with you from the first line of code, to the first application, all the way to a globally scaled product. By making our Edge network programmable and providing servers in 200+ locations around the world, we offer you the power to execute on even the biggest ideas.</p><p>Behind the scenes at Cloudflare, we’ve been steadily working towards making development on the Edge even more powerful and flexible. Today, we are excited to announce the next phase of this: the launch of our new platform, Workers Unbound, which removes restrictive CPU limits, now in private beta (sign up for details <a href="https://www.cloudflare.com/workers-unbound-beta/">here</a>).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2jQ86X9qoEKfdA1rHg2xd2/bf79311419d3cfa6563d01d8bdba3474/image2-9-3.png" />
            
            </figure>
    <div>
      <h3>What is Workers Unbound? How is it different from Cloudflare Workers?</h3>
      <a href="#what-is-workers-unbound-how-is-it-different-from-cloudflare-workers">
        
      </a>
    </div>
    <p>Workers Unbound is like our classic Cloudflare Workers (now referred to as Workers Bundled), but for applications that need longer execution times. We are extending our CPU limits to allow customers to bring all of their workloads onto Workers, no matter how intensive. It eliminates the choice developers often have to make between running fast, simple work on the Edge or running heavy computation in a centralized cloud with unlimited resources.</p><p>This platform will unlock a new class of intensive applications with heavy computation burdens, like image processing or complex algorithms. In fact, this is a highly requested feature that we’ve previously unlocked for a number of our enterprise customers, and we are now in the process of making it widely available to the public.</p><p>Workers Unbound is built to be a general purpose computing platform, not just an alternative to niche edge computing offerings. We want it to be more compelling for any workload you'd previously think to run on traditional, centralized serverless platforms — faster, more affordable, and more flexible.</p>
    <div>
      <h3>Neat! How can I try it?</h3>
      <a href="#neat-how-can-i-try-it">
        
      </a>
    </div>
    <p>We are excited to offer Workers Unbound to a select group of developers in a private beta. Please reach out via this <a href="https://www.cloudflare.com/workers-unbound-beta/">form</a> with some details about your use case, and we’ll be in touch! We’d love to hear your feedback and can’t wait to see what you build.</p>
    <div>
      <h3>What’s going on behind the scenes?</h3>
      <a href="#whats-going-on-behind-the-scenes">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/">Serverless</a> as it’s known today is constrained by being built on top of old paradigms. Most serverless platforms have inherited containers from their cloud computing origins. Cloudflare has had the opportunity to rethink serverless by building on the Edge and making this technology more performant at scale for complex applications.</p><p>We reap performance benefits by running code on <a href="https://v8docs.nodesource.com/node-0.8/d5/dda/classv8_1_1_isolate.html">V8 Isolates</a>, which are designed to start very quickly with minimal cold start times. Isolates are a technology built by the Google Chrome team to power the JavaScript engine in the browser, and they introduce a new model for running multi-tenant code. They provide lightweight contexts that group variables with the code allowed to mutate them.</p><p>Isolates are far more lightweight than containers, a central tenet of most other serverless providers’ architecture. Containers effectively run a virtual machine, and there’s a lot of overhead associated with them. That, in turn, makes it very hard to run the workload outside a centralized environment.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GMRxjfSybfSIYGYmzpEiH/a36b5bafa3d383f794d1262b6d06e3f3/isolates-model_2x.png" />
            
            </figure><p>Moreover, a single process on Workers can run hundreds or thousands of isolates, making switching between them seamless. That means it is possible to run code from many different customers within a single operating system process. This low runtime overhead is part of the story of how Workers scales to support many tenants.</p><p>The other part of the story is code distribution. The ability to serve customers from anywhere in the world is a key difference between an edge-based and a region-based serverless paradigm, but it requires us to ship customer code to every server at once. Isolates come to the rescue again: we embed V8 with the same standard JavaScript APIs you can find in browsers, meaning a serverless edge application is both lightweight and performant. This means we can distribute Worker scripts to every server in every datacenter around the world, so that any server, anywhere, can serve requests bound for any customer.</p>
    <div>
      <h3>How does this affect my bill?</h3>
      <a href="#how-does-this-affect-my-bill">
        
      </a>
    </div>
    <p>Performance at scale is top of mind for us because improving performance on our Edge means we can pass those cost savings down to you. We pay the overhead of a JavaScript runtime once, and then are able to run essentially limitless scripts with almost no individual overhead.</p><p>Workers Unbound is a truly cost-effective platform when compared to AWS Lambda. With serverless, you should only pay for what you use with no hidden fees. Workers will not charge you for hidden extras like <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api-gateway/">API gateway</a> or DNS request fees.</p><p>Serverless Pricing Comparison*</p><table><tr><td><p>
</p></td><td><p><b>Workers Unbound</b></p></td><td><p><b>AWS Lambda</b></p></td><td><p><b>AWS Lambda @ Edge</b></p></td></tr><tr><td><p>Requests (per MM requests)</p></td><td><p>$0.15</p></td><td><p>$0.20 - $0.28</p></td><td><p>$0.60</p></td></tr><tr><td><p>Duration (per MM GB-sec)</p></td><td><p>$12.50</p></td><td><p>$16.67 - $22.92</p></td><td><p>$50.01</p></td></tr><tr><td><p>Data Transfer (per egress GB)</p></td><td><p>$0.09</p></td><td><p>$0.09 - $0.16</p></td><td><p>$0.09 - $0.16</p></td></tr><tr><td><p>API Gateway (per MM requests)</p></td><td><p>$0</p></td><td><p>$3.50 - $4.68</p></td><td><p>CloudFront pricing</p></td></tr><tr><td><p>DNS Queries (per MM requests)</p></td><td><p>$0</p></td><td><p>$0.40</p></td><td><p>$0.40</p></td></tr></table><p><i>* Based on pricing disclosed on aws.amazon.com/lambda/pricing as of July 24, 2020. AWS’ published duration pricing is based on 1 GB-sec, which has been multiplied by one million on this table for readability. AWS price ranges reflect different regional pricing. All prices rounded to the nearest two decimal places. Data Transfer for AWS is based on Data Transfer OUT From Amazon EC2 to Internet above 1 GB / month, for up to 9.999 TB / month. API Gateway for AWS is based on Rest APIs above 1MM/month, for up to 333MM/month. Both the Workers Unbound and AWS Lambda services provide 1MM free requests per month and 400,000 GB-seconds of compute time per month. DNS Queries rate for AWS is based on the listed price for up to 1 Billion queries / month.</i></p>
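<p>To make the table concrete, here is a quick back-of-the-envelope monthly bill comparison using the published rates above. The workload (10 million requests a month, each running 50ms at the 128MB tier) is an illustrative assumption, and free tiers and data transfer are ignored:</p>

```javascript
// Rates from the table above (per MM requests / per MM GB-sec), July 2020.
// Workload assumptions are illustrative: 10MM requests/month, 50ms at 128MB.
const requestsMM = 10
const gbSecPerRequest = 0.125 * 0.05          // 128MB * 50ms = 0.00625 GB-sec
const durationMMGBSec = requestsMM * gbSecPerRequest

// Workers Unbound: requests + duration, no gateway or DNS fee.
const unbound = requestsMM * 0.15 + durationMMGBSec * 12.50

// AWS Lambda (low end of the range) + API Gateway to accept Internet traffic.
const lambda = requestsMM * 0.20 + durationMMGBSec * 16.67 + requestsMM * 3.50

console.log(unbound.toFixed(2)) // dollars/month on Workers Unbound
console.log(lambda.toFixed(2))  // dollars/month on Lambda + API Gateway
```

<p>Even at the low end of Lambda’s price range, the API Gateway line item dominates a request-heavy bill; that is exactly the kind of hidden fee the comparison above is meant to surface.</p>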
    <div>
      <h3>How much can I save?</h3>
      <a href="#how-much-can-i-save">
        
      </a>
    </div>
    <p>To put our numbers to the test, we deployed a <a href="https://github.com/Electroid/serverless-compare">hello world</a> GraphQL server to both Workers and Lambda. The median execution time on Lambda was 1.54ms, whereas the same workload took 0.90ms on Workers. After crunching the numbers and factoring in all the opaque fees that AWS charges (including API Gateway to allow for requests from the Internet), we found that using Workers Unbound can save you up to 75%, and that’s just for a hello world! Imagine the cost savings when you’re running complex workloads for millions of users.</p><p>You might be wondering how we’re able to be so competitive. It all comes down to efficiency. The lightweight nature of Workers allows us to do the same work, but with less platform overhead and resource consumption. The execution times from this <a href="https://github.com/Electroid/serverless-compare">GraphQL hello world test</a> are shown below and put platform providers’ overhead on display. Since the test is truly a hello world, the variation is explained by architectural differences between providers (e.g. <a href="/cloud-computing-without-containers/">isolates v. containers</a>).</p><p>GraphQL hello world Execution Time (ms) across Serverless Platforms*</p><table><tr><td><p>
</p></td><td><p><b>Cloudflare Workers</b></p></td><td><p><b>AWS Lambda</b></p></td><td><p><b>Google Cloud Functions</b></p></td><td><p><b>Azure Functions</b></p></td></tr><tr><td><p>Min</p></td><td><p>0.58</p></td><td><p>1.22</p></td><td><p>6.16</p></td><td><p>5.00</p></td></tr><tr><td><p>p50</p></td><td><p>0.90</p></td><td><p>1.54</p></td><td><p>10.41</p></td><td><p>21.00</p></td></tr><tr><td><p>p90</p></td><td><p>1.24</p></td><td><p>7.45</p></td><td><p>15.93</p></td><td><p>110.00</p></td></tr><tr><td><p>p99</p></td><td><p>3.32</p></td><td><p>57.51</p></td><td><p>20.25</p></td><td><p>207.96</p></td></tr><tr><td><p>Max</p></td><td><p>16.39</p></td><td><p>398.54</p></td><td><p>31933.18</p></td><td><p>2768.00</p></td></tr></table><p><i>* The 128MB memory tier was used for each platform. This testing was run in us-east for AWS, us-central for Google, and us-west for Azure. Each platform test was run at a throughput of 1 request per second over the course of an hour. The execution times were taken from each provider's logging system.</i></p><p>These numbers speak for themselves and highlight the efficiency of the Workers architecture. On Workers, you don’t just get faster results, you also benefit from the cost savings we pass on to you.</p>
    <div>
      <h3>When can I use it?</h3>
      <a href="#when-can-i-use-it">
        
      </a>
    </div>
    <p>Workers Unbound is a major change to our platform, so we’ll be rolling it out slowly and tweaking it over time. If you’d like to get early access or want to be notified when it’s ready, sign up for details <a href="https://www.cloudflare.com/workers-unbound-beta/">here</a>!</p><p>We’ve got some exciting announcements to share this week. Stay tuned for the rest of Serverless Week!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4F0YZ7aERnA3xpNec9Diza</guid>
            <dc:creator>Nancy Gao</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Edge Computing Opportunity: It’s Not What You Think]]></title>
            <link>https://blog.cloudflare.com/cloudflare-workers-serverless-week/</link>
            <pubDate>Sun, 26 Jul 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Workers® is one of the largest, most widely used edge computing platforms. We announced Cloudflare Workers nearly three years ago and it's been generally available for the last two years.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare Workers® is one of the largest, most widely used edge computing platforms. We <a href="/introducing-cloudflare-workers/">announced Cloudflare Workers</a> nearly three years ago and it's been generally available for the last two years. Over that time, we've seen hundreds of thousands of developers write tens of millions of lines of code that now run across Cloudflare's network.</p><p>Just last quarter, 20,000 developers deployed a new application on Cloudflare Workers for the first time. More than 10% of all requests flowing through our network today use Cloudflare Workers. And, among our largest customers, approximately 20% are adopting Cloudflare Workers as part of their deployments. It's been incredible to watch the platform grow.</p><p>Over the course of the coming week, which we’re calling Serverless Week, we're going to be announcing a series of enhancements to the Cloudflare Workers platform to allow you to build much more complicated applications, lower your serverless computing bills, make your applications even faster, and prove that the Workers platform is secure to its core.</p>
    <div>
      <h3>Matthew’s Hierarchy of Developers' Needs</h3>
      <a href="#matthews-hierarchy-of-developers-needs">
        
      </a>
    </div>
    <p>Before the week begins, I wanted to step back and talk a bit about what we've learned about edge computing over the course of the last three years. When we launched Cloudflare Workers we thought the killer feature was speed. Workers run across the Cloudflare network, closer to end users, so they inherently have faster response times than legacy, centralized serverless platforms.</p><p>However, we’ve learned by watching developers use Cloudflare Workers that there are a number of attributes of a development platform that are far more important than just speed. Speed is the icing on the cake, but it’s not, for most applications, an initial requirement. Focusing only on it is a mistake that will doom edge computing platforms to obscurity.</p><p>Today, almost everyone who talks about the benefits of edge computing still focuses on speed. So did Akamai, which <a href="https://www.akamai.com/fr/fr/multimedia/documents/technical-publication/edgecomputing-extending-enterprise-applications-to-the-edge-of-the-internet-technical-publication.pdf">launched their Java- and .NET-based EdgeComputing platform in 2002</a>, only to shut it down in 2009 after failing to find enough customers for whom a bit less network latency alone justified the additional cost and complexity of running code at the edge. That’s a cautionary tale much of the industry has forgotten.</p><p>Today, I’m convinced that we were wrong when we launched Cloudflare Workers to think of speed as the killer feature of edge computing, and much of the rest of the industry’s focus remains largely misplaced and risks missing a much larger opportunity.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3b1XdFo7wQqjOPIBwRDFKX/b3e1738c381f7b7a6d0fcb893789483e/Developer-Hierarchy-of-Needs_2x-1.png" />
            
            </figure><p>I'd propose instead that what developers on any platform need, from least to most important, is actually: Speed &lt; Consistency &lt; Cost &lt; Ease of Use &lt; Compliance. Call it: Matthew’s Hierarchy of Developers’ Needs. While nearly everyone talking about edge computing has focused on speed, I'd argue that consistency, cost, ease of use, and especially compliance will ultimately be far more important. In fact, I predict the real killer feature of edge computing over the next three years will have to do with the relatively unsexy but foundationally important: regulatory compliance.</p>
    <div>
      <h3>Speed As the Killer Feature?</h3>
      <a href="#speed-as-the-killer-feature">
        
      </a>
    </div>
    <p>Don't get me wrong, speed is great. Making an application fast is the self-actualization of a developer’s experience. And we built Workers to be extremely fast. By moving computing workloads closer to where an application's users are we can, effectively, overcome the limitations imposed by the speed of light. <a href="https://www.cloudflare.com/network/">Cloudflare's network spans more than 200 cities</a> in more than 100 countries globally. We continue to build that network out to be a few milliseconds from every human on earth.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fVtJKlbVAesHFN5j4ZPDC/250124ab2c363bd7c874ba9904cb2bea/pasted-image-0-1.png" />
            
            </figure><p>Since we're unlikely to make the speed of light any faster, the ability for any developer to write code and have it run across our entire network means we will always have a performance advantage over legacy, centralized computing solutions — even those that run in the "cloud." If you have to pick an "availability zone" for where to run your application, you're always going to be at a performance disadvantage to an application built on a platform like Workers that runs everywhere Cloudflare’s network extends.</p><p>We believe Cloudflare Workers is already the fastest serverless platform and we’ll continue to build out our network to ensure it remains so.</p>
    <div>
      <h3>Speed Alone Is Niche</h3>
      <a href="#speed-alone-is-niche">
        
      </a>
    </div>
    <p>But let's be real for a second. Only a limited set of applications are sensitive to network latency of a few hundred milliseconds. That's not to say, under the model of a modern major serverless platform, network latency doesn't matter; it's just that the applications that require that extra performance are niche.</p><p>Applications like credit card processing, ad delivery, gaming, and human-computer interactions can be very latency sensitive. Amazon's Alexa and Google Home, for instance, are better than many of their competitors in part because they can take advantage of their corporate parents' edge networks to handle voice processing, and therefore have lower latency and feel more responsive.</p><p>But after applications like that, it gets pretty "hand wavy." People who talk a lot about edge computing quickly start talking about IoT and driverless cars. Embarrassingly, when we first launched the Workers platform, I caught myself doing that all the time. Pro tip: when you’re talking to an edge computing evangelist, you can win Buzzword BINGO every time so long as you ensure you have "IoT" and "driverless cars" on your BINGO card.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5obQ40qUM0mdmvUCNogEJM/228e5db7019f7b3a542973f02b5bc60f/speed_2x-1.png" />
            
            </figure><p>Donald Knuth, the famed Stanford computer science professor (along with Tony Hoare, Edsger Dijkstra, and many others), said something to the effect of "premature optimization is the root of all evil in programming." It shouldn't be surprising, then, that speed alone isn't a compelling enough reason for most developers to choose to use an edge computing platform. Doing so for most applications is premature optimization, a.k.a. the “root of all evil.” So what’s more important than speed?</p>
    <div>
      <h3>Consistency</h3>
      <a href="#consistency">
        
      </a>
    </div>
    <p>While minimizing network latency is not enough to get most developers to move to a new platform, there is one source of latency that is endemic to nearly all serverless platforms: cold start time. A cold start is how long it takes to run an application the first time it executes on a particular server. Cold starts hurt because they make an application unpredictable and inconsistent. Sometimes a serverless application can be fast, if it's hitting a server where the code is hot, but other times it's slow, when a container on a new server needs to be spun up and code loaded from disk into memory. Unpredictability really hurts user experience; it turns out humans love consistency more than they love speed.</p><p>The problem of cold starts is not unique to edge computing platforms. Inconsistency from cold starts is the bane of all serverless platforms. They are the tax you pay for not having to maintain and deploy your own instances. But edge computing platforms can actually make the cold start problem worse because they spread the computing workload across more servers in more locations. As a result, it's less likely that code will be "warm" on any particular server when a request arrives.</p><p>In other words, the more distributed a platform is, the more likely it is to have a cold start problem. To work around that on most serverless platforms, developers resort to horrible hacks, like sending idle requests to their own application from around the world so that their code stays hot. Adding insult to injury, the legacy cloud providers charge for those throw-away requests, or charge even more for their own hacky pre-warming/“reserved” solutions. It’s absurd!</p>
    <div>
      <h3>Zero Nanosecond Cold Starts</h3>
      <a href="#zero-nanosecond-cold-starts">
        
      </a>
    </div>
    <p>We knew cold starts were important, so, from the beginning, we worked to ensure that cold starts with Workers were under 5 milliseconds. That <a href="/serverless-performance-comparison-workers-lambda/">compares extremely favorably</a> to other serverless platforms like AWS Lambda where cold starts can take as long as 5 seconds (1,000x slower than Workers).</p><p>But we wanted to do better. So, this week, we'll be announcing that Workers now supports zero <i>nanosecond</i> cold starts. Since, unless someone invents a time machine, it's impossible to take less time than that, we're confident that Workers now has the fastest cold starts of any serverless platform. This makes Cloudflare Workers the consistency king beating even the legacy, centralized serverless platforms.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bUd8lWJVZjfiZOYaj9Wv/561d71f3b071e7d6cbdef9d6e677fa04/consistency-_2x-1.png" />
            
            </figure><p>But, again, in Matthew’s Hierarchy of Developers' Needs, while consistency is more important than speed, there are other factors that are even more important than consistency when choosing a computing platform.</p>
    <div>
      <h3>Cost</h3>
      <a href="#cost">
        
      </a>
    </div>
    <p>If you have to choose between a platform that is fast or one that is cheap, all else being equal, most developers will choose cheap. Developers are only willing to start paying extra for speed when a degraded user experience starts costing them more than the speed upgrade would. Until then, cheap beats fast.</p><p>For the most part, edge computing platforms charge a premium for being faster. For instance, a request processed via AWS's Lambda@Edge costs approximately <a href="https://medium.com/@zackbloom/serverless-pricing-and-costs-aws-lambda-and-lambda-edge-169bfb58db75">three times more than a request processed via AWS Lambda</a>, and basic Lambda is already outrageously expensive. That may seem to make sense in some ways — we all assume we need to pay more to be faster — but it’s a pricing rationale that will always make edge computing a niche product, servicing only those limited applications extremely sensitive to network latency.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2MiLUX8P0COo3EpAVUOijZ/fb6023e472b58c5b69a6dd3c5feb895c/cost_2x-1.png" />
            
            </figure><p>But edge computing doesn't necessarily need to be more expensive. In fact, it can be cheaper. To understand, look at the cost of delivering services from the edge. If you're well-peered with local ISPs, like Cloudflare's network is, it can be less expensive to deliver bandwidth locally than it is to backhaul it around the world. There can be additional savings on the cost of power and colocation when running at the edge. Those are savings that we can use to help keep the price of the Cloudflare Workers platform low.</p>
    <div>
      <h3>More Efficient Architecture Means Lower Costs</h3>
      <a href="#more-efficient-architecture-means-lower-costs">
        
      </a>
    </div>
    <p>But the real cost win comes from a more efficient architecture. Back in the early '90s, when I was a network administrator at my college, adding a new application meant ordering a new server. (We bought servers from Gateway; I thought their cardboard shipping boxes with the cow print were fun.) Then virtual machines (VMs) came along, and you could run multiple applications on the same server. Effectively, the overhead per application went down because you needed fewer physical servers per application.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6G1F70GpwLdYuBRr8CIyKS/56b0daadfe97ff71669cf9712ecbae3e/pasted-image-0--1--1.png" />
            
            </figure><p>VMs gave rise to the first public clouds. Quickly, however, cloud providers looked for ways to reduce their overhead further. Containers provided a lighter weight option to run multiple customers’ workloads on the same machine, with dotCloud, which <a href="https://www.infoq.com/news/2013/10/dotcloud-renamed-docker/">went on to become Docker</a>, leading the way and nearly everyone else eventually following. Again, the win with containers over VMs was reducing the overhead per application.</p><p>At Cloudflare, we knew history doesn’t stop, so as we started building Workers we asked ourselves: what comes after containers? <a href="/cloud-computing-without-containers/">The answer was isolates</a>. Isolates are the sandboxing technology that your browser uses to keep processes separate. They are extremely fast and lightweight. It’s why, when you visit a website, your browser can take code it’s never seen before and execute it almost instantly.</p><p>By using isolates, rather than containers or virtual machines, we're able to keep computation overhead much lower than traditional serverless platforms. That allows us to handle compute workloads much more efficiently. We, in turn, can pass the savings from that efficiency on to our customers. Our aim is not to be less expensive than Lambda@Edge; it’s to be less expensive than Lambda. Much less expensive.</p>
    <div>
      <h3>From Limits to Limitless</h3>
      <a href="#from-limits-to-limitless">
        
      </a>
    </div>
    <p>Originally, we wanted Workers’ pricing to be very simple and cost effective. Instead of charging for requests, CPU time, <i>and</i> bandwidth, like other serverless providers, we just charged per request. Simple. The tradeoff was that we were forced to impose maximum CPU, memory, and application size restrictions. What we’ve seen over the last three years is that developers want to build more complicated, sophisticated applications using Workers — some of which pushed the boundaries of these limits. So this week we’re taking the limits off.</p><p>Tomorrow we’ll announce a new Workers option that allows you to run much more complicated compute workloads, following the same pricing model that other serverless providers use, but at much more compelling rates. We’ll continue to support our simplified option for users who can live within the previous limits. I’m especially excited to see how developers will be able to harness our technology to build new applications, all at a lower cost and better performance than other legacy, centralized serverless platforms.</p><p>Faster, more consistent, and cheaper are great, but even together those alone aren't enough to win over most developers’ workloads. So what’s more important than cost?</p>
    <div>
      <h3>Ease of Use</h3>
      <a href="#ease-of-use">
        
      </a>
    </div>
    <p>Developers are lazy. I know firsthand because when I need to write a program I still reach for a trusty language I know like Perl (don't judge me) even if it's slower and more costly. I am not alone.</p><p>That's why with Cloudflare Workers we knew we needed to meet developers where they were already comfortable. That starts with supporting the languages that developers know and love. We've previously announced support for JavaScript, C, C++, Rust, Go, and <a href="/cloudflare-workers-now-support-cobol/">even COBOL</a>. This week we'll be announcing support for Python, Scala, and Kotlin. We want to make sure you don't have to learn a new language and a new platform to get the benefits of Cloudflare Workers. (I’m still pushing for Perl support.)</p><p>Ease also means spending less time on things like technical operations. That's where serverless platforms have excelled. Being able to simply deploy code and allow the platform to scale up and down with load is magical. We’ve seen this with long-time users of Cloudflare Workers like <a href="https://www.cloudflare.com/case-studies/discord/">Discord</a>, which has experienced several thousand percent usage growth over the last three years, with the Workers platform automatically scaling to meet its needs.</p>
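<p>To give a taste of that simplicity, here is roughly what a complete Worker looked like at the time, using the Service Worker-style <code>fetch</code> event API the platform is built on. This is a minimal sketch, not a production app; the registration is guarded so the snippet also loads outside the Workers runtime, where no <code>addEventListener</code> global may exist.</p>

```javascript
// A complete, deployable Cloudflare Worker (2020-era Service Worker syntax):
// intercept every request and answer it directly from the edge.
async function handleRequest(request) {
  const url = new URL(request.url);
  return new Response(`Hello from the edge! You asked for ${url.pathname}\n`, {
    status: 200,
    headers: { 'content-type': 'text/plain' },
  });
}

// In the Workers runtime, the global scope receives fetch events.
// (Guarded so this file can also be loaded outside that runtime.)
if (typeof addEventListener === 'function') {
  addEventListener('fetch', (event) => {
    event.respondWith(handleRequest(event.request));
  });
}
```

<p>That is the whole program: no servers to provision, no process to manage, just a handler that turns a request into a response.</p>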
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dWO9UYrGYny5thzMuV8IM/0648a0518a458ae04d918f0d7bf96f45/ease-of-use_2x-1.png" />
            
            </figure><p>One challenge of serverless platforms, however, is debugging. Since it can be difficult for a developer to replicate the entire serverless platform locally, debugging your applications can be harder. This is compounded when deploying code to a platform <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/slashing-cloudfront-change-propagation-times-in-2020-recent-changes-and-looking-forward/">takes as long as 5 minutes, as it can with AWS's Lambda@Edge</a>. If you’re a developer, you know how painful waiting for your code to be deployed and testable can be. That's why it was critical to us that code changes be deployed globally to our entire network across more than 200 cities in less than 15 seconds.</p>
    <div>
      <h3>The Bezos Rule</h3>
      <a href="#the-bezos-rule">
        
      </a>
    </div>
    <p>One of the most important decisions we made internally was to implement what we call the Bezos Rule. It requires two things: 1) that new features Cloudflare engineers build for ourselves must be built using Workers if at all possible; and 2) that any APIs or tools we build for ourselves must be made available to third party Workers developers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1LFhBbmvchcWN0IpK4rO6A/877f3cf996a06c1d0f158a9e12910db8/Damnit-Bezos_2x-2.png" />
            
            </figure><p>Building a robust testing and debugging framework requires input from developers. Over the last three years, Cloudflare Workers' development toolkit has matured significantly based on feedback from the hundreds of thousands of developers using our platform, including our own team, who have used Workers to quickly build innovative new features like <a href="https://teams.cloudflare.com/access">Cloudflare Access</a> and <a href="https://teams.cloudflare.com/gateway/">Gateway</a>. History has shown that the first, best customer of any platform needs to be the development team at the company building the platform.</p><p><a href="https://developers.cloudflare.com/workers/tooling/wrangler">Wrangler</a>, the command-line tool to provision, deploy, and debug your Cloudflare Workers, has developed into a robust developer experience based on extensive feedback from our own team. Cloudflare Workers is already the fastest, most consistent, and most affordable serverless platform; given the momentum behind it, I'm excited that it is quickly becoming the easiest to use as well.</p><p>Generally, whatever platform is the easiest to use wins. But there is one thing that trumps even ease of use, and that, I predict, will prove to be edge computing’s actual killer feature.</p>
    <div>
      <h3>Compliance</h3>
      <a href="#compliance">
        
      </a>
    </div>
    <p>If you’re an individual developer, you may not think a lot about regulatory compliance. However, if you work as a developer at a big bank, or insurance company, or health care company, or any other company that touches sensitive data at meaningful scale, then you think about compliance a lot. You may want to use a particular platform because it’s fast, consistent, cheap, and easy to use, but if your CIO, CTO, CISO, or General Counsel says “no” then it’s back to the drawing board.</p><p>Most computing resources that run on cloud computing platforms, including serverless platforms, are created by developers who work at companies where compliance is a foundational requirement. And, until now, that’s meant ensuring that platforms follow government regulations like GDPR (the EU’s privacy regulation) or have <a href="https://www.cloudflare.com/compliance/">certifications</a> proving that they follow industry regulations such as PCI DSS (required if you accept credit cards), <a href="https://www.cloudflare.com/learning/privacy/what-is-fedramp/">FedRAMP</a> (US government procurement requirements), ISO 27001 (security risk management), SOC 1/2/3 (security, confidentiality, and availability controls), and many more.</p>
    <div>
      <h3>The Coming Era of Data Sovereignty</h3>
      <a href="#the-coming-era-of-data-sovereignty">
        
      </a>
    </div>
    <p>But there’s a looming new class of regulatory requirements that legacy cloud computing solutions are ill-equipped to satisfy. Increasingly, countries are pursuing regulations that ensure that their laws apply to their citizens’ personal data. One way to ensure you’re in compliance with these laws is to store and process a country’s citizens’ data entirely within that country’s borders.</p><p>The <a href="https://medium.com/center-for-media-data-and-society/data-governance-in-the-eu-data-sovereignty-cloud-federation-f26d44032d63">EU</a>, <a href="https://techcrunch.com/2019/12/10/india-personal-data-protection-bill-2019/">India</a>, and <a href="https://digitalguardian.com/blog/breaking-down-lgpd-brazils-new-data-protection-law">Brazil</a> are all major markets that have adopted or are currently considering regulations asserting legal sovereignty over their citizens’ personal data. <a href="https://jsis.washington.edu/news/chinese-data-localization-law-comprehensive-ambiguous/">China</a> has already imposed data localization regulations on many types of data. Whether or not you think regulations that appear to require local data storage and processing are a good idea — and I personally think they are bad policies that will stifle innovation — my sense is the momentum behind them is significant enough that they are, at this point, likely inevitable. And, once a few countries begin requiring data sovereignty, it will be hard to stop nearly every country from following suit.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2L6N6wLmTtmDn1PyzkSqc8/779d4608ecbf0e38d294114bc7d95a51/Compliance-stamps_2x-1.png" />
            
            </figure><p>The risk is that such regulations could erase much of the efficiency gains serverless computing has achieved. If whole teams are required to coordinate between different cloud platforms in different jurisdictions to ensure compliance, it will be a nightmare.</p>
    <div>
      <h3>Edge Computing to the Rescue</h3>
      <a href="#edge-computing-to-the-rescue">
        
      </a>
    </div>
    <p>Herein lies the killer feature of edge computing. As governments impose new data sovereignty regulations, having a single platform whose network spans every regulated geography will be critical for companies seeking to keep and process data locally to comply with these new laws while remaining efficient.</p><p>While the regulations are just beginning to emerge, Cloudflare Workers can already run locally in more than 100 countries worldwide. That positions us to help developers meet data sovereignty requirements as they see fit. And we’ll continue to build tools that give developers options for satisfying their compliance obligations, without having to sacrifice the efficiencies the cloud has enabled.</p>
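<p>As a hypothetical sketch of what that control might look like inside a Worker (the helper, region names, and lookup table here are illustrative, not a shipped Cloudflare API): the Workers runtime exposes the visitor’s country code on <code>request.cf.country</code>, which a Worker could use to pin personal data to an in-jurisdiction storage endpoint.</p>

```javascript
// Hypothetical sketch: map a visitor's ISO country code (which Workers
// exposes as request.cf.country) to the storage jurisdiction where their
// personal data must stay. The table below is illustrative, not exhaustive.
const DATA_REGIONS = {
  EU: new Set(['DE', 'FR', 'IT', 'ES', 'NL', 'IE', 'PL', 'SE']),
  IN: new Set(['IN']),
  BR: new Set(['BR']),
};

function storageRegionFor(countryCode) {
  for (const [region, countries] of Object.entries(DATA_REGIONS)) {
    if (countries.has(countryCode)) return region; // keep data in-jurisdiction
  }
  return 'GLOBAL'; // no localization requirement applies
}

// Inside a Worker's fetch handler, this could choose which storage
// endpoint the edge talks to, e.g. (hypothetical hostnames):
//   const region = storageRegionFor(request.cf.country);
//   const upstream = `https://${region.toLowerCase()}.storage.example.com`;
```

<p>Because the same code runs at every edge location, a single deployment enforces the routing policy everywhere, rather than each regional team maintaining its own stack.</p>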
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/CFx61V129m3gLfCJF91ho/1a3ba84d45d27fb5f52012d40d5f445e/Edge-computing-to-the-rescue-_2x.png" />
            
            </figure><p>The ultimate promise of serverless has been to allow any developer to say “I don’t care where my code runs, just make it scale.” Increasingly, another promise will need to be “I do care where my code runs, and I need more control to satisfy my compliance department.” Cloudflare Workers offers you the best of both worlds, with instant scaling, locations that span more than 100 countries around the world, and the granularity to choose exactly what you need.</p>
    <div>
      <h3>Serverless Week</h3>
      <a href="#serverless-week">
        
      </a>
    </div>
    <p>The best part? We’re just getting started. Over the coming week, we’ll discuss our vision for serverless and show you how we’re building Cloudflare Workers into the fastest, most cost effective, secure, flexible, robust, easy to use serverless platform. We’ll also highlight use cases from customers who are using Cloudflare Workers to build and scale applications in a way that was previously impossible. And we’ll outline enhancements we’ve made to the platform to make it even better for developers going forward.</p><p>We’ve truly come a long way over the last three years of building out this platform, and I can’t wait to see all the new applications developers build with Cloudflare Workers. You can get started for free right now by visiting: <a href="https://workers.cloudflare.com/">workers.cloudflare.com</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4yERvsPf3T0FBMAX9LOTxL/bc49b236c47408e09c02fc337659b682/Serverless-week-Day-0--copy-2_2x-1.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Serverless Week]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Edge]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">2ndj9QXsjjrMjfHEB0Z0Us</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
    </channel>
</rss>