
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 17:13:25 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Improving Data Loss Prevention accuracy with AI-powered context analysis]]></title>
            <link>https://blog.cloudflare.com/improving-data-loss-prevention-accuracy-with-ai-context-analysis/</link>
            <pubDate>Fri, 21 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Data Loss Prevention is reducing false positives by using a self-improving AI-powered algorithm, built on Cloudflare’s Developer Platform. ]]></description>
            <content:encoded><![CDATA[ <p>We are excited to announce our latest innovation to Cloudflare’s <a href="https://www.cloudflare.com/zero-trust/products/dlp/"><u>Data Loss Prevention</u></a> (DLP) solution: a self-improving AI-powered algorithm that adapts to your organization’s unique traffic patterns to reduce false positives. </p><p>Many customers are plagued by the shapeshifting task of identifying and protecting their sensitive data as it moves within and even outside of their organization. Detecting this data through deterministic means, such as regular expressions, often fails because those patterns cannot reliably identify details that qualify as personally identifiable information (PII) or intellectual property (IP). This can generate a high rate of false positives, contributing to noisy alerts that may lead to review fatigue. Even more critically, this less-than-ideal experience can turn users away from relying on our DLP product and reduce their overall security posture. </p><p>Built into Cloudflare’s DLP Engine, AI enables us to intelligently assess the contents of a document or HTTP request in parallel with a customer’s historical reports to determine context similarity and draw conclusions on data sensitivity with increased accuracy.</p><p>In this blog post, we’ll explore <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/dlp-profiles/advanced-settings/"><u>DLP AI Context Analysis</u></a>, its implementation using <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/vectorize/"><u>Vectorize</u></a>, and future improvements we’re developing. </p>
    <div>
      <h3>Understanding false positives and their impact on user confidence</h3>
      <a href="#understanding-false-positives-and-their-impact-on-user-confidence">
        
      </a>
    </div>
    <p>Data Loss Prevention (DLP) at Cloudflare detects sensitive information by scanning potential sources of data leakage across various channels such as web, cloud, email, and SaaS applications. While we leverage several detection methods, pattern-based methods like regular expressions play a key role in our approach. This method is effective for many types of sensitive data. However, certain information can be challenging to classify solely through patterns. For instance, U.S. Social Security Numbers (SSNs), structured as <a href="https://en.wikipedia.org/wiki/Social_Security_number#Structure"><u>AAA-GG-SSSS</u></a>, sometimes with dashes omitted, are often confused with other similarly formatted data, such as U.S. taxpayer identification numbers, bank account numbers, or phone numbers. </p><p>Since <a href="https://blog.cloudflare.com/inline-data-loss-prevention/"><u>announcing</u></a> our DLP product, we have introduced new capabilities like <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/dlp-profiles/advanced-settings/#confidence-levels"><u>confidence thresholds</u></a> to reduce the number of false positives users receive. This method involves examining the surrounding context of a pattern match to assess Cloudflare’s confidence in its accuracy. With confidence thresholds, users specify a threshold (low, medium, or high) to signify a preference for how tolerant detections are to false positives. DLP uses the chosen threshold as a minimum, surfacing only those detections with a confidence score that meets or exceeds the specified threshold.  </p>
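<p>As a rough illustration of how a minimum confidence threshold gates detections (the score values and function names below are hypothetical, not DLP’s actual API):</p>

```javascript
// Illustrative sketch of threshold-based filtering. Each detection carries
// a confidence score in [0, 1]; the user-chosen threshold acts as a minimum.
// The numeric cutoffs here are assumptions for the example only.
const THRESHOLDS = { low: 0.25, medium: 0.5, high: 0.75 };

function surfaceDetections(detections, threshold) {
  const minScore = THRESHOLDS[threshold];
  // Surface only detections whose confidence meets or exceeds the minimum.
  return detections.filter((d) => d.confidence >= minScore);
}

const hits = [
  { match: "123-45-6789", confidence: 0.9 },
  { match: "555-12-0000", confidence: 0.4 },
];
console.log(surfaceDetections(hits, "high").length); // 1
console.log(surfaceDetections(hits, "low").length);  // 2
```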
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1EOKyJisPTPWcSOep9Se7F/22c1bf40cbd0d698b0e24095826548cd/1.png" />
          </figure><p>However, implementing context analysis is also not a trivial task. A straightforward approach might involve looking for specific keywords near the matched pattern, such as "SSN" near a potential SSN match, but this method has its limitations. Keyword lists are often incomplete, users may make typographical errors, and many true positives do not have any identifying keywords nearby (e.g., bank accounts near routing numbers or SSNs near names).</p>
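<p>The naive keyword approach described above can be sketched as follows (a simplified illustration; the window size and keyword list are arbitrary assumptions). Note how a true positive with no identifying keyword nearby slips through:</p>

```javascript
// Naive keyword-proximity check: look for a keyword within a fixed window
// of characters around the pattern match. Illustrative only.
function hasNearbyKeyword(text, matchIndex, matchLength, keywords, window = 40) {
  const start = Math.max(0, matchIndex - window);
  const end = Math.min(text.length, matchIndex + matchLength + window);
  const context = text.slice(start, end).toLowerCase();
  return keywords.some((k) => context.includes(k.toLowerCase()));
}

// A match with an identifying keyword nearby is caught...
console.log(hasNearbyKeyword("Employee SSN: 123-45-6789", 14, 11, ["ssn"])); // true
// ...but a true positive with no keyword nearby is missed.
console.log(hasNearbyKeyword("Jane Doe, 123-45-6789", 10, 11, ["ssn"])); // false
```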
    <div>
      <h3>Leveraging AI/ML for enhanced detection accuracy</h3>
      <a href="#leveraging-ai-ml-for-enhanced-detection-accuracy">
        
      </a>
    </div>
    <p>To address the limitations of a hardcoded strategy for context analysis, we have developed a dynamic, self-improving algorithm that learns from customer feedback to further improve their future experience. Each time a customer reports a false positive via <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/dlp-policies/logging-options/#4-view-payload-logs"><u>decrypted payload logs</u></a>, the system reduces its future confidence for hits in similar contexts. Conversely, reports of true positives increase the system's confidence for hits in similar contexts. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4h84zJ0SNtfhTVGzwxVyk0/bbdcce73d4538619abb296617d793bff/2.png" />
          </figure><p>To determine context similarity, we leverage Workers AI. Specifically, we use <a href="https://developers.cloudflare.com/workers-ai/models/bge-base-en-v1.5/"><u>a pretrained language model</u></a> that converts text into a high-dimensional vector (i.e. a text embedding). These embeddings capture the meaning of the text, ensuring that two sentences with the same meaning but different wording map to vectors that are close to each other. </p><p>When a pattern match is detected, the system uses the AI model to compute the embedding of the surrounding context. It then performs a nearest neighbor search to find previously logged false or true positives with similar meanings. This allows the system to identify similar contexts even when the exact wording differs but the meaning remains the same. </p>
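<p>“Close to each other” is typically measured with a metric such as cosine similarity. A minimal sketch (the toy vectors stand in for real embeddings; in practice Vectorize performs the nearest neighbor search over many stored vectors):</p>

```javascript
// Cosine similarity between two embedding vectors: the dot product
// normalized by both magnitudes. Values near 1 mean similar direction
// (similar meaning); values near 0 mean unrelated.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy vectors standing in for embeddings of similar vs. unrelated contexts.
const similar = cosineSimilarity([1, 0, 1], [1, 0.1, 0.9]);
const unrelated = cosineSimilarity([1, 0, 1], [0, 1, 0]);
console.log(similar > unrelated); // true
```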
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/z8yLmrAXES70MzTn2GdQE/0845b35884535843fa01e4f1a92a3f41/3.png" />
          </figure><p>In our experiments using Cloudflare employee traffic, this approach has proven robust, effectively handling new pattern matches it hadn't encountered before. When the DLP admin reports false and true positives through the Cloudflare dashboard while viewing the payload log of a <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/dlp-policies/"><u>policy</u></a> match, it helps DLP continue to improve, leading to a significant reduction in false positives over time. </p>
    <div>
      <h3>Seamless integration with Workers AI and Vectorize</h3>
      <a href="#seamless-integration-with-workers-ai-and-vectorize">
        
      </a>
    </div>
    <p>In developing this new feature, we used components from Cloudflare's developer platform — <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> and <a href="https://developers.cloudflare.com/vectorize/"><u>Vectorize</u></a> — which helps simplify our design. Instead of managing the underlying infrastructure ourselves, we leveraged <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a> as the foundation, using Workers AI for text embedding, and Vectorize as the vector database. This setup allows us to focus on the algorithm itself without the overhead of provisioning underlying resources.  </p><p>Thanks to Workers AI, converting text into embeddings couldn’t be easier. With just a single line of code we can transform any text into its corresponding vector representation.</p>
            <pre><code>const result = (await env.AI.run(model, {text: [text]})).data;</code></pre>
            <p>This handles everything from tokenization to GPU-powered inference, making the process both simple and scalable.</p><p>The nearest neighbor search is equally straightforward. After obtaining the vector from Workers AI, we use Vectorize to quickly find similar contexts from past reports. At the same time, we store the vector for the current pattern match in Vectorize, allowing us to learn from future feedback. </p><p>To optimize resource usage, we’ve incorporated a few more clever techniques. For example, instead of storing every vector from pattern hits, we use online clustering to group vectors into clusters and store only the cluster centroids along with counters for tracking hits and reports. This reduces storage needs and speeds up searches. Additionally, we’ve integrated <a href="https://www.cloudflare.com/developer-platform/products/cloudflare-queues/"><u>Cloudflare Queues</u></a> to separate the indexing process from the DLP scanning hot path, ensuring a robust and responsive system.</p>
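<p>The online clustering idea can be sketched as follows (an illustrative toy version, not the production algorithm): each incoming vector either joins its nearest centroid, which is updated as a running mean, or starts a new cluster.</p>

```javascript
// Toy online clustering: store only centroids plus a hit counter, rather
// than every vector. The distance threshold is an arbitrary assumption.
function makeClusterer(maxDistance) {
  const clusters = []; // each entry: { centroid, count }
  function distance(a, b) {
    let sum = 0;
    for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
    return Math.sqrt(sum);
  }
  return {
    clusters,
    add(vector) {
      let best = null, bestDist = Infinity;
      for (const c of clusters) {
        const d = distance(vector, c.centroid);
        if (d < bestDist) { bestDist = d; best = c; }
      }
      if (best && bestDist <= maxDistance) {
        best.count += 1;
        // Running-mean centroid update: nudge toward the new vector.
        best.centroid = best.centroid.map(
          (x, i) => x + (vector[i] - x) / best.count
        );
        return best;
      }
      const created = { centroid: vector.slice(), count: 1 };
      clusters.push(created);
      return created;
    },
  };
}

const clusterer = makeClusterer(0.5);
clusterer.add([0, 0]);
clusterer.add([0.1, 0.1]); // close enough: joins the first cluster
clusterer.add([5, 5]);     // too far: starts a new cluster
console.log(clusterer.clusters.length); // 2
```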
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6e6krasQ5t5ekp1TK0kJ0A/414f74fd48ef10a16e369775ead189b7/4.png" />
          </figure><p>Privacy is a top priority. We redact any matched text before conversion to embeddings, and all vectors and reports are stored in customer-specific private namespaces across <a href="https://www.cloudflare.com/developer-platform/products/vectorize/"><u>Vectorize</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/d1/"><u>D1</u></a>, and <a href="https://www.cloudflare.com/developer-platform/products/workers-kv/"><u>Workers KV</u></a>. This means each customer’s learning process is independent and secure. In addition, we implement data retention policies so that vectors that have not been accessed or referenced within 60 days are automatically removed from our system.  </p>
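<p>As a simplified sketch of redacting the matched text before computing an embedding (the placeholder format and function name are assumptions for illustration, not the actual implementation):</p>

```javascript
// Replace the sensitive match with a fixed placeholder so that only the
// surrounding context, never the matched value itself, is embedded.
function redactMatch(text, matchStart, matchEnd) {
  return text.slice(0, matchStart) + "[REDACTED]" + text.slice(matchEnd);
}

console.log(redactMatch("SSN: 123-45-6789 on file", 5, 16));
// "SSN: [REDACTED] on file"
```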
    <div>
      <h3>Limitations and continuous improvements</h3>
      <a href="#limitations-and-continuous-improvements">
        
      </a>
    </div>
    <p>AI-driven context analysis significantly improves the accuracy of our detections. However, this comes at the cost of some increased latency for the end user. For requests that do not match any enabled DLP entries, there will be no latency increase. However, requests that match an enabled entry in a profile with AI context analysis enabled will typically experience an increase in latency of about 400ms. In rare extreme cases, for example requests that match multiple entries, that latency increase could be as high as 1.5 seconds. We are actively working to drive the latency down, ideally to a typical increase of 250ms or better.</p><p>Another limitation is that the current implementation supports English exclusively, a consequence of the language model we chose. However, Workers AI is developing a multilingual model which will enable DLP to increase support across different regions and languages.</p><p>Looking ahead, we also aim to enhance the transparency of AI context analysis. Currently, users have no visibility into how decisions are made based on their past false and true positive reports. We plan to develop tools and interfaces that provide more insight into how confidence scores are calculated, making the system more explainable and user-friendly.</p><p>With this launch, AI context analysis is only available for Gateway HTTP traffic. By the end of 2025, AI context analysis will be available in both <a href="https://www.cloudflare.com/zero-trust/products/casb/"><u>CASB</u></a> and <a href="https://www.cloudflare.com/zero-trust/products/email-security/"><u>Email Security</u></a> so that customers receive the same AI enhancements across their entire data landscape.</p>
    <div>
      <h3>Unlock the benefits: start using AI-powered detection features today</h3>
      <a href="#unlock-the-benefits-start-using-ai-powered-detection-features-today">
        
      </a>
    </div>
    <p>DLP’s AI context analysis is in closed beta. Sign up <a href="https://www.cloudflare.com/lp/dlp-ai-context-analysis/"><u>here</u></a> for early access to experience immediate improvements to your DLP HTTP traffic matches. More updates are coming soon as we approach general availability!</p><p>To get access to DLP via Cloudflare One, contact your account manager.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <category><![CDATA[DLP]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Data Protection]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">qBn1L12sUXNIbkTPY5HyK</guid>
            <dc:creator>Warnessa Weaver</dc:creator>
            <dc:creator>Tom Shen</dc:creator>
            <dc:creator>Joshua Johnson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving Worker Tail scalability]]></title>
            <link>https://blog.cloudflare.com/improving-worker-tail-scalability/</link>
            <pubDate>Fri, 01 Sep 2023 13:00:46 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce improvements to Workers Tail that mean it can now be enabled for Workers at any size and scale ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56gEEDPmOntJoXI16u8qU5/0ed51f115308dc4941b6553fbdaed0f0/image2-15.png" />
            
            </figure><p>Being able to get real-time information from applications in production is extremely important. Many times software passes local testing and automation, but then users report that something isn’t working correctly. Being able to quickly see what is happening, and how often, is critical to debugging.</p><p>This is why we originally developed the Workers Tail feature - to give developers the ability to view requests, exceptions, and information for their Workers and to provide a window into what’s happening in real time. When we developed it, we also took the opportunity to build it on top of our own Workers technology using products like Trace Workers and Durable Objects. Over the last couple of years, we’ve continued to iterate on this feature - allowing users to quickly access logs <a href="/introducing-workers-dashboard-logs/">from the Dashboard</a> and via <a href="/10-things-i-love-about-wrangler/">Wrangler CLI</a>.</p><p>Today, we’re excited to announce that tail can now be enabled for Workers at any size and scale! In addition to telling you about the new and improved scalability, we wanted to share how we built it, and the changes we made to enable it to scale better.</p>
    <div>
      <h3>Why Tail was limited</h3>
      <a href="#why-tail-was-limited">
        
      </a>
    </div>
    <p>Tail leverages <a href="https://developers.cloudflare.com/workers/runtime-apis/durable-objects/#durable-objects">Durable Objects</a> to handle coordination between the Worker producing messages and consumers like <code>wrangler</code> and the Cloudflare dashboard, and Durable Objects are a great choice for handling real-time communication like this. However, when a single Durable Object instance starts to receive a very high volume of traffic - like the kind that can come with tailing live Workers - it can see some performance issues.</p><p>As a result, Workers with a high volume of traffic could not be supported by the original Tail infrastructure. Tail had to be limited to Workers receiving 100 requests/second (RPS) or less. This was a significant limitation that resulted in many users with large, high-traffic Workers having to turn to their own tooling to get proper observability in production.</p><p>Believing that every feature we provide should scale with users during their development journey, we set out to improve Tail's performance at high loads.</p>
    <div>
      <h3>Updating the way filters work</h3>
      <a href="#updating-the-way-filters-work">
        
      </a>
    </div>
    <p>The first improvement was to the existing filtering feature. When starting a Tail with <a href="https://developers.cloudflare.com/workers/wrangler/commands/#tail"><code>wrangler tail</code></a> (and now with the Cloudflare dashboard), users have the ability to filter out messages based on information in the requests or logs. Previously, this filtering was handled within the Durable Object, which meant that even if a user was filtering out the majority of their traffic, the Durable Object would still have to handle every message. Often users with high-traffic Tails were using many filters to better interpret their logs, but wouldn’t be able to start a Tail due to the 100 RPS limit.</p><p>We moved filtering out of the Durable Object and into the Tail message producer, preventing any filtered messages from reaching the Tail Durable Object and thereby reducing its load. Moving the filtering out of the Durable Object was the first step in improving Tail’s performance at scale.</p>
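<p>The producer-side filtering idea can be sketched as follows (names are illustrative, not the actual Tail internals): messages failing a filter are dropped before they ever reach the Durable Object.</p>

```javascript
// Filtering at the producer: only messages passing every filter are
// forwarded, so the Durable Object never sees the rest.
function makeProducer(filters, forwardToDurableObject) {
  return (message) => {
    if (filters.every((f) => f(message))) forwardToDurableObject(message);
  };
}

const forwarded = [];
const produce = makeProducer(
  [(m) => m.status >= 500], // e.g. the user only wants server errors
  (m) => forwarded.push(m)
);
produce({ status: 200 }); // dropped at the producer
produce({ status: 502 }); // forwarded
console.log(forwarded.length); // 1
```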
    <div>
      <h3>Sampling logs to keep Tails within Durable Object limits</h3>
      <a href="#sampling-logs-to-keep-tails-within-durable-object-limits">
        
      </a>
    </div>
    <p>After moving log filtering outside of the Durable Object, there was still the issue of determining when Tails could be started, since there was no way to know in advance to what degree filters would reduce traffic for a given Tail, and simply starting the Durable Object back up would more than likely hit the 100 RPS limit immediately.</p><p>The solution for this was to add a safety mechanism for the Durable Object while the Tail was running.</p><p>We created a simple controller to track the RPS hitting a Durable Object and sample messages until the desired volume of 100 RPS is reached. As shown below, sampling keeps the Tail Durable Object RPS below the target of 100.</p>
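<p>A sampling controller of this kind can be sketched as follows (a toy version with a fixed one-second window; the real controller’s internals are not public):</p>

```javascript
// Count messages in the current one-second window and drop ("sample out")
// everything past the target RPS.
function makeSampler(targetRps) {
  let windowStart = 0;
  let seenThisWindow = 0;
  return {
    shouldForward(nowMs) {
      if (nowMs - windowStart >= 1000) {
        windowStart = nowMs; // new window: reset the counter
        seenThisWindow = 0;
      }
      seenThisWindow += 1;
      return seenThisWindow <= targetRps; // excess messages are dropped
    },
  };
}

const sampler = makeSampler(100);
let kept = 0;
for (let i = 0; i < 250; i++) {
  if (sampler.shouldForward(0)) kept += 1; // 250 messages in one window
}
console.log(kept); // 100
```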
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7a0f6KNAQCBfodwsGwzlEs/340c5f194ea626f41552f090787257a0/image4-12.png" />
            
            </figure><p>When messages are sampled, the following message appears every five seconds to let the user know that they are in sampling mode:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jSffNOJOWuhVam9g2kjo2/daafee133ea30515a5b1ca30fbe559e7/image3-9.png" />
            
            </figure><p>This message goes away once the Tail is stopped or filters are applied that drop the RPS below 100.</p>
    <div>
      <h3>A final failsafe</h3>
      <a href="#a-final-failsafe">
        
      </a>
    </div>
    <p>Finally, as a last resort, a failsafe mechanism was added in case the Durable Object becomes fully overloaded. Since RPS tracking is done within the Durable Object, if the Durable Object is overloaded due to an extremely large amount of traffic, the sampling mechanism will fail.</p><p>In the case that an overload is detected, all messages forwarded to the Durable Object are stopped periodically to prevent any issues with Workers infrastructure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7FXCXIYrmzHVJHFlRF1wXQ/d454b9bffab6eeddb19ec5718636113e/image1-24.png" />
            
            </figure><p>Here we can see a user who had a large amount of traffic that started to become sampled. As the traffic increased, the number of sampled messages grew. Since the traffic was too fast for the sampling mechanism to handle, the Durable Object got overloaded. However, soon excess messages were blocked and the overload stopped.</p>
    <div>
      <h3>Try it out</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>These new improvements are in place currently and available to all users 🎉</p><p>To Tail Workers via the Dashboard, log in, navigate to your Worker, and click on the Logs tab. You can then start a log stream via the default view.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/F5Z80VaPFuPW71NWvfA3g/27e7fb654ac3eac556611c81a2ad0a1d/image5-9.png" />
            
            </figure><p>If you’re using the Wrangler CLI, you can start a new Tail by running <code>wrangler tail</code>.</p>
    <div>
      <h3>Beyond Worker tail</h3>
      <a href="#beyond-worker-tail">
        
      </a>
    </div>
    <p>While we're excited for tail to be able to reach new limits and scale, we also recognize users may want to go beyond the live logs provided by Tail.</p><p>For example, if you’d like to push log events to additional destinations for a historical view of your application’s performance, we offer <a href="https://developers.cloudflare.com/workers/observability/logpush/">Logpush</a>. If you’d like more insight into and control over log messages and events themselves, we offer <a href="https://developers.cloudflare.com/workers/observability/tail-workers/">Tail Workers</a>.</p><p>These products, and others, can be read about in our <a href="https://developers.cloudflare.com/logs/">Logs documentation</a>. All of them are available for use today.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7JrxctC3rnpXqgmtN5ffYL</guid>
            <dc:creator>Joshua Johnson</dc:creator>
            <dc:creator>Adam Murray</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving the Wrangler Startup Experience]]></title>
            <link>https://blog.cloudflare.com/improving-the-wrangler-startup-experience/</link>
            <pubDate>Tue, 25 Aug 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ You can now write any behaviour on requests heading to your site or even run fully fledged applications directly on the edge. Wrangler is the open-source CLI tool used to manage your Workers and has a big focus on enabling a smooth developer experience.  ]]></description>
            <content:encoded><![CDATA[ <p>Today I’m excited to announce <code>wrangler login</code>, an easy way to get started with Wrangler! This summer for my internship on the Workers Developer Productivity team I was tasked with helping improve the Wrangler user experience. For those who don’t know, <a href="https://workers.cloudflare.com/">Workers</a> is Cloudflare’s serverless platform which allows users to deploy their software directly to Cloudflare’s edge network.</p><p>This means you can write any behaviour on requests heading to your site or even run fully fledged applications directly on the edge. <a href="https://github.com/cloudflare/wrangler">Wrangler</a> is the open-source CLI tool used to manage your Workers and has a big focus on enabling a smooth developer experience.</p><p>When I first heard I was working on Wrangler, I was excited that I would be working on such a cool product but also a little nervous. This was the first time I would be writing Rust in a professional environment, the first time making meaningful open-source contributions, and on top of that the first time doing all of this remotely. But thanks to lots of guidance and support from my mentor and team, I was able to help make the Wrangler and Workers developer experience just a little bit better.</p>
    <div>
      <h3>The Problem</h3>
      <a href="#the-problem">
        
      </a>
    </div>
    <p>The main improvement I focused on this summer was the experience when getting started with Wrangler. For many of the commands to publish and develop live Workers, the user first needs to authenticate with Cloudflare. This is mainly done through the <code>wrangler config</code> command, which has the user create an API token and paste it into Wrangler. Creating a token involves going to the Cloudflare dashboard, going to your profile, going to the API tokens page, selecting a token template, adding your zones and accounts, and finally creating the token. While this is a completely valid authentication flow, it’s not as easy as it could be.</p><p>It could be frustrating to users who have to leave Wrangler and then possibly get lost in the wrong dashboard page or use the wrong settings for their token. When a group of intern candidates were given the task of using Wrangler, most of them got stuck on this step! Many users might forgo using Workers altogether if this is the first thing they encounter when sitting down to develop. Instead we wanted an experience where users could use their Cloudflare login (i.e. their username, password, and possibly two-factor authentication) and immediately be ready to go.</p>
    <div>
      <h3>No OAuth? No Problem</h3>
      <a href="#no-oauth-no-problem">
        
      </a>
    </div>
    <p>What we came up with was a way to create and transfer API Tokens for a user, similar to how <a href="https://www.cloudflare.com/products/argo-tunnel/">Argo Tunnel</a> does their login.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5K8pJJ6i1PL1RuKrb2N2IU/5a998782dad3a40d3b15e33885e98e9f/image3-6.png" />
            
            </figure><p>An overview of the process is shown above, which starts with Wrangler. When the user types <code>wrangler login</code> in their terminal, they will be prompted to open the Cloudflare dashboard in their browser. All dashboard pages require the user to sign in before loading, and once the user is signed in, all actions taken by the dashboard page will use the authentication of that user.</p><p>This means we can make a dashboard page which automatically creates an API token configured to manage Workers. Then when the user loads this page, a properly configured API token will be created for that user. Our dashboard page will then hand off the token to EdgeWorker Config Service (EWC), which will temporarily store it. While this is all going on, Wrangler will be polling EWC waiting for the token to appear, and once it does, Wrangler will retrieve the token and authenticate the user. With this, we have a seamless way to authenticate a Cloudflare user.</p>
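<p>The polling side of this hand-off can be sketched as follows (the function shape and retry parameters are hypothetical; the real EWC protocol is internal):</p>

```javascript
// Generic polling loop: call fetchOnce() until it returns a token or we
// exhaust our attempts. fetchOnce stands in for an HTTP request to the
// token store, returning null while the token is not yet available.
async function pollForToken(fetchOnce, { intervalMs = 1000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const token = await fetchOnce();
    if (token !== null) return token;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for token");
}

// Simulated hand-off: the token "appears" on the third poll.
let polls = 0;
pollForToken(async () => (++polls < 3 ? null : "example-token"), { intervalMs: 10 })
  .then((token) => console.log(token)); // "example-token"
```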
    <div>
      <h3>Security</h3>
      <a href="#security">
        
      </a>
    </div>
    <p>One thing we had to be mindful of was security: these are users’ tokens, after all. If someone was listening to network traffic and saw the request to the Cloudflare dashboard page, nothing would be stopping them from polling EWC themselves and stealing the token away from the user to wreak havoc on their Workers and zones. To solve this problem we used asymmetric RSA encryption. Asymmetric encryption lets us create two separate but mathematically connected keys. One is a private key, which can encrypt and decrypt information, and one is a public key, which can only encrypt information.</p><p>Wrangler will generate a public-private key pair and pass off the public key to our dashboard page. Once the dashboard page is finished creating our token, EWC will then encrypt the token using the public key before storing it. This means in the previous scenario where someone takes the token from our user, all they will have is an encrypted token they can’t use. The only way to decrypt it would be with the private key held by Wrangler.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15A1FtleHZMGbZ6tN6n7XY/8b926ff99ac81feabf48be43b335735b/image2.gif" />
            
            </figure><p>In the end, this solution results in a smooth experience for Workers users. Now instead of rummaging through dashboard pages you can get started with Wrangler in only a few seconds, sometimes without having to leave the comfort of your own terminal.</p><p>Try out <code>wrangler login</code> in the <a href="https://github.com/cloudflare/wrangler/releases">1.11.0 release</a> of <a href="https://github.com/cloudflare/wrangler">Wrangler</a> and let us know how you like it. Also I would like to thank the Workers team for helping make this possible and giving me an awesome experience this summer! In order to implement this feature I had to touch different parts of Cloudflare like EWC and Stratus (Cloudflare’s front end monorepo) and work in areas unfamiliar to me such as frontend TypeScript and React. The responsiveness and encouragement I received helped get this feature created and helped make for a great summer!</p> ]]></content:encoded>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4tiFVDo3wfiOHTOoSn8Ptf</guid>
            <dc:creator>Joshua Johnson</dc:creator>
        </item>
    </channel>
</rss>