
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 06:53:19 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Expanding the Cloudflare Workers Observability Ecosystem]]></title>
            <link>https://blog.cloudflare.com/observability-ecosystem/</link>
            <pubDate>Tue, 13 Apr 2021 13:00:00 GMT</pubDate>
<description><![CDATA[ Cloudflare adds Datadog, Honeycomb, New Relic, Sentry, Splunk, and Sumo Logic as observability partners to the Cloudflare Workers Ecosystem ]]></description>
<content:encoded><![CDATA[ <p>One of the themes of Developer Week is “it takes a village”, and <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> is one area where that is especially true. Cloudflare Workers lets you quickly write code that is infinitely scalable — no availability regions, no scaling policies. Your code runs in every one of our data centers by default: <b>region Earth</b>, as we like to say. While fast time to market and effortless scale are amazing benefits, seasoned developers know that as soon as your code is in the wild… <i>stuff</i> happens, and you need the tools in place to investigate, diagnose, fix and monitor those issues.</p><p>Today we’re delighted to expand our existing set of analytics partners. We’re announcing new partnerships with six observability-focused companies that are deeply integrated into the Cloudflare Workers ecosystem. We’re confident these partnerships will provide immediate value in building the operational muscle to keep your next generation of applications fast, secure and bullet-proof in production.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3okZhOLa6mCLgd9sVdimJV/fc15fb727c3958ec87d8ac778ffad8b1/Screenshot-2021-04-12-at-18.23.19.png" />
            
            </figure><p><code>console.log(`Got request. Extracted name=${name}. Saving…`);</code></p><p>Cloudflare <a href="https://developers.cloudflare.com/workers/get-started/guide#2-install-the-workers-cli">wrangler</a> gives you the ability to generate, configure, build, preview and publish your projects, from the comfort of your dev environment. Writing code in your favorite IDE with a fully-fledged CLI tool that also allows you to simulate how your code will run on the edge is a delightful developer experience and one I personally love.</p><p>If you’re like me, you’ll start out your app with console.log statements. <a href="https://developers.cloudflare.com/workers/cli-wrangler/commands#dev">wrangler dev</a> and <a href="https://developers.cloudflare.com/workers/cli-wrangler/commands#tail">wrangler tail</a> both make it incredibly easy to get visibility into your code during dev and test, but for robust applications, you need more — much more. Things like correlating client and server side event data, seeing context around issues, version awareness and data visualization are what allows DevOps teams to create truly robust applications and make customers happy. The great news is — it’s easy to go from <code>console.log</code> to a code or systems monitoring solution with our partners <a href="http://sentry.io/welcome"><b>Sentry</b></a> and <a href="https://newrelic.com/"><b>New Relic</b></a>.</p><p>Sentry enables monitoring application code health. From error tracking to performance monitoring, developers can see issues that really matter, solve them more quickly, and learn continuously about their applications — from frontend to backend.</p><p><a href="https://www.npmjs.com/package/toucan-js">Toucan-js</a>, courtesy of Cloudflare’s very own Robert Cepa, is a reliable Sentry client for Cloudflare Workers and it’s an open-source npm module. It makes it easy to convert basic logging into full application monitoring. 
A simple <code>npm install toucan-js</code> and a couple of lines of boilerplate setup allow you to convert those <code>console.log</code>s into a streaming source of client-side events that will be rendered for analysis in Sentry. Additionally, the distributed nature of serverless means developers need to think about <a href="https://developers.cloudflare.com/workers/learning/how-workers-works#distributed-execution">where and how they can manage state</a>. Toucan-js abstracts that away and allows simple log statements like:</p>
<pre><code>sentry.addBreadcrumb({
    message: `Got request. Extracted name=${name}. Saving…`,
    category: "log"
});</code></pre>
            <p>to be visualized in <a href="http://sentry.io/for/serverless">Sentry</a> as a user journey with filters, times, versioning and more, allowing you to understand what events led to the errors.</p>
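<p>Under the hood, a Sentry client like toucan-js buffers breadcrumbs per request and attaches them to any error it captures. The toy sketch below illustrates only that pattern; the class and method names are invented for this example and are not the toucan-js API:</p>

```javascript
// Illustrative sketch of breadcrumb buffering (NOT the real toucan-js API).
// Breadcrumbs are recorded locally and only shipped when an error occurs.
class BreadcrumbBuffer {
  constructor() {
    this.breadcrumbs = [];
  }
  addBreadcrumb({ message, category = "log" }) {
    // Nothing is sent yet; the event is just remembered for context
    this.breadcrumbs.push({ message, category, timestamp: Date.now() });
  }
  captureException(err) {
    // A real client would POST this payload to Sentry's ingest endpoint;
    // here we return it so the shape of the report is visible.
    return { error: String(err), breadcrumbs: this.breadcrumbs };
  }
}
```

<p>A real client like toucan-js adds request scoping, sampling and transport on top of this, which is exactly the state management the paragraph above says it abstracts away.</p>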
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GPKEs7UewznHt3bGyccSj/a399411af7697d2ab3bf56e17cc0406c/image2-7.png" />
            
</figure><p>New Relic is a popular observability platform, particularly with enterprises, offering a Telemetry Data Platform, Full-Stack Observability and Applied Intelligence. While there isn't (yet!) a specific npm package for New Relic and Cloudflare Workers, the combination of New Relic’s <a href="https://docs.newrelic.com/docs/logs/log-management/log-api/introduction-log-api/">HTTPS log endpoint</a> and Cloudflare Workers’ <a href="https://developers.cloudflare.com/workers/learning/fetch-event-lifecycle#waituntil">event.waitUntil()</a> means you can very easily instrument your application with New Relic without blocking the request, and thus without impacting performance.</p>
<pre><code>// payload is your structured log object; note that the New Relic Log API
// also expects your license key in an Api-Key header
let url = "https://log-api.newrelic.com/log/v1";
let init = {
	method: "POST",
	headers: {
		"content-type": "application/json",
		"Api-Key": NEW_RELIC_LICENSE_KEY // e.g. bound as a Workers secret
	},
	body: JSON.stringify(payload)
};
// Fire-and-forget: respond to the client without waiting for the log call
event.waitUntil(fetch(url, init));</code></pre>
<p>Like Sentry, those logs and events are then available for analysis in the New Relic One platform. Cloudflare uses both Sentry and New Relic for exactly the reasons outlined above, and I’m delighted to welcome them to our Developer Ecosystem as Observability Partners.</p><p><a href="https://newrelic.com/platform?wvideo=itkaxutw1r" target="_blank"><img src="https://embed-fastly.wistia.com/deliveries/036304a3ca118a002c1f9b34e2de8529.jpg?image_play_button_size=2x&amp;image_crop_resized=960x540&amp;image_play_button=1&amp;image_play_button_color=008c99e0" /></a>
<a href="https://newrelic.com/platform?wvideo=itkaxutw1r" target="_blank">New Relic One | New Relic</a></p><blockquote><p>Monitoring your Cloudflare Workers serverless environment with New Relic lets you deliver serverless apps with confidence by rapidly identifying when something goes wrong and quickly pinpointing the problem—without wading through millions of invocation logs. Monitor, visualize, troubleshoot, and alert on all your Workers apps in a single experience. -- <b>Raj Ramanujam, Vice President, Alliances and Channels, New Relic.</b></p></blockquote><blockquote><p>With Cloudflare Workers and Sentry, software teams immediately have the tools and information to solve issues and learn continuously about their code health instead of worrying about systems and resources. We’re thrilled to partner with Cloudflare on building technologies that make it easy for developers to deploy with confidence. -- <b>Elain Szu, Vice President of Marketing, Sentry.</b></p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nZbt0OxrYEfEyeMMarvTG/5ac46bcadbf1cb42401519d9df1ddbc7/Screenshot-2021-04-12-at-18.34.08.png" />
            
</figure><p>Developers are not the only part of an organization that needs to observe all aspects of their applications in production. As organizations grow and the sophistication of their infrastructure monitoring and security systems grows, they typically implement observability platforms, which provide overall visibility into the entire infrastructure and the ability to alert on anomalies — not just individual applications, appliances, hardware or network. To achieve that goal, observability platforms must ingest as much data as possible. Cloudflare already <a href="https://www.cloudflare.com/partners/analytics/">partners</a> with Datadog, Sumo Logic and Splunk — this allows security and operations teams to ingest <a href="https://developers.cloudflare.com/logs/analytics-integrations">HTTP logs</a> from the network edge along with origin logs and many other sources of data.</p><p>Since that announcement, specific <a href="https://developers.cloudflare.com/logs/log-fields">Cloudflare Workers</a> fields such as WorkerCPUTime, WorkerStatus, WorkerSubrequest, and WorkerSubrequestCount have been added to offer out-of-the-box visibility into Cloudflare Workers execution. Of course, since the value of observability platforms is about whole-of-infrastructure visibility, ideally we want not just execution logs, but the <i>application</i> logs from our systems, similar to the examples in the section above.</p><p>Fortunately, our partners all offer simple HTTP interfaces into their ingestion engines. Check out <b>Datadog</b>’s <a href="https://docs.datadoghq.com/getting_started/api/">HTTP API</a>, <b>Splunk</b>’s <a href="https://docs.splunk.com/Documentation/Splunk/8.1.3/RESTREF/RESTprolog">REST API</a> and <b>Sumo Logic</b>’s <a href="https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source">HTTP Logs and Metric Source</a> for step-by-step instructions on how to easily ingest your Cloudflare Workers logs. 
Besides getting on your CISO’s good side, if your organization has a Detection and Response team, they’ll be able to help you ensure your Cloudflare Workers application is integrated and <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitored</a> as a first-class citizen in your organization's security apparatus. For example, the screenshot below shows Datadog surfacing a security signal detecting malicious activity in Cloudflare HTTP logs based on threat intel feeds.</p>
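<p>As a concrete sketch of that HTTP-ingestion pattern (the endpoint URL and payload fields below are placeholders; each partner's ingest API defines its own URL, auth header and body format):</p>

```javascript
// Sketch only: INGEST_URL and the payload shape are placeholders, not a real
// partner endpoint. Consult the Datadog, Splunk or Sumo Logic docs linked
// above for the actual URL, auth header and body format.
const INGEST_URL = "https://logs.example-partner.com/ingest";

function buildLogPayload(entries, service) {
  // Wrap raw log lines with the metadata most platforms expect
  return JSON.stringify(
    entries.map((message) => ({ message, service, timestamp: Date.now() }))
  );
}

function shipLogs(event, entries) {
  // waitUntil keeps the Worker alive for the POST without
  // delaying the response to the client
  event.waitUntil(
    fetch(INGEST_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: buildLogPayload(entries, "my-worker"),
    })
  );
}
```

<p>The same fire-and-forget shape works for any of the ingest endpoints above: only the URL, auth header and payload format change.</p>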
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ivjoT4S5JIVJDMGsfAJRX/6cab40955d6980af3ed37ef31f24f480/image10.png" />
            
            </figure><blockquote><p>Maintaining a strong security posture means ensuring every part of your toolchain is being monitored - from the datacenter/VPC, to your edge network, all the way to your end users. With Datadog’s partnership with Cloudflare, you get edge computing logs alongside the rest of your application stack’s telemetry - giving you an end to end view of your application’s health, performance and security.- <b>Michael Gerstenhaber, Sr. Director, Datadog.</b></p></blockquote><blockquote><p>Teams using Cloudflare Workers with Splunk Observability get full-stack visibility and contextualized insights from metrics, traces and logs across all of their infrastructure, applications and user transactions in real-time and at any scale. With Splunk Observability, IT and DevOps teams now have a seamless and analytics-powered workflow across monitoring, troubleshooting, incident response and optimization. We're excited to partner with Cloudflare to help developers and operations teams slice through the complexity of modern applications and ship code more quickly and reliably.- <b>Jeff Lo, Director of Product Marketing, Splunk</b></p></blockquote><blockquote><p>Reduce downtime and solve customer-impacting issues faster with an integrated observability platform for all of your Cloudflare data, including its Workers serverless platform. By using Cloudflare Workers with Sumo Logic, customers can seamlessly correlate system issues measured by performance monitoring, gain deep visibility provided by logging, and monitor user experience provided by tracing and transaction analytics.- <b>Abelardo Gonzalez, Director of Product Marketing at Sumo Logic</b></p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5pFJW7FkhI2LRQH1c2nAYr/d275d2707bd89aaa52d6b41ad7b482ab/Screenshot-2021-04-12-at-18.34.34.png" />
            
</figure><p><a href="https://honeycomb.io">Honeycomb.io</a> is an observability platform that gives you a high-level view of how your services are performing, combined with the ability to drill down all the way to the individual user level to troubleshoot issues without having to hop across different data types to piece the data together. Traditionally, when debugging production incidents with dashboards and metrics, it is difficult to drill down beyond aggregate measures. For example, a graph with error rates can’t tell you exactly which customers are experiencing the most errors. Logs show you the raw error data, but it's hard to see the big picture unless you know exactly where to look.</p><p>Honeycomb’s event-based model for application telemetry and powerful query engine make it possible to slice your data across billions of rows and thousands of fields to find hidden patterns. The <a href="https://docs.honeycomb.io/working-with-your-data/bubbleup/">BubbleUp</a> feature also helps you automatically detect the differences between “good” sets and “bad” sets of events. The ability to quickly get results means teams can resolve incidents faster and figure out where to make system optimizations.</p><p>The Honeycomb <a href="https://www.npmjs.com/package/@cloudflare/workers-honeycomb-logger">beta npm module</a> for Cloudflare Workers observability is unique in that it has first-class knowledge of the concept of sub-requests that are a core part of many Worker applications, and this is surfaced directly in the platform. We can’t wait to see the GA version and more innovation around observability for Cloudflare Workers.</p>
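<p>The idea behind sub-request awareness can be sketched as a wrapper around fetch that times each outbound call and records it as its own span. The names below are invented for illustration; this is not the workers-honeycomb-logger API:</p>

```javascript
// Toy sketch of sub-request tracing, NOT the @cloudflare/workers-honeycomb-logger
// API: wrap a fetch-like function so every outbound call becomes a timed span.
function traceSubrequests(fetchImpl, spans) {
  return async (url, init) => {
    const start = Date.now();
    try {
      return await fetchImpl(url, init);
    } finally {
      // One span per sub-request, recorded whether it succeeded or threw
      spans.push({ url: String(url), duration_ms: Date.now() - start });
    }
  };
}
```

<p>A real tracer would also propagate trace IDs and ship the spans to the backend; the wrapper shape is the point here.</p>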
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WLS74Q36uCQ31dNtZx0BZ/26ebd003243d6b680578d067c796b988/image6-2.png" />
            
            </figure><blockquote><p>Honeycomb is excited to partner with Cloudflare as they build an ecosystem of tools that support the full lifecycle of delivering successful apps. Writing and deploying code is only part of the equation. Understanding how that code performs and behaves when it is in the hands of users also determines success. Cloudflare and Honeycomb together are shining the light of observability all the way to the edge, which helps technical teams build and operate better apps.- <b>Charity Majors, Honeycomb CTO &amp; cofounder</b>.</p></blockquote>
    <div>
      <h2>Summary</h2>
      <a href="#summary">
        
      </a>
    </div>
<p>Developers love writing code on Cloudflare Workers. The speed, scale, and developer tooling all combine to make it a delightful experience. Our observability partner announcements today extend that experience from development to operations. Getting real-time, contextual insights into what your code is doing, how it’s performing and any errors it’s generating is at the core of shipping the next generation of transformative apps. Our serverless platform takes care of getting your code right next to your users, and our observability partners make sure that code does exactly what you designed it to do.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Partners]]></category>
            <category><![CDATA[Observability]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">2eoGhYjqHIK0pL7FjrBP4s</guid>
            <dc:creator>Steven Pack</dc:creator>
            <dc:creator>Erwin van der Koogh</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Network On-ramp Partners for Cloudflare One]]></title>
            <link>https://blog.cloudflare.com/network-onramp-partnerships/</link>
            <pubDate>Mon, 22 Mar 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ We know the promise of replacing MPLS links with a global, secure, performant and observable network is going to transform the corporate network and the industry itself.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re excited to announce our newest <a href="https://www.cloudflare.com/network-onramp-partners/">Network On-ramp Partnerships</a> for Cloudflare One. <a href="/introducing-cloudflare-one/">Cloudflare One</a> is designed to help customers achieve a secure and optimized global network. We know the promise of replacing <a href="https://www.cloudflare.com/learning/network-layer/what-is-mpls/">MPLS links</a> with a global, secure, performant and observable network is going to transform the corporate network. To realize this vision, we’re launching partnerships so customers can connect to Cloudflare’s global network from their existing trusted <a href="https://www.cloudflare.com/learning/network-layer/what-is-a-wan/">WAN</a> &amp; <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-sd-wan/">SD-WAN</a> appliances and privately interconnect via the data centers they are co-located in.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56oNn0ReV4LZf7ShGU5fy6/4d46551c7afd1cf9e69894ca9679cf7b/image3-23.png" />
            
            </figure><p>Today, we are launching our WAN and SD-WAN partnerships with <b>VMware, Aruba</b> and <b>Infovista</b>. We are also adding <b>Digital Realty</b>, <b>CoreSite</b>, <b>EdgeConneX</b>, <b>365 Data Centers, BBIX</b>, <b>Teraco</b> and <b>Netrality Data Centers</b> to our existing Network Interconnect partners Equinix ECX, Megaport, PacketFabric, PCCW ConsoleConnect and Zayo. Cloudflare’s Network On-ramp partnerships now span 15 leading connectivity providers in 70 unique locations, making it easy for our customers to get their traffic onto Cloudflare in a secure and performant way, wherever they are.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2JEblyIxqVhJN2P232d5uo/e1daaf0e6c25be66b6ddd4ca07404331/Group-834-1.png" />
            
            </figure>
    <div>
      <h3>Connect to Cloudflare using your existing WAN or SD-WAN Provider</h3>
      <a href="#connect-to-cloudflare-using-your-existing-wan-or-sd-wan-provider">
        
      </a>
    </div>
<p>With <a href="http://www.cloudflare.com/magic-wan">Magic WAN</a>, customers can securely connect data centers, offices, devices and cloud properties to Cloudflare’s network and configure routing policies to get the bits where they need to go, all within one SaaS solution: no more MPLS expense or lead times and no more performance penalties from traffic trombones. Many organizations use physical or virtual SD-WAN appliances today to route or tunnel traffic between offices, data centers, and public clouds. Starting today, these customers can leverage their existing infrastructure investments — physical or virtual in the cloud — to connect traffic to Magic WAN with a few simple commands.</p><p>Consider the sample setup below. Magic WAN + Network On-ramp Partners allows you to connect on-premises and cloud VPCs in RFC1918 subnets using both physical and virtual appliances, with Magic WAN providing accelerated and secure connectivity. Additionally, Magic Firewall allows you to configure and enforce firewall rules at the edge, consistently across all traffic that flows through Magic WAN. Cloudflare’s broad network reach in 200+ cities also means the nearest data center is always close by.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7LhZMqMeFjhSD8t3sqmAz8/0e1e3513c55a3d908cbeca7414175b78/image4-20.png" />
            
            </figure><p><i>3 private networks, using a mixture of a partner hardware appliance, a partner virtual AMI, and a generic Linux router connected to Cloudflare Magic WAN via the nearest Cloudflare data center.</i></p><p>While simple setup and compatibility are great, our shared roadmap with our partners is much more ambitious. Today, these connections are made over Anycast GRE, and we will be announcing IPSec support soon, allowing customers multiple connectivity methods.  </p><p>In future releases, on-ramp partner devices will only require authorization credentials to prove they are part of a customer Magic WAN network, and will then be able to call out to the Magic WAN management API and self-configure — truly realizing the plug’n’play vision of cloud networking. Additionally, Magic WAN customers will be able to benefit from layering Cloudflare's <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust security</a> for application access and Internet browsing that unites <a href="https://www.cloudflare.com/learning/access-management/what-is-ztna/">Zero Trust network access</a> (ZTNA), <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">Secure Web Gateway (SWG)</a>, <a href="https://www.cloudflare.com/learning/access-management/what-is-browser-isolation/">Remote Browser Isolation (RBI)</a>, DNS security, L4 firewall, and other once-distinct point products into one seamless platform.</p><blockquote><p>“VMware SD-WAN virtualizes the WAN to decouple network software services from the underlying hardware—providing agility and performance for all enterprises and is a foundational component of the VMware Secure Access Service Edge (SASE) platform. 
VMware and Cloudflare share a vision to provide customers a cost-effective, turnkey and more secure Global WAN.”- <b>Mark Vondemkamp</b>, vice president products, SD-WAN and SASE business, VMware.</p></blockquote><blockquote><p><i>“Aruba, a Hewlett Packard Enterprise company, is pleased to collaborate with Cloudflare to develop solutions that will enable our customers to easily deploy the Aruba EdgeConnect SD-WAN platform, acquired with Silver Peak, as the enterprise connectivity onramp to the Cloudflare Magic WAN and Magic Firewall. This new solution builds on the Aruba EdgeConnect platform’s best-in-class integration with leading cloud connectivity and security services, and will enable customers to utilize Cloudflare’s Global Edge Network to protect and accelerate cloud workloads.”</i>- <b>Fraser Street</b>, Head of WAN technical alliances for Aruba.</p></blockquote>
    <div>
      <h2>Privately Interconnecting to Cloudflare via PNI</h2>
      <a href="#privately-interconnecting-to-cloudflare-via-pni">
        
      </a>
    </div>
<p>Tunneling traffic to the nearest Cloudflare data center (which, since we’re in over 200 cities, is always close by) is at the heart of Cloudflare Magic WAN. We use standard Internet protocols like GRE and IPSec (coming soon) to securely deliver traffic between our network and our customers’ infrastructure. While protocol-level security is great, anytime you have Internet-facing infrastructure, you open up an <a href="https://www.cloudflare.com/learning/security/what-is-an-attack-surface/">attack surface</a>, and with it the risk of misconfiguration.</p><p>For organizations that want the highest level of security and performance, Magic WAN can be combined with <a href="https://www.cloudflare.com/network-interconnect-partnerships/">Cloudflare Network Interconnect Partners</a> for a private, secure and reliable layer 2 network between Cloudflare’s edge and our customers’ network. Private connectivity is one more weapon in building out defense in depth for your network.</p><p>Last year, we <a href="/cloudflare-network-interconnect-partner-program/">announced</a> the first round of our Network Interconnect Partnerships. Today, we’re adding Digital Realty, CoreSite, EdgeConneX, 365 Data Centers, BBIX, Teraco, and Netrality Data Centers to provide:</p><ul><li><p>Colocation with Cloudflare in 70 locations</p></li><li><p>Reduced cross connect lead times</p></li><li><p>Connectivity reference architectures</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CiZbkTDnepXb4W9Gdt5mv/9170b0ed0686fe2dbf70694e01f7ecbb/Group-833.png" />
            
            </figure><p>With our new partnerships, customers now have more choice to connect privately and securely either via cloud exchanges (a software defined VLAN ordered via dashboard) or with private physical connectivity.</p><blockquote><p><i>“The combination of Cloudflare One and PlatformDIGITAL® opens up new opportunities for our customers to accelerate their digital transformation journey and address data gravity head-on.  Our industry manifesto outlined a roadmap to build new native capabilities for embracing multiple interconnection platforms in collaboration with the industry. Today’s announcement with Cloudflare marks a significant step forward towards executing on that vision. Cloudflare’s solutions for addressing issues such as data localization, compliance and security align closely with Digital Realty’s pervasive data center architecture PDx™ approach and will add further value to the rich connected data communities on our global platform</i>.”- <b>Chris Sharp</b>, Chief Technology Officer, Digital Realty</p></blockquote><blockquote><p><i>“CoreSite’s collaboration with Cloudflare provides customers with the high-speed direct fiber interconnection and enhanced security they need to meet the strictest performance and compliance requirements supporting modern hybrid applications. We are excited to enable Cloudflare Network Interconnect services within our network-dense, cloud-enabled Los Angeles and Denver data center campuses.”</i>- <b>Maile Kaiser</b>, SVP — Sales, Coresite</p></blockquote><blockquote><p>We’re excited to announce this partnership as Cloudflare’s vision to ‘help build a better Internet’ deeply resonates with BBIX’s philosophy. In an environment of increased network security threats, our customers need to rely on highly reliable security solutions like Magic Transit to keep their businesses operating seamlessly. 
We believe this partnership will leverage the value of both companies and help us meet growing and diverse market demands. <b>— Michikazu Fukuchi, Executive Vice President, Board Director and COO of BBIX, Inc.</b></p></blockquote>
    <div>
      <h3>The WAN of the future, today</h3>
      <a href="#the-wan-of-the-future-today">
        
      </a>
    </div>
<p>Over the last 10 years, Cloudflare has built one of the fastest, most reliable, and most secure networks in the world. This week we have announced several more performance and security features that customers can leverage for their global security and networking needs. The Network On-ramp Partnerships launch provides private, secure and high-performance options to connect your traffic sources to Cloudflare.</p><p>To join our list of On-ramp Partners, please reach out to us at <a href="mailto:onramppartners@cloudflare.com">onramppartners@cloudflare.com</a>. We would also love to hear about the platforms you use and would like to see us integrate with; let us know <a href="https://cloudflare.com/network-onramp-partners">here</a>.</p><p>If you’d like to learn more about our Network On-ramp Partners, physical PNIs/CNIs, partner/virtual CNIs and how they integrate with Magic WAN, contact your account team or reach out to <a href="mailto:interconnection@cloudflare.com">interconnection@cloudflare.com</a>. To learn more about Magic WAN, please fill out the Contact Us form on the <a href="https://www.cloudflare.com/magic-wan">product page</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Partners]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Security Week]]></category>
            <guid isPermaLink="false">219uUUo9u00xaHUYeP9pbW</guid>
            <dc:creator>Steven Pack</dc:creator>
            <dc:creator>Matt Lewis</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare and WordPress.com partner to Help Build a Better Internet]]></title>
            <link>https://blog.cloudflare.com/cloudflare-and-wordpress/</link>
            <pubDate>Fri, 19 Mar 2021 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to announce a number of initiatives, starting with the integration of Cloudflare’s privacy-first web analytics into WordPress.com. This integration gives WordPress.com publishers choice in how they collect usage data and derive insights about their visitors.  ]]></description>
<content:encoded><![CDATA[ <p>Cloudflare’s mission is to help build a better Internet. We’ve been at it since 2009 and we’re making progress — with approximately 25 million Internet properties being secured and accelerated by our platform.</p><p>When we look at other companies that not only have the scale to impact the Internet, but who are also on a similar mission, it’s hard to ignore Automattic, maintainer of the ubiquitous open-source WordPress software and owner of one of the web’s largest WordPress hosting platforms, <a href="https://wordpress.com/">WordPress.com</a>, where up to 409 million people read 20 billion pages every month.<sup>1</sup></p>
    <div>
      <h3>Privacy First Web Analytics</h3>
      <a href="#privacy-first-web-analytics">
        
      </a>
    </div>
    <p>When we started brainstorming ways to combine our impact, one shared value stood out: <b>privacy.</b> We both share a vision for a more private Internet. Today we’re excited to announce a number of initiatives, starting with the integration of Cloudflare’s <a href="/privacy-first-web-analytics/">privacy-first web analytics</a> into WordPress.com. This integration gives WordPress.com publishers choice in how they collect usage data and derive insights about their visitors.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3znwHzgEAogcdosqu3OvsU/8aac055d85416de6c12ca5035519364b/image1.gif" />
            
            </figure><p>Figure 1) Cloudflare Web Analytics tracking code integrated in the WordPress.com dashboard</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sFNS01o7G7oakIR1VyiAa/cfc48917cba8538e0470d1533c3c020f/image4-16.png" />
            
            </figure><p>Figure 2) An example of Cloudflare Web Analytics in the Cloudflare dashboard.</p>
    <div>
      <h3>Automatic Platform Optimization for WordPress</h3>
      <a href="#automatic-platform-optimization-for-wordpress">
        
      </a>
    </div>
<p>This is not the first time we’ve launched a WordPress-focused product. In October, we introduced <a href="/automatic-platform-optimizations-starting-with-wordpress/">Automatic Platform Optimization</a> for WordPress sites, a service that our testing has shown to improve time to first byte (TTFB) by up to 72%! This feature has been incredibly popular with our shared customer base, and so we continued to look for ways to bring our two platforms closer together.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59vbljUmjd5sylbUAx6Irz/c4a72a9a95adc13efdc758725ce1d8e8/Automatic-Platform-Optimization-blog-body.png" />
            
            </figure>
    <div>
      <h3>How to Get Started</h3>
      <a href="#how-to-get-started">
        
      </a>
    </div>
    <p>Starting today, Cloudflare Web Analytics settings will appear under the Marketing area of the WordPress.com dashboard, meaning users can simply paste in the analytics code snippet and WordPress.com will take care of injecting the code into their site at runtime. Users will also see links throughout the dashboard to <a href="/automatic-platform-optimizations-starting-with-wordpress/">Cloudflare APO</a> and Cloudflare’s <a href="https://support.cloudflare.com/hc/en-us/articles/200172516-Understanding-Cloudflare-s-CDN">CDN</a>, which they can enable within the Cloudflare dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kI44qH8kuypdEMno82yn/c95bdbbae976916fbfe14a7c83920d0e/image3.gif" />
            
            </figure><p>Figure 3) Additional links to Cloudflare performance and security features in the WordPress.com dashboard</p>
    <div>
      <h3>Better Together</h3>
      <a href="#better-together">
        
      </a>
    </div>
    <p>WordPress.com + Cloudflare has always been a best-of-breed collaboration, combining security and performance on one hand, with the world’s leading content management and publishing platform on the other. Integrating privacy-first web analytics with native support in the WordPress.com platform is just the latest step towards a better Internet.</p><p>To learn more and get started, visit our <a href="https://www.cloudflare.com/pg-lp/cloudflare-for-wordpress-dot-com">landing page</a>.</p><hr /><p><sup>1</sup><a href="https://wordpress.com/activity/">https://wordpress.com/activity/</a></p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[WordPress]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Analytics]]></category>
            <guid isPermaLink="false">5a4x8q0z1lgTWECTrui2Jt</guid>
            <dc:creator>Steven Pack</dc:creator>
            <dc:creator>Simon Steiner</dc:creator>
            <dc:creator>Philip Johnson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Network Interconnection partnerships launch]]></title>
            <link>https://blog.cloudflare.com/cloudflare-network-interconnect-partner-program/</link>
            <pubDate>Tue, 04 Aug 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to announce Cloudflare’s Network Interconnection Partner Program, in support of our new CNI product. As ever more enterprises turn to Cloudflare to secure and accelerate their branch and core networks, the ability to connect privately and securely becomes increasingly important. ]]></description>
            <content:encoded><![CDATA[ <p>Today we’re excited to announce Cloudflare’s Network Interconnection <a href="https://www.cloudflare.com/network-interconnect-partnerships/">Partner Program</a>, in support of our new CNI <a href="/cloudflare-network-interconnect">product</a>. As ever more enterprises turn to Cloudflare to <a href="https://www.cloudflare.com/learning/network-layer/network-security/">secure</a> and accelerate their branch and core networks, the ability to connect privately and securely becomes increasingly important. Today's announcement significantly increases the interconnection options for our customers, allowing them to connect with us in the location of their choice using the method and vendors they prefer.</p><p>In addition to our <a href="https://www.peeringdb.com/net/4224">physical locations</a>, our customers can now interconnect with us at any of 23 metro areas across five continents using <b>software-defined layer 2 networking technology</b>. Following the recent release of CNI (which includes PNI support for Magic Transit), customers can now order layer 3 DDoS protection in any of the markets below without requiring physical cross connects, giving them <b>private and secure</b> links with <b>simpler setup</b>.</p>
    <div>
      <h3>Launch Partners</h3>
      <a href="#launch-partners">
        
      </a>
    </div>
    <p>We’re very excited to announce that five of the world's premier interconnect platforms are available at launch. <a href="http://www.consoleconnect.com/"><b>Console Connect by PCCW Global</b></a> in 14 locations, <a href="https://www.megaport.com/"><b>Megaport</b></a> in 14 locations, <a href="https://packetfabric.com/"><b>PacketFabric</b></a> in 15 locations, <a href="https://www.equinix.com/interconnection-services/cloud-exchange-fabric/"><b>Equinix ECX Fabric</b>™</a> in 8 locations and <a href="http://zayo.com/"><b>Zayo Tranzact</b></a> in 3 locations, spanning North America, Europe, Asia, Oceania and Africa.</p>
    <div>
      <h3>What is an Interconnection Platform?</h3>
      <a href="#what-is-an-interconnection-platform">
        
      </a>
    </div>
    <p>Like much of the networking world, the interconnection space has many terms for the same thing: Cloud Exchange, Virtual Cross Connect Platform and Interconnection Platform are all synonyms. They are platforms that allow two networks to interconnect privately at layer 2, without requiring additional physical cabling. Instead, the customer orders a port and a virtual connection on a dashboard, and the interconnection ‘fabric’ establishes the connection. Since many large customers are already connected to these fabrics for their connections to traditional cloud providers, this is a very convenient way to establish private connectivity with Cloudflare.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ewYi6iIQ3UJQsuCYVax4V/8cb32d608702df2a0ddbf03a858e8bc4/BDES-687_Hero_Image_for_Web_Page_of_New_Partner_Program.svg" />
            
            </figure>
    <div>
      <h3>Why interconnect virtually?</h3>
      <a href="#why-interconnect-virtually">
        
      </a>
    </div>
    <p>Cloudflare has an extensive <a href="/cloudflare-peering-portal-beta/">peering</a> infrastructure and already has private links to thousands of other networks. Virtual private interconnection is particularly attractive to customers with strict security postures and demanding performance requirements, but without the added burden of ordering and managing additional physical cross connects and expanding their physical infrastructure.</p>
    <div>
      <h3>Key Benefits of Interconnection Platforms</h3>
      <a href="#key-benefits-of-interconnection-platforms">
        
      </a>
    </div>
    <p><b>Secure:</b> Similar to a physical PNI, traffic does not pass across the Internet. Rather, it flows from the customer router, to the Interconnection Platform’s network and ultimately to Cloudflare. So while there is still some element of shared infrastructure, it’s not over the public Internet.</p><p><b>Efficient:</b> Modern PNIs are typically a minimum of 1Gbps, but if you have the security motivation without the sustained 1Gbps data transfer rates, then you will have idle capacity. Virtual connections provide for “sub-rate” speeds (less than 1Gbps, such as 100Mbps), meaning you only pay for what you use. Most providers also allow some level of “burstiness”, which is to say you can exceed that 100Mbps limit for short periods.</p><p><b>Performance:</b> By avoiding the public Internet, virtual links avoid Internet congestion.</p><p><b>Price:</b> The major cloud providers typically price egressing data to the Internet differently from egressing it to an Interconnect Platform. By connecting to your cloud via an Interconnect Partner, you can benefit from those reduced egress fees between your cloud and the Interconnection Platform. This builds on our <a href="https://www.cloudflare.com/bandwidth-alliance/">Bandwidth Alliance</a> to give customers more options to continue to drive down their network costs.</p><p><b>Less Overhead:</b> By virtualizing, you reduce physical cable management to just one connection into the Interconnection Platform. From there, everything is defined and managed in software. For example, ordering a 100Mbps link to Cloudflare can be a few clicks in a dashboard, as would be a 100Mbps link into Salesforce.</p><p><b>Data Center Independence:</b> Is your infrastructure in the same metro, but in a different facility to Cloudflare? An Interconnection Platform can bring us together without the need for additional physical links.</p>
    <div>
      <h3>Where can I connect?</h3>
      <a href="#where-can-i-connect">
        
      </a>
    </div>
    <ol><li><p>In any of our <a href="https://www.peeringdb.com/net/4224">physical facilities</a></p></li><li><p>In any of the 23 metro areas where we are currently connected to an Interconnection Platform (see below)</p></li><li><p>If you’d like to connect virtually in a location not yet listed below, simply <a href="https://cloudflare.com/network-interconnect-partner-program">get in touch</a> via our interconnection page and we’ll work out the best way to connect.</p></li></ol>
    <div>
      <h3>Metro Areas</h3>
      <a href="#metro-areas">
        
      </a>
    </div>
    <p>The metro areas below currently have active connections. New providers and locations can be turned up on request.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Mtk1onQbwTCMBLkicYX3e/b8bb990a2b365c9c23d223569b9545d1/Screen-Shot-2020-08-04-at-8.55.25-AM-1.png" />
            
            </figure>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Our customers have been asking for direct on-ramps to our global network for a long time and we’re excited to deliver that today with both physical and virtual connectivity via the world’s leading Interconnection Platforms.</p><p>Already a Cloudflare customer and connected with one of our Interconnection partners? Then <a href="https://www.cloudflare.com/network-interconnect/">contact your account team</a> today to get connected and benefit from the improved reliability, security and privacy of Cloudflare Network Interconnect via our interconnection partners.</p><p>Are you an Interconnection Platform with customers demanding direct connectivity to Cloudflare? Head to our <a href="https://www.cloudflare.com/network-interconnect-partnerships/">partner program page</a> and click “Become a partner”. We’ll continue to add platforms and partners according to customer demand.</p><p><i>"Equinix and Cloudflare share the vision of software-defined, virtualized and API-driven network connections. The availability of Cloudflare on the Equinix Cloud Exchange Fabric demonstrates that shared vision and we’re excited to offer it to our joint customers today."</i>– <b>Joseph Harding</b>, Equinix, Vice President, Global Product &amp; Platform Marketing</p><p><i>"Cloudflare and Megaport are driven to offer greater flexibility to our customers. In addition to accessing Cloudflare’s platform on Megaport’s global internet exchange service, customers can now provision on-demand, secure connections through our Software Defined Network directly to Cloudflare Network Interconnect on-ramps globally. With over 700 enabled data centres in 23 countries, Megaport extends the reach of CNI onramps to the locations where enterprises house their critical IT infrastructure. Because Cloudflare is interconnected with our SDN, customers can point, click, and connect in real time. 
We’re delighted to grow our partnership with Cloudflare and bring CNI to our services ecosystem — allowing customers to build multi-service, securely-connected IT architectures in a matter of minutes."</i>– <b>Matt Simpson</b>, Megaport, VP of Cloud Services</p><p><i>“The ability to self-provision direct connections to Cloudflare’s network from Console Connect is a powerful tool for enterprises as they come to terms with new demands on their networks. We are really excited to bring together Cloudflare’s industry-leading solutions with PCCW Global’s high-performance network on the Console Connect platform, which will deliver much higher levels of network security and performance to businesses worldwide.”</i>– <b>Michael Glynn</b>, PCCW Global, VP of Digital Automated Innovation</p><p><i>"Our customers can now connect to Cloudflare via a private, secure, and dedicated connection via the PacketFabric Marketplace. PacketFabric is proud to be the launch partner for Cloudflare's Interconnection program. Our large U.S. footprint provides the reach and density that Cloudflare customers need."</i>– <b>Dave Ward</b>, PacketFabric CEO</p> ]]></content:encoded>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Internet Performance]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Bandwidth Alliance]]></category>
            <category><![CDATA[Partners]]></category>
            <guid isPermaLink="false">5hgzfJ4XidiGTvNVzN5VCq</guid>
            <dc:creator>Steven Pack</dc:creator>
            <dc:creator>Tom Paseka</dc:creator>
        </item>
        <item>
            <title><![CDATA[New tools to monitor your server and avoid downtime]]></title>
            <link>https://blog.cloudflare.com/new-tools-to-monitor-your-server-and-avoid-downtime/</link>
            <pubDate>Wed, 11 Dec 2019 10:13:00 GMT</pubDate>
            <description><![CDATA[ When your server goes down, it’s a big problem. Today, Cloudflare is introducing two new tools to help you understand and respond faster to origin downtime — plus, a new service to automatically avoid downtime. ]]></description>
            <content:encoded><![CDATA[ <p>When your server goes down, it’s a big problem. Today, Cloudflare is introducing two new tools to help you understand and respond faster to origin downtime — plus, a new service to automatically <i>avoid</i> downtime.</p><p>The new features are:</p><ul><li><p><b>Standalone Health Checks</b>, which notify you as soon as we detect problems at your origin server, without needing a Cloudflare Load Balancer.</p></li><li><p><b>Passive Origin Monitoring</b>, which lets you know when your origin cannot be reached, with no configuration required.</p></li><li><p><b>Zero-Downtime Failover</b>, which can automatically avert failures by retrying requests to origin.</p></li></ul>
    <div>
      <h3>Standalone Health Checks</h3>
      <a href="#standalone-health-checks">
        
      </a>
    </div>
    <p>Our first new tool is Standalone Health Checks, which will notify you as soon as we detect problems at your origin server -- without needing a Cloudflare Load Balancer.</p><p>A <i>Health Check</i> is a service that runs on our edge network to monitor whether your origin server is online. Health Checks are a key part of our load balancing service because they allow us to quickly and actively route traffic to origin servers that are live and ready to serve requests. Standalone Health Checks allow you to monitor the health of your origin even if you only have one origin or do not yet need to balance traffic across your infrastructure.</p><p>We’ve provided many dimensions for you to home in on exactly what you’d like to check, including response code, protocol type, and interval. You can specify a particular path if your origin serves multiple applications, or you can check a larger subset of response codes for your staging environment. All of these options allow you to properly target your Health Check, giving you a precise picture of what is wrong with your origin.</p>
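To make those dimensions concrete, here is a sketch of what a Health Check definition might look like as a configuration object. The field names below are illustrative assumptions based on the options described above (response code, protocol type, interval, path), not the actual Cloudflare API schema:

```javascript
// Hypothetical Health Check definition -- field names are illustrative,
// not the real Cloudflare API schema.
const healthCheck = {
  name: "staging-app-check",
  address: "origin.example.com",
  type: "HTTPS",                 // protocol type to probe with
  interval: 60,                  // seconds between checks from the edge
  path: "/healthz",              // target one application on a shared origin
  expectedCodes: ["200", "302"], // widen this set for a staging environment
};
```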
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vtTuksJ9p1aCImlW8G92Q/77f4673597912f4e067c567c9caa7414/image4-3.png" />
            
            </figure><p>If one of your origin servers becomes unavailable, you will receive a notification letting you know of the health change, along with detailed information about the failure so you can take action to restore your origin’s health.</p><p>Lastly, once you’ve set up your Health Checks across the different origin servers, you may want to see trends or the top unhealthy origins. With Health Check Analytics, you’ll be able to view all the change events for a given health check, isolate origins that may be top offenders or not performing up to par, and move forward with a fix. In the near future, we will also provide access to raw Health Check events, so you can compare Cloudflare Health Check Event logs against your internal server logs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VYa5CvnnQFYZPtYy6HJ0D/c0b8f509411e19726998b1d6fd3f2b49/image5-3.png" />
            
            </figure><p><b>Users on the Pro, Business, or Enterprise plan will have access to Standalone Health Checks and Health Check Analytics</b> to promote top-tier application reliability and help maximize brand trust with their customers. You can access Standalone Health Checks and Health Check Analytics through the Traffic app in the dashboard.</p>
    <div>
      <h3>Passive Origin Monitoring</h3>
      <a href="#passive-origin-monitoring">
        
      </a>
    </div>
    <p>Standalone Health Checks are a super flexible way to understand what’s happening at your origin server. However, they require some forethought to configure before an outage happens. That’s why we’re excited to introduce <i>Passive</i> Origin Monitoring, which will automatically notify you when a problem occurs -- no configuration required.</p><p>Cloudflare knows when your origin is down, because we’re the ones trying to reach it to serve traffic! When we detect downtime lasting longer than a few minutes, we’ll send you an email.</p><p>Starting today, you can configure origin monitoring alerts to go to multiple email addresses. Origin Monitoring alerts are available in the new Notification Center (more on that below!) in the Cloudflare dashboard:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/KZoJ9kNRg0X8i3LJkBtZN/fd6b4d2b57d0781d495b17effd855495/image1-6.png" />
            
            </figure><p><b>Passive Origin Monitoring is available to customers on </b><a href="https://www.cloudflare.com/plans/"><b>all Cloudflare plans</b></a><b>.</b></p>
    <div>
      <h3>Zero-Downtime Failover</h3>
      <a href="#zero-downtime-failover">
        
      </a>
    </div>
    <p>What’s better than getting notified about downtime? Never having downtime in the first place! With Zero-Downtime Failover, we can automatically retry requests to origin, even before Load Balancing kicks in.</p><p>How does it work? If a request to your origin fails, and Cloudflare has another record for your origin server, we’ll just try another origin <i>within the same HTTP request</i>. The alternate record could be either an A/AAAA record configured via Cloudflare DNS, or another origin server in the same Load Balancing pool.</p><p>Consider a website, example.com, that has web servers at two different IP addresses: <code>203.0.113.1</code> and <code>203.0.113.2</code>. Before Zero-Downtime Failover, if <code>203.0.113.1</code> becomes unavailable, Cloudflare would attempt to connect, fail, and ultimately serve an error page to the user. With Zero-Downtime Failover, if <code>203.0.113.1</code> cannot be reached, then Cloudflare’s proxy will seamlessly attempt to connect to <code>203.0.113.2</code>. If the second server can respond, then Cloudflare can avert serving an error to example.com’s user.</p><p>Since we rolled out Zero-Downtime Failover a few weeks ago, we’ve prevented <b>tens of millions of requests per day</b> from failing!</p><p>Zero-Downtime <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">Failover</a> works in conjunction with <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">Load Balancing</a>, Standalone Health Checks, and Passive Origin Monitoring to keep your website running without a hitch. Health Checks and Load Balancing can avert failure, but take time to kick in. Zero-Downtime Failover works instantly, but adds latency on each connection attempt. In practice, Zero-Downtime Failover is helpful at the <i>start</i> of an event, when it can instantly recover from errors; once a Health Check has detected a problem, a Load Balancer can then kick in and properly re-route traffic. 
And if no origin is available, we’ll send an alert via Passive Origin Monitoring.</p><p>To see this in practice, consider a recent customer incident. They saw a spike in errors at their origin that would ordinarily cause availability to plummet (red line), but thanks to Zero-Downtime Failover, their actual availability stayed flat (blue line).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cvaHca9BEnJbZXotcJ20N/e2e930d0986728978a569df15e719f51/zdf-availability.png" />
            
            </figure><p>During a 30 minute time period, Zero-Downtime Failover improved overall availability from 99.53% to 99.98%, and prevented 140,000 HTTP requests from resulting in an error.</p><p>It’s important to note that we only attempt to retry requests that have failed during the TCP or TLS connection phase, which ensures that HTTP headers and payload have not been transmitted yet. Thanks to this safety mechanism, <b>we're able to make Zero-Downtime Failover Cloudflare's default behavior for Pro, Business, and Enterprise plans</b>. In other words, Zero-Downtime Failover makes connections to your origins more reliable with no configuration or action required.</p>
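The retry behavior described above can be sketched in a few lines of Worker-style JavaScript. This is an illustrative model only, not Cloudflare's internal implementation; the `fetchWithFailover` helper and its injectable `fetchFn` parameter are inventions for the sketch:

```javascript
// Illustrative model of Zero-Downtime Failover, not Cloudflare's code.
// Try each origin in order; fall back only when the connection itself
// fails (the fetch rejects), i.e. before any response bytes arrived.
async function fetchWithFailover(origins, path, fetchFn = fetch) {
  let lastError;
  for (const origin of origins) {
    try {
      // An HTTP error status (e.g. 500) still resolves and is returned
      // as-is; only connection-level failures throw and trigger a retry.
      return await fetchFn(origin + path);
    } catch (err) {
      lastError = err; // nothing sent past TCP/TLS setup, safe to retry
    }
  }
  throw lastError; // no origin reachable
}
```

With `origins = ["https://203.0.113.1", "https://203.0.113.2"]`, a connection failure to the first address falls through to the second within the same request, mirroring the example.com scenario above.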
    <div>
      <h3>Coming soon: more notifications, more flexibility</h3>
      <a href="#coming-soon-more-notifications-more-flexibility">
        
      </a>
    </div>
    <p>Our customers are always asking us for more insights into the health of their critical edge infrastructure. Health Checks and Passive Origin Monitoring are a significant step towards Cloudflare taking a <b>proactive</b> instead of reactive approach to insights.</p><p>To support this work, today we’re announcing the <b>Notification Center</b> as the central place to manage notifications. This is available in the dashboard today, accessible from your Account Home.</p><p>From here, you can create new notifications, as well as view any existing notifications you’ve already set up. Today’s release allows you to configure Passive Origin Monitoring notifications, and set multiple email recipients.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZDuV1fp4eY29qoySlyBr2/a67bfa65849c3194b1ab990b889f66c2/image2-7.png" />
            
            </figure><p>We’re excited that today’s launches will help our customers avoid downtime. Based on your feedback, we have lots of improvements planned that can help you get the timely insights you need:</p><ul><li><p>New notification delivery mechanisms</p></li><li><p>More events that can trigger notifications</p></li><li><p>Advanced configuration options for Health Checks, including added protocols, threshold-based notifications, and threshold-based status changes</p></li><li><p>More ways to configure Passive Health Checks, like the ability to add thresholds and filter to specific status codes</p></li></ul> ]]></content:encoded>
            <category><![CDATA[Insights]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">Uj0yC4ktYS40SSrcbwbbH</guid>
            <dc:creator>Brian Batraski</dc:creator>
            <dc:creator>Jon Levine</dc:creator>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Rapid Development of Serverless Chatbots with Cloudflare Workers and Workers KV]]></title>
            <link>https://blog.cloudflare.com/rapid-development-of-serverless-chatbots-with-cloudflare-workers-and-workers-kv/</link>
            <pubDate>Thu, 25 Apr 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ As a fast-growing engineering organization, ownership of services changes fairly frequently. Many cycles get burned in chat with questions like "Who owns service x now? ]]></description>
            <content:encoded><![CDATA[ <p>I'm the Product Manager for the Application Services team here at Cloudflare. We recently identified a need for a new tool around service ownership. As a fast-growing engineering organization, ownership of services changes fairly frequently. Many cycles get burned in chat with questions like "Who owns service x now?"</p><p>Whilst it's easy to see how a tool like this saves a few seconds per day for the asker and askee, and saves on some mental context switches, the time saved is unlikely to add up to the cost of development and maintenance.</p>
            <pre><code>= 5 minutes per day
x 260 work days
= 1300 mins
/ 60 mins
≈ 22 person hours per year</code></pre>
            <p>So a roughly 20-hour investment in that tool would pay for itself within a year, valuing everyone's time equally. While we've made great strides in improving the efficiency of building tools at Cloudflare, 20 hours is a stretch for the end-to-end build, deployment and operation of a new tool.</p>
    <div>
      <h3>Enter Cloudflare Workers + Workers KV</h3>
      <a href="#enter-cloudflare-workers-workers-kv">
        
      </a>
    </div>
    <p>The more I use Serverless and Workers, the more I'm struck by the benefits of:</p>
    <div>
      <h4>1. Reduced operational overhead</h4>
      <a href="#1-reduced-operational-overhead">
        
      </a>
    </div>
    <p>When I upload a Worker, it's automatically distributed to 175+ data centers. I don't have to worry about uptime - it will be up, and it will be fast.</p>
    <div>
      <h4>2. Reduced dev time</h4>
      <a href="#2-reduced-dev-time">
        
      </a>
    </div>
    <p>With operational overhead largely removed, I'm able to focus purely on code. A constrained problem space like this lends itself really well to Workers. I reckon we can knock this out in well under 20 hours.</p>
    <div>
      <h3>Requirements</h3>
      <a href="#requirements">
        
      </a>
    </div>
    <p>At Cloudflare, people ask these questions in Chat, so that's a natural interface to service ownership. Here's the spec:</p><table><tr><td><p><b>Use Case</b></p></td><td><p><b>Input</b></p></td><td><p><b>Output</b></p></td></tr><tr><td><p>Add</p></td><td><p>@ownerbot add Jira IT <a href="http://web.archive.org/web/20190624175546/http://chat.google.com/room/ABC123">http://chat.google.com/room/ABC123</a></p></td><td><p>Service added</p></td></tr><tr><td><p>Delete</p></td><td><p>@ownerbot delete Jira</p></td><td><p>Service deleted</p></td></tr><tr><td><p>Question</p></td><td><p>@ownerbot Kibana</p></td><td><p>SRE Core owns Kibana. The room is: <a href="http://web.archive.org/web/20190624175546/http://chat.google.com/ABC123">http://chat.google.com/ABC123</a></p></td></tr><tr><td><p>Export</p></td><td><p>@ownerbot export</p></td><td><p><code>[{name: "Kibana", owner: "SRE Core"...}]</code></p></td></tr></table>
    <div>
      <h3>Hello @ownerbot</h3>
      <a href="#hello-ownerbot">
        
      </a>
    </div>
    <p>Following the <a href="https://developers.google.com/hangouts/chat/how-tos/bots-develop">Hangouts Chat API Guide</a>, let's start with a hello world bot.</p><ol><li><p>To configure the bot, go to the <a href="https://developers.google.com/hangouts/chat/how-tos/bots-publish">Publish</a> page and scroll down to the <b>Enable The API</b> button:</p></li><li><p>Enter the bot name</p></li><li><p>Download the private key JSON file</p></li><li><p>Go to the <a href="https://console.developers.google.com/">API Console</a></p></li><li><p>Search for the <b>Hangouts Chat API</b> (<i>Note: not the Google+ Hangouts API</i>)
</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7mQUzoV7nOtkEjVc5IOxnf/5981969de5b73cec65d8146eeec3b383/api-console-hangouts-chat-api-1.png" />
            
            </figure></li><li><p>Click <b>Configuration</b> on the left menu</p></li><li><p>Fill out the form as per below <a href="#fn1">[1]</a></p><ul><li><p>Use a hard-to-guess URL. I <a href="https://www.guidgenerator.com/online-guid-generator.aspx">generate a GUID</a> and use that in the URL.</p></li><li><p>The URL will be the route you associate with your Worker in the Dashboard
</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LfF9FngUCK4THzSlrnFho/ecfee7e37965cdfb841e4f1a304959cd/bot-configuration-1.png" />
            
            </figure></li></ul></li><li><p>Click Save</p></li></ol><p>So Google Chat should know about our bot now. Back in Google Chat, click in the "Find people, rooms, bots" text box and choose "Message a Bot". Your bot should show up in the search:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6En5Ilq95DEcC1hQEfpCO/3c9e5a928f68f279d96e16bc62934b3e/message-a-bot.png" />
            
            </figure><p>It won't be too useful just yet, as we need to create our Worker to receive the messages and respond!</p>
    <div>
      <h3>The Worker</h3>
      <a href="#the-worker">
        
      </a>
    </div>
    <p>In the Workers dashboard, create a script and associate it with the route you defined in step #7 (the one with the GUID). It should look something like below. <a href="#fn2">[2]</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pjvx9aep146V4aVSCBjvV/41307cdb04256dea6036ff1c6fab1902/route.png" />
            
            </figure><p>The Google Chatbot interface is pretty simple, but weirdly obfuscated in the Hangouts API guide, IMHO; you have to reverse engineer the Python example.</p><p>Basically, if we message our bot like <code>@ownerbot-blog Kibana</code>, we'll get a message like this:</p>
            <pre><code>  {
    "type": "MESSAGE",
    "message": {
      "argumentText": "Kibana"
    }
  }</code></pre>
            <p>To respond, we need to return <code>200 OK</code> with a JSON body like this:</p>
            <pre><code>content-length: 27
content-type: application/json

{"text":"Hello chat world"}</code></pre>
            <p>So, the minimum Chatbot Worker looks something like this:</p>
            <pre><code>addEventListener('fetch', event =&gt; { event.respondWith(process(event.request)) });

function process(request) {
  let body = {
	text: "Hello chat world"
  }
  return new Response(JSON.stringify(body), {
    status: 200,
    headers: {
        "Content-Type": "application/json",
        "Cache-Control": "no-cache"
    }
  });
}</code></pre>
            <p>Save and deploy that, and we should be able to message our bot:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BL2D9sXew5rPpVLtif5bS/d75d9c07bbb51ff94de92115e34a8d71/google-chatbot-hello-world-response.png" />
            
            </figure><p><b>Success</b>!</p>
    <div>
      <h3>Implementation</h3>
      <a href="#implementation">
        
      </a>
    </div>
    <p>OK, on to the meat of the code. Based on the requirements, I see a need for an <code>AddCommand</code>, <code>QueryCommand</code>, <code>DeleteCommand</code> and <code>HelpCommand</code>. I also see some sort of <code>ServiceDirectory</code> that knows how to add, delete and retrieve services.</p><p>I created a <code>CommandFactory</code> which accepts a <code>ServiceDirectory</code>, as well as an implementation of a KV store: Workers KV in production, and a mock in tests.</p>
            <pre><code>class CommandFactory {
    constructor(serviceDirectory, kv) {
        this.serviceDirectory = serviceDirectory;
        this.kv = kv;
    }

    create(argumentText) {
        let parts = argumentText.split(' ');
        let primary = parts[0];       
        
        switch (primary) {
            case "add":
                return new AddCommand(argumentText, this.serviceDirectory, this.kv);
            case "delete":
                return new DeleteCommand(argumentText, this.serviceDirectory, this.kv);
            case "help":
                return new HelpCommand(argumentText, this.serviceDirectory, this.kv);
            default:
                return new QueryCommand(argumentText, this.serviceDirectory, this.kv);
        }
    }
}</code></pre>
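<p>To tie this together, the Worker's handler can parse the incoming chat event, hand <code>argumentText</code> to the factory and shape the reply. This is a minimal sketch of that glue, not the exact code from the repo:</p>

```javascript
// Sketch: route a parsed Hangouts Chat MESSAGE event through the
// factory. "factory" stands in for the CommandFactory above; the
// event shape matches the JSON payload shown earlier.
async function handleChatEvent(chatEvent, factory) {
  const message = chatEvent.message || {};
  const argumentText = (message.argumentText || '').trim();
  const command = factory.create(argumentText);
  const ownerbotResponse = await command.respond();
  // Google Chat only needs a JSON body with a "text" field
  return { text: ownerbotResponse.text };
}
```

<p>The fetch handler then just runs <code>JSON.stringify</code> over that object and returns it, exactly like the minimal chatbot Worker above.</p>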
            <p>So if we receive a message like <code>@ownerbot add</code>, we'll interpret it as an <code>AddCommand</code>; anything we don't recognize is assumed to be a <code>QueryCommand</code> like <code>@ownerbot Kibana</code>, which keeps parsing simple.</p><p>Our commands need a service directory, which will look something like this:</p>
            <pre><code>class ServiceDirectory {     
    get(serviceName) {...}
    async add(service) {...}
    async delete(serviceName) {...}
    find(serviceName) {...}
    getNames() {...}
}</code></pre>
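<p>The directory can be sketched against any store exposing async <code>get</code>/<code>put</code> of strings, which covers both Workers KV and a test mock. The key name and alias matching below are my own guesses rather than the repo's exact implementation:</p>

```javascript
// Sketch: a KV-backed ServiceDirectory. The whole directory lives
// under one key, which is fine at this scale.
class ServiceDirectory {
  constructor(kv) {
    this.kv = kv;
    this.services = [];
  }

  async init() {
    // Load the serialized directory once, up front
    const json = await this.kv.get('services');
    this.services = json ? JSON.parse(json) : [];
  }

  get(serviceName) {
    return this.find(serviceName.toLowerCase());
  }

  find(serviceName) {
    // Match on name or any alias, case-insensitively
    return this.services.filter(function(s) {
      const aliases = s.aliases || [];
      return s.name.toLowerCase() === serviceName ||
        aliases.some(function(a) { return a.toLowerCase() === serviceName; });
    })[0];
  }

  async add(service) {
    this.services.push(service);
    await this.kv.put('services', JSON.stringify(this.services));
  }

  async delete(serviceName) {
    this.services = this.services.filter(function(s) {
      return s.name !== serviceName;
    });
    await this.kv.put('services', JSON.stringify(this.services));
  }

  getNames() {
    return this.services.map(function(s) { return s.name; });
  }
}
```
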
            <p>Let's build some commands. Oh, and my chatbot is going to be Ultima IV themed, because... reasons.</p>
            <pre><code>class AddCommand extends Command {

    async respond() {
        let cmdParts = this.commandParts;
        if (cmdParts.length !== 6) {
            return new OwnerbotResponse("Adding a service requireth Name, Owner, Room Name, Google Chat Room Url and Aliases.", false);
        }
        let name = cmdParts[1];
        let owner = cmdParts[2];
        let room = cmdParts[3];
        let url = cmdParts[4];
        let aliasesPart = cmdParts[5];
        let aliases = aliasesPart.split(' ');
        let service = {
            name: name,
            owner: owner,
            room: room,
            url: url,
            aliases: aliases
        }
        await this.serviceDirectory.add(service);
        return new OwnerbotResponse(`My codex of knowledge has expanded to contain knowledge of ${name}. Congratulations virtuous Paladin.`);
    }
}</code></pre>
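<p>The <code>OwnerbotResponse</code> used above is just a small value object. A sketch, inferred from how the commands and tests use it (<code>success</code> defaulting to true is my assumption):</p>

```javascript
// Sketch: the value object every command returns. Commands pass
// false explicitly on failure and omit the flag on success.
class OwnerbotResponse {
  constructor(text, success) {
    this.text = text;
    this.success = success === undefined ? true : success;
  }
}
```
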
            <p>The nice thing about the <a href="https://en.wikipedia.org/wiki/Command_pattern">Command</a> pattern for chatbots is that you can encapsulate the logic of each command for testing, as well as compose a series of commands together to test out conversations. Later, we could extend it to support undo. Let's test the <code>AddCommand</code>:</p>
            <pre><code>  it('requires all args', async function() {
            let addCmd = new AddCommand("add AdminPanel 'Internal Tools' 'Internal Tools'", dir, kv); //missing url            
            let res = await addCmd.respond();
            console.log(res.text);
            assert.equal(res.success, false, "Adding with missing args should fail");            
        });

        it('returns success for all args', async function() {
            let addCmd = new AddCommand("add AdminPanel 'Internal Tools' 'Internal Tools Room' 'http://chat.google.com/roomXYZ'", dir, kv);            
            let res = await addCmd.respond();
            console.debug(res.text);
            assert.equal(res.success, true, "Should have succeeded with all args");            
        });</code></pre>
            
            <pre><code>$ mocha -g "AddCommand"
  AddCommand
    add
      ✓ requires all args
      ✓ returns success for all args
  2 passing (19ms)</code></pre>
            <p>So far so good. But adding services to our ownerbot isn't going to be very useful unless we can query them.</p>
            <pre><code>class QueryCommand extends Command {
    async respond() {
        let service = this.serviceDirectory.get(this.argumentText);
        if (service) {
            return new OwnerbotResponse(`${service.owner} owns ${service.name}. Seeketh thee room ${service.room} - ${service.url}`);
        }
        let serviceNames = this.serviceDirectory.getNames().join(", ");
        return new OwnerbotResponse(`I knoweth not of that service. Thou mightst asketh me of: ${serviceNames}`);
    }
}</code></pre>
            <p>Let's write a test that runs an <code>AddCommand</code> followed by a <code>QueryCommand</code>:</p>
            <pre><code>describe('QueryCommand', function() {
    let kv = new MockKeyValueStore();
    let dir = new ServiceDirectory(kv);
    // await isn't valid directly inside describe(); initialize in a hook
    before(async function() {
        await dir.init();
    });

    it('Returns added services', async function() {    
        let addCmd = new AddCommand("add AdminPanel 'Internal Tools' 'Internal Tools Room' url 'alias' abc123", dir, kv);            
        await addCmd.respond();

        let queryCmd = new QueryCommand("AdminPanel", dir, kv);
        let res = await queryCmd.respond();
        assert.equal(res.success, true, "Should have succeeded");
        assert(res.text.indexOf('Internal Tools') &gt; -1, "Should have returned the team name in the query response");
    })
})</code></pre>
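<p>For completeness, the <code>HelpCommand</code> the factory routes to can be a thin wrapper over the directory's names. The wording here is hypothetical, and it returns a plain object in place of <code>OwnerbotResponse</code> to stay self-contained:</p>

```javascript
// Sketch: a HelpCommand in the same shape as its siblings. It only
// uses the directory, but keeps the shared constructor signature.
class HelpCommand {
  constructor(argumentText, serviceDirectory, kv) {
    this.serviceDirectory = serviceDirectory;
  }

  async respond() {
    const names = this.serviceDirectory.getNames().join(', ');
    return {
      success: true,
      text: 'Speaketh "add", "delete", or a service name. I knoweth of: ' + names
    };
  }
}
```
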
            
    <div>
      <h3>Demo</h3>
      <a href="#demo">
        
      </a>
    </div>
    <p>A lot of the code has been elided for brevity, but you can view the <a href="https://github.com/stevenpack/ownerbot">full source on GitHub</a>. Let's take it for a spin!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UJpvFMBP0gI5gx6ggEadY/303538f35b351adb396c9f3a0da38c94/ownerbot1-1.gif" />
            
            </figure>
    <div>
      <h3>Learnings</h3>
      <a href="#learnings">
        
      </a>
    </div>
    <p>Some of the things I learned during the development of @ownerbot were:</p><ul><li><p>Chatbots are an awesome use case for Serverless. You can deploy and never worry about the infrastructure again.</p></li><li><p>Workers KV extends the range of useful chatbots to include stateful bots like @ownerbot.</p></li><li><p>The <code>Command</code> pattern provides a useful way to encapsulate the parsing of and responding to commands in a chatbot.</p></li></ul><p>In <b>Part 2</b> we'll add authentication to ensure we're only responding to requests from our instance of Google Chat.</p><ol><li><p>For simplicity, I'm going to use a static shared key, but Google have recently rolled out a more <a href="https://developers.google.com/hangouts/chat/how-tos/bots-develop?hl=en_US#verifying_bot_authenticity">secure method</a> for verifying the caller's authenticity, which we'll expand on in Part 2. <a href="#fnref1">↩︎</a></p></li><li><p>This UI is the multiscript version available to Enterprise customers. You can still implement the bot with a single Worker; you'll just need to recognize and route requests to your chatbot code. <a href="#fnref2">↩︎</a></p></li></ol><p></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6pbqrsfFAJTY87DgBJAxT9</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Peering Portal - Beta]]></title>
            <link>https://blog.cloudflare.com/cloudflare-peering-portal-beta/</link>
            <pubDate>Mon, 22 Oct 2018 12:27:54 GMT</pubDate>
            <description><![CDATA[ Cloudflare launches Peering Portal - Beta to allow network operators and hosting providers to explore transit savings, performance benefits and traffic management opportunities for peering with Cloudflare or hosting a node ]]></description>
            <content:encoded><![CDATA[ <p></p><p>It can be a big deal for Internet users when Cloudflare rolls into town. After our recent <a href="/mongolia/">Mongolia launch</a>, we received lots of feedback from happy customers that all of a sudden, Internet performance noticeably improved.</p><p>As a result, it's not surprising that we regularly receive requests from all over the world to either peer with our network, or to host a node. However, potential partners are always keen to know just how much traffic will be served over that link. What performance benefits can end-users expect? How much upstream traffic will the ISP save? What new bandwidth will they have available for traffic management?</p><p>Starting today, ISPs and hosting providers can request a login to the Cloudflare Peering Portal to find the answers to these questions. After validating ownership of your ASN, the Cloudflare network team will provide a login to the newly launched Peering Portal - Beta. You can find more information at: <a href="https://cloudflare.com/partners/peering-portal/">cloudflare.com/partners/peering-portal/</a></p>
    <div>
      <h3>What problem does peering solve?</h3>
      <a href="#what-problem-does-peering-solve">
        
      </a>
    </div>
    <p>If you're new to the core infrastructure of the Internet, the best way to understand peering is to frame the problems it solves:</p><ol><li><p>Bandwidth costs money</p></li><li><p>Internet users don't like slow websites</p></li><li><p>Network operators have limited resources</p></li></ol><p>Consider what happens if you request a site hosted on Cloudflare from home:</p><ul><li><p>The domain resolves to a Cloudflare IP</p></li><li><p>Your browser sends an HTTP request to that IP</p></li><li><p>Your ISP's routers consult their routing table and route the packets upstream to their transit provider (<b>COSTS MONEY</b>)</p></li><li><p>The packets traverse multiple hops to the nearest Cloudflare POP (<b>TAKES TIME</b>)</p></li><li><p>Those packets are taking up some of the ISP's capacity, making it unavailable for other competing packets (<b>TRAFFIC MANAGEMENT</b>)</p></li></ul><p>As Cloudflare continues to grow, we represent an ever-increasing share of the Internet's traffic. A network administrator reviewing his or her network will see this represented as increasing bandwidth costs, slower-than-optimal request times and bandwidth that can't be allocated to other competing traffic.</p>
    <div>
      <h3>What is Peering?</h3>
      <a href="#what-is-peering">
        
      </a>
    </div>
    <p>To address these problems, large networks, hosting providers and <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> will peer with each other. That is, to <b>interconnect their networks</b> directly. Put another way, cut out the middleman. If I request a site on Cloudflare and my ISP is already directly connected to the Cloudflare network, the packet will traverse fewer hops, my website will load faster, my ISP will pay less to their transit provider and they can allocate that bandwidth for other sites.</p>
    <div>
      <h3>How does Peering work?</h3>
      <a href="#how-does-peering-work">
        
      </a>
    </div>
    <p>There are a few different types of peering. The simplest way to conceptualize it is as plugging in a dedicated cable to exchange data directly between two networks. In practice, there are a few ways it happens:</p>
    <div>
      <h4>Scenario 1: Private Network Interconnect (PNI)</h4>
      <a href="#scenario-1-private-network-interconnect-pni">
        
      </a>
    </div>
    <p>If Cloudflare and the potential peering partner have equipment in the same data center, we can set up a "Private Network Interconnect", which involves dedicated cabling and network configuration.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1i1kyRktjC9GA3x7FRnrrA/1424300bdbbb08a7b15a26659bcc0925/privatenetworkinterconnect-2-1.png" />
            
            </figure><p>PNI: Private Network Interconnect</p>
    <div>
      <h4>Scenario 2: Internet Exchange Point (IxP)</h4>
      <a href="#scenario-2-internet-exchange-point-ixp">
        
      </a>
    </div>
    <p>An <a href="https://www.cloudflare.com/learning/cdn/glossary/internet-exchange-point-ixp/">Internet exchange point (IxP)</a> is a physical location through which Internet companies such as Internet Service Providers (ISPs) and CDNs connect with each other. It typically involves a switching fabric that Cloudflare is already connected to. If our potential peering partner is also present in the IxP, setting up peering is as simple as network configuration, with no additional cabling required.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1cjSMAotYckrvpjkY9uMXN/db35f023896aba3712fe58b542cb47b8/internetexchangepoint-1.png" />
            
            </figure><p>Internet Exchange Point (IxP)</p>
    <div>
      <h4>Scenario 3:  Hosting a Node</h4>
      <a href="#scenario-3-hosting-a-node">
        
      </a>
    </div>
    <p>There are scenarios where neither PNI nor IxP peering is feasible, such as when Cloudflare and our potential partners are not present in the same physical location. In these cases, our partner can request to host Cloudflare equipment in their own data center racks. Once commissioned, the equipment effectively operates as if it were a Cloudflare data center (PoP). The cache will populate and all of our performance and reliability services will execute right there, close to our peering partner's customers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3zX6b3g8yYq0rgRQMSeAdJ/d38a68df692d1e1d0f04e8444d83a434/cachenode-1.png" />
            
            </figure><p>Hosting a node</p>
    <div>
      <h4>So every network should peer all the time?</h4>
      <a href="#so-every-network-should-peer-all-the-time">
        
      </a>
    </div>
    <p>Peering still has some costs - particularly an administrative burden. Network administrators have to agree on the details. Commercial teams may require contractual agreements. The link has to be monitored. Therefore, most large networks will have a <a href="https://www.cloudflare.com/peering-policy/">peering policy</a> and prioritize peering with the networks with which they exchange the most data and which will thus generate the most mutual value.</p><p>This is especially true for hosting nodes. In addition to the configuration and contractual agreements, there is the logistical effort of receiving and configuring equipment.</p>
    <div>
      <h4>So when should networks peer with Cloudflare?</h4>
      <a href="#so-when-should-networks-peer-with-cloudflare">
        
      </a>
    </div>
    <p>The Peering Portal - Beta, released today, allows any ASN (Autonomous System Number) network to view detailed statistics on transit data between it and Cloudflare. Existing partners can use it to review existing session data and statistics and prospective partners can use it to estimate potential transit cost savings, performance improvements and the freeing up of upstream bandwidth to use for other traffic.</p>
    <div>
      <h3>Welcome to the Peering Portal - Beta</h3>
      <a href="#welcome-to-the-peering-portal-beta">
        
      </a>
    </div>
    <p>The Peering Portal allows both existing and prospective partners to see:</p>
    <div>
      <h4>Time Series Traffic Statistics</h4>
      <a href="#time-series-traffic-statistics">
        
      </a>
    </div>
    <p>This view shows both growth over time, and the relative traffic for each location.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VInCXk8nVCor6bXE2BH9j/18fffb9cf1c5a30281d40e4dc38c2cc9/cloudflare-peering-time-series.png" />
            
            </figure><p>Time series traffic data</p>
    <div>
      <h4>Peering Session Statistics</h4>
      <a href="#peering-session-statistics">
        
      </a>
    </div>
    <p>Peering sessions outline how many sessions are established in various locations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5iDKSEfo3wryhE3uWE9nH5/c75c4cc7c568edda683f4032cd359b79/session-statistics.png" />
            
            </figure><p>Session Statistics</p>
    <div>
      <h4>Prefix Data</h4>
      <a href="#prefix-data">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5e9Ukn7x5XJx7aluOrpQOJ/ff546e133108284b01027b5f3aaa0559/public-ixp-sessions-1.png" />
            
            </figure>
    <div>
      <h4>POP Relative Traffic Weighting</h4>
      <a href="#pop-relative-traffic-weighting">
        
      </a>
    </div>
    <p>An at-a-glance view of where data flows in and out of connections to Cloudflare from your ASN.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1xCCzvTrc7mWanTpOZCSuV/268d5f7ce5bb14a244d098a27e7e19bd/cloudflare-pop-traffic.png" />
            
            </figure><p>Relative Traffic Weight by Peering Location</p><p>We plan to add many more statistics over time.</p><p>So, whether you're a current peering partner who'd like more insight into your current traffic with Cloudflare, or a future partner who'd like to explore the performance, financial and traffic management benefits of peering with Cloudflare or hosting a node, visit <a href="https://cloudflare.com/partners/peering-portal">cloudflare.com/partners/peering-portal</a> and request a login.</p><p>You can also review our peering policy <a href="https://cloudflare.com/peering-policy">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Peering]]></category>
            <category><![CDATA[Cache]]></category>
            <guid isPermaLink="false">Z4BiDLdjWVRekKcOlB4Ue</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Serverless Rust with Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/cloudflare-workers-as-a-serverless-rust-platform/</link>
            <pubDate>Tue, 16 Oct 2018 12:00:00 GMT</pubDate>
            <description><![CDATA[ The Workers team just announced support for WebAssembly (WASM) within Workers. If you saw my post on Internet Native Apps, you'll know that I believe WebAssembly will play a big part in the apps of the future. ]]></description>
            <content:encoded><![CDATA[ <p><b><i>Update</i></b><i>: Rust Tooling for Workers has improved significantly since this post. Go </i><a href="/introducing-wrangler-cli/"><i>here</i></a><i> to check out Wrangler, our new Rust+Workers CLI</i></p><hr /><p>The Workers team just <a href="/webassembly-on-cloudflare-workers/">announced support</a> for WebAssembly (WASM) within Workers. If you saw my post on <a href="/internet-native-applications/">Internet Native Apps</a>, you'll know that I believe WebAssembly will play a big part in the apps of the future.</p><p>It's exciting times for Rust developers. Cloudflare's Serverless Platform, Cloudflare Workers, allows you to compile your code to WASM, upload it to 150+ data centers and invoke those functions just as easily as if they were JavaScript functions. Today I'm going to convert my lipsum generator to use Rust and explore the developer experience (hint: it's already pretty nice).</p><p>The Workers team notes in the documentation:</p><blockquote><p>...WASM is not always the right tool for the job. For lightweight tasks like redirecting a request to a different URL or checking an authorization token, sticking to pure JavaScript is probably both faster and easier than WASM. WASM programs operate in their own separate memory space, which means that it's necessary to copy data in and out of that space in order to operate on it. Code that mostly interacts with external objects without doing any serious "number crunching" likely does not benefit from WASM.</p></blockquote><p>OK, I'm unlikely to gain significant performance improvements on this particular project, but it serves as a good opportunity to illustrate the developer experience and tooling.</p>
    <div>
      <h2>Setup the environment with wasm-pack</h2>
      <a href="#setup-the-environment-with-wasm-pack">
        
      </a>
    </div>
    <p>Coding with WASM has been bleeding edge for a while, but Rust's tool for WASM development recently reached a fairly ergonomic state and even got a <a href="https://rustwasm.github.io/wasm-pack/">website</a>. Make sure you have the <a href="https://rustwasm.github.io/wasm-pack/book/prerequisites/index.html">prerequisites</a> installed and then follow the steps below to get started.</p><p>wasm-pack allows you to compile Rust to WebAssembly, as well as generate bindings between JavaScript objects and Rust objects. We'll talk about why that's important later.</p>
            <pre><code># Install wasm-pack
curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh

# Cargo generate to build apps based on templates
cargo install cargo-generate

# And generate a HelloWorld wasm app, based on the wasm-pack template
cargo generate --git https://github.com/rustwasm/wasm-pack-template

?  Project Name: bob-ross-lorem-ipsum-rust
?   Creating project called `bob-ross-lorem-ipsum-rust`...
✨   Done! New project created /Volumes/HD2/Code/cloudflare/bobross/bob-ross-lipsum-rust/api-rust/bob-ross-lorem-ipsum-rust</code></pre>
            <p>The <a href="https://rustwasm.github.io/wasm-pack/book/tutorial/template-deep-dive/cargo-toml.html">WASM book</a> describes some of the glue in the Cargo.toml file, but the meat of the project is here:</p>
            <pre><code>...
#[wasm_bindgen]
extern {
    fn alert(s: &amp;str);
}

#[wasm_bindgen]
pub fn greet() {
    alert("Hello, bob-ross-lorem-ipsum-rust!");
}</code></pre>
            <p>This does two things:</p><ol><li><p>It binds to the "external" <code>alert</code> function in our host environment where the WASM will run. If that's the browser, it will pop up a window.</p></li><li><p>It defines a Rust function, greet(), which will be made available as a function callable from the host environment, in our case JavaScript.</p></li></ol><p>Build with <code>wasm-pack build</code>:</p>
            <pre><code>$ wasm-pack build
  
  [1/9] ?  Checking `rustc` version...
  [2/9] ?  Checking crate configuration...
  [3/9] ?  Adding WASM target...
  [4/9] ?  Compiling to WASM...
  [5/9] ?  Creating a pkg directory...
  [6/9] ?  Writing a package.json...
  ⚠️   [WARN]: Field 'description' is missing from Cargo.toml. It is not necessary, but recommended
  ⚠️   [WARN]: Field 'repository' is missing from Cargo.toml. It is not necessary, but recommended
  ⚠️   [WARN]: Field 'license' is missing from Cargo.toml. It is not necessary, but recommended
  [7/9] ?  Copying over your README...
  [8/9] ⬇️  Installing wasm-bindgen...
  [9/9] ?‍♀️  Running WASM-bindgen...
  ✨   Done in 2 minutes
| ?   Your wasm pkg is ready to publish at "/Volumes/HD2/Code/cloudflare/bobross/bob-ross-lipsum-rust/bob-ross</code></pre>
            <p>Subsequent builds will be faster. We eventually want to ship that .wasm file to a Worker, but I'd like to keep things local and test in a browser first.</p><p>There's an <a href="https://www.npmjs.com/package/create-wasm-app">npm</a> package that will create a templated webpack webapp, preconfigured to import WebAssembly node modules, which we'll use as a test harness.</p>
            <pre><code>$ npm init wasm-app www
npx: installed 1 in 2.533s
? Rust + ? Wasm = ❤</code></pre>
            <p>Install the dependencies with <code>npm install</code> and then run <code>npm start</code> to fire up the webpack-bundled web server and serve your page:</p>
            <pre><code>$ npm start

&gt; create-wasm-app@0.1.0 start /Volumes/HD2/Code/cloudflare/bobross/bob-ross-lipsum-rust/bob-ross-lorem-ipsum-rust/www
&gt; webpack-dev-server

ℹ ｢wds｣: Project is running at http://localhost:8080/</code></pre>
            <p>Open your web browser at <a href="http://localhost:8080">http://localhost:8080</a> and you should see your first WASM generated content!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nGG5L8JRVz3j0DE8M7jds/18178a9da5a9a7f5718a665145a11153/rust-wasm-hello-world.png" />
            
            </figure><p>OK, that's promising, but it's not actually our code. Our greet function should have returned <code>"Hello, bob-ross-lorem-ipsum-rust!"</code>.</p><p>If we open up <code>www/index.js</code>, we can see this:</p>
            <pre><code>import * as wasm from "hello-wasm-pack";

wasm.greet();</code></pre>
            <p>So it's importing a node module, "hello-wasm-pack", which was part of the template. We want to import our <i>own</i> module, the one we generated with <code>cargo generate</code> and built earlier.</p><p>First, expose our WASM package as a node module:</p>
            <pre><code># Create a global node_modules entry pointing to your local wasm pkg
$ cd pkg
$ npm link
...
/Users/steve/.nvm/versions/node/v8.11.3/lib/node_modules/bob-ross-lorem-ipsum-rust -&gt; /Volumes/HD2/Code/cloudflare/bob-ross-lorem-ipsum-rust/pkg</code></pre>
            <p>Then make it available as a node_module in our test harness.</p>
            <pre><code>$ cd ../www
$ npm link bob-ross-lorem-ipsum-rust
/Volumes/HD2/Code/cloudflare/bobross/bob-ross-lorem-ipsum-rust/www/node_modules/bob-ross-lorem-ipsum-rust -&gt; /Users/steve/.nvm/versions/node/v8.11.3/lib/node_modules/bob-ross-lorem-ipsum-rust -&gt; /Volumes/HD2/Code/cloudflare/bobross/bob-ross-lorem-ipsum-rust/pkg</code></pre>
            <p>Import it in the index.js file</p>
            <pre><code>//import * as wasm from "hello-wasm-pack";
import * as wasm from "bob-ross-lorem-ipsum-rust"</code></pre>
            <p>and run!</p>
            <pre><code>npm run build
npm run start</code></pre>
            <p>Better! That's our code.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/31yXXKuqGD2mkeKNer0Dok/c263af03d29c34f4fd8f745981f0c421/rust-wasm-hello-world2.png" />
            
            </figure>
    <div>
      <h3>Quick recap</h3>
      <a href="#quick-recap">
        
      </a>
    </div>
    <p>We have:</p><ul><li><p>A Hello, World WASM module</p></li><li><p>Exposed as an npm module</p></li><li><p>A webpack app which imports that module</p></li><li><p>And invokes the greet() function</p></li></ul><p>We're now going to port our Bob Ross Lorem Ipsum generator to Rust, and try it out locally before uploading it as a Worker. Check it out in the <a href="https://github.com/stevenpack/bob-ross-lipsum-rust">GitHub</a> repo, or follow along.</p>
            <pre><code>
use std::vec::Vec;
use rand::distributions::{Range, Distribution};
use rand::rngs::SmallRng;
use rand::FromEntropy;

static PHRASES: [&amp;str; 370] = [...elided for clarity];

fn get_random_indexes(cnt: usize) -&gt; Vec&lt;usize&gt; {
    let mut rng = get_rng();
    let range = Range::new(0, PHRASES.len());    
    (0..cnt)
        .map(|_| range.sample(&amp;mut rng))
        .collect()
}

fn get_phrase(idx: usize) -&gt; &amp;'static str {
    PHRASES[idx]        
}

fn get_rng() -&gt; SmallRng {    
    SmallRng::from_entropy()
}

fn get_phrases(idxs: &amp;Vec&lt;usize&gt;) -&gt; Vec&lt;&amp;'static str&gt; {    
    idxs.iter()
        .map(|idx| get_phrase(*idx))
        .collect()
}

fn need_newline(newline: usize, idx: usize) -&gt; bool {
    //idx+1 because idx is zero-based but we want a new line after "every x phrases".
    (newline &gt; 0) &amp;&amp; (idx &gt; 0) &amp;&amp; ((idx + 1) % newline == 0)
}

fn need_space(newline: usize, idx: usize) -&gt; bool {
    !need_newline(newline, idx)
}

fn build_phrase_text(idxs: Vec&lt;usize&gt;, newline: usize) -&gt; String {
    let phrases_vec = get_phrases(&amp;idxs);
    let mut string = String::new();
    for i in 0..phrases_vec.len() {
        //the phrase
        string.push_str(phrases_vec[i]);
        //spaces between phrases
        if need_space(newline, i) {
            string.push(' ');
        }
        //new lines
        if need_newline(newline, i) {
            string.push_str("\n\n");
        }
    }
    string
}

pub fn get_phrase_text(phrase_cnt: usize, newline: usize) -&gt; String {
    let idxs = get_random_indexes(phrase_cnt);
    build_phrase_text(idxs,newline)
}

#[cfg(test)]
mod tests {
    use super::*;

    fn get_test_indexes() -&gt; Vec&lt;usize&gt; {
        vec![34, 2, 99, 43, 128, 300, 45, 56, 303, 42, 11]
    }
    
    #[test]
    fn get_phrases() {
        let randoms = get_test_indexes();
        let phrases = super::get_phrases(&amp;randoms);
        println!("{:?}", phrases);
    }
}</code></pre>
            <p>Running the tests shows everything looks good:</p>
            <pre><code>$ cargo test -- --nocapture
    Finished dev [unoptimized + debuginfo] target(s) in 0.40s                                                                                       
     Running target/debug/deps/bob_ross_lorem_ipsum_rust-5be29ab9ead7494d

running 1 test
["Decide where your cloud lives. Maybe he lives right in here.", "A fan brush is a fantastic piece of equipment. Use it. Make friends with it.", "If we\'re going to have animals around we all have to be concerned about them and take care of them.", "Don\'t kill all your dark areas - you need them to show the light.", "It\'s almost like something out of a fairytale book.", "We don\'t have anything but happy trees here.", "Even the worst thing we can do here is good.", "Everything is happy if you choose to make it that way.", "We don\'t make mistakes we just have happy little accidents.", "Don\'t hurry. Take your time and enjoy.", "All you have to learn here is how to have fun."]
test phrases::tests::get_phrases ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out</code></pre>
            <p>And I've exposed the method to WASM like this:</p>
            <pre><code>#[wasm_bindgen]
pub fn get_phrase_text(phrase_cnt: usize, new_line: usize) -&gt; String {
    phrases::get_phrase_text(phrase_cnt, new_line)
}</code></pre>
            <p>So, we should be good to call our WASM from the browser test harness. Let's modify <code>www/index.js</code> to invoke <code>get_phrase_text</code> and fire it up in the browser!</p>
            <pre><code>//wasm.greet();
let phraseText = wasm.get_phrase_text(100, 10);
console.log(phraseText);
alert(phraseText);</code></pre>
            <p>Fail.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7AIwddjpP7c8NrOrl5FONS/02cd2d9c04e00f9f0a5109fea2d01142/rust-wasm-fail-no-entropy.png" />
            
            </figure><p>If you've played around with Rust, you'll know how jarring it can be to see your code compile and tests pass, only to have it blow up at runtime. The strictness of the language means your code behaves exactly as you expect more often than in other languages, so this failure really threw me.</p><p>Analysing the stack trace, we can see the failure starts at FromEntropy. My first instinct was that the WASM host didn't support providing entropy. I re-jigged the code to use a time-based seed instead and that failed too. The common theme: both entropy and the current time require system calls.</p><p>Reading through the relevant GitHub issues which discuss this <a href="https://github.com/rustwasm/team/issues/16">here</a> and <a href="https://github.com/rust-lang/rust/pull/47102">here</a>, it looks like the design for how Rust-generated WASM will handle system calls remains open. If the compiler isn't able to guarantee the system calls will be available, shouldn't the linker fail to compile? I think the answer lies in the <code>wasm-unknown-unknown</code> triplet that we compile to. There are no guarantees on what the target platform provides when you target unknown, so you're on your own.</p><p>That said, we know that the v8 JavaScript engine will be our host in both the browser and in Workers. There are libraries which make JavaScript's standard built-in objects available to Rust, such as <a href="https://rustwasm.github.io/wasm-bindgen/api/js_sys/index.html">js-sys</a>.</p><p>Using that, I can rewrite the failing <code>get_rng()</code> method to return a pseudo-random number generator seeded with a time-based value, using the <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date">Date</a> object provided by the JavaScript host rather than making a system call. The full code listing is on <a href="https://github.com/stevenpack/bob-ross-lipsum-rust">GitHub</a>.</p>
            <pre><code>fn get_rng() -&gt; SmallRng {
    use js_sys::Date;
    use rand::SeedableRng;
    use std::mem::transmute;

    // Current time from the JavaScript host, not a system call
    let ticks = Date::now();
    // Reinterpret the timestamp's bits as a 16-byte array to use as a seed
    let tick_bytes: [u8; 16] = unsafe { transmute(ticks as u128) };
    SmallRng::from_seed(tick_bytes)
}</code></pre>
            <p>After another <code>wasm-pack build</code> and reloading our test page...</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4tQ0dr6wWPBgr5koG2NNXK/d27bbfbc4b957b61294d6b0d3f90cedb/rust-wasm-random-phrases.png" />
            
            </figure><p>Huzzah! OK, if my WASM module returns the right output in Chrome, I'm feeling good about it working in Workers.</p>
    <div>
      <h2>From local browser harness to Workers</h2>
      <a href="#from-local-browser-harness-to-workers">
        
      </a>
    </div>
    <p>You can use the <a href="https://developers.cloudflare.com/workers/api/">API</a> or UI to upload your module. Below, I upload the .wasm file from my /pkg directory and bind it to the global variable BOBROSS_WASM, which makes it available in my Worker.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/32dpvCO880Ktl6yq92ZjmN/871f582f646aab3a5244aee51bd43610/rust-wasm-upload.png" />
            
            </figure><p>If you're following and looked at the output of the <code>wasm-pack build</code> command, you might have noticed it produced a JavaScript glue file in the pkg folder, which is actually what the browser executed.</p><p>It looks like this:</p>
            <pre><code>/* tslint:disable */
import * as wasm from './bob_ross_lorem_ipsum_rust_bg';

let cachedDecoder = new TextDecoder('utf-8');

let cachegetUint8Memory = null;
function getUint8Memory() {
    if (cachegetUint8Memory === null || cachegetUint8Memory.buffer !== wasm.memory.buffer) {
        cachegetUint8Memory = new Uint8Array(wasm.memory.buffer);
    }
    return cachegetUint8Memory;
}

function getStringFromWasm(ptr, len) {
    return cachedDecoder.decode(getUint8Memory().subarray(ptr, ptr + len));
}

export function __wbg_alert_8c454b1ebc6068d7(arg0, arg1) {
    let varg0 = getStringFromWasm(arg0, arg1);
    alert(varg0);
}
/**
* @returns {void}
*/
export function greet() {
    return wasm.greet();
}

let cachedGlobalArgumentPtr = null;
function globalArgumentPtr() {
    if (cachedGlobalArgumentPtr === null) {
        cachedGlobalArgumentPtr = wasm.__wbindgen_global_argument_ptr();
    }
    return cachedGlobalArgumentPtr;
}

let cachegetUint32Memory = null;
function getUint32Memory() {
    if (cachegetUint32Memory === null || cachegetUint32Memory.buffer !== wasm.memory.buffer) {
        cachegetUint32Memory = new Uint32Array(wasm.memory.buffer);
    }
    return cachegetUint32Memory;
}
/**
* @param {number} arg0
* @param {number} arg1
* @returns {string}
*/
export function get_phrase_text(arg0, arg1) {
    const retptr = globalArgumentPtr();
    wasm.get_phrase_text(retptr, arg0, arg1);
    const mem = getUint32Memory();
    const rustptr = mem[retptr / 4];
    const rustlen = mem[retptr / 4 + 1];

    const realRet = getStringFromWasm(rustptr, rustlen).slice();
    wasm.__wbindgen_free(rustptr, rustlen * 1);
    return realRet;

}

const __wbg_now_4410283ed4cdb45a_target = Date.now.bind(Date) || function() {
    throw new Error(`wasm-bindgen: Date.now.bind(Date) does not exist`);
};

export function __wbg_now_4410283ed4cdb45a() {
    return __wbg_now_4410283ed4cdb45a_target();
}</code></pre>
            <p>It takes care of the marshalling of strings from WASM into JavaScript and freeing the memory it uses. In an ideal world, we'd just include this in our Worker and be done. However, there are a few differences between how Workers instantiates WebAssembly modules and how the browser does.</p><p>You need to:</p><ul><li><p>Remove the import line</p></li><li><p>Remove the export keywords</p></li><li><p>Wrap all the functions in a module</p></li><li><p>Create an importObject referencing the methods</p></li><li><p>Pass that in when you create the WebAssembly instance</p></li></ul><p><i>Update 28-Dec-2018: wasm-bindgen has been updated since this post. See this </i><a href="https://github.com/stevenpack/bob-ross-lipsum-rust/pull/1"><i>PR</i></a><i> if you're getting errors related to the glue code. Thanks </i><a href="https://github.com/andrewdavidmackenzie"><i>Andrew</i></a><i>!</i></p><p>You can view a <a href="/content/images/2018/10/rust-wasm-local-vs-worker-diff.png">side-by-side diff</a> or a <a href="https://github.com/stevenpack/bob-ross-lipsum-rust/blob/master/worker/glue-to-worker.patch">patch</a> to see the changes required to have it run in a Worker. Include the modified glue code in your Worker and you can now call it like any other function. (Thanks for the tip Jake Riesterer!)</p>
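<p>As a rough sketch of the resulting shape (the names below are illustrative, not the exact generated glue code), the ES-module exports become methods on a plain object, and the functions the WASM module imports are supplied via an <code>importObject</code> keyed by the original module specifier:</p>

```javascript
// Sketch only — illustrative names, not the exact generated glue code.
// 1. The host functions the WASM module imports from the glue file are
//    supplied via an importObject, keyed by the original module specifier:
const importObject = {
  './bob_ross_lorem_ipsum_rust_bg': {
    __wbg_now_4410283ed4cdb45a: () => Date.now(),
  },
};

// 2. The former ES-module exports are wrapped in a plain object so the
//    Worker can call them like any other function:
function makeModule(instance) {
  return {
    greet: () => instance.exports.greet(),
  };
}

// 3. In a Worker, BOBROSS_WASM is bound as a compiled WebAssembly.Module,
//    so instantiation is synchronous:
// const instance = new WebAssembly.Instance(BOBROSS_WASM, importObject);
// const mod = makeModule(instance);
```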
            <pre><code>// Request Handler
async function handleRequest(request) {

    let url = new URL(request.url);

    //Serve the UI
    if (url.pathname === "/" ) {
        let init = { "status" : 200 , "headers" : { 'Content-Type': 'text/html' } };
        return new Response(ui, init);
    }

    let phraseCount = Math.min(parseInt(url.searchParams.get("phrases") || 100), 10000);
    let newLine = Math.min(parseInt(url.searchParams.get("newline") || 0), 10000);

    //Serverless Rust in 150+ data centers!
    let phraseText = mod.get_phrase_text(phraseCount, newLine);
    return new Response(phraseText);
}</code></pre>
            <p>Success!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uC0hcvIpB50a7w5p5NdOE/d1ef8455307a8a2c6ad25dfbf3dc5c5c/rust-wasm-success.png" />
            
            </figure><p>The full <a href="https://github.com/stevenpack/bob-ross-lipsum-rust/blob/master/worker/worker.js">source code</a> is on Github.</p>
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>We can compile Rust to WASM, and call it from serverless functions woven into the very fabric of the Internet. That's huge and I can't wait to do more of it.</p><p>There's some wrangling of the generated code required, but the tooling will improve over time. Once you've modified the glue code, calling a function in a Rust-generated WASM module is just as simple as calling a JavaScript one.</p><p>Are we Serverless yet? Yes we are.</p><p>In a future post, I'll extract out the phrases and the UI to the <a href="https://developers.cloudflare.com/workers/kv/">KV store</a> to show a full-fledged serverless app powered by Rust and WASM.</p>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">13Q5KDegAIrmgsWVwnbxbs</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bob Ross, Lorem Ipsum, Heroku and Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/bob-ross-lorem-ipsum-heroku-and-cloudflare/</link>
            <pubDate>Sun, 09 Sep 2018 15:00:00 GMT</pubDate>
            <description><![CDATA[ It may not be immediately obvious how these things are related, but bear with me... It was 4pm Friday and one of the engineers on the Cloudflare Tools team came to me with an emergency. "Steve! The Bob Ross Ipsum generator is down!". ]]></description>
            <content:encoded><![CDATA[ <p>It may not be immediately obvious how these things are related, but bear with me... It was 4pm Friday and one of the engineers on the Cloudflare Tools team came to me with an emergency. "<i>Steve! The Bob Ross Ipsum generator is down!</i>".</p><p>If you've not heard of <a href="https://en.wikipedia.org/wiki/Lorem_ipsum">Lorem Ipsum</a>, it's a scrambled extract from a classical Latin text that designers use as placeholder text when designing the layout of a document. There are generators all over the web that will spit out as much text as you need.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1AuGhmLTCklTkAseu8noU3/87b59bfb7d627aefd95a28772c27315a/Lorem_ipsum_design.svg" />
            
            </figure><p><i>Source: Wikipedia</i></p><p>Of course, the web being the web that we all love, there are also endless parodies of Lorem Ipsum. You can generate <a href="https://www.shopify.com/partners/blog/79940998-15-funny-lorem-ipsum-generators-to-shake-up-your-design-mockups">Hodor Ipsum, Cat Ipsum and Hipster Ipsum</a>. I have a new, undisputed favourite: Bob Ross Ipsum.</p><p>Not growing up in the U.S., I hadn't come across the lovable, calm, serene and beautiful human that is Bob Ross. If you haven't spent 30 mins watching him <a href="https://www.youtube.com/watch?v=pw5ETGiiBRg">paint a landscape</a>, you should do that now. He built a following as host of the TV show “<i>The Joy of Painting</i>”, which ran on the U.S. PBS channel from 1983-1994. He became famous for his relaxed approach to painting and his catchphrases like “Happy Little Trees”.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DX6AWOOqrnYTYZqdgA6N6/c81f86420c79f8cb0c7d7cd5a33342e9/image.png" />
            
            </figure><p>Here's a sneak peek of the sort of language you'll hear. I feel better already!</p><blockquote><p>Remember how free clouds are. They just lay around in the sky all day long. These things happen automatically. All you have to do is just let them happen. There are no mistakes. You can fix anything that happens. Volunteering your time; it pays you and your whole community fantastic dividends. You create the dream - then you bring it into your world. You can do anything here - the only prerequisite is that it makes you happy. A tree needs to be your friend if you're going to paint him. Nice little clouds playing around in the sky. Pretend you're water. Just floating without any effort. Having a good day. Nature is so fantastic, enjoy it. Let it make you happy.</p></blockquote><p>OK, so it turned out the distressed engineer always uses Bob Ross Ipsum when he's building UIs. But the site was down!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6HG6w0ZTvvThA8IYYJT9Nq/f30b3edf2d0f27d78ef6e3322ae589fc/site-down.png" />
            
            </figure><p>My guess is the site got popular enough that the VPS wasn't worth paying for, or the hosting provider didn't appreciate the traffic. As a well-trained Cloudflarian, my initial response was:</p>
    <div>
      <h3>"<i>I could build one of these in about 5 minutes using </i><a href="https://developers.cloudflare.com/workers/about/"><i>Workers</i></a><i>!!</i>"</h3>
      <a href="#i-could-build-one-of-these-in-about-5-minutes-using">
        
      </a>
    </div>
    <p>OK, step 1: stand on the shoulders of giants. Has anyone open-sourced a Bob Ross Lorem Ipsum generator?</p>
            <pre><code>$ npm search "bob ross"
NAME                      | DESCRIPTION          | AUTHOR          | DATE       | VERSION  | KEYWORDS       
postcss-bob-ross-palette  | Bring a little Bob…  | =jonathantneal  | 2015-12-01 | 1.0.1    | postcss css pos
bob-ross                  | Bob Ross Color…      | =azz            | 2017-02-14 | 1.0.0    | Bob Ross Color 
hubot-ross                | A hubot script to…   | =tcrammond      | 2015-03-31 | 1.0.1    | hubot hubot scr
bob-ross-lipsum           | Phrases from Bob…    | =forresto       | 2016-01-15 | 1.1.1    | lorem ipsum</code></pre>
            <p>Of course they have! And the code is delightfully simple:</p>
            <pre><code>function getPhrase () {
  return phrases[Math.floor(Math.random()*phrases.length)]
}

function getPhrases (length) {
  if (!length) length = 1
  var happyPhrases = []
  for (var i=0; i&lt;length; i++) {
    happyPhrases.push(getPhrase())
  }
  return happyPhrases.join(' ')
}

// Compiled by http://www.bobrosslipsum.com/ 2016 January
var phrases = [...elided for clarity...]</code></pre>
            <p>Assuming we've registered a domain and <a href="https://support.cloudflare.com/hc/en-us/articles/201720164-Step-2-Create-a-Cloudflare-account-and-add-a-website">put it on Cloudflare</a>, let's see how quickly can we get a globally distributed, highly available API running in 150+ data centers, to generate some Bob Ross Lorem Ipsum.</p><p>I'm going to:</p><ol><li><p>Launch workers</p></li><li><p>Confirm I get console output</p></li><li><p>Put a test response</p></li><li><p>Paste in my code to generate Bob Ross Lorem Ipsum</p></li><li><p>Test it out</p></li><li><p>Add a route</p></li><li><p>Save*</p></li><li><p>Request it in the browser</p></li></ol><p>* This pushes it to 150+ data centers... no biggie.</p><p>So it takes about 90 secs to build a basic Worker serving dynamically generated text from the Edge. It blows me away just how productive you can be with Cloudflare Workers. With a few clicks, we have code deployed to 150+ data centers and within 10ms of 90% of the world's Internet population. And it's <a href="/serverless-performance-comparison-workers-lambda/"><i>fast</i></a><i>.</i></p><p>The more I use it, the more it reminds of Heroku, and how ease-of-deployment and the developer experience really drove adoption of that platform.</p><p>OK, so generating dynamic text is OK for an MVP, it would be nice if we at least had a UI and some options. You can use <a href="/using-webpack-to-bundle-workers/">Webpack to bundle resources</a> in your Workers, but I wanted this app to be as simple as possible, so I created a basic HTML page to capture some options, included my HTML as a string, and served it from the root of my Worker. The full code listing is on  <a href="https://github.com/stevenpack/bob-ross-lipsum">Github</a>.</p>
            <pre><code>const ui = '...basic html page...';

async function handleRequest(request) {

    let url = new URL(request.url);
    //Serve the UI
    if (url.pathname === "/" ) {
        let init = { "status" : 200 , "headers" : { 'Content-Type': 'text/html' } };
        return new Response(ui, init);
    }

    let phraseCount = Math.min(parseInt(url.searchParams.get("phrases") || 100), 10000);
    let newLine = Math.min(parseInt(url.searchParams.get("newline") || 0), 10000);

    let phraseArr = getPhrasesArr(phraseCount);
    if (newLine &gt; 0) {
        phraseArr = breakLines(phraseArr, newLine);
    }
    return new Response(phraseArr.join(''));
}</code></pre>
            <p>The team is now unblocked. Development can continue. Here's the full version in action. You can play with it live at: <a href="https://www.bobrossloremipsum.com/">https://www.bobrossloremipsum.com</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HhtsmrQre7phjaweK2dkW/bf9cc451b372cf2faa2faffebcc6c7c8/bob-ross-worker-full.gif" />
            
            </figure><p>Want to join a rocketship? <a href="https://boards.greenhouse.io/cloudflare/jobs/589482?gh_jid=589482">I’m hiring in Austin and San Francisco</a></p> ]]></content:encoded>
            <category><![CDATA[Fun]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">3pgsPwajOx8KIQc16OzD2R</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Internet Native Applications]]></title>
            <link>https://blog.cloudflare.com/internet-native-applications/</link>
            <pubDate>Tue, 21 Aug 2018 19:08:00 GMT</pubDate>
            <description><![CDATA[ I grew up with DOS and Windows 3.1. I remember applications being *fast* - instant feedback or close to it. Today, native applications like Outlook or Apple Mail still feel fast - click compose and the window is there instantly and it feels snappy. Internet applications do not. ]]></description>
            <content:encoded><![CDATA[ <p>I grew up with DOS and Windows 3.1. I remember applications being <i>fast</i> - instant feedback or close to it. Today, native applications like Outlook or Apple Mail still feel fast - click compose and the window is there instantly and it feels snappy. Internet applications do not.</p><p>My first Internet experience was paying $30 for a prepaid card with 10 hours of access over a 14.4k modem. First, it was bulletin boards and later IRC and the WWW. From my small seaside town in Australia, the Internet was a window into the wider world, but it was slooooooow. In a way, it didn’t matter. The world of opportunities the Internet opened up, from information to music, to socializing and ecommerce, who cared if it was slow? The <i>utility</i> of the Internet and Internet applications meant I would use them regardless of the experience.</p><p>Performance improved from the 90s, but in 2008 when I switched from Outlook downloading my Yahoo! email over IMAP to Gmail in the browser, it wasn’t because it was faster - it wasn’t - it was because features like search, backed-up mail, and unlimited storage were too good to resist. The cloud computing power that Google could bring to bear on my mail meant I was happy to trade the native performance for a browser-based experience that wasn’t bad, but definitely not snappy.</p><p>Efforts like Electron have attempted to blend performance and utility by loading an HTML5 app inside a native windowing host. <i>It’s not working</i>. Some of the most popular Electron apps today are not snappy at all.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5B68Pcbo7EKP8JzGpIQF7J/a76f25640c23eab14dec5a6b1f2e1980/image3.gif" />
            
            </figure><p><i>Recorded on a 2013 MacBook with 8 GB RAM</i></p>
    <div>
      <h3>How did we get here?</h3>
      <a href="#how-did-we-get-here">
        
      </a>
    </div>
    <p>So how did we get here and where are we going? I think of applications during the period of 1980-2018 like this:</p><ol><li><p>1980-2000 Functionality and performance increased at a similar rate</p></li><li><p>2000-2018 Functionality increased during the Internet age, but perceived performance degraded</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1dRgEm1gzsQLTd8DUxc9FF/e722015805c295125cbae1d5026bf787/image2-2.png" />
            
            </figure><p>I think we’re entering the next phase, whereby performance and the user experience will again track the rise in the utility of those apps. I predict we’ll go back to a chart like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/eXaUH6WubzdKd3iemMEay/8300db177d759785f6b9c57f31f8de1a/image13-1.png" />
            
            </figure><p>To summarize, while the Internet and the “Cloud” brought a huge increase in functionality, the performance of Internet Applications has been hampered by:</p><ul><li><p>Downloading interpreted code</p></li><li><p>Over chatty protocols</p></li><li><p>From origin servers far away from users</p></li></ul><p>This has prevented Internet Applications from achieving their potential of being both performant and functional. This is changing, and I am defining this new era of applications as <b>Internet Native Applications</b>.</p>
    <div>
      <h3>Where are we going?</h3>
      <a href="#where-are-we-going">
        
      </a>
    </div>
    <p>Internet Native Applications will combine the utility of Internet apps, but with the speed of local desktop apps. These apps will feel magical, as the functionality that was previously in some distant data center, will now feel like just an extension of the computer itself.</p><p>How will this happen and what will be the drivers?</p>
    <div>
      <h3>1. Remote services will be embedded in the network itself less than 10ms from the client.</h3>
      <a href="#1-remote-services-will-be-embedded-in-the-network-itself-less-than-10ms-from-the-client">
        
      </a>
    </div>
    <p>As <a href="https://www.cloudflare.com/network/">huge Edge networks</a> get closer to end-users, the perceived performance cost of not serving a request from the Edge and requiring a round-trip to a legacy cloud provider will increase. In a world where users will come to expect near-instant responses from Internet Native applications, a response not served from the Edge and requiring a trip to a centralized cloud-based origin will feel like a <b>"cache miss"</b>.</p><p>In the same way systems programmers have to consider the impact of <a href="https://gist.github.com/jboner/2841832">L1 vs L2 vs Main Memory access</a>, so too will Internet Native application architects carefully consider each time an application requires a resource to be fetched, or a computation to be run, from a centralized data center as opposed to the Edge. As more and more services are available &lt;10ms from the end user, the cost of the new “cache miss” will increase. For comparison, <a href="https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics#Seek_time">a disk seek on an average desktop HDD costs around 10ms</a> - the same time it takes to get a request to the Edge - so Edge network requests will feel local, while requests to far-flung centralized data centers will be noticeably slow.</p><p>APIs, storage, and eventually even data-intensive tasks will be processed on the Edge. To the user, it will feel like this magical functionality is just an extension of their computer. As network compute nodes push further and further towards users, from <a href="https://en.wikipedia.org/wiki/Internet_exchange_point">Internet Exchange Points</a> (IXs), to <a href="https://en.wikipedia.org/wiki/Internet_service_provider">ISPs</a> and even to cell towers, it will indeed be close to the truth.</p>
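<p>To make the analogy concrete, here is a rough latency budget; the figures are illustrative orders of magnitude in the spirit of the table linked above, not measurements:</p>

```javascript
// Rough, order-of-magnitude latency figures (illustrative, not measured)
const latencyMs = {
  mainMemoryRead: 0.0001,  // ~100ns
  hddSeek: 10,             // average desktop HDD seek
  edgeRoundTrip: 10,       // request served from a nearby edge PoP
  centralizedOrigin: 100,  // round-trip to a distant cloud region: the new "cache miss"
};

// The penalty of the new "cache miss", relative to an edge-served request:
const missPenalty = latencyMs.centralizedOrigin / latencyMs.edgeRoundTrip; // 10x
```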
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/07PtrtEHkKY6rmv2oLMai/a085c09f7ff3ca68535a39c1be1c7838/Screen-Shot-2018-08-15-at-6.48.36-PM.png" />
            
            </figure><p><i>Near instant</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GyKDAuepS3wp1ci9rlir2/ed9ce5e9794c50c361b5dc7329c257d6/image11-1.png" />
            
            </figure><p><i>The new "cache miss"</i></p>
    <div>
      <h3>2. Client-side code will execute at near-native speed using WebAssembly.</h3>
      <a href="#2-client-side-code-will-executed-at-near-native-speed-using-webassembly">
        
      </a>
    </div>
    <p>Ever since the web emerged as the ultimate app distribution platform, there have been attempts to embed runtimes in browsers. Java Applets, ActiveX and Flash to name a few. Java probably made the most headway, but was plagued by write-once-debug-everywhere issues and installation friction.</p><p><a href="https://developer.mozilla.org/en-US/docs/WebAssembly">WASM</a> is gaining <a href="https://caniuse.com/#feat=wasm">momentum</a> and now ships in all <a href="https://blog.mozilla.org/blog/2017/11/13/webassembly-in-browsers/">major browsers</a>. It's still an <a href="https://webassembly.org/docs/mvp/">MVP</a>, but with wide support and promising early <a href="https://hackernoon.com/screamin-speed-with-webassembly-b30fac90cd92">performance testing</a>, expect more and more apps to take advantage of WASM. It's early days  –  WASM can't interact with the DOM yet, but <a href="https://webassembly.org/docs/future-features/">that is coming</a> too. The debate is ongoing, but I believe WASM will enable user interaction with UI elements in the browser to get much closer to the “native” speed of the past.</p>
    <div>
      <h3>3. Clients exchange data using a new generation of performance-based protocols.</h3>
      <a href="#3-clients-exchange-data-using-a-new-generation-of-performance-based-protocols">
        
      </a>
    </div>
    <p>TCP, conceived in <a href="https://tools.ietf.org/html/rfc793">1981</a>, can no longer meet the needs of the highest performing Internet applications. Dropped packets on mobile networks cause <a href="http://blog.davidsingleton.org/mobiletcp/">back-off</a>, and newer protocols handle congestion control better.</p><p><a href="https://en.wikipedia.org/wiki/QUIC">QUIC</a>, now under the auspices of the IETF and starting to see Internet-facing <a href="/the-road-to-quic/">server implementations</a>, will underpin the connection of Internet Native applications to their nearest network compute node, mostly using an efficient, multiplexed <a href="https://en.wikipedia.org/wiki/HTTP/2">HTTP/2</a> connection.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ZtrEcl4N7PZI5sQchvDjT/2acfe269de230ef983cdf63ddf98880b/image6-1.png" />
            
            </figure><p><i>HTTPS</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Q3E8kshxICSuXUS7KjfIa/4633d51c44e5f9ac7ba95dffd5e0647c/image9-1.png" />
            
            </figure><p><i>QUIC</i></p>
    <div>
      <h4>Market Implications</h4>
      <a href="#market-implications">
        
      </a>
    </div>
    
    <div>
      <h5>More services to move to the Edge</h5>
      <a href="#more-services-to-move-to-the-edge">
        
      </a>
    </div>
    <p>With the Edge now a <a href="https://developers.cloudflare.com/workers/about/">general-compute platform</a>, more and more services will move there, driving up the percentage of requests that can be fulfilled within 10ms. Legacy cloud providers will continue to provide services like AI/ML training, data pipelines and "big data" applications for some time, but many services will move "up the stack", to be closer to the user.</p>
    <div>
      <h5>The old “Cloud” becomes a pluggable implementation detail</h5>
      <a href="#the-old-cloud-becomes-a-pluggable-implementation-detail">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xBRedqe8gIaouzevON2Pn/55208bc7eaabf3f9fa3d473b401868f2/image4-1.png" />
            
            </figure><p><i>Source: </i><a href="https://github.com/google/go-cloud"><i>https://github.com/google/go-cloud</i></a></p><p>CIOs woke up years ago to the risk of vendor lock-in and have defended against it in a variety of forms such as hybrid clouds, open source, and container orchestration, to name a few. Google made another attempt to further commoditize cloud services with the launch of <a href="https://blog.golang.org/go-cloud">go/cloud</a>. It provides a generalizable way for application architects to define cloud resources in a vendor-neutral way.</p><p>I believe Internet Native applications will increasingly define their cloud dependencies, e.g., storage, as pluggable implementation details, and deploy policies on the edge to route according to latency, cost, privacy and other domain-specific criteria.</p>
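<p>A minimal sketch of what that could look like (the names are illustrative, not a real go/cloud or Workers API): the application depends only on a small storage contract, and any backend satisfying it can be plugged in at composition time.</p>

```javascript
// Sketch: the app depends on a storage *contract*, not a vendor SDK.
// Names are illustrative, not a real go/cloud or Workers API.
function createApp(storage) {
  return {
    save: (key, value) => storage.put(key, value),
    load: (key) => storage.get(key),
  };
}

// Any backend with { put, get } plugs in; here, an in-memory stand-in:
const inMemoryStorage = {
  map: new Map(),
  async put(key, value) { this.map.set(key, value); },
  async get(key) { return this.map.get(key); },
};

// Swapping in a cloud bucket or an edge KV store becomes a one-line
// change at composition time, not an application rewrite.
const app = createApp(inMemoryStorage);
```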
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>No single technology defines <b>Internet Native Applications</b>. However, I believe the combination of:</p><ul><li><p>A rapidly increasing percentage of requests serviced directly from the Edge under 10ms</p></li><li><p>Near-native speed code on the browser, powered by WebAssembly</p></li><li><p>Improved Internet protocols like QUIC and HTTP/2</p></li></ul><p>will usher in a new era of performance for Internet-based applications. I believe the difference in user experience will be so significant that it will bring about a fundamental shift in the way we architect the applications of tomorrow  –  <b>Internet Native Applications</b>  –  and Engineers, Architects and CIOs need to start planning for this shift now.</p>
    <div>
      <h4>Closing Note</h4>
      <a href="#closing-note">
        
      </a>
    </div>
    <p>Do you remember people asking if <i>“you have the Internet on your computer?”</i> when people didn’t really understand what the Internet was? Internet Native applications will finally make it irrelevant  –  you won’t be able to tell the difference.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jt9eezd1xMGLoFlKILWCJ/9f2a545d92ff03732c4d70d84745e8b4/image12-1.png" />
            
            </figure><p><b>Want to work on technology powering the next era? </b><a href="https://boards.greenhouse.io/cloudflare/jobs/589484?gh_jid=589484"><b>I’m hiring</b></a><b> for Cloudflare in San Francisco and Austin.</b></p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">2aztVlmvIq2BBxYH0GWFRe</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Debugging Serverless Apps]]></title>
            <link>https://blog.cloudflare.com/debugging-serverless-apps/</link>
            <pubDate>Thu, 05 Jul 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ The Workers team have already done an amazing job of creating a functional, familiar edit and debug tooling experience in the Workers IDE. It's Chrome Developer Tools fully integrated to Workers. `console.log` in your Worker goes straight to the console, just as if you were debugging locally! ]]></description>
            <content:encoded><![CDATA[ <p>The Workers team have already done an amazing job of creating a functional, familiar edit and debug tooling experience in the Workers IDE. It's Chrome Developer Tools fully integrated with Workers.</p><p><code>console.log</code> in your Worker goes straight to the console, just as if you were debugging locally! Furthermore, errors and even log lines come complete with call-site info, so you can click and navigate straight to the relevant line. In this blog post, I’m going to show a small and powerful technique I use to make debugging serverless apps simple and quick.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SFQMJY48ruzXTvK0EcG7p/8e4d2bcf77cda6a6ff2b360a8ad778f8/workers-ide-console.gif" />
            
            </figure><p>There is a <a href="https://developers.cloudflare.com/workers/writing-workers/debugging-tips/">comprehensive guide</a> to common debugging approaches, and I'm going to focus on returning debug information in a header. This is a great tip and one that I use to capture debug information when I'm using curl or Postman, or running integration tests. It was a little finicky to get right the first time, so let me save you some trouble.</p><p>If you've followed <a href="/p/607ad519-5652-4688-9fff-33fbb1fc9d3f/">part 1</a> or <a href="/p/5cd5d990-7b88-4e62-9615-9c51d33daae8/">part 2</a> of my Workers series, you'll know I'm using TypeScript, but the approach applies equally to JavaScript. In the rest of this example, I’ll be using the routing framework I created in part 2.</p>
    <div>
      <h3>Requesting Debug Info</h3>
      <a href="#requesting-debug-info">
        
      </a>
    </div>
    <p>I want my Worker to return debugging information whenever:</p><ul><li><p>an <code>X-Debug</code> header is present, or</p></li><li><p>a <code>?debug</code> query parameter is present.</p></li></ul><p>Exercise for the reader: You may also like to require a shared secret key (so that you control who can enable debugging information) and pass a log level.</p><p>I'd like my debug info to be the same as I'd see in the Workers IDE. That is, all the log lines and any exception info from the execution of my Worker.</p>
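<p>A minimal sketch of that check (the header and parameter names follow the convention above; the helper name is mine):</p>

```javascript
// Sketch: detect whether the caller asked for debug output via an
// X-Debug header or a ?debug query parameter. Helper name is illustrative.
function isDebugRequested(request) {
  const url = new URL(request.url);
  return request.headers.get('X-Debug') != null || url.searchParams.has('debug');
}
```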
    <div>
      <h3>Architecture</h3>
      <a href="#architecture">
        
      </a>
    </div>
    <p>Logging is orthogonal to the main request flow, so let's try to keep it abstracted. Different frameworks use different terms for this abstraction. I’ll use the term <a href="https://en.wikipedia.org/wiki/Interceptor_pattern">interceptor</a>.</p><p>Let's define an interceptor as something that runs pre and/or post the main request flow.</p>
            <pre><code>/**
 * Intercepts requests before handlers and responses after handlers
 */
export interface IInterceptor {
  preProcess(req: RequestContextBase): void;
  postProcess(req: RequestContextBase, res: Response): void;
}</code></pre>
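            <p>Any cross-cutting concern fits this contract. For example, a hypothetical timing interceptor (the type stubs mirror the definitions above so the sketch stands alone):</p>

```typescript
// Minimal stubs mirroring the post's types, so this compiles standalone.
interface RequestContextBase { url: URL; }
interface IInterceptor {
  preProcess(req: RequestContextBase): void;
  postProcess(req: RequestContextBase, res: unknown): void;
}

// Hypothetical interceptor: record a start time before the handler runs,
// log the elapsed time after it returns.
class TimingInterceptor implements IInterceptor {
  private start = 0;

  public preProcess(req: RequestContextBase): void {
    this.start = Date.now();
  }

  public postProcess(req: RequestContextBase, res: unknown): void {
    console.log(`${req.url.pathname} took ${Date.now() - this.start}ms`);
  }
}
```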
            <p>And then run pre and post processing before and after the handler has executed.</p>
            <pre><code>public async handle(request: Request): Promise&lt;Response&gt; {
  // Wrap the raw request, then run interceptors around the handler.
  const req = new RequestContextBase(request);
  this.preProcess(req);
  const handler = this.route(req);
  const res = await handler.handle(req);
  this.postProcess(req, res);
  return res;
}

// this.interceptors is an IInterceptor[] field on the router,
// e.g. initialized in the constructor.
private preProcess(req: RequestContextBase) {
  for (const interceptor of this.interceptors) {
    interceptor.preProcess(req);
  }
}

private postProcess(req: RequestContextBase, res: Response) {
  for (const interceptor of this.interceptors) {
    interceptor.postProcess(req, res);
  }
}</code></pre>
            <p>OK, so with a generalized pattern to execute code before and after a request, let's add our first Interceptor:</p>
    <div>
      <h3>LogInterceptor</h3>
      <a href="#loginterceptor">
        
      </a>
    </div>
    <p>First we'll need a logger. This logger just redirects to console, but also keeps track of the log lines so the interceptor can retrieve them later.</p>
            <pre><code>export class Logger implements ILogger {
  public logLines: string[] = [];

  public debug(logLine: string): void {
    this.log(`DEBUG: ${logLine}`);
  }

  public info(logLine: string): void {
    this.log(`INFO: ${logLine}`);
  }

  public warn(logLine: string): void {
    this.log(`WARN: ${logLine}`);
  }

  public error(logLine: string): void {
    this.log(`ERROR: ${logLine}`);
  }

  public getLines(): string[] {
    return this.logLines;
  }

  public clear(): void {
    this.logLines = [];
  }

  private log(logLine: string): void {
    // tslint:disable-next-line:no-console
    console.log(logLine);
    this.logLines.push(logLine);
  }
}</code></pre>
            <p>The <code>LogInterceptor</code> is simple enough: in post-processing, if it detects the X-Debug header or debug query param, it appends all the log lines to the X-Debug response header as a URL-encoded string.</p>
            <pre><code>const logger = new Logger();

export class LogInterceptor implements IInterceptor {
  public preProcess(req: RequestContextBase) {
    return;
  }

  public postProcess(req: RequestContextBase, res: Response) {
    logger.debug('Evaluating request for logging');
    const debugHeader = 'X-Debug';
    if (
      req.url.searchParams.get('debug') !== 'true' &amp;&amp;
      req.request.headers.get(debugHeader) !== 'true'
    ) {
      return;
    }
    logger.debug('Executing log interceptor');
    const lines = logger.getLines();
    const logStr = encodeURIComponent(lines.join('\n'));

    logger.debug(`Adding to ${debugHeader} header ${logStr.length} chars`);
    res.headers.append(debugHeader, logStr);
  }
}</code></pre>
            <p>Now it's up to the client to decode and display it.</p>
    <div>
      <h3>Decoding the result</h3>
      <a href="#decoding-the-result">
        
      </a>
    </div>
    <p>urldecode isn't native on most operating systems. There are Perl and Python implementations, but here's a Bash-only function:</p>
            <pre><code>$ urldecode() { : "${*//+/ }"; echo -e "${_//%/\\x}"; }</code></pre>
            <p>Source: <a href="https://stackoverflow.com/questions/6250698/how-to-decode-url-encoded-string-in-shell">StackOverflow</a></p><p>Using that, we can call curl, extract the headers, grep for our X-Debug header and then invoke the urldecode function.</p>
            <pre><code>$ urldecode `curl -sD - -o /dev/null https://cryptoserviceworker.com/api/all/spot/btc-usd -H "X-Debug:true" | grep x-debug`
x-debug: INFO: Handling: https://cryptoserviceworker.com/api/all/spot/btc-usd
DEBUG: No handlers, getting from factory
DEBUG: Found handler for /api/all/spot/btc-usd
DEBUG: ["spot","btc-usd"]
DEBUG: Getting spot from https://api.gdax.com/products/btc-usd/ticker
DEBUG: ["spot","btc-usd"]
DEBUG: Parsing spot...
INFO: GDAX response {"trade_id":45329353,"price":"6287.01000000","size":"0.03440000","bid":"6287","ask":"6287.01","volume":"9845.51680796","time":"2018-06-25T18:12:48.282000Z"}
INFO: Bitfinex response {"mid":"6283.45","bid":"6283.4","ask":"6283.5","last_price":"6283.5","low":"6068.5","high":"6341.0","volume":"28642.882017660013","timestamp":"1529950365.0694907"}
DEBUG: Evaluating request for logging
DEBUG: Executing log interceptor</code></pre>
            <p>Boom. Decoded debug info right there in the console. Ship it.</p><p>If you log stack traces in your worker with <code>logger.error(e.stack)</code>, that will also format nicely:</p>
            <pre><code>$ urldecode `curl -sD - -o /dev/null https://cryptoserviceworker.com/api/all/spot/btc-usd -H "X-Debug:true" | grep x-debug`
x-debug: INFO: Handling: https://cryptoserviceworker.com/api/all/spot/btc-usd
ERROR: Error: boom
    at Router.&lt;anonymous&gt; (worker.js:118:35)
    at step (worker.js:32:23)
    at Object.next (worker.js:13:53)
    at worker.js:7:71
    at new Promise (&lt;anonymous&gt;)
    at __awaiter (worker.js:3:12)
    at Router.handle (worker.js:111:16)
    at worker.js:48:42
    at step (worker.js:32:23)
    at Object.next (worker.js:13:53)
DEBUG: Evaluating request for logging
DEBUG: Executing log interceptor</code></pre>
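            <p>The pattern that produces a trace like the one above is just catching at the handler boundary and logging <code>e.stack</code> before returning an error result. A stripped-down sketch (the in-memory logger stands in for the Logger class earlier; <code>handleSafely</code> is an illustrative name):</p>

```typescript
// Stand-in for the Logger from earlier: collects lines in memory.
const logLines: string[] = [];
const logger = { error: (s: string) => logLines.push(`ERROR: ${s}`) };

// Catch at the boundary, log the full stack so it ends up in the
// X-Debug header, and degrade to an error result.
function handleSafely(work: () => string): string {
  try {
    return work();
  } catch (e) {
    logger.error((e as Error).stack || String(e));
    return "error";
  }
}
```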
            
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>In this post we:</p><ul><li><p>Defined a pre- and post-processing framework using Interceptors</p></li><li><p>Implemented a LogInterceptor that returns the logs generated during request processing in the X-Debug header</p></li><li><p>Decoded them in Bash</p></li></ul><p>May the logs be with you.</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <guid isPermaLink="false">1f1WHkIOBPw08d4ch1vGPR</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cryptocurrency API Gateway using Typescript+Workers]]></title>
            <link>https://blog.cloudflare.com/cryptocurrency-api-gateway-typescript-workers/</link>
            <pubDate>Fri, 29 Jun 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ If you followed part one, I’ve set up an environment to write TypeScript with tests and deploy to the Cloudflare Edge using npm run upload. In this post, I’ll expand on one Worker Recipe even further. ]]></description>
            <content:encoded><![CDATA[ <p>If you followed <a href="/p/607ad519-5652-4688-9fff-33fbb1fc9d3f/">part one</a>, I have an environment set up where I can write Typescript with tests and deploy to the Cloudflare Edge with <code>npm run upload</code>. For this post, I want to take one of the <a href="https://developers.cloudflare.com/workers/recipes/aggregating-multiple-requests/">Worker Recipes</a> further.</p><p>I'm going to build a mini HTTP request routing and handling framework, then use it to build a <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api-gateway/">gateway</a> to multiple cryptocurrency API providers. My point here is that in a single file, with no dependencies, you can quickly build pretty sophisticated logic and deploy it quickly and easily to the Edge. Furthermore, using modern Typescript with async/await and its rich type system, you can also write clean, async code.</p><p>OK, here we go...</p><p>My API will look like this:</p><table><tr><td><p>Verb</p></td><td><p>Path</p></td><td><p>Description</p></td></tr><tr><td><p>GET</p></td><td><p><code>/api/ping</code></p></td><td><p>Check the Worker is up</p></td></tr><tr><td><p>GET</p></td><td><p><code>/api/all/spot/:symbol</code></p></td><td><p>Aggregate the responses from all our configured gateways</p></td></tr><tr><td><p>GET</p></td><td><p><code>/api/race/spot/:symbol</code></p></td><td><p>Return the response of the provider who responds fastest</p></td></tr><tr><td><p>GET</p></td><td><p><code>/api/direct/:exchange/spot/:symbol</code></p></td><td><p>Pass through the request to the gateway. E.g. gdax or bitfinex</p></td></tr></table>
    <div>
      <h3>The Framework</h3>
      <a href="#the-framework">
        
      </a>
    </div>
    <p>OK, this is Typescript, I get interfaces and I'm going to use them. Here's my ultra-mini-http-routing framework definition:</p>
            <pre><code>export interface IRouter {
  route(req: RequestContextBase): IRouteHandler;
}

/**
 * A route
 */
export interface IRoute {
  match(req: RequestContextBase): IRouteHandler | null;
}

/**
 * Handles a request.
 */
export interface IRouteHandler {
  handle(req: RequestContextBase): Promise&lt;Response&gt;;
}

/**
 * Request with additional convenience properties
 */
export class RequestContextBase {
  public static fromString(str: string) {
    return new RequestContextBase(new Request(str));
  }

  public url: URL;
  constructor(public request: Request) {
    this.url = new URL(request.url);
  }
}</code></pre>
            <p>So basically all requests will go to <code>IRouter</code>. If it finds an <code>IRoute</code> that returns an <code>IRouteHandler</code>, then it will call that and pass in <code>RequestContextBase</code>, which is just the request with a parsed URL for convenience.</p><p>I stopped short of dependency injection, so here's the router implementation with the four routes we've implemented (Ping, Race, All and Direct). Each route corresponds to one of the four operations I defined in the API above and returns the corresponding <code>IRouteHandler</code>.</p>
            <pre><code>export class Router implements IRouter {
  public routes: IRoute[];

  constructor() {
    this.routes = [
      new PingRoute(),
      new RaceRoute(),
      new AllRoute(),
      new DirectRoute(),
    ];
  }

  public async handle(request: Request): Promise&lt;Response&gt; {
    try {
      const req = new RequestContextBase(request);
      const handler = this.route(req);
      // Await here so async failures are caught by this try/catch.
      return await handler.handle(req);
    } catch (e) {
      return new Response(undefined, {
        status: 500,
        statusText: `Error. ${e.message}`,
      });
    }
  }

  public route(req: RequestContextBase): IRouteHandler {
    const handler: IRouteHandler | null = this.match(req);
    if (handler) {
      logger.debug(`Found handler for ${req.url.pathname}`);
      return handler;
    }
    return new NotFoundHandler();
  }

  public match(req: RequestContextBase): IRouteHandler | null {
    for (const route of this.routes) {
      const handler = route.match(req);
      if (handler != null) {
        return handler;
      }
    }
    return null;
  }
}</code></pre>
            <p>You can see above I return a NotFoundHandler if we can't find a matching route. Its implementation is below. It's easy to see how 401, 405, 500 and all the common handlers could be implemented.</p>
            <pre><code>/**
 * 404 Not Found
 */
export class NotFoundHandler implements IRouteHandler {
  public async handle(req: RequestContextBase): Promise&lt;Response&gt; {
    return new Response(undefined, {
      status: 404,
      statusText: 'Unknown route',
    });
  }
}</code></pre>
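            <p>The PingRoute below returns a MethodNotAllowedHandler that the post never shows; by analogy with NotFoundHandler it would plausibly look like this (a reconstruction, with local type stubs so it compiles standalone):</p>

```typescript
// Stubs mirroring the post's interfaces so the sketch stands alone;
// Response is the global Fetch API class (native in the Workers runtime).
interface RequestContextBase { url: URL; }
interface IRouteHandler {
  handle(req: RequestContextBase): Promise<Response>;
}

// Plausible 405 handler, reconstructed by analogy with NotFoundHandler.
class MethodNotAllowedHandler implements IRouteHandler {
  public async handle(req: RequestContextBase): Promise<Response> {
    return new Response(undefined, {
      status: 405,
      statusText: 'Method not allowed',
    });
  }
}
```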
            <p>Now let's start with Ping. The framework separates matching a route and handling the request. Firstly the route:</p>
            <pre><code>export class PingRoute implements IRoute {
  public match(req: RequestContextBase): IRouteHandler | null {
    if (req.request.method !== 'GET') {
      return new MethodNotAllowedHandler();
    }
    if (req.url.pathname.startsWith('/api/ping')) {
      return new PingRouteHandler();
    }
    return null;
  }
}</code></pre>
            <p>Simple enough: if the URL starts with <code>/api/ping</code>, handle the request with a <code>PingRouteHandler</code>.</p>
            <pre><code>export class PingRouteHandler implements IRouteHandler {
  public async handle(req: RequestContextBase): Promise&lt;Response&gt; {
    const pong = 'pong';
    const res = new Response(pong);
    logger.info(`Responding with ${pong} and ${res.status}`);
    return res;
  }
}</code></pre>
            <p>So at this point, if you followed along with Part 1, you can do:</p>
            <pre><code>$ npm run upload
$ curl https://cryptoserviceworker.com/api/ping
pong</code></pre>
            <p>OK, next the <code>AllHandler</code>, this aggregates the responses. Firstly the route matcher:</p>
            <pre><code>export class AllRoute implements IRoute {
  public match(req: RequestContextBase): IRouteHandler | null {
    if (req.url.pathname.startsWith('/api/all/')) {
      return new AllHandler();
    }
    return null;
  }
}</code></pre>
            <p>And if the route matches, we'll handle it by farming off the requests to our downstream handlers:</p>
            <pre><code>export class AllHandler implements IRouteHandler {
  constructor(private readonly handlers: IRouteHandler[] = []) {
    if (handlers.length === 0) {
      const factory = new HandlerFactory();
      logger.debug('No handlers, getting from factory');
      this.handlers = factory.getProviderHandlers();
    }
  }

  public async handle(req: RequestContextBase): Promise&lt;Response&gt; {
    const responses = await Promise.all(
      this.handlers.map(async h =&gt; h.handle(req))
    );
    const jsonArr = await Promise.all(responses.map(async r =&gt; r.json()));
    return new Response(JSON.stringify(jsonArr));
  }
}</code></pre>
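            <p>The AllHandler leans on a HandlerFactory that isn't shown in this post. A hypothetical sketch of its shape, with one pass-through handler per provider (class names and base URLs here are illustrative only):</p>

```typescript
// Stubs mirroring the post's interfaces.
interface RequestContextBase { url: URL; }
interface IRouteHandler {
  handle(req: RequestContextBase): Promise<Response>;
}

// Hypothetical provider handler: forwards the incoming path to one
// upstream exchange.
class ProviderHandler implements IRouteHandler {
  constructor(private readonly baseUrl: string) {}

  public async handle(req: RequestContextBase): Promise<Response> {
    return fetch(`${this.baseUrl}${req.url.pathname}`);
  }
}

class HandlerFactory {
  public getProviderHandlers(): IRouteHandler[] {
    return [
      new ProviderHandler('https://api.gdax.com'),
      new ProviderHandler('https://api.bitfinex.com'),
    ];
  }
}
```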
            <p>I'm cheating a bit here because I haven't shown you the code for <code>HandlerFactory</code> or the implementation of <code>handle</code> for each one. You can look up the full source <a href="https://github.com/stevenpack/cryptoserviceworker/blob/master/src/service-worker.ts">here</a>.</p><p>Take a moment here to appreciate just what's happening. You're writing very expressive async code that in a few lines, is able to multiplex a request to multiple endpoints and aggregate the results. Furthermore, it's running in a sandboxed environment in a data center very close to your end user. <b>Edge-side code is a game changer.</b></p><p>Let's see it in action.</p>
            <pre><code>$ curl https://cryptoserviceworker.com/api/all/spot/btc-usd
[  
   {  
      "symbol":"btc-usd",
      "price":"6609.06000000",
      "utcTime":"2018-06-20T05:26:19.512000Z",
      "provider":"gdax"
   },
   {  
      "symbol":"btc-usd",
      "price":"6600.7",
      "utcTime":"2018-06-20T05:26:22.284Z",
      "provider":"bitfinex"
   }
]</code></pre>
            <p>Cool, OK, who's fastest? First, the route handler:</p>
            <pre><code>export class RaceRoute implements IRoute {
  public match(req: RequestContextBase): IRouteHandler | null {
    if (req.url.pathname.startsWith('/api/race/')) {
      return new RaceHandler();
    }
    return null;
  }
}</code></pre>
            <p>And the handler, which basically just uses <code>Promise.race</code> to pick the winner:</p>
            <pre><code>export class RaceHandler implements IRouteHandler {
  constructor(private readonly handlers: IRouteHandler[] = []) {
    // Only fall back to the factory when no handlers were injected.
    if (handlers.length === 0) {
      const factory = new HandlerFactory();
      this.handlers = factory.getProviderHandlers();
    }
  }

  public handle(req: RequestContextBase): Promise&lt;Response&gt; {
    return this.race(req, this.handlers);
  }

  public async race(
    req: RequestContextBase,
    responders: IRouteHandler[]
  ): Promise&lt;Response&gt; {
    const arr = responders.map(r =&gt; r.handle(req));
    return Promise.race(arr);
  }
}</code></pre>
            <p>So who's fastest? Tonight it's gdax.</p>
            <pre><code>curl https://cryptoserviceworker.com/api/race/spot/btc-usd
{  
   "symbol":"btc-usd",
   "price":"6607.15000000",
   "utcTime":"2018-06-20T05:33:16.074000Z",
   "provider":"gdax"
}</code></pre>
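            <p>One caveat with this approach: <code>Promise.race</code> settles with the first <i>settled</i> promise, so a provider that fails fast wins the race with its rejection. If you'd rather take the first successful response, a small helper (illustrative, not from the post) does it:</p>

```typescript
// Resolve with the first promise to fulfil; reject only if all reject.
// (Equivalent to Promise.any, written out for older runtimes.)
function firstFulfilled<T>(promises: Array<Promise<T>>): Promise<T> {
  return new Promise((resolve, reject) => {
    let pending = promises.length;
    for (const p of promises) {
      p.then(resolve, () => {
        pending -= 1;
        if (pending === 0) {
          reject(new Error('all providers failed'));
        }
      });
    }
  });
}
```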
            
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>Using Typescript+Workers, in &lt; 500 lines of code, we were able to:</p><ul><li><p>Define an interface for a mini HTTP routing and handling framework</p></li><li><p>Provide a basic implementation of that framework</p></li><li><p>Build routes and handlers for the Ping, All, Race and Direct operations</p></li><li><p>Deploy it to 160+ data centers with <code>npm run upload</code></p></li></ul><p>Stay tuned for more, and PRs welcome, particularly for more providers.</p><p><i>If you have a worker you'd like to share, or want to check out workers from other Cloudflare users, visit the </i><a href="https://community.cloudflare.com/tags/recipe-exchange"><i>“Recipe Exchange”</i></a><i> in the Workers section of the </i><a href="https://community.cloudflare.com/c/developers/workers"><i>Cloudflare Community Forum</i></a><i>.</i></p> ]]></content:encoded>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[API Gateway]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6vJMJrpNtXwRzr8sYBkXpj</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bootstrapping a Typescript Worker]]></title>
            <link>https://blog.cloudflare.com/bootstrapping-a-typescript-worker/</link>
            <pubDate>Wed, 27 Jun 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Workers allows you to quickly deploy Javascript code to our 150+ data centers around the world and execute very close to your end-user. The edit/compile/debug story is already pretty amazing using the Workers IDE with integrated Chrome Dev Tools.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare <a href="https://developers.cloudflare.com/workers/about/">Workers</a> allows you to quickly deploy Javascript code to our 150+ data centers around the world and execute very close to your end-user. The edit/compile/debug story is already pretty amazing using the <a href="https://dash.cloudflare.com/workers">Workers IDE</a> with integrated Chrome Dev Tools. However, for those hankering for some <a href="https://www.typescriptlang.org/">Typescript</a> and an IDE with static analysis, autocomplete and all that jazz, follow along to see one way to set up a Typescript project with <a href="https://www.jetbrains.com/webstorm/">Webstorm</a> and <code>npm run upload</code> your code straight to the edge.</p>
    <div>
      <h3>Pre Requisites</h3>
      <a href="#pre-requisites">
        
      </a>
    </div>
    <p>My environment looks like this:</p><ul><li><p>macOS High Sierra</p></li><li><p>node v8.11.3</p></li><li><p>npm v5.6.0</p></li><li><p>Webstorm v2018.1.3</p></li></ul><p>You'll also need a <a href="https://support.cloudflare.com/hc/en-us/articles/201720164">Cloudflare domain</a> and to <a href="https://www.cloudflare.com/a/workers">activate Workers</a> on it.</p><p>I'll be using cryptoserviceworker.com</p><p>I'll also use Yeoman to build our initial scaffolding. Install it with <code>npm install yo -g</code></p>
    <div>
      <h2>Getting Started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Let's start with a minimal node app with a "hello world" class and a test.</p>
            <pre><code>mkdir cryptoserviceworker &amp;&amp; cd cryptoserviceworker
npm install generator-node-typescript -g
yo node-typescript</code></pre>
            <p>That generator creates the following directory structure:</p>
            <pre><code>drwxr-xr-x   16 steve  staff     512 Jun 18 20:40 .
drwxr-xr-x   10 steve  staff     320 Jun 18 20:35 ..
-rw-r--r--    1 steve  staff     197 Jun 18 20:40 .editorconfig
-rw-r--r--    1 steve  staff      96 Jun 18 20:40 .gitignore
-rw-r--r--    1 steve  staff     147 Jun 18 20:40 .npmignore
-rw-r--r--    1 steve  staff     267 Jun 18 20:40 .travis.yml
drwxr-xr-x    5 steve  staff     160 Jun 18 20:40 .vscode
-rw-r--r--    1 steve  staff    1066 Jun 18 20:40 LICENSE
-rw-r--r--    1 steve  staff    2071 Jun 18 20:40 README.md
drwxr-xr-x    4 steve  staff     128 Jun 18 20:40 __tests__
drwxr-xr-x  479 steve  staff   15328 Jun 18 20:40 node_modules
-rw-r--r--    1 steve  staff  244624 Jun 18 20:40 package-lock.json
-rw-r--r--    1 steve  staff    1506 Jun 18 20:40 package.json
drwxr-xr-x    4 steve  staff     128 Jun 18 20:40 src
-rw-r--r--    1 steve  staff     454 Jun 18 20:40 tsconfig.json
-rw-r--r--    1 steve  staff      73 Jun 18 20:40 tslint.json</code></pre>
            <p>It includes default settings, a task runner, an initial Typescript config and more. We won't use all of it, but it's a good starting point.</p>
    <div>
      <h2>First Test</h2>
      <a href="#first-test">
        
      </a>
    </div>
    <p>If we take a look at the contents of <code>src/greeter.ts</code>, we'll see it's a very Typescript implementation of hello world.</p>
            <pre><code>$ cat greeter.ts 
export class Greeter {
  private greeting: string;

  constructor(message: string) {
    this.greeting = message;
  }

  public greet(): string {
    return `Bonjour, ${this.greeting}!`;
  }
}</code></pre>
            <p>Because Yeoman has set up our test infrastructure, we should be able to exercise the code using the greeter test in <code>__tests__/greeter-spec.ts</code>:</p>
            <pre><code>import { Greeter } from '../src/greeter';

test('Should greet with message', () =&gt; {
  const greeter = new Greeter('friend');
  expect(greeter.greet()).toBe('Bonjour, friend!');
});</code></pre>
            <p>This generator uses jest. It's installed locally, but let's install it globally for convenience and run it!</p>
            <pre><code>npm install jest -g
jest
 PASS  __tests__/greeter-spec.ts
 PASS  __tests__/index-spec.ts

Test Suites: 2 passed, 2 total
Tests:       2 passed, 2 total
Snapshots:   0 total
Time:        1.867s
Ran all test suites.
</code></pre>
            <p>OK, so we have a testable Typescript template. Let's fire up Webstorm and write some code!</p>
    <div>
      <h3>Hello, World with Workers</h3>
      <a href="#hello-world-with-workers">
        
      </a>
    </div>
    <p>A hello world implementation in Typescript might look something like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iZzB5lable4vXw45e7HjV/7b56613bef3fa2565b776631cae4deda/typescript-hello-world.png" />
            
            </figure><p>Webstorm doesn't like it as you can see from the red error highlights. Even though Request and Response are part of the Service Worker API and will be available to us in the V8 runtime, Typescript doesn't know about them yet. <a href="https://www.npmjs.com/package/node-fetch">node-fetch</a> provides an implementation for node, so let’s install that.</p>
            <pre><code>npm install node-fetch
npm install @types/node-fetch</code></pre>
            <p>That made Webstorm happier. It’s been able to locate the type definitions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/79lrvSbwWcbZMWH2Q93vJk/bf5dd88ad97663d1be28c11f214916bb/typescript-hello-world2.png" />
            
            </figure><p>Now let's write a test. Create a new file <code>__tests__/worker-spec.ts</code>:</p>
            <pre><code>import { Request } from "node-fetch";
import { Worker } from "../src/worker";

test('Should say hello', () =&gt; {

  const worker = new Worker();
  const request = new Request("https://cryptoserviceworker.com/");
  const response = worker.handle(request);
  expect(response.status).toEqual(200);
  expect(response.body).toEqual("Hello, world!");
});</code></pre>
            <p>And delete the other files and tests so we're just working with <code>worker.ts</code> and <code>worker-spec.ts</code>.</p><p>Run <code>jest</code>:</p>
            <pre><code> PASS  __tests__/worker-spec.ts
 PASS  __tests__/worker-spec.js

Test Suites: 2 passed, 2 total
Tests:       2 passed, 2 total
Snapshots:   0 total
Time:        1.213s, estimated 2s</code></pre>
            <p>OK, so our test passed, but notice it ran both the Typescript and the Javascript? Let's restrict it to just the Typescript. Go into package.json, locate jest, and change</p><p><code>"testRegex": "(/__tests__/.*|\\.(test|spec))\\.(ts|js)$",</code> to <code>"testRegex": "(/__tests__/.*)\\-spec.ts$"</code></p><p>Run it again:</p>
            <pre><code>jest
 PASS  __tests__/worker-spec.ts
  ✓ Should say hello (8ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        1.123s, estimated 2s
Ran all test suites.
</code></pre>
            <p>Better. OK, ship it!</p>
    <div>
      <h3>From Local Typescript to Worker-Compatible Javascript</h3>
      <a href="#from-local-typescript-to-worker-compatible-javascript">
        
      </a>
    </div>
    <p>Let's take a look at <code>src/worker.js</code> to see how our Typescript transpiled.</p>
            <pre><code>"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
const node_fetch_1 = require("node-fetch");
class Worker {
    handle(request) {
        return new node_fetch_1.Response('Hello, world!');
    }
}
exports.Worker = Worker;</code></pre>
            <p>Actually, let's paste it into the Cloudflare Workers IDE and try it for real. Go to your <a href="https://dash.cloudflare.com">dashboard</a>, click the Workers icon and then "Launch Editor".</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6H2ZEwr9QfXdCxE7Ai7kOs/a5d2edbd452afcc5898a3adcb2ba211f/workers-dashboard.png" />
            
            </figure><p>First things first, check the canonical Hello World implementation works.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YHeN0MBVI3oGNznabkxvl/53eb88bb32ed71bb81fdd47d5a973f83/hello-world-ide.png" />
            
            </figure><p>Awesome, now let's replace it with our "transpiled from Typescript" version:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lIHef0I92E8ZoIlza2hdT/6dfb9fb0677b08341719b38552795cfc/fail1.png" />
            
            </figure><p>Fail. OK, so the out-of-the-box "transpiled from Typescript" output is not going to work. Let's make the changes necessary to get it running manually, then incorporate them into the build process.</p><p><b>Error #1: Uncaught ReferenceError: exports is not defined at line 2</b></p><p>That's easy enough; let's add <code>var exports = {}</code>. Update Preview.</p><p><b>Error #2: Uncaught ReferenceError: require is not defined at line 4</b></p><p>True, we're running in V8 on the Cloudflare Edge and the only code is what we uploaded. There are no "node_modules" to include. Plus, that line was only for dev anyway. Remove it. Update Preview.</p><p><b>Error #3: No event handlers were registered. This script does nothing.</b></p><p>Right, we need to invoke the code. Let's add a snippet to the top of the file to actually invoke our worker.</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  let worker = new exports.Worker();
  event.respondWith(worker.handle(event.request));
})</code></pre>
            <p><b>Error #4: Uncaught ReferenceError: </b><code><b>node_fetch_1</b></code><b> is not defined</b></p><p>Right, we removed that because <a href="https://developer.mozilla.org/en-US/docs/Web/API/Request">Response</a> is a native object when it runs in the context of a worker. So remove the <code>node_fetch_1</code> prefix.</p><p><b>Error #5: exports.__esModule = true does nothing</b></p><p>So let's remove that.</p><p>Success!!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6HVd2qR7Uhmbrfnd1doTZN/6461c7eb9afbd15ca4e27e02ccbb8cd3/hello-world-success.png" />
            
            </figure><p>OK, so with some massaging, we got a Worker transpiled from Typescript to execute. We:</p><ul><li><p>Added a line to create an exports object</p></li><li><p>Removed the dev dependency on "node_fetch"</p></li><li><p>Removed the exports.__esModule = true line</p></li></ul><p>Let's add that to our build process so we can have "Worker-ready" Javascript every time we make a change to our Typescript.</p>
    <div>
      <h3>Grunt</h3>
      <a href="#grunt">
        
      </a>
    </div>
    <p>I'm going to use Grunt to automate that. Here's my new <code>worker.ts</code></p>
            <pre><code>// --BEGIN PREAMBLE--
/// //Invoke worker
/// var exports = {};
/// addEventListener('fetch', event =&gt; {
///   event.respondWith(fetchAndApply(event.request))
/// });
///
/// async function fetchAndApply(request) {
///   let worker = new exports.Worker();
///   return worker.handle(request);
/// }
// --END PREAMBLE--

// --BEGIN COMMENT--
// mock the methods and objects that will be available in the browser
import { Request, Response } from 'node-fetch';
// --END COMMENT--
export class Worker {
  public handle(request: Request) {
    return new Response("Hello, world!")
  }
}</code></pre>
            <p>I want to uncomment the preamble to invoke our script, comment out the dev dependencies, and remove the <code>exports.__esModule</code> line. Let's install Grunt and a text-replace module, and create a <code>Gruntfile.js</code>:</p>
            <pre><code>npm install grunt-cli -g
npm install grunt --save-dev
npm install grunt-replace --save-dev
touch Gruntfile.js</code></pre>
            <p>My <code>Gruntfile.js</code> looks like this</p>
            <pre><code>module.exports = function (grunt) {

  grunt.loadNpmTasks('grunt-replace');
  grunt.initConfig({
    replace: {
      comments: {
        options: {
          patterns: [
            {
              /* Comment imports for node during dev */
              match: /--BEGIN COMMENT--[\s\S]*?--END COMMENT--/g,
              replacement: 'Dev environment code block removed by build'
            },
            {
              /* Uncomment preamble for production to process the request */
              match: /\/\/\//mg,
              replacement: ''
            }
          ]
        },
        files: [
          { expand: true, flatten: true, src: ['src/worker.ts'], dest: 'build/' }
        ]
      },
      exports: {
        //remove the exports line that typescript includes without an option to
        //suppress, but is not in the v8 env that workers run in.
        options: {
          patterns: [
            {
              match: /exports.__esModule = true;/g,
              replacement: "// exports line commented by build"
            }
          ]
        },
        files: [
          { expand: true, flatten: true, src: ['build/worker.js'], dest: 'build/' }
        ]
      }
    }
  });

  grunt.registerTask('prepare-typescript', 'replace:comments');
  grunt.registerTask('fix-export', 'replace:exports');
};
</code></pre>
            <p>There are two tasks. The first is the comment/uncomment step that we want to run before our Typescript is transpiled.</p><p>The second removes the <code>exports.__esModule = true;</code> line.</p>
            <pre><code>$ grunt prepare-typescript
Running "replace:comments" (replace) task
&gt;&gt; 11 replacements in 1 file.

Done.</code></pre>
            <p>If we open <code>build/worker.ts</code>, we see this:</p>
            <pre><code>// --BEGIN PREAMBLE--
 //Invoke worker
 var exports = {};
 addEventListener('fetch', event =&gt; {
   event.respondWith(fetchAndApply(event.request))
 });

 async function fetchAndApply(request) {
   let worker = new exports.Worker();
   return worker.handle(request);
 }
// --END PREAMBLE--

// Dev environment code block removed by build
export class Worker {
  public handle(request: Request) {
    return new Response("Hello, world!")
  }
}</code></pre>
            <p>Opening <code>build/worker.js</code> you'll see a whole lot of code generated for handling async functions. That's because we're using the <code>async</code> keyword in the preamble.</p>
            <pre><code>"use strict";
var __awaiter = (this &amp;&amp; this.__awaiter) || function (thisArg, _arguments, P, generator) {
    return new (P || (P = Promise))(function (resolve, reject) {
        function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
        function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
        function step(result) { result.done ? resolve(result.value) : new P(function (resolve) { resolve(result.value); }).then(fulfilled, rejected); }
        step((generator = generator.apply(thisArg, _arguments || [])).next());
    });
};
var __generator = (this &amp;&amp; this.__generator) || function (thisArg, body) {
    var _ = { label: 0, sent: function() { if (t[0] &amp; 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;
    return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" &amp;&amp; (g[Symbol.iterator] = function() { return this; }), g;
    function verb(n) { return function (v) { return step([n, v]); }; }
    function step(op) {
        if (f) throw new TypeError("Generator is already executing.");
        while (_) try {
            if (f = 1, y &amp;&amp; (t = y[op[0] &amp; 2 ? "return" : op[0] ? "throw" : "next"]) &amp;&amp; !(t = t.call(y, op[1])).done) return t;
            if (y = 0, t) op = [0, t.value];
            switch (op[0]) {
                case 0: case 1: t = op; break;
                case 4: _.label++; return { value: op[1], done: false };
                case 5: _.label++; y = op[1]; op = [0]; continue;
                case 7: op = _.ops.pop(); _.trys.pop(); continue;
                default:
                    if (!(t = _.trys, t = t.length &gt; 0 &amp;&amp; t[t.length - 1]) &amp;&amp; (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }
                    if (op[0] === 3 &amp;&amp; (!t || (op[1] &gt; t[0] &amp;&amp; op[1] &lt; t[3]))) { _.label = op[1]; break; }
                    if (op[0] === 6 &amp;&amp; _.label &lt; t[1]) { _.label = t[1]; t = op; break; }
                    if (t &amp;&amp; _.label &lt; t[2]) { _.label = t[2]; _.ops.push(op); break; }
                    if (t[2]) _.ops.pop();
                    _.trys.pop(); continue;
            }
            op = body.call(thisArg, _);
        } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }
        if (op[0] &amp; 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };
    }
};
exports.__esModule = true;
// --BEGIN PREAMBLE--
//Invoke worker
var exports = {};
addEventListener('fetch', function (event) {
    event.respondWith(fetchAndApply(event.request));
});
function fetchAndApply(request) {
    return __awaiter(this, void 0, void 0, function () {
        var worker;
        return __generator(this, function (_a) {
            worker = new exports.Worker();
            return [2 /*return*/, worker.handle(request)];
        });
    });
}
// --END PREAMBLE--
// Dev environment code block removed by build
var Worker = /** @class */ (function () {
    function Worker() {
    }
    Worker.prototype.handle = function (request) {
        return new Response("Hello, world!");
    };
    return Worker;
}());
exports.Worker = Worker;</code></pre>
            <p>Now let's remove that <code>exports.__esModule = true</code> line.</p><p><code>grunt fix-export</code></p><p>In its place, <code>build/worker.js</code> now contains <code>// exports line commented by build</code>.</p>
    <div>
      <h3>Put it together</h3>
      <a href="#put-it-together">
        
      </a>
    </div>
    <p>I just want to run <code>npm run build</code> and get Worker-friendly JavaScript. Let's modify <code>package.json</code> to do just that. Change</p><p><code>"build": "tsc --pretty"</code> to <code>"build": "grunt prepare-typescript &amp;&amp; tsc build/*.ts --pretty --skipLibCheck; grunt fix-export",</code></p><p>And run it.</p><p><code>npm run build</code> will result in:</p>
            <pre><code>"use strict";
var __awaiter = (this &amp;&amp; this.__awaiter) || function (thisArg, _arguments, P, generator) {
    return new (P || (P = Promise))(function (resolve, reject) {
        function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
        function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
        function step(result) { result.done ? resolve(result.value) : new P(function (resolve) { resolve(result.value); }).then(fulfilled, rejected); }
        step((generator = generator.apply(thisArg, _arguments || [])).next());
    });
};
var __generator = (this &amp;&amp; this.__generator) || function (thisArg, body) {
    var _ = { label: 0, sent: function() { if (t[0] &amp; 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;
    return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" &amp;&amp; (g[Symbol.iterator] = function() { return this; }), g;
    function verb(n) { return function (v) { return step([n, v]); }; }
    function step(op) {
        if (f) throw new TypeError("Generator is already executing.");
        while (_) try {
            if (f = 1, y &amp;&amp; (t = y[op[0] &amp; 2 ? "return" : op[0] ? "throw" : "next"]) &amp;&amp; !(t = t.call(y, op[1])).done) return t;
            if (y = 0, t) op = [0, t.value];
            switch (op[0]) {
                case 0: case 1: t = op; break;
                case 4: _.label++; return { value: op[1], done: false };
                case 5: _.label++; y = op[1]; op = [0]; continue;
                case 7: op = _.ops.pop(); _.trys.pop(); continue;
                default:
                    if (!(t = _.trys, t = t.length &gt; 0 &amp;&amp; t[t.length - 1]) &amp;&amp; (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }
                    if (op[0] === 3 &amp;&amp; (!t || (op[1] &gt; t[0] &amp;&amp; op[1] &lt; t[3]))) { _.label = op[1]; break; }
                    if (op[0] === 6 &amp;&amp; _.label &lt; t[1]) { _.label = t[1]; t = op; break; }
                    if (t &amp;&amp; _.label &lt; t[2]) { _.label = t[2]; _.ops.push(op); break; }
                    if (t[2]) _.ops.pop();
                    _.trys.pop(); continue;
            }
            op = body.call(thisArg, _);
        } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }
        if (op[0] &amp; 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };
    }
};
// exports line commented by build
// --BEGIN PREAMBLE--
//Invoke worker
var exports = {};
addEventListener('fetch', function (event) {
    event.respondWith(fetchAndApply(event.request));
});
function fetchAndApply(request) {
    return __awaiter(this, void 0, void 0, function () {
        var worker;
        return __generator(this, function (_a) {
            worker = new exports.Worker();
            return [2 /*return*/, worker.handle(request)];
        });
    });
}
// --END PREAMBLE--
// Dev environment code block removed by build
var Worker = /** @class */ (function () {
    function Worker() {
    }
    Worker.prototype.handle = function (request) {
        return new Response('Hello, world!');
    };
    return Worker;
}());
exports.Worker = Worker;</code></pre>
            <p>Paste that into the Workers IDE... works first time.</p>
    <div>
      <h3>Automated upload</h3>
      <a href="#automated-upload">
        
      </a>
    </div>
    <p>It's going to get old copying from our IDE to the Web IDE every time we want to test a change, and at some point we're going to want to auto-deploy from CI. Thankfully there's the <a href="https://developers.cloudflare.com/workers/api/">Workers Configuration API</a>, which makes it very simple to upload a Worker automatically:</p><p><code>curl -X PUT "https://api.cloudflare.com/client/v4/zones/:zone_id/workers/script" -H "X-Auth-Email:YOUR_CLOUDFLARE_EMAIL" -H "X-Auth-Key:ACCOUNT_AUTH_KEY" -H "Content-Type:application/javascript" --data-binary "@PATH_TO_YOUR_WORKER_SCRIPT"</code></p><p>OK, so we need our zone ID, Cloudflare email, auth key and the path to the Worker script. I'm going to create a Grunt task that uses the <a href="https://www.npmjs.com/package/dotenv">dotenv</a> package to load config from a <code>.env</code> file or environment variables.</p><p>Create a <code>.env</code> file that looks like this:</p>
            <pre><code>CF_WORKER_ZONE_ID=xxxxxxxxxxxxxxxxxxx
CF_WORKER_EMAIL=steve@example.com
CF_WORKER_AUTH_KEY=xxxxxxxxxxxxxxxxxx
CF_WORKER_PATH=build/worker.js</code></pre>
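<p>The dotenv package simply turns each <code>KEY=value</code> line into an entry on <code>process.env</code>. Here's a hand-rolled sketch of the idea (the real package also handles quoting, comments and more):</p>

```javascript
// Minimal illustration of what dotenv does with a .env file like the one
// above: split each line on the first "=" and collect key/value pairs.
// (dotenv itself assigns the results onto process.env.)
function parseEnv(text) {
  const out = {};
  for (const line of text.split("\n")) {
    const i = line.indexOf("=");
    if (i > 0) out[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return out;
}

const conf = parseEnv(
  "CF_WORKER_EMAIL=steve@example.com\n" +
  "CF_WORKER_PATH=build/worker.js"
);
console.log(conf.CF_WORKER_PATH); // build/worker.js
```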
            <p>To locate your zone ID and auth key, go to the dashboard, select your zone and click the "Overview" icon.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/39GyjeFWomLYOQ1LUjNg5E/b1625fcce903fd56d053d034074f9d71/overview.png" />
            
            </figure><p>The zone ID is right there, then click "Get API key" and choose the "Global API Key" to get the Auth Key.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3o5LTjxiyp74BEDsmNLt4p/a91ea533cff106348d4fafb4845cdbdc/api-key.png" />
            
            </figure><p>Fill out your .env with those values and then add the following to your Gruntfile which will:</p><ul><li><p>Read your config</p></li><li><p>Upload to Cloudflare</p></li><li><p>Parse any success or error messages.</p></li></ul>
            <pre><code>// Dependencies shared by the upload task and the helper functions
  // below. Note: the Gruntfile also needs the 'request' package
  // (npm install request --save-dev).
  const fs = require('fs');
  const request = require('request');
  const log = console;

  grunt.registerTask('upload-worker', 'Uploads workers to Cloudflare', function(path) {

    require('dotenv').config();

    const done = this.async();
    const conf = readConfig();
    path = path || grunt.option('path') || process.env.CF_WORKER_PATH;
    if (!path) {
      fail("path is required");
    }
    if (!fs.existsSync(path)) {
      fail(`path not found ${path}`);
    }

    let script = fs.readFileSync(path);
    log.info("Uploading...");
    let url = `https://api.cloudflare.com/client/v4/zones/${conf.zoneId}/workers/script`;
    let options = {
      url: url,
      method: 'PUT',
      headers: {
        'Content-Type': 'application/javascript'
      },
      body: script
    };
    invokeApi(options, conf, done);
  });

  function invokeApi(options, conf, done) {

    // Add authentication to the request
    options.headers = options.headers || {};
    Object.assign(options.headers, {
      'X-Auth-Email': conf.email,
      'X-Auth-Key': conf.apiKey,
    });

    request(options, function(error, response) {
      try {
        if (error) {
          // On a transport error, `response` may be undefined.
          log.error(error);
          fail(`API failure. error: ${error}`);
          done();
          return;
        }
        let body = JSON.parse(response.body);
        if (body) {
          logResult(body);
        }
        done();
      } catch (e) {
        fail(`Unhandled error. ${e}`);
        done();
      }
    });
  }

  function logResult(body) {
    body.success ? log.info("Status: Success") : log.error("Status: Failed");
    let errors = body.errors || [];
    if (errors) {
      log.info(` Errors: ${errors.length}`);
      for (let e of errors) {
        log.error(` Code: ${e.code} Message: ${e.message}`);
      }
    }
    let messages = body.messages || [];
    if (messages) {
      log.info(` Messages ${messages.length}`);
      for (let msg of messages) {
        log.info(` ${msg}`);
      }
    }
    let result = body.result;
    log.info(" Result");
    log.info(` ${JSON.stringify(result, null, 2)}`);
  }

  function readConfig() {
    let zoneId = grunt.option('zoneId') || process.env.CF_WORKER_ZONE_ID;
    let email = grunt.option('email') || process.env.CF_WORKER_EMAIL;
    let apiKey = grunt.option('apiKey') || process.env.CF_WORKER_AUTH_KEY;

    log.debug("zoneID: " + zoneId);
    log.debug("email: " + email);
    log.debug("apiKey: " + "*".repeat((apiKey || "").length));

    if (!zoneId || !email || !apiKey) {
      fail("zone id, cloudflare email and api key are required");
    }
    return {
      zoneId: zoneId,
      email: email,
      apiKey: apiKey
    }
  }

  function fail(message) {
    grunt.fail.fatal(message);
  }</code></pre>
            <p>Finally, let's add a new script to <code>package.json</code> so we can just <code>npm run upload</code> any time we update our Worker.</p><p><code>"upload": "grunt upload-worker"</code></p>
            <pre><code>npm run upload

Running "upload-worker" task
zoneID: **************
email: steve@example.com
apiKey: *************************************
Uploading...
Status: Success
 Errors: 0
 Messages 0
 Result
 {
  "script"...
 }</code></pre>
            <p>Voila! Script uploaded. OK, so if it's uploaded, we can call it remotely:</p><p><code>$ curl https://cryptoserviceworker.com/hello</code></p><p>Hmmm... nothing. Ah, we haven't actually configured Workers to route any requests to our Worker. You can do this via the API, but since it's a one-off, I'll do it in the web IDE.</p>
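<p>(A hypothetical aside, not from the original post: if you later want to script the route too, the <code>invokeApi</code> helper above can be reused. The <code>workers/routes</code> endpoint and payload shape below are assumptions based on the current API docs, so verify them before relying on this.)</p>

```javascript
// Hypothetical sketch: build the request options for creating a route via
// the Workers API, in the same shape the upload task passes to invokeApi().
// The "workers/routes" endpoint and JSON body are assumptions, not taken
// from this post — check the current Cloudflare API docs.
function buildRouteOptions(conf, pattern) {
  return {
    url: `https://api.cloudflare.com/client/v4/zones/${conf.zoneId}/workers/routes`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ pattern: pattern }),
  };
}

// e.g. invokeApi(buildRouteOptions(conf, "cryptoserviceworker.com/*"), conf, done);
const options = buildRouteOptions(
  { zoneId: "xxxxxxxx" },
  "cryptoserviceworker.com/hello*"
);
console.log(options.url);
```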
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7cOmAOF6ivdau7cpgr8VEF/6fd51e85e26d9d0e7309e599c677a78c/routes.png" />
            
            </figure><p>And try again:</p>
            <pre><code>$ curl https://cryptoserviceworker.com/hello
Hello, world!</code></pre>
            <p>Success! OK, so to recap, we've:</p><ul><li><p>Bootstrapped a TypeScript project using Node.js and WebStorm</p></li><li><p>Written a "Hello, World" Worker in TypeScript</p></li><li><p>Set up build tasks to modify the code for Workers</p></li><li><p>Automated uploads to the Cloudflare edge with <code>npm run upload</code></p></li><li><p>...</p></li><li><p>Profit</p></li></ul><hr /><p><i>If you have a worker you'd like to share, or want to check out workers from other Cloudflare users, visit the </i><a href="https://community.cloudflare.com/tags/recipe-exchange"><i>“Recipe Exchange”</i></a><i> in the Workers section of the </i><a href="https://community.cloudflare.com/c/developers/workers"><i>Cloudflare Community Forum</i></a><i>.</i></p> ]]></content:encoded>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">1t6nsZnipssFhdwh63UZon</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Argo Tunnel with Rust+Raspberry Pi]]></title>
            <link>https://blog.cloudflare.com/cloudflare-argo-tunnel-with-rust-and-raspberry-pi/</link>
            <pubDate>Fri, 06 Apr 2018 14:00:00 GMT</pubDate>
            <description><![CDATA[ Serving content to the world from a Rust web server running on a Raspberry Pi in your home, with a Cloudflare Argo Tunnel. ]]></description>
            <content:encoded><![CDATA[ <p>Yesterday Cloudflare launched <a href="https://developers.cloudflare.com/argo-tunnel/">Argo Tunnel</a>. In the words of the product team:</p><blockquote><p>Argo Tunnel exposes applications running on your local web server, on any network with an Internet connection, without adding DNS records or configuring a firewall or router. It just works.</p></blockquote><p>Once I grokked this, the first thing that came to mind was that I could actually use one of my Raspberry Pis sitting around to serve a website, without:</p><ul><li><p>A flaky DDNS running on my router</p></li><li><p>Exposing my home network to the world</p></li><li><p>A cloud VM</p></li></ul><p>Ooooh... so exciting.</p>
    <div>
      <h3>The Rig</h3>
      <a href="#the-rig">
        
      </a>
    </div>
    <p>I'll assume you already have a Raspberry Pi with Raspbian on it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/yhfAIyjzM1fchLSHmBBLg/f32e891d40339d5d66139573d9c0e17b/rig.JPG.jpeg" />
            
            </figure><p>Plug the Pi into your router. It should now have an IP address. Look that up in your router’s admin UI:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nD8bAb7meAZ3X44R4o8YJ/632c6a588761b86e7c7be9f9e87f8c0e/devices.png" />
            
            </figure><p>OK, that's promising. Let's connect to that IP using the default pi/raspberry credentials:</p>
            <pre><code>$ ssh 192.168.8.26 -l pi
pi@192.168.8.26's password: 

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Mar 18 23:24:11 2018 from stevens-air-2.lan
pi@raspberrypi:~ $ </code></pre>
            <p>We're in!</p><p><b>Pro tip: a quick way to figure out which model you have is:</b></p>
            <pre><code>pi@raspberrypi:~ $ cat /proc/cpuinfo | grep 'Revision' | awk '{print $3}' | sed 's/^1000//'
a22082</code></pre>
            <p>Then look up the value in the <a href="https://elinux.org/RPi_HardwareHistory">Raspberry Pi revision history</a>. I have a Raspberry Pi 3 Model B.</p>
    <div>
      <h3>Internet connectivity</h3>
      <a href="#internet-connectivity">
        
      </a>
    </div>
    <p>OK, so we have a Pi connected to our router. Let's make 100% sure it can connect to the Internet.</p>
            <pre><code>pi@raspberrypi:~ $ curl -I https://www.cloudflare.com
HTTP/2 200
date: Tue, 20 Mar 2018 22:54:20 GMT
content-type: text/html; charset=utf-8
set-cookie: __cfduid=dfb9c369ae12fe6eace48ed9b51aedbb01521586460; expires=Wed, 20-Mar-19 22:54:20 GMT; path=/; domain=.cloudflare.com; HttpOnly
x-powered-by: Express
cache-control: no-cache
x-xss-protection: 1; mode=block
strict-transport-security: max-age=15780000; includeSubDomains
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
served-in-seconds: 0.025
set-cookie: __cflb=3128081942; path=/; expires=Wed, 21-Mar-18 21:54:20 GMT
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 3febc2914beb7f06-SFO-DOG</code></pre>
            <p>That first line, <code>HTTP/2 200</code>, is the OK status code, which is enough to tell us we can connect out to the Internet. Normally this wouldn't be particularly exciting, as it's allowing connections <b>in</b> that causes problems. That's the promise of Argo Tunnel, however: it says on the tin that we don't need to poke any firewall holes or configure any DNS. Big claim; let's test it.</p>
    <div>
      <h3>Install the Agent</h3>
      <a href="#install-the-agent">
        
      </a>
    </div>
    <p>Go to <a href="https://developers.cloudflare.com/argo-tunnel/downloads/">https://developers.cloudflare.com/argo-tunnel/downloads/</a> to get the URL for the ARM build for your Pi. At the time of writing it was <a href="https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz">https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz</a>.</p>
            <pre><code>$ wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz
Resolving bin.equinox.io (bin.equinox.io)... 54.243.137.45, 107.22.233.132, 50.19.252.69, ...
Connecting to bin.equinox.io (bin.equinox.io)|54.243.137.45|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5293773 (5.0M) [application/octet-stream]
Saving to: ‘cloudflared-stable-linux-arm.tgz’
...</code></pre>
            <p>Untar it</p>
            <pre><code>$ mkdir argo-tunnel
$ tar -xvzf cloudflared-stable-linux-arm.tgz -C ./argo-tunnel
cloudflared
$ cd argo-tunnel</code></pre>
            <p>Check you can execute it.</p>
            <pre><code>$ ./cloudflared --version
cloudflared version 2018.3.0 (built 2018-03-02-1820 UTC)</code></pre>
            <p>Looks OK. Now, we're hoping that the agent will magically connect from the Pi out to the nearest Cloudflare POP. We obviously want that to be secure. Furthermore, we're expecting that when a request comes inbound, it magically gets routed through Cloudflare's network and back to my Raspberry Pi.</p><p>Seems unlikely, but let’s have faith. Here is my mental model of what's happening:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2yhnPd0ciTQSUCgu03CHu7/b5360d9194e28944ed230e6cb5e19fc2/Argo-Tunnel-Diagram.png" />
            
            </figure><p>So let's create that secure tunnel. I guess we need some sort of certificate or credentials...</p>
            <pre><code>$ ./cloudflared login</code></pre>
            <p>You'll see output in the command window similar to this:</p>
            <pre><code>A browser window should have opened at the following URL:

https://www.cloudflare.com/a/warp?callback=&lt;some token&gt;

If the browser failed to open, open it yourself and visit the URL above.</code></pre>
            <p>Our headless Pi doesn't have a web browser, so let's copy the URL from the console into the browser on our host dev machine.</p><p>This part assumes you already have a domain on Cloudflare. If you don't, go to the <a href="https://support.cloudflare.com/hc/en-us/articles/201720164">setup guide</a> to get started.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NdfPejq5R5wFIBaw6fjmx/04998230fa9a9fae7cef05e0f282cc43/authorize-choose-domain.png" />
            
            </figure><p>We're being asked which domain we want this tunnel to sit behind. I've chosen <b>pacman.wiki</b>. Click Authorize.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2e3js21yw4OkiapEadWalz/bbe4e38f7dcffae2b6f3a98395d5ba9b/authorize-confirm.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49TShvgc3EHdZ9O1a8s1Be/2a346df5abf86d07f15e2f8f271a2619/authorize-complete.png" />
            
            </figure><p>You should now see this back on your Pi:</p>
            <pre><code>You have successfully logged in.
If you wish to copy your credentials to a server, they have been saved to:
/home/pi/.cloudflared/cert.pem</code></pre>
            <p>Aha! That answers how the tunnel gets secured. The agent has created a certificate and will use that to secure the connection back to Cloudflare. Now let's create the tunnel and serve some content!</p><p><code>$ cloudflared --hostname [hostname] --hello-world</code></p><p><b>hostname</b> is a fully-qualified domain name under the domain you chose to authorize for Argo Tunnel earlier. I'm going to use <b>tunnel.pacman.wiki</b>.</p>
            <pre><code>$ ./cloudflared --hostname tunnel.pacman.wiki --hello-world
INFO[0002] Proxying tunnel requests to https://127.0.0.1:46727 
INFO[0000] Starting Hello World server at 127.0.0.1:53030 
INFO[0000] Starting metrics server                       addr="127.0.0.1:53031"
INFO[0005] Connected to LAX                             
INFO[0010] Connected to SFO-DOG                         
INFO[0012] Connected to LAX                             
INFO[0012] Connected to SFO-DOG  </code></pre>
            <p>Huh, interesting. So, we've connected to my nearest POP(s). I'm in the San Francisco Bay Area, so LAX and SFO seem reasonable. What now though? Surely that's not it? If I'm reading this right, I can go to my browser, enter <a href="https://tunnel.pacman.wiki">https://tunnel.pacman.wiki</a> and I'll get a hello world page... surely not.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4HzTsm0DkHPzZSQ527zAYb/1dec3343ed9ec63465c60fb28dbe60ed/success-1.png" />
            
            </figure><p>And back on the Pi</p>
            <pre><code>INFO[0615] GET https://127.0.0.1:62627/ HTTP/1.1 CF-RAY=4067701b598e8184-LAX
INFO[0615] 200 OK  CF-RAY=4067701b598e8184-LAX</code></pre>
            <p>Mind. Blown. So what happened here, exactly?</p><ol><li><p>The agent on the Pi created a secure tunnel (a persistent HTTP/2 connection) back to the nearest Cloudflare Argo Tunnel server</p></li><li><p>The tunnel was secured with the certificate generated by the agent.</p></li><li><p>A request for <a href="https://tunnel.pacman.wiki">https://tunnel.pacman.wiki</a> went from my browser out through the Internet and was routed to the nearest Cloudflare <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/">data center</a></p></li><li><p>Cloudflare received the request, saw the domain was Cloudflare-managed and saw a tunnel set up to that hostname</p></li><li><p>The request got routed over that HTTP/2 connection back to my Pi</p></li></ol><p>I'm serving traffic over the Internet, from my Pi, with no ports opened on my home router. That is so cool.</p>
    <div>
      <h3>More than hello world</h3>
      <a href="#more-than-hello-world">
        
      </a>
    </div>
    <p>If you're reading this, I've won my battle with the Cloudflare blog editing team about long form vs short form content :p</p><p>Serving hello world is great, but I want to expose a real web server. If you're like me, if you can find any vaguely relevant reason to use Rust, then you use Rust. If you're also like me, you want to try one of these async web servers the cool kids talk about on <a href="https://www.reddit.com/r/rust/">/r/rust</a>, like <a href="https://gotham.rs/">gotham</a>. Let's do it.</p><p>First, install Rust using <a href="https://www.rustup.rs/">rustup</a>.</p><p><code>$ curl https://sh.rustup.rs -sSf | sh</code></p><p>When prompted, just hit enter:</p>
            <pre><code>1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
...
  stable installed - rustc 1.24.1 (d3ae9a9e0 2018-02-27)
...</code></pre>
            <p>OK, Rust is installed. Now clone Gotham and build the hello_world example:</p>
            <pre><code>$ git clone https://github.com/gotham-rs/gotham
$ cd gotham/examples/hello_world
$ cargo build</code></pre>
            <p><b>Pro tip:</b> if <code>cargo</code> is not found, run <code>source $HOME/.cargo/env</code>. It will be automatic in future sessions.</p><p>As cargo does its magic, you can think to yourself about how it's a great package manager, how there really are a lot of dependencies, and how OSS really is standing on the shoulders of giants of giants of giants of giants. Eventually, you'll have the example built.</p>
            <pre><code>...
Compiling gotham_examples_hello_world v0.0.0 (file:///home/pi/argo-tunnel/gotham/examples/hello_world)
    Finished dev [unoptimized + debuginfo] target(s) in 502.83 secs
    
$ cd ../../target/debug
$ ./gotham_examples_hello_world 
Listening for requests at http://127.0.0.1:7878</code></pre>
            <p>We have a Rust web server listening on a local port. Let's connect the tunnel to that.</p><p><code>./cloudflared --hostname gotham.pacman.wiki http://127.0.0.1:7878</code></p><p>Type <b>gotham.pacman.wiki</b> into your web browser and you'll see those glorious words, "Hello, world".</p>
    <div>
      <h2>Wait, this post was meant to be <i>more</i> than hello world.</h2>
      <a href="#wait-this-post-was-meant-to-be-more-than-hello-world">
        
      </a>
    </div>
    <p>OK, challenge accepted. Rust, being fancy and modern, is fine with Unicode. Let's serve some of that.</p>
            <pre><code>$ cd examples/hello_world
$ nano src/main.rs </code></pre>
            <p>Replace the hello world string:</p><p><code>Some((String::from("Hello World!").into_bytes(), mime::TEXT_PLAIN)),</code></p><p>with some Unicode and a content-type hint so the browser knows how to render it:</p><p><code>Some((String::from("&lt;html&gt;&lt;head&gt;&lt;meta http-equiv='Content-Type' content='text/html; charset=UTF-8'&gt;&lt;/head&gt;&lt;body&gt;&lt;marquee&gt;Pᗣᗧ•••MᗣN&lt;/marquee&gt;&lt;/body&gt;&lt;/html&gt;").into_bytes(), mime::TEXT_HTML)),</code></p><p>Build and run:</p>
            <pre><code>$ cargo build
...
./gotham_examples_hello_world 
Listening for requests at http://127.0.0.1:7878</code></pre>
            <p><code>$ ./cloudflared --hostname gotham.pacman.wiki http://127.0.0.1:7878</code></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2O0v47vQuYvFKKPsCY134y/22c094cfb746e462c6433e6db493794d/pacman-2.gif" />
            
            </figure><p>And now we have some Unicode served from our Pi at home over the Internet, by a highly asynchronous web server written in a fast, safe, high-level language. Cool.</p>
    <div>
      <h3>Are we done?</h3>
      <a href="#are-we-done">
        
      </a>
    </div>
    <p>We should probably auto start both the agent and the web server on boot so they don't die when we end our ssh session.</p>
            <pre><code>$ sudo ./cloudflared service install
INFO[0000] Failed to copy user configuration. Before running the service, 
ensure that /etc/cloudflared contains two files, cert.pem and config.yml  
error="open cert.pem: no such file or directory"</code></pre>
            <p>Nice error! OK, the product team have helpfully documented what to put in that file <a href="https://developers.cloudflare.com/argo-tunnel/reference/config/">here</a>.</p>
            <pre><code>$ sudo cp ~/.cloudflared/cert.pem /etc/cloudflared
$ sudo nano /etc/cloudflared/config.yml</code></pre>
            
            <pre><code>#config.yml
hostname: gotham.pacman.wiki
url: http://127.0.0.1:7878</code></pre>
            
    <div>
      <h4>Autostart for the Agent</h4>
      <a href="#autostart-for-the-agent">
        
      </a>
    </div>
    
            <pre><code>$ sudo ./cloudflared service install
INFO[0000] Using Systemd                                
ERRO[0000] systemctl: Created symlink from /etc/systemd/system/multi-user.target.wants/cloudflared.service to /etc/systemd/system/cloudflared.service.
INFO[0000] systemctl daemon-reload       </code></pre>
            
    <div>
      <h4>Autostart for the Web Server</h4>
      <a href="#autostart-for-the-web-server">
        
      </a>
    </div>
    <p>Copy the web server executable somewhere outside the gotham source tree so you can keep playing with the source code. I copied mine to <code>/home/pi/argo-tunnel/server/bin/</code>.</p><p><code>nano /etc/rc.local</code></p><p>Add the line <code>/home/pi/argo-tunnel/server/bin/gotham_examples_hello_world &amp;</code> just before <code>exit 0</code>, then:</p><p><code>sudo reboot</code></p><p>On restart, ssh back in again and check that both the agent and the web server are running.</p>
            <pre><code>$ sudo ps -aux | grep tunnel
root       501  0.1  0.2  37636  1976 ?        Sl   06:30   0:00 /home/pi/argo-tunnel/server/bin/gotham_examples_hello_world
root       977 15.7  1.4 801292 13972 ?        Ssl  06:30   0:01 /home/pi/argo-tunnel/cloudflared --config /etc/cloudflared/config.yml --origincert /etc/cloudflared/cert.pem --no-autoupdate</code></pre>
            <p>Profit.</p> ]]></content:encoded>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Raspberry Pi]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">1fVcP30JWQOAu5llzgN6Yk</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
    </channel>
</rss>