
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 16:18:06 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Workers AI: serverless GPU-powered inference on Cloudflare’s global network]]></title>
            <link>https://blog.cloudflare.com/workers-ai/</link>
            <pubDate>Wed, 27 Sep 2023 13:00:47 GMT</pubDate>
            <description><![CDATA[ We are excited to launch Workers AI - an AI inference as a service platform, empowering developers to run AI models with just a few lines of code, all powered by our global network of GPUs ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kH38tclcLOGwYv40vTHNy/300956275074e73dd480a93898d43c08/image1-29.png" />
            
            </figure><p>If you're anywhere near the developer community, it's almost impossible to avoid the impact that AI’s recent advancements have had on the ecosystem. Whether you're using <a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/">AI</a> in your workflow to improve productivity, or you’re shipping AI-based features to your users, it’s everywhere. The pace of AI improvement is extraordinary, and we’re super excited about the opportunities that lie ahead, but it's not enough.</p><p>Not too long ago, if you wanted to leverage the power of AI, you needed to know the ins and outs of <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning</a>, and be able to manage the infrastructure to power it.</p><p>As a developer platform with over one million active developers, we believe there is so much potential yet to be unlocked, so we’re changing the way AI is delivered to developers. Many of the current solutions, while powerful, are based on closed, proprietary models and don't address the privacy needs that developers and users demand. Alternatively, the open source scene is exploding with powerful models, but they’re simply not accessible enough to every developer. Imagine being able to run a model, from your code, wherever it’s <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosted</a>, and never needing to find GPUs or deal with setting up the infrastructure to support it.</p><p>That's why we are excited to launch Workers AI - an AI inference as a service platform, empowering developers to run AI models with just a few lines of code, all powered by our global network of GPUs. It's open and accessible, serverless, privacy-focused, runs near your users, pay-as-you-go, and it's built from the ground up for a best-in-class developer experience.</p>
    <div>
      <h2>Workers AI - making inference <b>just work</b></h2>
      <a href="#workers-ai-making-inference-just-work">
        
      </a>
    </div>
    <p>We’re launching Workers AI to put AI inference in the hands of every developer, and to actually deliver on that goal, it should <b>just work</b> out of the box. How do we achieve that?</p><ul><li><p>At the core of everything, it runs on the right infrastructure - our world-class network of GPUs</p></li><li><p>We provide off-the-shelf models that run seamlessly on our infrastructure</p></li><li><p>Finally, deliver it to the end developer, in a way that’s delightful. A developer should be able to build their first Workers AI app in minutes, and say “Wow, that’s kinda magical!”.</p></li></ul><p>So what exactly is Workers AI? It’s another building block that we’re adding to our developer platform - one that helps developers run well-known AI models on serverless GPUs, all on Cloudflare’s trusted global network. As one of the latest additions to our developer platform, it works seamlessly with Workers + Pages, but to make it truly accessible, we’ve made it platform-agnostic, so it also works everywhere else, made available via a REST API.</p>
    <div>
      <h2>Models you know and love</h2>
      <a href="#models-you-know-and-love">
        
      </a>
    </div>
    <p>We’re launching with a curated set of popular open source models that cover a wide range of inference tasks:</p><ul><li><p><b>Text generation (large language model):</b> meta/llama-2-7b-chat-int8</p></li><li><p><b>Automatic speech recognition (ASR):</b> openai/whisper</p></li><li><p><b>Translation:</b> meta/m2m100-1.2b</p></li><li><p><b>Text classification:</b> huggingface/distilbert-sst-2-int8</p></li><li><p><b>Image classification:</b> microsoft/resnet-50</p></li><li><p><b>Embeddings:</b> baai/bge-base-en-v1.5</p></li></ul><p>You can browse all available models in your Cloudflare dashboard, and soon you’ll be able to dive into logs and analytics on a per-model basis!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iLFApyCjCwTCEtV8QRhke/91793f5eaabe3c426cf5fb7f421f4508/image4-14.png" />
            
            </figure><p>This is just the start, and we’ve got big plans. After launch, we’ll continue to expand based on community feedback. Even more exciting - in an effort to take our catalog from zero to sixty, we’re announcing a partnership with Hugging Face, a leading AI community + hub. The partnership is multifaceted, and you can read more about it <a href="/best-place-region-earth-inference">here</a>, but soon you’ll be able to browse and run a subset of the Hugging Face catalog directly in Workers AI.</p>
    <div>
      <h2>Accessible to everyone</h2>
      <a href="#accessible-to-everyone">
        
      </a>
    </div>
    <p>Part of the mission of our developer platform is to provide <b>all</b> the building blocks that developers need to build the applications of their dreams. Having access to the right blocks is just one part of it — as a developer your job is to put them together into an application. Our goal is to make that as easy as possible.</p><p>To make sure you could use Workers AI easily regardless of entry point, we wanted to provide access via: Workers or Pages to make it easy to use within the Cloudflare ecosystem, and via REST API if you want to use Workers AI with your current stack.</p><p>Here’s a quick CURL example that translates some text from English to French:</p>
            <pre><code>curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/meta/m2m100-1.2b \
  -H "Authorization: Bearer {API_TOKEN}" \
  -d '{ "text": "I'\''ll have an order of the moule frites", "target_lang": "french" }'</code></pre>
            <p>And here’s what the response looks like:</p>
            <pre><code>{
  "result": {
    "answer": "Je vais commander des moules frites"
  },
  "success": true,
  "errors":[],
  "messages":[]
}</code></pre>
            <p>Use it with any stack, anywhere - your favorite Jamstack framework, Python + Django/Flask, Node.js, Ruby on Rails - the possibilities are endless.</p>
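            <p>For instance, from Node.js or any TypeScript stack you might wrap the same call in a tiny helper. This is a sketch: <code>buildAiRequest</code> is a helper invented for this example (not an SDK function), and <code>ACCOUNT_ID</code> / <code>API_TOKEN</code> are placeholders for your own credentials; only the URL shape and bearer-token header come from the curl call above.</p>

```typescript
// Build the REST request for a Workers AI model run. This helper is invented
// for this example; it just assembles the same URL, auth header, and JSON
// body shown in the curl command above.
function buildAiRequest(accountId: string, apiToken: string, model: string, input: object) {
  return {
    url: "https://api.cloudflare.com/client/v4/accounts/" + accountId + "/ai/run/" + model,
    init: {
      method: "POST",
      headers: {
        "Authorization": "Bearer " + apiToken,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(input),
    },
  };
}

// Usage with your own credentials (placeholders, not real values):
// const { url, init } = buildAiRequest(ACCOUNT_ID, API_TOKEN, "@cf/meta/m2m100-1.2b",
//   { text: "I'll have an order of the moule frites", target_lang: "french" });
// const response = await fetch(url, init);
```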
    <div>
      <h2>Designed for developers</h2>
      <a href="#designed-for-developers">
        
      </a>
    </div>
    <p>Developer experience is really important to us. In fact, most of this post has been about just that. Making sure it works out of the box. Providing popular models that just work. Being accessible to all developers whether you build and deploy with Cloudflare or elsewhere. But it’s more than that - the experience should be frictionless, zero to production should be fast, and it should feel good along the way.</p><p>Let’s walk through another example to show just how easy it is to use! We’ll run Llama 2, a popular <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/">large language model</a> open sourced by Meta, in a worker.</p><p>We’ll assume you have some of the basics already complete (Cloudflare account, Node, NPM, etc.), but if you don’t, <a href="https://developers.cloudflare.com/workers-ai/get-started/local-dev-setup/">this guide</a> will get you properly set up!</p>
    <div>
      <h3>1. Create a Workers project</h3>
      <a href="#1-create-a-workers-project">
        
      </a>
    </div>
    <p>Create a new project named workers-ai by running:</p>
            <pre><code>$ npm create cloudflare@latest</code></pre>
            <p>When setting up your workers-ai worker, answer the setup questions as follows:</p><ul><li><p>Enter <b>workers-ai</b> for the app name</p></li><li><p>Choose <b>Hello World</b> script for the type of application</p></li><li><p>Select <b>yes</b> to using TypeScript</p></li><li><p>Select <b>yes</b> to using Git</p></li><li><p>Select <b>no</b> to deploying</p></li></ul><p>Lastly, navigate to your new app directory:</p>
            <pre><code>cd workers-ai</code></pre>
            
    <div>
      <h3>2. Connect Workers AI to your worker</h3>
      <a href="#2-connect-workers-ai-to-your-worker">
        
      </a>
    </div>
    <p>Create a Workers AI binding, which allows your worker to access the Workers AI service without having to manage an API key yourself.</p><p>To bind Workers AI to your worker, add the following to the end of your <b>wrangler.toml</b> file:</p>
            <pre><code>[ai]
binding = "AI" #available in your worker via env.AI</code></pre>
            <p>You can also bind Workers AI to a Pages Function. For more information, refer to <a href="https://developers.cloudflare.com/pages/platform/functions/bindings/#ai">Functions Bindings</a>.</p>
    <div>
      <h3>3. Install the Workers AI client library</h3>
      <a href="#3-install-the-workers-ai-client-library">
        
      </a>
    </div>
    
            <pre><code>npm install @cloudflare/ai</code></pre>
            
    <div>
      <h3>4. Run an inference task in your worker</h3>
      <a href="#4-run-an-inference-task-in-your-worker">
        
      </a>
    </div>
    <p>Update <b>src/index.ts</b> with the following code:</p>
            <pre><code>import { Ai } from '@cloudflare/ai'
export default {
  async fetch(request, env) {
    const ai = new Ai(env.AI);
    const input = { prompt: "What's the origin of the phrase 'Hello, World'" };
    const output = await ai.run('@cf/meta/llama-2-7b-chat-int8', input );
    return new Response(JSON.stringify(output));
  },
};</code></pre>
            
    <div>
      <h3>5. Develop locally with Wrangler</h3>
      <a href="#5-develop-locally-with-wrangler">
        
      </a>
    </div>
    <p>While in your project directory, test Workers AI locally by running:</p>
            <pre><code>$ npx wrangler dev --remote</code></pre>
            <p><b>Note -</b> These models currently only run on Cloudflare’s network of GPUs (and not locally), so setting <code>--remote</code> above is a must, and you’ll be prompted to log in at this point.</p><p>Wrangler will give you a URL (most likely localhost:8787). Visit that URL, and you’ll see a response like this:</p>
            <pre><code>{
  "response": "Hello, World is a common phrase used to test the output of a computer program, particularly in the early stages of programming. The phrase \"Hello, World!\" is often the first program that a beginner learns to write, and it is included in many programming language tutorials and textbooks as a way to introduce basic programming concepts. The origin of the phrase \"Hello, World!\" as a programming test is unclear, but it is believed to have originated in the 1970s. One of the earliest known references to the phrase is in a 1976 book called \"The C Programming Language\" by Brian Kernighan and Dennis Ritchie, which is considered one of the most influential books on the development of the C programming language."
}</code></pre>
            
    <div>
      <h3>6. Deploy your worker</h3>
      <a href="#6-deploy-your-worker">
        
      </a>
    </div>
    <p>Finally, deploy your worker to make your project accessible on the Internet:</p>
            <pre><code>$ npx wrangler deploy
# Outputs: https://workers-ai.&lt;YOUR_SUBDOMAIN&gt;.workers.dev</code></pre>
            <p>And that’s it. You can literally go from zero to deployed AI in minutes. This is obviously a simple example, but it shows how easy it is to run Workers AI from any project.</p>
    <div>
      <h2>Privacy by default</h2>
      <a href="#privacy-by-default">
        
      </a>
    </div>
    <p>When Cloudflare was founded, our value proposition had three pillars: more secure, more reliable, and more performant. Over time, we’ve realized that a better Internet is also a more private Internet, and we want to play a role in building it.</p><p>That’s why Workers AI is private by default - we don’t train our models, LLM or otherwise, on your data or conversations, and our models don’t learn from your usage. You can feel confident using Workers AI in both personal and business settings, without having to worry about leaking your data. Other providers only offer this fundamental feature with their enterprise version. With us, it’s built in for everyone.</p><p>We’re also excited to support data localization in the future. To make this happen, we have an ambitious GPU rollout plan - we’re launching with seven sites today, roughly 100 by the end of 2023, and nearly everywhere by the end of 2024. Ultimately, this will empower developers to keep delivering killer AI features to their users, while staying compliant with their end users’ data localization requirements.</p>
    <div>
      <h2>The power of the platform</h2>
      <a href="#the-power-of-the-platform">
        
      </a>
    </div>
    
    <div>
      <h4>Vector database - Vectorize</h4>
      <a href="#vector-database-vectorize">
        
      </a>
    </div>
    <p>Workers AI is all about running inference, and making it really easy to do so, but sometimes inference is only part of the equation. Large language models are trained on a fixed set of data, based on a snapshot at a specific point in the past, and have no context on your business or use case. When you submit a prompt, information specific to you can increase the quality of results, making it more useful and relevant. That’s why we’re also launching Vectorize, our <a href="https://www.cloudflare.com/learning/ai/what-is-vector-database/">vector database</a> that’s designed to work seamlessly with Workers AI. Here’s a quick overview of how you might use Workers AI + Vectorize together.</p><p>Example: Use your data (knowledge base) to provide additional context to an LLM when a user is chatting with it.</p><ol><li><p><b>Generate initial embeddings:</b> run your data through Workers AI using an <a href="https://www.cloudflare.com/learning/ai/what-are-embeddings/">embedding model</a>. The output will be embeddings, which are numerical representations of those words.</p></li><li><p><b>Insert those embeddings into Vectorize:</b> this essentially seeds the vector database with your data, so we can later use it to retrieve embeddings that are similar to your users’ query.</p></li><li><p><b>Generate embedding from user question:</b> when a user submits a question to your AI app, first, take that question, and run it through Workers AI using an embedding model.</p></li><li><p><b>Get context from Vectorize:</b> use that embedding to query Vectorize. This should output embeddings that are similar to your user’s question.</p></li><li><p><b>Create context-aware prompt:</b> now take the original text associated with those embeddings, and create a new prompt combining the text from the vector search, along with the original question.</p></li><li><p><b>Run prompt:</b> run this prompt through Workers AI using an LLM model to get your final result.</p></li></ol>
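    <p>The retrieval steps (3 through 6) can be sketched as plain functions. Treat this as illustrative only: the interfaces below are stand-ins invented for this sketch, not the actual Workers AI or Vectorize binding signatures.</p>

```typescript
// Stand-in interfaces for this sketch; the real bindings exposed to a worker
// have different shapes, so these exist only to show the data flow.
interface Embedder { embed(text: string): number[] }
interface VectorIndex { query(vector: number[], topK: number): string[] }  // returns source texts of nearest matches
interface Llm { run(prompt: string): string }

// Steps 3-6 above: embed the user's question, retrieve similar context from
// the vector index, build a context-aware prompt, and run it through the LLM.
function answerWithContext(embedder: Embedder, index: VectorIndex, llm: Llm, question: string): string {
  const vector = embedder.embed(question);  // step 3: question to embedding
  const context = index.query(vector, 3);   // step 4: nearest-neighbor lookup
  const prompt = "Context:\n" + context.join("\n") + "\n\nQuestion: " + question;  // step 5
  return llm.run(prompt);                   // step 6
}
```

The key design point is that the LLM never sees your whole knowledge base; it sees only the handful of passages the vector search judged relevant to this question.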
    <div>
      <h4>AI Gateway</h4>
      <a href="#ai-gateway">
        
      </a>
    </div>
    <p>That covers a more advanced use case. On the flip side, if you are running models elsewhere, but want to get more out of the experience, you can run those APIs through our AI gateway to get features like caching, rate-limiting, analytics and logging. These features can be used to protect your end point, monitor and optimize costs, and also help with data loss prevention. Learn more about AI gateway <a href="/announcing-ai-gateway">here</a>.</p>
    <div>
      <h2>Start building today</h2>
      <a href="#start-building-today">
        
      </a>
    </div>
    <p>Try it out for yourself, and let us know what you think. Today we’re launching Workers AI as an open Beta for all Workers plans - free or paid. That said, it’s super early, so…</p>
    <div>
      <h4>Warning - It’s an early beta</h4>
      <a href="#warning-its-an-early-beta">
        
      </a>
    </div>
    <p>Usage is <b>not currently recommended for production apps</b>, and limits + access are subject to change.</p>
    <div>
      <h4>Limits</h4>
      <a href="#limits">
        
      </a>
    </div>
    <p>We’re initially launching with limits on a per-model basis:</p><ul><li><p>@cf/meta/llama-2-7b-chat-int8: 50 reqs/min globally</p></li></ul><p>Check out our <a href="https://developers.cloudflare.com/workers-ai/platform/limits/">docs</a> for a full overview of our limits.</p>
    <div>
      <h4>Pricing</h4>
      <a href="#pricing">
        
      </a>
    </div>
    <p>What we released today is just a small preview to give you a taste of what’s coming (we simply couldn’t hold back), but we’re looking forward to putting the full-throttle version of Workers AI in your hands.</p><p>We realize that as you approach building something, you want to understand: how much is this going to cost me? That’s especially true with AI, where costs can easily get out of hand. So we wanted to share the upcoming pricing of Workers AI with you.</p><p>While we won’t be billing on day one, we are announcing what we expect our pricing will look like.</p><p>Users will be able to choose from two ways to run Workers AI:</p><ul><li><p><b>Regular Twitch Neurons (RTN)</b> - running wherever there's capacity at $0.01 / 1k neurons</p></li><li><p><b>Fast Twitch Neurons (FTN)</b> - running at the nearest user location at $0.125 / 1k neurons</p></li></ul><p>You may be wondering — what’s a neuron?</p><p>Neurons are a way to measure AI output that always scales down to zero (if you get no usage, you will be charged for 0 neurons). To give you a sense of what you can accomplish with a thousand neurons, you can: generate 130 LLM responses, 830 image classifications, or 1,250 embeddings.</p><p>Our goal is to help our customers pay only for what they use, and choose the pricing that best matches their use case, whether it’s price or latency that is top of mind.</p>
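    <p>As a back-of-the-envelope illustration of those numbers (a sketch using the launch rates quoted above; actual neuron consumption varies by model and input size):</p>

```typescript
// Announced launch pricing, in dollars per 1,000 neurons.
const RTN_PER_1K = 0.01;   // Regular Twitch Neurons: runs wherever there is capacity
const FTN_PER_1K = 0.125;  // Fast Twitch Neurons: runs at the nearest user location

// Dollar cost of a workload that consumes a given number of neurons.
function neuronCost(neurons: number, ratePer1k: number): number {
  return (neurons / 1000) * ratePer1k;
}

// 1,000 neurons cover roughly 130 LLM responses, so a single response
// consumes about 1000 / 130 neurons - a small fraction of a cent on either tier.
const neuronsPerLlmResponse = 1000 / 130;
const dollarsPerResponseRtn = neuronCost(neuronsPerLlmResponse, RTN_PER_1K);
```

Note that FTN is the same unit at 12.5x the rate; you pay for lower latency, not for a different measure of work.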
    <div>
      <h3>What’s on the roadmap?</h3>
      <a href="#whats-on-the-roadmap">
        
      </a>
    </div>
    <p>Workers AI is just getting started, and we want your feedback to help us make it great. That said, there are some exciting things on the roadmap.</p>
    <div>
      <h4>More models, please</h4>
      <a href="#more-models-please">
        
      </a>
    </div>
    <p>We're launching with a solid set of models that just work, but will continue to roll out new models based on your feedback. If there’s a particular model you'd love to see on Workers AI, pop into our <a href="https://discord.cloudflare.com/">Discord</a> and let us know!</p><p>In addition to that, we're also announcing a <a href="/best-place-region-earth-inference">partnership with Hugging Face</a>, and soon you'll be able to access and run a subset of the Hugging Face catalog directly from Workers AI.</p>
    <div>
      <h4>Analytics + observability</h4>
      <a href="#analytics-observability">
        
      </a>
    </div>
    <p>Up to this point, we’ve been hyper-focused on one thing - making it really easy for any developer to run powerful AI models in just a few lines of code. But that’s only one part of the story. Up next, we’ll be working on some analytics and <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> capabilities to give you insights into your usage + performance + spend on a per-model basis, plus the ability to dig into your logs if you want to do some exploring.</p>
    <div>
      <h4>A road to global GPU coverage</h4>
      <a href="#a-road-to-global-gpu-coverage">
        
      </a>
    </div>
    <p>Our goal is to be the best place to run inference on Region: Earth, so we're adding GPUs to our data centers as fast as we can.</p><p><b>We plan to be in 100 data centers by the end of this year</b></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5A8SGUOEAcs3sjNjv48yIh/bafbc77b256fef490d4357613b036603/image3-28.png" />
            
            </figure><p><b>And nearly everywhere by the end of 2024</b></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rrL2H0dHYZ4hxOBq0X1pw/f38d122af92f789dc2b31d3bdea1ab06/unnamed-3.png" />
            
            </figure><p><b>We’re really excited to see you build</b> - head over to <a href="https://developers.cloudflare.com/workers-ai/">our docs</a> to get started.</p><p>If you need inspiration, want to share something you’re building, or have a question - pop into our <a href="https://discord.com/invite/cloudflaredev">Developer Discord</a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[Vectorize]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">6jSrrIFC7yStZxCaqaM0c1</guid>
            <dc:creator>Phil Wittig</dc:creator>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Rebecca Weekly</dc:creator>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Meaghan Choi</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we’re making Cloudflare’s infrastructure more sustainable]]></title>
            <link>https://blog.cloudflare.com/extending-the-life-of-hardware/</link>
            <pubDate>Wed, 14 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Our hardware sustainability initiative encapsulates using hardware components for as long as possible, recycling them responsibly when it is time to decommission them, and selecting the most power-efficient options for our workloads. ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47ARstqMEWAV7BVw9FQNB8/9d78d60adcc0186084b7df43171dd27c/image1-27.png" />
            
            </figure><p>Whether you are building a global network or buying groceries, some rules of sustainable living remain the same: be thoughtful about what you get, make the most out of what you have, and try to upcycle your waste rather than throwing it away. These rules are central to Cloudflare — we take helping build a better Internet seriously, and we define this as not just having the most secure, reliable, and performant network — but also the most sustainable one.</p><p>With incredible growth of the Internet, and the increased usage of Cloudflare’s network, even linear improvements to sustainability in our hardware today will result in exponential gains in the future. We want to use this post to outline how we think about the sustainability impact of the hardware in our network, and what we’re doing to continually mitigate that impact.</p>
    <div>
      <h2>Sustainability in the realm of servers</h2>
      <a href="#sustainability-in-the-realm-of-servers">
        
      </a>
    </div>
    <p>The total carbon footprint of a server is approximately 6 tons of carbon dioxide equivalent (CO2eq) when used in the US. There are four parts to the carbon footprint of any computing device:</p><ol><li><p>The embodied emissions: source materials and production</p></li><li><p>Packing and shipping</p></li><li><p>Use of the product</p></li><li><p>End of life</p></li></ol><p>The emissions from the actual operations and use of a server account for the vast majority of the total life-cycle impact. The secondary impact is embodied emissions (which is the carbon footprint from the creation of the device in the first place), which is about <a href="https://www.dell.com/en-us/dt/corporate/social-impact/advancing-sustainability/climate-action/product-carbon-footprints.htm#tab0=3&amp;pdf-overlay=//www.delltechnologies.com/asset/en-us/products/multi-product/industry-market/pcf-lca-whitepaper.pdf">10% overall</a>.</p>
    <div>
      <h3>Use of Product Emissions</h3>
      <a href="#use-of-product-emissions">
        
      </a>
    </div>
    <p>It’s difficult to reduce the total emissions for the operation of servers. If there’s a workload that needs computing power, the server will complete the workload and use the energy required to complete it. What we can do, however, is consistently seek to improve the amount of computing output per kilo of CO2 emissions — and the way we do that is to consistently upgrade our hardware to the most power-efficient designs. As we switch from one generation of server to the next, we often see very large increases in computing output at the same level of power consumption. In this regard, given energy is a large cost for our business, our incentive to reduce our environmental impact is naturally aligned with our business model.</p>
    <div>
      <h3>Embodied Emissions</h3>
      <a href="#embodied-emissions">
        
      </a>
    </div>
    <p>The other large category of emissions — the embodied emissions — is a domain where we actually have a lot more control than we do over the use of the product. Reminder from before: embodied carbon means the emissions generated outside the equipment’s operation. How can we reduce the embodied emissions involved in running a fleet of servers? Turns out, there are a few ways: modular design, relying on open vs proprietary standards to enable reuse, and recycling.</p>
    <div>
      <h4><b>Modular Design</b></h4>
      <a href="#modular-design">
        
      </a>
    </div>
    <p>The first big opportunity is through modular system design. Modular systems are a great way of reducing embodied carbon, as they result in fewer new components and allow for parts that don’t have efficiency upgrades to be leveraged longer. Modular server design is essentially decomposing functions of the motherboard onto sub-boards so that the server owner can selectively upgrade the components that are required for their use cases.</p><p>How much of an impact can modular design have? Well, if 30% of the server is delivering meaningful efficiency gains (usually CPU and memory, sometimes I/O), we may really need to upgrade those in order to meet efficiency goals, but creating an additional 70% overhead in embodied carbon (i.e. the rest of the server, which often is made up of components that do not get more efficient) is not logical. Modular design allows us to upgrade the components that will improve the operational efficiency of our data centers, but amortize carbon in the “glue logic” components over the longer time periods for which they can continue to function.</p><p>Previously, many systems providers drove ridiculous and useless changes in the peripherals (custom I/Os, outputs that may not be needed for a specific use case such as VGA for crash carts we might not use given remote operations, etc.), which would force a new motherboard design for every new CPU socket design. By standardizing those interfaces across vendors, we can now source only the components we need, and reuse a larger percentage of systems ourselves. This trend also helps with reliability (sub-boards are better tested), and supply assurance (since standardized subcomponent boards can be sourced from more vendors), something all of us in the industry have had top-of-mind given the global supply challenges of the past few years.</p>
    <div>
      <h4><b>Standards-based Hardware to Encourage Re-use</b></h4>
      <a href="#standards-based-hardware-to-encourage-re-use">
        
      </a>
    </div>
    <p>But even with modularity, components need to go somewhere after they’ve been deprecated — and historically, this place has been a landfill. There is demand for second-hand servers, but many have been parts of closed systems with proprietary firmware and BIOS, so repurposing them has been costly or impossible to integrate into new systems. The economics of a circular economy are such that service fees for closed firmware and BIOS support as well as proprietary interconnects or ones that are not standardized can make reuse prohibitively expensive. How do you solve this? Well, if servers can be supported using open source firmware and BIOS, you dramatically reduce the cost of reusing the parts — so that another provider can support the new customer.</p>
    <div>
      <h4><b>Recycling</b></h4>
      <a href="#recycling">
        
      </a>
    </div>
    <p>Beyond that, though, there are parts failures, or parts that are simply no longer economical to run, even in the second-hand market. Metal recycling can always be done, and some manufacturers are starting to invest in <a href="https://www.apple.com/recycling/nationalservices/">programs</a> there, although the energy investment for extracting the usable elements sometimes doesn’t make sense. There is innovation in this domain: <a href="https://pubs.acs.org/doi/abs/10.1021/acssuschemeng.9b07006">Zhan et al. (2020)</a> developed an environmentally friendly and efficient hydrothermal-buffering technique for the recycling of GaAs-based ICs, achieving gallium and arsenic recovery rates of 99.9% and 95.5%, respectively. Adoption is still limited — most manufacturers are discussing water recycling and renewable energy vs. full-fledged recycling of metals — but we’re closely monitoring the space to take advantage of any further innovation that happens.</p>
    <div>
      <h2>What Cloudflare is Doing To Reduce Our Server Impact</h2>
      <a href="#what-cloudflare-is-doing-to-reduce-our-server-impact">
        
      </a>
    </div>
    <p>It is great to talk about these concepts, but we are doing this work today. I’d describe our efforts as falling under two main banners: taking steps to reduce embodied emissions through modular and open-standards design, and using the most power-efficient solutions for our workloads.</p>
    <div>
      <h3>Gen 12: Walking the Talk</h3>
      <a href="#gen-12-walking-the-talk">
        
      </a>
    </div>
    <p>Our next generation of servers, Gen 12, will be coming soon. We’re emphasizing modular-driven design, as well as a focus on open standards, to enable reuse of the components inside the servers.</p>
    <div>
      <h4><b>A modular-driven design</b></h4>
      <a href="#a-modular-driven-design">
        
      </a>
    </div>
    <p>Historically, every generation of server here at Cloudflare has required a massive redesign. An upgrade to a new CPU required a new motherboard, power supply, chassis, memory DIMMs, and BMC. This, in turn, might mean new fans, storage, network cards, and even cables. However, many of these components are not changing drastically from generation to generation: these components are built using older manufacturing processes, and leverage interconnection protocols that do not require the latest speeds.</p><p>To help illustrate this, let’s look at our <a href="/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/">Gen 11</a> server today: a single socket server is ~450W of power, with the CPU and associated memory taking about 320W of that (potentially 360W at peak load). All the other components on that system (mentioned above) are ~100W of operational power (mostly dominated by fans, which is why so many companies are exploring alternative cooling designs), so they are not where the optimization efforts or newer ICs will greatly improve the system’s efficiency. So, instead of rebuilding all those pieces from scratch for every new server and generating that much more embodied carbon, we are reusing them as often as possible.</p><p>By disaggregating components that require changes for efficiency reasons from other system-level functions (storage, fans, BMCs, programmable logic devices, etc.), we are able to maximize reuse of electronic components across generations. Building systems modularly like this significantly reduces our embodied carbon footprint over time. Consider how much waste would be eliminated if you were able to upgrade your car's engine to improve its efficiency without changing the rest of the parts that are working well, like the frame, seats, and windows. That's what modular design is enabling in data centers like ours across the world.</p>
    <div>
      <h4><b>A Push for Open Standards, Too</b></h4>
      <a href="#a-push-for-open-standards-too">
        
      </a>
    </div>
    <p>We, as an industry, have to work together to accelerate interoperability across interfaces, standards, and vendors if we want to achieve true modularity and our goal of a 70% reduction in e-waste. We have begun this effort by leveraging standard add-in-card form factors (<a href="https://www.opencompute.org/documents/facebook-ocp-mezzanine-20-specification">OCP 2.0</a> and <a href="http://files.opencompute.org/oc/public.php?service=files&amp;t=3c8f57684f959c5b7abe2eb3ee0705b4">3.0</a> NICs, <a href="https://www.opencompute.org/documents/ocp-dc-scm-spec-rev-1-0-pdf">Datacenter Secure Control Module</a> for our security and management modules, etc.) and our next server design is leveraging <a href="https://drive.google.com/file/d/1Ai3FkXzEZjxO8MpPlAo-ZjL5PH4S7NQu/view">Datacenter Modular Hardware System</a>, an open-source design specification that allows for modular subcomponents to be connected across common buses (regardless of the system manufacturer). This technique allows us to maintain these components over multiple generations without having to incur more carbon debt on parts that don’t change as often as CPUs and memory.</p><p>In order to enable a more comprehensive circular economy, Cloudflare has made extensive and increasing use of open-source solutions, like <a href="https://en.wikipedia.org/wiki/OpenBMC">OpenBMC</a>, a requirement for all of our vendors, and we work to ensure fixes are upstreamed to the community. Open system firmware allows for greater security through auditability, but the most important factor for sustainability is that a new party can assume responsibility and support for that server, which allows systems that might otherwise have to be destroyed to be reused. This ensures that (other than data-bearing assets, which are destroyed based on our security policy) 99% of hardware used by Cloudflare is repurposed, reducing the number of new servers that need to be built to fulfill global capacity demand. 
You can find further details about how that happens – and how you can join our vision of reducing e-waste – in <a href="/sustainable-end-of-life-hardware">this blog post</a>.</p>
    <div>
      <h3>Using the most power-efficient solutions for our workloads</h3>
      <a href="#using-the-most-power-efficient-solutions-for-our-workloads">
        
      </a>
    </div>
    <p>The other big way we can push for sustainability (in our hardware) while responding to our exponential increase in demand without wastefully throwing more servers at the problem is simple in concept, and difficult in practice: testing and deploying more power-efficient architectures and tuning them for our workloads. This means not only evaluating the efficiency of our next generation of servers and networking gear, but also reducing hardware and energy waste in our fleet.</p><p>Currently, in production, we see that Gen 11 servers can handle about 25% more requests than Gen 10 servers for the same amount of energy. This is <a href="/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/">about what we expected</a> when we were testing in mid-2021, and is exciting to see given that we continue to launch new products and services we couldn’t test at that time.</p><p>System power efficiency is not as simple a concept as it used to be for us. Historically, the key metric for assessing efficiency has been requests per second per watt. This metric allowed for multi-generational performance comparisons when qualifying new generations of servers, but it was really designed with our historical core product suite in mind.</p><p>We want – and, as a matter of scaling, require – our global network to be an increasingly intelligent threat detection mechanism, and also a highly performant development platform for our customers. As anyone who’s looked at a benchmark when shopping for a new computer knows, fast performance in one domain (traditional benchmarks such as SpecInt_Rate, STREAM, etc.) does not necessarily mean fast performance in another (e.g. AI inference, video processing, bulk <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>). The validation testing process for our next generation of server needs to take all of these workloads and their relative prevalence into account — not just requests. 
The deep partnership between hardware and software at Cloudflare enables optimization opportunities that companies running third-party code cannot pursue. I often say this is one of our superpowers, and it is the opportunity that makes me most excited about my job every day.</p><p>The other way we can be both sustainable and efficient is by leveraging domain-specific accelerators. Accelerators are a wide field, and we’ve seen incredible opportunities with application-level ones (see our recent announcement on <a href="/av1-cloudflare-stream-beta/">AV1 hardware acceleration for Cloudflare Stream</a>) as well as infrastructure accelerators (sometimes referred to as Smart NICs). That said, adding new silicon to our fleet only adds to the problem if it isn’t as efficient as the thing it’s replacing, and a node-level performance analysis often misses the complexity of deployment in a fleet as distributed as ours, so we’re moving quickly but cautiously.</p>
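<p>To make the requests-per-second-per-watt metric concrete, here is a minimal sketch; the absolute request rates and wattages are hypothetical, and only the ~25% generational improvement comes from the production numbers above:</p>

```python
def requests_per_watt(requests_per_sec: float, watts: float) -> float:
    """Node-level efficiency: how many requests each watt of power serves."""
    return requests_per_sec / watts

# Hypothetical illustration: equal power draw, Gen 11 serving ~25% more requests.
gen10 = requests_per_watt(requests_per_sec=100_000, watts=450)
gen11 = requests_per_watt(requests_per_sec=125_000, watts=450)

improvement = gen11 / gen10 - 1
print(f"Gen 11 vs. Gen 10: {improvement:.0%} more requests per watt")  # 25%
```

<p>As noted above, a single scalar like this is no longer sufficient on its own; a real evaluation would weight it alongside AI inference, video processing, and storage workloads.</p>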
    <div>
      <h2>Moving Forward: Industry Standard Reporting</h2>
      <a href="#moving-forward-industry-standard-reporting">
        
      </a>
    </div>
    <p>We’re pushing as hard as we can on our own, but there are certain areas where the industry as a whole needs to step up.</p><p>In particular: there is a woeful lack of standards for emissions reporting in server component manufacturing and operation, so we are engaging with standards bodies like the Open Compute Project to help define sustainability metrics for the industry at large. This post explains how we are increasing our efficiency and decreasing our carbon footprint generationally, but there should be a clear methodology that we can use to ensure that you know what kind of businesses you are supporting.</p><p>The <a href="https://ghgprotocol.org/sites/default/files/standards/ghg-protocol-revised.pdf">Greenhouse Gas (GHG) Protocol</a> initiative is doing a great job of developing internationally accepted GHG accounting and reporting standards for business and promoting their broad adoption. They define scope 1 emissions to be the “direct carbon accounting of a reporting company’s operations,” which is somewhat easy to calculate, and quantify scope 3 emissions as “the indirect value chain emissions.” To have standardized metrics across the entire life cycle of generating equipment, we need the carbon footprint of the subcomponents’ manufacturing process, supply chains, transportation, and even the construction methods used in building our data centers.</p><p>Ensuring embodied carbon is measured consistently across vendors is a necessity for building industry-standard, defensible metrics.</p>
    <div>
      <h2>Helping to build a better, greener, Internet</h2>
      <a href="#helping-to-build-a-better-greener-internet">
        
      </a>
    </div>
    <p>The cloud has a meaningful carbon impact on the Earth: by some accounts, the <a href="https://www.nature.com/articles/d41586-018-06610-y">ICT footprint will be 21% of global energy demand by 2030</a>. We’re absolutely committed to keeping Cloudflare’s footprint on the planet as small as possible. If you’ve made it this far, and you’re interested in contributing to building the most global, efficient, and sustainable network on the Internet — <a href="https://www.cloudflare.com/careers/jobs/?department=Infrastructure&amp;title=Systems">the Hardware Systems Engineering team is hiring</a>. Come join us.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Sustainability]]></category>
            <guid isPermaLink="false">6EJuZW4JtnsLjt5l0E2CSR</guid>
            <dc:creator>Rebecca Weekly</dc:creator>
            <dc:creator>Jon Rolfe</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s approach to handling BMC vulnerabilities]]></title>
            <link>https://blog.cloudflare.com/bmc-vuln/</link>
            <pubDate>Thu, 26 May 2022 13:17:40 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s approach to handling firmware vulnerabilities and how we keep our internal data protected ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In recent years, management interfaces on servers like a Baseboard Management Controller (BMC) have been the target of cyber attacks including ransomware, implants, and disruptive operations. Common BMC vulnerabilities like <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6260">Pantsdown</a> and <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16649">USBAnywhere</a>, combined with infrequent firmware updates, have left servers vulnerable.</p><p>We were recently informed by a trusted vendor of <a href="https://eclypsium.com/2022/05/26/quanta-servers-still-vulnerable-to-pantsdown/">new, critical vulnerabilities</a> in popular BMC software that we use in our fleet. Below is a summary of what was discovered, how we mitigated the impact, and how we look to prevent these types of vulnerabilities from having an impact on Cloudflare and our customers.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>A baseboard management controller is a small, specialized processor used for remote monitoring and management of a host system. This processor has multiple connections to the host system, giving it the ability to monitor hardware, update BIOS firmware, power cycle the host, and many more things.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OwMxITR79BDNE5I1YLWdB/501dd49d38b12452052c566362c98d1e/image1-63.png" />
            
            </figure><p>Access to the BMC can be local or, in some cases, remote. With remote vectors open, there is potential for malware to be installed on the BMC from the local host via PCI Express or the Low Pin Count (LPC) interface. With compromised software on the BMC, malware or spyware could maintain persistence on the server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3nrW5ruSaHnnuHAGr9TbQQ/e2ae41ef9fed7713a3e13d991c62c002/image2-2.gif" />
            
            </figure><p>According to the <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-6260">National Vulnerability Database</a>, the two BMC chips (<a href="https://www.aspeedtech.com/">ASPEED</a> AST2400 and AST2500) have implemented Advanced High-Performance Bus (AHB) bridges, which allow arbitrary read and write access to the physical address space of the BMC from the host. This means that malware running on the server can also access the RAM of the BMC.</p><p>These BMC vulnerabilities are sufficient to enable ransomware propagation, server bricking, and data theft.</p>
    <div>
      <h2>Impacted versions</h2>
      <a href="#impacted-versions">
        
      </a>
    </div>
    <p>Numerous vulnerabilities were found to affect the <a href="https://www.qct.io/product/index/Server/rackmount-server/1U-Rackmount-Server/QuantaGrid-D52B-1U">QuantaGrid D52B</a> cloud server due to vulnerable software found in the BMC. These vulnerabilities are associated with specific interfaces that are exposed on AST2400 and AST2500 and explained in <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-6260">CVE-2019-6260</a>. The vulnerable interfaces in question are:</p><ul><li><p>iLPC2AHB bridge Pt I</p></li><li><p>iLPC2AHB bridge Pt II</p></li><li><p>PCIe VGA P2A bridge</p></li><li><p><a href="https://en.wikipedia.org/wiki/Direct_memory_access">DMA</a> from/to arbitrary BMC memory via X-DMA</p></li><li><p><a href="https://en.wikipedia.org/wiki/Universal_asynchronous_receiver-transmitter">UART</a>-based SoC Debug interface</p></li><li><p>LPC2AHB bridge</p></li><li><p>PCIe BMC P2A bridge</p></li><li><p>Watchdog setup</p></li></ul><p>An attacker might be able to update the BMC directly using SoCFlash through in-band LPC or the BMC debug universal asynchronous receiver-transmitter (UART) serial console. While this might be thought of as a normal recovery path in the case of total corruption, it is actually an abuse of SoCFlash, which can use any open interface for flashing.</p>
    <div>
      <h2>Mitigations and response</h2>
      <a href="#mitigations-and-response">
        
      </a>
    </div>
    
    <div>
      <h3>Updated firmware</h3>
      <a href="#updated-firmware">
        
      </a>
    </div>
    <p>We reached out to one of our manufacturers, Quanta, to validate that existing firmware within a subset of systems was in fact patched against these vulnerabilities. While some versions of our firmware were not vulnerable, others were. A patch was released, tested, and deployed on the affected BMCs within our fleet.</p><p>Cloudflare Security and Infrastructure teams also proactively worked with additional manufacturers to validate their own BMC patches were not explicitly vulnerable to these firmware vulnerabilities and interfaces.</p>
    <div>
      <h3>Reduced exposure of BMC remote interfaces</h3>
      <a href="#reduced-exposure-of-bmc-remote-interfaces">
        
      </a>
    </div>
    <p>It is a standard practice within our data centers to implement network segmentation to separate different planes of traffic. Our out-of-band networks are not exposed to the outside world and are only accessible within their respective data centers. Access to any management network goes through a defense-in-depth approach, restricting connectivity to jumphosts and authentication/authorization through our zero trust <a href="https://www.cloudflare.com/cloudflare-one/">Cloudflare One</a> service.</p>
    <div>
      <h3>Reduced exposure of BMC local interfaces</h3>
      <a href="#reduced-exposure-of-bmc-local-interfaces">
        
      </a>
    </div>
    <p>Applications within a host are limited in what they can call on the BMC. This restricts what can be done from the host to the BMC while still allowing secure in-band updates, userspace logging, and monitoring.</p>
    <div>
      <h3>Do not use default passwords</h3>
      <a href="#do-not-use-default-passwords">
        
      </a>
    </div>
    <p>This sounds like common knowledge for most companies, but we follow a standard process of not just changing the default usernames and passwords that come with BMC software, but disabling the default accounts entirely to prevent them from ever being used. Any static accounts follow a regular password rotation schedule.</p>
    <div>
      <h3>BMC logging and auditing</h3>
      <a href="#bmc-logging-and-auditing">
        
      </a>
    </div>
    <p>We log all activity by default on our BMCs. Logs that are captured include the following:</p><ul><li><p>Authentication (Successful, Unsuccessful)</p></li><li><p>Authorization (user/service)</p></li><li><p>Interfaces (SOL, CLI, UI)</p></li><li><p>System status (Power on/off, reboots)</p></li><li><p>System changes (firmware updates, flashing methods)</p></li></ul><p>Using these logs, we were able to validate that no malicious activity had occurred.</p>
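<p>A simple audit over logs like these might look as follows; the record format here is purely illustrative, not our actual log schema:</p>

```python
# Hypothetical sketch: flag repeated authentication failures in BMC audit logs.
from collections import Counter

log_records = [  # illustrative records, not a real BMC log format
    {"event": "auth", "result": "success", "user": "svc-monitor"},
    {"event": "auth", "result": "failure", "user": "root"},
    {"event": "power", "result": "success", "user": "svc-ops"},
    {"event": "auth", "result": "failure", "user": "root"},
]

failed_auth = [r for r in log_records
               if r["event"] == "auth" and r["result"] == "failure"]
failures_by_user = Counter(r["user"] for r in failed_auth)

for user, count in failures_by_user.items():
    print(f"{user}: {count} failed logins")  # root: 2 failed logins
```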
    <div>
      <h2>What's next for the BMC</h2>
      <a href="#whats-next-for-the-bmc">
        
      </a>
    </div>
    <p>Cloudflare regularly works with several original design manufacturers (ODMs) to produce the highest-performing, most efficient, and most secure computing systems according to our own specifications. The standard processors used for our baseboard management controllers often ship with proprietary firmware, which is less transparent and more cumbersome for us and our ODMs to maintain. We believe in improving every component of the systems we operate in over 270 cities around the world.</p>
    <div>
      <h3>OpenBMC</h3>
      <a href="#openbmc">
        
      </a>
    </div>
    <p>We are moving forward with <a href="https://github.com/openbmc/openbmc">OpenBMC</a>, an open-source firmware for our supported baseboard management controllers. Based on the Yocto Project, a toolchain for Linux on embedded systems, OpenBMC will enable us to specify, build, and configure our own firmware with the latest Linux kernel feature set, just as we already do with the physical hardware through our ODMs.</p><p>OpenBMC firmware will enable:</p><ul><li><p>Latest stable and patched Linux kernel</p></li><li><p>Internally-managed <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a> for secure, trusted communication across our isolated management network</p></li><li><p>Fine-grained credentials management</p></li><li><p>Faster response time for patching and critical updates</p></li></ul><p>While many of these features are community-driven, vulnerabilities like Pantsdown are <a href="https://gerrit.openbmc-project.xyz/c/openbmc/meta-phosphor/+/13290/5/aspeed-layer/recipes-bsp/u-boot/files/0001-aspeed-Disable-unnecessary-features.patch">patched quickly</a>.</p>
    <div>
      <h3>Extending secure boot</h3>
      <a href="#extending-secure-boot">
        
      </a>
    </div>
    <p>You may have read about our recent work securing the boot process with a <a href="/anchoring-trust-a-hardware-secure-boot-story/">hardware root-of-trust</a>, but the BMC has its own boot process that often starts as soon as the system gets power. Newer versions of the BMC chips we use, along with <a href="https://docs.microsoft.com/en-us/azure/security/fundamentals/project-cerberus">cutting-edge</a> <a href="https://axiado.com/">security co-processors</a>, will allow us to extend our secure boot capabilities to before our UEFI firmware loads, by validating cryptographic signatures on our BMC/OpenBMC firmware. By extending our secure boot chain to the very first device that receives power in our systems, we greatly reduce the impact of malicious implants that could be used to take down a server.</p>
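<p>Conceptually, an extended boot chain works like the sketch below: each stage verifies the next stage's firmware image before handing off control. The sketch is heavily simplified, pinning SHA-256 digests instead of verifying full cryptographic signatures against keys held by a hardware root of trust, and the stage names and image contents are hypothetical:</p>

```python
import hashlib

# Hypothetical firmware images for each boot stage (the BMC powers on first).
firmware_images = {
    "bmc": b"openbmc-image",
    "uefi": b"uefi-image",
    "os": b"kernel-image",
}

# Digests the root of trust would hold; computed here only for the demo.
trusted_digests = {name: hashlib.sha256(image).hexdigest()
                   for name, image in firmware_images.items()}

def boot_chain(images, trusted):
    """Verify each stage in order; refuse to hand off if any image was tampered with."""
    for stage in ("bmc", "uefi", "os"):
        digest = hashlib.sha256(images[stage]).hexdigest()
        if digest != trusted[stage]:
            raise RuntimeError(f"{stage} image failed verification; halting boot")
        print(f"{stage}: verified, handing off control")

boot_chain(firmware_images, trusted_digests)
```

<p>A tampered image at any stage, including the very first BMC stage, stops the chain instead of letting an implant persist.</p>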
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>This vulnerability ended up being one we could quickly resolve through firmware updates from Quanta and prompt action by our teams to validate and patch our fleet. Even so, we are continuing to innovate through OpenBMC and a secure root of trust to ensure that our fleet is as secure as possible. We are grateful to our partners for their quick action, and we will always report risks and our mitigations so that you can trust how seriously we take your security.</p> ]]></content:encoded>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">3SuM39ggodivvGpr9OtO0X</guid>
            <dc:creator>Derek Chamorro</dc:creator>
            <dc:creator>Rebecca Weekly</dc:creator>
        </item>
        <item>
            <title><![CDATA[Marcelo Affonso and Rebecca Weekly: why we joined Cloudflare]]></title>
            <link>https://blog.cloudflare.com/marcelo-affonso-and-rebecca-weekly-why-we-joined-cloudflare/</link>
            <pubDate>Tue, 26 Apr 2022 13:00:52 GMT</pubDate>
            <description><![CDATA[ Marcelo Affonso (VP of Infrastructure Operations) and Rebecca Weekly (VP of Hardware Systems) recently joined our team. Here they share their journey to Cloudflare, what motivated them to join us, and what they are most excited about ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Marcelo Affonso (VP of Infrastructure Operations) and Rebecca Weekly (VP of Hardware Systems) recently joined our team. Here they share their journey to Cloudflare, what motivated them to join us, and what they are most excited about.</p>
    <div>
      <h3>Marcelo Affonso - VP of Infrastructure Operations</h3>
      <a href="#marcelo-affonso-vp-of-infrastructure-operations">
        
      </a>
    </div>
    <p>I am thrilled to join Cloudflare and lead our global infrastructure operations. My focus will be building, expanding, optimizing, and accelerating Cloudflare’s fast-growing infrastructure presence around the world.</p><p>Recently, I have found myself reflecting on how central the Internet has become to the lives of people all over the world. We use the Internet to work, to connect with families and friends, and to get essential services. Communities, governments, businesses, and cultural institutions now use the Internet as a primary communication and collaboration layer.</p><p>But on its own, the Internet wasn’t architected to support that level of use. It needs better security protections, faster and more reliable connectivity, and more support for various privacy preferences. What’s more, those benefits can’t just be available to large businesses. They need to be accessible to a full range of communities, governments, and individuals who now rely on the Internet. And they need to be accessible in various ways to align with people’s diverse needs and priorities.</p><p>My own personal and professional experiences make these challenges particularly interesting. On a personal level, I was born in Brazil, immigrated to Canada in my late teens, and I have been very fortunate to live and work in seven different countries across North America, South America, and Europe. In embracing all of that change, I’ve learned the importance of being flexible and adaptable — since an approach that may work in one context or culture, may not be relevant in a different one.</p><p>On the professional side, I’ve spent much of my career in logistics operations, supply chain management, and cloud infrastructure — most recently at Amazon. After nearly a decade managing Amazon fulfillment operations across the UK, Italy, and Canada, I shifted to Amazon Web Services. 
There I supported the organization’s second-largest region globally, delivering operational excellence for a rapidly expanding data center portfolio spanning tens of thousands of computer racks. I’ve found great personal fulfillment in figuring out how to deliver and operate infrastructure and services at a massive scale. So, returning to the broader need I mentioned: creating a safer, faster, more private Internet for the whole world is an absolutely fascinating challenge.</p><p>I am extremely grateful for the opportunity I had to participate in the growth and expansion of Amazon. But reflecting on all the Internet's needs and challenges, I realized that in my next role I wanted to make a big impact on those areas at the broadest possible scale.</p><p>With that in mind, Cloudflare was the obvious — and the most exciting — next step.</p><p>Cloudflare is the world's most connected cloud network, providing security, speed, reliability, and privacy to anything connected to the Internet — including websites, APIs, corporate networks, and distributed workforces. Our network sits within 50 milliseconds of 95% of the Internet-connected population globally. We’ve become the most trusted, efficient, and relied-upon network on the Internet. For someone interested in helping support the Internet’s role in our daily lives — and in the exciting logistical challenges which enable all of that — there’s no better place to be.</p><p>When I met the Cloudflare team, I was immediately drawn to the incredible pace at which they innovate and operate, as well as to their ambitious goal to help build a better Internet. Cloudflare as a whole is very principled in its approach: democratizing technologies, operating with a global mindset, and focusing on adopting the latest standards. I found this quite refreshing. 
Similarly, I appreciated the open communication and transparency culture both within and outside the organization, as well as the desire across the teams to continuously learn and adapt.</p><p>In the short time I’ve been here, I’ve already started working on many exciting aspects of our network’s growth. We recently announced the addition of <a href="/mid-2022-new-cities/">18 new cities</a> to our network, expanding our scope to over 270 cities globally. We’re also growing the number of <a href="/cloudflare-network-interconnect/">Cloudflare Network Interconnect (CNI)</a> locations across the world, to make it even easier for more customers to connect to our network.</p><p>In addition, I’m particularly thrilled to work with our team to deploy <a href="/introducing-r2-object-storage/">Cloudflare R2 Storage</a> and to lead the expansion of <a href="/cloudflare-for-offices/">Cloudflare for Offices</a>, which provides office traffic a direct connection to our network and Cloudflare services.</p><p>It’s an honor to join this talented, innovative, and ambitious team and to be part of Cloudflare’s important mission. I feel extremely fortunate to join the company at such a critical period of growth, and I am excited to help Cloudflare — and the Internet as a whole — realize their full potential.</p>
    <div>
      <h3>Rebecca Weekly - VP of Hardware Systems</h3>
      <a href="#rebecca-weekly-vp-of-hardware-systems">
        
      </a>
    </div>
    <p>I am overjoyed to join Cloudflare and apply my experience in semiconductor and system design and verification to design the next generation of solutions that will power the Internet.</p><p>I have happily spent my whole career in hardware because, to put it simply, integrated chips power the world. I’ve been fortunate to contribute to a variety of problems and use cases, including accelerating gas distribution models, improving graphics chips for gaming systems, validating ASICs targeting infrastructure and application acceleration, and designing CPUs and their systems for operation at hyperscale.</p><p>Over the course of that journey, I’ve realized that we are entering the “new golden age of computer architecture” as defined by John Hennessy and David Patterson in their February 2019 address to the Association for Computing Machinery. To summarize their nearly two-hour lecture is impossible, but I’ll risk it because it was a major influence on my making the leap to Cloudflare.</p><p>Hennessy and Patterson argue that evolving computational efficiency in light of the end of Dennard scaling and the slowing of Moore’s Law requires the industry to address the inherent inefficiencies in general-purpose ARM- and x86-based processors. They highlight three opportunities:</p><ol><li><p>High-level language performance optimization on existing infrastructure (we have optimized for decades for developer efficiency at the risk of massive inefficiencies in traditional CPU architectures).</p></li><li><p>Domain-specific architectures, which yield efficiencies through optimizing parallelism in the hardware for a specific computational domain.</p></li><li><p>The hybrid case of domain-specific languages yielding opportunities for domain-specific architectures, in order to accelerate infrastructure efficiency holistically.</p></li></ol><p>When considering my next step, I knew I wanted to help shape Hennessy and Patterson’s “golden age”. 
That meant being closer to application developers and working hand-in-hand with them to enable greater architectural optimization than either of us would be able to achieve on our own. The trouble is that such opportunities are increasingly rare. In many companies, hardware and software have been abstracted away from each other thanks to the rise of hyperscale cloud providers.</p><p>That’s exactly where Cloudflare comes in.</p><p>Cloudflare is helping build a better Internet. We’re doing so by combining deep software expertise — i.e., the security, performance, reliability, and privacy services our customers use — with an equivalent focus on hardware — i.e., the growth and increasing efficiency of the global network on which those services live. And we’re doing so on the broadest and most inclusive scale possible — serving everyone from large enterprises to mom-and-pop shops, often using open-source software and solutions. This openness has led to us serving over 32 million HTTP requests per second on average — <a href="/application-security/">a significant fraction of the entire Internet</a>.</p><p>For someone interested in exploring the future of architectural optimization through the intersection of software and hardware, being able to do it with the whole Internet as your sandbox is the ultimate opportunity.</p><p>From my experience as the Chairperson of the Open Compute Project Foundation, where we drive hyperscale innovation from the cloud to the edge, I felt great synergy in leading the Hardware Systems team here at Cloudflare. Together we are identifying, developing, delivering, and scaling the hardware systems that benefit the entire Internet, and you can bet we will enthusiastically share our findings to help shape the future of this industry.</p>
    <div>
      <h3>We are only getting started…</h3>
      <a href="#we-are-only-getting-started">
        
      </a>
    </div>
    <p>Come join us in helping build a better Internet. If you want to learn more about working at Cloudflare or explore the many career opportunities we have around the world, check out the links below.</p><p><a href="https://www.cloudflare.com/about-overview/">About Cloudflare</a> | <a href="https://www.cloudflare.com/careers/jobs/">Open Roles</a></p> ]]></content:encoded>
            <category><![CDATA[Life at Cloudflare]]></category>
            <category><![CDATA[Recruiting]]></category>
            <category><![CDATA[Careers]]></category>
            <guid isPermaLink="false">5Ie08eTkyTTaCoD72VvGNy</guid>
            <dc:creator>Marcelo Affonso</dc:creator>
            <dc:creator>Rebecca Weekly</dc:creator>
        </item>
    </channel>
</rss>