
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 03 Apr 2026 17:08:44 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Inside Gen 13: how we built our most powerful server yet]]></title>
            <link>https://blog.cloudflare.com/gen13-config/</link>
            <pubDate>Mon, 23 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's Gen 13 servers introduce AMD EPYC™ Turin 9965 processors and a transition to 100 GbE networking to meet growing traffic demands. In this technical deep dive, we explain the engineering rationale behind each major component selection. ]]></description>
<content:encoded><![CDATA[ <p>A few months ago, Cloudflare announced <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>the transition to FL2</u></a>, our Rust-based rewrite of Cloudflare's core request handling layer. This transition accelerates our ability to help build a better Internet for everyone. Alongside the software migration, Cloudflare has refreshed our server hardware design with improved capabilities and better efficiency to serve the evolving demands of our network and software stack. Gen 13 is built around a 192-core AMD EPYC™ Turin 9965 processor, 768 GB of DDR5-6400 memory, 24 TB of PCIe 5.0 NVMe storage, and a dual-port 100 GbE network interface card.</p><p>Gen 13 delivers:</p><ul><li><p>Up to 2x throughput compared to Gen 12 while staying within latency SLA</p></li><li><p>Up to 50% improvement in performance-per-watt efficiency, reducing data center expansion costs</p></li><li><p>Up to 60% higher throughput per rack while keeping the rack power budget constant</p></li><li><p>2x memory capacity, 1.5x storage capacity, 4x network bandwidth</p></li><li><p>PCIe encryption hardware support in addition to memory encryption</p></li><li><p>Improved support for more powerful, thermally demanding drop-in PCIe accelerators</p></li></ul><p>This blog post covers the engineering rationale behind each major component selection: what we evaluated, what we chose, and why.</p><table><tr><td><p>Generation</p></td><td><p>Gen 13 Compute</p></td><td><p>Previous Gen 12 Compute</p></td></tr><tr><td><p>Form Factor</p></td><td><p>2U1N, Single socket</p></td><td><p>2U1N, Single socket</p></td></tr><tr><td><p>Processor</p></td><td><p>AMD EPYC™ 9965 
Turin 192-Core Processor</p></td><td><p>AMD EPYC™ 9684X 
Genoa-X 96-Core Processor</p></td></tr><tr><td><p>Memory</p></td><td><p>768GB of DDR5-6400 x12 memory channel</p></td><td><p>384GB of DDR5-4800 x12 memory channel</p></td></tr><tr><td><p>Storage</p></td><td><p>x3 E1.S NVMe</p><p>
</p><p> Samsung PM9D3a 7.68TB / 
Micron 7600 Pro 7.68TB</p></td><td><p>x2 E1.S NVMe </p><p>
</p><p>Samsung PM9A3 7.68TB / 
Micron 7450 Pro 7.68TB</p></td></tr><tr><td><p>Network</p></td><td><p>Dual 100 GbE OCP 3.0 </p><p>
</p><p>Intel Ethernet Network Adapter E830-CDA2 /
NVIDIA Mellanox ConnectX-6 Dx</p></td><td><p>Dual 25 GbE OCP 3.0</p><p>
</p><p>Intel Ethernet Network Adapter E810-XXVDA2 / 
NVIDIA Mellanox ConnectX-6 Lx</p></td></tr><tr><td><p>System Management</p></td><td><p>DC-SCM 2.0 ASPEED AST2600 (BMC) + AST1060 (HRoT)</p></td><td><p>DC-SCM 2.0 ASPEED AST2600 (BMC) + AST1060 (HRoT)</p></td></tr><tr><td><p>Power Supply</p></td><td><p>1300W, Titanium Grade</p></td><td><p>800W, Titanium Grade</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Gawj2GP8s2CCZCWwNgBiB/587b0ed5ef65cf95cf178e5457150b6a/image3.png" />
          </figure><p><i><sup>Figure: Gen 13 server</sup></i></p>
    <div>
      <h2>CPU</h2>
      <a href="#cpu">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>AMD EPYC™ 9684X Genoa-X 96-Core (400W TDP, 1152 MB L3 Cache)</p></td></tr><tr><td><p>Gen 13</p></td><td><p>AMD EPYC™ 9965 Turin Dense 192-Core (500W TDP, 384 MB L3 Cache)</p></td></tr></table><p>During the design phase, we evaluated several 5th generation AMD EPYC™ Processors, code-named Turin, in Cloudflare’s hardware lab: AMD Turin 9755, AMD Turin 9845, and AMD Turin 9965. The table below summarizes the differences in <a href="https://www.amd.com/content/dam/amd/en/documents/epyc-business-docs/datasheets/amd-epyc-9005-series-processor-datasheet.pdf"><u>specifications</u></a> of the candidates for Gen 13 servers against the AMD Genoa-X 9684X used in our <a href="https://blog.cloudflare.com/gen-12-servers/"><u>Gen 12 servers</u></a>. Notably, all three candidates offer increases in core count but with smaller L3 cache per core. However, with the <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>migration to FL2</u></a>, the new workloads are <a href="https://blog.cloudflare.com/gen13-launch/"><u>less dependent on L3 cache and scale up well with the increased core count to achieve up to 100% increase in throughput</u></a>.</p><p>The three CPU candidates are designed to target different use cases: AMD Turin 9755 offers superior per-core performance, AMD Turin 9965 trades per-core performance for efficiency, and AMD Turin 9845 trades core count for lower socket power. We evaluated three CPUs in the production environment.</p><table><tr><td><p>CPU Model</p></td><td><p>AMD Genoa-X 9684X</p></td><td><p>AMD Turin 9755</p></td><td><p>AMD Turin 9845</p></td><td><p>AMD Turin 9965</p></td></tr><tr><td><p>For server platform</p></td><td><p>Gen 12</p></td><td><p>Gen 13 candidate</p></td><td><p>Gen 13 candidate</p></td><td><p>Gen 13 candidate</p></td></tr><tr><td><p># of CPU Cores</p></td><td><p>96</p></td><td><p>128</p></td><td><p>160</p></td><td><p>192</p></td></tr><tr><td><p># of Threads</p></td><td><p>192</p></td><td><p>256</p></td><td><p>320</p></td><td><p>384</p></td></tr><tr><td><p>Base Clock</p></td><td><p>2.4 GHz</p></td><td><p>2.7 GHz</p></td><td><p>2.1 GHz</p></td><td><p>2.25 GHz</p></td></tr><tr><td><p>Max Boost Clock</p></td><td><p>3.7 GHz</p></td><td><p>4.1 GHz</p></td><td><p>3.7 GHz</p></td><td><p>3.7 GHz</p></td></tr><tr><td><p>All Core Boost Clock</p></td><td><p>3.42 GHz</p></td><td><p>4.1 GHz</p></td><td><p>3.25 GHz</p></td><td><p>3.35 GHz</p></td></tr><tr><td><p>Total L3 Cache</p></td><td><p>1152 MB</p></td><td><p>512 MB</p></td><td><p>320 MB</p></td><td><p>384 MB</p></td></tr><tr><td><p>L3 cache per core</p></td><td><p>12 MB / core</p></td><td><p>4 MB / core</p></td><td><p>2 MB / core</p></td><td><p>2 MB / core</p></td></tr><tr><td><p>Maximum configurable TDP</p></td><td><p>400W</p></td><td><p>500W</p></td><td><p>390W</p></td><td><p>500W</p></td></tr></table>
    <div>
      <h3>Why AMD Turin 9965?</h3>
      <a href="#why-amd-turin-9965">
        
      </a>
    </div>
    <p>First, <b>FL2 ended the L3 cache crunch</b>.</p><p>L3 cache is the large, last-level cache shared among all CPU cores on the same compute die to store frequently used data. It bridges the gap between slow main memory external to the CPU, and the fast but smaller L1 and L2 cache on the CPU, reducing the latency for the CPU to access data.</p><p>Some may notice that the 9965 has only 2 MB of L3 cache per core, an 83.3% reduction from the 12 MB per core on Gen 12’s Genoa-X 9684X. Why trade away the very cache advantage that gave Gen 12 its edge? The answer lies in how our workloads have evolved.</p><p>Cloudflare has <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>migrated from FL1 to FL2</u></a>, a complete rewrite of our request handling layer in Rust. With the new software stack, Cloudflare’s request processing pipeline has become significantly less dependent on large L3 cache. FL2 workloads <a href="https://blog.cloudflare.com/gen13-launch/"><u>scale nearly linearly with core count</u></a>, and the 9965’s 192 cores provide a 2x increase in hardware threads over Gen 12.</p><p>Second, <b>performance per total cost of ownership (TCO)</b>. During production evaluation, the 9965’s 192 cores delivered the highest aggregate requests per second of the three candidates, and its performance-per-watt scaled favorably at 500W TDP, yielding superior rack-level TCO.</p><table><tr><td><p>
</p></td><td><p><b>Gen 12 </b></p></td><td><p><b>Gen 13 </b></p></td></tr><tr><td><p>Processor</p></td><td><p>AMD EPYC™ 4th Gen Genoa-X 9684X</p></td><td><p>AMD EPYC™ 5th Gen Turin 9965</p></td></tr><tr><td><p>Core count</p></td><td><p>96C/192T</p></td><td><p>192C/384T</p></td></tr><tr><td><p>FL throughput</p></td><td><p>Baseline</p></td><td><p>Up to +100%</p></td></tr><tr><td><p>Performance per watt</p></td><td><p>Baseline</p></td><td><p>Up to +50%</p></td></tr></table><p>Third, <b>operational simplicity</b>. Our operational teams have a strong preference for fewer, higher-density servers. Managing a fleet of 192-core machines means fewer nodes to provision, patch, and monitor per unit of compute delivered. This directly reduces operational overhead across our global network.</p><p>Finally, the platform is <b>forward compatible</b>. The AMD processor architecture supports DDR5-6400, PCIe Gen 5.0, and CXL 2.0 Type 3 memory across all SKUs. AMD Turin 9965 has the highest number of high-performing cores per socket in the industry, maximizing compute density per socket and keeping the platform competitive and relevant for years to come. By moving to AMD Turin 9965 from AMD Genoa-X 9684X, we also get longer security support from AMD, extending the useful life of Gen 13 servers before they become obsolete and need to be refreshed.</p>
    <div>
      <h2>Memory</h2>
      <a href="#memory">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>12x 32GB DDR5-4800 2Rx8 (384 GB total, 4 GB/core)</p></td></tr><tr><td><p>Gen 13</p></td><td><p>12x 64GB DDR5-6400 2Rx4 (768 GB total, 4 GB/core)</p></td></tr></table><p>Because the AMD Turin processor has twice the core count of the previous generation, it demands more memory resources, both in capacity and in bandwidth, to deliver throughput gains.</p>
    <div>
      <h3>Maximizing bandwidth with 12 channels</h3>
      <a href="#maximizing-bandwidth-with-12-channels">
        
      </a>
    </div>
    <p>The chosen AMD EPYC™ 9965 CPU supports twelve memory channels, and for Gen 13, we are populating every single one of them. We’ve selected 64 GB DDR5-6400 ECC RDIMMs in a “one DIMM per channel” (1DPC) configuration.</p><p>This setup provides 614 GB/s of peak memory bandwidth per socket, a 33.3% increase compared to our Gen 12 server platform. By utilizing all 12 channels, we ensure that the CPU is never “starved” for data, even during the most memory-intensive parallel workloads.</p><p>Populating all twelve channels in a balanced configuration — equal capacity per channel, with no mixed configurations — is common best practice. This matters operationally: AMD Turin processors interleave across all memory channels with the same DIMM type, same memory capacity and same rank configuration. Interleaving increases memory bandwidth by spreading contiguous memory access across all memory channels in the interleave set instead of sending all memory access to a single or a small subset of memory channels. </p>
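<p>The 614 GB/s figure follows directly from the channel count and transfer rate. Below is a quick back-of-envelope sketch in Python, assuming the standard 64-bit (8-byte) data width per DDR5 channel (ECC bits excluded):</p><pre><code># Back-of-envelope peak memory bandwidth for a 1DPC configuration.
def peak_bw_gbs(channels, mts, bytes_per_transfer=8):
    """Peak bandwidth in GB/s: channels * MT/s * bytes per transfer."""
    return channels * mts * bytes_per_transfer / 1000  # MB/s to GB/s

gen13 = peak_bw_gbs(channels=12, mts=6400)  # ~614.4 GB/s
gen12 = peak_bw_gbs(channels=12, mts=4800)  # ~460.8 GB/s
uplift = (gen13 / gen12 - 1) * 100          # ~33.3%

print(f"Gen 13: {gen13:.1f} GB/s, Gen 12: {gen12:.1f} GB/s, uplift: {uplift:.1f}%")
</code></pre>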
    <div>
      <h3>The 4 GB per core “sweet spot”</h3>
      <a href="#the-4-gb-per-core-sweet-spot">
        
      </a>
    </div>
<p>Our Gen 12 servers are configured with 4 GB per core. We revisited that decision as we designed Gen 13.</p><p>Cloudflare launches a lot of new products and services every month, and each new product or service demands an incremental amount of memory capacity. These demands accumulate over time and can create memory pressure if capacity is not sized appropriately.</p><p>Our initial requirement called for a memory-to-core ratio between 4 GB and 6 GB per core. With 192 cores on the AMD Turin 9965, that translates to a range of 768 GB to 1152 GB. Note that at higher capacities, DIMM module capacities typically increase in 16GB increments. With 12 channels in a 1DPC configuration, our options are 12x 48GB (576 GB), 12x 64GB (768 GB), or 12x 96GB (1152 GB).</p><ul><li><p>12x 48GB = 576 GB, or 1.5 GB/thread. The memory capacity of this configuration is too low; this would starve memory-hungry workloads and violate the 4 GB/core lower bound.</p></li><li><p>12x 96GB = 1152 GB, or 3.0 GB/thread. This would be a 50% capacity increase per core and would also result in higher power consumption and a substantial increase in cost, especially in the current market conditions where memory prices are 10x what they were a year ago.</p></li><li><p>12x 64GB = 768 GB, or 2.0 GB/thread (4 GB/core). This configuration is consistent with our Gen 12 memory-to-core ratio, and represents a 2x increase in memory capacity per server. Keeping the memory capacity configuration at 4 GB per core provides sufficient capacity for workloads that scale with core count, like our primary workload, FL, and provides sufficient memory capacity headroom for future growth without overprovisioning.</p></li></ul><p><a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>FL2 uses memory more efficiently</u></a> than FL1 did: our internal measurements show FL2 uses less than half the CPU of FL1, and far less than half the memory. The capacity freed up by the software stack migration provides ample headroom to support Cloudflare's growth for the next few years.</p><p>The decision: 12x 64GB for 768 GB total. This maintains the proven 4 GB/core ratio, provides a 2x total capacity increase over Gen 12, and stays within the DIMM cost curve sweet spot.</p>
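<p>To make the sizing math concrete, here is a small sketch that computes per-core and per-thread capacity for each 1DPC option; the 192-core and 384-thread figures come from the CPU table above:</p><pre><code># Per-core and per-thread memory capacity for each 1DPC DIMM option.
CORES, THREADS, CHANNELS = 192, 384, 12

for dimm_gb in (48, 64, 96):
    total_gb = CHANNELS * dimm_gb
    print(f"12x {dimm_gb} GB: {total_gb} GB total, "
          f"{total_gb / CORES:.1f} GB/core, {total_gb / THREADS:.1f} GB/thread")

# 12x 48 GB:  576 GB total, 3.0 GB/core, 1.5 GB/thread (below the 4 GB/core bound)
# 12x 64 GB:  768 GB total, 4.0 GB/core, 2.0 GB/thread (chosen configuration)
# 12x 96 GB: 1152 GB total, 6.0 GB/core, 3.0 GB/thread (more capacity and cost than needed)
</code></pre>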
    <div>
      <h3>Efficiency through dual rank</h3>
      <a href="#efficiency-through-dual-rank">
        
      </a>
    </div>
    <p>In Gen 12, we demonstrated that dual-rank DIMMs provide measurably higher memory throughput than single-rank modules, with advantages of up to 17.8% at a 1:1 read-write ratio. Dual-rank DIMMs are faster because they allow the memory controller to access one rank while another is refreshing. That same principle carries forward here.</p><p>Our requirement also calls for approximately 1 GB/s of memory bandwidth per hardware thread. With 614 GB/s of peak bandwidth across 384 threads, we deliver 1.6 GB/s per thread, comfortably exceeding the minimum. Production analysis has shown that Cloudflare workloads are not memory-bandwidth-bound, so we bank the headroom as margin for future workload growth.</p><p>By opting for 2Rx4 DDR5 RDIMMs at maximum supported 6400MT/s, we ensure we get the lowest latency and best performance from our Gen 13 platform memory configuration.</p>
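<p>A similarly simple check of the per-thread bandwidth requirement, reusing the peak-bandwidth figure computed above:</p><pre><code># Peak memory bandwidth per hardware thread vs. the ~1 GB/s requirement.
peak_gbs = 614.4           # 12 channels of DDR5-6400
threads = 384              # 192 cores, 2 threads each
required_per_thread = 1.0  # GB/s per hardware thread

per_thread = peak_gbs / threads
print(f"{per_thread:.2f} GB/s per thread, "
      f"{per_thread / required_per_thread:.1f}x the requirement")  # ~1.60 GB/s
</code></pre>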
    <div>
      <h2>Storage</h2>
      <a href="#storage">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>x2 E1.S NVMe PCIe 4.0, 16 TB total</p><p>Samsung PM9A3 7.68TB</p><p>Micron 7450 Pro 7.68TB</p></td></tr><tr><td><p>Gen 13</p></td><td><p>x3 E1.S NVMe PCIe 5.0, 24 TB total</p><p>Samsung PM9D3a 7.68TB</p><p>Micron 7600 Pro 7.68TB</p><p>+10x U.2 NVMe PCIe 5.0 option</p></td></tr></table><p>Our storage architecture underwent a transformation in Gen 12 when we pivoted from M.2 to EDSFF E1.S. For Gen 13, we are increasing the storage capacity and the bandwidth to align with the latest technology. We have also added a front drive bay for flexibility to add up to 10x U.2 drives to keep pace with Cloudflare storage product growth. </p>
    <div>
      <h3>The move to PCIe 5.0</h3>
      <a href="#the-move-to-pcie-5-0">
        
      </a>
    </div>
    <p>Gen 13 is configured with PCIe Gen 5.0 NVMe drives. While Gen 4.0 served us well, the move to Gen 5.0 ensures that our storage subsystem can serve data at improved latency, and keep up with increased storage bandwidth demand from the new processor. </p>
    <div>
      <h3>16 TB to 24 TB</h3>
      <a href="#16-tb-to-24-tb">
        
      </a>
    </div>
    <p>Beyond the speed increase, we are physically expanding the array from two to three NVMe drives. Our Gen 12 server platform was designed with four E1.S storage drive slots, but only two slots were populated with 8TB drives. The Gen 13 server platform uses the same design with four E1.S storage drive slots available, but with three slots populated with 8TB drives. Why add a third drive? This increases our storage capacity per server from 16TB to 24TB, ensuring we are expanding our global storage capacity to maintain and improve CDN cache performance. This supports growth projections for Durable Objects, Containers, and Quicksilver services, too.</p>
    <div>
      <h3>Front drive bay to support additional drives</h3>
      <a href="#front-drive-bay-to-support-additional-drives">
        
      </a>
    </div>
    <p>For Gen 13, the chassis is designed with a front drive bay that can support up to ten U.2 PCIe Gen 5.0 NVMe drives. The front drive bay provides the option for Cloudflare to use the same chassis across compute and storage platforms, as well as the flexibility to convert a compute SKU to a storage SKU when needed. </p>
    <div>
      <h3>Endurance and reliability</h3>
      <a href="#endurance-and-reliability">
        
      </a>
    </div>
<p>We design our servers for a 5-year operational life and require storage drive endurance sufficient to sustain 1 DWPD (Drive Writes Per Day) over the full server lifespan.</p><p>Both the Samsung PM9D3a and Micron 7600 Pro meet the 1 DWPD specification with hardware over-provisioning (OP) of approximately 7%. If future workload profiles demand higher endurance, we have the option to hold back additional user capacity to increase the effective OP.</p>
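<p>To put the 1 DWPD requirement in concrete terms, here is a rough sketch of the lifetime write volume it implies, assuming the 7.68 TB usable capacity and the 5-year operational life:</p><pre><code># Lifetime write volume implied by 1 DWPD over a 5-year operational life.
usable_tb = 7.68  # usable capacity per drive, TB
dwpd = 1.0        # drive writes per day
years = 5

lifetime_writes_pb = usable_tb * dwpd * 365 * years / 1000
print(f"~{lifetime_writes_pb:.1f} PB written per drive over {years} years")  # ~14 PB
</code></pre>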
    <div>
      <h3>NVMe 2.0 and OCP NVMe 2.0 compliance</h3>
      <a href="#nvme-2-0-and-ocp-nvme-2-0-compliance">
        
      </a>
    </div>
    <p>Both the Samsung PM9D3a and Micron 7600 adopt the NVMe 2.0 specification (up from NVMe 1.4) and the OCP NVMe Cloud SSD Specification 2.0. Key improvements include Zoned Namespaces (ZNS) for better write amplification management, Simple Copy Command for intra-device data movement without crossing the PCIe bus, and enhanced Command and Feature Lockdown for tighter security controls. The OCP 2.0 spec also adds deeper telemetry and debug capabilities purpose-built for datacenter operations, which aligns with our emphasis on fleet-wide manageability.</p>
    <div>
      <h3>Thermal efficiency</h3>
      <a href="#thermal-efficiency">
        
      </a>
    </div>
    <p>The storage drives will continue to be in the E1.S 15mm form factor. Its high-surface-area design is essential for cooling these new Gen 5.0 controllers, which can pull upwards of 25W under sustained heavy I/O. The 2U chassis provides ample airflow over the E1.S drives as well as U.2 drive bays, a design advantage we validated in Gen 12 when we made the decision to move from 1U to 2U.</p>
    <div>
      <h2>Network</h2>
      <a href="#network">
        
      </a>
    </div>
<table><tr><td><p>Gen 12</p></td><td><p>Dual 25 GbE port OCP 3.0 NIC </p><p>Intel E810-XXVDA2</p><p>NVIDIA Mellanox ConnectX-6 Lx</p></td></tr><tr><td><p>Gen 13</p></td><td><p>Dual 100 GbE port OCP 3.0 NIC</p><p>Intel E830-CDA2</p><p>NVIDIA Mellanox ConnectX-6 Dx</p></td></tr></table><p>For more than eight years, dual 25 GbE has been the backbone of our fleet. <a href="https://blog.cloudflare.com/a-tour-inside-cloudflares-g9-servers/"><u>Since 2018</u></a> it has served us well, but as our CPUs have improved to serve more requests and our products have scaled, we have officially hit the wall. For Gen 13, we are quadrupling our per-port bandwidth.</p>
    <div>
      <h3>Why 100 GbE and why now?</h3>
      <a href="#why-100-gbe-and-why-now">
        
      </a>
    </div>
<p>Network Interface Card (NIC) bandwidth must keep pace with compute performance growth. With 192 modern cores, our 25 GbE links will become a measurable bottleneck. Production data collected over a week from our colocations worldwide showed that, on our Gen 12 servers, P95 bandwidth per port is consistently &gt;50% of available bandwidth. With throughput doubling per server on Gen 13, we are at risk of saturating the NIC bandwidth.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2lxP5Vy6y6CzCk1rE9FKVU/064d9e5392a08e92b38bca637d053573/image4.png" />
          </figure><p><sup><i>Figure: on Gen 12, P95 bandwidth per port is consistently &gt;50% of available bandwidth</i></sup></p><p>The decision to go to 100 GbE rather than 50 GbE was driven by industry economics: 50 GbE transceiver volumes remain low in the industry, making them a poor supply chain bet. Dual 100 GbE ports also give us 200 Gb/s of aggregate bandwidth per server, future-proofing against the next several years of traffic growth.</p>
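<p>A simple projection shows why 25 GbE no longer fits, assuming the observed P95 utilization above 50% and the roughly 2x per-server throughput expected on Gen 13:</p><pre><code># Rough projection of per-port bandwidth if Gen 12 traffic doubles on Gen 13.
gen12_port_gbps = 25
p95_utilization = 0.5      # observed P95 is consistently above 50% of the link
throughput_multiplier = 2  # Gen 13 serves up to 2x the requests per server

projected_p95_gbps = gen12_port_gbps * p95_utilization * throughput_multiplier
print(f"Projected P95 per port: at least {projected_p95_gbps:.0f} Gb/s, "
      "saturating a 25 GbE link but comfortable on 100 GbE")
</code></pre>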
    <div>
      <h3>Hardware choices and compatibility</h3>
      <a href="#hardware-choices-and-compatibility">
        
      </a>
    </div>
    <p>We are maintaining our dual-vendor strategy to ensure supply chain resilience, a lesson hard-learned during the pandemic when single-sourcing the Gen 11 NIC left us scrambling.</p><p>Both NICs are compliant with <a href="https://www.servethehome.com/ocp-nic-3-0-form-factors-quick-guide-intel-broadcom-nvidia-meta-inspur-dell-emc-hpe-lenovo-gigabyte-supermicro/"><u>OCP 3.0 SFF/TSFF</u></a> form factor with the integrated pull tab, maintaining chassis commonality with Gen 12 and ensuring field technicians need no new tools or training for swaps.</p>
    <div>
      <h3>PCIe Allocation</h3>
      <a href="#pcie-allocation">
        
      </a>
    </div>
<p>The OCP 3.0 NIC slot is allocated PCIe 4.0 x16 lanes on the motherboard, providing 256 Gb/s of raw bandwidth in each direction, more than enough for dual 100 GbE (200 Gb/s aggregate) with room to spare.</p>
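<p>As a rough sanity check on that allocation, assuming PCIe 4.0's 16 GT/s per lane and 128b/130b line encoding:</p><pre><code># Approximate usable PCIe 4.0 x16 bandwidth per direction vs. dual 100 GbE demand.
lanes = 16
gts_per_lane = 16.0   # PCIe 4.0 raw signaling rate, GT/s
encoding = 128 / 130  # 128b/130b line-encoding overhead

usable_gbps_per_direction = lanes * gts_per_lane * encoding  # ~252 Gb/s
nic_aggregate_gbps = 2 * 100                                 # dual 100 GbE ports

print(f"~{usable_gbps_per_direction:.0f} Gb/s usable per direction "
      f"vs. {nic_aggregate_gbps} Gb/s of NIC ports")
</code></pre>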
    <div>
      <h2>Management</h2>
      <a href="#management">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p><a href="https://blog.cloudflare.com/introducing-the-project-argus-datacenter-ready-secure-control-module-design-specification/"><u>Project Argus</u></a> Data Center Secure Control Module 2.0</p></td></tr><tr><td><p>Gen 13</p></td><td><p><a href="https://blog.cloudflare.com/introducing-the-project-argus-datacenter-ready-secure-control-module-design-specification/"><u>Project Argus</u></a> Data Center Secure Control Module 2.0</p><p>PCIe encryption</p></td></tr></table><p>We are maintaining the architectural shift, introduced in Gen 12, of separating management and security-related components from the motherboard onto the <a href="https://blog.cloudflare.com/introducing-the-project-argus-datacenter-ready-secure-control-module-design-specification/"><u>Project Argus</u></a> Data Center Secure Control Module 2.0.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6F3XH0uQvBry9LJkZlVZOi/42f507d0d46d276db1e3724b21ea49dc/image1.png" />
          </figure><p><sup><i>Figure: Project Argus DC-SCM 2.0</i></sup></p>
    <div>
      <h3>Continuity with DC-SCM 2.0</h3>
      <a href="#continuity-with-dc-scm-2-0">
        
      </a>
    </div>
    <p>We are carrying forward the Data Center Secure Control Module 2.0 (DC-SCM 2.0) standard. By decoupling management and security functions from the motherboard, we ensure that the “brains” of the server’s security stay modular and protected.</p><p>The DC-SCM module houses our most critical components:</p><ul><li><p>Basic Input/Output System (BIOS)</p></li><li><p>Baseboard Management Controller (BMC)</p></li><li><p>Hardware Root of Trust (HRoT) and TPM (Infineon SLB 9672)</p></li><li><p>Dual BMC/BIOS flash chips for redundancy</p></li></ul>
    <div>
      <h3>Why we are staying the course with DC-SCM 2.0</h3>
      <a href="#why-we-are-staying-the-course-with-dc-scm-2-0">
        
      </a>
    </div>
<p>The decision to keep this architecture for Gen 13 is driven by the proven security gains we saw in the previous generation. By offloading these functions to a dedicated module, we maintain:</p><ul><li><p><b>Rapid recovery</b>: Dual image redundancy allows for near-instant restoration of BIOS/UEFI and BMC firmware if an accidental corruption or a malicious update is detected.</p></li><li><p><b>Physical resilience</b>: The Gen 13 chassis also moves the intrusion detection mechanism further from the flat edge of the chassis, making physical tampering harder.</p></li><li><p><b>PCIe encryption</b>: In addition to TSME (Transparent Secure Memory Encryption) for CPU-to-memory encryption, which has been enabled since our Gen 10 platforms, the AMD Turin 9965 processor in Gen 13 extends encryption to PCIe traffic, ensuring data is protected in transit across every bus in the system.</p></li><li><p><b>Operational consistency</b>: Sticking with the Gen 12 management stack means our security audits, deployment, provisioning, and standard operating procedures remain fully compatible. </p></li></ul>
    <div>
      <h2>Power</h2>
      <a href="#power">
        
      </a>
    </div>
<table><tr><td><p>Gen 12</p></td><td><p>800W 80 PLUS Titanium CRPS</p></td></tr><tr><td><p>Gen 13</p></td><td><p>1300W 80 PLUS Titanium CRPS</p></td></tr></table><p>As we upgrade the compute and networking capability of the server, the power envelope of our servers has naturally expanded. Gen 13 servers are equipped with larger power supplies to deliver the power needed.</p>
    <div>
      <h3>The jump to 1300W</h3>
      <a href="#the-jump-to-1300w">
        
      </a>
    </div>
<p>While our Gen 12 nodes operated comfortably with 800W 80 PLUS Titanium CRPS (Common Redundant Power Supply), the Gen 13 specification requires a larger power supply. We have selected a 1300W 80 PLUS Titanium CRPS.</p><p>Power consumption of Gen 13 during typical operation has risen to 850W, a 250W increase over the 600W seen in Gen 12. The primary contributors are the 500W TDP CPU (up from 400W), the doubling of memory capacity, and the additional NVMe drive.</p><p>Why 1300W instead of 1000W? The current PSU ecosystem lacks viable, high-efficiency options at 1000W. To ensure supply chain reliability, we moved to the next industry-standard tier of 1300W. </p><p><a href="https://eur-lex.europa.eu/eli/reg/2019/424/oj/eng"><u>EU Lot 9</u></a> is a regulation that requires servers deployed in the European Union to have power supplies whose efficiency at 10%, 20%, 50%, and 100% load is at or above the thresholds specified in the regulation. These thresholds match the <a href="https://www.clearesult.com/80plus/80plus-psu-ratings-explained"><u>80 PLUS Power Supply certification program</u></a> Titanium grade PSU requirements. We chose a Titanium grade PSU for Gen 13 to maintain full compliance with EU Lot 9, ensuring that the servers can be deployed in our European data centers and beyond. </p>
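<p>A small sketch of how the typical draw maps onto the selected PSU, using the 600W/800W and 850W/1300W figures above:</p><pre><code># Typical PSU load factor for Gen 12 vs. Gen 13.
configs = {
    "Gen 12": {"typical_w": 600, "psu_w": 800},
    "Gen 13": {"typical_w": 850, "psu_w": 1300},
}

for gen, c in configs.items():
    load_pct = c["typical_w"] / c["psu_w"] * 100
    print(f"{gen}: {c['typical_w']} W typical on a {c['psu_w']} W PSU (~{load_pct:.0f}% load)")

# Gen 12: ~75% load; Gen 13: ~65% load, leaving headroom for drop-in
# accelerators while staying in the PSU's efficient operating band.
</code></pre>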
    <div>
      <h3>Thermal design: 2U pays dividends again</h3>
      <a href="#thermal-design-2u-pays-dividends-again">
        
      </a>
    </div>
    <p>The 2U1N form factor we adopted in Gen 12 continues to pay dividends. Gen 13 uses 5x 80mm fans (up from 4x in Gen 12) to handle the increased thermal load from the 500W CPU. The larger fan volume, combined with the 2U chassis airflow characteristics, means fans operate well below maximum duty cycle at typical ambient temperatures, keeping fan power in the &lt; 50W range per fan.</p>
    <div>
      <h2>Drop-in accelerator support</h2>
      <a href="#drop-in-accelerator-support">
        
      </a>
    </div>
<table><tr><td><p>Gen 12</p></td><td><p>x2 single width FHFL or x1 double width FHFL</p></td></tr><tr><td><p>Gen 13</p></td><td><p>x2 double width FHFL</p></td></tr></table><p>Maintaining the modularity of our fleet is a core requirement for our server design. This requirement enabled Cloudflare to quickly retrofit and <a href="https://blog.cloudflare.com/workers-ai/#a-road-to-global-gpu-coverage"><u>deploy GPUs globally to more than 100 cities in 2024</u></a>. In Gen 13, we continue to support high-performance PCIe add-in cards.</p><p>On Gen 13, the 2U chassis layout is updated and configured to support more demanding power and thermal requirements. While Gen 12 was limited to a single double-width GPU, the Gen 13 architecture now supports two double-width PCIe cards.</p>
    <div>
      <h2>A launchpad to scale Cloudflare to greater heights</h2>
      <a href="#a-launchpad-to-scale-cloudflare-to-greater-heights">
        
      </a>
    </div>
    <p>Every generation of Cloudflare servers is an exercise in balancing competing constraints: performance versus power, capacity versus cost, flexibility versus simplicity. Gen 13 comes with 2x core count, 2x memory capacity, 4x network bandwidth, 1.5x storage capacity, and future-proofing for accelerator deployments — all while improving total cost of ownership and maintaining a robust management feature set and security posture that our global fleet demands.</p><p>Gen 13 servers are fully qualified and will be deployed to serve millions of requests across Cloudflare’s global network in more than 330 cities. As always, Cloudflare’s journey to serve the Internet as efficiently as possible does not end here. As the deployment of Gen 13 begins, we are planning the architecture for Gen 14.</p><p>If you are excited about helping build a better Internet, come join us. <a href="https://www.cloudflare.com/careers/jobs/"><u>We are hiring</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Infrastructure]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[AMD]]></category>
            <guid isPermaLink="false">7KkjVfneO6PwoHTEAiSYVM</guid>
            <dc:creator>Syona Sarma</dc:creator>
            <dc:creator>JQ Lau</dc:creator>
            <dc:creator>Ma Xiong</dc:creator>
            <dc:creator>Victor Hwang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 12th Generation servers — 145% more performant and 63% more efficient]]></title>
            <link>https://blog.cloudflare.com/gen-12-servers/</link>
            <pubDate>Wed, 25 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is thrilled to announce the general deployment of our next generation of server — Gen 12 powered by AMD Genoa-X processors. This new generation of server focuses on delivering exceptional performance across all Cloudflare services, enhanced support for AI/ML workloads, significant strides in power efficiency, and improved security features. ]]></description>
<content:encoded><![CDATA[ <p>Cloudflare is thrilled to announce the general deployment of our next generation of servers — Gen 12 powered by AMD EPYC 9684X (code name “Genoa-X”) processors. This next generation focuses on delivering exceptional performance across all Cloudflare services, enhanced support for AI/ML workloads, significant strides in power efficiency, and improved security features.</p><p>Here are some key performance indicators and feature improvements that this generation delivers as compared to the <a href="https://blog.cloudflare.com/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/"><u>prior generation</u></a>: </p><p>Beginning with performance: thanks to close engineering collaboration between Cloudflare and AMD on optimization, Gen 12 servers can serve more than twice as many requests per second (RPS) as Gen 11 servers, resulting in lower Cloudflare infrastructure build-out costs.</p><p>Next, our power efficiency has improved significantly, by more than 60% in RPS per watt as compared to the prior generation. As Cloudflare continues to expand our infrastructure footprint, the improved efficiency helps reduce Cloudflare's operational expenditure and carbon footprint as a percentage of our fleet size.</p><p>Third, in response to the growing demand for AI capabilities, we've updated the thermal-mechanical design of our Gen 12 server to support more powerful GPUs. This aligns with the <a href="https://www.cloudflare.com/lp/pg-ai/"><u>Workers AI</u></a> objective to support larger large language models and increase throughput for smaller models. This enhancement underscores our ongoing commitment to advancing AI inference capabilities.</p><p>Fourth, to underscore our security-first position as a company, we've integrated hardware <a href="https://trustedcomputinggroup.org/about/what-is-a-root-of-trust-rot/"><u>root of trust</u></a> (HRoT) capabilities to ensure the integrity of boot firmware and board management controller firmware. Continuing to embrace open standards, the baseboard management and security controller (Data Center Secure Control Module or <a href="https://drive.google.com/file/d/13BxuseSrKo647hjIXjp087ei8l5QQVb0/view"><u>OCP DC-SCM</u></a>) that we've designed into our systems is modular and vendor-agnostic, enabling a unified <a href="https://www.openbmc.org/"><u>openBMC</u></a> image, quicker prototyping, and reuse.</p><p>Finally, given the increasing importance of supply assurance and reliability in infrastructure deployments, our approach includes a robust multi-vendor strategy to mitigate supply chain risks, ensuring continuity and resiliency of our infrastructure deployment.</p><p>Cloudflare is dedicated to constantly improving our server fleet, empowering businesses worldwide with enhanced performance, efficiency, and security.</p>
    <div>
      <h2>Gen 12 Servers </h2>
      <a href="#gen-12-servers">
        
      </a>
    </div>
    <p>Let's take a closer look at our Gen 12 server. The server is powered by a 4th generation AMD EPYC Processor, paired with 384 GB of DDR5 RAM, 16 TB of NVMe storage, a dual-port 25 GbE NIC, and two 800 watt power supply units.</p>
<div><table><thead>
  <tr>
    <th><span>Generation</span></th>
    <th><span>Gen 12 Compute</span></th>
    <th><span>Previous Gen 11 Compute</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Form Factor</span></td>
    <td><span>2U1N - Single socket</span></td>
    <td><span>1U1N - Single socket</span></td>
  </tr>
  <tr>
    <td><span>Processor</span></td>
    <td><span>AMD EPYC 9684X Genoa-X 96-Core Processor</span></td>
    <td><span>AMD EPYC 7713 Milan 64-Core Processor</span></td>
  </tr>
  <tr>
    <td><span>Memory</span></td>
    <td><span>384GB of DDR5-4800</span><br /><span>x12 memory channel</span></td>
    <td><span>384GB of DDR4-3200</span><br /><span>x8 memory channel</span></td>
  </tr>
  <tr>
    <td><span>Storage</span></td>
    <td><span>x2 E1.S NVMe</span><br /><span>Samsung PM9A3 7.68TB / Micron 7450 Pro 7.68TB</span></td>
    <td><span>x2 M.2 NVMe</span><br /><span>2x Samsung PM9A3 x 1.92TB</span></td>
  </tr>
  <tr>
    <td><span>Network</span></td>
    <td><span>Dual 25 GbE OCP 3.0 </span><br /><span>Intel Ethernet Network Adapter E810-XXVDA2 / NVIDIA Mellanox ConnectX-6 Lx</span></td>
    <td><span>Dual 25 GbE OCP 2.0</span><br /><span>Mellanox ConnectX-4 dual-port 25G</span></td>
  </tr>
  <tr>
    <td><span>System Management</span></td>
    <td><span>DC-SCM 2.0</span><br /><span>ASPEED AST2600 (BMC) + AST1060 (HRoT)</span></td>
    <td><span>ASPEED AST2500 (BMC)</span></td>
  </tr>
  <tr>
    <td><span>Power Supply</span></td>
    <td><span>800W - Titanium Grade</span></td>
    <td><span>650W - Titanium Grade</span></td>
  </tr>
</tbody></table></div>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ywinOgSFpevEcQSZLhESv/b61d70a1504b4d873d0bbf2e83221bf6/BLOG-2116_2.png" />
          </figure><p><sup><i>Cloudflare Gen 12 server</i></sup></p>
    <div>
      <h3>CPU</h3>
      <a href="#cpu">
        
      </a>
    </div>
<p>During the design phase, we conducted an extensive survey of the CPU landscape to understand the options available for shaping the future of Cloudflare's server technology to match the needs of our customers. We evaluated many candidates in the lab, and short-listed three standout CPU candidates from the 4th generation AMD EPYC Processor lineup for production evaluation: Genoa 9654, Bergamo 9754, and Genoa-X 9684X. The table below summarizes the differences in <a href="https://www.amd.com/content/dam/amd/en/documents/products/epyc/epyc-9004-series-processors-data-sheet.pdf"><u>specifications</u></a> of the short-listed candidates for Gen 12 servers against the AMD EPYC 7713 used in our Gen 11 servers. Notably, all three candidates offer a significant increase in core count and a marked increase in all-core boost clock frequency.</p>
<div><table><thead>
  <tr>
    <th><span>CPU Model</span></th>
    <th><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span>AMD EPYC 7713</span></a></th>
    <th><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span>AMD EPYC 9654</span></a></th>
    <th><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span>AMD EPYC 9754</span></a></th>
    <th><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span>AMD EPYC 9684X</span></a></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Series</span></td>
    <td><span>Milan</span></td>
    <td><span>Genoa</span></td>
    <td><span>Bergamo</span></td>
    <td><span>Genoa-X</span></td>
  </tr>
  <tr>
    <td><span># of CPU Cores</span></td>
    <td><span>64</span></td>
    <td><span>96</span></td>
    <td><span>128</span></td>
    <td><span>96</span></td>
  </tr>
  <tr>
    <td><span># of Threads</span></td>
    <td><span>128</span></td>
    <td><span>192</span></td>
    <td><span>256</span></td>
    <td><span>192</span></td>
  </tr>
  <tr>
    <td><span>Base Clock</span></td>
    <td><span>2.0 GHz</span></td>
    <td><span>2.4 GHz</span></td>
    <td><span>2.25 GHz</span></td>
    <td><span>2.4 GHz</span></td>
  </tr>
  <tr>
    <td><span>Max Boost Clock</span></td>
    <td><span>3.67 GHz</span></td>
    <td><span>3.7 GHz</span></td>
    <td><span>3.1 GHz</span></td>
    <td><span>3.7 GHz</span></td>
  </tr>
  <tr>
    <td><span>All Core Boost Clock</span></td>
    <td><span>2.7 GHz *</span></td>
    <td><span>3.55 GHz</span></td>
    <td><span>3.1 GHz</span></td>
    <td><span>3.42 GHz</span></td>
  </tr>
  <tr>
    <td><span>Total L3 Cache</span></td>
    <td><span>256 MB</span></td>
    <td><span>384 MB</span></td>
    <td><span>256 MB</span></td>
    <td><span>1152 MB</span></td>
  </tr>
  <tr>
    <td><span>L3 cache per core</span></td>
    <td><span>4MB / core</span></td>
    <td><span>4MB / core</span></td>
    <td><span>2MB / core</span></td>
    <td><span>12MB / core</span></td>
  </tr>
  <tr>
    <td><span>Maximum configurable TDP</span></td>
    <td><span>240W</span></td>
    <td><span>400W</span></td>
    <td><span>400W</span></td>
    <td><span>400W</span></td>
  </tr>
</tbody></table></div><p><sub>*Note: The AMD EPYC 7713 all-core boost clock frequency of 2.7 GHz is not an official specification of the CPU, but is based on data collected from the Cloudflare production fleet.</sub></p><p>During production evaluation, the configurations of all three CPUs were optimized to the best of our knowledge, including thermal design power (TDP) configured to 400W for maximum performance. The servers were set up to run the same processes and services as any other server we have in production, which makes for a great side-by-side comparison. </p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Milan 7713</span></th>
    <th><span>Genoa 9654</span></th>
    <th><span>Bergamo 9754</span></th>
    <th><span>Genoa-X 9684X</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Production performance (request per second) multiplier</span></td>
    <td><span>1x</span></td>
    <td><span>2x</span></td>
    <td><span>2.15x</span></td>
    <td><span>2.45x</span></td>
  </tr>
  <tr>
    <td><span>Production efficiency (request per second per watt) multiplier</span></td>
    <td><span>1x</span></td>
    <td><span>1.33x</span></td>
    <td><span>1.38x</span></td>
    <td><span>1.63x</span></td>
  </tr>
</tbody></table></div>
    <div>
      <h4>AMD EPYC Genoa-X in Cloudflare Gen 12 server</h4>
      <a href="#amd-epyc-genoa-x-in-cloudflare-gen-12-server">
        
      </a>
    </div>
<p>Each of these CPUs outperforms the previous generation of processors by at least 2x. AMD EPYC 9684X Genoa-X with 3D V-cache technology gave us the greatest performance improvement, at 2.45x, when compared against our Gen 11 servers with AMD EPYC 7713 Milan.</p><p>Comparing the performance between Genoa-X 9684X and Genoa 9654, we see a ~22.5% performance delta. The primary difference between the two CPUs is the amount of L3 cache available on the CPU. Genoa-X 9684X has 1152 MB of L3 cache, three times that of the Genoa 9654 with 384 MB of L3 cache. Cloudflare workloads benefit from having more last-level cache accessible, avoiding the much larger latency penalty associated with fetching data from memory.</p><p>The Genoa-X 9684X delivered ~22.5% better performance while consuming the same 400W of power as the Genoa 9654. The 3x larger L3 cache does consume additional power, but only at the expense of about 3% of the highest achievable all-core boost frequency on the Genoa-X 9684X, a favorable trade-off for Cloudflare workloads.</p><p>More importantly, the Genoa-X 9684X delivered a 145% performance improvement with only a 50% increase in system power, a 63% improvement in power efficiency that will help drive down operational expenditure tremendously. It is important to note that even though a big portion of the power efficiency is due to the CPU, it needs to be paired with optimal thermal-mechanical design to realize the full benefit. Earlier last year, <a href="https://blog.cloudflare.com/cloudflare-gen-12-server-bigger-better-cooler-in-a-2u1n-form-factor/"><u>we made the thermal-mechanical design choice to double the height of the server chassis to optimize rack density and cooling efficiency across our global data centers</u></a>. We estimated that moving from 1U to 2U would reduce fan power by 150W, which would decrease system power from 750 watts to 600 watts. Guess what? We were right — a Gen 12 server consumes 600 watts per system at a typical ambient temperature of 25°C.</p><p>While high performance often comes at a higher price, fortunately the AMD EPYC 9684X offers an excellent balance between cost and capability. A server designed with this CPU provides top-tier performance without necessitating a huge financial outlay, resulting in a good Total Cost of Ownership improvement for Cloudflare.</p>
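<p>The efficiency figure follows from the performance and power ratios; a quick back-of-envelope check:</p><pre><code># Requests-per-second-per-watt improvement implied by 2.45x performance at 1.5x power.
perf_multiplier = 2.45   # Genoa-X 9684X vs. Gen 11 Milan 7713, requests per second
power_multiplier = 1.5   # 600 W vs. 400 W system power

efficiency_multiplier = perf_multiplier / power_multiplier
improvement_pct = (efficiency_multiplier - 1) * 100
print(f"Efficiency: {efficiency_multiplier:.2f}x (~{improvement_pct:.0f}% improvement)")
# ~1.63x, i.e. the ~63% power efficiency gain quoted above
</code></pre>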
    <div>
      <h3>Memory</h3>
      <a href="#memory">
        
      </a>
    </div>
<p>The AMD Genoa-X CPU supports twelve memory channels of DDR5 RAM at up to 4800 megatransfers per second (MT/s), for a per-socket memory bandwidth of 460.8 GB/s. The twelve channels are fully utilized with 32 GB ECC 2Rx8 DDR5 RDIMMs in a one-DIMM-per-channel configuration, for a combined total memory capacity of 384 GB. </p><p>Choosing the optimal memory capacity is a balancing act, as maintaining an optimal memory-to-core ratio is important to make sure CPU capacity or memory capacity is not wasted. Some may remember that our Gen 11 servers with 64-core AMD EPYC 7713 CPUs are also configured with 384 GB of memory, which is about 6 GB per core. So why did we choose to configure our Gen 12 servers with 384 GB of memory when the core count is growing to 96 cores? Great question! A lot of memory optimization work has happened since we introduced Gen 11, including some that we blogged about, like <a href="https://blog.cloudflare.com/scalable-machine-learning-at-cloudflare/"><u>Bot Management code optimization</u></a> and <a href="https://blog.cloudflare.com/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/"><u>our transition to highly efficient Pingora</u></a>. In addition, each service has a memory allocation that is sized for optimal performance. The per-service memory allocation is programmed and monitored utilizing Linux control group resource management features. When sizing memory capacity for Gen 12, we consulted with the team that monitors resource allocation and surveyed memory utilization metrics collected from our fleet. The result of the analysis is that the optimal memory-to-core ratio is 4 GB per CPU core, or 384 GB of total memory capacity. This configuration is validated in production. We chose dual rank memory modules over single rank memory modules because they have higher memory throughput, which improves server performance (read more about <a href="https://blog.cloudflare.com/ddr4-memory-organization-and-how-it-affects-memory-bandwidth/"><u>memory module organization and its effect on memory bandwidth</u></a>). </p><p>The table below shows the result of running the <a href="https://www.intel.com/content/www/us/en/developer/articles/tool/intelr-memory-latency-checker.html"><u>Intel Memory Latency Checker (MLC)</u></a> tool to measure peak memory bandwidth for the system and to compare memory throughput between 12 channels of dual-rank (2Rx8) 32 GB DIMMs and 12 channels of single-rank (1Rx4) 32 GB DIMMs. Dual rank DIMMs have slightly higher (1.8%) read memory bandwidth, but noticeably higher write bandwidth. As the write ratio increased from 25% to 50%, the dual rank advantage grew by about 10 percentage points.</p>
<div><table><thead>
  <tr>
    <th><span>Benchmark</span></th>
    <th><span>Dual rank advantage over single rank</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Intel MLC ALL Reads</span></td>
    <td><span>101.8%</span></td>
  </tr>
  <tr>
    <td><span>Intel MLC 3:1 Reads-Writes</span></td>
    <td><span>107.7%</span></td>
  </tr>
  <tr>
    <td><span>Intel MLC 2:1 Reads-Writes</span></td>
    <td><span>112.9%</span></td>
  </tr>
  <tr>
    <td><span>Intel MLC 1:1 Reads-Writes</span></td>
    <td><span>117.8%</span></td>
  </tr>
  <tr>
    <td><span>Intel MLC Stream-triad like</span></td>
    <td><span>108.6%</span></td>
  </tr>
</tbody></table></div><p>The table below shows the result of running the <a href="https://www.amd.com/en/developer/zen-software-studio/applications/spack/stream-benchmark.html"><u>AMD STREAM benchmark</u></a> to measure sustainable main memory bandwidth in MB/s and the corresponding computation rate for simple vector kernels. In all 4 types of vector kernels, dual rank DIMMs provide a noticeable advantage over single rank DIMMs.</p>
<div><table><thead>
  <tr>
    <th><span>Benchmark</span></th>
    <th><span>Dual rank advantage over single rank</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Stream Copy</span></td>
    <td><span>115.44%</span></td>
  </tr>
  <tr>
    <td><span>Stream Scale</span></td>
    <td><span>111.22%</span></td>
  </tr>
  <tr>
    <td><span>Stream Add</span></td>
    <td><span>109.06%</span></td>
  </tr>
  <tr>
    <td><span>Stream Triad</span></td>
    <td><span>107.70%</span></td>
  </tr>
</tbody></table></div>
    <div>
      <h3>Storage</h3>
      <a href="#storage">
        
      </a>
    </div>
    <p>Cloudflare’s Gen X server and Gen 11 server support <a href="https://en.wikipedia.org/wiki/M.2"><u>M.2</u></a> form factor drives. We liked the M.2 form factor mainly because it was compact. The M.2 specification was introduced in 2012, but today, the connector system is dated and the industry has concerns about its ability to maintain signal integrity with the high speed signal specified by <a href="https://www.xda-developers.com/pcie-5/"><u>PCIe 5.0</u></a> and <a href="https://pcisig.com/pci-express-6.0-specification"><u>PCIe 6.0</u></a> specifications. The 8.25W thermal limit of the M.2 form factor also limits the number of flash dies that can be fitted, which limits the maximum supported capacity per drive. To address these concerns, the industry has introduced the <a href="https://americas.kioxia.com/content/dam/kioxia/en-us/business/ssd/data-center-ssd/asset/KIOXIA_Meta_Microsoft_EDSFF_E1_S_Intro_White_Paper.pdf"><u>E1.S</u></a> specification and is transitioning from the M.2 form factor to the E1.S form factor. </p><p>In Gen 12, we are making the change to the <a href="https://www.snia.org/forums/cmsi/knowledge/formfactors#EDSFF"><u>EDSFF</u></a> E1 form factor, more specifically the E1.S 15mm. E1.S 15mm, though still in a compact form factor, provides more space to fit more flash dies for larger capacity support. The form factor also has better cooling design to support more than 25W of sustained power.</p><p>While the AMD Genoa-X CPU supports 128 PCIe 5.0 lanes, we continue to use NVMe devices with PCIe Gen 4.0 x4 lanes, as PCIe Gen 4.0 throughput is sufficient to meet drive bandwidth requirements and keep server design costs optimal. The server is equipped with two 8 TB NVMe drives for a total of 16 TB available storage. We opted for two 8 TB drives instead of four 4 TB drives because the dual 8 TB configuration already provides sufficient I/O bandwidth for all Cloudflare workloads that run on each server.</p>
<div><table><thead>
  <tr>
    <th><span>Sequential Read (MB/s) :</span></th>
    <th><span>6,700</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Sequential Write (MB/s) :</span></td>
    <td><span>4,000</span></td>
  </tr>
  <tr>
    <td><span>Random Read IOPS:</span></td>
    <td><span>1,000,000</span></td>
  </tr>
  <tr>
    <td><span>Random Write IOPS: </span></td>
    <td><span>200,000</span></td>
  </tr>
  <tr>
    <td><span>Endurance</span></td>
    <td><span>1 DWPD</span></td>
  </tr>
  <tr>
    <td><span>PCIe GEN4 x4 lane throughput</span></td>
    <td><span>7880 MB/s</span></td>
  </tr>
</tbody></table></div><p><sub><i>Storage devices performance specification</i></sub></p>
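<p>A quick sketch of why a PCIe 4.0 x4 link is sufficient for these drives, using the specification figures from the table above:</p><pre><code># Drive sequential bandwidth vs. usable PCIe 4.0 x4 link throughput.
link_mbs = 7880       # PCIe Gen4 x4 throughput from the table above, MB/s
seq_read_mbs = 6700   # drive sequential read, MB/s
seq_write_mbs = 4000  # drive sequential write, MB/s

print(f"Read headroom:  {link_mbs / seq_read_mbs:.2f}x")   # ~1.18x
print(f"Write headroom: {link_mbs / seq_write_mbs:.2f}x")  # ~1.97x
</code></pre>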
    <div>
      <h3>Network</h3>
      <a href="#network">
        
      </a>
    </div>
<p>Cloudflare servers and top-of-rack (ToR) network equipment operate at <a href="https://en.wikipedia.org/wiki/25_Gigabit_Ethernet"><u>25 GbE</u></a> speeds. In Gen 12, we utilized a <a href="https://www.opencompute.org/wiki/Server/DC-MHS"><u>DC-MHS</u></a> motherboard-inspired design, and upgraded from an <a href="https://drive.google.com/file/d/1VGAtABAKU9fq3KfClYhFOgGFN3oe63Uw/view?usp=sharing"><u>OCP 2.0 form factor</u></a> to an <a href="https://drive.google.com/file/d/1U3oEGiSWfupG4SnIdPuJ_8Nte2lJRqTN/view?usp=sharing"><u>OCP 3.0 form factor</u></a>, which provides tool-less serviceability of the NIC. The OCP 3.0 form factor also occupies less space in the 2U server compared to PCIe-attached NICs, which improves airflow and frees up space for other application-specific PCIe cards, such as GPUs.</p><p>Cloudflare has been using the Mellanox CX4-Lx EN dual port 25 GbE NIC since our <a href="https://blog.cloudflare.com/a-tour-inside-cloudflares-g9-servers/"><u>Gen 9 servers in 2018</u></a>. Even though the NIC has served us well over the years, we were single-sourced. During the pandemic, we were faced with supply constraints and extremely long lead times. The team scrambled to qualify the Broadcom M225P dual port 25 GbE NIC as our second-sourced NIC in 2022, ensuring we could continue to turn up servers to serve customer demand. With the lessons learned from single-sourcing the Gen 11 NIC, we are now dual-sourcing and have chosen the Intel Ethernet Network Adapter E810 and NVIDIA Mellanox ConnectX-6 Lx to support Gen 12. These two NICs are compliant with the <a href="https://www.opencompute.org/wiki/Server/NIC"><u>OCP 3.0 specification</u></a> and offer more MSI-X queues that can then be mapped to the increased core count on the AMD EPYC 9684X. The Intel Ethernet Network Adapter comes with an additional advantage, offering full Generic Segmentation Offload (GSO) support including VLAN-tagged encapsulated traffic, whereas many vendors either only support <a href="https://netdevconf.info/1.2/papers/LCO-GSO-Partial-TSO-MangleID.pdf"><u>Partial GSO</u></a> or do not support it at all today. With full GSO support, the kernel spends noticeably less time in softirq segmenting packets, and servers with Intel E810 NICs process approximately 2% more requests per second.</p>
    <div>
      <h3>Improved security with DC-SCM: Project Argus</h3>
      <a href="#improved-security-with-dc-scm-project-argus">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ur3YccqqXckIL6oWKd6Lq/5352252ff8e5c1fb15eb02d1572a0689/BLOG-2116_3.png" />
</figure><p><sup><i>DC-SCM in Gen 12 server (Project Argus)</i></sup></p><p>Gen 12 servers are integrated with <a href="https://blog.cloudflare.com/introducing-the-project-argus-datacenter-ready-secure-control-module-design-specification/"><u>Project Argus</u></a>, one of the industry’s first implementations of <a href="https://drive.google.com/file/d/13BxuseSrKo647hjIXjp087ei8l5QQVb0/view"><u>Data Center Secure Control Module 2.0 (DC-SCM 2.0)</u></a>. DC-SCM 2.0 decouples server management and security functions from the motherboard. The baseboard management controller (BMC), hardware root of trust (HRoT), trusted platform module (TPM), and dual BMC/BIOS flash chips are all installed on the DC-SCM. </p><p>On our Gen X and Gen 11 servers, Cloudflare moved our secure boot trust anchor from the system Basic Input/Output System (BIOS) or the Unified Extensible Firmware Interface (UEFI) firmware to hardware-rooted boot integrity — <a href="https://blog.cloudflare.com/anchoring-trust-a-hardware-secure-boot-story/"><u>AMD’s implementation of Platform Secure Boot (PSB)</u></a> or <a href="https://blog.cloudflare.com/armed-to-boot/"><u>Ampere’s implementation of Single Domain Secure Boot</u></a>. These solutions helped secure Cloudflare infrastructure from BIOS / UEFI firmware attacks. However, we remained vulnerable to out-of-band attacks that compromise the BMC firmware. The BMC is a microcontroller that provides out-of-band monitoring and management capabilities for the system. When it is compromised, attackers can, for example, read processor console logs accessible to the BMC and control server power states. On Gen 12, the HRoT on the DC-SCM serves as the trust store for cryptographic keys and is responsible for authenticating the BIOS/UEFI firmware (independent of CPU vendor) and the BMC firmware during the secure boot process.</p><p>In addition, the DC-SCM carries additional flash storage devices that store backup BIOS/UEFI and BMC firmware, allowing rapid recovery when corrupted or malicious firmware is programmed and providing resilience to flash chip failure due to aging.</p><p>These updates make our Gen 12 server more secure and more resilient to firmware attacks.</p>
    <div>
      <h3>Power</h3>
      <a href="#power">
        
      </a>
    </div>
<p>A Gen 12 server consumes 600 watts at a typical ambient temperature of 25°C. Even though this is a 50% increase from the 400 watts consumed by the Gen 11 server, as mentioned above in the CPU section, this is a relatively small price to pay for a 145% increase in performance. We’ve paired the server up with dual 800W common redundant power supplies (CRPS) with 80 PLUS Titanium grade efficiency. Both power supply units (PSU) operate actively with distributed power and current. The units are hot-pluggable, allowing the server to operate with redundancy and maximize uptime.</p><p><a href="https://www.clearesult.com/80plus/program-details"><u>80 PLUS</u></a> is a PSU efficiency certification program. A Titanium grade PSU is 2% more efficient than a Platinum grade PSU at typical operating loads between 25% and 50%. 2% may not sound like a lot, but considering the size of the Cloudflare fleet with servers deployed worldwide, a 2% saving over the lifetime of all Gen 12 deployments is a reduction of more than 7 GWh, <a href="https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator#results"><u>equivalent to carbon sequestered by more than 3400 acres of U.S. forests in one year</u></a>. This upgrade also means our Gen 12 server complies with <a href="https://www.unicomengineering.com/blog/eu-lot-9-update-the-coming-server-power-migration/"><u>EU Lot9 requirements</u></a> and can be deployed in the EU region.</p>
<div><table><thead>
  <tr>
    <th><span>80 PLUS certification</span></th>
    <th><span>10%</span></th>
    <th><span>20%</span></th>
    <th><span>50%</span></th>
    <th><span>100%</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>80 PLUS Platinum</span></td>
    <td><span>-</span></td>
    <td><span>92%</span></td>
    <td><span>94%</span></td>
    <td><span>90%</span></td>
  </tr>
  <tr>
    <td><span>80 PLUS Titanium</span></td>
    <td><span>90%</span></td>
    <td><span>94%</span></td>
    <td><span>96%</span></td>
    <td><span>91%</span></td>
  </tr>
</tbody></table></div>
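<p>As a rough, illustrative consistency check on the 7 GWh figure (the fleet size and deployment lifetime used below are hypothetical assumptions, not published numbers):</p><pre><code># Back-of-envelope energy savings from a ~2% PSU efficiency improvement.
# Server count and lifetime below are illustrative assumptions only.
typical_power_w = 600   # Gen 12 typical system power
efficiency_gain = 0.02  # Titanium vs. Platinum at typical load
hours_per_year = 8760
lifetime_years = 5      # assumed deployment lifetime
servers = 15000         # hypothetical Gen 12 fleet size

saved_gwh = (typical_power_w * efficiency_gain * hours_per_year
             * lifetime_years * servers / 1e9)
print(f"~{saved_gwh:.1f} GWh saved over the deployment lifetime")  # on the order of 7 GWh
</code></pre>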
    <div>
      <h3>Drop-in GPU support</h3>
      <a href="#drop-in-gpu-support">
        
      </a>
    </div>
<p>Demand for machine learning and AI workloads exploded in 2023, and Cloudflare <a href="https://blog.cloudflare.com/workers-ai/"><u>introduced Workers AI </u></a>to serve the needs of our customers. Cloudflare retrofitted or deployed GPUs worldwide in a portion of our Gen 11 server fleet to support the growth of Workers AI. Our Gen 12 server is also designed to accommodate the addition of more powerful GPUs. This gives Cloudflare the flexibility to support Workers AI in all regions of the world, and to strategically place GPUs in regions to reduce inference latency for our customers. With this design, the server can run Cloudflare’s full software stack. During times when GPUs see lower utilization, the server continues to serve general web requests and remains productive.</p><p>The motherboard’s electrical design supports up to two PCIe add-in cards, and the power distribution board is sized to support an additional 400W of power. The mechanics are sized to support either a single FHFL (full height, full length) double-width GPU PCIe card, or two FHFL single-width GPU PCIe cards. The thermal solution, including component placement, fans, and air duct design, is sized to support adding GPUs with a TDP of up to 400W.</p>
    <div>
      <h3>Looking to the future</h3>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>Gen 12 Servers are currently deployed and live in multiple Cloudflare data centers worldwide, and already process millions of requests per second. Cloudflare’s EPYC journey has not ended — the 5th-gen AMD EPYC CPUs (code name “Turin”) are already available for testing, and we are very excited to start the architecture planning and design discussion for the Gen 13 server. <a href="https://www.cloudflare.com/careers/jobs/"><u>Come join us</u></a> at Cloudflare to help build a better Internet!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Hardware]]></category>
            <guid isPermaLink="false">sdvPBDBwhcEcrODVeOE7A</guid>
            <dc:creator>JQ Lau</dc:creator>
            <dc:creator>Ma Xiong</dc:creator>
            <dc:creator>Syona Sarma</dc:creator>
        </item>
    </channel>
</rss>