
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 10 Apr 2026 19:30:04 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Inside Gen 13: how we built our most powerful server yet]]></title>
            <link>https://blog.cloudflare.com/gen13-config/</link>
            <pubDate>Mon, 23 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's Gen 13 servers introduce AMD EPYC™ Turin 9965 processors and a transition to 100 GbE networking to meet growing traffic demands. In this technical deep dive, we explain the engineering rationale behind each major component selection. ]]></description>
            <content:encoded><![CDATA[ <p>A few months ago, Cloudflare announced <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>the transition to FL2</u></a>, our Rust-based rewrite of Cloudflare's core request handling layer. This transition accelerates our ability to help build a better Internet for everyone. Alongside the software stack migration, Cloudflare has refreshed its server hardware design with improved capabilities and better efficiency to serve the evolving demands of our network and software stack. Gen 13 is designed with a 192-core AMD EPYC™ Turin 9965 processor, 768 GB of DDR5-6400 memory, 24 TB of PCIe 5.0 NVMe storage, and a dual-port 100 GbE network interface card.</p><p>Gen 13 delivers:</p><ul><li><p>Up to 2x throughput compared to Gen 12 while staying within latency SLA</p></li><li><p>Up to 50% improvement in performance-per-watt efficiency, reducing data center expansion costs</p></li><li><p>Up to 60% higher throughput per rack while keeping the rack power budget constant</p></li><li><p>2x memory capacity, 1.5x storage capacity, 4x network bandwidth</p></li><li><p>PCIe encryption hardware support in addition to memory encryption</p></li><li><p>Improved support for thermally demanding, powerful drop-in PCIe accelerators</p></li></ul><p>This blog post covers the engineering rationale behind each major component selection: what we evaluated, what we chose, and why.</p><table><tr><td><p>Generation</p></td><td><p>Gen 13 Compute</p></td><td><p>Previous Gen 12 Compute</p></td></tr><tr><td><p>Form Factor</p></td><td><p>2U1N, Single socket</p></td><td><p>2U1N, Single socket</p></td></tr><tr><td><p>Processor</p></td><td><p>AMD EPYC™ 9965 
Turin 192-Core Processor</p></td><td><p>AMD EPYC™ 9684X 
Genoa-X 96-Core Processor</p></td></tr><tr><td><p>Memory</p></td><td><p>768GB of DDR5-6400 x12 memory channel</p></td><td><p>384GB of DDR5-4800 x12 memory channel</p></td></tr><tr><td><p>Storage</p></td><td><p>x3 E1.S NVMe</p><p>
</p><p> Samsung PM9D3a 7.68TB / 
Micron 7600 Pro 7.68TB</p></td><td><p>x2 E1.S NVMe </p><p>
</p><p>Samsung PM9A3 7.68TB / 
Micron 7450 Pro 7.68TB</p></td></tr><tr><td><p>Network</p></td><td><p>Dual 100 GbE OCP 3.0 </p><p>
</p><p>Intel Ethernet Network Adapter E830-CDA2 /
NVIDIA Mellanox ConnectX-6 Dx</p></td><td><p>Dual 25 GbE OCP 3.0</p><p>
</p><p>Intel Ethernet Network Adapter E810-XXVDA2 / 
NVIDIA Mellanox ConnectX-6 Lx</p></td></tr><tr><td><p>System Management</p></td><td><p>DC-SCM 2.0 ASPEED AST2600 (BMC) + AST1060 (HRoT)</p></td><td><p>DC-SCM 2.0 ASPEED AST2600 (BMC) + AST1060 (HRoT)</p></td></tr><tr><td><p>Power Supply</p></td><td><p>1300W, Titanium Grade</p></td><td><p>800W, Titanium Grade</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Gawj2GP8s2CCZCWwNgBiB/587b0ed5ef65cf95cf178e5457150b6a/image3.png" />
          </figure><p><i><sup>Figure: Gen 13 server</sup></i></p>
    <div>
      <h2>CPU</h2>
      <a href="#cpu">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>AMD EPYC™ 9684X Genoa-X 96-Core (400W TDP, 1152 MB L3 Cache)</p></td></tr><tr><td><p>Gen 13</p></td><td><p>AMD EPYC™ 9965 Turin Dense 192-Core (500W TDP, 384 MB L3 Cache)</p></td></tr></table><p>During the design phase, we evaluated several 5th generation AMD EPYC™ Processors, code-named Turin, in Cloudflare’s hardware lab: AMD Turin 9755, AMD Turin 9845, and AMD Turin 9965. The table below summarizes the differences in <a href="https://www.amd.com/content/dam/amd/en/documents/epyc-business-docs/datasheets/amd-epyc-9005-series-processor-datasheet.pdf"><u>specifications</u></a> of the candidates for Gen 13 servers against the AMD Genoa-X 9684X used in our <a href="https://blog.cloudflare.com/gen-12-servers/"><u>Gen 12 servers</u></a>. Notably, all three candidates offer increases in core count but with smaller L3 cache per core. However, with the <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>migration to FL2</u></a>, the new workloads are <a href="https://blog.cloudflare.com/gen13-launch/"><u>less dependent on L3 cache and scale up well with the increased core count to achieve up to a 100% increase in throughput</u></a>.</p><p>The three CPU candidates are designed to target different use cases: AMD Turin 9755 offers superior per-core performance, AMD Turin 9965 trades per-core performance for efficiency, and AMD Turin 9845 trades core count for lower socket power. We evaluated all three CPUs in our production environment.</p><table><tr><td><p>CPU Model</p></td><td><p>AMD Genoa-X 9684X</p></td><td><p>AMD Turin 9755</p></td><td><p>AMD Turin 9845</p></td><td><p>AMD Turin 9965</p></td></tr><tr><td><p>For server platform</p></td><td><p>Gen 12</p></td><td><p>Gen 13 candidate</p></td><td><p>Gen 13 candidate</p></td><td><p>Gen 13 candidate</p></td></tr><tr><td><p># of CPU Cores</p></td><td><p>96</p></td><td><p>128</p></td><td><p>160</p></td><td><p>192</p></td></tr><tr><td><p># of Threads</p></td><td><p>192</p></td><td><p>256</p></td><td><p>320</p></td><td><p>384</p></td></tr><tr><td><p>Base Clock</p></td><td><p>2.4 GHz</p></td><td><p>2.7 GHz</p></td><td><p>2.1 GHz</p></td><td><p>2.25 GHz</p></td></tr><tr><td><p>Max Boost Clock</p></td><td><p>3.7 GHz</p></td><td><p>4.1 GHz</p></td><td><p>3.7 GHz</p></td><td><p>3.7 GHz</p></td></tr><tr><td><p>All Core Boost Clock</p></td><td><p>3.42 GHz</p></td><td><p>4.1 GHz</p></td><td><p>3.25 GHz</p></td><td><p>3.35 GHz</p></td></tr><tr><td><p>Total L3 Cache</p></td><td><p>1152 MB</p></td><td><p>512 MB</p></td><td><p>320 MB</p></td><td><p>384 MB</p></td></tr><tr><td><p>L3 cache per core</p></td><td><p>12 MB / core</p></td><td><p>4 MB / core</p></td><td><p>2 MB / core</p></td><td><p>2 MB / core</p></td></tr><tr><td><p>Maximum configurable TDP</p></td><td><p>400W</p></td><td><p>500W</p></td><td><p>390W</p></td><td><p>500W</p></td></tr></table>
    <div>
      <h3>Why AMD Turin 9965?</h3>
      <a href="#why-amd-turin-9965">
        
      </a>
    </div>
    <p>First, <b>FL2 ended the L3 cache crunch</b>.</p><p>L3 cache is the large, last-level cache shared among all CPU cores on the same compute die to store frequently used data. It bridges the gap between slow main memory external to the CPU, and the fast but smaller L1 and L2 cache on the CPU, reducing the latency for the CPU to access data.</p><p>Some may notice that the 9965 has only 2 MB of L3 cache per core, an 83.3% reduction from the 12 MB per core on Gen 12’s Genoa-X 9684X. Why trade away the very cache advantage that gave Gen 12 its edge? The answer lies in how our workloads have evolved.</p><p>Cloudflare has <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>migrated from FL1 to FL2</u></a>, a complete rewrite of our request handling layer in Rust. With the new software stack, Cloudflare’s request processing pipeline has become significantly less dependent on large L3 cache. FL2 workloads <a href="https://blog.cloudflare.com/gen13-launch/"><u>scale nearly linearly with core count</u></a>, and the 9965’s 192 cores provide a 2x increase in hardware threads over Gen 12.</p><p>Second, <b>performance per total cost of ownership (TCO)</b>. During production evaluation, the 9965’s 192 cores delivered the highest aggregate requests per second of the three candidates, and its performance-per-watt scaled favorably at 500W TDP, yielding superior rack-level TCO.</p><table><tr><td><p>
</p></td><td><p><b>Gen 12 </b></p></td><td><p><b>Gen 13 </b></p></td></tr><tr><td><p>Processor</p></td><td><p>AMD EPYC™ 4th Gen Genoa-X 9684X</p></td><td><p>AMD EPYC™ 5th Gen Turin 9965</p></td></tr><tr><td><p>Core count</p></td><td><p>96C/192T</p></td><td><p>192C/384T</p></td></tr><tr><td><p>FL throughput</p></td><td><p>Baseline</p></td><td><p>Up to +100%</p></td></tr><tr><td><p>Performance per watt</p></td><td><p>Baseline</p></td><td><p>Up to +50%</p></td></tr></table><p>Third, <b>operational simplicity</b>. Our operational teams have a strong preference for fewer, higher-density servers. Managing a fleet of 192-core machines means fewer nodes to provision, patch, and monitor per unit of compute delivered. This directly reduces operational overhead across our global network.</p><p>Finally, the platform is <b>forward compatible</b>. The AMD processor architecture supports DDR5-6400, PCIe Gen 5.0, and CXL 2.0 Type 3 memory across all SKUs. The AMD Turin 9965 has the highest number of high-performing cores per socket in the industry, maximizing compute density per socket and keeping the platform competitive and relevant for years to come. By moving from the AMD Genoa-X 9684X to the AMD Turin 9965, we also get longer security support from AMD, extending the useful life of Gen 13 servers before they become obsolete and need to be refreshed.</p>
    <div>
      <h2>Memory</h2>
      <a href="#memory">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>12x 32GB DDR5-4800 2Rx8 (384 GB total, 4 GB/core)</p></td></tr><tr><td><p>Gen 13</p></td><td><p>12x 64GB DDR5-6400 2Rx4 (768 GB total, 4 GB/core)</p></td></tr></table><p>Because the AMD Turin processor has twice the core count of the previous generation, it demands more memory resources, both in capacity and in bandwidth, to deliver throughput gains.</p>
    <div>
      <h3>Maximizing bandwidth with 12 channels</h3>
      <a href="#maximizing-bandwidth-with-12-channels">
        
      </a>
    </div>
    <p>The chosen AMD EPYC™ 9965 CPU supports twelve memory channels, and for Gen 13, we are populating every single one of them. We’ve selected 64 GB DDR5-6400 ECC RDIMMs in a “one DIMM per channel” (1DPC) configuration.</p><p>This setup provides 614 GB/s of peak memory bandwidth per socket, a 33.3% increase compared to our Gen 12 server platform. By utilizing all 12 channels, we ensure that the CPU is never “starved” for data, even during the most memory-intensive parallel workloads.</p><p>Populating all twelve channels in a balanced configuration — equal capacity per channel, with no mixed configurations — is common best practice. This matters operationally: AMD Turin processors interleave across all memory channels with the same DIMM type, same memory capacity and same rank configuration. Interleaving increases memory bandwidth by spreading contiguous memory access across all memory channels in the interleave set instead of sending all memory access to a single or a small subset of memory channels. </p>
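    <p>As a sanity check, the headline bandwidth numbers fall out of simple arithmetic: each DDR5 channel moves 8 bytes per transfer, so peak bandwidth is transfer rate × 8 bytes × channel count. A minimal sketch in Rust, using only the values quoted in this post:</p><pre><code>// Peak DRAM bandwidth from channel count and transfer rate.
fn peak_gb_per_s(mega_transfers: f64, channels: f64) -> f64 {
    let bus_bytes = 8.0; // 64-bit data bus per channel (ECC bits excluded)
    mega_transfers * 1e6 * bus_bytes * channels / 1e9
}

fn main() {
    let gen12 = peak_gb_per_s(4800.0, 12.0); // DDR5-4800, 12 channels
    let gen13 = peak_gb_per_s(6400.0, 12.0); // DDR5-6400, 12 channels
    println!("Gen 12: {gen12:.1} GB/s, Gen 13: {gen13:.1} GB/s"); // 460.8 vs 614.4
    println!("uplift: {:.1}%", (gen13 / gen12 - 1.0) * 100.0);    // 33.3%
}
</code></pre>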
    <div>
      <h3>The 4 GB per core “sweet spot”</h3>
      <a href="#the-4-gb-per-core-sweet-spot">
        
      </a>
    </div>
    <p>Our Gen 12 servers are configured with 4GB per core. We revisited that decision as we designed Gen 13.</p><p>Cloudflare launches a lot of new products and services every month, and each new product or service demands an incremental amount of memory capacity. These demands accumulate over time and can create memory pressure if memory capacity is not sized appropriately.</p><p>Our initial requirement called for a memory-to-core ratio between 4 GB and 6 GB per core. With 192 cores on the AMD Turin 9965, that translates to a range of 768 GB to 1152 GB. Note that at higher capacities, DIMM module capacity granularity is typically 16GB increments. With 12 channels in a 1DPC configuration, our options are 12x 48GB (576 GB), 12x 64GB (768 GB), or 12x 96GB (1152 GB).</p><ul><li><p>12x 48GB = 576 GB, or 1.5 GB/thread. The memory capacity of this configuration is too low; it would starve memory-hungry workloads and violate the lower bound.</p></li><li><p>12x 96GB = 1152 GB, or 3.0 GB/thread. This would be a 50% capacity increase per core, and would also result in higher power consumption and a substantial increase in cost, especially in current market conditions where memory prices are 10x what they were a year ago.</p></li><li><p>12x 64GB = 768 GB, or 2.0 GB/thread (4 GB/core). This configuration is consistent with our Gen 12 memory-to-core ratio and represents a 2x increase in memory capacity per server. Keeping the configuration at 4 GB per core provides sufficient capacity for workloads that scale with core count, like our primary workload FL, and provides sufficient headroom for future growth without overprovisioning.</p></li></ul><p><a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>FL2 uses memory more efficiently</u></a> than FL1 did: our internal measurements show FL2 uses less than half the CPU of FL1, and far less than half the memory. The capacity freed up by the software stack migration provides ample headroom to support Cloudflare growth for the next few years.</p><p>The decision: 12x 64GB for 768 GB total. This maintains the proven 4 GB/core ratio, provides a 2x total capacity increase over Gen 12, and stays within the DIMM cost curve sweet spot.</p>
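    <p>The three candidate configurations above reduce to a short sizing loop. A minimal sketch of the arithmetic behind those options:</p><pre><code>// Score 1DPC capacity options against the 4 to 6 GB/core requirement.
fn main() {
    let (channels, cores) = (12.0, 192.0);
    for dimm_gb in [48.0, 64.0, 96.0] {
        let total = channels * dimm_gb;
        println!(
            "12x {dimm_gb} GB = {total} GB -> {:.1} GB/core, {:.1} GB/thread",
            total / cores,
            total / (cores * 2.0) // SMT: 2 threads per core
        );
    }
    // 576 GB  -> 3.0 GB/core (below the 4 GB floor)
    // 768 GB  -> 4.0 GB/core (chosen: matches Gen 12's ratio)
    // 1152 GB -> 6.0 GB/core (top of range; costly at current DRAM prices)
}
</code></pre>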
    <div>
      <h3>Efficiency through dual rank</h3>
      <a href="#efficiency-through-dual-rank">
        
      </a>
    </div>
    <p>In Gen 12, we demonstrated that dual-rank DIMMs provide measurably higher memory throughput than single-rank modules, with advantages of up to 17.8% at a 1:1 read-write ratio. Dual-rank DIMMs are faster because they allow the memory controller to access one rank while the other is refreshing. That same principle carries forward here.</p><p>Our requirement also calls for approximately 1 GB/s of memory bandwidth per hardware thread. With 614 GB/s of peak bandwidth across 384 threads, we deliver 1.6 GB/s per thread, comfortably exceeding the minimum. Production analysis has shown that Cloudflare workloads are not memory-bandwidth-bound, so we bank the headroom as margin for future workload growth.</p><p>By opting for 2Rx4 DDR5 RDIMMs at the maximum supported 6400 MT/s, we ensure we get the lowest latency and best performance from our Gen 13 platform memory configuration.</p>
    <div>
      <h2>Storage</h2>
      <a href="#storage">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>x2 E1.S NVMe PCIe 4.0, 16 TB total</p><p>Samsung PM9A3 7.68TB</p><p>Micron 7450 Pro 7.68TB</p></td></tr><tr><td><p>Gen 13</p></td><td><p>x3 E1.S NVMe PCIe 5.0, 24 TB total</p><p>Samsung PM9D3a 7.68TB</p><p>Micron 7600 Pro 7.68TB</p><p>+10x U.2 NVMe PCIe 5.0 option</p></td></tr></table><p>Our storage architecture underwent a transformation in Gen 12 when we pivoted from M.2 to EDSFF E1.S. For Gen 13, we are increasing storage capacity and bandwidth to align with the latest technology. We have also added a front drive bay with the flexibility to add up to 10x U.2 drives to keep pace with Cloudflare storage product growth.</p>
    <div>
      <h3>The move to PCIe 5.0</h3>
      <a href="#the-move-to-pcie-5-0">
        
      </a>
    </div>
    <p>Gen 13 is configured with PCIe Gen 5.0 NVMe drives. While Gen 4.0 served us well, the move to Gen 5.0 ensures that our storage subsystem can serve data with improved latency and keep up with the increased storage bandwidth demand from the new processor.</p>
    <div>
      <h3>16 TB to 24 TB</h3>
      <a href="#16-tb-to-24-tb">
        
      </a>
    </div>
    <p>Beyond the speed increase, we are physically expanding the array from two to three NVMe drives. Our Gen 12 server platform was designed with four E1.S storage drive slots, but only two slots were populated with 8TB drives. The Gen 13 server platform uses the same design with four E1.S storage drive slots available, but with three slots populated with 8TB drives. Why add a third drive? It increases our storage capacity per server from 16TB to 24TB, expanding our global storage capacity to maintain and improve CDN cache performance. It also supports growth projections for Durable Objects, Containers, and Quicksilver.</p>
    <div>
      <h3>Front drive bay to support additional drives</h3>
      <a href="#front-drive-bay-to-support-additional-drives">
        
      </a>
    </div>
    <p>For Gen 13, the chassis is designed with a front drive bay that can support up to ten U.2 PCIe Gen 5.0 NVMe drives. The front drive bay provides the option for Cloudflare to use the same chassis across compute and storage platforms, as well as the flexibility to convert a compute SKU to a storage SKU when needed. </p>
    <div>
      <h3>Endurance and reliability</h3>
      <a href="#endurance-and-reliability">
        
      </a>
    </div>
    <p>We designed our servers to have a 5-year operational life, and we require the storage drives to sustain an endurance of 1 DWPD (Drive Writes Per Day) over that full lifespan.</p><p>Both the Samsung PM9D3a and Micron 7600 Pro meet the 1 DWPD specification with a hardware over-provisioning (OP) of approximately 7%. If future workload profiles demand higher endurance, we have the option to hold back additional user capacity to increase effective OP.</p>
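    <p>To put 1 DWPD in concrete terms, here is a minimal sketch of the lifetime write budget it implies for the 7.68 TB drives above:</p><pre><code>// Total bytes written allowed by an endurance rating of 1 DWPD over 5 years.
fn main() {
    let capacity_tb = 7.68; // user capacity per drive
    let dwpd = 1.0;         // full drive writes per day
    let days = 5.0 * 365.0; // 5-year operational life
    let lifetime_pb = capacity_tb * dwpd * days / 1000.0;
    println!("~{lifetime_pb:.0} PB written per drive over the server's life"); // ~14 PB
}
</code></pre>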
    <div>
      <h3>NVMe 2.0 and OCP NVMe 2.0 compliance</h3>
      <a href="#nvme-2-0-and-ocp-nvme-2-0-compliance">
        
      </a>
    </div>
    <p>Both the Samsung PM9D3a and Micron 7600 adopt the NVMe 2.0 specification (up from NVMe 1.4) and the OCP NVMe Cloud SSD Specification 2.0. Key improvements include Zoned Namespaces (ZNS) for better write amplification management, Simple Copy Command for intra-device data movement without crossing the PCIe bus, and enhanced Command and Feature Lockdown for tighter security controls. The OCP 2.0 spec also adds deeper telemetry and debug capabilities purpose-built for datacenter operations, which aligns with our emphasis on fleet-wide manageability.</p>
    <div>
      <h3>Thermal efficiency</h3>
      <a href="#thermal-efficiency">
        
      </a>
    </div>
    <p>The storage drives will continue to be in the E1.S 15mm form factor. Its high-surface-area design is essential for cooling these new Gen 5.0 controllers, which can pull upwards of 25W under sustained heavy I/O. The 2U chassis provides ample airflow over the E1.S drives as well as U.2 drive bays, a design advantage we validated in Gen 12 when we made the decision to move from 1U to 2U.</p>
    <div>
      <h2>Network</h2>
      <a href="#network">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>Dual 25 GbE port OCP 3.0 NIC </p><p>Intel E810-XXVDA2</p><p>NVIDIA Mellanox ConnectX-6 Lx</p></td></tr><tr><td><p>Gen 13</p></td><td><p>Dual 100 GbE port OCP 3.0 NIC</p><p>Intel E830-CDA2</p><p>NVIDIA Mellanox ConnectX-6 Dx</p></td></tr></table><p>For more than eight years, dual 25 GbE has been the backbone of our fleet. <a href="https://blog.cloudflare.com/a-tour-inside-cloudflares-g9-servers/"><u>Since 2018</u></a> it has served us well, but as CPUs have improved to serve more requests and our products have scaled, we have officially hit the wall. For Gen 13, we are quadrupling our per-port bandwidth.</p>
    <div>
      <h3>Why 100 GbE and why now?</h3>
      <a href="#why-100-gbe-and-why-now">
        
      </a>
    </div>
    <p>Network Interface Card (NIC) bandwidth must keep pace with compute performance growth. With 192 modern cores, our 25 GbE links would become a measurable bottleneck. Production data collected over a week from our co-locations worldwide showed that, on Gen 12, P95 bandwidth per port was consistently &gt;50% of available bandwidth. Since throughput doubles per server on Gen 13, we are at risk of saturating the NIC bandwidth.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2lxP5Vy6y6CzCk1rE9FKVU/064d9e5392a08e92b38bca637d053573/image4.png" />
          </figure><p><sup><i>Figure: on Gen 12, P95 bandwidth per port is consistently &gt;50% of available bandwidth</i></sup></p><p>The decision to go to 100 GbE rather than 50 GbE was driven by economics: 50 GbE transceiver volumes remain low industry-wide, making them a poor supply chain bet. Dual 100 GbE ports also give us 200 Gb/s of aggregate bandwidth per server, future-proofing against the next several years of traffic growth.</p>
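    <p>A rough projection shows why staying at 25 GbE was untenable, assuming (conservatively) that Gen 13 simply doubles Gen 12's per-port demand:</p><pre><code>// Project Gen 13 P95 per-port demand from the Gen 12 measurement above.
fn main() {
    let port_25g = 25.0;
    let gen12_p95 = 0.5 * port_25g;  // ">50% of available bandwidth"
    let gen13_p95 = 2.0 * gen12_p95; // ~2x throughput per server
    println!("projected P95: {gen13_p95} Gb/s on a {port_25g} GbE port (saturated)");
    println!("same demand on 100 GbE: {:.0}% utilization", gen13_p95 / 100.0 * 100.0); // 25%
}
</code></pre>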
    <div>
      <h3>Hardware choices and compatibility</h3>
      <a href="#hardware-choices-and-compatibility">
        
      </a>
    </div>
    <p>We are maintaining our dual-vendor strategy to ensure supply chain resilience, a lesson hard-learned during the pandemic when single-sourcing the Gen 11 NIC left us scrambling.</p><p>Both NICs are compliant with <a href="https://www.servethehome.com/ocp-nic-3-0-form-factors-quick-guide-intel-broadcom-nvidia-meta-inspur-dell-emc-hpe-lenovo-gigabyte-supermicro/"><u>OCP 3.0 SFF/TSFF</u></a> form factor with the integrated pull tab, maintaining chassis commonality with Gen 12 and ensuring field technicians need no new tools or training for swaps.</p>
    <div>
      <h3>PCIe Allocation</h3>
      <a href="#pcie-allocation">
        
      </a>
    </div>
    <p>The OCP 3.0 NIC slot is allocated PCIe 4.0 x16 lanes on the motherboard, providing 256 Gb/s of raw bandwidth in each direction, more than enough for dual 100 GbE (200 Gb/s aggregate) with room to spare.</p>
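    <p>As a sketch of that budget: PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, so the usable bandwidth of an x16 slot works out to roughly 252 Gb/s in each direction:</p><pre><code>// Usable PCIe 4.0 x16 bandwidth vs. dual 100 GbE demand.
fn main() {
    let lanes = 16.0;
    let gt_per_s = 16.0;          // PCIe 4.0 line rate per lane
    let encoding = 128.0 / 130.0; // 128b/130b encoding overhead
    let usable = lanes * gt_per_s * encoding; // per direction
    println!("~{usable:.0} Gb/s usable per direction"); // ~252 Gb/s
    println!("dual 100 GbE demand: 200 Gb/s aggregate"); // comfortable headroom
}
</code></pre>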
    <div>
      <h2>Management</h2>
      <a href="#management">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p><a href="https://blog.cloudflare.com/introducing-the-project-argus-datacenter-ready-secure-control-module-design-specification/"><u>Project Argus</u></a> Data Center Secure Control Module 2.0</p></td></tr><tr><td><p>Gen 13</p></td><td><p><a href="https://blog.cloudflare.com/introducing-the-project-argus-datacenter-ready-secure-control-module-design-specification/"><u>Project Argus</u></a> Data Center Secure Control Module 2.0</p><p>PCIe encryption</p></td></tr></table><p>We are maintaining the architectural shift, introduced in Gen 12, of separating management and security-related components from the motherboard onto the <a href="https://blog.cloudflare.com/introducing-the-project-argus-datacenter-ready-secure-control-module-design-specification/"><u>Project Argus</u></a> Data Center Secure Control Module 2.0.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6F3XH0uQvBry9LJkZlVZOi/42f507d0d46d276db1e3724b21ea49dc/image1.png" />
          </figure><p><sup><i>Figure: Project Argus DC-SCM 2.0</i></sup></p>
    <div>
      <h3>Continuity with DC-SCM 2.0</h3>
      <a href="#continuity-with-dc-scm-2-0">
        
      </a>
    </div>
    <p>We are carrying forward the Data Center Secure Control Module 2.0 (DC-SCM 2.0) standard. By decoupling management and security functions from the motherboard, we ensure that the “brains” of the server’s security stay modular and protected.</p><p>The DC-SCM module houses our most critical components:</p><ul><li><p>Basic Input/Output System (BIOS)</p></li><li><p>Baseboard Management Controller (BMC)</p></li><li><p>Hardware Root of Trust (HRoT) and TPM (Infineon SLB 9672)</p></li><li><p>Dual BMC/BIOS flash chips for redundancy</p></li></ul>
    <div>
      <h3>Why we are staying the course with DC-SCM 2.0</h3>
      <a href="#why-we-are-staying-the-course-with-dc-scm-2-0">
        
      </a>
    </div>
    <p>The decision to keep this architecture for Gen 13 is driven by the proven security gains we saw in the previous generation. By offloading these functions to a dedicated module, we maintain:</p><ul><li><p><b>Rapid recovery</b>: Dual image redundancy allows for near-instant restoration of BIOS/UEFI and BMC firmware if accidental corruption or a malicious update is detected.</p></li><li><p><b>Physical resilience</b>: The Gen 13 chassis also moves the intrusion detection mechanism further from the flat edge of the chassis, making physical tampering harder.</p></li><li><p><b>PCIe encryption</b>: In addition to TSME (Transparent Secure Memory Encryption) for CPU-to-memory encryption, which has been enabled since our Gen 10 platforms, the AMD Turin 9965 processor in Gen 13 extends encryption to PCIe traffic, ensuring data is protected in transit across every bus in the system.</p></li><li><p><b>Operational consistency</b>: Sticking with the Gen 12 management stack means our security audits, deployment, provisioning, and standard operating procedures remain fully compatible.</p></li></ul>
    <div>
      <h2>Power</h2>
      <a href="#power">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>800W 80 PLUS Titanium CRPS</p></td></tr><tr><td><p>Gen 13</p></td><td><p>1300W 80 PLUS Titanium CRPS</p></td></tr></table><p>As we upgrade the compute and networking capabilities of the server, the power envelope has naturally expanded. Gen 13 servers are equipped with larger power supplies to deliver the power needed.</p>
    <div>
      <h3>The jump to 1300W</h3>
      <a href="#the-jump-to-1300w">
        
      </a>
    </div>
    <p>While our Gen 12 nodes operated comfortably with an 800W 80 PLUS Titanium CRPS (Common Redundant Power Supply), the Gen 13 specification requires a larger power supply. We have selected a 1300W 80 PLUS Titanium CRPS.</p><p>Power consumption of Gen 13 during typical operation has risen to 850W, a 250W increase over the 600W seen in Gen 12. The primary contributors are the 500W TDP CPU (up from 400W), the doubling of memory capacity, and the additional NVMe drive.</p><p>Why 1300W instead of 1000W? The current PSU ecosystem lacks viable, high-efficiency options at 1000W. To ensure supply chain reliability, we moved to the next industry-standard tier of 1300W.</p><p><a href="https://eur-lex.europa.eu/eli/reg/2019/424/oj/eng"><u>EU Lot 9</u></a> is a regulation that requires servers deployed in the European Union to have power supplies whose efficiency at 10%, 20%, 50%, and 100% load meets or exceeds the thresholds specified in the regulation. Those thresholds match the <a href="https://www.clearesult.com/80plus/80plus-psu-ratings-explained"><u>80 PLUS Power Supply certification program</u></a> Titanium grade requirements. We chose a Titanium grade PSU for Gen 13 to maintain full compliance with EU Lot 9, ensuring that the servers can be deployed in our European data centers and beyond.</p>
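    <p>The sizing also lands the PSU near its efficiency sweet spot. A minimal sketch, assuming a Titanium-class efficiency of roughly 96% around mid-load (the exact curve varies by model, so treat that figure as an assumption):</p><pre><code>// PSU load fraction and implied wall draw at typical operation.
fn main() {
    let typical_dc_w = 850.0; // typical Gen 13 draw from this post
    let rated_w = 1300.0;     // selected PSU rating
    println!("typical load: {:.0}% of rating", typical_dc_w / rated_w * 100.0); // ~65%
    let efficiency = 0.96; // assumed Titanium-class efficiency near this load
    println!("implied wall draw: ~{:.0} W", typical_dc_w / efficiency); // ~885 W
}
</code></pre>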
    <div>
      <h3>Thermal design: 2U pays dividends again</h3>
      <a href="#thermal-design-2u-pays-dividends-again">
        
      </a>
    </div>
    <p>The 2U1N form factor we adopted in Gen 12 continues to pay dividends. Gen 13 uses 5x 80mm fans (up from 4x in Gen 12) to handle the increased thermal load from the 500W CPU. The additional fan capacity, combined with the 2U chassis airflow characteristics, means the fans operate well below maximum duty cycle at typical ambient temperatures, keeping fan power below 50W per fan.</p>
    <div>
      <h2>Drop-in accelerator support</h2>
      <a href="#drop-in-accelerator-support">
        
      </a>
    </div>
    <table><tr><td><p>Gen 12</p></td><td><p>x2 single width FHFL or x1 double width FHFL</p></td></tr><tr><td><p>Gen 13</p></td><td><p>x2 double width FHFL</p></td></tr></table><p>Maintaining the modularity of our fleet is a core requirement for our server design. This requirement enabled Cloudflare to quickly retrofit and <a href="https://blog.cloudflare.com/workers-ai/#a-road-to-global-gpu-coverage"><u>deploy GPUs globally to more than 100 cities in 2024</u></a>. In Gen 13, we are continuing the support of high-performance PCIe add-in cards.</p><p>On Gen 13, the 2U chassis layout is updated and configured to support more demanding power and thermal requirements. While Gen 12 was limited to a single double-width GPU, the Gen 13 architecture now supports two double-width PCIe cards.</p>
    <div>
      <h2>A launchpad to scale Cloudflare to greater heights</h2>
      <a href="#a-launchpad-to-scale-cloudflare-to-greater-heights">
        
      </a>
    </div>
    <p>Every generation of Cloudflare servers is an exercise in balancing competing constraints: performance versus power, capacity versus cost, flexibility versus simplicity. Gen 13 comes with 2x core count, 2x memory capacity, 4x network bandwidth, 1.5x storage capacity, and future-proofing for accelerator deployments — all while improving total cost of ownership and maintaining a robust management feature set and security posture that our global fleet demands.</p><p>Gen 13 servers are fully qualified and will be deployed to serve millions of requests across Cloudflare’s global network in more than 330 cities. As always, Cloudflare’s journey to serve the Internet as efficiently as possible does not end here. As the deployment of Gen 13 begins, we are planning the architecture for Gen 14.</p><p>If you are excited about helping build a better Internet, come join us. <a href="https://www.cloudflare.com/careers/jobs/"><u>We are hiring</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Infrastructure]]></category>
            <category><![CDATA[Engineering]]></category>
            <category><![CDATA[AMD]]></category>
            <guid isPermaLink="false">7KkjVfneO6PwoHTEAiSYVM</guid>
            <dc:creator>Syona Sarma</dc:creator>
            <dc:creator>JQ Lau</dc:creator>
            <dc:creator>Ma Xiong</dc:creator>
            <dc:creator>Victor Hwang</dc:creator>
        </item>
        <item>
            <title><![CDATA[Launching Cloudflare’s Gen 13 servers: trading cache for cores for 2x edge compute performance]]></title>
            <link>https://blog.cloudflare.com/gen13-launch/</link>
            <pubDate>Mon, 23 Mar 2026 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Gen 13 servers double our compute throughput by rethinking the balance between cache and cores. Moving to high-core-count AMD EPYC™ Turin CPUs, we traded large L3 cache for raw compute density. By running our new Rust-based FL2 stack, we completely mitigated the latency penalty to unlock twice the performance. ]]></description>
            <content:encoded><![CDATA[ <p>Two years ago, Cloudflare deployed our <a href="https://blog.cloudflare.com/cloudflare-gen-12-server-bigger-better-cooler-in-a-2u1n-form-factor/"><u>12th Generation server fleet</u></a>, based on AMD EPYC™ Genoa-X processors with their massive 3D V-Cache. That cache-heavy architecture was a perfect match for FL1, our request handling layer at the time. But as we evaluated next-generation hardware, we faced a dilemma: the CPUs offering the biggest throughput gains came with a significant cache reduction. Our legacy software stack wasn't optimized for this, and the potential throughput benefits were being capped by increasing latency.</p><p>This blog describes how the <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>FL2 transition</u></a>, our Rust-based rewrite of Cloudflare's core request handling layer, allowed us to prove Gen 13's full potential and unlock performance gains that would have been impossible on our previous stack. FL2 removes the dependency on the larger cache, allowing performance to scale with cores while maintaining our SLAs. Today, we are proud to announce the launch of Cloudflare's Gen 13 servers, based on 5th Gen AMD EPYC™ Turin processors and running FL2, effectively capturing and scaling performance at the edge.</p>
    <div>
      <h2>What AMD EPYC™ Turin brings to the table</h2>
      <a href="#what-amd-epycturin-brings-to-the-table">
        
      </a>
    </div>
    <p><a href="https://www.amd.com/en/products/processors/server/epyc/9005-series.html"><u>AMD's EPYC™ 5th Generation Turin-based processors</u></a> deliver more than just a core count increase. The architecture delivers improvements across multiple dimensions of what Cloudflare servers require.</p><ul><li><p><b>2x core count:</b> up to 192 cores versus Gen 12's 96 cores, with SMT providing 384 threads</p></li><li><p><b>Improved IPC:</b> Zen 5's architectural improvements deliver better instructions-per-cycle compared to Zen 4</p></li><li><p><b>Better power efficiency</b>: Despite the higher core count, Turin consumes up to 32% fewer watts per core compared to Genoa-X</p></li><li><p><b>DDR5-6400 support</b>: Higher memory bandwidth to feed all those cores</p></li></ul><p>However, Turin's high density OPNs make a deliberate tradeoff: prioritizing throughput over per core cache. Our analysis across the Turin stack highlighted this shift. For example, comparing the highest density Turin OPN to our Gen 12 Genoa-X processors reveals that Turin's 192 cores share 384MB of L3 cache. This leaves each core with access to just 2MB, one-sixth of Gen 12's allocation. For any workload that relies heavily on cache locality, which ours did, this reduction posed a serious challenge.</p><table><tr><td><p>Generation</p></td><td><p>Processor</p></td><td><p>Cores/Threads</p></td><td><p>L3 Cache/Core</p></td></tr><tr><td><p>Gen 12</p></td><td><p>AMD Genoa-X 9684X</p></td><td><p>96C/192T</p></td><td><p>12MB (3D V-Cache)</p></td></tr><tr><td><p>Gen 13 Option 1</p></td><td><p>AMD Turin 9755</p></td><td><p>128C/256T</p></td><td><p>4MB</p></td></tr><tr><td><p>Gen 13 Option 2</p></td><td><p>AMD Turin 9845</p></td><td><p>160C/320T</p></td><td><p>2MB</p></td></tr><tr><td><p>Gen 13 Option 3</p></td><td><p>AMD Turin 9965</p></td><td><p>192C/384T</p></td><td><p>2MB</p></td></tr></table>
    <div>
      <h2>Diagnosing the problem with performance counters</h2>
      <a href="#diagnosing-the-problem-with-performance-counters">
        
      </a>
    </div>
    <p>For FL1, our NGINX- and LuaJIT-based request handling layer, this cache reduction presented a significant challenge. But we didn't just assume it would be a problem; we measured it.</p><p>During the CPU evaluation phase for Gen 13, we collected CPU performance counters and profiling data with the <a href="https://docs.amd.com/r/en-US/68658-uProf-getting-started-guide/Identifying-Issues-Using-uProfPcm"><u>AMD uProf tool</u></a> to identify exactly what was happening under the hood. The data showed:</p><ul><li><p>L3 cache miss rates increased dramatically compared to Gen 12's servers equipped with 3D V-Cache processors</p></li><li><p>Memory fetch latency dominated request processing time, as data that previously stayed in L3 now required trips to DRAM</p></li><li><p>The latency penalty scaled with utilization: as we pushed CPU usage higher, cache contention worsened</p></li></ul><p>L3 cache hits complete in roughly 50 cycles; L3 cache misses requiring DRAM access take 350+ cycles, nearly an order of magnitude difference. With 6x less cache per core, FL1 on Gen 13 was hitting memory far more often, incurring latency penalties.</p>
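    <p>Those cycle counts translate directly into average memory access time (AMAT). The miss rates in this sketch are illustrative, not our measured values, but they show how quickly a shrinking cache erodes latency:</p><pre><code>// AMAT at the L3 level: hits ~50 cycles, misses to DRAM ~350 cycles.
fn l3_amat(miss_rate: f64) -> f64 {
    (1.0 - miss_rate) * 50.0 + miss_rate * 350.0
}

fn main() {
    // Illustrative miss rates: a large 3D V-Cache vs. 2 MB/core.
    println!("5% miss rate:  {:.0} cycles", l3_amat(0.05)); // 65
    println!("30% miss rate: {:.0} cycles", l3_amat(0.30)); // 140, more than 2x slower
}
</code></pre>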
    <div>
      <h2>The tradeoff: latency vs. throughput </h2>
      <a href="#the-tradeoff-latency-vs-throughput">
        
      </a>
    </div>
    <p>Our initial tests running FL1 on Gen 13 confirmed what the performance counters had already suggested. While the Turin processor could achieve higher throughput, it came at a steep latency cost.</p><table><tr><td><p>Metric</p></td><td><p>Gen 12 (FL1)</p></td><td><p>Gen 13 - AMD Turin 9755 (FL1)</p></td><td><p>Gen 13 - AMD Turin 9845 (FL1)</p></td><td><p>Gen 13 - AMD Turin 9965 (FL1)</p></td><td><p>Delta</p></td></tr><tr><td><p>Core count</p></td><td><p>baseline</p></td><td><p>+33%</p></td><td><p>+67%</p></td><td><p>+100%</p></td><td><p></p></td></tr><tr><td><p>FL throughput</p></td><td><p>baseline</p></td><td><p>+10%</p></td><td><p>+31%</p></td><td><p>+62%</p></td><td><p>Improvement</p></td></tr><tr><td><p>Latency at low to moderate CPU utilization</p></td><td><p>baseline</p></td><td><p>+10%</p></td><td><p>+30%</p></td><td><p>+30%</p></td><td><p>Regression</p></td></tr><tr><td><p>Latency at high CPU utilization</p></td><td><p>baseline</p></td><td><p>&gt; 20% </p></td><td><p>&gt; 50% </p></td><td><p>&gt; 50% </p></td><td><p>Unacceptable</p></td></tr></table><p>The Gen 13 evaluation server with the AMD Turin 9965, which generated a 62% throughput gain, was compelling, and the performance uplift provided the most improvement to Cloudflare’s total cost of ownership (TCO).</p><p>But a more than 50% latency penalty is not acceptable. The increase in request processing latency would directly impact customer experience. We faced a familiar infrastructure question: do we accept a solution with no TCO benefit, accept the increased latency tradeoff, or find a way to boost efficiency without adding latency?</p>
    <div>
      <h2>Incremental gains with performance tuning</h2>
      <a href="#incremental-gains-with-performance-tuning">
        
      </a>
    </div>
    <p>To find a path to an optimal outcome, we collaborated with AMD to analyze the Turin 9965 data and run targeted optimization experiments. We systematically tested multiple configurations:</p><ul><li><p><b>Hardware Tuning:</b> Adjusting hardware prefetchers and Data Fabric (DF) Probe Filters, which showed only marginal gains</p></li><li><p><b>Scaling Workers</b>: Launching more FL1 workers, which improved throughput but cannibalized resources from other production services</p></li><li><p><b>CPU Pinning &amp; Isolation:</b> Adjusting workload isolation configurations to find the optimal mix, with limited success</p></li></ul><p>The configuration that ultimately provided the most value was <b>AMD’s Platform Quality of Service (PQOS)</b>. PQOS extensions enable fine-grained regulation of shared resources like cache and memory bandwidth. Since Turin processors consist of one I/O die and up to 12 Core Complex Dies (CCDs), each sharing an L3 cache across up to 16 cores, we put this to the test. Here is how the different experimental configurations performed.</p><p>First, we used PQOS to allocate a dedicated L3 cache share within a single CCD for FL1; the gains were minimal. However, when we scaled the concept to the socket level, dedicating <i>entire</i> CCDs strictly to FL1, we saw meaningful throughput gains while keeping latency acceptable. (A minimal sketch of driving this kind of cache allocation through the Linux resctrl interface follows the table below.)</p><div>
<figure>
<table><colgroup><col></col><col></col><col></col><col></col></colgroup>
<tbody>
<tr>
<td>
<p><span><span>Configuration</span></span></p>
</td>
<td>
<p><span><span>Description</span></span></p>
</td>
<td>
<p><span><span>Illustration</span></span></p>
</td>
<td>
<p><span><span>Performance gain</span></span></p>
</td>
</tr>
<tr>
<td>
<p><span><span>NUMA-aware core affinity </span></span><br /><span><span>(equivalent to PQOS at socket level)</span></span></p>
</td>
<td>
<p><span><span>6 out of 12 CCD (aligned with NUMA domain) run FL.</span></span></p>
<p> </p>
<p><span><span>32MB L3 cache in each CCD shared among all cores. </span></span></p>
</td>
<td>
<p><span><span><img src="https://images.ctfassets.net/zkvhlag99gkb/4CBSHY02oIZOiENgFrzLSz/0c6c2ac8ef0096894ff4827e30d25851/image3.png" /></span></span></p>
</td>
<td>
<p><span><span>&gt;15% incremental </span></span></p>
<p><span><span>throughput gain</span></span></p>
</td>
</tr>
<tr>
<td>
<p><span><span>PQOS config 1</span></span></p>
</td>
<td>
<p><span><span>1 of 2 vCPU on each physical core in each CCD runs FL. </span></span></p>
<p> </p>
<p><span><span>FL gets 75% of the 32MB L3 cache of each CCD.</span></span></p>
</td>
<td>
<p><span><span><img src="https://images.ctfassets.net/zkvhlag99gkb/3iJo1BBRueQRy92R3aXbGx/596c3231fa0e66f20de70ea02615f9a7/image2.png" /></span></span></p>
</td>
<td>
<p><span><span>&lt; 5% incremental throughput gain</span></span></p>
<p> </p>
<p><span><span>Other services show minor signs of degradation</span></span></p>
</td>
</tr>
<tr>
<td>
<p><span><span>PQOS config 2</span></span></p>
</td>
<td>
<p><span><span>1 of 2 vCPU in each physical core in each CCD runs FL.</span></span></p>
<p> </p>
<p><span><span>FL gets 50% of the 32MB L3 cache of each CCD.</span></span></p>
</td>
<td>
<p><span><span><img src="https://images.ctfassets.net/zkvhlag99gkb/3iJo1BBRueQRy92R3aXbGx/596c3231fa0e66f20de70ea02615f9a7/image2.png" /></span></span></p>
</td>
<td>
<p><span><span>&lt; 5% incremental throughput gain</span></span></p>
</td>
</tr>
<tr>
<td>
<p><span><span>PQOS config 3</span></span></p>
</td>
<td>
<p><span><span>2 vCPU on 50% of the physical core in each CCD runs FL. </span></span></p>
<p> </p>
<p><span><span>FL gets 50% of  the 32MB L3 cache of each CCD.</span></span></p>
</td>
<td>
<p><span><span><img src="https://images.ctfassets.net/zkvhlag99gkb/7FKLfSxnSNUlXJCw8CJGzU/69c7b81b6cee5a2c7040ecc96748084b/image5.png" /></span></span></p>
</td>
<td>
<p><span><span>&lt; 5% incremental throughput gain</span></span></p>
</td>
</tr>
</tbody>
</table>
</figure>
</div>
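    <p>For readers who want to experiment with the same lever, Linux exposes this class of cache partitioning through the resctrl filesystem. Below is a minimal, hypothetical sketch in Rust: the group name, PID, and capacity bitmask are placeholders, and the real cache-domain IDs and mask widths come from /sys/fs/resctrl/info on the target machine.</p><pre><code>use std::fs;

// Assumes the resctrl filesystem is already mounted:
//   mount -t resctrl resctrl /sys/fs/resctrl
fn main() -> std::io::Result<()> {
    let group = "/sys/fs/resctrl/fl"; // hypothetical allocation group
    fs::create_dir_all(group)?;

    // Give the group 12 of 16 L3 ways (75%) in cache domain 0, roughly
    // analogous to "PQOS config 1" in the table above. Domains omitted
    // from the write keep their existing masks.
    fs::write(format!("{group}/schemata"), "L3:0=0fff\n")?;

    // Bind the FL process to the group so the allocation applies to it.
    let fl_pid = 12345; // hypothetical PID
    fs::write(format!("{group}/tasks"), fl_pid.to_string())?;
    Ok(())
}
</code></pre>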
    <div>
      <h2>The opportunity: FL2 was already in progress</h2>
      <a href="#the-opportunity-fl2-was-already-in-progress">
        
      </a>
    </div>
    <p>Hardware tuning and resource configuration provided modest gains, but to truly unlock the performance potential of the Gen 13 architecture, we knew we would have to rewrite our software stack to fundamentally change how it utilized system resources.</p><p>Fortunately, we weren't starting from scratch. As we <a href="https://blog.cloudflare.com/20-percent-internet-upgrade/"><u>announced during Birthday Week 2025</u></a>, we had already been rebuilding FL1 from the ground up. FL2 is a complete rewrite of our request handling layer in Rust, built on our <a href="https://blog.cloudflare.com/pingora-open-source/"><u>Pingora</u></a> and <a href="https://blog.cloudflare.com/introducing-oxy/"><u>Oxy</u></a> frameworks, replacing 15 years of NGINX and LuaJIT code.</p><p>The FL2 project wasn't initiated to solve the Gen 13 cache problem — it was driven by the need for better security (Rust's memory safety), faster development velocity (strict module system), and improved performance across the board (less CPU, less memory, modular execution).</p><p>FL2's cleaner architecture, with better memory access patterns and less dynamic allocation, might not depend on massive L3 caches the way FL1 did. This gave us an opportunity to use the FL2 transition to prove whether Gen 13's throughput gains could be realized without the latency penalty.</p>
    <div>
      <h2>Proving it out: FL2 on Gen 13</h2>
      <a href="#proving-it-out-fl2-on-gen-13">
        
      </a>
    </div>
    <p>As the FL2 rollout progressed, production metrics from our Gen 13 servers validated what we had hypothesized.</p><table><tr><td><p>Metric</p></td><td><p>Gen 13 AMD Turin 9965 (FL1)</p></td><td><p>Gen 13 AMD Turin 9965 (FL2)</p></td></tr><tr><td><p>FL requests per CPU%</p></td><td><p>baseline</p></td><td><p>50% higher</p></td></tr><tr><td><p>Latency vs Gen 12</p></td><td><p>baseline</p></td><td><p>70% lower</p></td></tr><tr><td><p>Throughput vs Gen 12</p></td><td><p>62% higher</p></td><td><p>100% higher</p></td></tr></table><p>The out-of-the-box efficiency gains on our new FL2 stack were substantial, even before any system optimizations. FL2 slashed the latency penalty by 70%, allowing us to push Gen 13 to higher CPU utilization while strictly meeting our latency SLAs. Under FL1, this would have been impossible.</p><p>By effectively eliminating the cache bottleneck, FL2 enables our throughput to scale linearly with core count. The impact is undeniable on the high-density AMD Turin 9965: we achieved a 2x performance gain, unlocking the true potential of the hardware. With further system tuning, we expect to squeeze even more performance out of our Gen 13 fleet.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1jV1q0n9PgmbbNzDl8E1J1/2ead24a20cc10836ba041f73a16f3883/image6.png" />
          </figure>
    <div>
      <h2>Generational improvement with Gen 13</h2>
      <a href="#generational-improvement-with-gen-13">
        
      </a>
    </div>
    <p>With FL2 unlocking the immense throughput of the high-core-count AMD Turin 9965, we have officially selected these processors for our Gen 13 deployment. Hardware qualification is complete, and Gen 13 servers are now shipping at scale to support our global rollout.</p>
    <div>
      <h3>Performance improvements</h3>
      <a href="#performance-improvements">
        
      </a>
    </div>
    <table><tr><td><p>
</p></td><td><p>Gen 12 </p></td><td><p>Gen 13 </p></td></tr><tr><td><p>Processor</p></td><td><p>AMD EPYC™ 4th Gen Genoa-X 9684X</p></td><td><p>AMD EPYC™ 5th Gen Turin 9965</p></td></tr><tr><td><p>Core count</p></td><td><p>96C/192T</p></td><td><p>192C/384T</p></td></tr><tr><td><p>FL throughput</p></td><td><p>baseline</p></td><td><p>Up to +100%</p></td></tr><tr><td><p>Performance per watt</p></td><td><p>baseline</p></td><td><p>Up to +50%</p></td></tr></table>
    <div>
      <h3>Gen 13 business impact</h3>
      <a href="#gen-13-business-impact">
        
      </a>
    </div>
    <p><b>Up to 2x throughput vs Gen 12 </b>for uncompromising customer experience: By doubling our throughput capacity while staying within our latency SLAs, we guarantee our applications remain fast, responsive, and able to absorb massive traffic spikes.</p><p><b>50% better performance/watt vs Gen 12 </b>for sustainable scaling: This gain in power efficiency not only reduces data center expansion costs, but also allows us to process growing traffic with a vastly lower carbon footprint per request.</p><p><b>60% higher rack throughput vs Gen 12 </b>for global edge upgrades: Because we achieved this throughput density while keeping the rack power budget constant, we can seamlessly deploy this next-generation compute anywhere in the world across our global edge network, delivering top-tier performance exactly where our customers want it.</p>
    <div>
      <h2>Gen 13 + FL2: ready for the edge </h2>
      <a href="#gen-13-fl2-ready-for-the-edge">
        
      </a>
    </div>
    <p>Our legacy request serving layer FL1 hit a cache contention wall on Gen 13, forcing an unacceptable tradeoff between throughput and latency. Instead of compromising, we built FL2.</p><p>Designed with a vastly leaner memory access pattern, FL2 removes our dependency on massive L3 caches and allows linear scaling with core count. Running on the Gen 13 AMD Turin platform, FL2 unlocks 2x the throughput and a 50% boost in power efficiency, all while keeping latency within our SLAs. This leap forward is a great reminder of the importance of hardware-software co-design. Unconstrained by cache limits, Gen 13 servers are now ready to be deployed to serve millions of requests across Cloudflare’s global network.</p><p>If you're excited about working on infrastructure at global scale, <a href="https://www.cloudflare.com/careers/jobs"><u>we're hiring</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Infrastructure]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[Engineering]]></category>
            <guid isPermaLink="false">4shbA7eyT2KredK7RJyizK</guid>
            <dc:creator>Syona Sarma</dc:creator>
            <dc:creator>JQ Lau</dc:creator>
            <dc:creator>Jesse Brandeburg</dc:creator>
        </item>
        <item>
            <title><![CDATA[Analysis of the EPYC 145% performance gain in Cloudflare Gen 12 servers]]></title>
            <link>https://blog.cloudflare.com/analysis-of-the-epyc-145-performance-gain-in-cloudflare-gen-12-servers/</link>
            <pubDate>Tue, 15 Oct 2024 15:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Gen 12 server is the most powerful and power efficient server that we have deployed to date. Through sensitivity analysis, we found that Cloudflare workloads continue to scale with higher core count and higher CPU frequency, as well as achieving a significant boost in performance with larger L3 cache per core. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare's <a href="https://www.cloudflare.com/network"><u>network</u></a> spans more than 330 cities in over 120 countries, serving over 60 million HTTP requests per second and 39 million DNS queries per second on average. These numbers will continue to grow, and at an accelerating pace, as will Cloudflare’s infrastructure to support them. While we can continue to scale out by deploying more servers, it is also paramount for us to develop and deploy more performant and more efficient servers.</p><p>At the heart of each server is the processor (central processing unit, or CPU). Even though many aspects of a server rack can be redesigned to improve the cost to serve a request, the CPU remains the biggest lever, as it is typically the primary compute resource in a server, and the primary enabler of new technologies.</p><p><a href="https://blog.cloudflare.com/gen-12-servers/"><u>Cloudflare’s 12th Generation server with AMD EPYC 9684X (codenamed Genoa-X) is 145% more performant and 63% more efficient</u></a>. These are big numbers, but where do the performance gains come from? Cloudflare’s hardware system engineering team ran a sensitivity analysis on three variants of the 4th generation AMD EPYC processor to understand the contributing factors.</p><p>For the 4th generation AMD EPYC Processors, AMD offers three architectural variants: </p><ol><li><p>mainstream classic Zen 4 cores, codenamed Genoa</p></li><li><p>efficiency-optimized dense Zen 4c cores, codenamed Bergamo</p></li><li><p>cache-optimized Zen 4 cores with 3D V-cache, codenamed Genoa-X</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7hb5yDJksMIwbWVQcCzuoC/c085edca54d3820f7e93564791b0289a/image6.png" />
            
            </figure><p><sup>Figure 1 (from left to right): AMD EPYC 9654 (Genoa), AMD EPYC 9754 (Bergamo), AMD EPYC 9684X (Genoa-X)</sup></p><p>Key features common across the 4th Generation AMD EPYC processors:</p><ul><li><p>Up to 12x Core Complex Dies (CCDs)</p></li><li><p>Each core has a private 1MB L2 cache</p></li><li><p>The CCDs connect to memory, I/O, and each other through an I/O die</p></li><li><p>Configurable Thermal Design Power (cTDP) up to 400W</p></li><li><p>Support for up to 12 channels of DDR5-4800 at 1DPC</p></li><li><p>Support for up to 128 lanes of PCIe Gen 5</p></li></ul><p>Classic Zen 4 Cores (Genoa):</p><ul><li><p>Each Core Complex (CCX) has 8x Zen 4 Cores (16x Threads)</p></li><li><p>Each CCX has a shared 32 MB L3 cache (4 MB/core)</p></li><li><p>Each CCD has 1x CCX</p></li></ul><p>Dense Zen 4c Cores (Bergamo):</p><ul><li><p>Each CCX has 8x Zen 4c Cores (16x Threads)</p></li><li><p>Each CCX has a shared 16 MB L3 cache (2 MB/core)</p></li><li><p>Each CCD has 2x CCX</p></li></ul><p>Classic Zen 4 Cores with 3D V-cache (Genoa-X):</p><ul><li><p>Each CCX has 8x Zen 4 Cores (16x Threads)</p></li><li><p>Each CCX has a shared 96MB L3 cache (12 MB/core)</p></li><li><p>Each CCD has 1x CCX</p></li></ul><p>For more information on the 4th generation AMD EPYC Processor architecture, see: <a href="https://www.amd.com/system/files/documents/4th-gen-epyc-processor-architecture-white-paper.pdf"><u>https://www.amd.com/system/files/documents/4th-gen-epyc-processor-architecture-white-paper.pdf</u></a> </p><p>The following table summarizes the specifications of the AMD EPYC 7713 CPU in our <a href="https://blog.cloudflare.com/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/"><u>Gen 11 server</u></a> against the three CPU candidates, one from each variant of the 4th generation AMD EPYC Processor architecture:</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>CPU Model</strong></span></span></p>
                    </td>
                    <td>
                        <p><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span><span><strong><u>AMD EPYC 7713</u></strong></span></span></a></p>
                    </td>
                    <td>
                        <p><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span><span><strong><u>AMD EPYC 9654</u></strong></span></span></a></p>
                    </td>
                    <td>
                        <p><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span><span><strong><u>AMD EPYC 9754</u></strong></span></span></a></p>
                    </td>
                    <td>
                        <p><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span><span><strong><u>AMD EPYC 9684X</u></strong></span></span></a></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Series</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>Milan</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Genoa</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Bergamo</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Genoa-X</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong># of CPU Cores</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>64</span></span></p>
                    </td>
                    <td>
                        <p><span><span>96</span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>128</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>96</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong># of Threads</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>128</span></span></p>
                    </td>
                    <td>
                        <p><span><span>192</span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>256</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>192</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Base Clock</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.0 GHz</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.4 GHz</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.25 GHz</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.4 GHz</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>All Core Boost Clock</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>~2.7 GHz*</span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>3.55 GHz</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>3.1 GHz</span></span></p>
                    </td>
                    <td>
                        <p><span><span>3.42 GHz</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Total L3 Cache</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>256 MB</span></span></p>
                    </td>
                    <td>
                        <p><span><span>384 MB</span></span></p>
                    </td>
                    <td>
                        <p><span><span>256 MB</span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>1152 MB</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>L3 cache per core</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>4 MB / core</span></span></p>
                    </td>
                    <td>
                        <p><span><span>4 MB / core</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2 MB / core</span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>12 MB / core</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Maximum configurable TDP</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>240W</span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>400W</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>400W</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>400W</strong></span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p><sup>* AMD EPYC 7713 all core boost clock is based on Cloudflare production data, not the official specification from AMD</sup></p>
    <div>
      <h2>cf_benchmark</h2>
      <a href="#cf_benchmark">
        
      </a>
    </div>
    <p>Readers may remember that Cloudflare introduced <a href="https://github.com/cloudflare/cf_benchmark"><u>cf_benchmark</u></a> when we evaluated <a href="https://blog.cloudflare.com/arm-takes-wing/"><u>Qualcomm's ARM chips</u></a>, using it as our first-pass benchmark to shortlist <a href="https://blog.cloudflare.com/an-epyc-trip-to-rome-amd-is-cloudflares-10th-generation-edge-server-cpu/"><u>AMD’s Rome CPU for our Gen 10 servers</u></a> and to evaluate <a href="https://blog.cloudflare.com/arms-race-ampere-altra-takes-on-aws-graviton2/"><u>our chosen ARM CPU Ampere Altra Max against AWS Graviton 2</u></a>. Likewise, we ran cf_benchmark against the three candidate CPUs for our 12th Gen servers: AMD EPYC 9654 (Genoa), AMD EPYC 9754 (Bergamo), and AMD EPYC 9684X (Genoa-X). The majority of cf_benchmark workloads are compute-bound: given more cores or higher CPU frequency, they score better. The graph and the table below show the benchmark performance comparison of the three CPU candidates with Genoa 9654 as the baseline, where &gt; 1.00x indicates better performance.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6YNQ8YB5XV3fZEVpVFT7lM/2dadd7fda9832716f198dee2d5fbfd22/image5.png" />
          </figure><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span><strong>Genoa 9654 (baseline)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Bergamo 9754</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Genoa-X 9684X</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>openssl_pki</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.16x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.01x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>openssl_aead</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.20x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.01x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>luajit</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.86x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>brotli</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.11x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.98x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>gzip</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.87x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.01x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>go</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.09x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p>Bergamo 9754, with 128 cores, scores better in the openssl_pki, openssl_aead, brotli, and go benchmark suites, and less favorably in the luajit and gzip suites. Genoa-X 9684X (with significantly more L3 cache) doesn’t offer a significant performance boost in these compute-bound benchmarks.</p><p>These benchmarks are representative of some of the common workloads Cloudflare runs, and are useful for identifying software scaling issues, system configuration bottlenecks, and the impact of CPU design choices on workload-specific performance. However, the benchmark suite is not an exhaustive list of all workloads Cloudflare runs in production, and in reality the benchmarked workloads are almost never the only thing running on the CPU. In short, though benchmark results can be informative, they are not a good indication of production performance when a mix of these workloads runs on the same processor.</p>
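<p>As a side note on presentation: the multipliers above are simply each CPU's raw suite score divided by the Genoa 9654 score. A minimal sketch of that normalization follows; the raw scores below are invented solely to reproduce two rows of the table, since cf_benchmark reports its own units per suite.</p><pre><code># Normalize raw benchmark scores to a baseline CPU, as in the table
# above. Raw scores here are made up to reproduce two of the rows.
raw = {
    "openssl_pki": {"Genoa 9654": 1000, "Bergamo 9754": 1160, "Genoa-X 9684X": 1010},
    "gzip":        {"Genoa 9654": 1000, "Bergamo 9754":  870, "Genoa-X 9684X": 1010},
}
BASELINE = "Genoa 9654"

for suite, scores in raw.items():
    for cpu, score in scores.items():
        print(f"{suite:12} {cpu:14} {score / scores[BASELINE]:.2f}x")
</code></pre>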
    <div>
      <h2>Performance simulation</h2>
      <a href="#performance-simulation">
        
      </a>
    </div>
    <p>To get an early indication of production performance, Cloudflare has an internal performance simulation tool that exercises our software stack by fetching a fixed asset repeatedly. The tool can be configured to fetch an asset of a specified size and to include or exclude services like WAF or Workers in the request path (a simplified sketch of such a fetch loop follows the results below). Below, we show the simulated performance of the three candidate CPUs against the Milan 7713 baseline for an asset size of 10 KB, where &gt;1.00x indicates better performance.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span><strong>Milan 7713</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Genoa 9654</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Bergamo 9754</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Genoa-X 9684X</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Lab simulation performance multiplier</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.20x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.95x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.75x</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p>Based on these results, Bergamo 9754, which has the highest core count but the smallest L3 cache per core, is the least performant of the three candidates, followed by Genoa 9654. Genoa-X 9684X, with the largest L3 cache per core, is the most performant. This suggests that our software stack is very sensitive to L3 cache size, in addition to core count and CPU frequency, and is worth a deeper sensitivity analysis of our workload against a few high-level CPU design points: core scaling, frequency scaling, and L2/L3 cache size scaling.</p>
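<p>For intuition, here is a heavily simplified sketch of what a fixed-asset fetch loop of this kind looks like. This is not our internal tool: the URL, asset, concurrency, and duration are all hypothetical, and the real simulator drives the full software stack with services toggled in or out of the request path.</p><pre><code># Sketch of a fixed-asset fetch loop measuring requests per second.
# URL, worker count, and duration are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://origin.example.com/asset-10kb.bin"  # hypothetical 10 KB asset
WORKERS = 64
DURATION_S = 30

def fetch_until(deadline: float) -> int:
    done = 0
    while time.monotonic() < deadline:
        with urllib.request.urlopen(URL) as resp:
            resp.read()            # drain the fixed-size body
        done += 1
    return done

deadline = time.monotonic() + DURATION_S
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    total = sum(pool.map(fetch_until, [deadline] * WORKERS))

print(f"{total / DURATION_S:.0f} requests/second")
</code></pre>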
    <div>
      <h2>Sensitivity analysis</h2>
      <a href="#sensitivity-analysis">
        
      </a>
    </div>
    
    <div>
      <h3>Core sensitivity</h3>
      <a href="#core-sensitivity">
        
      </a>
    </div>
    <p>The number of cores is the headline specification that practically everyone talks about, and one of the easiest levers CPU vendors can pull to increase performance per socket. The AMD Genoa 9654 has 96 cores, 50% more than the 64 cores available on the AMD Milan 7713 CPUs that we used in our Gen 11 servers. Is more always better? Does Cloudflare’s primary workload scale with core count and effectively utilize all available cores?</p><p>The figure and table below show the result of a core scaling experiment performed on an AMD Genoa 9654 configured with 96, 80, 64, and 48 cores, done by disabling two CCDs (8 cores per CCD) at each step. The result is great: Cloudflare’s simulated primary workload scales linearly with core count on AMD Genoa CPUs (a quick check of the scaling efficiency follows the table). The slightly superlinear numbers are likely an artifact of fewer active cores sustaining marginally higher boost clocks within the same power budget.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/f1w9JxhP8aLIoFONMq5tr/b3269fc121b9bfd8f4d9a3394a73c599/image4.png" />
          </figure><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>Core count</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Core increase</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Performance increase</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>48</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>64</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.33x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.39x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>80</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.67x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.71x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>96</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.05x</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div>
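<p>A quick check of the scaling efficiency, using the values from the table above:</p><pre><code># Scaling efficiency from the core scaling table: performance increase
# divided by core increase, relative to the 48-core baseline.
cores = [48, 64, 80, 96]
perf = [1.00, 1.39, 1.71, 2.05]

for c, p in zip(cores, perf):
    core_mult = c / cores[0]
    print(f"{c} cores: {core_mult:.2f}x cores, {p:.2f}x perf, "
          f"efficiency {p / core_mult:.2f}")
# Efficiency stays at or slightly above 1.00: effectively linear scaling.
</code></pre>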
    <div>
      <h3>TDP sensitivity</h3>
      <a href="#tdp-sensitivity">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/thermal-design-supporting-gen-12-hardware-cool-efficient-and-reliable/"><u>Thermal Design Power (TDP), is the maximum amount of heat generated by a CPU that the cooling system is designed to dissipate</u></a>, but more commonly refers to the power consumption of the processor under the maximum theoretical loads. AMD Genoa 9654’s default TDP is 360W, but can be configured up to 400W TDP. Is more always better? Does Cloudflare continue to see meaningful performance improvement up to 400W, or does performance stagnate at some point?</p><p>The chart below shows the result of sweeping the TDP of the AMD Genoa 9654 (in power determinism mode) from 240W to 400W. (Note: x-axis step size is not linear).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/216pSnjCDQOLwtRS822ugu/17dc1885d34ae10a8a1cecb6584b7dd7/image3.png" />
          </figure><p>Cloudflare’s simulated primary workload continues to see incremental performance improvements up to the maximum configurable 400W, albeit at a less favorable perf/watt ratio.</p><p>Looking at TDP sensitivity data is a quick and easy way to identify if performance stagnates at some power point, but what does power sensitivity actually measure? There are several factors contributing to CPU power consumption, but let's focus on one of the primary factors: dynamic power consumption. Dynamic power consumption is approximately <i>CV</i><i><sup>2</sup></i><i>f</i>, where C is the switched load capacitance, V is the regulated voltage, and f is the frequency. In modern processors like the AMD Genoa 9654, the CPU dynamically scales its voltage along with frequency, so theoretically, CPU dynamic power is loosely proportional to f<sup>3</sup>. In other words, measuring TDP sensitivity is measuring the frequency sensitivity of a workload. Does the data agree? Yes!</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>cTDP (W)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>All core boost frequency (GHz)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Perf (rps) / baseline</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>240</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.47</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.78x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>280</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.75</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.87x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>320</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.93</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.93x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>340</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>3.13</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.97x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>360</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>3.3</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>380</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>3.4</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.03x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>390</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>3.465</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.04x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>400</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>3.55</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.05x</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div>
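<p>To quantify the less favorable perf/watt at the top of the range, here is a quick sketch computing relative perf/watt at each cTDP step, normalized to the 360W default, using the values from the table above:</p><pre><code># Relative perf/watt at each cTDP step, normalized to the 360W default
# (values transcribed from the table above).
points = [(240, 0.78), (280, 0.87), (320, 0.93), (340, 0.97),
          (360, 1.00), (380, 1.03), (390, 1.04), (400, 1.05)]

BASE_W, BASE_PERF = 360, 1.00
for watts, perf in points:
    ppw = (perf / watts) / (BASE_PERF / BASE_W)
    print(f"{watts}W: perf {perf:.2f}x, perf/watt {ppw:.2f}x")
# 240W is ~17% better perf/watt than 360W; 400W is ~5% worse.
</code></pre>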
    <div>
      <h3>Frequency sensitivity</h3>
      <a href="#frequency-sensitivity">
        
      </a>
    </div>
    <p>Instead of relying on an indirect measure through the TDP, let’s measure frequency sensitivity directly by sweeping the maximum boost frequency.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2oobymubOiZrpJjSExQxxo/a02985506ecc803e8b09d7f999b86c55/image2.png" />
          </figure><p>Above 3 GHz, the data shows that Cloudflare’s primary workload sees roughly a 2% incremental improvement for every 0.1 GHz increase in all-core average frequency. We hit the 400W power cap at 3.545 GHz. This is notably higher than the ~2.7 GHz all-core boost frequency our Gen 11 servers with AMD Milan 7713 sustain in production (or ~2.4 GHz in our performance simulation), which is amazing!</p>
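<p>We can sanity-check that figure against the frequency and performance pairs from the cTDP table in the previous section:</p><pre><code># Incremental gain per 0.1 GHz between consecutive points of the cTDP
# sweep above: (all-core frequency, perf vs the 360W baseline).
pts = [(2.93, 0.93), (3.13, 0.97), (3.30, 1.00),
       (3.40, 1.03), (3.465, 1.04), (3.55, 1.05)]

for (f0, p0), (f1, p1) in zip(pts, pts[1:]):
    gain_per_100mhz = (p1 - p0) / (f1 - f0) * 0.1
    print(f"{f0:.2f} -> {f1:.2f} GHz: {gain_per_100mhz:+.1%} per 0.1 GHz")
# Averages out to roughly +2% per 0.1 GHz above 3 GHz.
</code></pre>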
    <div>
      <h3>L3 cache size sensitivity</h3>
      <a href="#l3-cache-size-sensitivity">
        
      </a>
    </div>
    <p>What about L3 cache size sensitivity? L3 cache size is one of the primary design choices and a major difference between the trio of Genoa, Bergamo, and Genoa-X: Genoa 9654 has 4 MB of L3 per core, Bergamo 9754 has 2 MB per core, and Genoa-X 9684X has 12 MB per core. L3 cache is the last and largest on-chip “memory” bank before accesses spill out to memory on the DIMMs, which costs significantly more CPU cycles.</p><p>We ran an experiment on the Genoa 9654 to check how performance scales with L3 cache size. L3 cache per core was reduced through MSR writes (cache partitioning can also be done through interfaces like <a href="https://www.intel.com/content/www/us/en/developer/articles/technical/use-intel-resource-director-technology-to-allocate-last-level-cache-llc.htm"><u>Intel RDT</u></a>), and increased by disabling physical cores in a CCD, which reduces the number of cores sharing the fixed-size 32 MB L3 cache per CCD, effectively growing the L3 cache per core (a sketch of one way to partition L3 appears at the end of this section). Below is the result of the experiment, where &gt;1.00x indicates better performance:</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>L3 cache size increase vs baseline 4MB per core</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>0.25x</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>0.5x</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>0.75x</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>1x</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>1.14x</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>1.33x</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>1.60x</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>2.00x</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>rps/core / baseline</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.67x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.78x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.89x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.00x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.08x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.15x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.25x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.31x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>L3 cache miss rate per CCD</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>56.04%</span></span></p>
                    </td>
                    <td>
                        <p><span><span>39.15%</span></span></p>
                    </td>
                    <td>
                        <p><span><span>30.37%</span></span></p>
                    </td>
                    <td>
                        <p><span><span>23.55%</span></span></p>
                    </td>
                    <td>
                        <p><span><span>22.39%</span></span></p>
                    </td>
                    <td>
                        <p><span><span>19.73%</span></span></p>
                    </td>
                    <td>
                        <p><span><span>16.94%</span></span></p>
                    </td>
                    <td>
                        <p><span><span>14.28%</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/cjfPuhUHBfABvQ5hJGchb/e470ca9bb45b0c69b5069556ce866647/image7.png" />
          </figure><p>Even though we expected faster DDR5 and higher memory bandwidth to diminish the impact of L3 cache size, Cloudflare’s simulated primary workload is quite sensitive to it. The L3 cache miss rate dropped from 56% with only 1 MB of L3 per core to 14.28% with 8 MB per core. Changing the L3 cache size by 25% changes performance by approximately 11%, and performance keeps improving all the way up to 2x the baseline L3 per core, though with diminishing returns at that point.</p><p>Do we see the same behavior when comparing Genoa 9654, Bergamo 9754, and Genoa-X 9684X? We ran an experiment comparing the impact of L3 cache size, controlling for core count and all-core boost frequency, and again saw significant deltas. Halving the L3 cache from 4 MB/core to 2 MB/core reduces performance by 24%, roughly matching the experiment above. However, tripling the cache from 4 MB/core to 12 MB/core only increases performance by 25%, less than the earlier experiment suggested. This is likely because part of the gain in the earlier experiment can be attributed to reduced cache contention from the smaller number of active cores in that test setup. Nevertheless, these are significant deltas!</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>L3/core</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>2MB/core</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>4MB/core</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>12MB/core</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Perf (rps) / baseline</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.76x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.25x</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div>
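<p>For readers who want to reproduce this kind of experiment without touching MSRs directly, below is a minimal sketch of L3 way-masking through the Linux resctrl interface. This assumes a kernel with resctrl support for AMD's L3 cache allocation and a 16-way L3 per CCD; the group name, way mask, and PID are hypothetical, and our experiment used direct MSR writes instead.</p><pre><code># Sketch: shrink the L3 available to a workload via Linux resctrl,
# assuming kernel support for AMD L3 cache allocation and a 16-way L3
# per CCD. Group name, way mask, and PID below are hypothetical.
import os

RESCTRL = "/sys/fs/resctrl"                # resctrl must be mounted here
GROUP = os.path.join(RESCTRL, "half_l3")
os.makedirs(GROUP, exist_ok=True)          # mkdir creates a control group

# The default schemata lists one way mask per L3 domain (one per CCD),
# e.g. "L3:0=ffff;1=ffff;...". Keeping 8 of 16 ways halves the
# allocatable L3: roughly 2 MB/core instead of 4 MB/core on Genoa 9654.
domains = ";".join(f"{i}=ff" for i in range(12))   # 12 CCDs on the 9654
with open(os.path.join(GROUP, "schemata"), "w") as f:
    f.write(f"L3:{domains}\n")

# Move the workload into the restricted group by PID.
with open(os.path.join(GROUP, "tasks"), "w") as f:
    f.write("12345")                       # hypothetical PID
</code></pre>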
    <div>
      <h3>Putting it all together</h3>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p>The table below summarizes how each factor from the sensitivity analysis above contributes to the overall performance gain. An additional 6% to 14% of the performance improvement is unaccounted for, contributed by other factors like larger L2 cache, higher memory bandwidth, and miscellaneous CPU architecture changes that improve IPC.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span><strong>Milan</strong></span></span></p>
                        <p><span><span><strong>7713</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Genoa</strong></span></span></p>
                        <p><span><span><strong>9654</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Bergamo</strong></span></span></p>
                        <p><span><span><strong>9754</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Genoa-X</strong></span></span></p>
                        <p><span><span><strong>9684X</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Lab simulation performance multiplier</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.2x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.95x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.75x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Performance multiplier due to Core scaling</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.5x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.5x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Performance multiplier due to Frequency scaling</strong></span></span></p>
                        <p><span><span><strong>(*Note: Milan 7713 all core frequency is ~2.4GHz when running simulated workload at 100% CPU utilization)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.32x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.21x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.29x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Performance multiplier due to L3 cache size scaling</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>0.76x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.25x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Performance multiplier due to other factors like larger L2 cache, higher memory bandwidth, miscellaneous CPU architecture changes that improve IPC</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.11x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.06x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.14x</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div>
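<p>As a sanity check, the per-factor multipliers should compose (multiply together) to roughly the overall lab simulation multiplier. A quick sketch using the values in the table above:</p><pre><code># Per-factor multipliers from the table above should compose
# (multiply together) to the overall lab simulation multiplier.
factors = {                      # (cores, frequency, L3, other)
    "Genoa 9654":    (1.50, 1.32, 1.00, 1.11),
    "Bergamo 9754":  (2.00, 1.21, 0.76, 1.06),
    "Genoa-X 9684X": (1.50, 1.29, 1.25, 1.14),
}
measured = {"Genoa 9654": 2.20, "Bergamo 9754": 1.95, "Genoa-X 9684X": 2.75}

for cpu, (cores, freq, l3, other) in factors.items():
    product = cores * freq * l3 * other
    print(f"{cpu}: {product:.2f}x composed vs {measured[cpu]:.2f}x measured")
# All three compose to within about 1% of the measured multiplier.
</code></pre>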
    <div>
      <h2>Performance evaluation in production</h2>
      <a href="#performance-evaluation-in-production">
        
      </a>
    </div>
    <p>How do these CPU candidates perform with real-world traffic and an actual production workload mix? The table below summarizes the performance of the three CPUs in lab simulation and in production. Genoa-X 9684X continues to outperform in production.</p><p>In addition, the Gen 12 server equipped with Genoa-X offered outstanding performance while consuming only 1.5x the per-system power of our Gen 11 server with Milan 7713. In other words, 2.45x the requests per second at 1.5x the power works out to a 63% increase in performance per watt. Genoa-X 9684X provides the best TCO improvement among the three options, and was ultimately chosen as the CPU for our Gen 12 server.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span><strong>Milan 7713</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Genoa 9654</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Bergamo 9754</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Genoa-X 9684X</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Lab simulation performance multiplier</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.2x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.95x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.75x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Production performance multiplier</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.15x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>2.45x</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Production performance per watt multiplier</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>1x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.33x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.38x</span></span></p>
                    </td>
                    <td>
                        <p><span><span>1.63x</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p>The Gen 12 server with AMD Genoa-X 9684X is the most powerful and the most power efficient server Cloudflare has built to date. It serves as the underlying platform for all the incredible services that Cloudflare offers to our customers globally, and will help power the growth of Cloudflare infrastructure for the next several years with improved cost structure. </p><p>Hardware engineers at Cloudflare work closely with our infrastructure engineering partners and externally with our vendors to design and develop world-class servers to best serve our customers. </p><p><a href="https://www.cloudflare.com/careers/jobs/"><u>Come join us</u></a> at Cloudflare to help build a better Internet!</p> ]]></content:encoded>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <guid isPermaLink="false">1Sj8x3oCQRSZqU8oNgMmkE</guid>
            <dc:creator>JQ Lau</dc:creator>
            <dc:creator>Syona Sarma</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s 12th Generation servers — 145% more performant and 63% more efficient]]></title>
            <link>https://blog.cloudflare.com/gen-12-servers/</link>
            <pubDate>Wed, 25 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is thrilled to announce the general deployment of our next generation of server — Gen 12 powered by AMD Genoa-X processors. This new generation of server focuses on delivering exceptional performance across all Cloudflare services, enhanced support for AI/ML workloads, significant strides in power efficiency, and improved security features. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare is thrilled to announce the general deployment of our next generation of servers — Gen 12 powered by AMD EPYC 9684X (code name “Genoa-X”) processors. This next generation focuses on delivering exceptional performance across all Cloudflare services, enhanced support for AI/ML workloads, significant strides in power efficiency, and improved security features.</p><p>Here are some key performance indicators and feature improvements that this generation delivers as compared to the <a href="https://blog.cloudflare.com/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/"><u>prior generation</u></a>: </p><p>Beginning with performance, with close engineering collaboration between Cloudflare and AMD on optimization, Gen 12 servers can serve more than twice as many requests per second (RPS) as Gen 11 servers, resulting in lower Cloudflare infrastructure build-out costs.</p><p>Next, our power efficiency has improved significantly, by more than 60% in RPS per watt as compared to the prior generation. As Cloudflare continues to expand our infrastructure footprint, the improved efficiency helps reduce Cloudflare’s operational expenditure and carbon footprint as a percentage of our fleet size.</p><p>Third, in response to the growing demand for AI capabilities, we've updated the thermal-mechanical design of our Gen 12 server to support more powerful GPUs. This aligns with the <a href="https://www.cloudflare.com/lp/pg-ai/?utm_medium=cpc&amp;utm_source=google&amp;utm_campaign=2023-q4-acq-gbl-developers-wo-ge-general-paygo_mlt_all_g_search_bg_exp__dev&amp;utm_content=workers-ai&amp;gad_source=1&amp;gclid=CjwKCAjwl6-3BhBWEiwApN6_kjigJdDvEYqHPYi8tdXuTe4APbqX923v-CBjpGiAVwITNhp8GrW3ARoCyJ4QAvD_BwE&amp;gclsrc=aw.ds"><u>Workers AI</u></a> objective to support larger large language models and increase throughput for smaller models. This enhancement underscores our ongoing commitment to advancing AI inference capabilities.</p><p>Fourth, to underscore our security-first position as a company, we've integrated hardware <a href="https://trustedcomputinggroup.org/about/what-is-a-root-of-trust-rot/"><u>root of trust</u></a> (HRoT) capabilities to ensure the integrity of boot firmware and board management controller firmware. Continuing to embrace open standards, the baseboard management and security controller (Data Center Secure Control Module or <a href="https://drive.google.com/file/d/13BxuseSrKo647hjIXjp087ei8l5QQVb0/view"><u>OCP DC-SCM</u></a>) that we’ve designed into our systems is modular and vendor-agnostic, enabling a unified <a href="https://www.openbmc.org/"><u>openBMC</u></a> image, quicker prototyping, and component reuse.</p><p>Finally, given the increasing importance of supply assurance and reliability in infrastructure deployments, our approach includes a robust multi-vendor strategy to mitigate supply chain risks, ensuring continuity and resiliency of our infrastructure deployment.</p><p>Cloudflare is dedicated to constantly improving our server fleet, empowering businesses worldwide with enhanced performance, efficiency, and security.</p>
    <div>
      <h2>Gen 12 Servers </h2>
      <a href="#gen-12-servers">
        
      </a>
    </div>
    <p>Let's take a closer look at our Gen 12 server. The server is powered by a 4th generation AMD EPYC Processor, paired with 384 GB of DDR5 RAM, 16 TB of NVMe storage, a dual-port 25 GbE NIC, and two 800 watt power supply units.</p>
<div><table><thead>
  <tr>
    <th><span>Generation</span></th>
    <th><span>Gen 12 Compute</span></th>
    <th><span>Previous Gen 11 Compute</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Form Factor</span></td>
    <td><span>2U1N - Single socket</span></td>
    <td><span>1U1N - Single socket</span></td>
  </tr>
  <tr>
    <td><span>Processor</span></td>
    <td><span>AMD EPYC 9684X Genoa-X 96-Core Processor</span></td>
    <td><span>AMD EPYC 7713 Milan 64-Core Processor</span></td>
  </tr>
  <tr>
    <td><span>Memory</span></td>
    <td><span>384GB of DDR5-4800</span><br /><span>x12 memory channel</span></td>
    <td><span>384GB of DDR4-3200</span><br /><span>x8 memory channel</span></td>
  </tr>
  <tr>
    <td><span>Storage</span></td>
    <td><span>x2 E1.S NVMe</span><br /><span>Samsung PM9A3 7.68TB / Micron 7450 Pro 7.68TB</span></td>
    <td><span>x2 M.2 NVMe</span><br /><span>2x Samsung PM9A3 x 1.92TB</span></td>
  </tr>
  <tr>
    <td><span>Network</span></td>
    <td><span>Dual 25 GbE OCP 3.0 </span><br /><span>Intel Ethernet Network Adapter E810-XXVDA2 / NVIDIA Mellanox ConnectX-6 Lx</span></td>
    <td><span>Dual 25 GbE OCP 2.0</span><br /><span>Mellanox ConnectX-4 dual-port 25G</span></td>
  </tr>
  <tr>
    <td><span>System Management</span></td>
    <td><span>DC-SCM 2.0</span><br /><span>ASPEED AST2600 (BMC) + AST1060 (HRoT)</span></td>
    <td><span>ASPEED AST2500 (BMC)</span></td>
  </tr>
  <tr>
    <td><span>Power Supply</span></td>
    <td><span>800W - Titanium Grade</span></td>
    <td><span>650W - Titanium Grade</span></td>
  </tr>
</tbody></table></div>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ywinOgSFpevEcQSZLhESv/b61d70a1504b4d873d0bbf2e83221bf6/BLOG-2116_2.png" />
          </figure><p><sup><i>Cloudflare Gen 12 server</i></sup></p>
    <div>
      <h3>CPU</h3>
      <a href="#cpu">
        
      </a>
    </div>
    <p>During the design phase, we conducted an extensive survey of the CPU landscape to understand the options available for shaping Cloudflare's server technology to match the needs of our customers. We evaluated many candidates in the lab, and short-listed three standout candidates from the 4th generation AMD EPYC processor lineup for production evaluation: Genoa 9654, Bergamo 9754, and Genoa-X 9684X. The table below summarizes the differences in <a href="https://www.amd.com/content/dam/amd/en/documents/products/epyc/epyc-9004-series-processors-data-sheet.pdf"><u>specifications</u></a> of the short-listed candidates for Gen 12 servers against the AMD EPYC 7713 used in our Gen 11 servers. Notably, all three candidates offer a significant increase in core count and a marked increase in all-core boost clock frequency.</p>
<div><table><thead>
  <tr>
    <th><span>CPU Model</span></th>
    <th><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span>AMD EPYC 7713</span></a></th>
    <th><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span>AMD EPYC 9654</span></a></th>
    <th><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span>AMD EPYC 9754</span></a></th>
    <th><a href="https://www.amd.com/en/products/specifications/server-processor.html"><span>AMD EPYC 9684X</span></a></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Series</span></td>
    <td><span>Milan</span></td>
    <td><span>Genoa</span></td>
    <td><span>Bergamo</span></td>
    <td><span>Genoa-X</span></td>
  </tr>
  <tr>
    <td><span># of CPU Cores</span></td>
    <td><span>64</span></td>
    <td><span>96</span></td>
    <td><span>128</span></td>
    <td><span>96</span></td>
  </tr>
  <tr>
    <td><span># of Threads</span></td>
    <td><span>128</span></td>
    <td><span>192</span></td>
    <td><span>256</span></td>
    <td><span>192</span></td>
  </tr>
  <tr>
    <td><span>Base Clock</span></td>
    <td><span>2.0 GHz</span></td>
    <td><span>2.4 GHz</span></td>
    <td><span>2.25 GHz</span></td>
    <td><span>2.4 GHz</span></td>
  </tr>
  <tr>
    <td><span>Max Boost Clock</span></td>
    <td><span>3.67 GHz</span></td>
    <td><span>3.7 GHz</span></td>
    <td><span>3.1 GHz</span></td>
    <td><span>3.7 GHz</span></td>
  </tr>
  <tr>
    <td><span>All Core Boost Clock</span></td>
    <td><span>2.7 GHz *</span></td>
    <td><span>3.55 GHz</span></td>
    <td><span>3.1 GHz</span></td>
    <td><span>3.42 GHz</span></td>
  </tr>
  <tr>
    <td><span>Total L3 Cache</span></td>
    <td><span>256 MB</span></td>
    <td><span>384 MB</span></td>
    <td><span>256 MB</span></td>
    <td><span>1152 MB</span></td>
  </tr>
  <tr>
    <td><span>L3 cache per core</span></td>
    <td><span>4MB / core</span></td>
    <td><span>4MB / core</span></td>
    <td><span>2MB / core</span></td>
    <td><span>12MB / core</span></td>
  </tr>
  <tr>
    <td><span>Maximum configurable TDP</span></td>
    <td><span>240W</span></td>
    <td><span>400W</span></td>
    <td><span>400W</span></td>
    <td><span>400W</span></td>
  </tr>
</tbody></table></div><p><sub>*Note: AMD EPYC 7713 all core boost clock frequency of 2.7 GHz is not an official specification of the CPU, but is based on data collected from the Cloudflare production fleet.</sub></p><p>During production evaluation, the configurations of all three CPUs were optimized to the best of our knowledge, including thermal design power (TDP) configured to 400W for maximum performance. The servers are set up to run the same processes and services as any other server we have in production, which makes for a great side-by-side comparison. </p>
<div><table><thead>
  <tr>
    <th></th>
    <th><span>Milan 7713</span></th>
    <th><span>Genoa 9654</span></th>
    <th><span>Bergamo 9754</span></th>
    <th><span>Genoa-X 9684X</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Production performance (request per second) multiplier</span></td>
    <td><span>1x</span></td>
    <td><span>2x</span></td>
    <td><span>2.15x</span></td>
    <td><span>2.45x</span></td>
  </tr>
  <tr>
    <td><span>Production efficiency (request per second per watt) multiplier</span></td>
    <td><span>1x</span></td>
    <td><span>1.33x</span></td>
    <td><span>1.38x</span></td>
    <td><span>1.63x</span></td>
  </tr>
</tbody></table></div>
    <div>
      <h4>AMD EPYC Genoa-X in Cloudflare Gen 12 server</h4>
      <a href="#amd-epyc-genoa-x-in-cloudflare-gen-12-server">
        
      </a>
    </div>
    <p>Each of these CPUs outperforms the previous generation of processors by at least 2x. AMD EPYC 9684X Genoa-X with 3D V-cache technology gave us the greatest performance improvement, at 2.45x, when compared against our Gen 11 servers with AMD EPYC 7713 Milan.</p><p>Comparing Genoa-X 9684X against Genoa 9654, we see a ~22.5% performance delta. The primary difference between the two CPUs is the amount of L3 cache on the die: Genoa-X 9684X has 1152 MB of L3 cache, three times the 384 MB on Genoa 9654. Cloudflare workloads benefit from having more last-level cache accessible, avoiding the much larger latency penalty of fetching data from memory.</p><p>The Genoa-X 9684X delivered that ~22.5% performance improvement while consuming the same 400W as the Genoa 9654. The 3x larger L3 cache does consume additional power, but only at the cost of about 3% of the highest achievable all-core boost frequency on Genoa-X 9684X, a favorable trade-off for Cloudflare workloads.</p><p>More importantly, the Genoa-X 9684X delivered a 145% performance improvement with only a 50% system power increase, a 63% power efficiency improvement that will help drive down operational expenditure tremendously. It is important to note that even though a big portion of the power efficiency is due to the CPU, it needs to be paired with an optimal thermal-mechanical design to realize the full benefit. Last year, <a href="https://blog.cloudflare.com/cloudflare-gen-12-server-bigger-better-cooler-in-a-2u1n-form-factor/"><u>we made the thermal-mechanical design choice to double the height of the server chassis to optimize rack density and cooling efficiency across our global data centers</u></a>. We estimated that moving from 1U to 2U would reduce fan power by 150W, which would decrease system power from 750 watts to 600 watts. Guess what? We were right: a Gen 12 server consumes 600 watts per system at a typical ambient temperature of 25°C.</p><p>While high performance often comes at a higher price, the AMD EPYC 9684X fortunately offers an excellent balance between cost and capability. A server designed with this CPU provides top-tier performance without a huge financial outlay, resulting in a solid Total Cost of Ownership improvement for Cloudflare.</p>
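<p>For the curious, the arithmetic behind those headline figures works out as follows:</p><pre><code># The arithmetic behind the headline figures above.
milan, genoa, genoa_x = 1.00, 2.00, 2.45   # production RPS multipliers
system_power_increase = 1.50               # Gen 12 vs Gen 11 system power

print(f"Genoa-X vs Genoa delta: {genoa_x / genoa - 1:.1%}")              # 22.5%
print(f"RPS improvement vs Gen 11: {genoa_x / milan - 1:.0%}")           # 145%
print(f"Perf/watt improvement: {genoa_x / system_power_increase - 1:.0%}")  # 63%
</code></pre>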
    <div>
      <h3>Memory</h3>
      <a href="#memory">
        
      </a>
    </div>
    <p>The AMD Genoa-X CPU supports twelve channels of DDR5 memory at up to 4800 megatransfers per second (MT/s), for a per-socket memory bandwidth of 460.8 GB/s (a quick check of the channel math appears at the end of this section). The twelve channels are fully populated with 32 GB ECC 2Rx8 DDR5 RDIMMs in a one-DIMM-per-channel configuration, for a combined total memory capacity of 384 GB. </p><p>Choosing the optimal memory capacity is a balancing act: maintaining an optimal memory-to-core ratio matters so that neither CPU capacity nor memory capacity is wasted. Some may remember that our Gen 11 servers with 64-core AMD EPYC 7713 CPUs are also configured with 384 GB of memory, which is about 6 GB per core. So why did we choose to configure our Gen 12 servers with 384 GB of memory when the core count is growing to 96 cores? Great question! A lot of memory optimization work has happened since we introduced Gen 11, including some that we blogged about, like <a href="https://blog.cloudflare.com/scalable-machine-learning-at-cloudflare/"><u>Bot Management code optimization</u></a> and <a href="https://blog.cloudflare.com/how-we-built-pingora-the-proxy-that-connects-cloudflare-to-the-internet/"><u>our transition to highly efficient Pingora</u></a>. In addition, each service has a memory allocation that is sized for optimal performance; the per-service allocation is programmed and monitored using Linux control group resource management features. When sizing memory capacity for Gen 12, we consulted the team that monitors resource allocation and surveyed memory utilization metrics collected from our fleet. The analysis concluded that the optimal memory-to-core ratio is 4 GB per CPU core, or 384 GB of total memory capacity, and this configuration has been validated in production. We chose dual-rank memory modules over single-rank modules because they have higher memory throughput, which improves server performance (read more about <a href="https://blog.cloudflare.com/ddr4-memory-organization-and-how-it-affects-memory-bandwidth/"><u>memory module organization and its effect on memory bandwidth</u></a>). </p><p>The table below shows the result of running the <a href="https://www.intel.com/content/www/us/en/developer/articles/tool/intelr-memory-latency-checker.html"><u>Intel Memory Latency Checker (MLC)</u></a> tool to measure peak system memory bandwidth and to compare memory throughput between 12 channels of dual-rank (2Rx8) 32 GB DIMMs and 12 channels of single-rank (1Rx4) 32 GB DIMMs. Dual-rank DIMMs have slightly higher (1.8%) read bandwidth, but noticeably higher write bandwidth: as the write ratio increases from 25% to 50%, the dual-rank advantage grows by about 10 percentage points.</p>
<div><table><thead>
  <tr>
    <th><span>Benchmark</span></th>
    <th><span>Dual rank advantage over single rank</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Intel MLC ALL Reads</span></td>
    <td><span>101.8%</span></td>
  </tr>
  <tr>
    <td><span>Intel MLC 3:1 Reads-Writes</span></td>
    <td><span>107.7%</span></td>
  </tr>
  <tr>
    <td><span>Intel MLC 2:1 Reads-Writes</span></td>
    <td><span>112.9%</span></td>
  </tr>
  <tr>
    <td><span>Intel MLC 1:1 Reads-Writes</span></td>
    <td><span>117.8%</span></td>
  </tr>
  <tr>
    <td><span>Intel MLC Stream-triad like</span></td>
    <td><span>108.6%</span></td>
  </tr>
</tbody></table></div><p>The table below shows the result of running the <a href="https://www.amd.com/en/developer/zen-software-studio/applications/spack/stream-benchmark.html"><u>AMD STREAM benchmark</u></a> to measure sustainable main memory bandwidth in MB/s and the corresponding computation rate for simple vector kernels. In all 4 types of vector kernels, dual rank DIMMs provide a noticeable advantage over single rank DIMMs.</p>
<div><table><thead>
  <tr>
    <th><span>Benchmark</span></th>
    <th><span>Dual rank advantage over single rank</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Stream Copy</span></td>
    <td><span>115.44%</span></td>
  </tr>
  <tr>
    <td><span>Stream Scale</span></td>
    <td><span>111.22%</span></td>
  </tr>
  <tr>
    <td><span>Stream Add</span></td>
    <td><span>109.06%</span></td>
  </tr>
  <tr>
    <td><span>Stream Triad</span></td>
    <td><span>107.70%</span></td>
  </tr>
</tbody></table></div>
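<p>For readers unfamiliar with STREAM, the Triad kernel boils down to the loop in the sketch below. This is a minimal single-threaded illustration, not the configuration we benchmarked with: the official benchmark runs the same access pattern across many threads to saturate all twelve memory channels.</p>
            <pre><code>
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;time.h&gt;

/* Minimal STREAM Triad-style kernel: a[i] = b[i] + scalar * c[i].
   Each iteration reads two doubles and writes one, so the loop moves
   3 * N * sizeof(double) bytes through the memory subsystem. */
#define N (1L &lt;&lt; 25)   /* 32Mi doubles per array (~256 MB each), large enough to defeat caches */

int main(void) {
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;

    for (long i = 0; i &lt; N; i++) { b[i] = 1.0; c[i] = 2.0; }

    const double scalar = 3.0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &amp;t0);
    for (long i = 0; i &lt; N; i++)
        a[i] = b[i] + scalar * c[i];
    clock_gettime(CLOCK_MONOTONIC, &amp;t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes = 3.0 * N * sizeof(double);
    printf("Triad: %.2f GB/s (a[0] = %f)\n", bytes / secs / 1e9, a[0]);

    free(a); free(b); free(c);
    return 0;
}
    </code></pre>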
    <div>
      <h3>Storage</h3>
      <a href="#storage">
        
      </a>
    </div>
    <p>Cloudflare’s Gen X and Gen 11 servers use <a href="https://en.wikipedia.org/wiki/M.2"><u>M.2</u></a> form factor drives. We liked M.2 mainly because it is compact. But the specification dates back to 2012, the connector system is showing its age, and the industry has concerns about its ability to maintain signal integrity at the high-speed signaling rates defined by the <a href="https://www.xda-developers.com/pcie-5/"><u>PCIe 5.0</u></a> and <a href="https://pcisig.com/pci-express-6.0-specification"><u>PCIe 6.0</u></a> specifications. The 8.25 W thermal limit of the M.2 form factor also limits the number of flash dies that can be fitted, which caps the maximum supported capacity per drive. To address these concerns, the industry has introduced the <a href="https://americas.kioxia.com/content/dam/kioxia/en-us/business/ssd/data-center-ssd/asset/KIOXIA_Meta_Microsoft_EDSFF_E1_S_Intro_White_Paper.pdf"><u>E1.S</u></a> specification and is transitioning from the M.2 form factor to E1.S. </p><p>In Gen 12, we are making the change to the <a href="https://www.snia.org/forums/cmsi/knowledge/formfactors#EDSFF"><u>EDSFF</u></a> E1 form factor, specifically E1.S 15mm. While still compact, E1.S 15mm provides more room to fit flash dies, supporting larger capacities, and its improved cooling design supports more than 25 W of sustained power.</p><p>While the AMD Genoa-X CPU supports 128 PCIe 5.0 lanes, we continue to use NVMe devices with PCIe 4.0 x4 interfaces, as PCIe 4.0 throughput is sufficient to meet drive bandwidth requirements while keeping server design costs optimal. The server is equipped with two 8 TB NVMe drives for a total of 16 TB of available storage. We opted for two 8 TB drives instead of four 4 TB drives because the dual-drive configuration already provides sufficient I/O bandwidth for all Cloudflare workloads that run on each server.</p>
<div><table><thead>
  <tr>
    <th><span>Specification</span></th>
    <th><span>Value</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Sequential Read (MB/s)</span></td>
    <td><span>6,700</span></td>
  </tr>
  <tr>
    <td><span>Sequential Write (MB/s)</span></td>
    <td><span>4,000</span></td>
  </tr>
  <tr>
    <td><span>Random Read IOPS</span></td>
    <td><span>1,000,000</span></td>
  </tr>
  <tr>
    <td><span>Random Write IOPS</span></td>
    <td><span>200,000</span></td>
  </tr>
  <tr>
    <td><span>Endurance</span></td>
    <td><span>1 DWPD</span></td>
  </tr>
  <tr>
    <td><span>PCIe Gen 4.0 x4 lane throughput (MB/s)</span></td>
    <td><span>7,880</span></td>
  </tr>
</tbody></table></div><p><sub><i>Storage device performance specifications</i></sub></p>
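<p>As a sanity check on "PCIe 4.0 x4 is enough", the usable link bandwidth can be estimated from first principles: 16 GT/s per lane, 128b/130b line encoding, four lanes. The short C sketch below arrives at approximately the 7,880 MB/s figure quoted in the table; it is illustrative arithmetic only and ignores the additional few percent lost to packet-level protocol overhead on a real link.</p>
            <pre><code>
#include &lt;stdio.h&gt;

/* Back-of-the-envelope check that a PCIe 4.0 x4 link covers the drive's
   sequential read bandwidth. Nominal figures; real links lose a few more
   percent to TLP/DLLP packet overhead. */
int main(void) {
    const double gt_per_lane = 16.0;          /* PCIe 4.0 raw rate per lane, GT/s */
    const double encoding    = 128.0 / 130.0; /* 128b/130b line encoding */
    const int    lanes       = 4;
    const double drive_read  = 6700.0;        /* sequential read from the spec table, MB/s */

    double link_mb_s = gt_per_lane * encoding * lanes * 1000.0 / 8.0;
    printf("PCIe 4.0 x4 payload bandwidth: ~%.0f MB/s\n", link_mb_s);
    printf("Headroom over the drive's %.0f MB/s: ~%.0f MB/s\n",
           drive_read, link_mb_s - drive_read);
    return 0;
}
    </code></pre>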
    <div>
      <h3>Network</h3>
      <a href="#network">
        
      </a>
    </div>
    <p>Cloudflare servers and top-of-rack (ToR) network equipment operate at <a href="https://en.wikipedia.org/wiki/25_Gigabit_Ethernet"><u>25 GbE</u></a> speeds. In Gen 12, we adopted a <a href="https://www.opencompute.org/wiki/Server/DC-MHS"><u>DC-MHS</u></a>-inspired motherboard design and upgraded from the <a href="https://drive.google.com/file/d/1VGAtABAKU9fq3KfClYhFOgGFN3oe63Uw/view?usp=sharing"><u>OCP 2.0 form factor</u></a> to the <a href="https://drive.google.com/file/d/1U3oEGiSWfupG4SnIdPuJ_8Nte2lJRqTN/view?usp=sharing"><u>OCP 3.0 form factor</u></a>, which provides tool-less serviceability of the NIC. The OCP 3.0 form factor also occupies less space in the 2U server than PCIe-attached NICs, which improves airflow and frees up space for other application-specific PCIe cards, such as GPUs.</p><p>Cloudflare has been using the Mellanox CX4-Lx EN dual-port 25 GbE NIC since our <a href="https://blog.cloudflare.com/a-tour-inside-cloudflares-g9-servers/"><u>Gen 9 servers in 2018</u></a>. Even though the NIC has served us well over the years, it left us single-sourced. During the pandemic, we faced supply constraints and extremely long lead times, and the team scrambled to qualify the Broadcom M225P dual-port 25 GbE NIC as our second-sourced NIC in 2022, ensuring we could continue to turn up servers to serve customer demand. With the lessons learned from single-sourcing the Gen 11 NIC, we are dual-sourcing from the start and have chosen the Intel Ethernet Network Adapter E810 and NVIDIA Mellanox ConnectX-6 Lx for Gen 12. Both NICs are compliant with the <a href="https://www.opencompute.org/wiki/Server/NIC"><u>OCP 3.0 specification</u></a> and offer more MSI-X queues, which can be mapped to the increased core count of the AMD EPYC 9684X. The Intel adapter comes with an additional advantage: full Generic Segmentation Offload (GSO) support, including VLAN-tagged encapsulated traffic, whereas many vendors today either support only <a href="https://netdevconf.info/1.2/papers/LCO-GSO-Partial-TSO-MangleID.pdf"><u>Partial GSO</u></a> or none at all. With full GSO support, the kernel spends noticeably less time in softirq segmenting packets, and servers with Intel E810 NICs process approximately 2% more requests per second.</p>
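<p>Whether GSO is active on a given interface can be checked with <code>ethtool -k</code>; the hedged sketch below shows the same query from C through the ethtool ioctl interface. It only reads the kernel's generic-segmentation-offload flag (hardware-specific capabilities such as tx-gso-partial show up as separate feature flags), it is not Cloudflare tooling, and the interface name is a placeholder.</p>
            <pre><code>
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;
#include &lt;unistd.h&gt;
#include &lt;sys/ioctl.h&gt;
#include &lt;sys/socket.h&gt;
#include &lt;net/if.h&gt;
#include &lt;linux/ethtool.h&gt;
#include &lt;linux/sockios.h&gt;

/* Query whether generic segmentation offload is enabled on an interface,
   the same bit reported by `ethtool -k eth0`. "eth0" is a placeholder. */
int main(void) {
    struct ethtool_value ev = { .cmd = ETHTOOL_GGSO };
    struct ifreq ifr;

    memset(&amp;ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&amp;ev;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd &lt; 0 || ioctl(fd, SIOCETHTOOL, &amp;ifr) &lt; 0) {
        perror("SIOCETHTOOL");
        return 1;
    }
    printf("generic-segmentation-offload: %s\n", ev.data ? "on" : "off");
    close(fd);
    return 0;
}
    </code></pre>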
    <div>
      <h3>Improved security with DC-SCM: Project Argus</h3>
      <a href="#improved-security-with-dc-scm-project-argus">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ur3YccqqXckIL6oWKd6Lq/5352252ff8e5c1fb15eb02d1572a0689/BLOG-2116_3.png" />
          </figure><p><sup><i>DC-SCM in Gen 12 server (Project Argus)</i></sup></p><p>Gen 12 servers integrate <a href="https://blog.cloudflare.com/introducing-the-project-argus-datacenter-ready-secure-control-module-design-specification/"><u>Project Argus</u></a>, one of the industry’s first implementations of the <a href="https://drive.google.com/file/d/13BxuseSrKo647hjIXjp087ei8l5QQVb0/view"><u>Data Center Secure Control Module 2.0 (DC-SCM 2.0)</u></a> specification. DC-SCM 2.0 decouples server management and security functions from the motherboard: the baseboard management controller (BMC), hardware root of trust (HRoT), trusted platform module (TPM), and dual BMC/BIOS flash chips are all installed on the DC-SCM. </p><p>On our Gen X and Gen 11 servers, Cloudflare moved our secure boot trust anchor from the system Basic Input/Output System (BIOS) or Unified Extensible Firmware Interface (UEFI) firmware to hardware-rooted boot integrity: <a href="https://blog.cloudflare.com/anchoring-trust-a-hardware-secure-boot-story/"><u>AMD’s implementation of Platform Secure Boot (PSB)</u></a> or <a href="https://blog.cloudflare.com/armed-to-boot/"><u>Ampere’s implementation of Single Domain Secure Boot</u></a>. These solutions helped secure Cloudflare’s infrastructure against BIOS/UEFI firmware attacks, but they left us vulnerable to out-of-band attacks that compromise the BMC firmware. The BMC is a microcontroller that provides out-of-band monitoring and management capabilities for the system; if it is compromised, attackers can, for example, read processor console logs accessible to the BMC and control server power states. On Gen 12, the HRoT on the DC-SCM serves as the trust store for cryptographic keys and is responsible for authenticating both the BIOS/UEFI firmware (independent of CPU vendor) and the BMC firmware during the secure boot process.</p><p>The DC-SCM also carries additional flash storage devices that hold backup BIOS/UEFI and BMC firmware images, allowing rapid recovery if corrupted or malicious firmware is programmed and providing resilience against flash chip failure due to aging.</p><p>These updates make our Gen 12 server more secure and more resilient to firmware attacks.</p>
    <div>
      <h3>Power</h3>
      <a href="#power">
        
      </a>
    </div>
    <p>A Gen 12 server consumes 600 watts at a typical ambient temperature of 25°C. Even though this is a 50% increase over the 400 watts consumed by a Gen 11 server, as mentioned in the CPU section above, it is a relatively small price to pay for a 145% increase in performance. We’ve paired the server with dual 800W common redundant power supplies (CRPS) with 80 PLUS Titanium grade efficiency. Both power supply units (PSUs) operate actively, sharing the power load and current, and are hot-pluggable, allowing the server to operate with redundancy and maximize uptime.</p><p><a href="https://www.clearesult.com/80plus/program-details"><u>80 PLUS</u></a> is a PSU efficiency certification program. A Titanium grade PSU is 2% more efficient than a Platinum grade PSU across the typical operating load range of 25% to 50%. 2% may not sound like a lot, but considering the size of the Cloudflare fleet, with servers deployed worldwide, a 2% saving over the lifetime of all Gen 12 deployments is a reduction of more than 7 GWh, <a href="https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator#results"><u>equivalent to the carbon sequestered by more than 3400 acres of U.S. forests in one year</u></a>. This upgrade also means our Gen 12 server complies with <a href="https://www.unicomengineering.com/blog/eu-lot-9-update-the-coming-server-power-migration/"><u>EU Lot 9 requirements</u></a> and can be deployed in the EU region.</p>
<div><table><thead>
  <tr>
    <th><span>80 PLUS certification</span></th>
    <th><span>10%</span></th>
    <th><span>20%</span></th>
    <th><span>50%</span></th>
    <th><span>100%</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>80 PLUS Platinum</span></td>
    <td><span>-</span></td>
    <td><span>92%</span></td>
    <td><span>94%</span></td>
    <td><span>90%</span></td>
  </tr>
  <tr>
    <td><span>80 PLUS Titanium</span></td>
    <td><span>90%</span></td>
    <td><span>94%</span></td>
    <td><span>96%</span></td>
    <td><span>91%</span></td>
  </tr>
</tbody></table></div>
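<p>To put the 2% figure in perspective, the sketch below walks through the arithmetic. The fleet size and service lifetime used here are hypothetical placeholders chosen only to illustrate how a small efficiency delta compounds into gigawatt-hours; they are not Cloudflare’s actual deployment figures.</p>
            <pre><code>
#include &lt;stdio.h&gt;

/* Rough model of how a two-point PSU efficiency gain becomes fleet-level
   energy savings. All inputs are hypothetical placeholders. */
int main(void) {
    const double servers        = 10000;  /* hypothetical Gen 12 deployment size */
    const double watts_per_node = 600.0;  /* typical per-server draw, from the Power section */
    const double years          = 6.0;    /* hypothetical service lifetime */
    const double eff_platinum   = 0.94;   /* efficiency at ~50% load, from the 80 PLUS table */
    const double eff_titanium   = 0.96;

    double hours   = years * 365.0 * 24.0;
    double wall_pt = watts_per_node / eff_platinum * servers * hours; /* Wh drawn from the wall */
    double wall_ti = watts_per_node / eff_titanium * servers * hours;
    printf("Energy saved over the deployment lifetime: %.1f GWh\n",
           (wall_pt - wall_ti) / 1e9);
    return 0;
}
    </code></pre>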
    <div>
      <h3>Drop-in GPU support</h3>
      <a href="#drop-in-gpu-support">
        
      </a>
    </div>
    <p>Demand for machine learning and AI workloads exploded in 2023, and Cloudflare <a href="https://blog.cloudflare.com/workers-ai/"><u>introduced Workers AI</u></a> to serve the needs of our customers. Cloudflare retrofitted or deployed GPUs worldwide in a portion of our Gen 11 server fleet to support the growth of Workers AI. Our Gen 12 server is also designed to accommodate the addition of more powerful GPUs. This gives Cloudflare the flexibility to support Workers AI in all regions of the world, and to strategically place GPUs in regions to reduce inference latency for our customers. With this design, the server can run Cloudflare’s full software stack: during times when GPUs see lower utilization, the server continues to serve general web requests and remains productive.</p><p>The motherboard is electrically designed to support up to two PCIe add-in cards, and the power distribution board is sized to supply an additional 400W. The mechanical design accommodates either a single FHFL (full height, full length) double-width GPU PCIe card or two FHFL single-width GPU PCIe cards. The thermal solution, including component placement, fans, and air duct design, is sized to cool drop-in GPUs with a TDP of up to 400W.</p>
    <div>
      <h3>Looking to the future</h3>
      <a href="#looking-to-the-future">
        
      </a>
    </div>
    <p>Gen 12 servers are currently deployed and live in multiple Cloudflare data centers worldwide, and already process millions of requests per second. Cloudflare’s EPYC journey has not ended: the 5th-generation AMD EPYC CPUs (code name “Turin”) are already available for testing, and we are very excited to start the architecture planning and design discussions for the Gen 13 server. <a href="https://www.cloudflare.com/careers/jobs/"><u>Come join us</u></a> at Cloudflare to help build a better Internet!</p>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Hardware]]></category>
            <guid isPermaLink="false">sdvPBDBwhcEcrODVeOE7A</guid>
            <dc:creator>JQ Lau</dc:creator>
            <dc:creator>Ma Xiong</dc:creator>
            <dc:creator>Syona Sarma</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Gen 12 Server: bigger, better, cooler in a 2U1N form factor]]></title>
            <link>https://blog.cloudflare.com/cloudflare-gen-12-server-bigger-better-cooler-in-a-2u1n-form-factor/</link>
            <pubDate>Fri, 01 Dec 2023 18:45:57 GMT</pubDate>
            <description><![CDATA[ Cloudflare Gen 12 Compute servers are moving to 2U1N form factor to optimize the thermal design to accommodate both high-power CPUs (>350W) and GPUs effectively while maintaining performance and reliability ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4k4pAqN8lhelYK7bHj3j2n/853ec5b25a2e62e08b85f611d0f76eb6/image5.png" />
            
            </figure><p>Two years ago, Cloudflare undertook a significant upgrade to our compute server hardware as we deployed our cutting-edge <a href="/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/">11th Generation server fleet</a>, based on AMD EPYC Milan x86 processors. It's nearly time for another refresh to our x86 infrastructure, with deployment planned for 2024. This involves upgrading not only the processor itself, but many of the server's components. It must be able to accommodate the GPUs that drive inference on <a href="/workers-ai/">Workers AI</a>, and leverage the latest advances in memory, storage, and security. Every aspect of the server is rigorously evaluated — including the server form factor itself.</p><p>One crucial variable always in consideration is temperature. The latest generations of x86 processors have yielded significant leaps forward in performance, with the tradeoff of higher power draw and heat output. In this post we will explore this trend, and how it informed our decision to adopt a new physical footprint for our next-generation fleet of servers.</p><p>In preparation for the upcoming refresh, we conducted an extensive survey of the x86 CPU landscape. AMD recently introduced its latest offerings: Genoa, Bergamo, and Genoa-X, featuring the power of their innovative Zen 4 architecture. At the same time, Intel unveiled Sapphire Rapids as part of its 4th Generation Intel Xeon Scalable Processor Platform, code-named “Eagle Stream”, showcasing their own advancements. These options offer valuable choices as we consider how to shape the future of Cloudflare's server technology to match the needs of our customers.</p><p>A continuing challenge we face across x86 CPU vendors, including the new Intel and AMD silicon, is the rapidly increasing CPU Thermal Design Power (TDP) generation over generation. TDP is defined as the maximum heat dissipated by the CPU under load that a cooling system should be designed to cool; it also describes the maximum power consumption of the CPU socket. This plot shows the CPU TDP trend of each hardware server generation since 2014:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Ae6NrHId3uKhuS6mIhW7F/055e89fef13a710b3cc45607841de8ec/image4.png" />
            
            </figure><p>At Cloudflare, our <a href="/a-tour-inside-cloudflares-g9-servers/">Gen 9 server</a> was based on Intel Skylake 6162 with a TDP of 150W, our <a href="/cloudflares-gen-x-servers-for-an-accelerated-future/">Gen 10 server</a> was based on AMD Rome 7642 at 240W, and our <a href="/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/">Gen 11 server</a> was based on AMD Milan 7713 at 240W. Today, the <a href="https://www.amd.com/system/files/documents/epyc-9004-series-processors-data-sheet.pdf">AMD EPYC 9004 Series SKU stack</a> default TDP goes up to 360W and is configurable up to 400W, and the <a href="https://ark.intel.com/content/www/us/en/ark/products/codename/126212/products-formerly-sapphire-rapids.html#@Server">Intel Sapphire Rapids SKU stack</a> default TDP goes up to 350W. This trend of rising TDP is expected to continue with the next generation of x86 CPU offerings.</p>
    <div>
      <h2>Designing multi-generational cooling solutions</h2>
      <a href="#designing-multi-generational-cooling-solutions">
        
      </a>
    </div>
    <p>Cloudflare Gen 10 and Gen 11 servers were designed in a 1U1N form factor, with air cooling to maximize rack density (1U means the server form factor is 1 Rack Unit, which is 1.75” in height or thickness; 1N means there is one server node per chassis). However, cooling a CPU with a TDP above 350W with air in a 1U1N form factor requires the fans to spin at a 100% duty cycle (running all the time, at max speed). A single fan running at full speed consumes about 40W, and a typical server configuration of 7–8 dual rotor fans per server can hit 280–320W to power the fans alone. At peak loads, the total system power consumed, including the cooling fans, processor, and other components, can eclipse 750 watts per server.</p><p>The 1U form factor can fit a maximum of eight 40mm dual rotor fans, which sets an upper bound on the temperature range it can support. We first take into account ambient room temperature, which we assume to be 40°C (the maximum expected temperature under normal conditions). Under these conditions, we determined that air-cooled servers, with all eight fans running at 100% duty cycle, can support CPUs with a maximum TDP of 400W.</p>
    <div>
      <h3>We have a problem!</h3>
      <a href="#we-have-a-problem">
        
      </a>
    </div>
    <p>We do have a variety of SKU options available on each CPU generation, and if power is the primary constraint, we could choose to limit the TDP and use a lower core count, low-power SKU. To evaluate this, the hardware team ran a synthetic workload benchmark in the lab across several CPU SKUs. We found that Cloudflare services continue to scale effectively with cores up to 128 cores or 256 hardware threads, resulting in significant performance gains, and Total Cost of Ownership (TCO) benefits, at and above 360W TDP.</p><p>However, while the performance metric and TCO metric look good on a per-server basis, this is only part of the story: servers go into a server rack when they are deployed, and server racks come with constraints and limitations that have to be taken into design consideration. The two limiting factors are rack power budget and rack height. Taking these two rack-level constraints into account, how does the combined Total Cost of Ownership (TCO) benefit scale with TDP? We ran a performance sweep across the configurable TDP range of the highest core count CPUs and noticed that rack-level TCO benefit stagnates when CPU TDP rises above roughly 340W.</p><p>TCO advantage stagnates because we hit our rack power budget limit: the incremental performance gain per server, coinciding with an incremental increase of CPU TDP above 340W, is negated by the reduction in the number of servers that can be installed in a rack to remain within the rack’s power budget. Even with CPU TDP capped at 340W, we are still underutilizing the rack, with 30% of the space still available.</p><p>Thankfully, there is an alternative to power capping and compromising on possible performance gains: increasing the chassis height to a 2U form factor (from 1.75” in height to 3.5” in height). The benefits of doing this include:</p><ul><li><p>Larger fans (up to 80mm) that can move more air</p></li><li><p>Allowing for a taller and larger heatsink that can dissipate heat more effectively</p></li><li><p>Less air impedance within the chassis since the majority of components are 1U height</p></li><li><p>Providing sufficient room to add PCIe attached accelerators / GPUs, including dual-slot form factor options</p></li></ul><p>2U chassis design is nothing new, and is actually very common in the industry for various reasons, one of which is better airflow to dissipate more heat, but it does come with the tradeoff of taking up more space and limiting the number of servers that can be installed in a rack. Since we are power constrained instead of space constrained, the tradeoff did not negatively impact our design.</p><p>Thermal simulations provided by Cloudflare vendors showed that 4x 60mm fans or 4x 80mm fans at less than 40 watts per fan are sufficient to cool the system. That is a theoretical saving of at least 150 watts compared to 8x 40mm fans in a 1U design, which would result in significant Operational Expenditure (OPEX) savings and a boost to TCO improvement. Switching to a 2U form factor also gives us the benefit of fully utilizing our rack power budget and our rack space, and provides ample room for the addition of PCIe attached accelerators / GPUs, including dual-slot form factor options. The simple model sketched below illustrates the rack-level arithmetic.</p>
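<p>The rack-level trade-off described above can be captured with a few lines of arithmetic. The sketch below is a toy model: the rack power budget, non-CPU power figure, and per-node performance scaling are illustrative placeholders rather than our actual numbers, but they show why rack throughput stops improving once per-node power starts pushing nodes out of the rack.</p>
            <pre><code>
#include &lt;stdio.h&gt;

/* Toy model: more watts per node means fewer nodes per rack, so per-node
   gains above a certain TDP stop helping at the rack level.
   All numbers below are illustrative placeholders. */
int main(void) {
    const double rack_budget_w = 15000.0;  /* hypothetical rack power budget */
    const double other_power_w = 300.0;    /* hypothetical non-CPU power per node (fans, DRAM, SSDs, NIC) */

    const double tdp_w[] = { 280, 320, 360, 400 };     /* CPU TDP operating points */
    const double perf[]  = { 1.00, 1.07, 1.12, 1.15 }; /* hypothetical per-node relative performance */

    for (int i = 0; i &lt; 4; i++) {
        double node_power    = tdp_w[i] + other_power_w;
        int    nodes_in_rack = (int)(rack_budget_w / node_power);
        printf("TDP %3.0fW: %2d nodes per rack, relative rack throughput %.2f\n",
               tdp_w[i], nodes_in_rack, nodes_in_rack * perf[i]);
    }
    return 0;
}
    </code></pre>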
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>It might seem counter-intuitive, but our observations indicate that growing the server chassis and using more space per node actually increases rack density and improves the overall TCO benefit over previous generation deployments, since it allows for a better thermal design. We are very happy with the result of this technical readiness investigation, and are actively working on validating our Gen 12 Compute servers and launching them into production soon. Stay tuned for more details on our Gen 12 designs.</p><p>If you are excited about helping build a better Internet, come join us: <a href="https://www.cloudflare.com/careers/jobs/">we are hiring</a>!</p>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <guid isPermaLink="false">2aWmwl401SPH5eZNo2wHMg</guid>
            <dc:creator>JQ Lau</dc:creator>
            <dc:creator>Syona Sarma</dc:creator>
        </item>
        <item>
            <title><![CDATA[Measuring Hyper-Threading and Turbo Boost]]></title>
            <link>https://blog.cloudflare.com/measuring-hyper-threading-and-turbo-boost/</link>
            <pubDate>Tue, 05 Oct 2021 12:58:35 GMT</pubDate>
            <description><![CDATA[ Contemporary x86 processors implement some variants of Hyper-Threading and Turbo Boost. We decided to learn about the implications of these two technologies. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We often put together experiments that measure hardware performance to improve our understanding and provide insights to our hardware partners. We recently wanted to know more about Hyper-Threading and Turbo Boost. The last time we assessed these two technologies was when we were still <a href="/a-tour-inside-cloudflares-g9-servers/">deploying the Intel Xeons</a> (Skylake/Purley), but beginning with our Gen X servers we <a href="/technical-details-of-why-cloudflare-chose-amd-epyc-for-gen-x-servers/">switched over to the AMD EPYC</a> (Zen 2/Rome). This blog is about our latest attempt at quantifying the performance impact of Hyper-Threading and Turbo Boost on our AMD-based servers running our software stack.</p><p>Intel briefly introduced Hyper-Threading with NetBurst (Northwood) back in 2002, then reintroduced Hyper-Threading six years later with Nehalem along with Turbo Boost. AMD presented their own implementation of these technologies with Zen in 2017, but AMD’s version of Turbo Boost actually dates back to AMD K10 (Thuban), in 2010, when it used to be called Turbo Core. Since Zen, Hyper-Threading and Turbo Boost are known as simultaneous multithreading (SMT) and Core Performance Boost (CPB), respectively. The underlying implementation of Hyper-Threading and Turbo Boost differs between the two vendors, but the high-level concept remains the same.</p><p>Hyper-Threading or simultaneous multithreading creates a second hardware thread within a processor’s core, also known as a logical core, by duplicating various parts of the core to support the context of a second application thread. The two hardware threads execute simultaneously within the core, across their dedicated and remaining shared resources. If neither hardware threads contend over a particular shared resource, then the throughput can be drastically increased.</p><p>Turbo Boost or Core Performance Boost opportunistically allows the processor to operate beyond its rated base frequency as long as the processor operates within guidelines set by Intel or AMD. Generally speaking, the higher the frequency, the faster the processor finishes a task.</p>
    <div>
      <h2>Simulated Environment</h2>
      <a href="#simulated-environment">
        
      </a>
    </div>
    
    <div>
      <h3>CPU Specification</h3>
      <a href="#cpu-specification">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/STcQJqLbUhlxNm7hYmEc7/b33ed9909e4c56c6db4eb7d8090447dc/image2-12.png" />
            
            </figure><p>Our <a href="/cloudflares-gen-x-servers-for-an-accelerated-future/">Gen X or 10th generation servers</a> are powered by the <a href="https://www.amd.com/en/products/cpu/amd-epyc-7642">AMD EPYC 7642</a>, based on the Zen 2 microarchitecture. The vast majority of Zen 2-based processors, along with the successor Zen 3 processors that our <a href="/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/">Gen 11 servers are based on</a>, support simultaneous multithreading and Core Performance Boost.</p><p>Similar to Intel’s Hyper-Threading, AMD implemented 2-way simultaneous multithreading. The AMD EPYC 7642 has 48 cores, and with simultaneous multithreading enabled it can execute 96 hardware threads simultaneously. Core Performance Boost allows the AMD EPYC 7642 to operate anywhere between 2.3 and 3.3 GHz, depending on the workload and the limitations imposed on the processor. With Core Performance Boost disabled, the processor operates at 2.3 GHz, the rated base frequency of the AMD EPYC 7642. We took our usual simulated traffic pattern of 10 KiB cached assets over HTTPS, <a href="/keepalives-considered-harmful/">provided by our performance team</a>, to generate a sustained workload that saturated the processor to 100% CPU utilization.</p>
    <div>
      <h3>Results</h3>
      <a href="#results">
        
      </a>
    </div>
    <p>After establishing a baseline with simultaneous multithreading and Core Performance Boost disabled, we started enabling one feature at a time. When we enabled Core Performance Boost, the processor operated near its peak turbo frequency, hovering between 3.2 to 3.3 GHz which is more than 39% higher than the base frequency. Higher operating frequency directly translated into 40% additional requests per second. We then disabled Core Performance Boost and enabled simultaneous multithreading. Similar to Core Performance Boost, simultaneous multithreading alone improved requests per second by 43%. Lastly, by enabling both features, we observed an 86% improvement in requests per second.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2NfKzNxMrKrtufGQuYjcHp/483e38c99e370940454158898f7fbe76/image4-8.png" />
            
            </figure><p>Latencies were generally lowered by either or both Core Performance Boost and simultaneous multithreading. While Core Performance Boost consistently maintained a lower latency than the baseline, simultaneous multithreading gradually took longer to process a request as it reached tail latencies. Though not depicted in the figure below, when we examined latencies beyond p9999, or the 99.99th percentile, simultaneous multithreading, even with the help of Core Performance Boost, increased latency by more than 150% over the baseline, presumably due to the two hardware threads contending over a shared resource within the core.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aLHnhtg8eOGWoUCBe2g98/2865b9db5d5746d47c510ea6e80eec8f/image7-4.png" />
            
            </figure>
    <div>
      <h2>Production Environment</h2>
      <a href="#production-environment">
        
      </a>
    </div>
    <p>Moving into production, since our <a href="https://radar.cloudflare.com/">traffic fluctuates throughout the day</a>, we took four identical Gen X servers and measured them in parallel during peak hours. The only changes we made to the servers were enabling and disabling simultaneous multithreading and Core Performance Boost to create a comprehensive test matrix. We conducted the experiment in two different regions to identify any anomalies and mismatching trends; the trends in both regions matched.</p><p>Before diving into the results, we should preface that the baseline server operated at a higher CPU utilization than the others. Every generation, our servers deliver a noticeable improvement in performance, so our load balancer, named Unimog, <a href="/unimog-cloudflares-edge-load-balancer/">sends a different number of connections to the target server based on its generation</a> to balance out the CPU utilization. When we disabled simultaneous multithreading and Core Performance Boost, the baseline server’s performance degraded to the point where Unimog encountered a “guard rail”, the lower limit on the requests sent to the server, and so its CPU utilization rose instead. Given that it operated at a higher CPU utilization, the baseline server processed more requests per second to meet the minimum performance threshold.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4f2ViyRNLst20bSKZ5I9fe/4fddc3014f888809e9b0a9f120234004/image8-5.png" />
            
            </figure>
    <div>
      <h3>Results</h3>
      <a href="#results">
        
      </a>
    </div>
    <p>Due to the skewed baseline, when Core Performance Boost was enabled, we only observed 7% additional requests per second. Next, simultaneous multithreading alone improved requests per second by 41%. Lastly, with both features enabled, we saw an 86% improvement in requests per second.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6E6GlehsjYmQmKnpjxgkut/cb7c54930c5d90c16d8ce45ecea429fe/image6-6.png" />
            
            </figure><p>Though we lack concrete baseline data, we can normalize requests per second by CPU utilization to approximate the improvement for each scenario. Once normalized, the estimated improvements in requests per second from Core Performance Boost and simultaneous multithreading were 36% and 80%, respectively. With both features enabled, requests per second improved by 136%.</p>
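<p>The normalization itself is just dividing each configuration’s requests per second by its measured CPU utilization before comparing against the baseline. The sketch below illustrates the calculation; the throughput and utilization numbers in it are made-up placeholders, not the production measurements.</p>
            <pre><code>
#include &lt;stdio.h&gt;

/* Normalize requests per second by CPU utilization so servers running at
   different load levels can be compared. The figures below are made-up
   placeholders, not the production measurements. */
struct sample { const char *name; double rps; double cpu_util; };

int main(void) {
    struct sample baseline  = { "baseline (SMT off, CPB off)", 100000, 0.90 };
    struct sample configs[] = {
        { "CPB only",  107000, 0.70 },
        { "SMT only",  141000, 0.70 },
        { "SMT + CPB", 186000, 0.70 },
    };

    double base_norm = baseline.rps / baseline.cpu_util;
    for (int i = 0; i &lt; 3; i++) {
        double norm = configs[i].rps / configs[i].cpu_util;
        printf("%-28s %+.0f%% normalized rps vs baseline\n",
               configs[i].name, (norm / base_norm - 1.0) * 100.0);
    }
    return 0;
}
    </code></pre>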
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6OGBo0BJ9POefn6afYw5Mc/97f445ce481673d379a7f52ec2907be6/image5-6.png" />
            
            </figure><p>Latency was not as interesting since the baseline server operated at a higher CPU utilization, and in turn, it produced a higher tail latency than we would have otherwise expected. All other servers maintained a lower latency due to their lower CPU utilization in conjunction with Core Performance Boost, simultaneous multithreading, or both.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JPkPm4EP6DXEGd99nSyVo/5adb7926ab3a25c02ecf4b7c5e74d14b/image1-10.png" />
            
            </figure><p>At this point, our experiment had not gone as we had planned. Our baseline was skewed, and we only got partially useful answers. However, we find experimenting to be important because we usually end up finding other helpful insights as well.</p><p>Let’s add power data. Since our baseline server was operating at a higher CPU utilization, we knew it was serving more requests and therefore consumed more power than it needed to. Enabling Core Performance Boost allowed the processor to run up to its peak turbo frequency, increasing power consumption by 35% over the skewed baseline. More interestingly, enabling simultaneous multithreading increased power consumption by only 7%. Combining Core Performance Boost with simultaneous multithreading resulted in a 58% increase in power consumption.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Qxyi4d9qWZDxc7F09w0bv/73b88a522a15b35d5796478fdb11b6de/image3-6.png" />
            
            </figure><p>AMD’s implementation of simultaneous multithreading appears to be power efficient as it achieves 41% additional requests per second while consuming only 7% more power compared to the skewed baseline. For completeness, using the data we have, we bridged performance and power together to obtain performance per watt to summarize power efficiency. We divided the non-normalized requests per second by power consumption to produce the requests per watt figure below. Our Gen X servers attained the best performance per watt by enabling just simultaneous multithreading.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ZY19SD0eKHUmUo6Zx3bCA/2b0250f752aea47f944a4c6d0b77bf5c/image9-4.png" />
            
            </figure>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>In our assessment of AMD’s implementation of Hyper-Threading and Turbo Boost, the original experiment we designed to measure requests per second and latency did not pan out as expected. As soon as we entered production, our baseline measurement was skewed due to the imbalance in CPU utilization and only partially reproduced our lab results.</p><p>We added power to the experiment and found other meaningful insights. By analyzing the performance and power characteristics of simultaneous multithreading and Core Performance Boost, we concluded that simultaneous multithreading could be a power-efficient mechanism to attain additional requests per second. Drawbacks of simultaneous multithreading include long tail latency that is currently curtailed by enabling Core Performance Boost. While the higher frequency enabled by Core Performance Boost provides latency reduction and more requests per second, we are more mindful that the increase in power consumption is quite significant.</p><p>Do you want to help shape the Cloudflare network? This blog was a glimpse of the work we do at Cloudflare. <a href="https://www.cloudflare.com/careers/">Come join us</a> and help complete the feedback loop for our developers and hardware partners.</p> ]]></content:encoded>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[Partners]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">6eb1jfSeTaXx6866LDX8Nl</guid>
            <dc:creator>Sung Park</dc:creator>
        </item>
        <item>
            <title><![CDATA[The EPYC journey continues to Milan in Cloudflare’s 11th generation Edge Server]]></title>
            <link>https://blog.cloudflare.com/the-epyc-journey-continues-to-milan-in-cloudflares-11th-generation-edge-server/</link>
            <pubDate>Tue, 31 Aug 2021 13:00:45 GMT</pubDate>
            <description><![CDATA[ At Cloudflare we aim to introduce a new server platform to our edge network every 12 to 18 months or so, to ensure that we keep up with the latest industry technologies and developments.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>When I was interviewing to join Cloudflare in 2014 as a member of the SRE team, we had just introduced our <a href="/a-tour-inside-cloudflares-latest-generation-servers/">generation 4 server</a>, and I was excited about the prospects. Since then, Cloudflare, the industry and I have all changed dramatically. The best thing about working for a rapidly growing company like Cloudflare is that as the company grows, new roles open up to enable career development. And so, having left the SRE team last year, I joined the recently formed hardware engineering team, a team that simply didn’t exist in 2014.</p><p>We aim to introduce a new server platform to our edge network every 12 to 18 months or so, to ensure that we keep up with the latest industry technologies and developments. We announced the <a href="/a-tour-inside-cloudflares-g9-servers/">generation 9 server</a> in October 2018 and we announced the <a href="/a-tour-inside-cloudflares-g9-servers/">generation 10 server</a> in February 2020. We consider this length of cycle optimal: short enough to stay nimble and take advantage of the latest technologies, but long enough to offset the time taken by our hardware engineers to test and validate the entire platform. When we are <a href="https://www.cloudflare.com/network/">shipping servers to over 200 cities</a> around the world with a variety of regulatory standards, it’s essential to get things right the first time.</p><p>We continually work with our silicon vendors to receive product roadmaps and stay on top of the latest technologies. Since mid-2020, the hardware engineering team at Cloudflare has been working on our generation 11 server.</p><p>Requests per Watt is one of our defining characteristics when testing new hardware and we use it to identify how much more efficient a new hardware generation is than the previous generation. We continually strive to reduce our operational costs and <a href="/the-climate-and-cloudflare/">power consumption reduction</a> is one of the most important parts of this. It’s good for the planet and we can fit more servers into a rack, reducing our physical footprint.</p><p>The design of these Generation 11 x86 servers has been in parallel with our efforts to design next-generation edge servers using the <a href="/arms-race-ampere-altra-takes-on-aws-graviton2/">Ampere Altra</a> Arm architecture. You can read more about our tests in a blog post by my colleague Sung and we will document our work on Arm at the edge in a subsequent blog post.</p><p>We evaluated Intel’s latest generation of “Ice Lake” Xeon processors. Although Intel’s chips were able to compete with AMD in terms of raw performance, the power consumption was several hundred watts higher per server - that’s enormous. This meant that Intel’s Performance per Watt was unattractive.</p><p>We previously described how we had deployed AMD EPYC 7642’s processors in our generation 10 server. This has 48 cores and is based on AMD’s 2nd generation EPYC architecture, code named Rome. For our generation 11 server, we evaluated 48, 56 and 64 core samples based on AMD’s 3rd generation EPYC architecture, code named Milan. We were interested to find that comparing the two 48 core processors directly, we saw a performance boost of several percent in the <a href="https://www.amd.com/en/events/epyc">3rd generation EPYC</a> architecture. 
We therefore had high hopes for the 56 core and 64 core chips.</p><p>So, based on the samples we received from our vendors and our subsequent testing, hardware from AMD and Ampere made the shortlist for our generation 11 server. On this occasion, we decided that Intel did not meet our requirements. However, it’s healthy that Intel and AMD compete and innovate in the x86 space and we look forward to seeing how Intel’s next generation shapes up.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ajXt7adRxT5V2FVBVw7IJ/56872f59f7e681093f6591fa6ef42840/IMG_4118.jpeg.jpeg" />
            
            </figure>
    <div>
      <h3>Testing and validation process</h3>
      <a href="#testing-and-validation-process">
        
      </a>
    </div>
    <p>Before we go on to talk about the hardware, I’d like to say a few words about the testing process we went through to validate our generation 11 servers.</p><p>As we elected to proceed with AMD chips, we were able to use our generation 10 servers as our Engineering Validation Test platform, with the only changes being the new silicon and updated firmware. We were able to perform these upgrades ourselves in our hardware validation lab.</p><p>Cloudflare’s network is built with commodity hardware and we source the hardware from multiple vendors, known as ODMs (Original Design Manufacturers), who build the servers to our specifications.</p><p>When you are working with bleeding edge silicon and experimental firmware, not everything is plain sailing. We worked with one of our ODMs to eliminate an issue which was causing the Linux kernel to panic on boot. Once resolved, we used a variety of synthetic benchmarking tools to verify the performance, including <a href="https://github.com/cloudflare/cf_benchmark">cf_benchmark</a>, as well as an internal tool which applies a synthetic load to our entire software stack.</p><p>Once we were satisfied, we ordered Design Validation Test samples, which were manufactured by our ODMs with the new silicon. We continued to test these and iron out the inevitable issues that arise when you are developing custom hardware. To ensure that performance matched our expectations, we used synthetic benchmarking to test the new silicon. We also began testing the samples in our production environment by gradually introducing customer traffic to them as confidence grew.</p><p>Once the issues were resolved, we ordered the Product Validation Test samples, which were again manufactured by our ODMs, taking into account the feedback obtained in the DVT phase. As these are intended to be production grade, we worked with the broader Cloudflare teams to deploy these units like a mass production order.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qZWpcqMs5De6LfHMCudjn/b7ca612abce657d53e51f2199ebbc7a9/20210722_113627.jpeg.jpeg" />
            
            </figure>
    <div>
      <h3>CPU</h3>
      <a href="#cpu">
        
      </a>
    </div>
    <p>Previously: AMD EPYC 7642 48-Core Processor<br>Now: AMD EPYC 7713 64-Core Processor</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6aIqXZIeXOANPykcbW8tJJ/bc57b2023460a731e92e8c2e9b924afd/IMG_4203.jpeg.jpeg" />
            
            </figure>
<div><table><thead>
  <tr>
    <th></th>
    <th><a href="https://www.amd.com/en/products/cpu/amd-epyc-7642"><span>AMD EPYC 7642</span></a></th>
    <th><a href="https://www.amd.com/en/products/cpu/amd-epyc-7643"><span>AMD EPYC 7643</span></a></th>
    <th><a href="https://www.amd.com/en/products/cpu/amd-epyc-7663"><span>AMD EPYC 7663 </span></a></th>
    <th><a href="https://www.amd.com/en/products/cpu/amd-epyc-7713"><span>AMD EPYC 7713</span></a></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Status</span></td>
    <td><span>Incumbent</span></td>
    <td><span>Candidate</span></td>
    <td><span>Candidate</span></td>
    <td><span>Candidate</span></td>
  </tr>
  <tr>
    <td><span>Core Count</span></td>
    <td><span>48</span></td>
    <td><span>48</span></td>
    <td><span>56</span></td>
    <td><span>64</span></td>
  </tr>
  <tr>
    <td><span>Thread Count</span></td>
    <td><span>96</span></td>
    <td><span>96</span></td>
    <td><span>112</span></td>
    <td><span>128</span></td>
  </tr>
  <tr>
    <td><span>Base Clock</span></td>
    <td><span>2.3GHz</span></td>
    <td><span>2.3GHz</span></td>
    <td><span>2.0GHz</span></td>
    <td><span>2.0GHz</span></td>
  </tr>
  <tr>
    <td><span>Max Boost Clock</span></td>
    <td><span>3.3GHz</span></td>
    <td><span>3.6GHz</span></td>
    <td><span>3.5GHz</span></td>
    <td><span>3.675GHz</span></td>
  </tr>
  <tr>
    <td><span>Total L3 Cache</span></td>
    <td><span>256MB</span></td>
    <td><span>256MB</span></td>
    <td><span>256MB</span></td>
    <td><span>256MB</span></td>
  </tr>
  <tr>
    <td><span>Default TDP</span></td>
    <td><span>225W</span></td>
    <td><span>225W</span></td>
    <td><span>240W</span></td>
    <td><span>225W </span></td>
  </tr>
  <tr>
    <td><span>Configurable TDP</span></td>
    <td><span>240W</span></td>
    <td><span>240W</span></td>
    <td><span>240W</span></td>
    <td><span>240W</span></td>
  </tr>
</tbody></table></div><p>In the table above, TDP refers to Thermal Design Power, a measure of the heat dissipated. All of the above processors have a configurable TDP (assuming the cooling solution is capable), giving more performance at the expense of increased power consumption. We tested all processors configured at their highest supported TDP.</p><p>The 64-core processors have 33% more cores than the 48-core processors, so you might hypothesize that we would see a corresponding 33% increase in performance, although our benchmarks saw slightly more modest gains. This is explained by the 64-core processors' lower base clock frequencies, which keep them within the same 225W power envelope.</p><p>In production testing, we found that the 64-core EPYC 7713 gave us around a 29% performance boost over the incumbent, whilst having similar power consumption and thermal properties.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3rYE9p57gOFlQ5xwL6D29t/aaf3624f32e1bf15ecf91db8c25dd96a/IMG_4196.jpeg.jpeg" />
            
            </figure>
    <div>
      <h2>Memory</h2>
      <a href="#memory">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Z86awdPPL6Yt3G5GRZP41/c76caa9c5cbf76f718e5b3da99a989b2/_MG_4100.jpeg.jpeg" />
            
            </figure><p>Previously: 256GB DDR4-2933<br>Now: 384GB DDR4-3200</p><p>Having made a decision about the processor, the next step was to determine the optimal amount of memory for our workload. We ran a series of experiments with our chosen EPYC 7713 processor and 256GB, 384GB and 512GB memory configurations. We started off by running synthetic benchmarks with tools such as <a href="https://www.cs.virginia.edu/stream/">STREAM</a> to ensure that none of the configurations performed unexpectedly poorly and to generate a baseline understanding of the performance.</p><p>After the synthetic benchmarks, we proceeded to test the various configurations with production workloads to empirically determine the optimal quantity. We use Prometheus and Grafana to gather and display a rich set of metrics from all of our servers so that we can monitor and spot trends, and we re-used the same infrastructure for our performance analysis.</p><p>As well as measuring available memory, previous experience has shown us that one of the best ways to ensure that we have enough memory is to observe request latency and disk IO performance. If there is insufficient memory, we expect request latency, disk IO volume, and disk IO latency to increase. The reason for this is that our core HTTP server uses memory to cache web assets; if there is insufficient memory, assets are ejected from memory prematurely and more assets are fetched from disk instead of memory, degrading performance.</p><p>Like most things in life, it’s a balancing act. We want enough memory to take advantage of the fact that serving web assets directly from memory is much faster than even the best NVMe disks. We also want to future-proof our platform to enable new features such as the ones that we recently announced in <a href="/tag/security-week/">security week</a> and <a href="/tag/developer-week/">developer week</a>. However, we don’t want to spend unnecessarily on excess memory that will never be used. We found that the 512GB configuration did not provide a performance boost to justify the extra cost and settled on the 384GB configuration.</p><p>We also tested the performance impact of switching from DDR4-2933 to DDR4-3200 memory. We found that it provided a performance boost of several percent, and the pricing has improved to the point where it is cost beneficial to make the change.</p>
    <div>
      <h2>Disk</h2>
      <a href="#disk">
        
      </a>
    </div>
    <p>Previously: 3x Samsung PM983 x 960GB<br>Now: 2x Samsung PM9A3 x 1.92TB</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1bzxh2LTLqrPAHlIWynt9t/10c8fe856bf9e3a6feed7a8624451c66/20210722_113829.jpeg.jpeg" />
            
            </figure><p>We validated samples by studying the manufacturer’s data sheets and <a href="https://fio.readthedocs.io/en/latest/fio_doc.html">testing using fio</a> to ensure that the results obtained in our test environment were in line with the published specifications. We also developed an automation framework to help compare different drive models using fio. The framework helps us restore the drives close to factory settings, precondition them, run the sequential and random tests in our environment, and analyze the data to evaluate bandwidth and latency. Since our SSD samples arrived in our test center in different months, the automated framework sped up evaluations by reducing the time we spent testing and analyzing results.</p><p>For Gen 11 we decided to move to a 2x 2TB configuration from the original 3x 1TB configuration, giving us an extra 1 TB of storage. This also meant we could use the higher performance of a 2TB drive and save around 6W of power, since there is one fewer SSD.</p><p>After analyzing the performance, latency, and endurance of various 2TB drives, we chose Samsung’s PM9A3 SSDs as our Gen 11 drives. The results we obtained below were consistent with the manufacturer's claims.</p><p>Sequential performance:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5yOK70l1XM5JJij5aeUHTQ/5653a3e28925e06407b88193780acd77/pasted-image-0-5.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nifB3rxB1gHWX6JBxBhfY/707192c1a778c46a15aff4b419d42c13/pasted-image-0--1--2.png" />
            
            </figure><p>Random Performance:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/A8p9s1IjM4UnmNmaMPx0T/358e01b8832c5e1ed7c1479a05c6cd5a/pasted-image-0--2-.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61rhIo4w8sSIYbUiGfHRn9/d07ed49c5b957e3b1dde1db58aa1e484/pasted-image-0--3-.png" />
            
            </figure><p>Compared to our previous generation drives, we saw a 1.5x - 2x improvement in read and write bandwidths. The higher values for the PM9A3 can be attributed to the fact that these are PCIe 4.0 drives with more intelligent SSD controllers and an upgraded NAND architecture.</p>
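<p>The measurements above come from fio and the automation around it. Purely to illustrate what a sequential-read bandwidth test boils down to, here is a minimal single-threaded C sketch using O_DIRECT; the device path is a placeholder, reading it requires root, and fio adds preconditioning, queue-depth control, and statistics on top of this.</p>
            <pre><code>
#define _GNU_SOURCE
#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;time.h&gt;
#include &lt;unistd.h&gt;

/* Minimal sequential-read probe with O_DIRECT, roughly what
   `fio --rw=read --bs=1M --direct=1` measures. "/dev/nvme0n1" is a
   placeholder device path. */
int main(void) {
    const char  *dev   = "/dev/nvme0n1";
    const size_t bs    = 1 &lt;&lt; 20;          /* 1 MiB per read */
    const size_t total = (size_t)4 &lt;&lt; 30;  /* read 4 GiB in total */

    int fd = open(dev, O_RDONLY | O_DIRECT);
    if (fd &lt; 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&amp;buf, 4096, bs)) return 1; /* O_DIRECT needs aligned buffers */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &amp;t0);
    size_t done = 0;
    while (done &lt; total) {
        ssize_t n = read(fd, buf, bs);
        if (n &lt;= 0) { perror("read"); return 1; }
        done += (size_t)n;
    }
    clock_gettime(CLOCK_MONOTONIC, &amp;t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("sequential read: %.0f MB/s\n", done / secs / 1e6);
    free(buf);
    close(fd);
    return 0;
}
    </code></pre>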
    <div>
      <h3>Network</h3>
      <a href="#network">
        
      </a>
    </div>
    <p>Previously: Mellanox ConnectX-4 dual-port 25G<br>Now: Mellanox ConnectX-4 dual-port 25G</p><p>There is no change on the network front; the Mellanox ConnectX-4 is a solid performer which continues to meet our needs. We investigated higher speed Ethernet, but we do not currently see this as beneficial. Cloudflare’s network is built on cheap commodity hardware and the highly distributed nature of Cloudflare’s network means we don’t have discrete DDoS scrubbing centres. All points of presence operate as scrubbing centres. This means that we distribute the load across our entire network and do not need to employ higher speed and more expensive Ethernet devices.</p>
    <div>
      <h3>Open source firmware</h3>
      <a href="#open-source-firmware">
        
      </a>
    </div>
    <p>Transparency, security and integrity are absolutely critical to us at Cloudflare. Last year, we described how we had <a href="/anchoring-trust-a-hardware-secure-boot-story/">deployed Platform Secure Boot</a> to create trust that we were running the software that we thought we were.</p><p>Now, we are pleased to announce that we are deploying open source firmware to our servers using OpenBMC. With access to the source code, we have been able to configure BMC features such as the fan PID controller, recording and exposing BIOS POST codes, and managing networking ports and devices. Prior to OpenBMC, requesting these features from our vendors led to varying results and misunderstandings of the scope and capabilities of the BMC. Working with the BMC source code directly gives us the flexibility to implement features ourselves to our liking, or to understand why the BMC is incapable of running our desired software.</p><p>Whilst our current BMC is an industry standard, we feel that OpenBMC better suits our needs and gives us advantages such as allowing us to deal with upstream security issues without a dependency on our vendors. Some opportunities with security include integration of desired authentication modules, usage of specific software packages, staying up to date with the latest Linux kernel, and controlling a variety of attack vectors. Because we have a kernel lockdown implemented, flashing tooling is difficult to use in our environment. With access to the source code of the flashing tools, we understand what the tools need access to and can assess whether that meets our standard of security.</p>
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/01wz0PNamn0NdOlkgxbgUs/104ce7600fe455376a0e39538c929882/IMG_4195.jpeg.jpeg" />
            
            </figure><p>The jump between our generation 9 and generation 10 servers was enormous. To summarise, we changed from a dual-socket Intel platform to a single-socket AMD platform. We upgraded the SATA SSDs to NVMe storage devices, and physically the multi-node chassis changed to a 1U form factor.</p><p>At the start of the generation 11 project we weren’t sure if we would be making such radical changes again. However, after thorough testing of the latest chips and a review of how well the generation 10 server has performed in production for over a year, our generation 11 server built upon the solid foundations of generation 10 and ended up as a refinement rather than a total revamp. Despite this, and bearing in mind that performance varies by time of day and geography, we are pleased that generation 11 is capable of serving approximately 29% more requests than generation 10 without an increase in power consumption.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1hWA14j4FrNG6KYDaHVFI4/30172744bf9b0d94a08c8a8abdb34b3f/Screenshot-2021-06-22-at-15.27.27.png" />
            
            </figure><p>Thanks to Denny Mathew and Ryan Chow for their work on benchmarking and OpenBMC, respectively.</p><p>If you are interested in working with bleeding edge hardware and open source server firmware, solving interesting problems, helping to improve our performance, and helping us work on our generation 12 server platform (amongst many other things!), <a href="https://www.cloudflare.com/careers/jobs/?department=Infrastructure&amp;location=default">we’re hiring</a>.</p>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[Hardware]]></category>
            <guid isPermaLink="false">1c9Tnv6ewSXOVymWrMDUzF</guid>
            <dc:creator>Chris Howells</dc:creator>
        </item>
        <item>
            <title><![CDATA[Branch predictor: How many "if"s are too many? Including x86 and M1 benchmarks!]]></title>
            <link>https://blog.cloudflare.com/branch-predictor/</link>
            <pubDate>Thu, 06 May 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ Is it ok to have if clauses that will basically never be run? Surely, there must be some performance cost to that... ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Some time ago I was looking at a hot section in our code and I saw this:</p>
            <pre><code>
if (debug) {
    log("...");
}
    </code></pre>
            <p>This got me thinking. This code is in a performance critical loop and it looks like a waste - we never run with the "debug" flag enabled<sup>[</sup><a href="#footnotes"><sup>1</sup></a><sup>].</sup> Is it ok to have <code>if</code> clauses that will basically never be run? Surely, there must be some performance cost to that...</p>
    <div>
      <h3>Just how bad is peppering the code with avoidable <code>if</code> statements?</h3>
      <a href="#just-how-bad-is-peppering-the-code-with-avoidable-if-statements">
        
      </a>
    </div>
    <p>Back in the day, the general rule was: a fully predictable branch has close to zero CPU cost.</p><p>To what extent is this true? If one branch is fine, then how about ten? A hundred? A thousand? When does adding one more <code>if</code> statement become a bad idea?</p><p>At some point the negligible cost of simple branch instructions surely adds up to a significant amount. As another example, a colleague of mine found this snippet in our production code:</p>
            <pre><code>
const char *getCountry(int cc) {
    if(cc == 1) return "A1";
    if(cc == 2) return "A2";
    if(cc == 3) return "O1";
    if(cc == 4) return "AD";
    if(cc == 5) return "AE";
    if(cc == 6) return "AF";
    if(cc == 7) return "AG";
    if(cc == 8) return "AI";
    ...
    if(cc == 252) return "YT";
    if(cc == 253) return "ZA";
    if(cc == 254) return "ZM";
    if(cc == 255) return "ZW";
    if(cc == 256) return "XK";
    if(cc == 257) return "T1";
    return "UNKNOWN";
}
        </code></pre>
            <p>Obviously, this code could be improved<sup>[</sup><a href="#footnotes"><sup>2</sup></a><sup>]</sup>. But when I thought about it more: <i>should</i> it be improved? Is there an actual performance hit for code that consists of a series of simple branches?</p>
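            <p>For illustration only - this is a hedged sketch of the obvious rewrite, not code from our repository - the chain of branches can be replaced with a bounds check plus a table lookup, leaving a single, easily predicted conditional branch on the hot path. The table is abbreviated to the mappings visible in the snippet above:</p>
            <pre><code>
static const char *country_table[258] = {
    [1] = "A1",   [2] = "A2",   [3] = "O1",   [4] = "AD",
    [5] = "AE",   [6] = "AF",   [7] = "AG",   [8] = "AI",
    /* ... entries 9..251 elided in this sketch ... */
    [252] = "YT", [253] = "ZA", [254] = "ZM", [255] = "ZW",
    [256] = "XK", [257] = "T1",
};

const char *getCountryLookup(int cc) {
    /* one bounds check instead of a ladder of 257 conditional branches */
    if (cc &lt; 1 || cc &gt; 257 || country_table[cc] == NULL)
        return "UNKNOWN";
    return country_table[cc];
}
    </code></pre>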
    <div>
      <h3>Understanding the cost of jump</h3>
      <a href="#understanding-the-cost-of-jump">
        
      </a>
    </div>
    <p>We must start our journey with a bit of theory. We want to figure out if the CPU cost of a branch increases as we add more of them. As it turns out, assessing the cost of a branch is not trivial. On modern processors it takes between one and twenty CPU cycles. There are at least four categories of control flow instructions<sup>[</sup><a href="#footnotes"><sup>3</sup></a><sup>]</sup>: unconditional branch (jmp on x86), call/return, conditional branch (e.g. je on x86) taken and conditional branch not taken. The taken branches are especially problematic: without special care they are inherently costly - we'll explain this in the following section. To bring down the cost, modern CPUs try to predict the future and figure out the branch <b>target</b> before the branch is actually fully executed! This is done in a special part of the processor called the branch predictor unit (BPU).</p><p>The branch predictor attempts to figure out the destination of a branching instruction very early and with very little context. This magic happens <b>before</b> the "decoder" pipeline stage and the predictor has very limited data available. It only has some past history and the address of the current instruction. If you think about it - this is super powerful. Given only the current instruction pointer, it can assess, with very high confidence, where the target of the jump will be.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1K3IOo5qkOOP0hhRtwu10U/e9997acf855e96044f2684e1adae87b0/pasted-image-0-1.png" />
            
            </figure><p>Source: <a href="https://en.wikipedia.org/wiki/Branch_predictor">https://en.wikipedia.org/wiki/Branch_predictor</a></p><p>The BPU maintains a couple of data structures, but today we'll focus on the Branch Target Buffer (BTB). It's a place where the BPU remembers the target instruction pointer of previously taken branches. The whole mechanism is much more complex; take a look at <a href="http://www.ece.uah.edu/~milenka/docs/VladimirUzelac.thesis.pdf">Vladimir Uzelac's Master's thesis</a> for details about branch prediction on CPUs from the 2008 era:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1FO1GCWdkQYR5I90qGopuH/9f51e10ca53eafd001f2238896103327/pasted-image-0--1-.png" />
            
            </figure><p>For the scope of this article we'll simplify and focus on the BTB only. We'll try to show how large it is and how it behaves under different conditions.</p>
    <div>
      <h3>Why is branch prediction needed?</h3>
      <a href="#why-is-branch-prediction-needed">
        
      </a>
    </div>
    <p>But first, why is branch prediction used at all? In order to get the best performance, the CPU pipeline must be fed a constant flow of instructions. Consider what happens to the multi-stage CPU pipeline on a branch instruction. To illustrate, let's consider the following ARM program:</p>
            <pre><code>
    BR label_a;
    X1
    ...
label_a:
    Y1
    </code></pre>
            <p>Assuming a simplistic CPU model, the operations would flow through the pipeline like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LYtd8rNXEQA2X2pOPwmvd/e9bbdd4d54a72dc78730684252d01a84/3P6PIWN6gPAdYzP8oDgrsaOMKgUmG51zIiFhbm071cZKM276S7vRb5atpTwlKrM1lFHRYsobw8P4e-Z9t1Vb9TGeutpBe2CkMrNGruWO8yb5Qz0vZ6Qn6RbOi5Tp.png" />
            
            </figure><p>In the first cycle the BR instruction is fetched. This is an unconditional branch instruction changing the execution flow of the CPU. At this point it's not yet decoded, but the CPU would like to fetch another instruction already! Without a branch predictor in cycle 2 the fetch unit either has to wait or simply continues to the next instruction in memory, hoping it will be the right one.</p><p>In our example, instruction X1 is fetched even though this isn't the correct instruction to run. In cycle 4, when the branch instruction finishes the execute stage, the CPU will be able to understand the mistake, and roll back the speculated instructions before they have any effect. At this point the fetch unit is updated to correctly get the right instruction - Y1 in our case.</p><p>This situation of losing a number of cycles due to fetching code from an incorrect place is called a "frontend bubble". Our theoretical CPU has a two-cycle frontend bubble when a branch target wasn’t predicted right.</p><p>In this example we see that, although the CPU does the right thing in the end, without good branch prediction it wasted effort on bad instructions. In the past, various techniques have been used to reduce this problem, such as static branch prediction and branch delay slots. But the dominant CPU designs today rely <a href="https://danluu.com/branch-prediction/#one-bit">on <i>dynamic branch prediction</i></a>. This technique is able to mostly avoid the frontend bubble problem, by predicting the correct address of the next instruction even for branches that aren’t fully decoded and executed yet.</p>
    <div>
      <h3>Playing with the BTB</h3>
      <a href="#playing-with-the-btb">
        
      </a>
    </div>
    <p>Today we're focusing on the BTB - a data structure managed by the branch predictor responsible for figuring out a target of a branch. It's important to note that the BTB is distinct from and independent of the system assessing if the branch was taken or not taken. Remember, we want to figure out if a cost of a branch increases as we run more of them.</p><p>Preparing an experiment to stress only the BTB is relatively simple (<a href="https://xania.org/201602/bpu-part-three">based on Matt Godbolt's work</a>). It turns out a sequence of unconditional <code>jmps</code> is totally sufficient. Consider this x86 code:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/259y50RQfLqaFcsT4tljGb/5010158442d4e2afb6ebb3c105ee4e5b/pasted-image-0--2-.png" />
            
            </figure><p>This code stresses the BTB to an extreme - it just consists of a chain of <code>jmp +2</code> statements (i.e. literally jumping to the next instruction). In order to avoid wasting cycles on frontend pipeline bubbles, each taken jump needs a BTB hit. This branch prediction must happen very early in the CPU pipeline, before instruction decode is finished. This same mechanism is needed for any taken branch, whether it's unconditional, conditional or a function call.</p><p>The code above was run inside a test harness that measures how many CPU cycles elapse for each instruction. For example, in this run we're measuring times of dense - every two bytes - 1024 jmp instructions one after another:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3QofFzm1kmqFLuRrFPoyjQ/857bfbfe366720bd480653388f97eac8/pasted-image-0--3-.png" />
            
            </figure><p>We’ll look at the results of experiments like this for a few different CPUs. But in this instance, it was run on a machine with an AMD EPYC 7642. Here, the cold run took 10.5 cycles per jmp, and then all subsequent runs took ~3.5 cycles per jmp. The code is prepared in such a way as to make sure it's the BTB that is slowing down the first run. Take a look at the full code; there is quite a bit of magic to warm up the L1 cache and iTLB without priming the BTB.</p><p><b>Top tip 1. On this CPU a branch instruction that is taken but not predicted costs ~7 cycles more than one that is taken and predicted.</b> Even if the branch was unconditional.</p>
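            <p>To make the shape of such an experiment concrete, here is a heavily simplified sketch of my own - not the harness from the linked repository, and without its L1/iTLB warm-up tricks - that JIT-emits a chain of two-byte <code>jmp</code> instructions, each targeting the very next instruction, and times a few passes with <code>rdtsc</code>. The first pass runs against a cold BTB; later passes should be noticeably cheaper per jump:</p>
            <pre><code>
#include &lt;stdio.h&gt;
#include &lt;stdint.h&gt;
#include &lt;sys/mman.h&gt;
#include &lt;x86intrin.h&gt;

#define JMPS 1024

int main(void) {
    uint8_t *buf = mmap(NULL, JMPS * 2 + 1,
                        PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    for (int i = 0; i &lt; JMPS; i++) {
        buf[2 * i]     = 0xEB;   /* jmp rel8                                  */
        buf[2 * i + 1] = 0x00;   /* offset 0: target is the next instruction  */
    }
    buf[2 * JMPS] = 0xC3;        /* final ret, back to the caller             */

    void (*run)(void) = (void (*)(void))buf;
    for (int pass = 0; pass &lt; 4; pass++) {
        uint64_t t0 = __rdtsc();
        run();
        uint64_t t1 = __rdtsc();
        printf("pass %d: %.2f cycles/jmp\n", pass, (double)(t1 - t0) / JMPS);
    }
    return 0;
}
    </code></pre>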
    <div>
      <h3>Density matters</h3>
      <a href="#density-matters">
        
      </a>
    </div>
    <p>To get a full picture we also need to think about the density of jmp instructions in the code. The code above did eight jmps per 16-byte code block. This is a lot. For example, the code below contains one jmp instruction in each block of 16 bytes. Notice that the <code>nop</code> opcodes are jumped over. The block size doesn't change the number of executed instructions, only the code density:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2uenA2zFAkPWQZldVX7gba/8823f9ec4052618dc06c21d0621a6581/pasted-image-0--4-.png" />
            
            </figure><p>Varying the jmp block size might be important. It allows us to control the placement of the jmp opcodes. Remember that the BTB is indexed by the instruction pointer address. Its value and its alignment might influence the placement in the BTB and help us reveal the BTB layout. Increasing the alignment will cause more nop padding to be added. I will call the sequence of a single measured instruction - jmp in this case - plus zero or more nops a "block", and its size the "block size". Notice that the larger the block size, the larger the working code size for the CPU. At larger values we might see some performance drop due to exhausting L1 cache space.</p>
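            <p>Extending the sketch above (again, an illustration rather than the benchmark's real generator), changing the block size only changes how the jumps are laid out in memory: each block holds one two-byte <code>jmp</code> whose offset skips the nop padding and lands on the next block's <code>jmp</code>, so the number of executed jumps stays the same while the code density varies.</p>
            <pre><code>
#include &lt;stdint.h&gt;
#include &lt;string.h&gt;

#define BLOCK 16   /* bytes per block: a 2-byte jmp plus 14 bytes of nops */
#define JMPS  1024 /* number of measured jmp instructions                 */

/* buf must point to at least JMPS * BLOCK + 1 bytes of executable memory */
static void emit_blocks(uint8_t *buf) {
    for (int i = 0; i &lt; JMPS; i++) {
        uint8_t *b = buf + (size_t)i * BLOCK;
        b[0] = 0xEB;                    /* jmp rel8                        */
        b[1] = BLOCK - 2;               /* skip the padding, hit next jmp  */
        memset(b + 2, 0x90, BLOCK - 2); /* nops that are jumped over       */
    }
    buf[(size_t)JMPS * BLOCK] = 0xC3;   /* final ret                       */
}
    </code></pre>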
    <div>
      <h3>The experiment</h3>
      <a href="#the-experiment">
        
      </a>
    </div>
    <p>Our experiment is crafted to show the performance drop depending on the number of branches, across different working code sizes. Hopefully, we will be able to prove the performance is mostly dependent on the number of blocks - and therefore the BTB size, and not the working code size.</p><p>See the <a href="https://github.com/cloudflare/cloudflare-blog/tree/master/2021-05-branch-prediction">code on GitHub</a>. If you want to see the generated machine code, though, you need to run a special command. It's created procedurally by the code, customized by passed parameters. Here's an example <code>gdb</code> incantation:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7aAKMXz8aruwblN67UvZd7/1eb2aea7cc6e2e3a9c0f6823f60b8f08/pasted-image-0--5-.png" />
            
            </figure><p>Let's take this experiment further: what if we took the best times of each run - with a fully primed BTB - for varying values of jmp block size and number of blocks (i.e. working set size)? Here you go:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ZNeolS38Q90vdHPvehvVa/64e0c16d163d74c05cfbaee6f8d9e15c/pasted-image-0--6-.png" />
            
            </figure><p>This is an astonishing chart. First, it's obvious something happens at the 4096 jmp mark[<a href="#footnotes">4</a>] regardless of how large the jmp block size is - that is, how many nops we skip over. Reading it aloud:</p><ul><li><p>On the far left, we see that if the amount of code is small enough - less than 2048 bytes (256 times a block of 8 bytes) - it's possible to hit some kind of uop/L1 cache and get ~1.5 cycles per fully predicted branch. This is amazing.</p></li><li><p>Otherwise, if you keep your hot loop to 4096 branches then, no matter how dense your code is, you are likely to see ~3.4 cycles per fully predicted branch.</p></li><li><p>Above 4096 branches the branch predictor gives up and the cost of each branch shoots up to ~10.5 cycles per jmp. This is consistent with what we saw above - an unpredicted branch on a flushed BTB took ~10.5 cycles.</p></li></ul><p>Great, so what does it mean? Well, you should avoid branch instructions if you want to avoid branch misses, because you have at most 4096 fast BTB slots. This is not very pragmatic advice, though - it's not like we deliberately put many unconditional <code>jmp</code>s in real code!</p><p>There are a couple of takeaways for the discussed CPU. I repeated the experiment with an always-taken conditional branch sequence and the resulting chart looks almost identical. The only difference is that the predicted-taken conditional je instruction is about 2 cycles slower than the unconditional jmp.</p><p>An entry is added to the BTB whenever a branch is "taken" - that is, when the jump actually happens. An unconditional "jmp" or an always-taken conditional branch will cost a BTB slot. To get the best performance, make sure you have no more than 4096 taken branches in the hot loop. The good news is that never-taken branches don't take space in the BTB. We can illustrate this with another experiment:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5gd2JKAwYMpgk5obd4E0Hz/29876ce9f45dc01edbb8e3966bcb9903/pasted-image-0--7-.png" />
            
            </figure><p>This boring code goes over a not-taken <code>jne</code> followed by two nops (block size=4). Armed with this test (jne never-taken), the previous one (jmp always-taken) and a conditional branch <code>je</code> always-taken, we can draw this chart:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1F2STBl6iw6OpkwqmIdlR9/1de627e490ddd75816bb3899f99f6ed7/pasted-image-0--8-.png" />
            
            </figure><p>First, unsurprisingly, we can see the conditional 'je always-taken' becomes slightly more costly than the simple unconditional <code>jmp</code>, but only after the 4096-branch mark. This makes sense: the conditional branch is resolved later in the pipeline, so the frontend bubble is longer. Then take a look at the blue line hovering near zero. This is the "jne never-taken" line, flat at 0.3 clocks / block no matter how many blocks we run in sequence. The takeaway is clear - you can have as many never-taken branches as you want, without incurring any cost. There isn't any spike at the 4096 mark, meaning the BTB is not used in this case. It seems a conditional jump that hasn't been seen before is predicted to be not-taken.</p><p><b>Top tip 2: conditional branches never-taken are basically free</b> - at least on this CPU.</p><p>So far we have established that always-taken branches occupy the BTB, while never-taken branches do not. How about other control flow instructions, like the <code>call</code>?</p><p>I haven't been able to find this in the literature, but it seems call/ret also need a BTB entry for best performance. I was able to illustrate this on our AMD EPYC. Let's take a look at this test:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fHArGj5TWvRM9AEM84NLJ/1951dfe42651b923551d90dddc63964f/pasted-image-0--9-.png" />
            
            </figure><p>This time we'll issue a number of <code>callq</code> instructions followed by <code>ret</code> - both of which should be fully predicted. The experiment is crafted so that each callq calls a unique function, to allow for retq prediction - each one returns to exactly one caller.</p>
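            <p>As a rough sketch of that layout (illustrative only, not the repository's generator), one can emit N five-byte <code>call</code> instructions back to back, each targeting its own one-byte function consisting of a single <code>ret</code>, so every return address is trivially predictable by the return stack:</p>
            <pre><code>
#include &lt;stdint.h&gt;
#include &lt;string.h&gt;
#include &lt;sys/mman.h&gt;

#define N 2048

/* Emits N back-to-back calls, each to its own ret-only callee, followed by
 * a final ret that hands control back to the C caller. */
static void (*emit_calls(void))(void) {
    size_t len = N * 5 + 1 + N;     /* N calls + final ret + N callees */
    uint8_t *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return NULL;

    for (int i = 0; i &lt; N; i++) {
        /* call rel32: displacement from the end of this call to callee i */
        int32_t rel = (int32_t)(N * 5 + 1 + i) - (int32_t)(i * 5 + 5);
        buf[i * 5] = 0xE8;
        memcpy(&amp;buf[i * 5 + 1], &amp;rel, 4);
    }
    buf[N * 5] = 0xC3;                 /* return to the C caller       */
    memset(&amp;buf[N * 5 + 1], 0xC3, N);  /* each unique callee: just ret */
    return (void (*)(void))buf;
}
    </code></pre>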
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/677H5BdxwzZ9B6hbebRFnB/a1ac1602d5efab51152a6219fde87579/pasted-image-0--10-.png" />
            
            </figure><p>This chart confirms the theory: no matter the code density - with the exception of the 64-byte block size, which is notably slower - the cost of a predicted call/ret starts to deteriorate after the 2048 mark. At this point the BTB is filled with call and ret predictions and can't handle any more data. This leads to an important conclusion:</p><p><b>Top tip 3. In the hot code you want to have fewer than 2K function calls</b> - on this CPU.</p><p>On our test CPU a sequence of fully predicted call/ret takes about 7 cycles, which is about the same as two unconditional predicted <code>jmp</code> opcodes. This is consistent with our results above.</p><p>So far we have thoroughly examined the AMD EPYC 7642. We started with this CPU because its branch predictor is relatively simple and the charts were easy to read. It turns out more recent CPUs give less clear-cut results.</p>
    <div>
      <h3>AMD EPYC 7713</h3>
      <a href="#amd-epyc-7713">
        
      </a>
    </div>
    <p>The newer AMD CPU is more complex than the previous generations. Let's run the two most important experiments. First, the <code>jmp</code> one:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/12g3Zg18eWyHIwyUuf63gj/0cb89d1fc7a8fedbed929aa455e11dbf/pasted-image-0--11-.png" />
            
            </figure><p>For the always-taken branch case we can see very good, sub-1-cycle timings when the number of branches doesn't exceed 1024 and the code isn't too dense.</p><p><b>Top tip 4. On this CPU it's possible to get &lt;1 cycle per predicted jmp when the hot loop fits in ~32KiB.</b></p><p>Then there is some noise starting after the 4096 jmp mark. This is followed by a complete drop in speed at about 6000 branches. This is in line with the theory that the BTB is 4096 entries long. We can speculate that some other prediction mechanism successfully kicks in beyond that and keeps performance up until the ~6K mark.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4X5KfDDT36u9u2MGb7iu7t/90c4fbf3d04cd386ed6eecae645f25f8/pasted-image-0--12-.png" />
            
            </figure><p>The call/ret chart tells a similar tale: the timings start breaking down after the 2048 mark, and the calls completely fail to be predicted beyond ~3000.</p>
    <div>
      <h3>Xeon Gold 6262</h3>
      <a href="#xeon-gold-6262">
        
      </a>
    </div>
    <p>The Intel Xeon looks different from the AMD:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7hIriKLopeE26yIor9qGG/9ab0d56fd18cfd4a782b7270845bc66f/pasted-image-0--13-.png" />
            
            </figure><p>Our test shows that a predicted taken branch costs 2 cycles. Intel has documented a clock penalty for very dense branching code - this explains the 4-byte block size line hovering at ~3 cycles. The branch cost breaks down at the 4096 jmp mark, confirming the theory that the Intel BTB can hold 4096 entries. The 64-byte block size chart looks confusing, but really isn't. The branch cost stays flat at 2 cycles up to the 512 jmp count. Then it increases. This is caused by the internal layout of the BTB, which is said to be 8-way associative. It seems that with the 64-byte block size we can utilize at most half of the 4096 BTB slots.</p><p><b>Top tip 5. On Intel avoid placing your jmp/call/ret instructions at regular 64-byte intervals.</b></p><p>Then the call/ret chart:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1XU4ofCRG3wfjLZlwsEhBv/2df1b19f6623a26be745caf35c28ee33/pasted-image-0--14-.png" />
            
            </figure><p>Similarly, we can see the branch predictions failing after the 2048 mark - in this experiment each block uses two flow control instructions: call and ret. This again confirms the BTB size of 4K entries. The 64-byte block size is generally slower due to the nop padding, but it also breaks down sooner due to the instruction alignment issue. Notice that we didn't see this effect on AMD.</p>
    <div>
      <h3>Apple Silicon M1</h3>
      <a href="#apple-silicon-m1">
        
      </a>
    </div>
    <p>So far we have seen examples of AMD and Intel server-grade CPUs. How does the Apple Silicon M1 fit into this picture?</p><p>We expect it to be very different - it's designed for mobile and it uses the ARM64 architecture. Let's see our two experiments:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/bJ9lzYsIARltRCkOEJVZZ/6b1b5edcdf00e3c0a3c353fff72c8232/pasted-image-0--15-.png" />
            
            </figure><p>The predicted <code>jmp</code> test tells an interesting story. First, when the code fits in 4096 bytes (1024*4 or 512*8, etc.) you can expect a predicted <code>jmp</code> to cost 1 clock cycle. This is an excellent score.</p><p>Beyond that, generally, you can expect a cost of 3 clock cycles per predicted jmp. This is also very good. This starts to deteriorate when the working code grows beyond ~192 KiB. This is visible with block size 64 breaking at the 3072 mark (3072*64 = 192 KiB), and with block size 32 at 6144 (6144*32 = 192 KiB). At this point the prediction seems to stop working. The documentation indicates that the M1 CPU has 192 KB of L1 instruction cache - our experiment matches that.</p><p>Let's compare the "predicted jmp" with the "unpredicted jmp" chart. Take this chart with a grain of salt, because flushing the branch predictor is notoriously difficult.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2VCtlgVdJkSB3MFuYbR5MQ/31b80611cab5b7a5ec14514dd375e7ea/pasted-image-0--16-.png" />
            
            </figure><p>However, even if we don't trust the flush-bpu code (<a href="https://xania.org/201602/bpu-part-three">adapted from Matt Godbolt</a>), this chart reveals two things. First, the "unpredicted" branch cost seems to be correlated with the branch distance: the longer the branch, the costlier it is. We haven't seen such behaviour on x86 CPUs.</p><p>Then there is the cost itself. We saw what a predicted sequence of branches costs, and what a supposedly-unpredicted jmp costs. In the first chart we saw that beyond ~192KiB of working code, the branch predictor seems to become ineffective. The supposedly-flushed BPU seems to show the same cost. For example, the cost of a 64-byte block size jmp with a small working set size is 3 cycles. A miss is ~8 cycles. For a large working set size both times are ~8 cycles. It seems that the BTB is linked to the L1 cache state. <a href="https://www.realworldtech.com/forum/?threadid=159985&amp;curpostid=160001">Paul A. Clayton suggested</a> the possibility of such a design back in 2016.</p><p><b>Top tip 6. On M1 a predicted-taken branch generally takes 3 cycles, and an unpredicted but taken branch has a varying cost depending on the jmp length. The BTB is likely linked with the L1 cache.</b></p><p>The call/ret chart is funny:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/54s7q6CJamAHCjpzDHFm6V/d1db62f4d8ec351ce539709cb6a512cd/pasted-image-0--17-.png" />
            
            </figure><p>As in the chart before, we can see a big benefit if the hot code fits within 4096 bytes (512*4 or 256*8). Otherwise, you can count on 4-6 cycles per call/ret sequence (or bl/ret, as it's known on ARM). The chart shows funny alignment issues; it's unclear what causes them. Beware: comparing the numbers in this chart with x86 is unfair, since the ARM <code>call</code> operation differs substantially from the x86 variant.</p><p>M1 seems pretty fast, with predicted branches usually at 3 clock cycles. Even unpredicted branches never cost more than 8 ticks in our benchmark. A call+ret sequence for dense code should fit under 5 cycles.</p>
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>We started our journey with a piece of trivial code and asked a basic question: how costly is adding a never-taken <code>if</code> branch in the hot portion of code?</p><p>Then we quickly dived into very low-level CPU features. By the end of this article, hopefully, an astute reader has a better intuition for how a modern branch predictor works.</p><p>On x86 the hot code needs to split the BTB budget between function calls and taken branches. The BTB holds only 4096 entries. There are strong benefits in keeping the hot code under 16KiB.</p><p>On the other hand, on M1 the BTB seems to be limited by the L1 instruction cache. If you're writing super hot code, ideally it should fit within 4KiB.</p><p>Finally, can you add that one more <code>if</code> statement? If it's never-taken, it's probably ok. I found no evidence that such branches incur any extra cost. But do avoid always-taken branches and function calls.</p><p><b>Sources</b></p><p>I'm not the first person to investigate how the BTB works. I based my experiments on:</p><ul><li><p><a href="http://www.ece.uah.edu/~milenka/docs/VladimirUzelac.thesis.pdf">Vladimir Uzelac's thesis</a></p></li><li><p><a href="https://xania.org/201602/bpu-part-three">Matt Godbolt's work</a>. The series has 5 articles.</p></li><li><p><a href="https://www.realworldtech.com/forum/?threadid=159985&amp;curpostid=159985">Travis Downs' BTB questions</a> on Real World Tech</p></li><li><p><a href="https://stackoverflow.com/questions/38811901/slow-jmp-instruction">various</a> <a href="https://stackoverflow.com/questions/51822731/why-did-intel-change-the-static-branch-prediction-mechanism-over-these-years">stackoverflow</a> <a href="https://stackoverflow.com/questions/38512886/btb-size-for-haswell-sandy-bridge-ivy-bridge-and-skylake">discussions</a>. Especially <a href="https://stackoverflow.com/questions/31280817/what-branch-misprediction-does-the-branch-target-buffer-detect">this one</a> and <a href="https://stackoverflow.com/questions/31642902/intel-cpus-instruction-queue-provides-static-branch-prediction">this</a></p></li><li><p><a href="https://www.agner.org/optimize/microarchitecture.pdf">Agner Fog</a>'s microarchitecture guide has a good section on branch prediction.</p></li></ul>
    <div>
      <h3>Acknowledgements</h3>
      <a href="#acknowledgements">
        
      </a>
    </div>
    <p>Thanks to <a href="/author/david-wragg/">David Wragg</a> and <a href="https://twitter.com/danluu">Dan Luu</a> for technical expertise and proofreading help.</p>
    <div>
      <h3>PS</h3>
      <a href="#ps">
        
      </a>
    </div>
    <p>Oh, oh. But this is not the whole story! Similar research was the basis of the <a href="https://spectreattack.com/spectre.pdf">Spectre v2</a> attack. The attack exploited the little-known fact that the BPU state was not cleared between context switches. With the correct technique it was possible to train the BPU - in the case of Spectre it was iBTB - and force a privileged piece of code to be speculatively executed. This, combined with a cache side-channel data leak, allowed an attacker to steal secrets from the privileged kernel. Powerful stuff.</p><p>A proposed solution was to avoid using a shared BTB. This can be done in two ways: make the indirect jumps always fail to predict, or fix the CPU to avoid sharing BTB state across isolation domains. This is a long story, maybe for another time...</p><hr /><p><a>Footnotes</a></p><p>1. One historical solution to this specific 'if debug' problem is called "runtime nop'ing". The idea is to modify the code at runtime and patch the never-taken branch instruction with a <code>nop</code>. For example, see the "ISENABLED" discussion on <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=370906">https://bugzilla.mozilla.org/show_bug.cgi?id=370906</a></p><p>2. Fun fact: modern compilers are pretty smart. New gcc (&gt;=11) and older clang (&gt;=3.7) are able to actually optimize it quite a lot. <a href="https://godbolt.org/z/KWYEW3d9s">See for yourself</a>. But, let's not get distracted by that. This article is about low-level machine code branch instructions!</p><p>3. This is a simplification. There are of course more control flow instructions, like: software interrupts, syscalls, VMENTER/VMEXIT.</p><p>4. Ok, I'm slightly overinterpreting the chart. Maybe the 4096 jmp mark is due to the 4096 uop cache or some instruction decoder artifact? To prove this spike is indeed BTB related, I looked at the Intel BPUCLEARS.EARLY and BACLEAR.CLEAR performance counters. Their values are small for block counts under 4096 and large for block counts greater than 5378. This is strong evidence that the performance drop is indeed caused by the BPU and likely the BTB.</p>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">2pvX64jHrEfNLMSmJO1Iv4</guid>
            <dc:creator>Marek Majkowski</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why Cloudflare Chose AMD EPYC for Gen X Servers]]></title>
            <link>https://blog.cloudflare.com/technical-details-of-why-cloudflare-chose-amd-epyc-for-gen-x-servers/</link>
            <pubDate>Sun, 01 Mar 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Looking back at this week's posts on the design, specifications, and performance of Cloudflare’s Gen X servers using AMD CPUs. Every server can run every service. This architectural decision has helped us achieve higher efficiency across the Cloudflare network.  ]]></description>
            <content:encoded><![CDATA[ <p>From the very beginning Cloudflare used Intel CPU-based servers (and, also, Intel components for things like NICs and SSDs). But we're always interested in optimizing the cost of running our service so that we can provide products at a low cost and high gross margin.</p><p>We're also mindful of events like the <a href="/meltdown-spectre-non-technical/">Spectre and Meltdown</a> vulnerabilities and have been working with outside parties on research into mitigation and exploitation which we hope to publish later this year.</p><p>We looked very seriously at <a href="/arm-takes-wing/">ARM-based CPUs</a> and continue to keep our software up to date for the ARM architecture so that we can use ARM-based CPUs when the requests per watt is interesting to us.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1FIEEXHXXE0TW20DbFc1qG/bd65d80a3f61062de14027374907daf5/gen-x-color-Friday--twitter_2x.png" />
            
            </figure><p>In the meantime, we've deployed AMD's EPYC processors as part of the Gen X server platform and for the first time are not using any Intel components at all. This week, we announced details of this tenth generation of servers. Below is a recap of why we're excited about the design, specifications, and performance of our newest hardware.</p>
    <div>
      <h2><a href="/cloudflares-gen-x-servers-for-an-accelerated-future/">Servers for an Accelerated Future</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>Every server can run every service. This architectural decision has helped us achieve higher efficiency across the Cloudflare network. It has also given us more flexibility to build new software or adopt the newest available hardware.</p><p>Notably, Intel is not inside. We are not using their hardware for any major server components such as the CPU, board, memory, storage, network interface card (or any type of accelerator).</p><p>This time, AMD is inside.</p><p>Compared with our prior server (<a href="/a-tour-inside-cloudflares-g9-servers/">Gen 9</a>), our Gen X server processes as much as 36% more requests while costing substantially less. Additionally, it enables a ~50% decrease in L3 cache miss rate and up to 50% decrease in NGINX p99 latency, powered by a CPU rated at 25% lower TDP (thermal design power) per core.</p>
    <div>
      <h2><a href="/an-epyc-trip-to-rome-amd-is-cloudflares-10th-generation-edge-server-cpu/">Gen X CPU benchmarks</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>To identify the most efficient CPU for our software stack, we ran several benchmarks for key workloads such as cryptography, compression, regular expressions, and LuaJIT. Then, we simulated typical requests we see, before testing servers in live production to measure requests per watt.    </p><p>Based on our findings, we selected the single socket 48-core AMD 2nd Gen EPYC 7642.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40QrPSva2FXZVgsZTln51q/2c15c60f27009a1ad459ae390afc49c6/pasted-image-0.png" />
            
            </figure>
    <div>
      <h2><a href="/impact-of-cache-locality/">Impact of Cache Locality</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>The single AMD EPYC 7642 performed very well during our lab testing, beating our <a href="/a-tour-inside-cloudflares-g9-servers/">Gen 9</a> server with dual Intel Xeon Platinum 6162 with the same total number of cores. Key factors we noticed were its large L3 cache, which led to a low L3 cache miss rate, as well as a higher sustained operating frequency.</p>
    <div>
      <h2><a href="/gen-x-performance-tuning/">Gen X Performance Tuning</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>Partnering with AMD, we tuned the 2nd Gen EPYC 7642 processor to achieve an additional 6% performance. We achieved this by using power determinism and configuring the CPU's Thermal Design Power (TDP).</p>
    <div>
      <h2><a href="/securing-memory-at-epyc-scale/">Securing Memory at EPYC Scale</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>Finally, we described how we use Secure Memory Encryption (SME), an interesting security feature within the System on a Chip architecture of the AMD EPYC line. We were impressed by how we could achieve RAM encryption without a significant decrease in performance. This reduces the worry that any data could be exfiltrated from a stolen server.</p><p>We enjoy designing hardware that improves the security, performance and reliability of our global network, trusted by over 26 million Internet properties.</p><p>Want to help us evaluate new hardware? <a href="https://www.cloudflare.com/careers/">Join us</a>!</p> ]]></content:encoded>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[Gen X]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">6UGZUyfF1u3jKKAHkxgDNq</guid>
            <dc:creator>Nitin Rao</dc:creator>
        </item>
        <item>
            <title><![CDATA[Gen X Performance Tuning]]></title>
            <link>https://blog.cloudflare.com/gen-x-performance-tuning/</link>
            <pubDate>Thu, 27 Feb 2020 14:00:00 GMT</pubDate>
            <description><![CDATA[ We have partnered with AMD to get the best performance out of this processor and today, we are highlighting our tuning efforts that led to an additional 6% performance. Thermal design power (TDP) and dynamic power, amongst others, play a critical role when tuning a system.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TT8W5Et00ZvvobGvrgMEY/1424f7ef2b4d6cc216b4a7ababc750ee/image7-3.png" />
            
            </figure><p>We are using AMD 2nd Gen EPYC 7642 for our <a href="/cloudflares-gen-x-servers-for-an-accelerated-future/">tenth generation “Gen X” servers</a>. We found <a href="/an-epyc-trip-to-rome-amd-is-cloudflares-10th-generation-edge-server-cpu/">many aspects of this processor compelling</a> such as its <a href="/impact-of-cache-locality/">increase in performance due to its frequency bump and cache-to-core ratio</a>. We have partnered with AMD to get the best performance out of this processor and today, we are highlighting our tuning efforts that led to an additional 6% performance.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/616TtlNrwVkyeHby6ReKSD/8127c8a658986889c1d54810ecd9433f/image14.png" />
            
            </figure>
    <div>
      <h2>Thermal Design Power &amp; Dynamic Power</h2>
      <a href="#thermal-design-power-dynamic-power">
        
      </a>
    </div>
    <p>Thermal design power (TDP) and dynamic power, amongst others, play a critical role when tuning a system. Many share a common belief that thermal design power is the maximum or average power drawn by the processor. The 48-core <a href="https://www.amd.com/en/products/cpu/amd-epyc-7642">AMD EPYC 7642</a> has a TDP rating of 225W, which is just as high as the 64-core <a href="https://www.amd.com/en/products/cpu/amd-epyc-7742">AMD EPYC 7742</a>. One might think that fewer cores should translate into lower power consumption, so why is the AMD EPYC 7642 expected to draw just as much power as the AMD EPYC 7742?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/420qeX6xoS09gLZaWH9F8p/051cc471d235fce238703d227f65a5f7/image10-2.png" />
            
            </figure><p><i>TDP Comparison between the EPYC 7642, EPYC 7742 and top-end EPYC 7H12</i></p><p>Let’s take a step back and understand that TDP does not always mean the maximum or average power that the processor will draw. At a glance, TDP may provide a good estimate of the processor’s power draw; however, TDP is really about how much heat the processor is expected to generate. TDP should be used as a guideline for designing cooling solutions with appropriate thermal capacitance. The cooling solution is expected to dissipate heat up to the TDP indefinitely; in turn, this can help the chip designers determine a power budget and create new processors around that constraint. In the case of the AMD EPYC 7642, the extra power budget was spent on retaining all of its 256 MiB of L3 cache and on letting the cores operate at a higher sustained frequency as needed during our peak hours.</p><p>The overall power drawn by the processor depends on many different factors; dynamic or active power establishes the relationship between power and frequency. Dynamic power is a function of capacitance (the chip itself), supply voltage, the frequency of the processor, and the activity factor. The activity factor is dependent on the characteristics of the workloads running on the processor. Different workloads will have different characteristics. Some examples include <a href="/cloudflare-workers-unleashed/">Cloudflare Workers</a> or <a href="/seamless-remote-work-with-cloudflare-access/">Cloudflare for Teams</a>; the hotspots in these or any other particular programs will utilize different parts of the processor, affecting the activity factor.</p>
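            <p>As a back-of-the-envelope illustration of that relationship (the numbers below are placeholders I picked, not measured values for this part), the textbook approximation for dynamic power is activity factor * capacitance * voltage^2 * frequency, which is why a modest bump in voltage and frequency moves power disproportionately:</p>
            <pre><code>
#include &lt;stdio.h&gt;

/* Textbook approximation: P_dynamic ~= a * C * V^2 * f.
 * All inputs are illustrative placeholders, not measured values. */
static double dynamic_power(double activity, double capacitance,
                            double voltage, double frequency) {
    return activity * capacitance * voltage * voltage * frequency;
}

int main(void) {
    double base  = dynamic_power(0.5, 1e-9, 1.00, 2.4e9); /* near base clock */
    double boost = dynamic_power(0.5, 1e-9, 1.10, 3.3e9); /* boosted         */
    printf("boost draws %.0f%% more dynamic power than base\n",
           (boost / base - 1.0) * 100.0);
    return 0;
}
    </code></pre>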
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/12AEwpVHIGoT9XGcqHo83V/319acd3abb48e14d48d5defd4dc8a0c4/image-7.png" />
            
            </figure>
    <div>
      <h2>Determinism Modes &amp; Configurable TDP</h2>
      <a href="#determinism-modes-configurable-tdp">
        
      </a>
    </div>
    <p>The latest AMD processors continue to implement AMD Precision Boost, which opportunistically allows the processor to operate at a higher frequency than its base frequency. How much higher the frequency can go depends on many different factors such as electrical, thermal, and power limitations.</p><p>AMD offers a knob known as determinism modes. This knob affects power and frequency, placing emphasis on one over the other depending on the determinism mode selected. <a href="https://www.amd.com/system/files/2017-06/Power-Performance-Determinism.pdf">There is a white paper</a> posted on AMD's website that goes into the nuanced details of determinism, and remembering how frequency and power are related - this was the simplest definition I took away from the paper:</p><p><b>Performance Determinism</b> - Power is a function of frequency.</p><p><b>Power Determinism</b> - Frequency is a function of power.</p><p>Another knob that is available to us is Configurable Thermal Design Power (cTDP), which allows the end-user to reconfigure the factory default thermal design power. The AMD EPYC 7642 is rated at 225W; however, we have been given guidance from AMD that this particular part can be reconfigured up to 240W. As mentioned previously, the cooling solution must support up to the TDP to avoid throttling. We have also done our due diligence and tested that our cooling solution can reliably dissipate up to 240W, even at higher ambient temperatures.</p><p>We gave these two knobs a try and got the results shown in the figures below using 10 KiB web assets over HTTPS <a href="/author/ivan/">provided by our performance team</a> over a sustained period of time to properly heat up the processor. The figures are broken down into the average operating frequencies across all 48 cores, total package power drawn from the socket, and the highest reported die temperature out of the 8 dies that are laid out on the AMD EPYC 7642. Each figure will compare the results we obtained from power and performance determinism, and finally, cTDP at 240W using power determinism.</p><p>Performance determinism clearly emphasized stabilizing operating frequency by matching its frequency to its lowest performing core over time. This mode appears to be ideal for tuners that emphasize predictability over maximizing the processor’s operating frequency. In other words, the number of cycles that the processor has at its disposal every second should be predictable. This is useful if two or more cores share data dependencies. By allowing the cores to work in unison, this can prevent one core from stalling another. Power determinism, on the other hand, maximized its power and frequency as much as it could.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2yiJbib8HJNMkMZc7On1K4/19411800792deb9713cc65c943fc232c/image12.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KXAIrxO2pCwpDAMlHyKW/49769b6a78d78577c13ea0109d4410ac/image11-2.png" />
            
            </figure><p>Heat generated by power can be compounded even further by ambient temperature. All components should stay within safe operating temperature at all times as specified by the vendor’s datasheet. If unsure, please reach out to your vendor as soon as possible. We put the AMD EPYC 7642 through several thermal tests and determined that it will operate within safe operating temperatures. Before diving into the figure below, it is important to mention that our fan speed ramped up over time, preventing the processor from reaching critical temperature; nevertheless, this meant that our cooling solution worked as intended - a combination of fans and a heatsink rated for 240W. We have yet to see the processors throttle in production. The figure below shows the highest temperature of the 8 dies laid out on the AMD EPYC 7642.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/S2zzgzeBBeyhzBXxkU1Zz/8c7880845e49ceec2cf761265724d046/image8-3.png" />
            
            </figure><p>An unexpected byproduct of performance determinism was frequency jitter. It took a longer time for performance determinism to achieve a steady-state frequency, which contradicted the predictable performance that the mode was meant to achieve.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LK2tZgnsa8m86vPQYdjrS/87474e0b887e438d97254fde28b2f8ac/image9-1.png" />
            
            </figure><p>Finally, here are the deltas from the real world, with performance determinism and the TDP set to the factory default of 225W as the baseline. Deltas under 10% can be influenced by a wide variety of factors, especially in production; however, none of our data showed a negative trend. Here are the averages.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ZT2L9MokepRw2GmZbKuxE/f3facacec8efefc02cf704137f09930e/image-5.png" />
            
            </figure>
    <div>
      <h2>Nodes Per Socket (NPS)</h2>
      <a href="#nodes-per-socket-nps">
        
      </a>
    </div>
    <p>The AMD EPYC 7642 <a href="/impact-of-cache-locality/">physically lays out its 8 dies across 4 different quadrants on a single package</a>. By having such a layout, AMD supports dividing its dies into NUMA domains and this feature is called Nodes Per Socket or NPS. Available NPS options differ model-to-model, the AMD EPYC 7642 supports 4, 2, and 1 node(s) per socket. We thought it might be worth our time exploring this option despite the fact that none of these choices will end up yielding a shared last level cache. We did not observe any significant deltas from the NPS options in terms of performance.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7aTakzcywLXCq9cVbJUapE/99073a5a3e205037b7561c641258091f/image15.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26jjqqzj8LFJecjFpq8ucU/4306eb572afd428965162fbe454172c4/image13-1.png" />
            
            </figure>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Tuning the processor to yield an additional 6% throughput or requests per second out of the box has been a great start. Some parts will have more room for improvement than others. In production we were able to achieve an additional 2% requests per second with power determinism, and we achieved another 4% by reconfiguring the TDP to 240W. We will continue to work with AMD as well as internal teams within Cloudflare to identify additional areas of improvement. If you like the idea of supercharging the Internet on behalf of our customers, then <a href="https://www.cloudflare.com/careers/jobs/">come join us</a>.</p> ]]></content:encoded>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[AMD]]></category>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Gen X]]></category>
            <guid isPermaLink="false">5S0eOaziY4ST5v0PscTOAW</guid>
            <dc:creator>Sung Park</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s Gen X: 
Servers for an Accelerated Future]]></title>
            <link>https://blog.cloudflare.com/cloudflares-gen-x-servers-for-an-accelerated-future/</link>
            <pubDate>Mon, 24 Feb 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ We designed and built Cloudflare’s network to be able to grow capacity quickly and inexpensively; to allow every server, in every city, to run every service; and to allow us to shift customers and traffic across our network efficiently. ]]></description>
            <content:encoded><![CDATA[ <p></p><blockquote><p><i>“Every server can run every service.”</i></p></blockquote><p>We designed and built Cloudflare’s network to be able to grow capacity quickly and inexpensively; to allow every server, in every city, to run every service; and to allow us to shift customers and traffic across our network efficiently. We deploy standard, commodity hardware, and our product developers and customers do not need to worry about the underlying servers. Our software automatically manages the deployment and execution of our developers’ code and our customers’ code across our network. Since we manage the execution and prioritization of code running across our network, we are both able to optimize the performance of our highest tier customers and effectively leverage idle capacity across our network.</p><p>An alternative approach might have been to run several fragmented networks with specialized servers designed to run specific features, such as the <a href="https://www.cloudflare.com/waf/">Firewall</a>, <a href="https://www.cloudflare.com/ddos/">DDoS protection</a> or <a href="https://workers.cloudflare.com/">Workers</a>. However, we believe that approach would have resulted in wasted idle resources and given us less flexibility to build new software or adopt the newest available hardware. And a single optimization target means we can provide security and performance at the same time.</p><p>We use Anycast to route a web request to the nearest Cloudflare data center (from among <a href="https://www.cloudflare.com/network/">200 cities</a>), improving performance and maximizing the surface area to fight attacks.</p><p>Once a datacenter is selected, we use Unimog, Cloudflare’s custom load balancing system, to dynamically balance requests across diverse generations of servers. We load balance at different layers: between cities, between physical deployments located across a city, between external Internet ports, between internal cables, between servers, and even between logical CPU threads within a server.</p><p>As demand grows, we can scale out by simply adding new servers, points of presence (PoPs), or cities to the global pool of available resources. If any server component has a hardware failure, it is gracefully de-prioritized or removed from the pool, to be batch repaired by our operations team. This architecture has enabled us to have no dedicated Cloudflare staff at any of the 200 cities, instead relying on help for infrequent physical tasks from the ISPs (or data centers) hosting our equipment.</p>
    <div>
      <h3>Gen X: Intel Not Inside</h3>
      <a href="#gen-x-intel-not-inside">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cA2rKV0dU3II6avmDY7BS/fba1453ea9bc827e399f57971ce871af/AMD_EPYC-39.jpg" />
            
            </figure><p>We recently turned up our tenth generation of servers, “Gen X”, already deployed across major US cities, and in the process of being shipped worldwide. Compared with our prior server (<a href="/a-tour-inside-cloudflares-g9-servers/">Gen 9</a>), it processes as much as 36% more requests while costing substantially less. Additionally, it enables a ~50% decrease in L3 cache miss rate and up to 50% decrease in NGINX p99 latency, powered by a CPU rated at 25% lower TDP (thermal design power) per core.</p><p>Notably, for the first time, Intel is not inside. We are not using their hardware for any major server components such as the CPU, board, memory, storage, network interface card (or any type of accelerator). Given how critical Intel is to our industry, this would until recently have been unimaginable, and is in contrast with <a href="/a-tour-inside-cloudflares-g9-servers/">prior</a> <a href="/a-tour-inside-cloudflares-latest-generation-servers/">generations</a> which made extensive use of their hardware.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3uCBtfJ5jawcCdsdyipghO/0d99e66b8db30d4c872d2a540cf9072c/AMD2.png" />
            
            </figure><p><i>Intel-based Gen 9 server</i></p><p>This time, AMD is inside.</p><p>We were particularly impressed by the 2nd Gen AMD EPYC processors because they proved to be far more efficient for our customers’ workloads. Since the pendulum of technology leadership swings back and forth between providers, we wouldn’t be surprised if that changes over time. However, we were happy to adapt quickly to the components that made the most sense for us.</p>
    <div>
      <h3>Compute</h3>
      <a href="#compute">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qvmFWlrsejJpb9NEG6ZqU/d830111b48ee0ed930132e88660063ba/pasted-image-0.png" />
            
            </figure><p>CPU efficiency is very important to our server design. Since we have a compute-heavy workload, our servers are typically limited by the CPU before other components. Cloudflare’s software stack scales quite well with additional cores. So, we care more about core-count and power-efficiency than dimensions such as clock speed.</p><p>We selected the AMD EPYC 7642 processor in a single-socket configuration for Gen X. This CPU has 48-cores (96 threads), a base clock speed of 2.4 GHz, and an L3 cache of 256 MB. While the rated power (225W) may seem high, it is lower than the combined TDP in our Gen 9 servers and we preferred the performance of this CPU over lower power variants. Despite AMD offering a higher core count option with 64-cores, the performance gains for our software stack and usage weren’t compelling enough.</p><p>We have deployed the AMD EPYC 7642 in half a dozen Cloudflare data centers; it is considerably more powerful than a dual-socket pair of <a href="/a-tour-inside-cloudflares-g9-servers/">high-core count Intel processors</a> (Skylake as well as Cascade Lake) we used in the last generation.</p><p>Readers of our blog might remember our excitement around ARM processors. We even ported the entirety of our software stack to run on ARM, just as it does with x86, and have been maintaining that ever since even though it calls for slightly more work for our software engineering teams. We did this leading up to the launch of <a href="/arm-takes-wing/">Qualcomm’s Centriq server CPU</a>, which eventually got shuttered. While none of the off-the-shelf ARM CPUs available this moment are interesting to us, we remain optimistic about high core count offerings launching in 2020 and beyond, and look forward to a day when our servers are a mix of x86 (Intel and AMD) and ARM.</p><p>We aim to replace servers when the efficiency gains enabled by new equipment outweigh their cost.</p><p>The performance we’ve seen from the AMD EPYC 7642 processor has encouraged us to accelerate replacement of multiple generations of Intel-based servers.</p><p>Compute is our largest investment in a server. Our heaviest workloads, from the Firewall to <a href="/cloud-computing-without-containers/">Workers</a> (our serverless offering), often require more compute than other server resources. Also, the average size in kilobytes of a web request across our network tends to be small, influenced in part by the relative popularity of APIs and mobile applications. Our approach to server design is very different than traditional <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">content delivery networks</a> engineered to deliver large object video libraries, for whom servers focused on storage might make more sense, and re-architecting to offer serverless is prohibitively capital intensive.</p><p>Our Gen X server is intentionally designed with an “empty” PCIe slot for a potential add on card, if it can perform some functions more efficiently than the primary CPU. Would that be a GPU, FPGA, SmartNIC, custom ASIC, TPU or something else? We’re intrigued to explore the possibilities.</p><p>In accompanying blog posts over the next few days, our hardware engineers will describe how AMD 7642 performed against the benchmarks we care about. We are thankful for their hard work.</p>
    <div>
      <h2>Memory, Storage &amp; Network</h2>
      <a href="#memory-storage-network">
        
      </a>
    </div>
    <p>Since we are typically limited by CPU, Gen X represented an opportunity to grow components such as RAM and SSD more slowly than compute.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26Sxt3ggFdoQ0GzKF3EGss/368a3e0d940daae20b6feeb5a3e4fba2/AMD_EPYC-13.jpg" />
            
            </figure><p>For memory, we continued to use 256GB of RAM, as in our prior generation, but rated higher at 2933MHz. For storage, we continue to have ~3TB, but moved to 3x1TB form factor using NVME flash (instead of SATA) with increased available IOPS and higher endurance, which enables <a href="https://www.usenix.org/conference/vault20/presentation/korchagin">full disk encryption using LUKS without penalty</a>. For the network card, we continue to use Mellanox 2x25G NIC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ss6eIWw5bu2PmPfE000Wi/131a3644d28be05b4d00a113bc9565ea/AMD_EPYC-5.jpg" />
            
            </figure><p>We moved from our multi-node chassis back to a simple 1U form factor, designed to be lighter and less error prone during operational work at the data center. We also added multiple new ODM partners to diversify how we manufacture our equipment and to take advantage of additional global warehousing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7idOzt8cy7gwxh6oeOSpI5/302f275f840d028cbd2b0bb6a1a351cc/AMD_EPYC-7.jpg" />
            
            </figure>
    <div>
      <h3>Network Expansion</h3>
      <a href="#network-expansion">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7kEiqTtyNgUY66Rh34MpuG/42da64e77acad6084872058580ed2d89/AMD_EPYC-35.jpg" />
            
            </figure><p>Our newest generation of servers gives us the flexibility to continue to build out our network even closer to every user on Earth. We’re proud of the hard work from across engineering teams on Gen X, and are grateful for the support of our partners. Be on the lookout for more blogs about these servers in the coming days.</p> ]]></content:encoded>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[Partners]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[EPYC]]></category>
            <category><![CDATA[AMD]]></category>
            <guid isPermaLink="false">5NWs1t95pGWDeOMVAkbQ4S</guid>
            <dc:creator>Nitin Rao</dc:creator>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
    </channel>
</rss>