
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 08 Apr 2026 08:09:18 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Advancing Threat Intelligence: JA4 fingerprints and inter-request signals]]></title>
            <link>https://blog.cloudflare.com/ja4-signals/</link>
            <pubDate>Mon, 12 Aug 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ Explore how Cloudflare's JA4 fingerprinting and inter-request signals provide robust and scalable insights for advanced web security and threat detection.
 ]]></description>
            <content:encoded><![CDATA[ <p>For many years, Cloudflare has used advanced fingerprinting techniques to help block online threats, in products like our <a href="https://blog.cloudflare.com/meet-gatebot-a-bot-that-allows-us-to-sleep"><u>DDoS engine</u></a>, <a href="https://blog.cloudflare.com/patching-the-internet-fixing-the-wordpress-br/"><u>our WAF</u></a>, and <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a>. For the purposes of Bot Management, fingerprinting characteristic elements of client software helps us quickly identify what kind of software is making an HTTP request. It’s an efficient and accurate way to differentiate a browser from a Python script, while preserving user privacy. These fingerprints are used on their own for simple rules, and they underpin complex machine learning models as well. </p><p>Making sure our fingerprints keep up with the pace of change on the Internet is a constant and critical task. Bots will always adapt to try to look more browser-like. Less frequently, browsers will introduce major changes to their behavior and affect the entire Internet landscape. Last year, Google <a href="https://chromestatus.com/feature/5124606246518784"><u>did exactly that</u></a>, making older TLS fingerprints almost useless for identifying the latest version of Chrome.</p>
    <div>
      <h2>JA3 Fingerprint </h2>
      <a href="#ja3-fingerprint">
        
      </a>
    </div>
    <p>The JA3 fingerprint, introduced by <a href="https://github.com/salesforce/ja3"><u>Salesforce researchers</u></a> in 2017 and later adopted by Cloudflare, involves creating a hash of the TLS ClientHello message. This hash includes the ordered list of TLS cipher suites, extensions, and other parameters, providing a unique identifier for each client. Cloudflare customers can use JA3 to build detection rules and gain insight into their network traffic.</p><p>In early 2023, Google <a href="https://chromestatus.com/feature/5124606246518784"><u>implemented a change in Chromium-based browsers</u></a> to shuffle the order of TLS extensions – a strategy aimed at disrupting the detection capabilities of JA3 and enhancing the robustness of the TLS ecosystem. This modification was prompted by concerns that fixed fingerprint patterns could lead to rigid server implementations, potentially causing complications each time Chrome updates were rolled out. Over time, JA3 became less useful due to the following reasons:</p><ul><li><p><b>Randomization of TLS extensions:</b> Browsers began randomizing the order of TLS extensions in their ClientHello messages. This change meant that JA3 fingerprints, which relied on the sequential order of these extensions, would vary with each connection, making them unreliable for identifying unique clients. (Further information can be found at <a href="https://www.stamus-networks.com/blog/ja3-fingerprints-fade-browsers-embrace-tls-extension-randomization"><u>Stamus Networks</u></a>.)</p></li><li><p><b>Inconsistencies across tools</b>: Different tools and databases that implemented JA3 fingerprinting often produced varying results due to discrepancies in how they handled TLS extensions and other protocol elements. 
This inconsistency hindered the effectiveness of JA3 fingerprints for reliable cross-organization sharing and threat intelligence.​ (Further information can be found at <a href="https://fingerprint.com/blog/limitations-ja3-fingerprinting-accurate-device-identification/"><u>Fingerprint</u></a>.)​</p></li><li><p><b>Limited scope and lack of adaptability</b>: JA3 focused solely on elements within the TLS ClientHello packet, covering only a narrow portion of the OSI model’s layers. This limited scope often missed crucial context about a client's environment. Additionally, as newer transport layer protocols like QUIC became popular, JA3’s methodology – originally designed for older client implementations of TLS and not including modern protocols – proved ineffective.</p></li></ul>
    <div>
      <h2>Enter JA4 fingerprint</h2>
      <a href="#enter-ja4-fingerprint">
        
      </a>
    </div>
    <p>In response to these challenges, <a href="https://foxio.io/"><u>FoxIO</u></a> developed JA4, a successor to JA3 that offers a more robust, adaptable, and reliable method for fingerprinting TLS clients across various protocols, including emerging standards like QUIC. Officially launched in September 2023, JA4 is part of the broader <a href="https://blog.foxio.io/ja4%2B-network-fingerprinting"><u>JA4+ suite</u></a> that includes fingerprints for multiple protocols such as TLS, HTTP, and SSH. This suite is designed to be interpretable by both humans and machines, thereby enhancing threat detection and security analysis capabilities.</p><p>The JA4 fingerprint is resistant to the randomization of TLS extensions and incorporates additional useful dimensions, such as Application Layer Protocol Negotiation (ALPN), which were not part of JA3. The introduction of JA4 has been met with positive reception in the cybersecurity community, with several open-source tools and commercial products beginning to incorporate it into their systems, including <a href="https://developers.cloudflare.com/bots/concepts/ja3-ja4-fingerprint/"><u>Cloudflare</u></a>. The JA4 fingerprint is available under the <a href="https://github.com/FoxIO-LLC/ja4/blob/main/License%20FAQ.md"><u>BSD 3-Clause license</u></a>, promoting seamless upgrades from JA3. Other fingerprints within the suite, such as JA4S (TLS Server Response) and JA4H (HTTP Client Fingerprinting), are licensed under the proprietary FoxIO License, which is designed for broader use but requires specific arrangements for commercial monetization.</p><p>Let’s take a look at a specific JA4 fingerprint example, representing the latest version of Google Chrome on Linux:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gjWV3tr6fAzSFNq9Z8Xeu/360f0079d987ebc8f8c61f4596b158be/2361-2.png" />
          </figure><ol><li><p><b>Protocol Identifier (t): </b>Indicates the use of TLS over TCP. This identifier is crucial for determining the underlying protocol, distinguishing it from <i>q</i> for QUIC or <i>d</i> for DTLS.</p></li><li><p><b>TLS Version (13): </b>Represents TLS version 1.3, confirming that the client is using one of the latest secure protocols. The version number is derived from analyzing the highest version supported in the ClientHello, excluding any <a href="https://www.rfc-editor.org/rfc/rfc8701.html"><u>GREASE</u></a> values.</p></li><li><p><b>SNI Presence (d): </b>The presence of a domain name in the <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-sni/"><u>Server Name Indication</u></a>. This indicates that the client specifies a domain (d), rather than an IP address (i would indicate the absence of SNI).</p></li><li><p><b>Cipher Suites Count (15): </b>Reflects the total number of cipher suites included in the ClientHello, excluding any GREASE values. It provides insight into the cryptographic options the client is willing to use.</p></li><li><p><b>Extensions Count (16): </b>Indicates the count of distinct extensions presented by the client in the ClientHello. This measure helps identify the range of functionalities or customizations the client supports.</p></li><li><p><b>ALPN Values (h2): </b>Represents the Application-Layer Protocol Negotiation protocol, in this case, HTTP/2, which indicates the protocol preferences of the client for optimized web performance.</p></li><li><p><b>Cipher Hash (8daaf6152771): </b>A truncated SHA256 hash of the list of cipher suites, sorted in hexadecimal order. This unique hash serves as a compact identifier for the client’s cipher suite preferences.</p></li><li><p><b>Extension Hash (02713d6af862): </b>A truncated SHA256 hash of the sorted list of extensions combined with the list of signature algorithms. 
This hash provides a unique identifier that helps differentiate clients based on the extensions and signature algorithms they support.</p></li></ol><p>Here is a <a href="https://www.wireshark.org/"><u>Wireshark</u></a> example of TLS ClientHello from the latest Chrome on Linux querying <a href="https://www.cloudflare.com"><u>https://www.cloudflare.com</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3a1jNGnnYTNZbyshIvWhtb/ead13d6dfdcef44a433bdd3f9c72952e/2361-3.png" />
          </figure><p>Integrating JA4 support into Cloudflare required rethinking our approach to parsing TLS ClientHello messages, which were previously handled in separate implementations across C, Lua, and Go. Recognizing the need to boost performance and ensure memory safety, we developed a new Rust-based crate, client-hello-parser. This unified parser not only simplifies modifications by centralizing all related logic but also prepares us for future transitions, such as replacing nginx with an upcoming Rust-based service. Additionally, this streamlined parser facilitates the exposure of JA4 fingerprints across our platform, improving the integration with Cloudflare's firewall rules, Workers, and analytics systems.</p>
    <div>
      <h2>Parsing ClientHello</h2>
      <a href="#parsing-clienthello">
        
      </a>
    </div>
    <p>client-hello-parser is an internal Rust crate designed for parsing TLS ClientHello messages. It aims to simplify the process of analyzing TLS traffic by providing a straightforward way to decode and inspect the initial handshake messages sent by clients when establishing TLS connections. This crate efficiently populates a ClientHelloParsed struct with relevant parsed fields, including version 1 and version 2 fingerprints, and JA3 and JA4 hashes, which are essential for network traffic analysis and fingerprinting.</p><p>Key benefits of the client-hello-parser library include:</p><ul><li><p><b>Optimized memory usage</b>: The library achieves amortized zero heap allocations, verified through extensive testing with the <a href="https://crates.io/crates/dhat"><u>dhat</u></a> crate to track memory allocations. Utilizing the <a href="https://crates.io/crates/tinyvec"><u>tinyvec</u></a> crate, it begins with stack allocations for small vectors backed by fixed-size arrays, resorting to heap allocations only when these vectors exceed their initial size; vectors are then reused across parses, keeping heap allocations at amortized zero.</p></li><li><p><b>Memory safety:</b> Reinforced by Rust's robust borrow checker and complemented by extensive fuzzing, which has helped identify and resolve potential security vulnerabilities previously undetected in C implementations.</p></li><li><p><b>Ultra-low latency</b>: The parser benefits from using <a href="https://crates.io/crates/faster-hex"><u>faster-hex</u></a> for efficient hex encoding/decoding, which utilizes SIMD instructions to speed up processing. The use of Rust iterators also helps in optimizing performance, often allowing the compiler to generate SIMD-optimized assembly code. This efficiency is further enhanced through the use of BigEndianIterator, which allows for efficient streaming-like processing of TLS ClientHello bytes in a single pass.</p></li></ul><p>Parser benchmark results:</p>
            <pre><code>client_hello_benchmark/parse/parse-short-502
                        time:   [497.15 ns 497.23 ns 497.33 ns]
                        thrpt:  [2.0107 Melem/s 2.0111 Melem/s 2.0115 Melem/s]
client_hello_benchmark/parse/parse-long-1434
                        time:   [992.82 ns 993.55 ns 994.99 ns]
                        thrpt:  [1.0050 Melem/s 1.0065 Melem/s 1.0072 Melem/s]</code></pre>
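<p>The throughput column is simply the reciprocal of the per-parse time, which is easy to sanity-check:</p>

```javascript
// Criterion reports throughput in Melem/s; here one "element" is one
// parsed ClientHello, so throughput is 1e9 / time-in-ns, in millions.
const shortNs = 497.23;  // parse-short-502 midpoint time
const longNs = 993.55;   // parse-long-1434 midpoint time
const melems = (ns) => (1e9 / ns) / 1e6;
console.log(melems(shortNs).toFixed(4)); // "2.0111"
console.log(melems(longNs).toFixed(4));  // "1.0065"
```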
            <p>
The benchmark results demonstrate that the parser efficiently handles different sizes of ClientHello messages, with shorter messages being processed at a rate of approximately 2 million elements per second, and longer messages at around 1 million elements per second, showcasing the effectiveness of SIMD optimizations and Rust's iterator performance in real-world applications.</p><p><b>Robust testing suite:</b> Includes dozens of real-life TLS ClientHello message examples, with parsed components verified against Wireshark with <a href="https://github.com/fullylegit/ja3"><u>JA3</u></a> and <a href="https://github.com/FoxIO-LLC/ja4/tree/main/wireshark"><u>JA4</u></a> plugins. Additionally, <a href="https://github.com/rust-fuzz/cargo-fuzz"><u>Cargo fuzzer</u></a> with memory sanitizer ensures no memory leaks or edge cases leading to core dumps. Backward compatibility tests with the legacy C parser, imported as a dependency and called via FFI, confirm that both parsers yield equivalent results.</p><p><b>Seamless integration with nginx</b>: The crate, compiled as a dynamic library, is linked to the nginx binary, ensuring a smooth transition from the legacy parser to the new Rust-based parser through backwards compatibility tests.</p><p>The transition to a new Rust-based parser has enabled the retirement of multiple implementations across different languages (C, Lua, and Go), significantly enhancing performance and parser robustness against edge cases. This shift also facilitates the easier integration of new features and business logic for parsing TLS ClientHello messages, streamlining future expansions and security updates.</p><p>With Cloudflare JA4 fingerprints implemented on our network, we were left with another problem to solve. 
When JA3 was released, we saw scenarios where customers were surprised by traffic from a new JA3 fingerprint and blocked it, only to find that the fingerprint belonged to a new browser release, or that an OS update had changed the fingerprint used by their mobile device. A hash alone gives customers no context. We wanted to give our customers the context they need to make informed decisions about the safety of a fingerprint, so they can act on it quickly and confidently. As more of our customers embrace AI, we’ve heard growing demand to break out the signals that power our bot detection. These customers want to run complex models on proprietary data that has to stay in their control, but they want Cloudflare’s unique perspective on Internet traffic when they do it. To us, both use cases sounded like the same problem. </p>
    <div>
      <h2>Enter JA4 Signals </h2>
      <a href="#enter-ja4-signals">
        
      </a>
    </div>
    <p>In the ever-evolving landscape of web security, traditional fingerprinting techniques like JA3 and JA4 have proven invaluable for identifying and managing web traffic. However, these methods alone are not sufficient to address the sophisticated tactics employed by malicious agents. Fingerprints can be easily spoofed, they change frequently, and traffic patterns and behaviors are constantly evolving. This is where JA4 Signals come into play, providing a robust and comprehensive approach to traffic analysis.</p><p>JA4 Signals are inter-request features computed based on the last hour of all traffic that Cloudflare sees globally. On a daily basis, we analyze over <b>15 million</b> unique JA4 fingerprints generated from more than 500 million user agents and billions of IP addresses. This breadth of data enables JA4 Signals to provide aggregated statistics that offer deeper insights into global traffic patterns – far beyond what single-request or connection fingerprinting can achieve. These signals are crucial for enhancing security measures, whether through simple firewall rules, Workers scripts, or advanced machine learning models.</p><p>Let's consider a specific example of JA4 Signals from a Firewall events activity log, which involves the latest version of Chrome.</p><p>This example highlights that a particular HTTP request received a Bot Score of 95, suggesting it likely originated from a human user operating a browser rather than an automated program or a bot. Analyzing JA4 Signals in this context provides deeper insight into the behavior of this client (latest Linux Chrome) in comparison to other network clients and their respective JA4 fingerprints. 
Here are a few examples of the signals our customers can see on any request:</p><table><tr><td><p><b><u>JA4 Signal</u></b></p></td><td><p><b><u>Description</u></b></p></td><td><p><b><u>Value example</u></b></p></td><td><p><b><u>Interpretation</u></b></p></td></tr><tr><td><p>browser_ratio_1h</p></td><td><p>The ratio of requests originating from browser-based user agents for the JA4 fingerprint in the last hour. Higher values suggest a higher proportion of browser-based requests.</p></td><td><p>0.942</p></td><td><p>Indicates a 94.2% browser-based request rate for this JA4.</p></td></tr><tr><td><p>cache_ratio_1h</p></td><td><p>The ratio of cacheable responses for the JA4 fingerprint in the last hour. Higher values suggest a higher proportion of responses that can be cached.</p></td><td><p>0.534</p></td><td><p>Shows a 53.4% cacheable response rate for this JA4.</p></td></tr><tr><td><p>h2h3_ratio_1h</p></td><td><p>The ratio of combined HTTP/2 and HTTP/3 requests to the total number of requests for the JA4 fingerprint in the last hour. Higher values indicate a higher proportion of HTTP/2 and HTTP/3 requests compared to other protocol versions.</p></td><td><p>0.987</p></td><td><p>Reflects a 98.7% rate of HTTP/2 and HTTP/3 requests.</p></td></tr><tr><td><p>reqs_quantile_1h</p></td><td><p>The quantile position of the JA4 fingerprint based on the number of requests across all fingerprints in the last hour. Higher values indicate a relatively higher number of requests compared to other fingerprints.</p></td><td><p>1</p></td><td><p>High volume of requests compared to other JA4s.</p></td></tr></table><p>The JA4 fingerprint and JA4 Signals are now available in the Firewall Rules UI, Bot Analytics and Workers. Customers can now use these fields to write custom rules, rate-limiting rules, transform rules, or Workers logic using JA4 fingerprint and JA4 Signals. </p><p>Let's demonstrate how to use JA4 Signals with the following Worker example. 
This script processes incoming requests by parsing and categorizing JA4 Signals, providing a clear structure for further analysis or rule application within Cloudflare Workers:</p>
            <pre><code>/**
 * Event listener for 'fetch' events. This triggers on every request to the worker.
 */
addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

/**
 * Main handler for incoming requests.
 * @param {Request} request - The incoming request object from the fetch event.
 * @returns {Response} A response object with JA4 Signals in JSON format.
 */
async function handleRequest(request) {
  // Safely access the ja4Signals object using optional chaining, which prevents errors if properties are undefined.
  const ja4Signals = request.cf?.botManagement?.ja4Signals || {};

  // Construct the response content, including both the original ja4Signals and the parsed signals.
  const responseContent = {
    ja4Signals: ja4Signals,
    ja4SignalsParsed: parseJA4Signals(ja4Signals)
  };

  // Return a JSON response with appropriate headers.
  return new Response(JSON.stringify(responseContent), {
    status: 200,
    headers: {
      "content-type": "application/json;charset=UTF-8"
    }
  })
}

/**
 * Parses the JA4 Signals into categorized groups based on their names.
 * @param {Object} ja4Signals - The JA4 Signals object that may contain various metrics.
 * @returns {Object} An object with categorized JA4 Signals: ratios, ranks, and quantiles.
 */
function parseJA4Signals(ja4Signals) {
  // Define the keys for each category of signals.
  const ratios = ['h2h3_ratio_1h', 'heuristic_ratio_1h', 'browser_ratio_1h', 'cache_ratio_1h'];
  const ranks = ['uas_rank_1h', 'paths_rank_1h', 'reqs_rank_1h', 'ips_rank_1h'];
  const quantiles = ['reqs_quantile_1h', 'ips_quantile_1h'];

  // Return an object with each category containing only the signals that are present.
  return {
    ratios: filterKeys(ja4Signals, ratios),
    ranks: filterKeys(ja4Signals, ranks),
    quantiles: filterKeys(ja4Signals, quantiles)
  };
}

/**
 * Filters the keys in the ja4Signals object that match the list of specified keys and are not undefined.
 * @param {Object} ja4Signals - The JA4 Signals object.
 * @param {Array&lt;string&gt;} keys - An array of keys to filter from the ja4Signals object.
 * @returns {Object} A filtered object containing only the specified keys that are present in ja4Signals.
 */
function filterKeys(ja4Signals, keys) {
  const filtered = {};
  // Iterate over the specified keys and add them to the filtered object if they exist in ja4Signals.
  keys.forEach(key =&gt; {
    // Check if the key exists and is not undefined to handle optional presence of each signal.
    if (ja4Signals &amp;&amp; ja4Signals[key] !== undefined) {
      filtered[key] = ja4Signals[key];
    }
  });
  return filtered;
}</code></pre>
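<p>Beyond reshaping the payload, the parsed signals can feed a decision directly. The sketch below shows one way a Worker might combine three of the signals from the table above into a challenge decision; the threshold values are purely illustrative, not recommended settings:</p>

```javascript
// Hypothetical decision helper: a fingerprint that rarely presents
// browser user agents, rarely speaks HTTP/2 or HTTP/3, yet sits in the
// top request-volume quantile looks bot-like.
function shouldChallenge(signals) {
  const browserRatio = signals.browser_ratio_1h ?? 1.0;
  const h2h3Ratio = signals.h2h3_ratio_1h ?? 1.0;
  const reqsQuantile = signals.reqs_quantile_1h ?? 0.0;
  return browserRatio < 0.2 && h2h3Ratio < 0.5 && reqsQuantile > 0.9;
}

console.log(shouldChallenge({ browser_ratio_1h: 0.05, h2h3_ratio_1h: 0.10, reqs_quantile_1h: 1 })); // true
console.log(shouldChallenge({ browser_ratio_1h: 0.94, h2h3_ratio_1h: 0.98, reqs_quantile_1h: 1 })); // false
```

<p>In practice, these thresholds would be tuned per zone, or the signals would be fed into a model rather than hard-coded rules.</p>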
            
    <div>
      <h2><b>Benefits of JA4 Signals</b></h2>
      <a href="#benefits-of-ja4-signals">
        
      </a>
    </div>
    <ul><li><p><b>Comprehensive traffic analysis</b>: JA4 Signals aggregate data over an hour to provide a holistic view of traffic patterns. This method enhances the ability to identify emerging threats and abnormal behaviors by analyzing changes over time rather than in isolation.</p></li><li><p><b>Precision in anomaly detection</b>: Leveraging detailed inter-request features, JA4 Signals enable the precise detection of anomalies that may be overlooked by single-request fingerprinting. This leads to more accurate identification of sophisticated cyber threats.</p></li><li><p><b>Globally scalable insights</b>: By synthesizing data at a global scale, JA4 Signals harness the strength of Cloudflare’s network intelligence. This extensive analysis makes the system less susceptible to manipulation and provides a resilient foundation for security protocols.</p></li><li><p><b>Dynamic security enforcement</b>: JA4 Signals can dynamically inform security rules, from simple firewall configurations to complex machine learning algorithms. This adaptability ensures that security measures evolve in tandem with changing traffic patterns and emerging threats.</p></li><li><p><b>Reduction in false positives and negatives</b>: With the detailed insights provided by JA4 Signals, security systems can distinguish between legitimate and malicious traffic more effectively, reducing the occurrence of false positives and negatives and improving overall system reliability.</p></li></ul>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The introduction of JA4 fingerprint and JA4 Signals marks a significant milestone in advancing Cloudflare’s security offerings, including Bot Management and <a href="https://www.cloudflare.com/ddos/"><u>DDoS protection</u></a>. These tools not only enhance the robustness of our traffic analysis but also showcase the continuous evolution of our network fingerprinting techniques. The efficiency of computing JA4 fingerprints enables real-time detection and response to emerging threats. Similarly, by leveraging aggregated statistics and inter-request features, JA4 Signals provide deep insights into traffic patterns at speeds measured in microseconds, ensuring that no detail is too small to be captured and analyzed.</p><p>These security features are underpinned by the scalable techniques and open-sourced libraries outlined in <a href="https://blog.cloudflare.com/scalable-machine-learning-at-cloudflare"><u>"Every request, every microsecond: scalable machine learning at Cloudflare"</u></a>. This discussion highlights how Cloudflare's innovations not only analyze vast amounts of data but also transform this analysis into actionable, reliable, and dynamically adaptable security measures.</p><p>Any Enterprise business with a bot problem will benefit from Cloudflare’s unique JA4 implementation and our perspective on bot traffic, but customers who run their own internal threat models will also benefit from access to data insights from a network that processes over 50 million requests per second. Please <a href="https://www.cloudflare.com/plans/enterprise/contact/"><u>get in touch</u></a> with us to learn more about our Bot Management offering.</p> ]]></content:encoded>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">4sRriOEqIpi6j3IvpnSB6B</guid>
            <dc:creator>Alex Bocharov</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making WAF ML models go brrr: saving decades of processing time]]></title>
            <link>https://blog.cloudflare.com/making-waf-ai-models-go-brr/</link>
            <pubDate>Thu, 25 Jul 2024 13:00:46 GMT</pubDate>
            <description><![CDATA[ In this post, we discuss performance optimizations we've implemented for our WAF ML product. We'll guide you through code examples, benchmarks, and we'll share the impressive latency reduction numbers ]]></description>
            <content:encoded><![CDATA[ <p>We made our WAF Machine Learning models <b>5.5x</b> faster, reducing execution time by approximately <b>82%</b>, from <b>1519</b> to <b>275</b> microseconds! Read on to find out how we achieved this remarkable improvement.</p><p><a href="https://developers.cloudflare.com/waf/about/waf-attack-score/">WAF Attack Score</a> is Cloudflare's machine learning (ML)-powered layer built on top of our <a href="https://developers.cloudflare.com/waf/">Web Application Firewall (WAF)</a>. Its goal is to complement the WAF and detect attack bypasses that we haven't encountered before. This has proven invaluable in <a href="/detecting-zero-days-before-zero-day">catching zero-day vulnerabilities</a>, like the one detected in <a href="/how-cloudflares-ai-waf-proactively-detected-ivanti-connect-secure-critical-zero-day-vulnerability">Ivanti Connect Secure</a>, before they are publicly disclosed and enhancing our customers' protection against emerging and unknown threats.</p><p>Since its <a href="/waf-ml">launch in 2022</a>, WAF attack score adoption has grown exponentially, now protecting millions of Internet properties and running real-time inference on tens of millions of requests per second. The feature's popularity has driven us to seek performance improvements, enabling even broader customer use and enhancing Internet security.</p><p>In this post, we will discuss the performance optimizations we've implemented for our WAF ML product. We'll guide you through specific code examples and benchmark numbers, demonstrating how these enhancements have significantly improved our system's efficiency. Additionally, we'll share the impressive latency reduction numbers observed after the rollout.</p><p>Before diving into the optimizations, let's take a moment to review the inner workings of the WAF Attack Score, which powers our WAF ML product.</p>
    <div>
      <h2>WAF Attack Score system design</h2>
      <a href="#waf-attack-score-system-design">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Bis9LE38A3aK4k7HEn7k9/44ae7b31096471a5256961715f8c7991/unnamed--4--6.png" />
            
            </figure><p>Cloudflare's WAF attack score identifies various traffic types and attack vectors (<a href="https://www.cloudflare.com/learning/security/threats/how-to-prevent-sql-injection/">SQLi</a>, <a href="https://www.cloudflare.com/learning/security/how-to-prevent-xss-attacks/">XSS</a>, Command Injection, etc.) based on structural or statistical content properties. Here's how it works during inference:</p><ol><li><p><b>HTTP Request Content</b>: Start with raw HTTP input.</p></li><li><p><b>Normalization &amp; Transformation</b>: Standardize and clean the data, applying normalization, content substitutions, and de-duplication.</p></li><li><p><b>Feature Extraction</b>: Tokenize the transformed content to generate statistical and structural data.</p></li><li><p><b>Machine Learning Model Inference</b>: Analyze the extracted features with pre-trained models, mapping content representations to classes (e.g., XSS, SQLi or <a href="https://www.cloudflare.com/learning/security/what-is-remote-code-execution/">RCE</a>) or scores.</p></li><li><p><b>Classification Output in WAF</b>: Assign a score to the input, ranging from 1 (likely malicious) to 99 (likely clean), guiding security actions.</p></li></ol>
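<p>The five stages can be condensed into a toy sketch. Everything here is a placeholder (the production pipeline is Rust feeding a trained model, not these one-line heuristics); it only illustrates the shape of the flow and the 1–99 score mapping:</p>

```javascript
// Toy end-to-end flow for the five stages above. The normalizer,
// featurizer, and "model" are deliberately trivial stand-ins.
function wafAttackScore(rawRequest) {
  const normalized = normalize(rawRequest);        // 2. normalize & transform
  const features = extractFeatures(normalized);    // 3. feature extraction
  const probMalicious = modelInference(features);  // 4. model inference
  // 5. map model output onto 1 (likely malicious) .. 99 (likely clean)
  return Math.round(1 + (1 - probMalicious) * 98);
}

const normalize = (r) => r.toLowerCase();
const extractFeatures = (s) => [s.length, (s.match(/['<>]/g) || []).length];
const modelInference = ([, suspicious]) => Math.min(1, suspicious / 5); // toy heuristic

console.log(wafAttackScore("GET /?id=1"));            // 99: looks clean
console.log(wafAttackScore("GET /?id=1' OR '1'='1")); // 21: looks suspicious
```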
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZzHRYXU27VYB5F3F3QjXf/9e5248610a1e89ac8c73a446925abb69/cfce15fb-ce84-4489-a05a-6872b9e502b8.png" />
            
            </figure><p>Next, we will explore feature extraction and inference optimizations.</p>
    <div>
      <h2>Feature extraction optimizations</h2>
      <a href="#feature-extraction-optimizations">
        
      </a>
    </div>
    <p>In the context of the WAF Attack Score ML model, feature extraction or pre-processing is essentially a process of tokenizing the given input and producing a float tensor of 1 x m size:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7DIxHJ5zLkdeknndiNGbk0/e802888a2212ddfcae688f1c4201587f/8cc41311-3a09-4c39-b47c-9dc449760ee2.png" />
            
            </figure><p>In our initial pre-processing implementation, this is achieved via a sliding window of 3 bytes over the input with the help of Rust’s <a href="https://doc.rust-lang.org/std/collections/struct.HashMap.html">std::collections::HashMap</a> to look up the tensor index for a given ngram.</p>
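<p>In JavaScript terms (for illustration only; the production code is the Rust implementation discussed here, and the ngram-to-index table below is a toy stand-in for the real vocabulary), the sliding-window featurization looks like this:</p>

```javascript
// Slide a 3-byte window over the input and count occurrences of known
// n-grams in a fixed-size float tensor. The Map is a toy vocabulary.
function featurize(input, ngramIndex, numFeatures) {
  const bytes = Buffer.from(input, 'utf8');
  const tensor = new Float32Array(numFeatures);
  for (let i = 0; i + 3 <= bytes.length; i++) {
    const ngram = bytes.subarray(i, i + 3).toString('latin1');
    const idx = ngramIndex.get(ngram);
    if (idx !== undefined) tensor[idx] += 1;
  }
  return tensor;
}

const index = new Map([['<sc', 0], ['scr', 1], ['rip', 2]]);
const t = featurize('<script>', index, 4);
console.log(Array.from(t)); // [ 1, 1, 1, 0 ]
```

<p>Note that every byte of the input is visited once per window position, which is why pre-processing time grows with input length, as the benchmarks below show.</p>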
    <div>
      <h3>Initial benchmarks</h3>
      <a href="#initial-benchmarks">
        
      </a>
    </div>
    <p>To establish performance baselines, we've set up four benchmark cases representing example inputs of various lengths, ranging from 44 to 9482 bytes. Each case exemplifies typical input sizes, including those for a request body, user agent, and URI. We run benchmarks using the <a href="https://bheisler.github.io/criterion.rs/book/getting_started.html">Criterion.rs</a> statistics-driven micro-benchmarking tool:</p>
            <pre><code>RUSTFLAGS="-C opt-level=3 -C target-cpu=native" cargo criterion</code></pre>
            <p>Here are initial numbers for these benchmarks executed on a Linux laptop with a 13th Gen Intel® Core™ i7-13800H processor:</p>
<table><thead>
  <tr>
    <th><span>Benchmark case</span></th>
    <th><span>Pre-processing time, μs</span></th>
    <th><span>Throughput, MiB/s</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>preprocessing/long-body-9482</span></td>
    <td><span>248.46</span></td>
    <td><span>36.40</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-body-1000</span></td>
    <td><span>28.19</span></td>
    <td><span>33.83</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-url-44</span></td>
    <td><span>1.45</span></td>
    <td><span>28.94</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-ua-91</span></td>
    <td><span>2.87</span></td>
    <td><span>30.24</span></td>
  </tr>
</tbody></table><p>An important observation from these results is that pre-processing time scales with the length of the input string, with throughput ranging from 28 MiB/s to 36 MiB/s. This suggests that considerable time is spent iterating over the input, making that iteration a prime target for optimization. To validate this, let's examine where the processing time goes using flamegraphs from a 100-second profiling session, visualized with <a href="https://www.honeycomb.io/blog/golang-observability-using-the-new-pprof-web-ui-to-debug-memory-usage">pprof</a>:</p>
            <pre><code>RUSTFLAGS="-C opt-level=3 -C target-cpu=native" cargo criterion -- --profile-time 100
 
go tool pprof -http=: target/criterion/profile/preprocessing/avg-body-1000/profile.pb</code></pre>
            
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/WGhFT3j6vn4QGFOmdyNGO/8dd1c6e4d171cd2c407af7bf4d9a9ac7/unnamed--5--6.png" />
            
            </figure><p>Looking at the pre-processing flamegraph above, it's clear that most of the time was spent on the following two operations:</p>
<table><thead>
  <tr>
    <th><span>Function name</span></th>
    <th><span>% Time spent</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>std::collections::hash::map::HashMap&lt;K,V,S&gt;::get</span></td>
    <td><span>61.8%</span></td>
  </tr>
  <tr>
    <td><span>regex::regex::bytes::Regex::replace_all</span></td>
    <td><span>18.5%</span></td>
  </tr>
</tbody></table><p>Let's tackle the HashMap lookups first. Lookups happen inside the <i>tensor_populate_ngrams</i> function, where the input is split into overlapping 3-byte windows, each representing an ngram, which are then looked up in two hash maps:</p>
            <pre><code>fn tensor_populate_ngrams(tensor: &amp;mut [f32], input: &amp;[u8]) {   
   // Populate the NORM ngrams
   let mut unknown_norm_ngrams = 0;
   let norm_offset = 1;
 
   for s in input.windows(3) {
       match NORM_VOCAB.get(s) {
           Some(pos) =&gt; {
               tensor[*pos as usize + norm_offset] += 1.0f32;
           }
           None =&gt; {
               unknown_norm_ngrams += 1;
           }
       };
   }
 
   // Populate the SIG ngrams
   let mut unknown_sig_ngrams = 0;
   let sig_offset = norm_offset + NORM_VOCAB.len();
 
   let res = SIG_REGEX.replace_all(&amp;input, b"#");
 
   for s in res.windows(3) {
       match SIG_VOCAB.get(s) {
           Some(pos) =&gt; {
               // adding +1 here as the first position will be the unknown_sig_ngrams
               tensor[*pos as usize + sig_offset + 1] += 1.0f32;
           }
           None =&gt; {
               unknown_sig_ngrams += 1;
           }
       }
   }
}</code></pre>
            <p>So essentially the pre-processing function performs a ton of hash map lookups, the volume of which depends on the size of the input string, e.g. 1469 lookups for the given benchmark case <i>avg-body-1000</i>.</p>
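<p>To put that number in context: a k-byte input yields k - 2 overlapping 3-byte windows, so the NORM pass alone on a 1000-byte body performs 998 lookups, and the SIG pass adds several hundred more on the regex-replaced copy. A quick sketch of the arithmetic:</p>

```rust
// Each pass slides a 3-byte window over its input, so a k-byte
// string produces k - 2 overlapping ngrams, i.e. k - 2 lookups.
fn ngram_lookups(input: &[u8]) -> usize {
    input.windows(3).count()
}
```

<p>For the <i>avg-body-1000</i> case, the 1469 total is roughly the sum of the two passes over the original and the replaced input.</p>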
    <div>
      <h3>Optimization attempt #1: HashMap → Aho-Corasick</h3>
      <a href="#optimization-attempt-1-hashmap-aho-corasick">
        
      </a>
    </div>
<p>Rust hash maps are generally quite fast. However, performing that many lookups is not very cache friendly.</p><p>So can we do better than hash maps, and what should we try first? The answer is the <a href="https://docs.rs/aho-corasick/latest/aho_corasick/">Aho-Corasick library</a>.</p><p>This library provides multiple pattern search principally through an implementation of the <a href="https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm">Aho-Corasick algorithm</a>, which builds a fast finite state machine for executing searches in linear time.</p><p>We can also tune Aho-Corasick settings based on this recommendation:</p><blockquote><p><i>“You might want to use</i> <a href="https://docs.rs/aho-corasick/1.1.3/aho_corasick/struct.AhoCorasickBuilder.html#method.kind"><i>AhoCorasickBuilder::kind</i></a> <i>to set your searcher to always use</i> <a href="https://docs.rs/aho-corasick/1.1.3/aho_corasick/enum.AhoCorasickKind.html#variant.DFA"><i>AhoCorasickKind::DFA</i></a> <i>if search speed is critical and memory usage isn’t a concern.”</i></p></blockquote>
            <pre><code>static ref NORM_VOCAB_AC: AhoCorasick = AhoCorasick::builder().kind(Some(AhoCorasickKind::DFA)).build(&amp;[    
    "abc",
    "def",
    "wuq",
    "ijf",
    "iru",
    "piw",
    "mjw",
    "isn",
    "od ",
    "pro",
    ...
]).unwrap();</code></pre>
<p>Then we use the constructed AhoCorasick dictionary to look up ngrams using its <a href="https://docs.rs/aho-corasick/latest/aho_corasick/struct.AhoCorasick.html#method.find_overlapping_iter">find_overlapping_iter</a> method:</p>
            <pre><code>for mat in NORM_VOCAB_AC.find_overlapping_iter(&amp;input) {
    tensor_input_data[mat.pattern().as_usize() + 1] += 1.0;
}</code></pre>
            <p>We ran benchmarks and compared them against the baseline times shown above:</p>
<table><thead>
  <tr>
    <th><span>Benchmark case</span></th>
    <th><span>Baseline time, μs</span></th>
    <th><span>Aho-Corasick time, μs</span></th>
    <th><span>Optimization</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>preprocessing/long-body-9482</span></td>
    <td><span>248.46</span></td>
    <td><span>129.59</span></td>
    <td><span>-47.84% or 1.64x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-body-1000</span></td>
    <td><span>28.19</span></td>
    <td>	<span>16.47</span></td>
    <td><span>-41.56% or 1.71x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-url-44</span></td>
    <td><span>1.45</span></td>
    <td><span>1.01</span></td>
    <td><span>-30.38% or 1.44x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-ua-91</span></td>
    <td><span>2.87</span></td>
    <td><span>1.90</span></td>
    <td><span>-33.60% or 1.51x</span></td>
  </tr>
</tbody></table><p>That's substantially better – Aho-Corasick DFA does wonders.</p>
    <div>
      <h3>Optimization attempt #2: Aho-Corasick → match</h3>
      <a href="#optimization-attempt-2-aho-corasick-match">
        
      </a>
    </div>
<p>One might think the Aho-Corasick DFA optimization is enough, and that it's unlikely anything else can beat it. Yet we can throw Aho-Corasick away, simply use a Rust match statement, and let the compiler do the optimization for us!</p>
            <pre><code>#[inline]
const fn norm_vocab_lookup(ngram: &amp;[u8; 3]) -&gt; usize {     
    match ngram {
        b"abc" =&gt; 1,
        b"def" =&gt; 2,
        b"wuq" =&gt; 3,
        b"ijf" =&gt; 4,
        b"iru" =&gt; 5,
        b"piw" =&gt; 6,
        b"mjw" =&gt; 7,
        b"isn" =&gt; 8,
        b"od " =&gt; 9,
        b"pro" =&gt; 10,
        ...
        _ =&gt; 0,
    }
}</code></pre>
<p>Here's how it performs in practice, based on the assembly generated by the <a href="https://godbolt.org/z/dqTq5n5Y3">Godbolt compiler explorer</a>. The generated code implements this lookup efficiently, employing a jump table and byte-wise comparisons to determine the return value, optimizing for quick decisions and minimal branching. Although the example only includes ten ngrams, in applications like our WAF Attack Score ML models we deal with thousands of ngrams. This simple match-based approach outshines both HashMap lookups and the Aho-Corasick method.</p>
<table><thead>
  <tr>
    <th><span>Benchmark case</span></th>
    <th><span>Baseline time, μs</span></th>
    <th><span>Match time, μs</span></th>
    <th><span>Optimization</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>preprocessing/long-body-9482</span></td>
    <td><span>248.46</span></td>
    <td>	<span>112.96</span></td>
    <td><span>-54.54% or 2.20x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-body-1000</span></td>
    <td><span>28.19</span></td>
    <td>	<span>13.12</span></td>
    <td><span>-53.45% or 2.15x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-url-44</span></td>
    <td><span>1.45</span></td>
    <td><span>0.75</span></td>
    <td><span>-48.37% or 1.94x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-ua-91</span></td>
    <td><span>2.87</span></td>
    <td><span>1.41</span></td>
    <td><span>-50.91% or 2.04x</span></td>
  </tr>
</tbody></table><p>Switching to match shaved another 7-18 percentage points off the latency relative to the baseline, depending on the case.</p>
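<p>To make the approach concrete, here is a minimal, runnable sketch of the match-based lookup loop, using only the ten sample ngrams shown above (the real vocabulary holds thousands of entries):</p>

```rust
// Match-based vocabulary lookup over the sample ngrams.
// Index 0 is reserved for unknown ngrams.
const fn norm_vocab_lookup(ngram: &[u8; 3]) -> usize {
    match ngram {
        b"abc" => 1,
        b"def" => 2,
        b"wuq" => 3,
        b"ijf" => 4,
        b"iru" => 5,
        b"piw" => 6,
        b"mjw" => 7,
        b"isn" => 8,
        b"od " => 9,
        b"pro" => 10,
        _ => 0,
    }
}

// Slide a 3-byte window over the input and bump the matching
// tensor slot, mirroring the earlier HashMap-based loop.
fn populate(tensor: &mut [f32], input: &[u8]) {
    for s in input.windows(3) {
        let ngram: &[u8; 3] = s.try_into().unwrap();
        tensor[norm_vocab_lookup(ngram)] += 1.0;
    }
}
```

<p>Feeding <i>b"abcdef"</i> through <i>populate</i> increments slots 1 ("abc") and 2 ("def"), while the two unknown windows "bcd" and "cde" land in slot 0.</p>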
    <div>
      <h3>Optimization attempt #3: Regex → WindowedReplacer</h3>
      <a href="#optimization-attempt-3-regex-windowedreplacer">
        
      </a>
    </div>
    <p>So, what exactly is the purpose of <i>Regex::replace_all</i> in pre-processing? Regex is defined and used like this:</p>
            <pre><code>pub static SIG_REGEX: Lazy&lt;Regex&gt; =
    Lazy::new(|| RegexBuilder::new("[a-z]+").unicode(false).build().unwrap());
    ... 
    let res = SIG_REGEX.replace_all(&amp;input, b"#");
    for s in res.windows(3) {
        tensor[sig_vocab_lookup(s.try_into().unwrap())] += 1.0;
    }</code></pre>
<p>Essentially, all we need is to:</p><ol><li><p>Replace every sequence of lowercase letters in the input with a single byte "#".</p></li><li><p>Iterate over the replaced bytes in overlapping 3-byte windows, each representing an ngram.</p></li><li><p>Look up each ngram's tensor index and increment the corresponding tensor entry.</p></li></ol><p>This logic seems simple enough that we could implement it more efficiently with a single pass over the input and without any allocations:</p>
            <pre><code>type Window = [u8; 3];
type Iter&lt;'a&gt; = Peekable&lt;std::slice::Iter&lt;'a, u8&gt;&gt;;

pub struct WindowedReplacer&lt;'a&gt; {
    window: Window,
    input_iter: Iter&lt;'a&gt;,
}

#[inline]
fn is_replaceable(byte: u8) -&gt; bool {
    matches!(byte, b'a'..=b'z')
}

#[inline]
fn next_byte(iter: &amp;mut Iter) -&gt; Option&lt;u8&gt; {
    let byte = iter.next().copied()?;
    if is_replaceable(byte) {
        while iter.next_if(|b| is_replaceable(**b)).is_some() {}
        Some(b'#')
    } else {
        Some(byte)
    }
}

impl&lt;'a&gt; WindowedReplacer&lt;'a&gt; {
    pub fn new(input: &amp;'a [u8]) -&gt; Option&lt;Self&gt; {
        let mut window: Window = Default::default();
        let mut iter = input.iter().peekable();
        for byte in window.iter_mut().skip(1) {
            *byte = next_byte(&amp;mut iter)?;
        }
        Some(WindowedReplacer {
            window,
            input_iter: iter,
        })
    }
}

impl&lt;'a&gt; Iterator for WindowedReplacer&lt;'a&gt; {
    type Item = Window;

    #[inline]
    fn next(&amp;mut self) -&gt; Option&lt;Self::Item&gt; {
        for i in 0..2 {
            self.window[i] = self.window[i + 1];
        }
        let byte = next_byte(&amp;mut self.input_iter)?;
        self.window[2] = byte;
        Some(self.window)
    }
}</code></pre>
            <p>By utilizing the <i>WindowedReplacer</i>, we simplify the replacement logic:</p>
            <pre><code>if let Some(replacer) = WindowedReplacer::new(&amp;input) {                
    for ngram in replacer {
        tensor[sig_vocab_lookup(&amp;ngram)] += 1.0;
    }
}</code></pre>
<p>This new approach not only eliminates the need to allocate additional buffers for the replaced content, but also expresses the logic through Rust iterators, which the compiler optimizes effectively. You can view an example of the assembly output for this new iterator at the provided <a href="https://godbolt.org/z/fjaoP7z6Y">Godbolt link</a>.</p><p>Now let's benchmark this and compare against the original implementation:</p>
<table><thead>
  <tr>
    <th><span>Benchmark case</span></th>
    <th><span>Baseline time, μs</span></th>
    <th><span>Match time, μs</span></th>
    <th><span>Optimization</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>preprocessing/long-body-9482</span></td>
    <td><span>248.46</span></td>
    <td>	<span>51.00</span></td>
    <td><span>-79.47% or 4.87x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-body-1000</span></td>
    <td><span>28.19</span></td>
    <td>	<span>5.53</span></td>
    <td><span>-80.36% or 5.09x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-url-44</span></td>
    <td><span>1.45</span></td>
    <td><span>0.40</span></td>
    <td><span>-72.11% or 3.59x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-ua-91</span></td>
    <td><span>2.87</span></td>
    <td><span>0.69</span></td>
    <td><span>-76.07% or 4.18x</span></td>
  </tr>
</tbody></table><p>The new letter-replacement implementation has more than doubled preprocessing speed compared to the previously optimized match-based version, and it is four to five times faster than the original!</p>
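<p>As a sanity check on the windowed-replacement technique, its output can be compared against a naive two-pass reference that allocates the replaced string first. The sketch below reimplements both in compact form (function names are illustrative, not from the codebase):</p>

```rust
use std::iter::Peekable;

fn is_replaceable(byte: u8) -> bool {
    matches!(byte, b'a'..=b'z')
}

// Reference implementation: allocate the replaced string, then window it.
fn naive_windows(input: &[u8]) -> Vec<[u8; 3]> {
    let mut replaced = Vec::new();
    let mut i = 0;
    while i < input.len() {
        if is_replaceable(input[i]) {
            // Collapse a run of lowercase letters into a single '#'.
            replaced.push(b'#');
            while i < input.len() && is_replaceable(input[i]) {
                i += 1;
            }
        } else {
            replaced.push(input[i]);
            i += 1;
        }
    }
    replaced.windows(3).map(|w| w.try_into().unwrap()).collect()
}

// Single-pass variant using the same peekable-iterator trick as
// WindowedReplacer: emit '#' once per lowercase run, with no
// allocation for the replaced content itself.
fn next_byte(iter: &mut Peekable<std::slice::Iter<'_, u8>>) -> Option<u8> {
    let byte = iter.next().copied()?;
    if is_replaceable(byte) {
        while iter.next_if(|b| is_replaceable(**b)).is_some() {}
        Some(b'#')
    } else {
        Some(byte)
    }
}

fn single_pass_windows(input: &[u8]) -> Vec<[u8; 3]> {
    let mut iter = input.iter().peekable();
    let mut window = [0u8; 3];
    let mut out = Vec::new();
    // Pre-fill the last two slots, as WindowedReplacer::new does.
    for slot in window.iter_mut().skip(1) {
        match next_byte(&mut iter) {
            Some(b) => *slot = b,
            None => return out,
        }
    }
    while let Some(byte) = next_byte(&mut iter) {
        window[0] = window[1];
        window[1] = window[2];
        window[2] = byte;
        out.push(window);
    }
    out
}
```

<p>For example, <i>b"ab1cd"</i> collapses to "#1#", producing the single window <i>[b'#', b'1', b'#']</i> from both implementations.</p>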
    <div>
      <h3>Optimization attempt #4: Going nuclear with branchless ngram lookups</h3>
      <a href="#optimization-attempt-4-going-nuclear-with-branchless-ngram-lookups">
        
      </a>
    </div>
<p>At this point, a 4-5x improvement might seem like plenty, with no point in pursuing further optimizations. After all, the match-based ngram lookup has beaten the following methods (benchmarks omitted for brevity):</p>
<table><thead>
  <tr>
    <th><span>Lookup method</span></th>
    <th><span>Description</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><a href="https://doc.rust-lang.org/std/collections/struct.HashMap.html"><span>std::collections::HashMap</span></a></td>
    <td><span>Uses </span><a href="https://github.com/rust-lang/hashbrown"><span>Google’s SwissTable</span></a><span> design with SIMD lookups to scan multiple hash entries in parallel. </span></td>
  </tr>
  <tr>
    <td><a href="https://docs.rs/aho-corasick/latest/aho_corasick/#"><span>Aho-Corasick</span></a><span> matcher with and without </span><a href="https://docs.rs/aho-corasick/latest/aho_corasick/dfa/struct.DFA.html"><span>DFA</span></a></td>
    <td><span>Also utilizes SIMD instructions in some cases.</span></td>
  </tr>
  <tr>
    <td><a href="https://crates.io/crates/phf"><span>phf crate</span></a><span> </span></td>
    <td><span>A library to generate efficient lookup tables at compile time using </span><a href="https://en.wikipedia.org/wiki/Perfect_hash_function"><span>perfect hash functions</span></a><span>.</span></td>
  </tr>
  <tr>
    <td><a href="https://crates.io/crates/ph"><span>ph crate</span></a></td>
    <td><span>Another Rust library of data structures based on perfect hashing. </span></td>
  </tr>
  <tr>
    <td><a href="https://crates.io/crates/quickphf"><span>quickphf crate</span></a></td>
    <td><span>A Rust crate that allows you to use static compile-time generated hash maps and hash sets using </span><a href="https://arxiv.org/abs/2104.10402"><span>PTHash perfect hash functions</span></a><span>.</span></td>
  </tr>
</tbody></table><p>However, if we look again at <a href="https://godbolt.org/z/dqTq5n5Y3">the assembly of the norm_vocab_lookup function</a>, it is clear that the execution flow has to perform a bunch of comparisons using <i>cmp</i> instructions. This creates many branches for the CPU to handle, which can lead to branch mispredictions. Branch mispredictions occur when the CPU incorrectly guesses the path of execution, causing delays as it discards partially completed instructions and fetches the correct ones. By reducing or eliminating these branches, we can avoid these mispredictions and improve the efficiency of the lookup process. But how can we get rid of those branches when we need to look up thousands of unique ngrams?</p><p>Since there are only 3 bytes in each ngram, we can build two lookup tables of 256 x 256 x 256 size, storing the ngram tensor index. With this naive approach, our memory requirement would be 256 x 256 x 256 x 2 (tables) x 2 (bytes per u16 entry) = 64 MB, which seems like a lot.</p><p>However, given that we only care about ASCII bytes 0..127, the memory requirement can be lower: 128 x 128 x 128 x 2 x 2 = 8 MB, which is better. But then we would need to check for bytes &gt;= 128, which would introduce a branch again.</p><p>So can we do better? Considering that the actual number of distinct byte values used in the ngrams is significantly less than the total possible 256 values, we can reduce memory requirements further by employing the following technique:</p><p>1. To avoid the branching caused by comparisons, we use precomputed offset lookup tables. Instead of comparing each byte of the ngram during each lookup, we precompute the position of each possible byte in a lookup table, replacing comparison operations with direct memory accesses, which are much faster and do not involve branching. We build a const array of ngram byte offsets, storing each unique byte's rank scaled by the stride of its position within the ngram:</p>
            <pre><code>const NGRAM_OFFSETS: [[u32; 256]; 3] = [
    [
        // offsets of first byte in ngram
    ],
    [
        // offsets of second byte in ngram
    ],
    [
        // offsets of third byte in ngram
    ],
];</code></pre>
            <p>2. Then to obtain the ngram index, we can use this simple const function:</p>
            <pre><code>#[inline]
const fn ngram_index(ngram: [u8; 3]) -&gt; usize {
    (NGRAM_OFFSETS[0][ngram[0] as usize]
        + NGRAM_OFFSETS[1][ngram[1] as usize]
        + NGRAM_OFFSETS[2][ngram[2] as usize]) as usize
}</code></pre>
            <p>3. To look up the tensor index based on the ngram index, we construct another const array at compile time using a list of all ngrams, where N is the number of unique ngram bytes:</p>
            <pre><code>const NGRAM_TENSOR_IDX: [u16; N * N * N] = {
    let mut arr = [0; N * N * N];
    arr[ngram_index(*b"abc")] = 1;
    arr[ngram_index(*b"def")] = 2;
    arr[ngram_index(*b"wuq")] = 3;
    arr[ngram_index(*b"ijf")] = 4;
    arr[ngram_index(*b"iru")] = 5;
    arr[ngram_index(*b"piw")] = 6;
    arr[ngram_index(*b"mjw")] = 7;
    arr[ngram_index(*b"isn")] = 8;
    arr[ngram_index(*b"od ")] = 9;
    ...
    arr
};</code></pre>
<p>4. Finally, to update the tensor for a given ngram, we look up the ngram index, then the tensor index, and increment the tensor entry with the help of <a href="https://doc.rust-lang.org/std/primitive.slice.html#method.get_unchecked_mut">get_unchecked_mut</a>, which avoids (in this case unnecessary) bounds checks and eliminates another source of branching:</p>
            <pre><code>#[inline]
fn update_tensor_with_ngram(tensor: &amp;mut [f32], ngram: [u8; 3]) {
    let ngram_idx = ngram_index(ngram);
    debug_assert!(ngram_idx &lt; NGRAM_TENSOR_IDX.len());
    unsafe {
        let tensor_idx = *NGRAM_TENSOR_IDX.get_unchecked(ngram_idx) as usize;
        debug_assert!(tensor_idx &lt; tensor.len());
        *tensor.get_unchecked_mut(tensor_idx) += 1.0;
    }
}</code></pre>
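<p>Putting steps 1 through 4 together on a toy three-letter alphabet {a, b, c} (so N = 3, with hypothetical ngrams rather than the production vocabulary), the whole branchless path can be exercised like this:</p>

```rust
const N: usize = 3;

// Step 1: offset tables. For each ngram position, a byte's entry is
// its rank in the alphabet times the stride for that position
// (N*N for the first byte, N for the second, 1 for the third).
const NGRAM_OFFSETS: [[u32; 256]; 3] = {
    let mut t = [[0u32; 256]; 3];
    let alphabet = [b'a', b'b', b'c'];
    let mut rank = 0;
    while rank < N {
        let byte = alphabet[rank] as usize;
        t[0][byte] = (rank * N * N) as u32;
        t[1][byte] = (rank * N) as u32;
        t[2][byte] = rank as u32;
        rank += 1;
    }
    t
};

// Step 2: three loads and two adds -- no comparisons, no branches.
const fn ngram_index(ngram: [u8; 3]) -> usize {
    (NGRAM_OFFSETS[0][ngram[0] as usize]
        + NGRAM_OFFSETS[1][ngram[1] as usize]
        + NGRAM_OFFSETS[2][ngram[2] as usize]) as usize
}

// Step 3: tensor indices assigned at compile time; 0 means unknown.
const NGRAM_TENSOR_IDX: [u16; N * N * N] = {
    let mut arr = [0; N * N * N];
    arr[ngram_index(*b"abc")] = 1;
    arr[ngram_index(*b"cab")] = 2;
    arr
};
```

<p>Every distinct ngram over the alphabet gets a distinct index (e.g. "aaa" maps to 0 and "ccc" to 26), so the lookup is collision-free without a single comparison.</p>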
<p>This logic works effectively, passes correctness tests, and most importantly, it's completely branchless! Moreover, the memory footprint of the lookup arrays is tiny – just ~500 KiB – which easily fits into modern CPU L2/L3 caches, ensuring that expensive cache misses are rare and performance is optimal.</p><p>The last trick we will employ is loop unrolling for ngram processing. By taking 6 ngrams (corresponding to 8 bytes of the input array) at a time, we let the compiler unroll the inner loop and auto-vectorize it, leveraging parallel execution to improve performance:</p>
            <pre><code>const CHUNK_SIZE: usize = 6;

let chunks_max_offset =
    ((input.len().saturating_sub(2)) / CHUNK_SIZE) * CHUNK_SIZE;
for i in (0..chunks_max_offset).step_by(CHUNK_SIZE) {
    for ngram in input[i..i + CHUNK_SIZE + 2].windows(3) {
        update_tensor_with_ngram(tensor, ngram.try_into().unwrap());
    }
}</code></pre>
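<p>Note that the chunked loop only covers complete 6-ngram blocks; the remaining tail ngrams still need a scalar pass (the cleanup loop below is an assumption for illustration, since only the unrolled part is shown above). A toy version that just counts ngrams shows the two loops together cover every window exactly once:</p>

```rust
const CHUNK_SIZE: usize = 6;

// Toy stand-in for tensor updates: count every 3-byte ngram once,
// using the chunked main loop plus a scalar tail loop.
fn count_ngrams_chunked(input: &[u8]) -> usize {
    let mut count = 0;
    let chunks_max_offset =
        ((input.len().saturating_sub(2)) / CHUNK_SIZE) * CHUNK_SIZE;
    for i in (0..chunks_max_offset).step_by(CHUNK_SIZE) {
        // 8 bytes per chunk -> 6 overlapping 3-byte ngrams.
        for _ngram in input[i..i + CHUNK_SIZE + 2].windows(3) {
            count += 1;
        }
    }
    // Scalar cleanup for the ngrams the chunked loop did not reach.
    for _ngram in input[chunks_max_offset..].windows(3) {
        count += 1;
    }
    count
}
```

<p>For any input length, this matches a plain <i>input.windows(3).count()</i>, confirming the chunk boundaries neither skip nor double-count ngrams.</p>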
            <p>Tying up everything together, our final pre-processing benchmarks show the following:</p>
<table><thead>
  <tr>
    <th><span>Benchmark case</span></th>
    <th><span>Baseline time, μs</span></th>
    <th><span>Branchless time, μs</span></th>
    <th><span>Optimization</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>preprocessing/long-body-9482</span></td>
    <td><span>248.46</span></td>
    <td><span>21.53</span></td>
    <td><span>-91.33% or 11.54x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-body-1000</span></td>
    <td><span>28.19</span></td>
    <td><span>2.33</span></td>
    <td><span>-91.73% or 12.09x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-url-44</span></td>
    <td><span>1.45</span></td>
    <td>	<span>0.26</span></td>
    <td><span>-82.34% or 5.66x</span></td>
  </tr>
  <tr>
    <td><span>preprocessing/avg-ua-91</span></td>
    <td><span>2.87</span></td>
    <td>	<span>0.43</span></td>
    <td><span>-84.92% or 6.63x</span></td>
  </tr>
</tbody></table><p>The longer the input, the bigger the latency drop delivered by branchless ngram lookups and loop unrolling, with results ranging from <b>six to twelve times faster</b> than the baseline implementation.</p><p>After trying various optimizations, the final version of pre-processing retains optimization attempts 3 and 4, using branchless ngram lookups with offset tables and a single-pass, non-allocating replacement iterator.</p><p>There are potentially more CPU cycles left on the table, and techniques like memory pre-fetching and manual SIMD intrinsics could speed this up a bit further. However, let's now switch gears and look at inference latency more closely.</p>
    <div>
      <h2>Model inference optimizations</h2>
      <a href="#model-inference-optimizations">
        
      </a>
    </div>
    
    <div>
      <h3>Initial benchmarks</h3>
      <a href="#initial-benchmarks">
        
      </a>
    </div>
<p>Let’s have a look at the original performance numbers of the WAF Attack Score ML model, which uses <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.6.0">TensorFlow Lite 2.6.0</a>:</p>
<table><thead>
  <tr>
    <th><span>Benchmark case</span></th>
    <th><span>Inference time, μs</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>inference/long-body-9482</span></td>
    <td><span>247.31</span></td>
  </tr>
  <tr>
    <td><span>inference/avg-body-1000</span></td>
    <td><span>246.31</span></td>
  </tr>
  <tr>
    <td><span>inference/avg-url-44</span></td>
    <td><span>246.40</span></td>
  </tr>
  <tr>
    <td><span>inference/avg-ua-91</span></td>
    <td><span>246.88</span></td>
  </tr>
</tbody></table><p>Model inference is actually independent of the original input length, as inputs are transformed into tensors of predetermined size during the pre-processing phase, which we optimized above. From now on, we will refer to a single inference time when benchmarking our optimizations.</p><p>Digging deeper with the profiler, we observed that most of the time is spent on the following operations:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3uy64gatRk8PfdnpRz5Xm5/0d3da469c30e5941524289c1b13574c5/unnamed--6--6.png" />
            
            </figure>
<table><thead>
  <tr>
    <th><span>Function name</span></th>
    <th><span>% Time spent</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>tflite::tensor_utils::PortableMatrixBatchVectorMultiplyAccumulate</span></td>
    <td><span>42.46%</span></td>
  </tr>
  <tr>
    <td><span>tflite::tensor_utils::PortableAsymmetricQuantizeFloats</span></td>
    <td><span>30.59%</span></td>
  </tr>
  <tr>
    <td><span>tflite::optimized_ops::SoftmaxImpl</span></td>
    <td><span>12.02%</span></td>
  </tr>
  <tr>
    <td><span>tflite::reference_ops::MaximumMinimumBroadcastSlow</span></td>
    <td><span>5.35%</span></td>
  </tr>
  <tr>
    <td><span>tflite::ops::builtin::elementwise::LogEval</span></td>
    <td><span>4.13%</span></td>
  </tr>
</tbody></table><p>The most expensive operation is matrix multiplication, which boils down to iteration within <a href="https://github.com/tensorflow/tensorflow/blob/v2.6.0/tensorflow/lite/kernels/internal/reference/portable_tensor_utils.cc#L119-L136">three nested loops</a>:</p>
            <pre><code>void PortableMatrixBatchVectorMultiplyAccumulate(const float* matrix,
                                                 int m_rows, int m_cols,
                                                 const float* vector,
                                                 int n_batch, float* result) {
  float* result_in_batch = result;
  for (int b = 0; b &lt; n_batch; b++) {
    const float* matrix_ptr = matrix;
    for (int r = 0; r &lt; m_rows; r++) {
      float dot_prod = 0.0f;
      const float* vector_in_batch = vector + b * m_cols;
      for (int c = 0; c &lt; m_cols; c++) {
        dot_prod += *matrix_ptr++ * *vector_in_batch++;
      }
      *result_in_batch += dot_prod;
     ++result_in_batch;
    }
  }
}</code></pre>
<p>This doesn’t look very efficient, and many <a href="https://en.algorithmica.org/hpc/algorithms/matmul/">blogs</a> and <a href="https://www.cs.utexas.edu/~flame/pubs/GotoTOMS_revision.pdf">research papers</a> have been written on how matrix multiplication can be optimized. The techniques basically boil down to:</p><ul><li><p><b>Blocking</b>: Divide matrices into smaller blocks that fit into the cache, improving cache reuse and reducing memory access latency.</p></li><li><p><b>Vectorization</b>: Use SIMD instructions to process multiple data points in parallel, enhancing efficiency with vector registers.</p></li><li><p><b>Loop Unrolling</b>: Reduce loop control overhead and increase parallelism by executing multiple loop iterations simultaneously.</p></li></ul><p>To gain a better understanding of how these techniques work, we recommend watching this video, which brilliantly depicts the process of matrix multiplication:</p>
<p></p>
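<p>To make the blocking idea concrete, here is a minimal Rust sketch (illustrative, not TFLite's actual kernel): the blocked variant walks the matrices in BLOCK x BLOCK tiles so each inner loop's working set stays cache-resident, and its fixed-size innermost loop is easy for the compiler to unroll and vectorize:</p>

```rust
// Naive triple loop, equivalent in spirit to the portable TFLite
// kernel above: C += A * B for row-major n x n matrices.
fn matmul_naive(a: &[f32], b: &[f32], c: &mut [f32], n: usize) {
    for i in 0..n {
        for j in 0..n {
            let mut dot = 0.0;
            for k in 0..n {
                dot += a[i * n + k] * b[k * n + j];
            }
            c[i * n + j] += dot;
        }
    }
}

const BLOCK: usize = 4;

// Cache-blocked variant: identical arithmetic, tile-by-tile order.
fn matmul_blocked(a: &[f32], b: &[f32], c: &mut [f32], n: usize) {
    assert!(n % BLOCK == 0, "sketch assumes n is a multiple of BLOCK");
    for ii in (0..n).step_by(BLOCK) {
        for kk in (0..n).step_by(BLOCK) {
            for jj in (0..n).step_by(BLOCK) {
                for i in ii..ii + BLOCK {
                    for k in kk..kk + BLOCK {
                        let a_ik = a[i * n + k];
                        // Fixed-trip-count loop: a prime candidate
                        // for compiler unrolling and SIMD.
                        for j in jj..jj + BLOCK {
                            c[i * n + j] += a_ik * b[k * n + j];
                        }
                    }
                }
            }
        }
    }
}
```

<p>Both functions produce the same result; the blocked one simply visits memory in a cache-friendlier order, which is the first of the three techniques listed above.</p>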
    <div>
      <h3>TensorFlow Lite with AVX2</h3>
      <a href="#tensorflow-lite-with-avx2">
        
      </a>
    </div>
    <p>TensorFlow Lite does, in fact, support SIMD matrix multiplication – we just need to enable it and re-compile the TensorFlow Lite library:</p>
            <pre><code>if [[ "$(uname -m)" == x86_64* ]]; then
    # On x86_64 target x86-64-v3 CPU to enable AVX2 and FMA.
    arguments+=("--copt=-march=x86-64-v3")
fi</code></pre>
<p>After running the profiler again using the SIMD-optimized TensorFlow Lite library:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NmJhJoYG42ZZhU41m0Uj5/1d9fce45d44f98b41375d6a56f1a7cac/unnamed--7--5.png" />
            
            </figure><p>Top operations as per profiler output:</p>
<table><thead>
  <tr>
    <th><span>Function name</span></th>
    <th><span>% Time spent</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>tflite::tensor_utils::SseMatrixBatchVectorMultiplyAccumulateImpl</span></td>
    <td><span>43.01%</span></td>
  </tr>
  <tr>
    <td><span>tflite::tensor_utils::NeonAsymmetricQuantizeFloats</span></td>
    <td><span>22.46%</span></td>
  </tr>
  <tr>
    <td><span>tflite::reference_ops::MaximumMinimumBroadcastSlow</span></td>
    <td><span>7.82%</span></td>
  </tr>
  <tr>
    <td><span>tflite::optimized_ops::SoftmaxImpl</span></td>
    <td><span>6.61%</span></td>
  </tr>
  <tr>
    <td><span>tflite::ops::builtin::elementwise::LogEval</span></td>
    <td><span>4.63%</span></td>
  </tr>
</tbody></table><p>Matrix multiplication now uses <a href="https://github.com/tensorflow/tensorflow/blob/15ec568b5505727c940b651aeb2a9643b504086c/tensorflow/lite/kernels/internal/optimized/sse_tensor_utils.cc#L161-L199">AVX2 instructions</a>, multiplying and accumulating the result in 8x8 blocks.</p><p>Proportionally, matrix multiplication and <a href="https://www.cloudflare.com/learning/ai/what-is-quantization/">quantization</a> operations take a similar share of time compared to the non-SIMD version; however, in absolute numbers, it’s almost twice as fast when SIMD optimizations are enabled:</p>
<table><thead>
  <tr>
    <th><span>Benchmark case</span></th>
    <th><span>Baseline time, μs</span></th>
    <th><span>SIMD time, μs</span></th>
    <th><span>Optimization</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>inference/avg-body-1000</span></td>
    <td><span>246.31</span></td>
    <td><span>130.07</span></td>
    <td><span>-47.19% or 1.89x</span></td>
  </tr>
</tbody></table><p>Quite a nice performance boost just from a few lines of build config change!</p>
    <div>
      <h3>TensorFlow Lite with XNNPACK</h3>
      <a href="#tensorflow-lite-with-xnnpack">
        
      </a>
    </div>
<p>TensorFlow Lite comes with a useful benchmarking tool called <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark">benchmark_model</a>, which also has a built-in profiler.</p><p>The tool can be built locally using the command:</p>
            <pre><code>bazel build -j 4 --copt=-march=native -c opt tensorflow/lite/tools/benchmark:benchmark_model</code></pre>
            <p>After building, benchmarks were run with different settings:</p>
<table><thead>
  <tr>
    <th><span>Benchmark run</span></th>
    <th><span>Inference time, μs</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>benchmark_model --graph=model.tflite --num_runs=100000 --use_xnnpack=false</span></td>
    <td><span>105.61</span></td>
  </tr>
  <tr>
    <td><span>benchmark_model --graph=model.tflite --num_runs=100000 --use_xnnpack=true --xnnpack_force_fp16=true</span></td>
    <td><span>111.95</span></td>
  </tr>
  <tr>
    <td><span>benchmark_model --graph=model.tflite --num_runs=100000 --use_xnnpack=true</span></td>
    <td><span>49.05</span></td>
  </tr>
</tbody></table><p>TensorFlow Lite with XNNPACK enabled emerges as the leader, achieving a ~50% latency reduction compared to the original TensorFlow Lite implementation.</p><p>More technical details about XNNPACK can be found in these blog posts:</p><ul><li><p><a href="https://blog.tensorflow.org/2022/06/Profiling-XNNPACK-with-TFLite.html">Profiling XNNPACK with TFLite</a></p></li><li><p><a href="https://blog.tensorflow.org/2024/04/faster-dynamically-quantized-inference-with-xnnpack.html">Faster Dynamically Quantized Inference with XNNPack</a></p></li></ul><p>Re-running benchmarks with XNNPACK enabled, we get the following results:</p>
<table><thead>
  <tr>
    <th><span>Benchmark case</span></th>
    <th><span>Baseline time, μs</span><br /><span>TFLite 2.6.0</span></th>
    <th><span>SIMD time, μs</span><br /><span>TFLite 2.6.0</span></th>
    <th><span>SIMD time, μs</span><br /><span>TFLite 2.16.1</span></th>
    <th><span>SIMD + XNNPack time, μs</span><br /><span>TFLite 2.16.1</span></th>
    <th><span>Optimization</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>inference/avg-body-1000</span></td>
    <td><span>246.31</span></td>
    <td><span>130.07</span></td>
    <td><span>115.17</span></td>
    <td><span>56.22</span></td>
    <td><span>-77.17% or 4.38x</span></td>
  </tr>
</tbody></table><p>By upgrading TensorFlow Lite from 2.6.0 to 2.16.1 and enabling SIMD optimizations along with XNNPACK, we were able to decrease WAF ML model inference time more than <b>four-fold</b>, achieving a <b>77.17%</b> reduction.</p>
    <div>
      <h2>Caching inference result</h2>
      <a href="#caching-inference-result">
        
      </a>
    </div>
    <p>While making code faster through pre-processing and inference optimizations is great, it's even better when code doesn't need to run at all. This is where caching comes in. <a href="https://en.wikipedia.org/wiki/Amdahl%27s_law">Amdahl's Law</a> suggests that optimizing only parts of a program yields diminishing returns. By avoiding redundant executions with caching, we can achieve significant performance gains beyond the limitations of traditional code optimization.</p><p>A simple key-value cache would quickly occupy all available memory on the server due to the high cardinality of URLs, HTTP headers, and HTTP bodies. However, because "everything on the Internet has an L-shape", or, more specifically, follows a <a href="https://en.wikipedia.org/wiki/Zipf%27s_law">Zipf's law</a> distribution, we can optimize our caching strategy.</p><p><a href="https://en.wikipedia.org/wiki/Zipf%27s_law">Zipf's law</a> states that in many natural datasets, the frequency of any item is inversely proportional to its rank in the frequency table. In other words, a few items are extremely common, while the majority are rare. By analyzing our request data, we found that URLs, HTTP headers, and even HTTP bodies follow this distribution. For example, here is the user agent header frequency distribution plotted against its rank:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/50OWcB7Buza1Jp77ePY75X/e25e66e7665fccc454df026e5ca37729/unnamed--8--3.png" />
            
            </figure><p>By caching the top-N most frequently occurring inputs and their corresponding inference results, we can ensure that both pre-processing and inference are skipped for the majority of requests. This is where the <a href="https://en.wikipedia.org/wiki/Cache_replacement_policies#LRU">Least Recently Used (LRU)</a> cache comes in – frequently used items stay hot in the cache, while the least recently used ones are evicted.</p><p>We use <a href="https://github.com/thibaultcha/lua-resty-mlcache">lua-resty-mlcache</a> as our caching solution, allowing us to share cached inference results between different Nginx workers via a shared memory dictionary. The LRU cache effectively exploits the <a href="https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff">space-time trade-off</a>, where we trade a small amount of memory for significant CPU time savings.</p><p>This approach enables us to achieve a <b>~70%</b> cache hit ratio, significantly reducing latency further, as we will analyze in the final section below.</p>
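<p>To illustrate why this works, here is a minimal, self-contained sketch (in Python purely for illustration; our production cache is the Lua library above) that replays Zipf-like traffic against a small LRU cache and measures the hit ratio:</p>

```python
import random
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: an OrderedDict where lookups move keys to the end
    and inserts evict the least recently used key once capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

# Simulate request keys following a Zipf-like (heavy-tailed) distribution:
# a few keys are extremely common, while most are rare.
random.seed(42)
keys = [f"key-{random.paretovariate(1.2):.0f}" for _ in range(100_000)]

cache = LRUCache(capacity=1000)
hits = 0
for key in keys:
    if cache.get(key) is not None:
        hits += 1
    else:
        cache.put(key, "inference-result")  # placeholder for a cached score

print(f"hit ratio: {hits / len(keys):.1%}")
```

<p>Even though the cache holds only a small fraction of the distinct keys, the skew of the distribution means the vast majority of lookups hit, which is exactly the effect we exploit.</p>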
    <div>
      <h2>Optimization results</h2>
      <a href="#optimization-results">
        
      </a>
    </div>
    <p>The optimizations discussed in this post were rolled out in several phases to ensure system correctness and stability.</p><p>First, we enabled SIMD optimizations for TensorFlow Lite, which reduced WAF ML total execution time by approximately <b>41.80%,</b> decreasing from <b>1519</b> ➔ <b>884 μs</b> on average.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15SMdamloYpjyUy5ZwH9o0/ab4ec787f870d27a45ff513db1f696c8/unnamed--9--3.png" />
            
            </figure><p>Next, we upgraded TensorFlow Lite from version 2.6.0 to 2.16.1, enabled XNNPACK, and implemented pre-processing optimizations. This further reduced WAF ML total execution time by <b>~40.77%</b>, bringing it down from <b>932</b> ➔ <b>552 μs</b> on average. The initial average time of 932 μs was slightly higher than the previous 884 μs due to the increased number of customers using this feature and the months that passed between changes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/01EuBB1eVopVjUjWsVvwrK/0d908285bd4296d75f5c98918cf1a561/unnamed--10--3.png" />
            
            </figure><p>Lastly, we introduced LRU caching, which led to an additional reduction in WAF ML total execution time by <b>~50.18%</b>, from <b>552</b> ➔ <b>275 μs</b> on average.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6epSwp5jz4ZMaVfdwahZnN/8c23c1b6a90bf301f9e7f6566d4f3295/unnamed--11--3.png" />
            
            </figure><p>Overall, we cut WAF ML execution time by <b>~81.90%</b>, decreasing from <b>1519</b> ➔ <b>275 μs</b>, or <b>5.5x</b> faster!</p><p>To illustrate the significance of this: with Cloudflare’s average rate of 9.5 million requests per second passing through WAF ML, saving <b>1244 microseconds</b> per request equates to saving ~<b>32 years</b> of processing time every single day! That’s in addition to the savings of <b>523 microseconds</b> per request or <b>65 years</b> of processing time per day demonstrated last year in our <a href="/scalable-machine-learning-at-cloudflare">Every request, every microsecond: scalable machine learning at Cloudflare</a> post about our Bot Management product.</p>
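<p>The arithmetic behind that figure is straightforward to check with a quick sketch:</p>

```python
requests_per_second = 9.5e6   # average requests per second through WAF ML
saved_per_request = 1244e-6   # seconds saved per request (1519 - 275 μs)

# CPU time saved per wall-clock day, in seconds
saved_per_day = requests_per_second * saved_per_request * 86_400

# Express that daily saving as years of processing time (~32 years)
years_per_day = saved_per_day / (365 * 86_400)
print(f"{years_per_day:.1f} years of processing time saved per day")
```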
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We hope you enjoyed reading about how we made our WAF ML models go brrr, just as much as we enjoyed implementing these optimizations to bring scalable WAF ML to more customers on a truly global scale.</p><p>Looking ahead, we are developing even more sophisticated ML security models. These advancements aim to bring our <a href="https://www.cloudflare.com/application-services/products/waf/">WAF</a> and <a href="https://www.cloudflare.com/application-services/products/bot-management/">Bot Management</a> products to the next level, making them even more useful and effective for our customers.</p> ]]></content:encoded>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Optimization]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[AI WAF]]></category>
            <category><![CDATA[WAF Attack Score]]></category>
            <guid isPermaLink="false">6y0Im81Uj2lKntznfYHfUY</guid>
            <dc:creator>Alex Bocharov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Declare your AIndependence: block AI bots, scrapers and crawlers with a single click]]></title>
            <link>https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/</link>
            <pubDate>Wed, 03 Jul 2024 13:00:26 GMT</pubDate>
            <description><![CDATA[ To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/D59Fq5QkC4J7Jjo5lM4Fm/fcc55b665562d321bd84f88f53f46b22/image7-1.png" />
            
            </figure><p>To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI bots</a>. It’s available for all customers, including those on our free tier.</p><p>The popularity of <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/">generative AI</a> has made the demand for content used to train models or run inference on skyrocket, and, although some AI companies clearly identify their web scraping bots, not all AI companies are being transparent. Google reportedly <a href="https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/">paid $60 million a year</a> to license Reddit’s user generated content, and most recently, Perplexity has been <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">accused of impersonating legitimate visitors</a> in order to scrape content from websites. The value of original content in bulk has never been higher.</p><p>Last year, <a href="/ai-bots">Cloudflare announced the ability for customers to easily block AI bots</a> that behave well. These bots follow <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/">robots.txt</a>, and don’t use unlicensed content to train their models or run inference for <a href="https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/">RAG</a> applications using website data. Even though these AI bots follow the rules, Cloudflare customers overwhelmingly opt to <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">block them</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5aAA77Hl9OM2vtI611QcUI/0992e096262e348b451efd8be296fa27/image9.png" />
            
            </figure><p>We hear clearly that customers don’t want AI bots visiting their websites, and especially those that do so dishonestly. To help, we’ve added a brand new one-click option to block all AI bots. It’s available for all customers, including those on the free tier. To enable it, simply navigate to the <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/bots/configure">Security &gt; Bots</a> section of the Cloudflare dashboard, and click the toggle labeled AI Scrapers and Crawlers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xD0lhy89vZb34dtIWukt1/3e0e1ed979a33d344e53d4da2a819e1e/image2.png" />
            
            </figure><p>This feature will automatically be updated over time as we identify new fingerprints of offending bots that are widely scraping the web for model training. To ensure we have a comprehensive understanding of all AI crawler activity, we surveyed traffic across our network.</p>
    <div>
      <h3>AI bot activity today</h3>
      <a href="#ai-bot-activity-today">
        
      </a>
    </div>
    <p>The graph below illustrates the most popular AI bots seen on Cloudflare’s network in terms of their request volume. We looked at common AI crawler user agents and aggregated the number of requests on our platform from these AI user agents over the last year:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/13pNq4MJJB92Dcs1ghxC6k/b7e7acc7e65e9e0958eed5d4b4cb0594/image6.png" />
            
            </figure><p>When looking at the number of requests made to Cloudflare sites, we see that <i>Bytespider</i>, <i>Amazonbot</i>, <i>ClaudeBot</i>, and <i>GPTBot</i> are the top four AI crawlers. Operated by ByteDance, the Chinese company that owns TikTok, <i>Bytespider</i> is reportedly used to gather training data for its large language models (LLMs), including those that support its ChatGPT rival, Doubao. <i>Amazonbot</i> and <i>ClaudeBot</i> follow <i>Bytespider</i> in request volume. <i>Amazonbot</i>, reportedly used to index content for Alexa’s question-answering, sent the second-highest number of requests, and <i>ClaudeBot</i>, used to train the Claude chatbot, has recently increased in request volume.</p><p>Among the top AI bots that we see, <i>Bytespider</i> not only leads in terms of number of requests but also in both the extent of its Internet property crawling and the frequency with which it is blocked. Following closely is <i>GPTBot</i>, which ranks second in both crawling and being blocked. <i>GPTBot</i>, managed by OpenAI, collects training data for its LLMs, which underpin AI-driven products such as ChatGPT. In the table below, “Share of websites accessed” refers to the proportion of websites protected by Cloudflare that were accessed by the named AI bot.</p>
<table><thead>
  <tr>
    <th><span>AI Bot</span></th>
    <th><span>Share of Websites Accessed</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Bytespider</span></td>
    <td><span>40.40%</span></td>
  </tr>
  <tr>
    <td><span>GPTBot</span></td>
    <td><span>35.46%</span></td>
  </tr>
  <tr>
    <td><span>ClaudeBot</span></td>
    <td><span>11.17%</span></td>
  </tr>
  <tr>
    <td><span>ImagesiftBot</span></td>
    <td><span>8.75%</span></td>
  </tr>
  <tr>
    <td><span>CCBot</span></td>
    <td><span>2.14%</span></td>
  </tr>
  <tr>
    <td><span>ChatGPT-User</span></td>
    <td><span>1.84%</span></td>
  </tr>
  <tr>
    <td><span>omgili</span></td>
    <td><span>0.10%</span></td>
  </tr>
  <tr>
    <td><span>Diffbot</span></td>
    <td><span>0.08%</span></td>
  </tr>
  <tr>
    <td><span>Claude-Web</span></td>
    <td><span>0.04%</span></td>
  </tr>
  <tr>
    <td><span>PerplexityBot</span></td>
    <td><span>0.01%</span></td>
  </tr>
</tbody></table><p>While our analysis identified the most popular crawlers in terms of request volume and number of Internet properties accessed, many customers are likely not aware of the more popular AI crawlers actively crawling their sites. Our Radar team performed an analysis of the top robots.txt entries across the <a href="https://radar.cloudflare.com/domains">top 10,000 Internet domains</a> to identify the most commonly actioned AI bots, then looked at how frequently we saw these bots on sites protected by Cloudflare.</p><p>In the graph below, which looks at disallowed crawlers for these sites, we see that customers most often reference <i>GPTBot, CCBot</i>, and <i>Google</i> in robots.txt, but do not specifically disallow popular AI crawlers like <i>Bytespider</i> and <i>ClaudeBot</i>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6m4jV8g9sQ0BLR7OIonsoB/a4c3100a34160c96aea07c4ed4bc6a8d/image3.png" />
            
            </figure><p>With the Internet now flooded with these AI bots, we were curious to see how website operators have already responded. In June, AI bots accessed around 39% of the top one million Internet properties using Cloudflare, but only 2.98% of these properties took measures to block or challenge those requests. Moreover, the higher-ranked (more popular) an Internet property is, the more likely it is to be targeted by AI bots, and correspondingly, the more likely it is to block such requests.</p>
<table><thead>
  <tr>
    <th><span>Top N Internet properties by number of visitors seen by Cloudflare</span></th>
    <th><span>% accessed by AI bots</span></th>
    <th><span>% blocking AI bots</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>10</span></td>
    <td><span>80.0%</span></td>
    <td><span>40.0%</span></td>
  </tr>
  <tr>
    <td><span>100</span></td>
    <td><span>63.0%</span></td>
    <td><span>16.0%</span></td>
  </tr>
  <tr>
    <td><span>1,000</span></td>
    <td><span>53.2%</span></td>
    <td><span>8.8%</span></td>
  </tr>
  <tr>
    <td><span>10,000</span></td>
    <td><span>47.99%</span></td>
    <td><span>8.92%</span></td>
  </tr>
  <tr>
    <td><span>100,000</span></td>
    <td><span>44.53%</span></td>
    <td><span>6.36%</span></td>
  </tr>
  <tr>
    <td><span>1,000,000</span></td>
    <td><span>38.73%</span></td>
    <td><span>2.98%</span></td>
  </tr>
</tbody></table>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gCWVJMv9GajRT3H8BQ5EM/effb5e2b52c0bdecb99f5f4e339c8d1d/image4.png" />
            
            </figure><p>We see website operators completely block access to these AI crawlers using robots.txt. However, these blocks rely on the bot operator respecting robots.txt and adhering to <a href="https://www.rfc-editor.org/rfc/rfc9309.html#name-the-user-agent-line">RFC 9309</a> (which requires that variations on the user agent all match the product token) to honestly identify who they are when they visit an Internet property. User agents, though, are trivial for bot operators to change.</p>
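<p>For reference, a robots.txt policy that disallows several of the AI crawlers named above would look like the following (the user-agent values are the product tokens these bots publish; the exact set a site blocks is up to its operator):</p>

```txt
User-agent: GPTBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Any compliant crawler not matched above may access everything
User-agent: *
Disallow:
```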
    <div>
      <h3>How we find AI bots pretending to be real web browsers</h3>
      <a href="#how-we-find-ai-bots-pretending-to-be-real-web-browsers">
        
      </a>
    </div>
    <p>Sadly, we’ve observed bot operators attempt to appear as though they are a real browser by using a spoofed user agent. We’ve monitored this activity over time, and we’re proud to say that our global machine learning model has always recognized this activity as a bot, even when operators lie about their user agent.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JpBRAGuQ1DOCTSFu9yHbH/9c11b569a30f68ddb1b4c197054ed1c8/image1.png" />
            
            </figure><p>Take one example of a specific bot that <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">others</a> observed to be <a href="https://www.wired.com/story/perplexity-is-a-bullshit-machine/">hiding their activity</a>. We ran an analysis to see how our machine learning models scored traffic from this bot. In the diagram below, you can see that all <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">bot scores</a> are firmly below 30, indicating that our scoring thinks this activity is likely to be coming from a bot.</p><p>The diagram reflects scoring of the requests using <a href="/residential-proxy-bot-detection-using-machine-learning">our newest model</a>, where “hotter” colors indicate more requests falling in that band, and “cooler” colors indicate fewer. We can see the vast majority of requests fell into the bottom two bands, showing that Cloudflare’s model gave the offending bot a score of 9 or less. The user agent changes have no effect on the score, because this is the very first thing we expect bot operators to do.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1y0G6D2b512V1UAgR6sooD/4cc9b659f091e84facbec66a30baafad/image5.png" />
            
            </figure><p>Any customer with an existing WAF rule set to challenge visitors with a bot score below 30 (our recommendation) automatically blocked all of this AI bot traffic with no new action on their part. The same will be true for future AI bots that use similar techniques to hide their activity.</p><p>We leverage Cloudflare’s global signals to calculate our Bot Score, which, for AI bots like the one above, reflects that we correctly identify and score them as a “likely bot.”</p><p>When bad actors attempt to crawl websites at scale, they generally use tools and frameworks that we are able to fingerprint. For every fingerprint we see, we use Cloudflare’s network, which sees over 57 million requests per second on average, to understand how much we should trust this fingerprint. To power our models, we compute global aggregates across many signals. Based on these signals, our models were able to appropriately flag traffic from evasive AI bots, like the example mentioned above, as bots.</p><p>The upshot of this globally aggregated data is that we can immediately detect new scraping tools and their behavior without needing to manually fingerprint the bot, ensuring that customers stay protected from the newest waves of bot activity.</p><p>If you have a tip on an AI bot that’s not behaving, we’d love to investigate. There are two options you can use to report misbehaving AI crawlers:</p><p>1. Enterprise Bot Management customers can submit a False Negative <a href="https://developers.cloudflare.com/bots/concepts/feedback-loop/">Feedback Loop</a> report via Bot Analytics by simply selecting the segment of traffic where they noticed misbehavior:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/iwX6mvdnqg3KGRN0dMHou/a7c0a39275680db58f49c9292ca180c7/image8.png" />
            
            </figure><p>2. We’ve also set up a <a href="https://docs.google.com/forms/d/14bX0RJH_0w17_cAUiihff5b3WLKzfieDO4upRlo5wj8/edit">reporting tool</a> where any Cloudflare customer can submit reports of an AI bot scraping their website without permission.</p><p>We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection. We will continue to keep watch and add more bot blocks to our AI Scrapers and Crawlers rule and evolve our machine learning models to help keep the Internet a place where content creators can thrive and keep full control over which models their content is used to train or run inference on.</p> ]]></content:encoded>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Generative AI]]></category>
            <guid isPermaLink="false">4iUvyS3jKebfV9pHwg7pol</guid>
            <dc:creator>Alex Bocharov</dc:creator>
            <dc:creator>Santiago Vargas</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
            <dc:creator>Reid Tatoris</dc:creator>
            <dc:creator>Carlos Azevedo</dc:creator>
        </item>
        <item>
            <title><![CDATA[Every request, every microsecond: scalable machine learning at Cloudflare]]></title>
            <link>https://blog.cloudflare.com/scalable-machine-learning-at-cloudflare/</link>
            <pubDate>Mon, 19 Jun 2023 13:00:51 GMT</pubDate>
            <description><![CDATA[ We'll describe the technical strategies that have enabled us to expand the number of machine learning features and models, all while substantially reducing the processing time for each HTTP request on our network ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ezkO9JuKoLVAFi0ynmBfc/57b5f4032ff2789953cabb7f60c3a9ea/image7-3.png" />
            
            </figure><p>In this post, we will take you through the advancements we've made in our <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning</a> capabilities. We'll describe the technical strategies that have enabled us to expand the number of machine learning features and models, all while substantially reducing the processing time for each HTTP request on our network. Let's begin.</p>
    <div>
      <h2>Background</h2>
      <a href="#background">
        
      </a>
    </div>
    <p>For a comprehensive understanding of our evolved approach, it's important to grasp the context within which our machine learning detections operate. Cloudflare, on average, serves over <b>46 million HTTP requests per second</b>, surging to more than 63 million requests per second during peak times.</p><p>Machine learning detection plays a crucial role in ensuring the security and integrity of this vast network. In fact, it classifies the largest volume of requests among all our detection mechanisms, providing the final <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">Bot Score</a> decision for <b>over 72%</b> of all HTTP requests. Going beyond, we run several machine learning models in shadow mode for every HTTP request.</p><p>At the heart of our machine learning infrastructure lies our reliable ally, <a href="https://catboost.ai/">CatBoost</a>. It enables ultra low-latency model inference and ensures high-quality predictions to detect novel threats such as <a href="/machine-learning-mobile-traffic-bots/">stopping bots targeting our customers' mobile apps</a>. However, it's worth noting that <b>machine learning model inference</b> is just one component of the overall latency equation. Other critical components include <b>machine learning feature extraction and preparation</b>. In our quest for optimal performance, we've continuously optimized each aspect contributing to the overall latency of our system.</p><p>Initially, our machine learning models relied on <b>single-request features</b>, such as presence or value of certain headers. However, given the ease of spoofing these attributes, we evolved our approach. We turned to <b>inter-request features</b> that leverage aggregated information across multiple dimensions of a request in a sliding time window. 
For example, we now consider factors like the number of unique user agents associated with certain request attributes.</p><p>The extraction and preparation of inter-request features were handled by <b>Gagarin</b>, a Go-based feature serving platform we developed. As a request arrived at Cloudflare, we extracted dimension keys from the request attributes. We then looked up the corresponding machine learning features in the <a href="https://github.com/thibaultcha/lua-resty-mlcache">multi-layered cache</a>. If the desired machine learning features were not found in the cache, a <b>memcached "get" request</b> was made to Gagarin to fetch those. Then machine learning features were plugged into CatBoost models to produce detections, which were then surfaced to the customers via Firewall and Workers fields and internally through our <a href="/http-analytics-for-6m-requests-per-second-using-clickhouse/">logging pipeline to ClickHouse</a>. This allowed our data scientists to run further experiments, producing more features and models.</p>
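<p>As a hypothetical sketch (in Python rather than Go, and greatly simplified compared to Gagarin), an inter-request feature such as "unique user agents per dimension key in a sliding time window" could be computed like this; the class name, bucket granularity, and dimension keys are illustrative assumptions, not our production design:</p>

```python
import time
from collections import defaultdict

class SlidingUniqueCounter:
    """Track 'unique user agents per dimension key' over a sliding window,
    approximated with per-minute buckets (a simplified, hypothetical sketch
    of the kind of inter-request feature described above)."""

    def __init__(self, window_minutes=10):
        self.window = window_minutes
        # minute bucket -> dimension key -> set of user agents seen
        self.buckets = defaultdict(lambda: defaultdict(set))

    def observe(self, dimension_key, user_agent, now=None):
        minute = int((now or time.time()) // 60)
        self.buckets[minute][dimension_key].add(user_agent)
        # drop buckets that fell out of the sliding window
        for old in [m for m in self.buckets if m <= minute - self.window]:
            del self.buckets[old]

    def unique_user_agents(self, dimension_key, now=None):
        minute = int((now or time.time()) // 60)
        agents = set()
        for m in range(minute - self.window + 1, minute + 1):
            agents |= self.buckets.get(m, {}).get(dimension_key, set())
        return len(agents)

counter = SlidingUniqueCounter(window_minutes=10)
t = 1_000_000.0
counter.observe("ip:203.0.113.7", "curl/8.5.0", now=t)
counter.observe("ip:203.0.113.7", "Mozilla/5.0 (X11; Linux x86_64)", now=t + 30)
counter.observe("ip:203.0.113.7", "curl/8.5.0", now=t + 60)  # duplicate UA
print(counter.unique_user_agents("ip:203.0.113.7", now=t + 60))  # → 2
```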
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1TDWrBIkxKHgZk1fjZnBJe/1340bc0dc964dd4b9022708463d91b24/image3-3.png" />
            
            </figure><p>Previous system design for serving machine learning features over Unix socket using Gagarin.</p><p>Initially, Gagarin exhibited decent latency, with a median latency around <b>200 microseconds</b> to serve all machine learning features for given keys. However, as our system evolved and we introduced more features and dimension keys, coupled with increased traffic, the cache hit ratio began to wane. The median latency had increased to <b>500 microseconds</b> and during peak times, the latency worsened significantly, with the p99 latency soaring to roughly <b>10 milliseconds</b>. Gagarin underwent extensive low-level tuning, optimization, profiling, and benchmarking. Despite these efforts, we encountered the limits of inter-process communication (IPC) using Unix Domain Socket (UDS), among other challenges, explored below.</p>
    <div>
      <h3>Problem definition</h3>
      <a href="#problem-definition">
        
      </a>
    </div>
    <p>In summary, the previous solution had its drawbacks, including:</p><ul><li><p><b>High tail latency</b>: during peak times, a portion of requests experienced increased latency caused by CPU contention on the Unix socket and the Lua garbage collector.</p></li><li><p><b>Suboptimal resource utilization:</b> CPU and RAM utilization was not optimized to the full potential, leaving fewer resources for other services running on the server.</p></li><li><p><b>Machine learning features availability</b>: decreased due to memcached timeouts, which resulted in a higher likelihood of false positives or false negatives for a subset of the requests.</p></li><li><p><b>Scalability constraints</b>: as we added more machine learning features, we approached the scalability limit of our infrastructure.</p></li></ul><p>Equipped with a comprehensive understanding of the challenges and armed with quantifiable metrics, we ventured into the next phase: seeking a more efficient way to fetch and serve machine learning features.</p>
    <div>
      <h2>Exploring solutions</h2>
      <a href="#exploring-solutions">
        
      </a>
    </div>
    <p>In our quest for more efficient methods of fetching and serving machine learning features, we evaluated several alternatives. The key approaches included:</p><p><b>Further optimizing Gagarin</b>: as we pushed our Go-based memcached server to its limits, we encountered a lower bound on latency reductions. This arose from the synchronization overhead of IPC over UDS and multiple data copies, the serialization/deserialization overheads, as well as the inherent latency of the garbage collector and the performance of hashmap lookups in Go.</p><p><b>Considering Quicksilver</b>: we contemplated using <a href="/tag/quicksilver/">Quicksilver</a>, but the volume and update frequency of machine learning features posed capacity concerns and potential negative impacts on other use cases. Moreover, it uses a Unix socket with the memcached protocol, reproducing the same limitations previously encountered.</p><p><b>Increasing multi-layered cache size:</b> we investigated expanding cache size to accommodate tens of millions of dimension keys. However, the associated memory consumption, due to duplication of these keys and their machine learning features across worker threads, rendered this approach untenable.</p><p><b>Sharding the Unix socket</b>: we considered sharding the Unix socket to alleviate contention and improve performance. Despite showing potential, this approach only partially solved the problem and introduced more system complexity.</p><p><b>Switching to RPC:</b> we explored the option of using RPC for communication between our front line server and Gagarin. However, since RPC still requires some form of communication bus (such as TCP, UDP, or UDS), it would not significantly change the performance compared to the memcached protocol over UDS, which was already simple and minimalistic.</p><p>After considering these approaches, we shifted our focus towards investigating alternative Inter-Process Communication (IPC) mechanisms.</p>
    <div>
      <h3>IPC mechanisms</h3>
      <a href="#ipc-mechanisms">
        
      </a>
    </div>
    <p>Adopting a <a href="https://en.wikipedia.org/wiki/First_principle">first principles</a> design approach, we questioned: "What is the most efficient low-level method for data transfer between two processes provided by the operating system?" Our goal was to find a solution that would enable the direct serving of machine learning features from memory for corresponding HTTP requests. By eliminating the need to traverse the Unix socket, we aimed to reduce CPU contention, improve latency, and minimize data copying.</p><p>To identify the most efficient IPC mechanism, we evaluated various options available within the Linux ecosystem. We used <a href="https://github.com/goldsborough/ipc-bench">ipc-bench</a>, an open-source benchmarking tool specifically designed for this purpose, to measure the latencies of different IPC methods in our test environment. The measurements were based on sending one million 1,024-byte messages back and forth (i.e., ping-pong) between two processes.</p>
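<p>A stripped-down version of such a ping-pong measurement, here over a Unix socketpair in Python, can be sketched as follows (a toy approximation for illustration, not ipc-bench's actual methodology; the message count is reduced to keep it quick):</p>

```python
import os
import socket
import time

MESSAGES = 10_000     # ipc-bench used one million; fewer keeps the demo fast
MESSAGE_SIZE = 1024

parent_sock, child_sock = socket.socketpair()
payload = b"x" * MESSAGE_SIZE

if os.fork() == 0:
    # Child: echo every message back (the "pong" side)
    parent_sock.close()
    for _ in range(MESSAGES):
        msg = child_sock.recv(MESSAGE_SIZE, socket.MSG_WAITALL)
        child_sock.sendall(msg)
    os._exit(0)

# Parent: send, wait for the echo, and time the round trips (the "ping" side)
child_sock.close()
start = time.perf_counter()
for _ in range(MESSAGES):
    parent_sock.sendall(payload)
    reply = parent_sock.recv(MESSAGE_SIZE, socket.MSG_WAITALL)
elapsed = time.perf_counter() - start
os.wait()

print(f"avg round trip: {elapsed / MESSAGES * 1e6:.2f} μs")
```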
<table>
<thead>
  <tr>
    <th><span>IPC method</span></th>
    <th><span>Avg duration, μs</span></th>
    <th><span>Avg throughput, msg/s</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>eventfd (bi-directional)</span></td>
    <td><span>9.456</span></td>
    <td><span>105,533</span></td>
  </tr>
  <tr>
    <td><span>TCP sockets</span></td>
    <td><span>8.74</span></td>
    <td><span>114,143</span></td>
  </tr>
  <tr>
    <td><span>Unix domain sockets</span></td>
    <td><span>5.609</span></td>
    <td><span>177,573</span></td>
  </tr>
  <tr>
    <td><span>FIFOs (named pipes)</span></td>
    <td><span>5.432</span></td>
    <td><span>183,388</span></td>
  </tr>
  <tr>
    <td><span>Pipe</span></td>
    <td><span>4.733</span></td>
    <td><span>210,369</span></td>
  </tr>
  <tr>
    <td><span>Message Queue</span></td>
    <td><span>4.396</span></td>
    <td><span>226,421</span></td>
  </tr>
  <tr>
    <td><span>Unix Signals</span></td>
    <td><span>2.45</span></td>
    <td><span>404,844</span></td>
  </tr>
  <tr>
    <td><span>Shared Memory</span></td>
    <td><span>0.598</span></td>
    <td><span>1,616,014</span></td>
  </tr>
  <tr>
    <td><span>Memory-Mapped Files</span></td>
    <td><span>0.503</span></td>
    <td><span>1,908,613</span></td>
  </tr>
</tbody>
</table><p>Based on our evaluation, we found that Unix sockets, while taking care of synchronization, were not the fastest IPC method available. The two fastest IPC mechanisms were shared memory and memory-mapped files. Both approaches offered similar performance, with the former using a specific tmpfs volume in /dev/shm and dedicated system calls, while the latter could be stored in any volume, including tmpfs or HDD/SSD.</p>
    <div>
      <h3>Missing ingredients</h3>
      <a href="#missing-ingredients">
        
      </a>
    </div>
    <p>In light of these findings, we decided to employ <a href="https://en.wikipedia.org/wiki/Memory-mapped_file"><b>memory-mapped files</b></a> as the IPC mechanism for serving machine learning features. This choice promised reduced latency, decreased CPU contention, and minimal data copying. However, it did not inherently offer data synchronization capabilities like Unix sockets. Unlike Unix sockets, memory-mapped files are simply files in a Linux volume that can be mapped into a process's memory. This sparked several critical questions:</p><ol><li><p>How could we efficiently fetch an array of hundreds of float features for given dimension keys when dealing with a file?</p></li><li><p>How could we ensure safe, concurrent and frequent updates for tens of millions of keys?</p></li><li><p>How could we avert the CPU contention previously encountered with Unix sockets?</p></li><li><p>How could we effectively support the addition of more dimensions and features in the future?</p></li></ol><p>To address these challenges, we needed to further evolve this new approach by adding a few key ingredients to the recipe.</p>
    <div>
      <h2>Augmenting the idea</h2>
      <a href="#augmenting-the-idea">
        
      </a>
    </div>
    <p>To realize our vision of memory-mapped files as a method for serving machine learning features, we needed to employ several key strategies, touching upon aspects like data synchronization, data structure, and deserialization.</p>
    <div>
      <h3>Wait-free synchronization</h3>
      <a href="#wait-free-synchronization">
        
      </a>
    </div>
    <p>When multiple processes share data, ensuring safe, concurrent, and frequent updates is paramount. Traditional locks are often not the most efficient solution, especially in high-concurrency environments. Here's a rundown of three different synchronization techniques:</p><p><b>With-lock synchronization</b>: a common approach using mechanisms like mutexes or spinlocks. It ensures only one thread can access the resource at a given time, but can suffer from contention, blocking, and priority inversion, as was evident with Unix sockets.</p><p><b>Lock-free synchronization</b>: this non-blocking approach employs atomic operations to ensure at least one thread always progresses. It eliminates traditional locks but requires careful handling of edge cases and race conditions.</p><p><b>Wait-free synchronization:</b> a more advanced technique that guarantees every thread makes progress and completes its operation without being blocked by other threads. It provides stronger progress guarantees than lock-free synchronization, ensuring that each thread completes its operation within a finite number of steps.</p>
<table>
<thead>
  <tr>
    <th></th>
    <th><span>Disjoint Access Parallelism</span></th>
    <th><span>Starvation Freedom</span></th>
    <th><span>Finite Execution Time</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>With lock</span></td>
    <td><img src="https://lh3.googleusercontent.com/mWo5ulSY0xrqrZamHxCTIpbHoK_IZXV3ApCFxhltGDZorv-jCY8cQm5O8aep8OIWqxxoHuO6EJLLMfUIBePSbyKEHPV9vF2FRvGWkK0VPxQrEuGONLQmmHmt0HLN4OyWQoXhKTZa2PILE7uv13on3IY" alt="no" /></td>
    <td><img src="https://lh3.googleusercontent.com/mWo5ulSY0xrqrZamHxCTIpbHoK_IZXV3ApCFxhltGDZorv-jCY8cQm5O8aep8OIWqxxoHuO6EJLLMfUIBePSbyKEHPV9vF2FRvGWkK0VPxQrEuGONLQmmHmt0HLN4OyWQoXhKTZa2PILE7uv13on3IY" alt="no" /></td>
    <td><img src="https://lh3.googleusercontent.com/mWo5ulSY0xrqrZamHxCTIpbHoK_IZXV3ApCFxhltGDZorv-jCY8cQm5O8aep8OIWqxxoHuO6EJLLMfUIBePSbyKEHPV9vF2FRvGWkK0VPxQrEuGONLQmmHmt0HLN4OyWQoXhKTZa2PILE7uv13on3IY" alt="no" /></td>
  </tr>
  <tr>
    <td><span>Lock-free</span></td>
    <td><img src="https://lh5.googleusercontent.com/INhbT87NAysOwV9HTJAV8a1eIMOaGW7VXSsEfyEGoM2J1TvjhlBsuDoFzuHRF-9CJav33USYa69OlyrfgYovpbKNo_WCgJWq3LOJkZavZLu61QUb-Up4G3i166cVvOrBYB2wqIU065iBV3FWOpHm0pE" alt="yes" /></td>
    <td><img src="https://lh3.googleusercontent.com/mWo5ulSY0xrqrZamHxCTIpbHoK_IZXV3ApCFxhltGDZorv-jCY8cQm5O8aep8OIWqxxoHuO6EJLLMfUIBePSbyKEHPV9vF2FRvGWkK0VPxQrEuGONLQmmHmt0HLN4OyWQoXhKTZa2PILE7uv13on3IY" alt="no" /></td>
    <td><img src="https://lh3.googleusercontent.com/mWo5ulSY0xrqrZamHxCTIpbHoK_IZXV3ApCFxhltGDZorv-jCY8cQm5O8aep8OIWqxxoHuO6EJLLMfUIBePSbyKEHPV9vF2FRvGWkK0VPxQrEuGONLQmmHmt0HLN4OyWQoXhKTZa2PILE7uv13on3IY" alt="no" /></td>
  </tr>
  <tr>
    <td><span>Wait-free</span></td>
    <td><img src="https://lh5.googleusercontent.com/INhbT87NAysOwV9HTJAV8a1eIMOaGW7VXSsEfyEGoM2J1TvjhlBsuDoFzuHRF-9CJav33USYa69OlyrfgYovpbKNo_WCgJWq3LOJkZavZLu61QUb-Up4G3i166cVvOrBYB2wqIU065iBV3FWOpHm0pE" alt="yes" /></td>
    <td><img src="https://lh5.googleusercontent.com/INhbT87NAysOwV9HTJAV8a1eIMOaGW7VXSsEfyEGoM2J1TvjhlBsuDoFzuHRF-9CJav33USYa69OlyrfgYovpbKNo_WCgJWq3LOJkZavZLu61QUb-Up4G3i166cVvOrBYB2wqIU065iBV3FWOpHm0pE" alt="yes" /></td>
    <td><img src="https://lh5.googleusercontent.com/INhbT87NAysOwV9HTJAV8a1eIMOaGW7VXSsEfyEGoM2J1TvjhlBsuDoFzuHRF-9CJav33USYa69OlyrfgYovpbKNo_WCgJWq3LOJkZavZLu61QUb-Up4G3i166cVvOrBYB2wqIU065iBV3FWOpHm0pE" alt="yes" /></td>
  </tr>
</tbody>
</table><p>Our <a href="https://en.wikipedia.org/wiki/Non-blocking_algorithm#Wait-freedom">wait-free</a> data access pattern draws inspiration from the <a href="https://www.kernel.org/doc/html/next/RCU/whatisRCU.html">Linux kernel's Read-Copy-Update (RCU) pattern</a> and the <a href="https://github.com/pramalhe/ConcurrencyFreaks/blob/master/papers/left-right-2014.pdf">Left-Right concurrency control technique</a>. In our solution, we maintain two copies of the data in separate memory-mapped files. Write access to this data is managed by a single writer, with multiple readers able to access the data concurrently.</p><p>We store the synchronization state, which coordinates access to these data copies, in a third memory-mapped file, referred to as "state". This file contains an atomic 64-bit integer, which represents an <code><b>InstanceVersion</b></code>, and a pair of additional atomic 32-bit variables, tracking the number of active readers for each data copy. The <code><b>InstanceVersion</b></code> consists of the currently active data file index (1 bit), the data size (39 bits, accommodating data sizes up to 549 GB), and a data checksum (24 bits).</p>
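<p>The bit budget above can be sketched as a pair of pack/unpack helpers. This is only an illustration of the layout described in the text — the exact field order inside the real <code>InstanceVersion</code> is an assumption here:</p>

```rust
/// Sketch of packing the synchronization state described above into a
/// single u64 suitable for one atomic store: 1-bit active data-file
/// index, 39-bit data size (up to 2^39 - 1 bytes, ~549 GB), 24-bit
/// checksum. The field order is assumed; only the bit budget comes
/// from the text.
const IDX_BITS: u32 = 1;
const SIZE_BITS: u32 = 39;
const CSUM_BITS: u32 = 24;

fn pack(idx: u64, size: u64, csum: u64) -> u64 {
    assert!(idx < (1u64 << IDX_BITS));
    assert!(size < (1u64 << SIZE_BITS));
    assert!(csum < (1u64 << CSUM_BITS));
    idx | (size << IDX_BITS) | (csum << (IDX_BITS + SIZE_BITS))
}

fn unpack(v: u64) -> (u64, u64, u64) {
    let idx = v & ((1u64 << IDX_BITS) - 1);
    let size = (v >> IDX_BITS) & ((1u64 << SIZE_BITS) - 1);
    let csum = v >> (IDX_BITS + SIZE_BITS); // remaining top 24 bits
    (idx, size, csum)
}

fn main() {
    // A writer publishes a new version with one atomic store of this u64;
    // readers decode which data file to map and how many bytes are valid.
    let v = pack(1, 549_755_813_887, 0xFF_FFFF); // max size: 2^39 - 1 bytes
    assert_eq!(unpack(v), (1, 549_755_813_887, 0xFF_FFFF));
}
```

<p>Because the whole version fits in 64 bits, switching readers to a freshly written data copy is a single atomic store — no lock is ever needed to publish an update.</p>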
    <div>
      <h3>Zero-copy deserialization</h3>
      <a href="#zero-copy-deserialization">
        
      </a>
    </div>
    <p>To efficiently store and fetch machine learning features, we needed to address the challenge of deserialization latency. Here, <a href="https://en.wikipedia.org/wiki/Zero-copy">zero-copy</a> deserialization provides an answer. This technique reduces the time and memory required to access and use data by directly referencing bytes in the serialized form.</p><p>We turned to <a href="https://rkyv.org/">rkyv</a>, a zero-copy deserialization framework in Rust, to help us with this task. rkyv implements total zero-copy deserialization, meaning no data is copied during deserialization and no work is done to deserialize data. It achieves this by structuring its encoded representation to match the in-memory representation of the source type.</p><p>One of the key features of rkyv that our solution relies on is its ability to access <code><b>HashMap</b></code> data structures in a zero-copy fashion. This is a unique capability among Rust serialization libraries and one of the main reasons we chose rkyv for our implementation. It also has a vibrant <a href="https://discord.gg/65F6MdnbQh">Discord community</a>, eager to offer best-practice advice and accommodate feature requests.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3m6VU5QBUW41ck2SEd9omL/31db11a33dcf7e8a32d2a563138dce09/Screenshot-2023-06-16-at-18.18.02.png" />
            
            </figure><p><a href="https://rkyv.org/feature-comparison.html">Feature comparison: rkyv vs FlatBuffers and Cap'n Proto</a></p>
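<p>rkyv's actual mechanics are more sophisticated, but the core zero-copy principle can be illustrated with the standard library alone: "deserialization" hands out references into the original buffer, so no bytes are copied and nothing is allocated. The record layout below is invented purely for this sketch:</p>

```rust
use std::convert::TryInto;

/// Simplified stdlib illustration of the zero-copy principle (not rkyv's
/// actual format). Layout invented for this sketch: a 4-byte
/// little-endian version, a 1-byte name length, then the UTF-8 name.
struct RecordView<'a> {
    version: u32,
    name: &'a str, // borrows directly from the underlying bytes
}

fn view(buf: &[u8]) -> Option<RecordView<'_>> {
    let version = u32::from_le_bytes(buf.get(..4)?.try_into().ok()?);
    let len = *buf.get(4)? as usize;
    let name = std::str::from_utf8(buf.get(5..5 + len)?).ok()?;
    Some(RecordView { version, name })
}

fn main() {
    let mut buf = 7u32.to_le_bytes().to_vec();
    buf.push(5);
    buf.extend_from_slice(b"bliss");
    let r = view(&buf).unwrap();
    assert_eq!(r.version, 7);
    assert_eq!(r.name, "bliss"); // `name` points into `buf`; no copy was made
}
```

<p>rkyv generalizes this idea to arbitrary nested structures, including <code>HashMap</code>s, by laying out the archived bytes to match the in-memory representation.</p>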
    <div>
      <h2>Enter mmap-sync crate</h2>
      <a href="#enter-mmap-sync-crate">
        
      </a>
    </div>
    <p>Leveraging the benefits of <b>memory-mapped files</b>, <b>wait-free synchronization</b> and <b>zero-copy deserialization</b>, we've crafted a unique and powerful tool for managing high-performance, concurrent data access between processes. We've packaged these concepts into a Rust crate named <a href="https://github.com/cloudflare/mmap-sync"><code><b>mmap-sync</b></code></a>, which we're thrilled to open-source for the wider community.</p><p>At the core of the <code><b>mmap-sync</b></code> package is a structure named <code><b>Synchronizer</b></code>. It offers an avenue to read and write any data expressible as a Rust struct. Users simply have to implement or derive a specific Rust trait surrounding struct definition - a task requiring just a single line of code. The <code><b>Synchronizer</b></code> presents an elegantly simple interface, equipped with "write" and "read" methods.</p>
            <pre><code>impl Synchronizer {
    /// Write a given `entity` into the next available memory mapped file.
    pub fn write&lt;T&gt;(&amp;mut self, entity: &amp;T, grace_duration: Duration) -&gt; Result&lt;(usize, bool), SynchronizerError&gt; {
        …
    }

    /// Reads and returns `entity` struct from mapped memory wrapped in `ReadResult`
    pub fn read&lt;T&gt;(&amp;mut self) -&gt; Result&lt;ReadResult&lt;T&gt;, SynchronizerError&gt; {
        …
    }
}

/// FeaturesMetadata stores features along with their metadata
#[derive(Archive, Deserialize, Serialize, Debug, PartialEq)]
#[archive_attr(derive(CheckBytes))]
pub struct FeaturesMetadata {
    /// Features version
    pub version: u32,
    /// Features creation Unix timestamp
    pub created_at: u32,
    /// Features represented by vector of hash maps
    pub features: Vec&lt;HashMap&lt;u64, Vec&lt;f32&gt;&gt;&gt;,
}</code></pre>
            <p>A read operation through the <code><b>Synchronizer</b></code> performs zero-copy deserialization and returns a "guarded" <code><b>Result</b></code> encapsulating a reference to the Rust struct using the <a href="https://rust-unofficial.github.io/patterns/patterns/behavioural/RAII.html">RAII design pattern</a>. This operation also increments the atomic counter of active readers using the struct. Once the <code><b>Result</b></code> is out of scope, the <code><b>Synchronizer</b></code> decrements the number of readers.</p><p>The synchronization mechanism used in <code><b>mmap-sync</b></code> is not only "lock-free" but also "wait-free". This ensures an upper bound on the number of steps an operation will take before it completes, thus providing a performance guarantee.</p><p>The data is stored in shared mapped memory, which allows the <code><b>Synchronizer</b></code> to “write” to it and “read” from it concurrently. This design makes <code><b>mmap-sync</b></code> a highly efficient and flexible tool for managing shared, concurrent data access.</p><p>Now, with an understanding of the underlying mechanics of <code><b>mmap-sync</b></code>, let's explore how this package plays a key role in the broader context of our Bot Management platform, particularly within the newly developed components: the <code><b>bliss</b></code> service and library.</p>
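<p>The reader-counting guard described above can be sketched in isolation. The types and names here are illustrative, not mmap-sync's actual ones: creating the guard registers a reader on an atomic counter, and dropping it unregisters, so a writer can tell when an old data copy has no readers left:</p>

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Stripped-down sketch of a "guarded" read result (illustrative names,
/// not mmap-sync's actual types). The guard registers a reader on
/// creation and unregisters it on drop (RAII).
struct ReadGuard<'a, T> {
    data: &'a T,
    readers: &'a AtomicU32,
}

impl<'a, T> ReadGuard<'a, T> {
    fn new(data: &'a T, readers: &'a AtomicU32) -> Self {
        readers.fetch_add(1, Ordering::AcqRel);
        Self { data, readers }
    }
}

impl<T> std::ops::Deref for ReadGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T {
        self.data
    }
}

impl<T> Drop for ReadGuard<'_, T> {
    fn drop(&mut self) {
        // Runs automatically when the guard goes out of scope.
        self.readers.fetch_sub(1, Ordering::AcqRel);
    }
}

fn main() {
    let readers = AtomicU32::new(0);
    let features = vec![0.1f32, 0.2, 0.3];
    {
        let guard = ReadGuard::new(&features, &readers);
        assert_eq!(readers.load(Ordering::Acquire), 1);
        assert_eq!(guard.len(), 3); // Deref gives direct access to the data
    } // guard dropped: reader count returns to zero
    assert_eq!(readers.load(Ordering::Acquire), 0);
}
```

<p>Since a single atomic fetch-add or fetch-sub completes in a bounded number of steps, registering and unregistering readers never blocks — the property that makes the overall scheme wait-free.</p>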
    <div>
      <h2>System design overhaul</h2>
      <a href="#system-design-overhaul">
        
      </a>
    </div>
    <p>Our new design represents a significant evolution from the previous setup, in which a Lua-based module fetched machine learning features by making memcached requests over a Unix socket to Gagarin, our Go service. This change pivots around the introduction of <code><b>mmap-sync</b></code>, our newly developed Rust package, laying the groundwork for a substantial performance upgrade. This development led to a comprehensive system redesign and introduced two new components that form the backbone of our <i>Bots Liquidation Intelligent Security System</i> - or <b>BLISS</b>, in short: the bliss service and the bliss library.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Dnl0GqLtcqoPZ3ceZuxTG/efa55a49b326fe71821f2f6be9935b8a/after-bliss-diagram-v2.png" />
            
            </figure>
    <div>
      <h3>Bliss service</h3>
      <a href="#bliss-service">
        
      </a>
    </div>
    <p>The <b>bliss</b> <b>service</b> operates as a Rust-based, multi-threaded sidecar daemon. It has been designed for optimal batch processing of vast data quantities and extensive I/O operations. Among its key functions, it fetches, parses, and stores machine learning features and dimensions for effortless data access and manipulation. This has been made possible through the incorporation of the <a href="https://tokio.rs/">Tokio</a> event-driven platform, which allows for efficient, non-blocking I/O operations.</p>
    <div>
      <h3>Bliss library</h3>
      <a href="#bliss-library">
        
      </a>
    </div>
    <p>Operating as a single-threaded dynamic library, the <b>bliss library</b> seamlessly integrates into each worker thread using the Foreign Function Interface (FFI) via a Lua module. Optimized for minimal resource usage and ultra-low latency, this lightweight library performs tasks without the need for heavy I/O operations. It efficiently serves machine learning features and generates corresponding detections.</p><p>In addition to leveraging the <code><b>mmap-sync</b></code> package for efficient machine learning feature access, our new design includes several other performance enhancements:</p><ul><li><p><b>Allocations-free operation:</b> bliss library re-uses pre-allocated data structures and performs no heap allocations, only low-cost stack allocations. To enforce our zero-allocation policy, we run integration tests using the <a href="https://docs.rs/dhat/latest/dhat/">dhat heap profiler</a>.</p></li><li><p><b>SIMD optimizations</b>: wherever possible, the bliss library employs vectorized CPU instructions. For instance, AVX2 and SSE4 instruction sets are used to expedite <a href="https://crates.io/crates/faster-hex">hex-decoding</a> of certain request attributes, improving speed tenfold.</p></li><li><p><b>Compiler tuning:</b> we compile both the bliss service and library with the following flags for superior performance:</p></li></ul>
            <pre><code>[profile.release]
codegen-units = 1
debug = true
lto = "fat"
opt-level = 3</code></pre>
            <ul><li><p><b>Benchmarking &amp; profiling:</b> We use <a href="https://bheisler.github.io/criterion.rs/book/index.html">Criterion</a> for benchmarking every major feature or component within bliss. Moreover, we are also able to use the Go pprof profiler on Criterion benchmarks to view flame graphs and more:</p></li></ul>
            <pre><code>cargo bench -p integration -- --verbose --profile-time 100

go tool pprof -http=: ./target/criterion/process_benchmark/process/profile/profile.pb</code></pre>
            <p>This comprehensive overhaul of our system has not only streamlined our operations but also has been instrumental in enhancing the overall performance of our Bot Management platform. Stay tuned to witness the remarkable changes brought about by this new architecture in the next section.</p>
    <div>
      <h2>Rollout results</h2>
      <a href="#rollout-results">
        
      </a>
    </div>
    <p>Our system redesign has brought some truly "blissful" dividends. Above all, our commitment to a seamless user experience and the trust of our customers have guided our innovations. We ensured that the transition to the new design was seamless, maintaining full backward compatibility, with no customer-reported false positives or negatives encountered. This is a testament to the robustness of the new system.</p><p>As the old adage goes, the proof of the pudding is in the eating. This couldn't be truer when examining the dramatic latency improvements achieved by the redesign. Our overall processing latency for HTTP requests at Cloudflare improved by an average of <b>12.5%</b> compared to the previous system.</p><p>This improvement is even more significant in the Bot Management module, where latency improved by an average of <b>55.93%</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CEtjyqdZ3W1566LlcJCmY/8df07e319288090092413f3217b46464/image6.png" />
            
            </figure><p>Bot Management module latency, in microseconds.</p><p>More specifically, our machine learning features fetch latency has improved by several orders of magnitude:</p>
<table>
<thead>
  <tr>
    <th><span>Latency metric</span></th>
    <th><span>Before (μs)</span></th>
    <th><span>After (μs)</span></th>
    <th><span>Change</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>p50</span></td>
    <td><span>532</span></td>
    <td><span>9</span></td>
    <td><span>-98.30%</span><span> or </span><span>x59</span></td>
  </tr>
  <tr>
    <td><span>p99</span></td>
    <td><span>9510</span></td>
    <td><span>18</span></td>
    <td><span>-99.81%</span><span> or </span><span>x528</span></td>
  </tr>
  <tr>
    <td><span>p999</span></td>
    <td><span>16000</span></td>
    <td><span>29</span></td>
    <td><span>-99.82%</span><span> or </span><span>x551</span></td>
  </tr>
</tbody>
</table><p>To truly grasp this impact, consider this: with Cloudflare’s average rate of 46 million requests per second, a saving of <b>523 microseconds</b> per request equates to saving over 24,000 days or <b>65 years</b> of processing time every single day!</p><p>In addition to latency improvements, we also reaped other benefits from the rollout:</p><ul><li><p><b>Enhanced feature availability</b>: thanks to eliminating Unix socket timeouts, machine learning feature availability is now a robust 100%, resulting in fewer false positives and negatives in detections.</p></li><li><p><b>Improved resource utilization</b>: our system overhaul liberated resources equivalent to thousands of CPU cores and hundreds of gigabytes of RAM - a substantial enhancement of our server fleet's efficiency.</p></li><li><p><b>Code cleanup:</b> another positive spin-off has been in our Lua and Go code. Thousands of lines of less performant and less memory-safe code have been weeded out, reducing technical debt.</p></li><li><p><b>Upscaled machine learning capabilities:</b> last but certainly not least, we've significantly expanded our machine learning features, dimensions, and models. This upgrade empowers our machine learning inference to handle hundreds of machine learning features and dozens of dimensions and models.</p></li></ul>
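<p>The headline arithmetic above checks out in a few lines. Seconds of processing saved per second of traffic (requests per second times seconds saved per request) conveniently equals days saved per day, since the 86,400 seconds-per-day factors cancel:</p>

```rust
/// Sanity check of the savings quoted above: 46M requests/second, each
/// saving 523 microseconds (p50: 532 us -> 9 us). Requests/s x seconds
/// saved per request gives seconds saved per second, which equals days
/// saved per day.
fn days_saved_per_day(requests_per_sec: f64, saved_sec_per_req: f64) -> f64 {
    requests_per_sec * saved_sec_per_req
}

fn main() {
    let days = days_saved_per_day(46_000_000.0, 523e-6);
    let years = days / 365.25;
    assert!(days > 24_000.0);
    assert!(years > 65.0);
    println!("~{days:.0} days (~{years:.0} years) of processing saved per day");
}
```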
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>In the wake of our redesign, we've constructed a powerful and efficient system that truly embodies the essence of 'bliss'. Harnessing the advantages of memory-mapped files, wait-free synchronization, allocation-free operations, and zero-copy deserialization, we've established a robust infrastructure that maintains peak performance while achieving remarkable reductions in latency. As we navigate towards the future, we're committed to leveraging this platform to further improve our Security machine learning products and cultivate innovative features. Additionally, we're excited to share parts of this technology through the open-source Rust package <a href="https://github.com/cloudflare/mmap-sync"><code><b>mmap-sync</b></code></a>.</p><p>As we leap into the future, we are building upon our platform's impressive capabilities, exploring new avenues to amplify the power of machine learning. We are deploying a new machine learning model built on BLISS with select customers. If you are a Bot Management subscriber and want to test the new model, please reach out to your account team.</p><p>Separately, we are on the lookout for more Cloudflare customers who want to run their own machine learning models at the edge today. If you’re a developer considering making the switch to Workers for your application, sign up for our <a href="/introducing-constellation/">Constellation AI closed beta</a>. If you’re a Bot Management customer and looking to run an already trained, lightweight model at the edge, <a href="https://www.cloudflare.com/lp/byo-machine-learning-model/">we would love to hear from you</a>. Let's embark on this path to bliss together.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">5okRPYMr2V3a9DYVL5fLJw</guid>
            <dc:creator>Alex Bocharov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Bot Management: machine learning and more]]></title>
            <link>https://blog.cloudflare.com/cloudflare-bot-management-machine-learning-and-more/</link>
            <pubDate>Wed, 06 May 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ This is the ongoing story of Bot Management at Cloudflare and also an introduction to a series of blog posts about the detection mechanisms powering it ]]></description>
            <content:encoded><![CDATA[ <p></p>
    <div>
      <h3>Introduction</h3>
      <a href="#introduction">
        
      </a>
    </div>
    <p>Building the <a href="https://www.cloudflare.com/application-services/products/bot-management/">Cloudflare Bot Management platform</a> is an exhilarating experience. It blends Distributed Systems, Web Development, Machine Learning, Security and Research (and every discipline in between) while fighting ever-adaptive and motivated adversaries at the same time.</p><p>This is the ongoing story of Bot Management at Cloudflare and also an introduction to a series of blog posts about the detection mechanisms powering it. I’ll start with several definitions from the Bot Management world, then introduce the product and technical requirements, leading to an overview of the platform we’ve built. Finally, I’ll share details about the detection mechanisms powering our platform.</p><p>Let’s start with Bot Management’s nomenclature.</p>
    <div>
      <h3>Some Definitions</h3>
      <a href="#some-definitions">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/learning/bots/what-is-a-bot/"><b>Bot</b></a> - an autonomous program on a network that can interact with computer systems or users, imitating or replacing a human user's behavior, performing repetitive tasks much faster than human users could.</p><p><b>Good bots</b> - bots which are useful to businesses they interact with, e.g. <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/">search engine bots</a> like Googlebot, Bingbot or bots that operate on social media platforms like Facebook Bot.</p><p><b>Bad bots</b> - bots which are designed to perform malicious actions, ultimately hurting businesses, e.g. <a href="https://www.cloudflare.com/learning/bots/what-is-credential-stuffing/">credential stuffing</a> bots, <a href="https://www.cloudflare.com/learning/bots/what-is-data-scraping/">third-party scraping</a> bots, <a href="https://www.cloudflare.com/learning/bots/what-is-a-spambot/">spam</a> bots and sneakerbots.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nwyutK4DVA1cjsfMY8ore/fd2ebf6ec92ec7795ea423f4e7d61713/image12.png" />
            
            </figure><p><a href="https://www.cloudflare.com/learning/bots/what-is-bot-management/"><b>Bot Management</b></a> - blocking undesired or malicious Internet bot traffic while still allowing useful bots to access web properties by detecting bot activity, discerning between desirable and undesirable bot behavior, and identifying the sources of the undesirable activity.</p><p><a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/"><b>WAF</b></a> - a security system that monitors and controls network traffic based on a set of security rules.</p>
    <div>
      <h3>Gathering requirements</h3>
      <a href="#gathering-requirements">
        
      </a>
    </div>
    <p>Cloudflare has been stopping malicious bots from accessing websites or misusing <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> from the very <a href="/cloudflare-uses-intelligent-caching-to-avoid/">beginning</a>, at the same time <a href="/cleaning-up-bad-bots/">helping the climate</a> by offsetting the carbon costs from the bots. Over time it became clear that we needed a dedicated platform which would unite different bot fighting techniques and streamline the customer experience. In designing this new platform, we tried to fulfill the following key requirements.</p><ul><li><p><b>Complete, not complex</b> - customers can turn on/off Bot Management with a single click of a button, to protect their websites, mobile applications, or <a href="https://www.cloudflare.com/application-services/solutions/api-security/">APIs</a>.</p></li><li><p><b>Trustworthy</b> - customers want to know whether they can trust the website visitor is who they say they are and provide a certainty indicator for that trust level.</p></li><li><p><b>Flexible</b> - customers should be able to define what subset of the traffic Bot Management mitigations should be applied to, e.g. only login URLs, pricing pages or sitewide.</p></li><li><p><b>Accurate</b> - Bot Management detections should have a very small error, e.g. none or very few human visitors ever should be mistakenly identified as bots.</p></li><li><p><b>Recoverable</b> - in case a wrong prediction was made, human visitors should still be able to access websites, and good bots should be let through.</p></li></ul><p>Moreover, the goal for the new Bot Management product was to make it work well for the following use cases:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5z0o8rXLkecZWUhx1UDRHF/78e1e55cc47dac3d8bb58ac274870387/image8.png" />
            
            </figure>
    <div>
      <h3>Technical requirements</h3>
      <a href="#technical-requirements">
        
      </a>
    </div>
    <p>In addition to the product requirements above, we engineers had a list of must-haves for the new Bot Management platform. The most critical were:</p><ul><li><p><b>Scalability</b> - the platform should be able to calculate a score on every request, even at over 10 million requests per second.</p></li><li><p><b>Low latency</b> - detections must be performed extremely quickly, not slowing down request processing by more than 100 microseconds, and not requiring additional hardware.</p></li><li><p><b>Configurability</b> - it should be possible to configure what detections are applied on what traffic, including on per domain/data center/server level.</p></li><li><p><b>Modifiability</b> - the platform should be easily extensible with more detection mechanisms, different mitigation actions, richer analytics and logs.</p></li><li><p><b>Security</b> - no sensitive information from one customer should be used to build models that protect another customer.</p></li><li><p><b>Explainability &amp; debuggability</b> - we should be able to explain and tune predictions in an intuitive way.</p></li></ul><p>Equipped with these requirements, back in 2018, our small team of engineers got to work to design and build the next generation of Cloudflare Bot Management.</p>
    <div>
      <h3>Meet the Score</h3>
      <a href="#meet-the-score">
        
      </a>
    </div>
    <blockquote><p><i>“Simplicity is the ultimate sophistication.”- Leonardo Da Vinci</i></p></blockquote><p>Cloudflare operates on a vast scale. At the time of this writing, this means covering 26M+ Internet properties, processing on average 11M requests per second (with peaks over 14M), and examining more than 250 request attributes from different protocol levels. The key question is how to harness the power of such “gargantuan” data to <a href="https://www.cloudflare.com/products/zero-trust/threat-defense/">protect all of our customers from modern day cyberthreats</a> in a simple, reliable and explainable way?</p><p>Bot management is hard. Some bots are much harder to detect and require looking at multiple dimensions of request attributes over a long time, while sometimes a single request attribute could give them away. More signals may help, but are they generalizable?</p><p>When we classify traffic, should customers decide what to do with it or are there decisions we can make on behalf of the customer? What concept could possibly address all these uncertainty problems and also help us to deliver on the requirements from above?</p><p>As you might’ve guessed from the section title, we came up with the concept of Trusted Score or simply <b>The Score</b> - one thing to rule them all - a score from 0 to 100 indicating the likelihood that a request originated from a human (high score) vs. an automated program (low score).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3caFJFhAGnz4989GtGg5s4/c03d3fd90168d7d35f79dcc547912db7/image5-1.jpg" />
            
            </figure><p><a href="https://www.flickr.com/photos/purple-lover/13583362554">"One Ring to rule them all"</a> by idreamlikecrazy, used under <a href="https://creativecommons.org/licenses/by/2.0/">CC BY</a> / Desaturated from original</p><p>Okay, let’s imagine that we are able to assign such a score on every incoming HTTP/HTTPS request, what are we or the customer supposed to do with it? Maybe it’s enough to provide such a score in the logs. Customers could then analyze them on their end, find the most frequent IPs with the lowest scores, and then use the <a href="https://www.cloudflare.com/application-services/products/waf/">Cloudflare Firewall</a> to block those IPs. Although useful, such a process would be manual, prone to error and most importantly cannot be done in real time to protect the customer's Internet property.</p><p>Fortunately, around the same time we started worked on this system , our colleagues from the Firewall team had <a href="/announcing-firewall-rules/">just announced Firewall Rules</a>. This new capability provided customers the ability to control requests in a flexible and intuitive way, inspired by the widely known Wireshark®  language. Firewall rules supported a variety of request fields, and we thought - why not have the score be one of these fields? Customers could then write granular rules to block very specific attack types. That’s how the <code>cf.bot_management.score</code> field was born.</p><p>Having a score in the heart of Cloudflare Bot Management addressed multiple product and technical requirements with one strike - it’s simple, flexible, configurable, and it provides customers with telemetry about bots on a per request basis. Customers can adjust the score threshold in firewall rules, depending on their sensitivity to false positives/negatives. 
Additionally, this intuitive score allows us to extend our detection capabilities under the hood without customers needing to adjust any configuration.</p><p>So how can we produce this score and how hard is it? Let’s explore it in the following section.</p>
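<p>For illustration, the threshold logic behind a rule such as <code>cf.bot_management.score lt 30</code> can be sketched as follows. The field name comes from the post; the threshold value and the rule shape are illustrative assumptions, not a real customer configuration:</p>

```javascript
// Sketch: evaluating a hypothetical score-threshold Firewall rule,
// e.g. (cf.bot_management.score lt 30) => block.
// The threshold (30) is an invented example.
function evaluateScoreRule(score, threshold, action) {
  // Lower scores mean "more bot-like", so the action fires below the threshold.
  return score < threshold ? action : 'allow';
}

console.log(evaluateScoreRule(12, 30, 'block')); // bot-like request
console.log(evaluateScoreRule(85, 30, 'block')); // human-like request
```

<p>A customer sensitive to false positives might lower the threshold; one sensitive to false negatives might raise it or switch the action to a challenge.</p>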
    <div>
      <h3>Architecture overview</h3>
      <a href="#architecture-overview">
        
      </a>
    </div>
    <p>What is powering the Bot Management score? The short answer is a set of <a href="https://www.cloudflare.com/learning/serverless/glossary/serverless-microservice/">microservices</a>. Building this platform, we tried to re-use as many pipelines, databases and components as we could; however, many services had to be built from scratch. Let’s have a look at the overall architecture (this overly simplified version contains the Bot Management related services):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5OSC0Ty0XKfbmw651kO9bQ/ebfbf8204161f3db6622291ddb7e4721/image13.png" />
            
            </figure>
    <div>
      <h3>Core Bot Management services</h3>
      <a href="#core-bot-management-services">
        
      </a>
    </div>
    <p>In a nutshell, our systems process data received from the edge data centers, and produce and store the data required for our bot detection mechanisms, using the following technologies:</p><ul><li><p><b>Databases &amp; data stores</b> - <a href="/squeezing-the-firehose/">Kafka</a>, <a href="/http-analytics-for-6m-requests-per-second-using-clickhouse/">ClickHouse</a>, Postgres, Redis, Ceph.</p></li><li><p><b>Programming languages</b> - Go, Rust, Python, Java, Bash.</p></li><li><p><b>Configuration &amp; schema management</b> - Salt, <a href="/introducing-quicksilver-configuration-distribution-at-internet-scale/">Quicksilver</a>, <a href="https://capnproto.org/">Cap’n Proto</a>.</p></li><li><p><b>Containerization</b> - Docker, Kubernetes, Helm, Mesos/Marathon.</p></li></ul><p>Each of these services is built with resilience, performance, <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> and security in mind.</p>
    <div>
      <h3>Edge Bot Management module</h3>
      <a href="#edge-bot-management-module">
        
      </a>
    </div>
    <p>All bot detection mechanisms are applied to every request in real time during the request processing stage in the Bot Management module running on every machine at Cloudflare’s edge locations. When a request comes in, we extract and transform the required request attributes and feed them to our detection mechanisms. The Bot Management module produces the following output:</p><p><b>Firewall fields</b> - <a href="https://support.cloudflare.com/hc/en-us/articles/360027519452-Understanding-Cloudflare-Bot-Management">Bot Management fields</a>:</p><ul><li><p><b>cf.bot_management.score</b> - an integer between 0 and 100 indicating the likelihood that a request originated from an automated program (low score) or a human (high score).</p></li><li><p><b>cf.bot_management.verified_bot</b> - a boolean indicating whether the request comes from a Cloudflare-allowlisted bot.</p></li><li><p><b>cf.bot_management.static_resource</b> - a boolean indicating whether the request matches file extensions for many types of static resources.</p></li></ul><p><b>Cookies</b> - most notably <a href="https://community.cloudflare.com/t/cf-bm-cookie/56696"><b>cf_bm</b></a>, which helps manage incoming traffic that matches criteria associated with bots.</p><p><b>JS challenges</b> - for some of our detections and customers we inject invisible JavaScript challenges, providing us with more signals for bot detection.</p><p><b>Detection logs</b> - we log details about each applied detection, along with the features and flags used, through our data pipelines to ClickHouse; some of these are used for analytics and customer logs, while others are used to debug and improve our models.</p><p>Once the Bot Management module has produced the required fields, the Firewall takes over the actual bot mitigation.</p>
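<p>Put together, the per-request output handed to the Firewall might look like the sketch below. The field names are the ones listed in the post; the example values and the tiny extension list are illustrative assumptions:</p>

```javascript
// Sketch of the Bot Management module's per-request output.
// Field names from the post; values and extension list are made up.
function botManagementFields(score, isVerifiedBot, path) {
  const staticExtensions = ['.css', '.js', '.png', '.jpg', '.gif', '.ico'];
  return {
    'cf.bot_management.score': score,               // low = automated, high = human
    'cf.bot_management.verified_bot': isVerifiedBot,
    'cf.bot_management.static_resource':
      staticExtensions.some((ext) => path.endsWith(ext)),
  };
}

console.log(botManagementFields(2, true, '/logo.png'));
```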
    <div>
      <h3>Firewall integration</h3>
      <a href="#firewall-integration">
        
      </a>
    </div>
    <p>The Cloudflare Firewall's intuitive dashboard enables users to build powerful rules with just a few clicks, and it also provides Terraform integration. Every request to the firewall is inspected against the rule engine. Based on the score produced by the Bot Management module and the configured threshold, suspicious requests can be blocked, challenged or logged as per the needs of the user, while legitimate requests are routed to the destination.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5oSiddu0b8BknoqrAQfCYV/3c6c1e456a6d5dedae0e7e50ac7cf346/image6.png" />
            
            </figure><p>Firewall rules provide the following bot mitigation <a href="https://developers.cloudflare.com/firewall/cf-firewall-rules/actions/">actions</a>:</p><ul><li><p><b>Log</b> - records matching requests in the Cloudflare Logs provided to customers.</p></li><li><p><b>Bypass</b> - allows customers to dynamically disable Cloudflare security features for a request.</p></li><li><p><b>Allow</b> - matching requests are exempt from challenge and block actions triggered by other Firewall Rules content.</p></li><li><p><b>Challenge (Captcha)</b> - useful for ensuring that the visitor accessing the site is human, and not automated.</p></li><li><p><b>JS Challenge</b> - useful for ensuring that bots and spam cannot access the requested resource; browsers, however, are free to satisfy the challenge automatically.</p></li><li><p><b>Block</b> - matching requests are denied access to the site.</p></li></ul><p>Our <a href="/updates-to-firewall-analytics/">Firewall Analytics</a> tool, powered by ClickHouse and <a href="/introducing-the-graphql-analytics-api-exactly-the-data-you-need-all-in-one-place/">GraphQL API</a>, enables customers to quickly identify and investigate security threats using an intuitive interface. In addition to analytics, we provide detailed logs on all bot-related activity using the <a href="https://developers.cloudflare.com/logs/logpull-api/">Logpull API</a> and/or <a href="/cloudflare-logpush-the-easy-way-to-get-your-logs-to-your-cloud-storage/">LogPush</a>, which provides an easy way to get your logs to your cloud storage.</p>
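<p>A minimal sketch of this rule-engine flow, assuming first-match-wins ordering (an illustrative simplification, not Cloudflare's actual rule precedence). The two rules are hypothetical examples built on the Bot Management score:</p>

```javascript
// Minimal rule-engine sketch: return the action of the first matching
// rule, falling back to routing the request through. First-match-wins
// ordering is an assumption made for illustration.
function applyRules(rules, request) {
  for (const rule of rules) {
    if (rule.matches(request)) return rule.action;
  }
  return 'allow'; // no rule matched: route the request to the destination
}

// Hypothetical rules using the Bot Management fields:
const rules = [
  { matches: (r) => r.score < 30 && !r.verifiedBot, action: 'block' },
  { matches: (r) => r.score < 60, action: 'challenge' },
];

console.log(applyRules(rules, { score: 10, verifiedBot: false })); // block
console.log(applyRules(rules, { score: 80, verifiedBot: false })); // allow
```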
    <div>
      <h3>Cloudflare Workers integration</h3>
      <a href="#cloudflare-workers-integration">
        
      </a>
    </div>
    <p>In case a customer wants more flexibility on what to do with the requests based on the score, e.g. they might want to inject new, or change existing, HTML page content, or serve incorrect data to the bots, or stall certain requests, <a href="https://www.cloudflare.com/developer-platform/workers/">Cloudflare Workers</a> provide an option to do that. For example, using this small code-snippet, we can pass the score back to the origin server for more advanced real-time analysis or mitigation:</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Clone the request so its headers can be modified
  request = new Request(request)

  // Note: in Workers the property is camelCased (request.cf.botManagement),
  // unlike the Firewall field name cf.bot_management.score
  request.headers.set("Cf-Bot-Score", request.cf.botManagement.score)

  return fetch(request)
}</code></pre>
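<p>On the origin side, the forwarded header could then be consumed defensively. This is a sketch: the header name matches the Worker snippet above, but the parsing and bounds-checking logic is our own assumption:</p>

```javascript
// Sketch: parsing the Cf-Bot-Score header forwarded by a Worker.
// Anything unparsable or out of the 0-100 range is treated as
// "unknown" (null) rather than guessed at.
function parseBotScore(headers) {
  const raw = headers['cf-bot-score']; // Node lowercases header names
  const score = Number.parseInt(raw, 10);
  if (Number.isNaN(score) || score < 0 || score > 100) return null;
  return score;
}

console.log(parseBotScore({ 'cf-bot-score': '29' })); // 29
console.log(parseBotScore({}));                       // null
```

<p>The origin can then make its own decisions - serving decoy data to likely bots, for example - without ever exposing the scoring logic to the client.</p>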
            <p>Now let’s have a look into how a single score is produced using multiple detection mechanisms.</p>
    <div>
      <h3>Detection mechanisms</h3>
      <a href="#detection-mechanisms">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3avrETqHNSco30pzXKi2Bi/d83a61d9a294c2f96ed8ae3101fc8839/image10.png" />
            
            </figure><p>The Cloudflare Bot Management platform currently uses five complementary detection mechanisms, producing their own scores, which we combine to form the single score going to the Firewall. Most of the detection mechanisms are applied on every request, while some are enabled on a per-customer basis to better fit their needs.</p>
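<p>The post doesn't say how the mechanisms' scores are combined, so the sketch below is purely illustrative: one conservative option is to take the minimum defined score, i.e. trust the mechanism with the strongest bot signal:</p>

```javascript
// Purely illustrative combination: take the minimum of whichever
// detection mechanisms produced a score for this request. This is an
// assumption, not Cloudflare's actual combination logic.
function combineScores(scores) {
  const defined = scores.filter((s) => typeof s === 'number');
  if (defined.length === 0) return null; // no mechanism had an opinion
  return Math.min(...defined);
}

// e.g. a heuristic fired (score 1) while ML said 35: final score 1
console.log(combineScores([35, 1, undefined, undefined, undefined]));
```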
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6kznWqiXGLzyUvqGbNgrco/3d3d777f8b268bcda1b319c841a518e9/image14.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SlwP5FY8XUmzYUGbIne0n/f589b67119defdc46d4b80762f81d3f4/image4-1.png" />
            
            </figure><p>Having a score on every request for every customer has the following benefits:</p><ul><li><p><b>Ease of onboarding</b> - even before we enable Bot Management in active mode, we’re able to tell how well it’s going to work for the specific customer, including providing historical trends about bot activity.</p></li><li><p><b>Feedback loop</b> - availability of the score on every request along with all features has tremendous value for continuous improvement of our detection mechanisms.</p></li><li><p><b>Ensures scaling</b> - if we can compute the score for every request and customer, it means that every Internet property behind Cloudflare is a potential Bot Management customer.</p></li><li><p><b>Global bot insights</b> - Cloudflare sits in front of more than 26M Internet properties, which allows us to understand and react to the tectonic shifts happening in security and threat intelligence over time.</p></li></ul><p>Globally, more than a third of the Internet traffic visible to Cloudflare comes from bad bots, while Bot Management customers see an even higher ratio of bad bots, at ~43%!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/18xm3tKE1ir3wbIDSb1oet/8c7675efd856354dd50648b001662839/image7.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/79Mscr2mfcUGezTFmVG5nM/0bb09659f0707e84b8f8a15b52ba7af4/image9.png" />
            
            </figure><p>Let’s dive into specific detection mechanisms in chronological order of their integration with Cloudflare Bot Management.</p>
    <div>
      <h3>Machine learning</h3>
      <a href="#machine-learning">
        
      </a>
    </div>
    <p>The majority of decisions about the score are made using our machine learning models. These were also the first detection mechanisms to produce a score and to onboard customers back in 2018. The successful application of <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning</a> requires data high in <a href="/stop-the-bots-practical-lessons-in-machine-learning/">Quantity, Diversity, and Quality</a>, and thanks to both free and paid customers, Cloudflare has all three, enabling continuous learning and improvement of our models for all of our customers.</p><p>At the core of the machine learning detection mechanism is CatBoost - a high-performance open source library for gradient boosting on decision trees. The choice of CatBoost was driven by the library’s outstanding capabilities:</p><ul><li><p><b>Categorical features support</b> - allowing us to train on even very high cardinality features.</p></li><li><p><b>Superior accuracy</b> - allowing us to reduce overfitting by using a novel gradient-boosting scheme.</p></li><li><p><b>Inference speed</b> - in our case it takes less than 50 microseconds to apply any of our models, making sure request processing stays extremely fast.</p></li><li><p><b>C and Rust API</b> - most of our business logic on the edge is written using Lua, more specifically LuaJIT, so having a compatible FFI interface to be able to apply models is fantastic.</p></li></ul><p>There are multiple CatBoost models run on Cloudflare’s edge in <a href="https://christophergs.com/machine%20learning/2019/03/30/deploying-machine-learning-applications-in-shadow-mode/#what">shadow mode</a> on <i>every request on every machine</i>. One of the models is run in active mode, which influences the final score going to the Firewall. All ML detection results and features are logged and recorded in ClickHouse for further analysis, model improvement, analytics and customer-facing logs. 
We feed both categorical and numerical features into our models, extracted from request attributes and from inter-request features built using those attributes, calculated and delivered by the <i>Gagarin</i> inter-request features platform.</p><p>We’re able to deploy new ML models in a matter of seconds using the extremely reliable and performant <a href="/introducing-quicksilver-configuration-distribution-at-internet-scale/">Quicksilver</a> configuration database. The same mechanism can be used to configure which version of an ML model should run in active mode for a specific customer.</p><p>A deep dive into our machine learning detection mechanism deserves a blog post of its own. It will cover how we train and validate our models on trillions of requests using GPUs, how model feature delivery and extraction works, and how we explain and debug model predictions both internally and externally.</p>
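<p>The split between categorical and numerical features can be sketched like this. Every feature name below is a hypothetical illustration, not Cloudflare's real feature set, and the inter-request counter is faked rather than computed by a real platform:</p>

```javascript
// Hypothetical feature extraction in the spirit described above:
// categorical features straight from request attributes, numerical
// ones from request structure and (faked) inter-request counters.
function extractFeatures(request, interRequestStats) {
  return {
    categorical: {
      http_method: request.method,
      tls_version: request.tlsVersion,
    },
    numerical: {
      header_count: Object.keys(request.headers).length,
      reqs_last_minute: interRequestStats.reqsLastMinute,
    },
  };
}

const features = extractFeatures(
  { method: 'GET', tlsVersion: 'TLSv1.3', headers: { 'user-agent': 'x' } },
  { reqsLastMinute: 42 }
);
console.log(features.numerical.header_count); // 1
```

<p>A gradient-boosting library like CatBoost can consume the categorical values directly, without one-hot encoding, which is one reason high-cardinality features are practical.</p>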
    <div>
      <h3>Heuristics engine</h3>
      <a href="#heuristics-engine">
        
      </a>
    </div>
    <p>Not all problems in the world are best solved with machine learning. We can tweak the ML models in various ways, but in certain cases they will likely underperform basic heuristics. Often the problems machine learning is trying to solve are not entirely new. When building the Bot Management solution it became apparent that sometimes a single attribute of the request could give a bot away. This means that we can create a bunch of simple rules capturing bots in a straightforward way, while also ensuring the lowest false positive rate.</p><p>The heuristics engine was the second detection mechanism integrated into the Cloudflare Bot Management platform in 2019, and it’s also applied on every request. We have multiple heuristic types and hundreds of specific rules based on certain attributes of the request, some of which are very hard to spoof. When a request matches any of the heuristics, we assign the lowest possible score of 1.</p><p>The engine has the following properties:</p><ul><li><p><b>Speed</b> - if ML model inference takes less than 50 microseconds per model, hundreds of heuristics can be applied in just under 20 microseconds!</p></li><li><p><b>Deployability</b> - the heuristics engine allows us to add a new heuristic in a matter of seconds using <a href="/introducing-quicksilver-configuration-distribution-at-internet-scale/">Quicksilver</a>, and it will be applied on every request.</p></li><li><p><b>Vast coverage</b> - using a set of simple heuristics allows us to classify ~15% of global traffic and ~30% of Bot Management customers’ traffic as bots. Not too bad for a few if conditions, right?</p></li><li><p><b>Lowest false positives</b> - because we’re very careful and conservative about the heuristics we add, this detection mechanism has the lowest FP rate among all detection mechanisms.</p></li><li><p><b>Labels</b> <b>for ML</b> - because of this high certainty, we use requests classified with heuristics to train our ML models, which can then generalize behavior learnt from heuristics and improve detection accuracy.</p></li></ul><p>So heuristics gave us <a href="https://developers.google.com/machine-learning/guides/rules-of-ml#rule_7_turn_heuristics_into_features_or_handle_them_externally">a lift when combined with machine learning</a>, and they encoded a lot of our intuition about bots, which helped to advance the Cloudflare Bot Management platform and allowed us to onboard more customers.</p>
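<p>The "any heuristic match means score 1" behavior can be sketched in a few lines. The individual heuristics below are invented examples; the real rules are based on hard-to-spoof request attributes the post deliberately doesn't enumerate:</p>

```javascript
// Sketch of the heuristics engine's scoring contract: any match
// yields the lowest possible score (1), otherwise no opinion (null).
// The heuristics themselves are invented examples.
const heuristics = [
  (req) => req.userAgent === '',                   // empty User-Agent
  (req) => /python-requests/i.test(req.userAgent), // self-declared script
];

function heuristicScore(req) {
  return heuristics.some((h) => h(req)) ? 1 : null;
}

console.log(heuristicScore({ userAgent: 'python-requests/2.31' })); // 1
console.log(heuristicScore({ userAgent: 'Mozilla/5.0' }));          // null
```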
    <div>
      <h3>Behavioral analysis</h3>
      <a href="#behavioral-analysis">
        
      </a>
    </div>
    <p>Machine learning and heuristics detections provide tremendous value, but both of them require human input on the labels - basically a teacher to distinguish between right and wrong. While our supervised ML models can generalize well enough, even on novel threats similar to what we taught them on, we decided to go further. What if there was an approach that doesn’t require a teacher, but rather learns to distinguish bad behavior from normal behavior on its own?</p><p>Enter the behavioral analysis detection mechanism, initially developed in 2018 and integrated with the Bot Management platform in 2019. This is an unsupervised machine learning approach, which has the following properties:</p><ul><li><p><b>Fitting specific customer needs</b> - it’s automatically enabled for all Bot Management customers, calculating and analyzing normal visitor behavior over an extended period of time.</p></li><li><p><b>Detects bots never seen before</b> - as it doesn’t use known bot labels, it can detect bots and anomalies from the normal behavior on a specific customer’s website.</p></li><li><p><b>Harder to evade</b> - anomalous behavior is often a direct result of the bot’s specific goal.</p></li></ul><p>Please stay tuned for a more detailed blog about the behavioral analysis models and the platform powering this incredible detection mechanism, protecting many of our customers from unseen attacks.</p>
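<p>To make the "no teacher needed" idea concrete, here is a toy unsupervised sketch: learn a baseline of per-visitor request rates on a site, then flag visitors who deviate strongly from it via a z-score. Real behavioral analysis is far richer than this; the sketch only illustrates that no bot labels are required:</p>

```javascript
// Toy anomaly detector: z-score of an observed request rate against a
// learned baseline. Higher = more anomalous. Illustrative only.
function anomalyScore(baselineRates, observedRate) {
  const n = baselineRates.length;
  const mean = baselineRates.reduce((a, b) => a + b, 0) / n;
  const variance =
    baselineRates.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  const std = Math.sqrt(variance) || 1; // avoid division by zero
  return Math.abs(observedRate - mean) / std;
}

// Baseline learned from ordinary visitors (requests per minute):
const baseline = [10, 12, 9, 11, 10, 12, 11, 9];
console.log(anomalyScore(baseline, 11) < 2);  // true: a normal visitor
console.log(anomalyScore(baseline, 500) > 2); // true: a bot-like spike
```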
    <div>
      <h3>Verified bots</h3>
      <a href="#verified-bots">
        
      </a>
    </div>
    <p>So far we’ve discussed how to detect bad bots and humans. What about good bots, some of which are extremely useful for the customer website? Is there a need for a dedicated detection mechanism, or is there something we could use from the previously described detection mechanisms? While the majority of good bot requests (e.g. Googlebot, Bingbot, LinkedInbot) already have a low score produced by other detection mechanisms, we also need a way to avoid accidental blocks of useful bots. That’s how the Firewall field <i>cf.bot_management.verified_bot</i> came into existence in 2019, allowing customers to decide for themselves whether they want to let all of the good bots through or restrict access to certain parts of the website.</p><p>The actual platform calculating the Verified Bot flag deserves a detailed blog of its own, but in a nutshell it has the following properties:</p><ul><li><p><b>Validator based approach</b> - we support multiple validation mechanisms, each of them allowing us to reliably confirm a good bot identity by clustering a set of IPs.</p></li><li><p><b>Reverse DNS validator</b> - performs a reverse DNS check to determine whether or not a bot’s IP address matches its alleged hostname.</p></li><li><p><b>ASN Block validator</b> - similar to the rDNS check, but performed on an ASN block.</p></li><li><p><b>Downloader validator</b> - collects good bot IPs from either text files or HTML pages hosted on bot owner sites.</p></li><li><p><b>Machine learning validator</b> - uses an unsupervised learning algorithm, clustering good bot IPs which are not possible to validate through other means.</p></li><li><p><b>Bots Directory</b> - a database with a UI that stores and manages bots that pass through the Cloudflare network.</p></li></ul>
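<p>The reverse DNS validator's core idea is forward-confirmed reverse DNS: the IP must reverse-resolve to a hostname under the bot owner's domain, and that hostname must resolve back to the same IP. A sketch with injected resolvers (standing in for something like Node's <code>dns.promises</code>) so the logic is testable; the IP and hostname are illustrative:</p>

```javascript
// Sketch of forward-confirmed reverse DNS validation. Resolvers are
// injected so no real DNS traffic is needed.
async function isVerifiedByRdns(ip, allowedSuffix, { reverse, resolve }) {
  const hostnames = await reverse(ip);
  for (const host of hostnames) {
    if (!host.endsWith(allowedSuffix)) continue; // wrong owner domain
    const ips = await resolve(host);
    if (ips.includes(ip)) return true; // forward-confirmed
  }
  return false;
}

// Fake resolvers for illustration:
const fakeDns = {
  reverse: async (ip) =>
    ip === '66.249.66.1' ? ['crawl-66-249-66-1.googlebot.com'] : [],
  resolve: async (host) =>
    host === 'crawl-66-249-66-1.googlebot.com' ? ['66.249.66.1'] : [],
};

isVerifiedByRdns('66.249.66.1', '.googlebot.com', fakeDns).then(console.log);
```

<p>The forward confirmation step matters: without it, anyone controlling reverse DNS for their own IP range could claim to be Googlebot.</p>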
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ENyHN774m5EVfRIQpQzXO/d5d8127a3235fc8f26eb44b2de2a3919/image2-1.png" />
            
            </figure><p>Bots directory UI sample</p><p>Using the multiple validation methods listed above, the Verified Bots detection mechanism identifies hundreds of unique good bot identities, belonging to different companies and categories.</p>
    <div>
      <h3>JS fingerprinting</h3>
      <a href="#js-fingerprinting">
        
      </a>
    </div>
    <p>When it comes to Bot Management detection quality, it’s all about signal quality and quantity. All previously described detections use request attributes sent over the network and analyzed on the server side using different techniques. Are there more signals available that can be extracted from the client to improve our detections?</p><p>As a matter of fact there are plenty, as every browser has unique implementation quirks. Every web browser’s graphics output, such as canvas rendering, depends on multiple layers such as hardware (GPU) and software (drivers, operating system rendering). This highly unique output allows precise differentiation between different browser/device types. Moreover, this is achievable without sacrificing website visitor privacy: it’s not a supercookie, and it cannot be used to track and identify individual users, but only to confirm that a request’s user agent matches other telemetry gathered through the browser canvas API.</p><p>This detection mechanism is implemented as a challenge-response system, with the challenge injected into the webpage on Cloudflare’s edge. The challenge is then rendered in the background using the provided graphic instructions, and the result is sent back to Cloudflare for validation and further action, such as producing the score. There is a lot going on behind the scenes to make sure we get reliable results without sacrificing users’ privacy, while remaining resistant to tampering and replay attacks. The system is currently in private beta and being evaluated for its effectiveness, and we already see very promising results. Stay tuned for this new detection mechanism becoming widely available and the blog on how we’ve built it.</p><p>This concludes an overview of the five detection mechanisms we’ve built so far. It’s time to sum it all up!</p>
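<p>The challenge-response shape can be sketched as: the client renders the challenge (e.g. via the canvas API) and returns a digest of the pixel data, which the server can compare against digests expected for the claimed browser. Everything below is an assumption for illustration - FNV-1a stands in for a real hash, and the pixel bytes are faked since there is no canvas outside a browser:</p>

```javascript
// FNV-1a over a byte array: a stand-in digest for the rendered
// challenge output. Not Cloudflare's actual scheme.
function fnv1a(bytes) {
  let hash = 0x811c9dc5;
  for (const b of bytes) {
    hash ^= b;
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

// Fake "rendered pixels" for the sketch:
const fakePixels = new Uint8Array([255, 0, 128, 7, 42]);
const digest = fnv1a(fakePixels);

// Same rendering pipeline => same digest, so the server can check the
// digest against ones previously seen for that browser/device type.
console.log(digest === fnv1a(fakePixels)); // true
```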
    <div>
      <h3>Summary</h3>
      <a href="#summary">
        
      </a>
    </div>
    <p>Cloudflare has the unique ability to collect data from trillions of requests flowing through its network every week. With this data, Cloudflare is able to identify likely bot activity with Machine Learning, Heuristics, Behavioral Analysis, and other detection mechanisms. Cloudflare Bot Management integrates seamlessly with other Cloudflare products, such as the WAF and Workers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7thZNko18uGAk7PKwxuG8s/1baf723065de6b686c81d139fbaa0163/image1-1.png" />
            
            </figure><p>All this could not be possible without hard work across multiple teams! First of all thanks to everybody on the Bots Team for their tremendous efforts to make this platform come to life. Other Cloudflare teams, most notably: Firewall, Data, Solutions Engineering, Performance, SRE, helped us a lot to design, build and support this incredible platform.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ox1rh9KPNDIy5h37liIEO/eadc3dc802b748ccc9279bc01b89b508/image11-1.jpg" />
            
            </figure><p>Bots team during Austin team summit 2019 hunting bots with axes :)</p><p>Lastly, there are more blogs from the Bots series coming soon, diving into internals of our detection mechanisms, so stay tuned for more exciting stories about Cloudflare Bot Management!</p> ]]></content:encoded>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Firewall]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Salt]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <guid isPermaLink="false">5yHu3o1Z2Z1reLsRB0q026</guid>
            <dc:creator>Alex Bocharov</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTP Analytics for 6M requests per second using ClickHouse]]></title>
            <link>https://blog.cloudflare.com/http-analytics-for-6m-requests-per-second-using-clickhouse/</link>
            <pubDate>Tue, 06 Mar 2018 13:00:00 GMT</pubDate>
            <description><![CDATA[ One of our large scale data infrastructure challenges here at Cloudflare is around providing HTTP traffic analytics to our customers. HTTP Analytics is available to all our customers via two options: ]]></description>
            <content:encoded><![CDATA[ <p>One of our large scale data infrastructure challenges here at Cloudflare is around providing HTTP traffic analytics to our customers. HTTP Analytics is available to all our customers via two options:</p><ul><li><p>Analytics tab in Cloudflare dashboard</p></li><li><p>Zone Analytics API with 2 endpoints</p><ul><li><p><a href="https://api.cloudflare.com/#zone-analytics-dashboard">Dashboard endpoint</a></p></li><li><p><a href="https://api.cloudflare.com/#zone-analytics-analytics-by-co-locations">Co-locations endpoint</a> (Enterprise plan only)</p></li></ul></li></ul><p>In this blog post I'm going to talk about the exciting evolution of the Cloudflare analytics pipeline over the last year. I'll start with a description of the old pipeline and the challenges that we experienced with it. Then, I'll describe how we leveraged ClickHouse to form the basis of a new and improved pipeline. In the process, I'll share details about how we went about schema design and performance tuning for ClickHouse. Finally, I'll look forward to what the Data team is thinking of providing in the future.</p><p>Let's start with the old data pipeline.</p>
    <div>
      <h3>Old data pipeline</h3>
      <a href="#old-data-pipeline">
        
      </a>
    </div>
    <p>The previous pipeline was built in 2014. It has been mentioned previously in <a href="https://blog.cloudflare.com/scaling-out-postgresql-for-cloudflare-analytics-using-citusdb/">Scaling out PostgreSQL for CloudFlare Analytics using CitusDB</a> and <a href="https://blog.cloudflare.com/more-data-more-data/">More data, more data</a> blog posts from the Data team.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5MY5UJgdkM35pwDy4mMOxt/2c08b73e37001788547db620f00a5a92/Old-system-architecture.jpg" />
          </figure><p>It had the following components:</p><ul><li><p><b>Log forwarder </b>- collected Cap'n Proto formatted logs from the edge, notably DNS and Nginx logs, and shipped them to Kafka in Cloudflare's central datacenter.</p></li><li><p><b>Kafka cluster </b>- consisted of 106 brokers with a x3 replication factor and 106 partitions, ingesting Cap'n Proto formatted logs at an average rate of 6M logs per second.</p></li><li><p><b>Kafka consumers</b> - each of the 106 partitions had a dedicated Go consumer (a.k.a. Zoneagg consumer), which read logs and produced aggregates per partition per zone per minute, and then wrote them into Postgres.
</p></li><li><p><b>Postgres database</b> - a single-instance PostgreSQL database (a.k.a. RollupDB), which accepted aggregates from Zoneagg consumers and wrote them into temporary tables per partition per minute. It then rolled up the aggregates into further aggregates with an aggregation cron. More specifically:</p><ul><li><p>Aggregates per partition, minute, zone → aggregates data per minute, zone</p></li><li><p>Aggregates per minute, zone → aggregates data per hour, zone</p></li><li><p>Aggregates per hour, zone → aggregates data per day, zone</p></li><li><p>Aggregates per day, zone → aggregates data per month, zone</p></li></ul></li><li><p><b>Citus Cluster</b> - consisted of a Citus main and 11 Citus workers with a x2 replication factor (a.k.a. Zoneagg Citus cluster), the storage behind the Zone Analytics API and our internal BI tools. It had a replication cron, which did a remote copy of tables from the Postgres instance into Citus worker shards.</p></li><li><p><b>Zone Analytics API</b> - served queries from the internal PHP API. It consisted of 5 API instances written in Go which queried the Citus cluster, and was not visible to external users.</p></li><li><p><b>PHP API </b>- 3 instances of a proxying API, which forwarded public API queries to the internal Zone Analytics API, and had some business logic on zone plans, error messages, etc.</p></li><li><p><b>Load Balancer </b>- an nginx proxy, which forwarded queries to the PHP API/Zone Analytics API.</p></li></ul><p>Cloudflare has grown tremendously since this pipeline was originally designed in 2014. It started off processing under 1M requests per second and grew to current levels of 6M requests per second. The pipeline had served us and our customers well over the years, but began to split at the seams. 
Any system should be re-engineered after some time, when requirements change.</p><p>Some specific disadvantages of the original pipeline were:</p><ul><li><p><b>Postgres SPOF</b> - the single PostgreSQL instance was a SPOF (Single Point of Failure), as it didn't have replicas or backups, and if we were to lose this node, the whole analytics pipeline could be paralyzed and produce no new aggregates for the Zone Analytics API.</p></li><li><p><b>Citus main SPOF</b> - the Citus main was the entrypoint for all Zone Analytics API queries, and if it went down, all our customers' Analytics API queries would return errors.</p></li><li><p><b>Complex codebase</b> - thousands of lines of bash and SQL for aggregations, and thousands of lines of Go for the API and Kafka consumers, made the pipeline difficult to maintain and debug.</p></li><li><p><b>Many dependencies</b> - the pipeline consisted of many components, and failure in any individual component could result in halting the entire pipeline.</p></li><li><p><b>High maintenance cost</b> - due to its complex architecture and codebase, there were frequent incidents, which sometimes took engineers from the Data team and other teams many hours to mitigate.</p></li></ul><p>Over time, as our request volume grew, the challenges of operating this pipeline became more apparent, and we realized that this system was being pushed to its limits. This realization inspired us to think about which components would be ideal candidates for replacement, and led us to build a new data pipeline.</p><p>Our first design of an improved analytics pipeline centred around the use of the <a href="https://flink.apache.org/">Apache Flink</a> stream processing system. We had previously used Flink for other data pipelines, so it was a natural choice for us. 
However, those pipelines had operated at a much lower rate than the 6M requests per second we needed to process for HTTP Analytics, and we struggled to get Flink to scale to this volume - it just couldn't keep up with the per-partition ingestion rate across all 6M HTTP requests per second.</p><p>Our colleagues on the DNS team had already built and productionized a DNS analytics pipeline atop ClickHouse. They wrote about it in the <a href="https://blog.cloudflare.com/how-cloudflare-analyzes-1m-dns-queries-per-second/">"How Cloudflare analyzes 1M DNS queries per second"</a> blog post. So, we decided to take a deeper look at ClickHouse.</p>
    <div>
      <h3>ClickHouse</h3>
      <a href="#clickhouse">
        
      </a>
    </div>
    <blockquote><p>"ClickHouse не тормозит."
Translation from Russian: ClickHouse doesn't have brakes (or isn't slow)
© ClickHouse core developers</p></blockquote><p>When exploring additional candidates for replacing some of the key infrastructure of our old pipeline, we realized that a column-oriented database might be well suited to our analytics workloads. We wanted to identify a column-oriented database that was horizontally scalable and fault tolerant, to help us deliver good uptime guarantees, and extremely performant and space efficient, such that it could handle our scale. We quickly realized that ClickHouse could satisfy these criteria, and then some.</p><p><a href="https://clickhouse.yandex/">ClickHouse</a> is an open source column-oriented database management system capable of real-time generation of analytical data reports using SQL queries. It is blazing fast, linearly scalable, hardware efficient, fault tolerant, feature rich, highly reliable, simple and handy. The ClickHouse core developers provide great help with solving issues, and with merging and maintaining our PRs into ClickHouse. For example, engineers from Cloudflare have contributed a whole bunch of code back upstream:</p><ul><li><p>Aggregate function <a href="https://clickhouse.com/docs/en/sql-reference/aggregate-functions/reference/topk">topK</a> by <a href="https://github.com/vavrusa">Marek Vavruša</a></p></li><li><p>IP prefix dictionary by Marek Vavruša</p></li><li><p>SummingMergeTree engine optimizations by Marek Vavruša</p></li><li><p><a href="https://clickhouse.com/docs/en/engines/table-engines/integrations/kafka">Kafka table engine</a> by Marek Vavruša. We're considering replacing the Kafka Go consumers with this engine once it's stable enough, so we can ingest from Kafka into ClickHouse directly.</p></li><li><p>Aggregate function <a href="https://clickhouse.yandex/docs/en/single/index.html#summapkey-value">sumMap</a> by <a href="https://github.com/bocharov">Alex Bocharov</a>. 
Without this function it would be impossible to build our new Zone Analytics API.</p></li><li><p><a href="https://github.com/yandex/ClickHouse/pull/1636">Mark cache fix</a> by Alex Bocharov</p></li><li><p><a href="https://github.com/yandex/ClickHouse/pull/1844">uniqHLL12 function fix</a> for big cardinalities by Alex Bocharov. The description of the issue and its fix makes for interesting reading.</p></li></ul>
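<p>To give an idea of what sumMap does, here is a minimal, hypothetical sketch (table and column names are made up, not our schema): it merges arrays of key/value pairs, summing the values for matching keys.</p>
<pre><code>-- Hypothetical sketch of sumMap semantics: per-row arrays of
-- (HTTP status, request count) pairs are merged by key, and the
-- counts for matching keys are summed.
SELECT sumMap(statuses, requests)
FROM
(
    SELECT [200, 404] AS statuses, [10, 2] AS requests
    UNION ALL
    SELECT [200, 500], [5, 1]
)
-- result: ([200, 404, 500], [15, 2, 1])
</code></pre>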
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mN4cq6YrpiHjBfbP6vpCh/7181a8e68b4a63cd48e42e0eaf807191/ClickHouse-uniq-functions.png" />
</figure><p>Along with filing many bug reports, we also report every issue we face in our cluster, which we hope will help to improve ClickHouse in the future.</p><p>Even though DNS analytics on ClickHouse had been a great success, we were still skeptical that we would be able to scale ClickHouse to the needs of the HTTP pipeline:</p><ul><li><p>The Kafka DNS topic has on average 1.5M messages per second vs 6M messages per second for the HTTP requests topic.</p></li><li><p>The Kafka DNS topic's average uncompressed message size is 130B vs 1630B for the HTTP requests topic.</p></li><li><p>A DNS query ClickHouse record consists of 40 columns vs 104 columns for an HTTP request ClickHouse record.</p></li></ul><p>After our unsuccessful attempts with Flink, we were skeptical of ClickHouse being able to keep up with the high ingestion rate. Luckily, an early prototype showed promising performance and we decided to proceed with replacing the old pipeline. The first step in replacing the old pipeline was to design a schema for the new ClickHouse tables.</p>
    <div>
      <h3>ClickHouse schema design</h3>
      <a href="#clickhouse-schema-design">
        
      </a>
    </div>
<p>Once we identified ClickHouse as a potential candidate, we began exploring how we could port our existing Postgres/Citus schemas to make them compatible with ClickHouse.</p><p>For our <a href="https://api.cloudflare.com/#zone-analytics-dashboard">Zone Analytics API</a> we need to produce many different aggregations for each zone (domain) and time period (minutely / hourly / daily / monthly). For a deeper dive into the specifics of these aggregates, please see the Zone Analytics API documentation or this handy <a href="https://docs.google.com/spreadsheets/d/1zQ3yI3HB2p8hiM-Jwvq1-MaeEyIouix2I-iUAPZtJYw/edit#gid=1788221216">spreadsheet</a>.</p><p>These aggregations should be available for any time range within the last 365 days. While ClickHouse is a really great tool for working with non-aggregated data, at our volume of 6M requests per second we just cannot afford yet to store non-aggregated data for that long.</p><p>To give you an idea of how much data that is, here is some "napkin-math" capacity planning. I'm going to use an average insertion rate of 6M requests per second and $100 as the cost estimate for 1 TiB, to calculate the storage cost for 1 year in different message formats:</p><table><tr><th><p><b>Metric</b></p></th><th><p><b>Cap'n Proto</b></p></th><th><p><b>Cap'n Proto (zstd)</b></p></th><th><p><b>ClickHouse</b></p></th></tr><tr><td><p>Avg message/record size</p></td><td><p>1630 B</p></td><td><p>360 B</p></td><td><p>36.74 B</p></td></tr><tr><td><p>Storage requirements for 1 year</p></td><td><p>273.93 PiB</p></td><td><p>60.5 PiB</p></td><td><p>18.52 PiB (RF x3)</p></td></tr><tr><td><p>Storage cost for 1 year</p></td><td><p>$28M</p></td><td><p>$6.2M</p></td><td><p>$1.9M</p></td></tr></table><p>And that is if we assume that requests per second stay the same, when in fact the rate is growing fast all the time.</p><p>Even though the storage requirements are quite scary, we're still considering storing raw (non-aggregated) request logs in ClickHouse for 1 month+. See the "Future of Data APIs" section below.</p>
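<p>As a sanity check, the napkin math above can be reproduced as a single ClickHouse query (figures taken from the table; pow(2, 50) bytes per PiB):</p>
<pre><code>-- Reproducing the capacity planning table above:
-- 6M records/s for one year, at each format's average record size.
SELECT
    format,
    round(6e6 * bytes_per_record * 86400 * 365 / pow(2, 50), 2) AS PiB_per_year
FROM
(
    SELECT 'Cap\'n Proto' AS format, 1630 AS bytes_per_record
    UNION ALL SELECT 'Cap\'n Proto (zstd)', 360
    UNION ALL SELECT 'ClickHouse (RF x3)', 36.74 * 3
)
-- 273.93, 60.5 and 18.52 PiB respectively, matching the table
</code></pre>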
    <div>
      <h4>Non-aggregated requests table</h4>
      <a href="#non-aggregated-requests-table">
        
      </a>
    </div>
<p>We store <a href="https://docs.google.com/spreadsheets/d/1zQ3yI3HB2p8hiM-Jwvq1-MaeEyIouix2I-iUAPZtJYw/edit?usp=sharing">100+ columns</a>, collecting lots of different kinds of metrics about each request passing through Cloudflare. Some of these columns are also available in our <a href="https://support.cloudflare.com/hc/en-us/articles/216672448-Enterprise-Log-Share-Logpull-REST-API">Enterprise Log Share</a> product; however, the ClickHouse non-aggregated requests table has more fields.</p><p>With so many columns to store and huge storage requirements, we decided to proceed with the aggregated-data approach, which had worked well for us before in the old pipeline and which would provide us with backward compatibility.</p>
    <div>
      <h4>Aggregations schema design #1</h4>
      <a href="#aggregations-schema-design-1">
        
      </a>
    </div>
<p>According to the <a href="https://api.cloudflare.com/#zone-analytics-dashboard">API documentation</a>, we need to provide lots of different request breakdowns, and to satisfy these requirements we decided to test the following approach:</p><ol><li><p>Create ClickHouse <a href="https://clickhouse.com/docs/en/sql-reference/statements/create/view">materialized views</a> with the <a href="https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree">ReplicatedAggregatingMergeTree</a> engine pointing to the non-aggregated requests table and containing minutely aggregated data for each of the breakdowns:</p><ul><li><p><b>Requests totals</b> - containing numbers like total requests, bytes, threats, uniques, etc.</p></li><li><p><b>Requests by colo</b> - containing requests, bytes, etc. broken down by edgeColoId - each of 120+ Cloudflare datacenters</p></li><li><p><b>Requests by http status</b> - containing a breakdown by HTTP status code, e.g. 200, 404, 500, etc.</p></li><li><p><b>Requests by content type</b> - containing a breakdown by response content type, e.g. HTML, JS, CSS, etc.</p></li><li><p><b>Requests by country</b> - containing a breakdown by client country (based on IP)</p></li><li><p><b>Requests by threat type</b> - containing a breakdown by threat type</p></li><li><p><b>Requests by browser</b> - containing a breakdown by browser family, extracted from the user agent</p></li><li><p><b>Requests by ip class</b> - containing a breakdown by client IP class</p></li></ul></li><li><p>Write code gathering data from all 8 materialized views, using two approaches:</p><ul><li><p>Querying all 8 materialized views at once using JOIN</p></li><li><p>Querying each of the 8 materialized views separately, in parallel</p></li></ul></li><li><p>Run a performance benchmark against common Zone Analytics API queries</p></li></ol>
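<p>As a concrete sketch of step 1 (illustrative only - the table and column names are hypothetical, not our actual schema, and current ClickHouse DDL syntax is shown; production tables would use the Replicated* engine variants), one such materialized view might look like this:</p>
<pre><code>-- Illustrative sketch: a minutely aggregate broken down by HTTP
-- status, fed continuously from the non-aggregated requests table.
CREATE MATERIALIZED VIEW requests_by_http_status_mv
ENGINE = AggregatingMergeTree
ORDER BY (zoneId, dateTime, status)
AS SELECT
    zoneId,
    toStartOfMinute(timestamp) AS dateTime,
    edgeResponseStatus AS status,
    countState() AS requests,            -- AggregateFunction states,
    sumState(edgeResponseBytes) AS bytes -- merged at query time
FROM requests
GROUP BY zoneId, dateTime, status
</code></pre>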
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iQvOQSik5xLDR7lkG0X9H/ee505f826947f6844c3e001bb83c7e7b/Schema-design--1-1.jpg" />
</figure><p>Schema design #1 didn't work out well. ClickHouse's JOIN syntax forced us to write a monstrous query of over 300 lines of SQL, repeating the selected columns many times, because you can only do <a href="https://github.com/yandex/ClickHouse/issues/873">pairwise joins</a> in ClickHouse.</p><p>As for querying each of the materialized views separately in parallel, the benchmark showed promising, but moderate, results: query throughput would be only a little bit better than with our Citus-based old pipeline.</p>
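<p>To give a flavour of the pairwise-join problem, here is an illustrative sketch (with hypothetical view names): combining even three views requires nesting joins, and every column selected so far must be repeated at each level.</p>
<pre><code>-- Illustrative only: each extra view adds another level of nesting,
-- and the columns already selected must be repeated at every level.
SELECT dateTime, requests, bytes, threats
FROM
(
    SELECT dateTime, requests, bytes
    FROM (SELECT dateTime, requests FROM requests_totals)
    ALL INNER JOIN (SELECT dateTime, bytes FROM requests_by_bytes)
    USING dateTime
)
ALL INNER JOIN (SELECT dateTime, threats FROM requests_by_threats)
USING dateTime
</code></pre>
<p>With 8 views and dozens of columns each, this pattern balloons past 300 lines of SQL.</p>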
    <div>
      <h4>Aggregations schema design #2</h4>
      <a href="#aggregations-schema-design-2">
        
      </a>
    </div>
<p>In our second iteration of the schema design, we strove to keep a similar structure to our existing Citus tables. To do this, we experimented with the SummingMergeTree engine, which is described in detail by the excellent ClickHouse <a href="https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/summingmergetree">documentation</a>:</p><blockquote><p>In addition, a table can have nested data structures that are processed in a special way. If the name of a nested table ends in 'Map' and it contains at least two columns that meet the following criteria... then this nested table is interpreted as a mapping of key =&gt; (values...), and when merging its rows, the elements of two data sets are merged by 'key' with a summation of the corresponding (values...).</p></blockquote><p>We were pleased to find this feature, because the SummingMergeTree engine allowed us to significantly reduce the number of tables required as compared to our initial approach, while still matching the structure of our existing Citus tables. The reason was that the ClickHouse Nested structure ending in 'Map' is similar to the <a href="https://www.postgresql.org/docs/9.6/static/hstore.html">Postgres hstore</a> data type, which we used extensively in the old pipeline.</p><p>However, there were two issues with ClickHouse maps at the time:</p><ul><li><p>SummingMergeTree performs aggregation for all records with the same primary key, but the final aggregation across all shards needs to be done using a matching aggregate function, which didn't exist in ClickHouse.</p></li><li><p>For storing uniques (unique visitors based on IP), we need to use the AggregateFunction data type, and although SummingMergeTree allows you to create a column with such a data type, it will not perform aggregation on it for records with the same primary key.</p></li></ul><p>To resolve problem #1, we had to create a new aggregate function, <a href="https://clickhouse.yandex/docs/en/single/index.html#summapkey-value">sumMap</a>. Luckily, the ClickHouse source code is of excellent quality and its core developers are very helpful with reviewing and merging requested changes.</p><p>As for problem #2, we had to put uniques into a separate materialized view, which uses the ReplicatedAggregatingMergeTree engine and supports merging of AggregateFunction states for records with the same primary key. We're considering adding the same functionality to SummingMergeTree, which would simplify our schema even more.</p><p>We also created a separate materialized view for the Colo endpoint, because it has much lower usage (5% of queries go to the Colo endpoint, 95% to the Zone dashboard), so its more dispersed primary key will not affect the performance of Zone dashboard queries.</p>
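<p>A sketch (again with hypothetical table and column names, in current ClickHouse DDL syntax) of how a Nested column ending in 'Map' works together with sumMap:</p>
<pre><code>-- Hypothetical sketch: a SummingMergeTree table with a Nested
-- column whose name ends in 'Map'. On background merges, rows with
-- the same primary key are collapsed, and the per-key values inside
-- the map are summed.
CREATE TABLE requests_by_http_status
(
    date Date,
    zoneId UInt32,
    statusMap Nested
    (
        status UInt16,
        requests UInt64
    )
)
ENGINE = SummingMergeTree
ORDER BY (zoneId, date)

-- The final aggregation across shards then uses sumMap:
SELECT zoneId, sumMap(statusMap.status, statusMap.requests)
FROM requests_by_http_status
GROUP BY zoneId
</code></pre>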
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ZmeMrcqIgAI99UJgfx1HK/80923c6f467d95a1bd8d74e838f91d99/Schema-design--2.jpg" />
</figure><p>Once the schema design was acceptable, we proceeded to performance testing.</p>
    <div>
      <h3>ClickHouse performance tuning</h3>
      <a href="#clickhouse-performance-tuning">
        
      </a>
    </div>
<p>We explored a number of avenues for performance improvement in ClickHouse. These included tuning index granularity and improving the merge performance of the SummingMergeTree engine.</p><p>By default, ClickHouse recommends an index granularity of 8192. There is a <a href="https://medium.com/@f1yegor/clickhouse-primary-keys-2cf2a45d7324">nice article</a> explaining ClickHouse primary keys and index granularity in depth.</p><p>While the default index granularity might be an excellent choice for most use cases, in our case we decided to choose the following index granularities:</p><ul><li><p>For the main non-aggregated requests table we chose an index granularity of 16384. For this table, the number of rows read in a query is typically on the order of millions to billions. In this case, a large index granularity does not make a huge difference to query performance.</p></li><li><p>For the aggregated requests_* tables, we chose an index granularity of 32. A low index granularity makes sense when we only need to scan and return a few rows. It made a huge difference to API performance: query latency decreased by 50% and throughput increased by ~3 times when we changed the index granularity from 8192 to 32.</p></li></ul><p>Not relevant to performance, but we also disabled the min_execution_speed setting, so that queries scanning just a few rows don't return an exception because of a "slow" rows-per-second scan speed.</p><p>On the aggregation/merge side, we've made some ClickHouse optimizations as well, like <a href="https://github.com/yandex/ClickHouse/pull/1330">increasing SummingMergeTree maps merge speed</a> by 7x, which we contributed back into ClickHouse for everyone's benefit.</p><p>Once we had completed the performance tuning for ClickHouse, we could bring it all together into a new data pipeline. Next, we describe the architecture of our new, ClickHouse-based data pipeline.</p>
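<p>For reference, index granularity is set per table; in current ClickHouse DDL syntax (with a hypothetical table, not our actual schema) it looks like this:</p>
<pre><code>-- The aggregated tables use a very low index granularity, because
-- API queries typically scan and return only a handful of rows.
CREATE TABLE requests_minutely_sketch
(
    zoneId UInt32,
    dateTime DateTime,
    requests UInt64
)
ENGINE = SummingMergeTree
ORDER BY (zoneId, dateTime)
SETTINGS index_granularity = 32
</code></pre>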
    <div>
      <h3>New data pipeline</h3>
      <a href="#new-data-pipeline">
        
      </a>
    </div>
<p>The new pipeline architecture reuses some of the components of the old pipeline, but replaces its weakest components.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UaoX6fPvFuhN1ecJpbe6v/8aee3bd1ab395cb4a4ac2b8db9575a23/New-system-architecture.jpg" />
</figure><p>New components include:</p><ul><li><p><b>Kafka consumers</b> - 106 Go consumers (one per Kafka partition) consume Cap'n Proto raw logs and extract/prepare the needed 100+ ClickHouse fields. The consumers no longer do any aggregation logic.</p></li><li><p><b>ClickHouse cluster</b> - 36 nodes with an x3 replication factor. It handles ingestion of the non-aggregated request logs and then produces aggregates using materialized views.</p></li><li><p><b>Zone Analytics API</b> - a rewritten and optimized version of the API in Go, with many meaningful metrics, healthchecks, and failover scenarios.</p></li></ul><p>As you can see, the architecture of the new pipeline is much simpler and more fault-tolerant. It provides analytics for all of our 7M+ customers' domains, totalling more than 2.5 billion monthly unique visitors and over 1.5 trillion monthly page views.</p><p>On average we process 6M HTTP requests per second, with peaks of up to 8M requests per second.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/zjubA3CgetykRE8Angxln/7509dd5dea82ad7408339a5f78561b41/HTTP-Logfwdr-throughput.png" />
</figure><p>The average log message size in <a href="https://capnproto.org/">Cap’n Proto</a> format used to be ~1630B, but thanks to an amazing job on Kafka compression by our Platform Operations team, it has decreased significantly. Please see the <a href="https://blog.cloudflare.com/squeezing-the-firehose/">"Squeezing the firehose: getting the most from Kafka compression"</a> blog post for a deeper dive into those optimisations.</p>
    <div>
      <h4>Benefits of new pipeline</h4>
      <a href="#benefits-of-new-pipeline">
        
      </a>
    </div>
<ul><li><p><b>No SPOF</b> - removed all SPOFs and bottlenecks; everything has at least an x3 replication factor.</p></li><li><p><b>Fault-tolerant</b> - it's more fault-tolerant: even if a Kafka consumer, a ClickHouse node, or a Zone Analytics API instance fails, it doesn't impact the service.</p></li><li><p><b>Scalable</b> - we can add more Kafka brokers or ClickHouse nodes and scale ingestion as we grow. We are not so confident about query performance when the cluster grows to hundreds of nodes. However, the Yandex team managed to scale their cluster to 500+ nodes, distributed geographically between several data centers, using two-level sharding.</p></li><li><p><b>Reduced complexity</b> - by removing the messy crons and consumers which were doing aggregations, and by <a href="https://www.cloudflare.com/learning/cloud/how-to-refactor-applications/">refactoring</a> the API code, we were able to:</p><ul><li><p>Shut down the Postgres RollupDB instance and free it up for reuse.</p></li><li><p>Shut down the 12-node Citus cluster and free it up for reuse. As we won't use Citus for serious workloads anymore, we can reduce our operational and support costs.</p></li><li><p>Delete tens of thousands of lines of old Go, SQL, Bash, and PHP code.</p></li><li><p>Remove the WWW PHP API dependency and its extra latency.</p></li></ul></li><li><p><b>Improved API throughput and latency</b> - with the previous pipeline, the Zone Analytics API struggled to serve more than 15 queries per second, so we had to introduce temporary hard rate limits for the largest users. With the new pipeline we were able to remove the hard rate limits, and now we are serving ~40 queries per second. We went further and did intensive load testing of the new API: with the current setup and hardware we are able to serve up to ~150 queries per second, and this is scalable with additional nodes.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1TCHFdGAndAank2N3agHQj/5db6811ef0918e8fc66b8543531f9733/Zone-Analytics-API-requests-latency-quantiles.png" />
</figure><p></p></li><li><p><b>Easier to operate</b> - with the shutdown of many unreliable components, we are finally at the point where it's relatively easy to operate this pipeline. ClickHouse's quality helps us a lot in this matter.</p></li><li><p><b>Fewer incidents</b> - with the new, more reliable pipeline, we now have fewer incidents than before, which has ultimately reduced the on-call burden. Finally, we can sleep peacefully at night :-).</p></li></ul><p>Recently, we've improved the throughput and latency of the new pipeline even further with better hardware. I'll provide details about this cluster below.</p>
    <div>
      <h4>Our ClickHouse cluster</h4>
      <a href="#our-clickhouse-cluster">
        
      </a>
    </div>
<p>In total we have 36 ClickHouse nodes. The new hardware is a big upgrade for us:</p><ul><li><p><b>Chassis</b> - Quanta D51PH-1ULH chassis instead of Quanta D51B-2U chassis (2x less physical space)</p></li><li><p><b>CPU</b> - 40 logical cores E5-2630 v4 @ 2.20 GHz instead of 32 logical cores E5-2630 v3 @ 2.40 GHz</p></li><li><p><b>RAM</b> - 256 GB RAM instead of 128 GB RAM</p></li><li><p><b>Disks</b> - 12 x 10 TB Seagate ST10000NM0016-1TT101 disks instead of 12 x 6 TB Toshiba MG04ACA600E disks</p></li><li><p><b>Network</b> - 2 x 25G Mellanox ConnectX-4 in MC-LAG instead of 2 x 10G Intel 82599ES</p></li></ul><p>Our Platform Operations team noticed that ClickHouse is not great at running heterogeneous clusters yet, so we need to gradually replace all 36 nodes in the existing cluster with the new hardware. The process is fairly straightforward; it's no different from replacing a failed node. The problem is that <a href="https://github.com/yandex/ClickHouse/issues/1821">ClickHouse doesn't throttle recovery</a>.</p><p>Here is more information about our cluster:</p><ul><li><p><b>Avg insertion rate</b> - all our pipelines together insert 11M rows per second.</p></li><li><p><b>Avg insertion bandwidth</b> - 47 Gbps.</p></li><li><p><b>Avg queries per second</b> - on average the cluster serves ~40 queries per second, with frequent peaks of up to ~80 queries per second.</p></li><li><p><b>CPU time</b> - after the recent hardware upgrade and all the optimizations, our cluster's CPU time is quite low.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61sIThxxM4s9mgA8nQSibn/4e09df1a744f0c1e2cd92b8bf5bfdd5f/ClickHouse-CPU-usage.png" />
          </figure><p></p></li><li><p><b>Max disk IO</b> (device time) - it's low as well.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1rNrJvGvXd6VNvPRH2qe0l/f7813d61e834b420515047062e25e791/Max-disk-IO.png" />
          </figure><p></p><p></p></li></ul><p>In order to make the switch to the new pipeline as seamless as possible, we performed a transfer of historical data from the old pipeline. Next, I discuss the process of this data transfer.</p>
    <div>
      <h4>Historical data transfer</h4>
      <a href="#historical-data-transfer">
        
      </a>
    </div>
<p>As we have 1-year storage requirements, we had to do a one-time ETL (Extract, Transform, Load) from the old Citus cluster into ClickHouse.</p><p>At Cloudflare we love Go and its goroutines, so it was quite straightforward to write a simple ETL job which:</p><ul><li><p>For each minute/hour/day/month extracts data from the Citus cluster</p></li><li><p>Transforms the Citus data into the ClickHouse format and applies the needed business logic</p></li><li><p>Loads the data into ClickHouse</p></li></ul><p>The whole process took a couple of days, and 60+ billion rows of data were transferred successfully, with consistency checks. The completion of this process finally led to the shutdown of the old pipeline. However, our work does not end there, and we are constantly looking to the future. In the next section, I'll share some details about what we are planning.</p>
    <div>
      <h3>Future of Data APIs</h3>
      <a href="#future-of-data-apis">
        
      </a>
    </div>
    
    <div>
      <h4>Log Push</h4>
      <a href="#log-push">
        
      </a>
    </div>
<p>We're currently working on something called "Log Push". Log Push allows you to specify a desired data endpoint and have your HTTP request logs sent there automatically at regular intervals. At the moment, it's in private beta and is going to support sending logs to:</p><ul><li><p>Amazon S3 bucket</p></li><li><p>Google Cloud Storage bucket</p></li><li><p>Other storage services and platforms</p></li></ul><p>It's expected to be generally available soon, but if you are interested in this new product and you want to try it out, please contact our Customer Support team.</p>
    <div>
      <h4>Logs SQL API</h4>
      <a href="#logs-sql-api">
        
      </a>
    </div>
<p>We're also evaluating the possibility of building a new product called Logs SQL API. The idea is to provide customers with access to their logs via a flexible API which supports standard SQL syntax and JSON/CSV/TSV/XML response formats.</p><p>Queries can extract:</p><ul><li><p><b>Raw request log fields</b> (e.g. SELECT field1, field2, ... FROM requests WHERE ...)</p></li><li><p><b>Aggregated data from request logs</b> (e.g. SELECT clientIPv4, count() FROM requests GROUP BY clientIPv4 ORDER BY count() DESC LIMIT 10)</p></li></ul><p>Google BigQuery provides a similar <a href="https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query">SQL API</a>, and Amazon has a product called <a href="https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/analytics-sql-reference.html">Kinesis Data Analytics</a> with SQL API support as well.</p><p>Another option we're exploring is to provide syntax similar to the <a href="https://api.cloudflare.com/#dns-analytics-properties">DNS Analytics API</a>, with filters and dimensions.</p><p>We're excited to hear your feedback and to learn more about your analytics use cases. It can help us a lot in building new products!</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
<p>None of this would have been possible without hard work across multiple teams! First of all, thanks to the other Data team engineers for their tremendous efforts to make this all happen. The Platform Operations team made significant contributions to this project, especially Ivan Babrou and Daniel Dao. Contributions from Marek Vavruša on the DNS team were also very helpful.</p><p>Finally, the Data team at Cloudflare is a small team, so if you're interested in building and operating distributed services, you stand to have some great problems to work on. Check out the <a href="https://boards.greenhouse.io/cloudflare/jobs/613800">Distributed Systems Engineer - Data</a> and <a href="https://boards.greenhouse.io/cloudflare/jobs/688056">Data Infrastructure Engineer</a> roles in London, UK and San Francisco, US, and let us know what you think.</p> ]]></content:encoded>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Data]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Kafka]]></category>
            <category><![CDATA[Cap'n Proto]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[php]]></category>
            <category><![CDATA[Load Balancing]]></category>
            <category><![CDATA[NGINX]]></category>
            <guid isPermaLink="false">6VEE3i8wXN2CDKWJJ16uXS</guid>
            <dc:creator>Alex Bocharov</dc:creator>
        </item>
    </channel>
</rss>