
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 13:01:30 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Google’s AI advantage: why crawler separation is the only path to a fair Internet]]></title>
            <link>https://blog.cloudflare.com/uk-google-ai-crawler-policy/</link>
            <pubDate>Fri, 30 Jan 2026 17:01:04 GMT</pubDate>
            <description><![CDATA[ Google's dual-purpose crawler creates an unfair AI advantage. To protect publishers and foster competition, the UK’s Competition and Markets Authority must mandate crawler separation for search and AI. ]]></description>
<content:encoded><![CDATA[ <p>Earlier this week, the UK’s Competition and Markets Authority (CMA) <a href="https://www.gov.uk/government/news/cma-proposes-package-of-measures-to-improve-google-search-services-in-uk"><u>opened its consultation</u></a> on a package of proposed conduct requirements for Google. The consultation invites comments on the proposed requirements before the CMA imposes any final measures. These new rules aim to address the lack of choice and transparency that publishers (broadly defined as “any party that makes content available on the web”) face over how Google uses search to fuel its generative AI services and features. These are the first consultations on conduct requirements launched under the digital markets competition regime in the UK. </p><p>We welcome the CMA’s recognition that publishers need a fairer deal and believe the proposed rules are a step in the right direction. Publishers should have access to tools that enable them to control the inclusion of their content in generative AI services, and AI companies should have a level playing field on which to compete. </p><p>But we believe the CMA has not gone far enough and should do more to safeguard the UK’s creative sector and foster healthy competition in the market for generative and agentic AI. </p>
    <div>
      <h2>CMA designation of Google as having Strategic Market Status </h2>
      <a href="#cma-designation-of-google-as-having-strategic-market-status">
        
      </a>
    </div>
    <p>In January 2025, the UK’s regulatory landscape underwent a significant legal shift with the implementation of the Digital Markets, Competition and Consumers Act 2024 (DMCC). Rather than relying on antitrust investigations to address risks to competition, the CMA can now designate firms as having Strategic Market Status (SMS) when they hold substantial, entrenched market power. This designation allows for targeted CMA interventions in digital markets, such as imposing detailed conduct requirements, to improve competition. </p><p>In October 2025, the CMA <a href="https://assets.publishing.service.gov.uk/media/68e8b643cf65bd04bad76724/Final_decision_-_strategic_market_status_investigation_into_Google_s_general_search_services.pdf"><u>designated Google</u></a> as having SMS in general search and search advertising, given its 90 percent share of the search market in the UK. Crucially, this designation encompasses AI Overviews and AI Mode, with the CMA now having the authority to impose conduct requirements on Google’s search ecosystem. Final requirements imposed by the CMA are not merely suggestions but legally enforceable rules, backed by significant sanctions, that can relate specifically to AI crawling and ensure that Google operates fairly. </p>
    <div>
      <h2>Publishers need a meaningful way to opt out of Google’s use of their content for generative AI</h2>
      <a href="#publishers-need-a-meaningful-way-to-opt-out-of-googles-use-of-their-content-for-generative-ai">
        
      </a>
    </div>
    <p>The CMA’s designation could not be more timely. As we’ve <a href="https://blog.cloudflare.com/building-a-better-internet-with-responsible-ai-bot-principles/"><u>said before</u></a>, we are indisputably in a time when the Internet needs clear “rules of the road” for AI crawling behavior. </p><p>As the CMA rightly <a href="https://assets.publishing.service.gov.uk/media/6979d0bf75d44370965520a0/Publisher_conduct_requirement.pdf"><u>states</u></a>, “publishers have no realistic option but to allow their content to be crawled for Google’s general search because of the market power Google holds in general search. However, Google currently uses that content in both its search generative AI features and in its broader generative AI services.” </p><p>In other words: the same content that Google scrapes for search indexing is also used for inference/grounding purposes, like AI Overviews and AI Mode, which rely on fetching live information from the Internet in response to real-time user queries. And that creates a big problem for publishers—and for competition.</p><p>Because publishers cannot afford to disallow or block Googlebot, Google’s search crawler, on their website, they have to accept that their content will be used in generative AI applications such as AI Overviews and AI Mode within Google Search that <a href="https://blog.cloudflare.com/crawlers-click-ai-bots-training/"><u>return very little, if any, traffic to their websites</u></a>. This undermines the ad-supported business models that have sustained digital publishing for decades, given the critical role of Google Search in driving human traffic to online advertising. It also means that Google’s generative AI applications enter into direct competition with publishers by reproducing their content, most often without attribution or compensation. </p><p>Publishers’ reluctance to block Google because of its dominance in search gives Google an unfair competitive advantage in the market for generative and agentic AI. Unlike other AI bot operators, Google can use its search crawler to gather data for a variety of AI functions with little fear that its access will be restricted. It has minimal incentive to pay publishers for that data, which it is already getting for free. </p><p>This prevents the emergence of a well-functioning marketplace where AI developers negotiate fair value for content. Instead, other AI companies are disincentivized from coming to the table, as they are structurally disadvantaged by a system that allows one dominant player to bypass compensation entirely. As the CMA itself <a href="https://assets.publishing.service.gov.uk/media/6979d05275d443709655209f/Introduction_to_the_consultation.pdf"><u>recognizes</u></a>, "[b]y not providing sufficient control over how this content is used, Google can limit the ability of publishers to monetise their content, while accessing content for AI-generated results in a way that its competitors cannot match”. </p>
    <div>
      <h2>Google’s advantage</h2>
      <a href="#googles-advantage">
        
      </a>
    </div>
    <p>Cloudflare data validates the concern about Google’s competitive advantage. Based on our data, Googlebot sees significantly more Internet content than its closest peers. </p><p>Over an observed period of two months, Googlebot successfully accessed almost twice as many individual pages as ClaudeBot and GPTBot, three times as many as Meta-ExternalAgent, and more than three times as many as Bingbot. The difference was even more extreme for other popular AI crawlers: for instance, Googlebot saw 167 times more unique pages than PerplexityBot. Of the sampled unique URLs on our network that we observed over the last two months, Googlebot crawled roughly 8%.</p><p><b>In rounded terms, Googlebot sees:</b></p><ul><li><p>~1.70x the number of unique URLs seen by ClaudeBot;</p></li><li><p>~1.76x the number of unique URLs seen by GPTBot;</p></li><li><p>~2.99x the number of unique URLs seen by Meta-ExternalAgent;</p></li><li><p>~3.26x the number of unique URLs seen by Bingbot;</p></li><li><p>~5.09x the number of unique URLs seen by Amazonbot;</p></li><li><p>~14.87x the number of unique URLs seen by Applebot;</p></li><li><p>~23.73x the number of unique URLs seen by Bytespider;</p></li><li><p>~166.98x the number of unique URLs seen by PerplexityBot;</p></li><li><p>~714.48x the number of unique URLs seen by CCBot; and</p></li><li><p>~1801.97x the number of unique URLs seen by archive.org_bot.</p></li></ul><p>Googlebot also stands out in other Cloudflare datasets. </p><p>Even though it ranks as the most active bot by overall traffic, publishers are far less likely to disallow or block Googlebot in their <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><u>robots.txt file</u></a> compared to other crawlers. This is likely due to its importance in driving human traffic to their content—and, as a result, ad revenue—through search. </p><p>Our data shows that almost no website explicitly disallows the dual-purpose Googlebot in full, reflecting how important this bot is to driving traffic via search referrals. (Note that partial disallows often impact certain parts of a website that are irrelevant for search engine optimization, or SEO, such as login endpoints.)</p>
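<p>To make the dilemma concrete, the robots.txt policy most publishers converge on looks something like the sketch below (a hypothetical, simplified example; the crawler names are real, but any real file would be tailored to the site). Single-purpose AI crawlers get disallowed, while the dual-purpose Googlebot must stay allowed:</p><pre><code># Googlebot must remain allowed: disallowing it would also remove
# the site from Google Search results.
User-agent: Googlebot
Allow: /

# Single-purpose AI crawlers can be disallowed without losing search traffic.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Google's separate token for opting out of some generative AI uses. It does
# not stop Googlebot itself from fetching and using the pages for search.
User-agent: Google-Extended
Disallow: /</code></pre>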
<p>
Robots.txt merely allows the expression of crawling preferences; it is not an enforcement mechanism. Publishers rely on “good bots” to comply. To manage crawler access to their sites more effectively—and independently of a given bot’s compliance—publishers can set up a Web Application Firewall (WAF) with specific rules, technically preventing undesired crawlers from accessing their sites. Following the same logic as with robots.txt above, we would expect websites to mostly block other AI crawlers but not Googlebot. </p><p>Indeed, when comparing the numbers for customers using <a href="https://www.cloudflare.com/lp/pg-ai-crawl-control/"><u>AI Crawl Control</u></a>, Cloudflare’s own <a href="https://developers.cloudflare.com/ai-crawl-control/configuration/ai-crawl-control-with-waf/"><u>AI crawler blocking tool</u></a> that is integrated into our Application Security suite, between July 2025 and January 2026, one can see that the number of websites actively blocking other popular AI crawlers (e.g., GPTBot, ClaudeBot) was nearly seven times as high as the number of websites that blocked Googlebot and Bingbot. (Like Googlebot, Bingbot combines search and AI crawling and drives traffic to these sites, but given its small market share in search, its impact is less significant.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/344ATKpYmJHsSRlEtxQen5/2fc5da1211b4fd0189e026f0ec19548f/BLOG-3170_3.png" />
          </figure><p>So we agree with the CMA on the problem statement. But how can publishers be enabled to effectively opt out of Google using their content for its generative AI applications? We share the CMA’s conclusion that “in order to be able to make meaningful decisions about how Google uses their Search Content, (...) publishers need the ability effectively to opt their Search Content out of both Google’s search generative AI features and Google’s broader generative AI services.” </p><p>But we’re concerned that the CMA’s proposal is insufficient.</p>
    <div>
      <h2>CMA’s proposed publisher conduct requirements</h2>
      <a href="#cmas-proposed-publisher-conduct-requirements">
        
      </a>
    </div>
    <p>On January 28, 2026, the CMA published four sets of proposed conduct requirements for Google, including <a href="https://assets.publishing.service.gov.uk/media/6979ceae75d443709655209c/Publisher_conduct_requirement.pdf"><u>conduct requirements related to publishers</u></a>. According to the CMA, the proposed publisher rules are designed to address concerns that publishers (1) lack sufficient choice over how Google uses their content in its AI-generated responses, (2) have limited transparency into Google’s use of that content, and (3) do not get effective attribution for Google’s use of their content. The CMA recognized the importance of these concerns because of the role that Google search plays in finding content online. </p><p>The conduct requirements would mandate that Google grant publishers <a href="https://assets.publishing.service.gov.uk/media/6979d05275d443709655209f/Introduction_to_the_consultation.pdf"><u>"meaningful and effective" </u></a>control over whether their content is used for AI features, like AI Overviews. Google would be prohibited from taking any action that negatively impacts the effectiveness of those control options, such as intentionally downranking the content in search. </p><p>To support informed decision-making, the CMA proposal also requires Google to increase transparency by publishing clear documentation on how it uses crawled content for generative AI and on exactly what its various publisher controls cover in practice. Finally, the proposal would require Google to ensure effective attribution of publisher content and to provide publishers with detailed, disaggregated engagement data—including specific metrics for impressions, clicks, and "click quality"—to help them evaluate the commercial value of allowing their content to be used in AI-generated search summaries.</p>
    <div>
      <h2>The CMA’s proposed remedies are insufficient</h2>
      <a href="#the-cmas-proposed-remedies-are-insufficient">
        
      </a>
    </div>
    <p>Although we support the CMA’s efforts to improve options for publishers, we are concerned that the proposed requirements do not solve the underlying issue: giving publishers fair, transparent choice over how their content is used by Google. Publishers are effectively forced to use Google’s proprietary opt-out mechanisms, tied specifically to the Google platform and subject to conditions set by Google, rather than being granted direct, autonomous control. <b>A framework where the platform dictates the rules, manages the technical controls, and defines the scope of application does not offer “effective control” to content creators or encourage competitive innovation in the market. Instead, it reinforces a state of permanent dependency.</b>  </p><p>Such a framework also reduces choice for publishers. Under a regime of new Google-managed opt-out controls, publishers still cannot use external tools to block Googlebot from accessing their content without jeopardizing their appearance in Search results. Instead, under the current proposal, content creators will still have to allow Googlebot to scrape their websites, with no enforcement mechanisms to deploy and limited visibility if Google does not respect their signaled preferences. Enforcement of these requirements by the CMA, if done properly, will be very onerous, with no guarantee that publishers will trust the solution.</p><p>In fact, Cloudflare has received feedback from its customers that Google’s current proprietary opt-out mechanisms, including Google-Extended and ‘nosnippet’, have failed to prevent content from being used in ways that publishers cannot control. These opt-out tools also do not enable fair compensation for publishers. </p><p>More broadly, as reflected in our proposed <a href="https://blog.cloudflare.com/building-a-better-internet-with-responsible-ai-bot-principles/"><u>responsible AI bot principles</u></a>, we believe that all AI bots should have one distinct purpose and declare it, so that website owners can make clear decisions over who can access their content and why. Unlike its leading competitors, such as OpenAI and Anthropic, Google does not comply with this principle for Googlebot, which is used for multiple purposes (search indexing, AI training, and inference/grounding). Simply requiring Google to develop a new opt-out mechanism would not allow publishers to achieve meaningful control over the use of their content. </p><p>The most effective way to give publishers that necessary control is to require Googlebot to be split up into separate crawlers. That way, publishers could allow crawling for traditional search indexing, which they need to attract traffic to their sites, but block access for unwanted use of their content in generative AI services and features. </p>
    <div>
      <h2>Requiring crawler separation is the only effective solution </h2>
      <a href="#requiring-crawler-separation-is-the-only-effective-solution">
        
      </a>
    </div>
    <p>To ensure a fair digital ecosystem, the CMA must instead empower content owners to prevent Google from accessing their data for particular purposes in the first place, rather than relying on Google-managed workarounds after the crawler has already accessed the content for other purposes. That approach also enables creators to set conditions for access to their content. </p><p>Although the CMA described crawler separation as an “equally effective intervention”, it ultimately rejected mandating separation based on Google’s input that it would be too onerous. We disagree.</p><p>Requiring Google to split up Googlebot by purpose — just like Google already does for its <a href="https://developers.google.com/crawling/docs/crawlers-fetchers/overview-google-crawlers"><u>nearly 20 other crawlers</u></a> — is not only technically feasible, but also a necessary and proportionate remedy that gives website operators the granular control they currently lack, without increasing traffic load from crawlers to their websites (and in fact, perhaps even decreasing it, should they choose to block AI crawling).</p><p>To be clear, a crawler separation remedy benefits AI companies, by leveling the playing field between them and Google, in addition to giving UK-based publishers more control over their content. (There has been widespread public support for a crawler separation remedy from Daily Mail Group, the Guardian, and the News Media Association.) Mandatory crawler separation is not a disadvantage to Google, nor does it undermine investment in AI. On the contrary, it is a pro-competitive safeguard that prevents Google from leveraging its search monopoly to gain an unfair advantage in the AI market. By decoupling these functions, we ensure that AI development is driven by fair-market competition rather than the exploitation of a single hyperscaler’s dominance.</p><p>******</p><p>The UK has a unique chance to lead the world in protecting the value of original and high-quality content on the Internet. However, we worry that the current proposals fall short. We would encourage rules that ensure that Google operates under the same conditions for content access as other AI developers, meaningfully restoring agency to publishers and paving the way for new business models promoting content monetization.</p><p>Cloudflare remains committed to engaging with the CMA and other partners during upcoming consultations to provide evidence-based data to help shape a final decision on conduct requirements that are targeted, proportional, and effective. The CMA still has an opportunity to ensure that the Internet becomes a fair marketplace for content creators and smaller AI players—not just a select few tech giants.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Google]]></category>
            <category><![CDATA[Legal]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1csdasmGFE5gWnYFDBbN9j</guid>
            <dc:creator>Maria Palmieri</dc:creator>
            <dc:creator>Sebastian Hufnagel</dc:creator>
        </item>
        <item>
            <title><![CDATA[Policy, privacy and post-quantum: anonymous credentials for everyone]]></title>
            <link>https://blog.cloudflare.com/pq-anonymous-credentials/</link>
            <pubDate>Thu, 30 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ The world is adopting anonymous credentials for digital privacy, but these systems are vulnerable to quantum computers. This post explores the cryptographic challenges and promising research paths toward building new, quantum-resistant credentials from the ground up. ]]></description>
<content:encoded><![CDATA[ <p>The Internet is in the midst of one of the most complex transitions in its history: the migration to <a href="https://www.cloudflare.com/en-gb/pqc/"><u>post-quantum (PQ) cryptography.</u></a> Making a system safe against quantum attackers isn't just a matter of replacing elliptic curves and RSA with PQ alternatives, such as <a href="https://csrc.nist.gov/pubs/fips/203/final"><u>ML-KEM</u></a> and <a href="https://csrc.nist.gov/pubs/fips/204/final"><u>ML-DSA</u></a>. These algorithms have higher costs than their classical counterparts, making them unsuitable as drop-in replacements in many situations.</p><p>Nevertheless, we're <a href="https://blog.cloudflare.com/pq-2025/"><u>making steady progress</u></a> on the most important systems. As of this writing, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption"><u>about 50%</u></a> of <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS connections</u></a> to Cloudflare's edge are safe against <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now/decrypt-later attacks</u></a>. Quantum-safe authentication is further out, as it will require more significant changes to how certificates work. Nevertheless, this year we've <a href="https://blog.cloudflare.com/bootstrap-mtc/"><u>taken a major step</u></a> towards making TLS deployable at scale with PQ certificates.</p><p>That said, TLS is only the lowest-hanging fruit. There are <a href="https://github.com/fancy-cryptography/fancy-cryptography"><u>many more ways</u></a> we have come to rely on cryptography than key exchange and authentication, and they aren’t as easy to migrate. In this blog post, we'll take a look at <b>Anonymous Credentials (ACs)</b>.</p><p>ACs solve a common privacy dilemma: how to prove a specific fact (for example that one has had a valid driver’s license for more than three years) without over-sharing personal information (like the place of birth)? Such problems are fundamental to a number of use cases, and ACs may provide the foundation we need to make these applications as private as possible.</p><p>Just like for TLS, the central question for ACs is whether there are drop-in, PQ replacements for its classical primitives that will work at the scale required, or whether it will be necessary to re-engineer the application to mitigate the cost of PQ.</p><p>We'll take a stab at answering this question in this post. We'll focus primarily on an emerging use case for ACs described in a <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>concurrent post</u></a>: rate-limiting requests from agentic AI platforms and users. This demanding, high-scale use case is the perfect lens through which to evaluate the practical readiness of today's post-quantum research. We'll use it as our guiding problem to measure each cryptographic approach.</p><p>We'll first explore the current landscape of classical AC adoption across the tech industry and the public sector. Then, we’ll discuss what cryptographic researchers are currently looking into on the post-quantum side. Finally, we’ll take a look at what it'll take to bridge the gap between theory and real-world applications.</p><p>While anonymous credentials are only seeing their first real-world deployments in recent years, it is critical to start thinking about the post-quantum challenge concurrently. This isn’t a theoretical, too-soon problem, given the harvest-now/decrypt-later threat.
If we wait for mass adoption before solving post-quantum anonymous credentials, ACs risk being dead on arrival. Fortunately, our survey of the state of the art shows the field is close to a practical solution. Let’s start by reviewing real-world use cases of ACs. </p>
    <div>
      <h2>Real world (classical) anonymous credentials</h2>
      <a href="#real-world-classical-anonymous-credentials">
        
      </a>
    </div>
    <p>In 2026, the European Union is <a href="https://eur-lex.europa.eu/eli/reg/2024/1183/oj"><u>set to launch its digital identity wallet</u></a>, a system that will allow EU citizens, residents and businesses to digitally attest to their personal attributes. This will enable them, for example, to display their driver’s license on their phone or <a href="https://educatedguesswork.org/posts/age-verification-id/"><u>perform age</u></a> <a href="https://soatok.blog/2025/07/31/age-verification-doesnt-need-to-be-a-privacy-footgun/"><u>verification</u></a>. Cloudflare's use cases for ACs are a bit different and revolve around keeping our customers secure by, for example, rate limiting bots and humans as we <a href="https://blog.cloudflare.com/privacy-pass-standard/"><u>currently do with Privacy Pass</u></a>. The EU wallet is a massive undertaking in identity provisioning, and our work operates at an enormous scale of traffic processing. Both initiatives are working to solve a shared fundamental problem: allowing an entity to prove a specific attribute about themselves without compromising their privacy by revealing more than they have to.</p><p>The EU's goal is a fully mobile, secure, and user-friendly digital ID. The current technical plan is ambitious, as laid out in the <a href="https://ec.europa.eu/digital-building-blocks/sites/spaces/EUDIGITALIDENTITYWALLET/pages/900014854/Version+2.0+of+the+Architecture+and+Reference+Framework+now+available"><u>Architecture Reference Framework (ARF)</u></a>. It defines the key privacy goal of unlinkability, which guarantees that if a user presents attributes multiple times, the recipients cannot link these separate presentations to conclude that they concern the same user. However, currently proposed solutions fail to achieve this. The framework correctly identifies the core problem: attestations contain <i>unique, fixed elements such as hash values, […], public keys, and signatures</i> that colluding entities could store and compare to track individuals.</p><p>In its present form, the ARF's recommendation to mitigate cross-session linkability is <i>limited-time attestations</i>. The framework acknowledges in the text that this would <i>only partially mitigate Relying Party linkability</i>. An alternative proposal that would mitigate linkability risks is single-use credentials. They are not considered at the moment due to <i>complexity and management overhead</i>. The framework therefore leans on <i>organisational and enforcement measures</i> to deter collusion instead of providing a stronger guarantee backed by cryptography.</p><p>This reliance on trust assumptions could become problematic, especially in the sensitive context of digital identity. When asked for feedback, <a href="https://github.com/eu-digital-identity-wallet/eudi-doc-architecture-and-reference-framework/issues/200"><u>cryptographic researchers agree</u></a> that the proper solution would be to adopt anonymous credentials. However, this solution presents a long-term challenge. Well-studied methods for anonymous credentials, such as those based on <a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-bbs-signatures/"><u>BBS signatures</u></a>, are vulnerable to quantum computers.
While some <a href="https://datatracker.ietf.org/doc/rfc9474/"><u>anonymous</u></a> <a href="https://datatracker.ietf.org/doc/draft-schlesinger-cfrg-act/"><u>schemes</u></a> are PQ-unlinkable, meaning that user privacy is preserved even when cryptographically relevant quantum computers exist, new credentials could still be forged. This may make them an attractive target for, say, a nation-state actor.</p><p>New cryptography also faces deployment challenges: in the EU, only approved cryptographic primitives, as listed in the <a href="https://www.sogis.eu/documents/cc/crypto/SOGIS-Agreed-Cryptographic-Mechanisms-1.3.pdf"><u>SOG-IS catalogue,</u></a> can be used. At the time of writing, this catalogue is limited to established algorithms such as RSA or ECDSA. But when it comes to post-quantum cryptography, SOG-IS is <a href="https://www.sogis.eu/documents/cc/crypto/SOGIS-Agreed-Cryptographic-Mechanisms-1.3.pdf"><u>leaving the problem wide open</u></a>.</p><p>The wallet's first deployment will not be quantum-secure. However, with the transition to post-quantum algorithms ahead of us, slated for as soon as 2030 for high-risk use cases per <a href="https://digital-strategy.ec.europa.eu/en/library/coordinated-implementation-roadmap-transition-post-quantum-cryptography"><u>the EU roadmap</u></a>, research into a post-quantum alternative for anonymous credentials is critical. This will encompass <i>standardizing more cryptography</i>.</p><p>Regarding existing large-scale deployments, the US has allowed digital ID on smartphones since 2024. They <a href="https://www.tsa.gov/digital-id/participating-states"><u>can be used at TSA checkpoints</u></a>, for instance. The <a href="https://www.dhs.gov/science-and-technology/privacy-preserving-digital-credential-wallets-verifiers"><u>Department of Homeland Security lists funding for six privacy-preserving digital credential wallets and verifiers on their website.</u></a> This early exploration and engagement is a positive sign, and highlights the need to plan for privacy-preserving presentations. </p><p>Finally, ongoing efforts at the Internet Engineering Task Force (IETF) aim to build a more private Internet by standardizing advanced cryptographic techniques. Active individual drafts (i.e., not yet adopted by a working group), such as <a href="https://datatracker.ietf.org/doc/draft-google-cfrg-libzk/"><u>Longfellow</u></a> and Anonymous Credit Tokens (<a href="https://datatracker.ietf.org/doc/draft-schlesinger-cfrg-act/"><u>ACT</u></a>), and adopted drafts like Anonymous Rate-limited Credentials (<a href="https://datatracker.ietf.org/doc/draft-yun-privacypass-crypto-arc/"><u>ARC</u></a>), propose more flexible multi-show anonymous credentials that incorporate developments over the last several years. At IETF 117 in 2023, <a href="https://www.irtf.org/anrw/2023/slides-117-anrw-sessc-not-so-low-hanging-fruit-security-and-privacy-research-opportunities-for-ietf-protocols-00.pdf"><u>post-quantum anonymous credentials and deployable generic anonymous credentials were presented as a research opportunity</u></a>. Check out our <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>post on rate limiting agents</u></a> for details.</p><p>Before we get into the state of the art for PQ, allow us to try to crystallize a set of requirements for real-world applications.</p>
    <div>
      <h3>Requirements</h3>
      <a href="#requirements">
        
      </a>
    </div>
    <p>Given the diversity of use cases, adoption of ACs will be made easier by the fact that they can be built from a handful of powerful primitives. (More on this in our <a href="https://blog.cloudflare.com/private-rate-limiting/"><u>concurrent post</u></a>.) As we'll see in the next section, we don't yet have drop-in, PQ alternatives for these kinds of primitives. The "building blocks" of PQ ACs are likely to look quite different, and we're going to need to know something about what we're building towards.</p><p>For our purposes, we can think of an anonymous credential as a kind of fancy <a href="https://en.wikipedia.org/wiki/Blind_signature"><b><u>blind signature</u></b></a>. What's that, you ask? A blind signature scheme has two phases: <b>issuance</b>, in which the server signs a message chosen by the client; and <b>presentation</b>, in which the client reveals the message and the signature to the server. The scheme should be <b>unlinkable</b> in the sense that the server can't link any message and signature to the run of the issuance protocol in which it was produced. It should also be <b>unforgeable</b> in the sense that no client can produce a valid signature without interacting with the server.</p><p>The key difference between ACs and blind signatures is that, during presentation of an AC, the client only presents <i>part of the message</i> in plaintext; the rest of the message is kept secret. Typically, the message has three components:</p><ol><li><p>Private <b>state</b>, such as a counter that, for example, keeps track of the number of times the credential was presented. The client would prove to the server that the state is "valid", for example, a counter with value $0 \leq C \leq N$, without revealing $C$. In many situations, it's desirable to allow the server to update this state upon successful presentation, for example, by decrementing the counter. In the context of rate limiting, this is the number of requests remaining for the credential.</p></li><li><p>A random value called the <b>nullifier</b> that is revealed to the server during presentation. In rate-limiting, the nullifier prevents a user from spending a credential with a given state more than once.</p></li><li><p>Public <b>attributes</b> known to both the client and server that bind the AC to some application context. For example, this might represent the window of time in which the credential is valid (without revealing the exact time it was issued).</p></li></ol><p>Such ACs are well-suited for rate limiting requests made by the client. Here the idea is to prevent the client from making more than some maximum number of requests during the credential's lifetime. For example, if the presentation limit is 1,000 and the validity window is one hour, then a client can make up to 0.27 requests/second on average before it gets throttled.</p><p>It's usually desirable to enforce rate limits on a <b>per-origin</b> basis. This means that if the presentation limit is 1,000, then the client can make at most 1,000 requests to any website that can verify the credential. Moreover, it can do so safely, i.e., without breaking unlinkability across these sites.</p><p>The current generation of ACs being considered for standardization at IETF are only <b>privately verifiable,</b> meaning the server issuing the credential (the <b>issuer</b>) must share a private key with the server verifying the credential (the <b>origin</b>).
This will be sufficient for some deployment scenarios, but many will require <b>public verifiability</b>, where the origin only needs the issuer's public key. This is possible with BBS-based credentials, for example.</p><p>Finally, let us say a few words about round complexity. An AC is <b>round optimal</b> if issuance and presentation both complete in a single HTTP request and response. In our survey of PQ ACs, we found a number of papers that discovered neat tricks that reduce bandwidth (the total number of bits transferred between the client and server) at the cost of additional rounds. However, for use cases like ours, <b>round optimality</b> is an absolute necessity, especially for presentation. Not only do multiple rounds have a high impact on latency, they also make the implementation far more complex.</p><p>Within these constraints, our goal is to develop PQ ACs that have as low communication cost (i.e., bandwidth consumption) and runtime as possible in the context of rate-limiting.</p>
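<p>To make this structure concrete, here is a heavily simplified Python sketch (all names are hypothetical) of the three-part message and the origin-side presentation check. In a real AC, the counter is hidden inside a zero-knowledge proof and updated blindly; here everything is shown in the clear:</p><pre><code>from dataclasses import dataclass

PRESENTATION_LIMIT = 1_000  # N: presentations allowed per credential
# 1,000 presentations over a one-hour window is about 0.27 requests/second.

@dataclass
class CredentialMessage:
    counter: int      # private state: presentations remaining (never revealed)
    nullifier: bytes  # random value, revealed once per presentation
    attributes: dict  # public context, e.g. the credential's validity window

def present(msg: CredentialMessage, seen_nullifiers: set) -> bool:
    """Origin-side checks at presentation time. In a real AC, the counter
    check is a zero-knowledge proof that the counter lies in the range
    0..N, so the origin never learns its value."""
    if msg.nullifier in seen_nullifiers:
        return False  # nullifier reuse: a double-spend attempt
    if msg.counter >= 0 and PRESENTATION_LIMIT >= msg.counter:
        seen_nullifiers.add(msg.nullifier)
        return True
    return False</code></pre>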
    <div>
      <h2>"Ideal world" (PQ) anonymous credentials</h2>
      <a href="#ideal-world-pq-anonymous-credentials">
        
      </a>
    </div>
    <p>The academic community has produced a number of promising post-quantum ACs. In our survey of the state of the art, we evaluated several leading schemes, scoring them on their underlying primitives and performance to determine which are truly ready for the Internet. To understand the challenges, it is essential to first grasp the cryptographic building blocks used in ACs today. We’ll now discuss some of the core concepts that frequently appear in the field.</p>
    <div>
      <h3>Relevant cryptographic paradigms</h3>
      <a href="#relevant-cryptographic-paradigms">
        
      </a>
    </div>
    
    <div>
      <h4>Zero-knowledge proofs</h4>
      <a href="#zero-knowledge-proofs">
        
      </a>
    </div>
    <p>A zero-knowledge proof (ZKP) is a cryptographic protocol that allows a <i>prover</i> to convince a <i>verifier</i> that a statement is true without revealing the secret information, or <i>witness</i>. ZKPs play a central role in ACs: they allow proving statements about the secret part of the credential's state without revealing the state itself. This is achieved by transforming the statement into a mathematical representation, such as a set of polynomial equations over a finite field. The prover then generates a proof by performing complex operations on this representation, which can only be completed correctly if they possess the valid witness.</p><p>General-purpose ZKP systems, like <a href="https://eprint.iacr.org/2018/046"><u>Scalable Transparent Arguments of Knowledge (STARKs)</u></a>, can prove the integrity of <i>any</i> computation up to a certain size. In a STARK-based system, the computational trace is represented as a <i>set of polynomials</i>. The prover then constructs a proof by evaluating these polynomials and committing to them using cryptographic hash functions. The verifier can then perform a quick probabilistic check on this proof to confirm that the original computation was executed correctly. Since the proof itself is just a collection of hashes and sampled polynomial values, it is secure against quantum computers, providing a statistically sound guarantee that the claimed result is valid.</p>
    <div>
      <h4>Cut-and-Choose</h4>
      <a href="#cut-and-choose">
        
      </a>
    </div>
    <p>Cut-and-choose is a cryptographic technique designed to ensure a prover's honest behavior by having a verifier check a random subset of their work. The prover first commits to multiple instances of a computation, after which the verifier randomly chooses a portion to be <i>cut open</i> by revealing the underlying secrets for inspection. If this revealed subset is correct, the verifier gains high statistical confidence that the remaining, unopened instances are also correct.</p><p>This technique is important because while it is a generic tool used to build protocols secure against malicious adversaries, it also serves as a crucial case study. Its security is not trivial; for example, practical attacks on cut-and-choose schemes built with (post-quantum) homomorphic encryption have succeeded by <a href="https://eprint.iacr.org/2025/1890.pdf"><u>attacking the algebraic structure of the encoding</u></a>, not the encryption itself. This highlights that even generic constructions must be carefully analyzed in their specific implementation to prevent subtle vulnerabilities and information leaks.</p>
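<p>The pattern is easy to sketch. In the toy Python below (the computation itself is a hypothetical stand-in), the prover commits to 64 instances with hash commitments and the verifier opens a random half:</p><pre><code>import hashlib, os, random

def run_computation() -> bytes:
    # Stand-in for one instance of the prover's real work (hypothetical).
    return b"transcript-of-one-honest-instance"

def commit(value: bytes):
    """Hash commitment with a random 32-byte opening key."""
    key = os.urandom(32)
    return hashlib.sha3_256(key + value).digest(), key

N, OPEN = 64, 32

# Prover: commits to N independent instances of the computation.
instances = [run_computation() for _ in range(N)]
committed = [commit(inst) for inst in instances]

# Verifier: picks a random subset of instances to be "cut open".
for i in random.sample(range(N), OPEN):
    digest, key = committed[i]
    # Prover reveals the instance and its opening key; the verifier re-checks
    # the commitment (and, in a real protocol, re-runs the instance itself).
    assert hashlib.sha3_256(key + instances[i]).digest() == digest

# A prover who corrupted a single instance escapes detection only if that
# instance stays unopened (probability 1/2 here); real protocols choose
# parameters so cheating succeeds only with negligible probability.</code></pre>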
    <div>
      <h4>Sigma Protocols</h4>
      <a href="#sigma-protocols">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/draft-irtf-cfrg-sigma-protocols/01/"><u>Sigma protocols</u></a> follow a more structured approach that does not require us to throw away any computations. The <a href="https://pages.cs.wisc.edu/~mkowalcz/628.pdf"><u>three-move protocol</u></a> starts with a <i>commitment</i> phase, where the prover generates some randomness, combines it with the input to form the commitment, and sends the commitment to the verifier. Then, the verifier responds with an unpredictable <i>challenge</i>. To finish the proof, the prover provides a <i>response</i> in which they combine the initial randomness with the verifier’s challenge in a way that is only possible if the secret value, such as the solution to a discrete logarithm problem, is known.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ihEZ5KhWBQ0PZF5pTc0Bi/e35de03a89af0c2254bcc114041f6904/image4.png" />
          </figure><p><sup>Depiction of a Sigma protocol flow, where the prover commits to their witness $w$, the verifier challenges the prover to prove knowledge about $w$, and the prover responds with a mathematical statement that the verifier can either accept or reject.</sup></p><p>In practice, the prover and verifier don't run this interactive protocol. Instead, they make it non-interactive using a technique known as the <a href="https://link.springer.com/content/pdf/10.1007/3-540-47721-7_12.pdf"><u>Fiat-Shamir transformation</u></a>. The idea is that the prover generates the challenge <i>itself</i>, by deriving it from a hash of its own commitment. It may sound a bit odd, but it works quite well. In fact, it's the basis of Schnorr-style signatures like EdDSA, and even PQ signatures like ML-DSA.</p>
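<p>Here is a toy, non-interactive Schnorr-style proof of knowledge of a discrete logarithm that shows the Fiat-Shamir trick: the challenge is a hash of the statement and the commitment. The parameters are purely illustrative, not a secure implementation:</p><pre><code>import hashlib, secrets

# Toy parameters (illustration only): real deployments use standardized
# groups and constant-time implementations.
p = 2**255 - 19  # a well-known prime; g need not be a generator for this demo
g = 2

def H(*parts: int) -> int:
    data = b"".join(x.to_bytes(32, "big") for x in parts)
    return int.from_bytes(hashlib.sha3_256(data).digest(), "big")

# Prover's secret witness w and the public statement h = g^w mod p.
w = secrets.randbelow(p)
h = pow(g, w, p)

# Commitment: fresh randomness r, commitment a = g^r mod p.
r = secrets.randbelow(p)
a = pow(g, r, p)

# Fiat-Shamir: derive the challenge by hashing instead of asking a verifier.
c = H(g, h, a)

# Response: z = r + c*w, reduced modulo the group order (p - 1 here).
z = (r + c * w) % (p - 1)

# Verifier recomputes c and checks a single equation: g^z == a * h^c mod p.
assert pow(g, z, p) == (a * pow(h, c, p)) % p</code></pre>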
    <div>
      <h4>MPC in the head</h4>
      <a href="#mpc-in-the-head">
        
      </a>
    </div>
    <p>Multi-party computation (MPC) is a cryptographic tool that allows multiple parties to jointly compute a function over their inputs without revealing their individual inputs to the other parties. <a href="https://web.cs.ucla.edu/~rafail/PUBLIC/77.pdf"><u>MPC in the Head</u></a> (MPCitH) is a technique to generate zero-knowledge proofs by simulating a multi-party protocol <i>in the head</i> of the prover.</p><p>The prover simulates the state and communication for each virtual party, commits to these simulations, and shows the commitments to the verifier. The verifier then challenges the prover to open a subset of these virtual parties. Since MPC protocols are secure even if a minority of parties are dishonest, revealing this subset doesn't leak the secret, yet it convinces the verifier that the overall computation was correct. </p><p>This paradigm is particularly useful to us because it's a flexible way to build post-quantum secure ZKPs. MPCitH constructions build their security from symmetric-key primitives (like hash functions). This approach is also transparent, requiring no trusted setup. While STARKs share these post-quantum and transparent properties, MPCitH often offers faster prover times for many computations. Its primary trade-off, however, is that its proofs scale linearly with the size of the circuit to prove, while STARKs are succinct, meaning their proof size grows much slower.</p>
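<p>A heavily simplified Python sketch of the commit-and-open pattern follows (a real MPCitH proof also commits to and checks the simulated protocol messages, not just the shares):</p><pre><code>import hashlib, os, secrets

q = 2**61 - 1  # toy prime modulus
w = 1234567    # the prover's secret witness

# "In the head": split w into additive shares, one per simulated party.
n = 3
shares = [secrets.randbelow(q) for _ in range(n - 1)]
shares.append((w - sum(shares)) % q)
assert sum(shares) % q == w

# Commit to each simulated party's view.
keys = [os.urandom(32) for _ in range(n)]
coms = [hashlib.sha3_256(keys[i] + shares[i].to_bytes(8, "big")).digest()
        for i in range(n)]

# The verifier's challenge: open every party except one, chosen at random.
# The unopened share keeps w hidden, while the opened views let the
# verifier check that the simulation was run honestly.
hidden = secrets.randbelow(n)
for i in range(n):
    if i != hidden:
        d = hashlib.sha3_256(keys[i] + shares[i].to_bytes(8, "big")).digest()
        assert d == coms[i]</code></pre>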
    <div>
      <h4>Rejection sampling</h4>
      <a href="#rejection-sampling">
        
      </a>
    </div>
    <p>When a randomness source is biased or outputs numbers outside the desired range, rejection sampling can correct the distribution. For example, imagine you need a random number between 1 and 10, but your computer only gives you random numbers between 0 and 255. (Indeed, this is the case!) The rejection sampling algorithm calls the RNG until it outputs a number below 11 and above 0: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ogslPSn4DJYx3R5jGZ3mi/7ab640864dc26d6e1e2eb53c25f628ea/image6.png" />
          </figure><p>Calling the generator over and over again may seem a bit wasteful. An efficient implementation can be realized with an eXtendable Output Function (XOF). A XOF takes an input, for example a seed, and computes an arbitrarily-long output. Examples are the SHAKE family (part of the <a href="https://csrc.nist.gov/pubs/fips/202/final"><u>SHA3 standard</u></a>) and the recently proposed round-reduced version of SHAKE called <a href="https://datatracker.ietf.org/doc/rfc9861/"><u>TurboSHAKE</u></a>.</p><p>Let’s imagine you want to have three numbers between 1 and 10. Instead of calling the XOF over and over, you can also ask the XOF for several bytes of output. Since each byte has a probability of about 3.9% (10/256) of being in range, asking the XOF for 174 bytes is enough to have a better than 95% chance of finding at least three usable numbers. In fact, we can be even smarter than this: 10 fits in four bits, so we can split the output bytes into lower and higher <a href="https://en.wikipedia.org/wiki/Nibble"><u>nibbles</u></a>. The probability of a nibble being in the desired range is now 62.5% (10/16):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4W98tjgA7gIkaM7A5LBMyi/7b12bbfd22e53b84439a7c9e690605d9/image2.png" />
          </figure><p><sup>Rejection sampling by batching queries. </sup></p><p>Rejection sampling is part of many cryptographic primitives, including several of the schemes we look at below.</p>
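<p>The whole batching trick fits in a few lines of Python, using SHAKE128 from the standard library as the XOF (the seed and output amounts here are arbitrary):</p><pre><code>import hashlib

def sample_in_range(seed: bytes, count: int) -> list[int]:
    """Rejection-sample `count` numbers between 1 and 10 from a XOF.
    Each byte is split into two nibbles; a nibble (0..15) lands in
    range with probability 10/16, so little output is wasted."""
    out = []
    stream = hashlib.shake_128(seed).digest(64)  # one generous squeeze
    for byte in stream:
        for nibble in (byte >> 4, byte % 16):
            if nibble >= 1 and 10 >= nibble:  # accept; otherwise reject
                out.append(nibble)
                if len(out) == count:
                    return out
    raise RuntimeError("unlucky stream: squeeze more output and retry")

print(sample_in_range(b"example-seed", 3))</code></pre>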
    <div>
      <h3>Building post-quantum ACs</h3>
      <a href="#building-post-quantum-acs">
        
      </a>
    </div>
    <p>Classical anonymous credentials (ACs), such as ARC and ACT, are built from algebraic groups, specifically elliptic curves, which are very efficient. Their security relies on the assumption that certain mathematical problems over these groups are computationally hard. The premise of post-quantum cryptography, however, is that quantum computers can solve these supposedly hard problems. The most intuitive solution is to replace elliptic curves with a post-quantum alternative. In fact, cryptographers have been working on a replacement for a number of years: <a href="https://eprint.iacr.org/2018/383"><u>CSIDH</u></a>. </p><p>This raises the key question: can we simply adapt a scheme like ARC by replacing its elliptic curves with CSIDH? The short answer is <b>no</b>, due to a critical roadblock in constructing the necessary zero-knowledge proofs. While we can, in theory, <a href="https://eprint.iacr.org/2023/1614"><u>build the required Sigma protocols or MPC-in-the-Head (MPCitH) proofs from CSIDH</u></a>, they have a prerequisite that makes them unusable in practice: they require a <b>trusted setup</b> to ensure the prover cannot cheat. This requirement is a non-starter, as <a href="https://eprint.iacr.org/2022/518"><u>no algorithm for performing a trusted setup in CSIDH exists</u></a>. The trusted setup for Sigma protocols can be replaced by a combination of <a href="https://eprint.iacr.org/2016/505"><u>generic techniques from multi-party computation</u></a> and cut-and-choose protocols, but that adds significant computational cost to the already expensive isogeny operations.</p><p>This specific difficulty highlights a more general principle. The high efficiency of classical credentials like ARC is deeply tied to the rich algebraic structure of elliptic curves. Swapping this component for a post-quantum alternative, or moving to generic constructions, fundamentally alters the design and its trade-offs. We must therefore accept that post-quantum anonymous credentials cannot be a simple "lift-and-shift" of today's schemes. They will require new designs built from different cryptographic primitives, such as lattices or hash functions.</p>
    <div>
      <h3>Prefabricated schemes from generic approaches</h3>
      <a href="#prefabricated-schemes-from-generic-approaches">
        
      </a>
    </div>
    <p>At Cloudflare, we explored a <a href="https://eprint.iacr.org/2023/414"><u>post-quantum privacy pass construction in 2023</u></a> that closely resembles the functionality needed for anonymous credentials. The main result is a generic construction that composes separate, quantum-secure building blocks: a digital signature scheme and a general-purpose ZKP system:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4dpmFzSv7HG5JHEEqu7D9o/ea1f02c37c0e36dc0972dfd1044fa9a3/image8.png" />
          </figure><p>The figure shows a cryptographic protocol divided into two main phases: (1.) Issuance: The user commits to a message (without revealing it) and sends the commitment to the server. The server signs the commitment and returns this signed commitment, which serves as a token. The user verifies the server's signature. (2.) Redemption: To use the token, the user presents it and constructs a proof. This proof demonstrates they have a valid signature on the commitment and opens the commitment to reveal the original message. If the server validates the proof, the user and server continue (e.g., to access a rate-limited origin).</p><p>The main appeal of this modular design is its flexibility. The experimental <a href="https://github.com/guruvamsi-policharla/zkdilithium"><u>implementation</u></a> uses a modified version of the ML-DSA signature scheme together with STARKs, but the components can easily be swapped out. The design provides strong, composable security guarantees derived directly from the underlying parts. A significant speedup for the construction came from replacing the hash function SHA3 in ML-DSA with the zero-knowledge-friendly <a href="https://eprint.iacr.org/2019/458"><u>Poseidon</u></a>.</p><p>However, the modularity of our post-quantum Privacy Pass construction <a href="https://zkdilithium.cloudflareresearch.com/index.html"><u>incurs a significant performance overhead</u></a>, demonstrated by a clear trade-off between proof generation time and size: a fast 300 ms proof generation requires a large 173 kB signature, while a 4.8 s proof generation time cuts the size of the signature nearly in half. A balanced parameter set, which serves as a good benchmark for any dedicated solution to beat, took 660 ms to sign and resulted in a 112 kB signature. The implementation is currently a proof of concept, with perhaps some room for optimization. Alternatively, a different signature like <a href="https://datatracker.ietf.org/doc/draft-ietf-cose-falcon/"><u>FN-DSA</u></a> could offer speed improvements: while its issuance is more complex, its verification is far more straightforward, boiling down to a simple hash-to-lattice computation and a norm check.</p><p>Still, while this construction gives a functional baseline, these figures highlight the performance limitations for a real-time rate-limiting system, where every millisecond counts. The 660 ms signing time strongly motivates the development of <i>dedicated</i> cryptographic constructions that trade some of the modularity for performance.</p>
    <div>
      <h3>Solid structure: Lattices</h3>
      <a href="#solid-structure-lattices">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/lattice-crypto-primer/"><u>Lattices</u></a> are a natural starting point when discussing potential post-quantum AC candidates. NIST standardized ML-DSA and ML-KEM as signature and KEM algorithms, both of which are based on lattices. So, are lattices the answer to post-quantum anonymous credentials?</p><p>The answer is a bit nuanced. While explicit anonymous credential schemes from lattices exist, they have shortcomings that prevent real-world deployment: for example, a <a href="https://eprint.iacr.org/2023/560.pdf"><u>recent scheme</u></a> sacrifices round-optimality for smaller communication size, which is unacceptable for a service like Privacy Pass where every second counts. Given that our RTT is 100 ms or less for the majority of users, each extra communication round adds tangible latency, especially for those on slower Internet connections. When the final credential size is still over 100 kB, the trade-offs are hard to justify. So, our search continues. We expand our horizon by looking into <i>blind signatures</i> and whether we can adapt them for anonymous credentials.</p>
    <div>
      <h4>Two-step approach: Hash-and-sign</h4>
      <a href="#two-step-approach-hash-and-sign">
        
      </a>
    </div>
    <p>A prominent paradigm in lattice-based signatures is the <i>hash-and-sign</i> construction. Here, the message is first hashed to a point in the lattice. Then, the signer uses their secret key, a <a href="https://eprint.iacr.org/2007/432"><u>lattice trapdoor</u></a>, to generate a short vector that, when multiplied with the public key, evaluates to the hashed point in the lattice. This is the core mechanism behind signature schemes like FN-DSA.</p>
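    <p>In symbols, the relation looks roughly as follows (a sketch of the standard GPV-style construction): the public key is a matrix $A$, and a signature on a message $m$ is a short vector $s$ satisfying $A \cdot s = H(m) \bmod q$. Anyone can check the equation and that $\|s\|$ is small; producing such a short preimage without the trapdoor is presumed hard, even for a quantum attacker.</p>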
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66hA0KmluGoGO4I2SHAGTv/1a465c6c810e4f17df3112b96ed816da/image1.png" />
          </figure><p>Adapting hash-and-sign for blind signatures is tricky, since the signer may not learn the message. This introduces a significant security challenge: if the user can request signatures on arbitrary points, they can mount an attack to extract the trapdoor by repeatedly requesting signatures on carefully chosen points. The resulting short vectors can be used to reconstruct a short basis, which is equivalent to recovering the key. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1lyCHqOTL477mFGSWjH3dv/48ffe46acfbe81b692c2ba30f383634b/image9.png" />
          </figure><p>The standard defense against this attack is to require the user to prove in zero-knowledge that the point they are asking to be signed is the blinded output of the specified hash function. However, proving hash preimages leads to the same problem as in the generic post-quantum Privacy Pass paper: proving a conventional hash function (like SHA3) inside a ZKP is computationally expensive and has a large communication complexity.</p><p>This difficult trade-off is at the heart of recent academic work. The <a href="https://eprint.iacr.org/2023/077.pdf"><u>state-of-the-art paper</u></a> presents two lattice-based blind signature schemes with small sizes: 22 kB for a signature and 48 kB for a privately verifiable protocol that may be more useful in a setting like anonymous credentials. However, this focus on the final signature size comes at the cost of an impractical <i>issuance</i>. The user must provide ZKPs for the correct hash and lattice relations that, by the paper’s own analysis, can add up to <i>several hundred kilobytes</i> and take <i>20 seconds to generate and 10 seconds to verify</i>.</p><p>While these results are valuable for advancing the field, this trade-off is a significant barrier for any large-scale, practical system. For our use case, a protocol that increases the final signature size moderately in exchange for a more efficient and lightweight issuance process would be a more suitable and promising direction.</p>
    <div>
      <h4>Best of two signatures: Hash-and-sign with aborts</h4>
      <a href="#best-of-two-signatures-hash-and-sign-with-aborts">
        
      </a>
    </div>
    <p>A promising technique for blind signatures combines the hash-and-sign paradigm with <i>Fiat-Shamir with aborts</i>, a method that relies on rejection sampling. In this approach, the signer repeatedly attempts to generate a signature and aborts any result that may leak information about the secret key. This process ensures the final signature is statistically independent of the key and is used in modern signatures like ML-DSA. The <a href="https://eprint.iacr.org/2014/1027"><u>Phoenix signature</u></a> scheme uses <i>hash-and-sign with aborts</i>, where a message is first hashed into the lattice and signed, with rejection sampling employed to break the dependency between the signature and the private key.</p><p>Building on this foundation is an <a href="https://eprint.iacr.org/2024/131"><u>anonymous credential scheme for hash-and-sign with aborts</u></a>. The main improvement over hash-and-sign anonymous credentials is that, instead of proving the validity of a hash, the user commits to their attributes, which avoids costly zero-knowledge proofs.</p><p>The scheme is <a href="https://github.com/Chair-for-Security-Engineering/lattice-anonymous-credentials"><u>fully implemented</u></a>, with credentials with attribute proofs just under 80 kB and signatures under 7 kB. The scheme takes less than 400 ms for issuance and 500 ms for showing the credential. The protocol also has a lot of the features necessary for anonymous credentials, allowing users to prove relations between attributes and request pseudonyms for different instances.</p><p>This research presents a compelling step towards real-world deployability by combining state-of-the-art techniques to achieve a much healthier balance between performance and security. While the underlying mathematics are a bit more complex, with a proof of knowledge of a signature at 40 kB and a prover time under a second, the scheme stands out as a great contender. However, for practical deployment, these figures would likely need a significant speedup to be usable in real-time systems. An improvement seems plausible given recent <a href="https://eprint.iacr.org/2024/1952"><u>advances in lattice samplers</u></a>, though the exact scale achievable is unclear. Still, we think it would be worthwhile to nudge the underlying design paradigm a little closer to our use cases.</p>
    <div>
      <h3>Do it yourself: MPC-in-the-head </h3>
      <a href="#do-it-yourself-mpc-in-the-head">
        
      </a>
    </div>
    <p>While the lattice-based hash-and-sign with aborts scheme provides one path to post-quantum signatures, an alternative approach is emerging from the MPCitH variant VOLE-in-the-Head <a href="https://eprint.iacr.org/2023/996"><u>(VOLEitH)</u></a>. </p><p>This scheme builds on <a href="https://eprint.iacr.org/2017/617"><u>Vector Oblivious Linear Evaluation (VOLE)</u></a>, an interactive protocol where one party's input vector is combined with another party's secret value <i>delta</i>, creating a <i>correlation</i>. This VOLE correlation is used as a cryptographic commitment to the prover’s input. The system provides a zero-knowledge proof because the prover is bound by this correlation and cannot forge a solution without knowing the secret delta. The verifier, in turn, just has to check that the final equation holds when the commitment is opened. The commitment is <i>linearly homomorphic</i>, which means that two commitments can be combined into a commitment to the sum of their values. This property is ideal for the <i>commit-and-prove</i> paradigm, where the prover first commits to the witnesses and then proves the validity of the circuit gate by gate. The primary trade-off is that the proofs are linear in the size of the circuit, but they offer substantially better runtimes. (We also use linear-sized proofs for ARC and ACT.)</p>
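            <p>A minimal sketch may help make the correlation concrete. The following toy is our own illustration, not code from the VOLEitH papers: real instantiations work over different fields and set up the shares with oblivious transfer, so the prover never learns <i>delta</i>. It shows how shares satisfying w = u·delta + v act as a linearly homomorphic commitment:</p>
            <pre><code>// Toy VOLE-style commitment over a prime field (illustrative only).
const P = 2n ** 61n - 1n;                 // a small Mersenne prime
const mod = (x) => ((x % P) + P) % P;
const randField = () =>
  mod(BigInt(Math.floor(Math.random() * Number.MAX_SAFE_INTEGER)));

const delta = randField();                // verifier's secret correlation key

// "Commit" to u: the prover keeps (u, v), the verifier keeps w = u*delta + v.
function commit(u) {
  const v = randField();                  // prover's random mask
  return { prover: { u, v }, verifier: { w: mod(u * delta + v) } };
}

// Linear homomorphism: adding the shares component-wise yields a valid
// commitment to u1 + u2, with no further interaction between the parties.
function addCommitments(c1, c2) {
  return {
    prover: { u: mod(c1.prover.u + c2.prover.u),
              v: mod(c1.prover.v + c2.prover.v) },
    verifier: { w: mod(c1.verifier.w + c2.verifier.w) },
  };
}

// Opening check: the relation w == u*delta + v binds the prover to u.
const isValid = ({ prover: { u, v }, verifier: { w } }) =>
  w === mod(u * delta + v);

const c = addCommitments(commit(3n), commit(5n));
console.log(isValid(c), c.prover.u === 8n); // true true</code></pre>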
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6o073F0y7J7RxxHuDb4BSY/1ac0c4fc8b154dd77a8d3294016cbd32/image4.png" />
          </figure><p><sup>Example of evaluating a circuit gate by first committing to each wire and then proving the composition. This is easy for linear gates.</sup></p><p>This commit-and-prove approach allows <a href="https://link.springer.com/chapter/10.1007/978-3-031-91134-7_14"><u>VOLEitH</u></a> to efficiently prove the evaluation of symmetric ciphers, which are quantum-resistant. The transformation to a non-interactive protocol follows the standard MPCitH method: the prover commits to all secret values, a challenge is used to select a subset to reveal, and the prover proves consistency.</p><p>Efficient implementations operate over two mathematical fields (binary and prime) simultaneously, allowing these ZK circuits to handle both arithmetic and bitwise operations (like XORs) efficiently. Building on this foundation, a <a href="https://www.youtube.com/watch?v=VMeaF9xgbcw"><u>recent talk</u></a> teased the potential for blind signatures from the multivariate quadratic signature scheme <a href="https://pqmayo.org/about/"><u>MAYO</u></a> with sizes of just 7.5 kB and signing/verification times under 50 ms.</p><p>The VOLEitH approach, as a general-purpose proof system, represents a promising new direction for performant constructions. There are a <a href="https://pqc-mirath.org"><u>number</u></a> <a href="https://mqom.org"><u>of</u></a> <a href="https://pqc-perk.org"><u>competing</u></a> <a href="https://sdith.org"><u>in-the-head</u></a> schemes in the <a href="https://csrc.nist.gov/projects/pqc-dig-sig"><u>NIST competition for additional signature schemes</u></a>, including <a href="https://faest.info/authors.html"><u>one based on VOLEitH</u></a>. The current VOLEitH literature focuses on high-performance digital signatures, and an explicit construction for a full anonymous credential system has not yet been proposed. This means that features standard to ACs, such as multi-show unlinkability or the ability to prove relations between attributes, are not yet part of the design, whereas they are explicitly supported by the lattice construction. However, the preliminary results show great potential for performance, and it will be interesting to follow the continued cryptanalysis and feature development in this line of work, especially since a general-purpose construction makes adding features comparatively easy.
</p><table><tr><td><p><b>Approach</b></p></td><td><p><b>Pros</b></p></td><td><p><b>Cons</b></p></td><td><p><b>Practical Viability</b></p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2023/414"><u>Generic Composition</u></a></p></td><td><p>Flexible construction, strong security</p></td><td><p>Large signatures (112 kB), slow (660 ms)</p></td><td><p>Low: performance is not great</p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2023/077.pdf"><u>Hash-and-sign</u></a></p></td><td><p>Potentially tiny signatures, lots of optimization potential</p></td><td><p>Current implementation large and slow</p></td><td><p>Low: performance is not great</p></td></tr><tr><td><p><a href="https://eprint.iacr.org/2024/131"><u>Hash-and-sign with aborts</u></a></p></td><td><p>Full AC system, good balance in communication</p></td><td><p>Slow runtimes (1 s)</p></td><td><p>Medium: promising, but performance would need to improve</p></td></tr><tr><td><p><a href="https://www.youtube.com/watch?v=VMeaF9xgbcw"><u>VOLEitH</u></a></p></td><td><p>Excellent potential performance (&lt;50 ms, 7.5 kB)</p></td><td><p>Not a full AC system, not yet peer-reviewed</p></td><td><p>Medium: promising research direction, but no full solution available so far</p></td></tr></table>
    <div>
      <h2>Closing the gap</h2>
      <a href="#closing-the-gap">
        
      </a>
    </div>
    <p>My (that is Lena's) internship focused on a critical question: what should we look at next to build ACs for the Internet? For us, "the right direction" means developing protocols that can be integrated with real-world applications and developed collaboratively at the IETF. To make these a reality, we need researchers to look beyond blind signatures; we need a complete privacy-preserving protocol that combines blind signatures with efficient zero-knowledge proofs and properties like multi-show credentials that carry an internal state. Issuance should also be sublinear in communication size with respect to the number of presentations.</p><p>So, with the transition to post-quantum cryptography on the horizon, what are our thoughts on the current IETF proposals? A 2022 NIST presentation on the state of anonymous credentials states that <a href="https://csrc.nist.gov/csrc/media/Presentations/2022/stppa4-revoc-decent/images-media/20221121-stppa4--baldimtsi--anon-credentials-revoc-decentral.pdf"><u>efficient post-quantum secure solutions are basically non-existent</u></a>. We argue that the last three years have shown promising developments in lattice- and MPCitH-based anonymous credentials, but efficient post-quantum protocols still need work. Moving protocols into a post-quantum world isn't just a matter of swapping out old algorithms for new ones. A common approach to constructing post-quantum versions of classical protocols is to swap out the building blocks for their quantum-secure counterparts. </p><p>We believe this approach is essential, but not forward-looking. In addition to identifying how modern concerns can be accommodated by old cryptographic designs, we should be building new, post-quantum-native protocols.</p><ul><li><p>For ARC, the conceptual path to a post-quantum construction seems relatively straightforward. The underlying cryptography follows a similar structure to the lattice-based anonymous credentials or, when accepting a protocol with fewer features, the <a href="https://eprint.iacr.org/2023/414"><u>generic post-quantum Privacy Pass</u></a> construction. However, we need to support per-origin rate-limiting, which allows a token to be transformed at an origin without anyone being able to link the redemption to redemptions at other origins; none of the post-quantum anonymous credential protocols or blind signatures support this. Also, ARC is sublinear in communication size with respect to the number of tokens issued, which so far only the hash-and-sign with aborts lattice scheme achieves, although the notion of “limited shows” is not present in the current proposal. In addition, it would be great to see efficient implementations, especially for blind signatures, as well as work on efficient zero-knowledge proofs. </p></li><li><p>For ACT, we need the protocols for ARC plus an additional state. Even for the simplest counter, we need the ability to homomorphically subtract from the balance within the credential itself, which is a much more complex cryptographic requirement. It would also be interesting to see a post-quantum double-spend prevention mechanism that enforces the sequential nature of ACT. </p></li></ul><p>Working on ACs and other privacy-preserving cryptography inevitably leads to a major bottleneck: efficient zero-knowledge proofs, or, to be more exact, efficiently proving hash function evaluations. In a ZK circuit, multiplications are expensive. Each wire in the circuit that performs a multiplication requires a cryptographic commitment, which adds communication overhead. In contrast, other operations, like XOR over a binary field, can be virtually “free” (see the sketch below).</p>
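            <p>As a back-of-the-envelope illustration (our own toy arithmetic, not tied to any particular proof system): XOR is only cheap when the field matches the operation. Over a prime field, a single 64-bit XOR forces the prover to commit to every bit:</p>
            <pre><code>// Rough constraint count for one 64-bit XOR in an arithmetic circuit
// over a prime field (illustrative numbers).
const BITS = 64;

// Each input word must be split into bits, and each bit b needs one
// multiplication constraint b * (b - 1) = 0 to prove it is 0 or 1.
const booleanityChecks = 2 * BITS;

// Each output bit is computed as a XOR b = a + b - 2*a*b: one more
// multiplication per bit.
const xorMultiplications = BITS;

console.log(booleanityChecks + xorMultiplications); // 192 multiplications

// Over a binary field, the same XOR is a single field addition and thus
// essentially free -- which is why systems like VOLEitH work over two
// fields at once, matching the field to the workload.</code></pre>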
<p>This makes a huge difference in performance: SHAKE (the primitive used in ML-DSA) can be orders of magnitude slower inside a ZKP than arithmetization-friendly hash functions. This is why researchers and implementers are already using <a href="https://eprint.iacr.org/2019/458"><u>Poseidon</u></a> or <a href="https://eprint.iacr.org/2023/323"><u>Poseidon2</u></a> to make their protocols faster.</p><p>Currently, <a href="https://www.poseidon-initiative.info/"><u>Ethereum</u></a> is <a href="https://x.com/VitalikButerin/status/1894681713613164888"><u>seriously considering migrating to the Poseidon hash</u></a> and has called for cryptanalysis, but there is no indication of standardization. This is a problem: papers increasingly use different instantiations of Poseidon to fit their use case, and there <a href="https://eprint.iacr.org/2016/492"><u>are</u></a> <a href="https://eprint.iacr.org/2023/323"><u>more</u></a> <a href="https://eprint.iacr.org/2022/840"><u>and</u></a> <a href="https://eprint.iacr.org/2025/1893"><u>more</u></a> <a href="https://eprint.iacr.org/2025/926"><u>zero</u></a>-<a href="https://eprint.iacr.org/2020/1143"><u>knowledge</u></a>-<a href="https://eprint.iacr.org/2019/426"><u>friendly</u></a> <a href="https://eprint.iacr.org/2023/1025"><u>hash</u></a> <a href="https://eprint.iacr.org/2021/1038"><u>functions</u></a> <a href="https://eprint.iacr.org/2022/403"><u>coming</u></a> <a href="https://eprint.iacr.org/2025/058"><u>out</u></a>, tailored to different use cases. We would like to see at least one XOF and one hash each for a prime field and for a binary field, ideally at multiple security levels. And is Poseidon actually the best ZK-friendly cipher, or just the most well-known? Is it secure against quantum computers (as we believe AES to be), and are there attacks beyond the <a href="https://eprint.iacr.org/2025/950"><u>recent</u></a> <a href="https://eprint.iacr.org/2025/937"><u>attacks</u></a> on round-reduced versions?</p><p>Looking at algebra and zero-knowledge brings us to a fundamental debate in modern cryptography. Imagine a line representing the spectrum of research: on one end, you have protocols built on very well-analyzed standard assumptions, like the <a href="https://blog.cloudflare.com/lattice-crypto-primer/#breaking-lattice-cryptography-by-finding-short-vectors"><u>SIS problem</u></a> on lattices or the collision resistance of SHA-3. On the other end, you have protocols that gain massive efficiency by using more algebraic structure, which in turn relies on newer, stronger cryptographic assumptions. Breaking novel hash functions sits somewhere in the middle. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BMtbDoVnrmKeTvhCyfOjK/616438127351eedf6ff41db282a0511e/image7.png" />
          </figure><p>The answer for the Internet can’t just be to retreat to the left end of our graph and stay safe. For the ecosystem to move forward, we need to have confidence in both ends of the spectrum. We need more research to validate the security of ZK-friendly primitives like Poseidon, and we need more scrutiny of the stronger assumptions that enable efficient algebraic methods.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As we’ve explored, the cryptographic properties that make classical ACs efficient, particularly the rich structure of elliptic curves, do not have direct post-quantum equivalents. Our survey of the state of the art, from generic compositions using STARKs to various lattice-based schemes and promising new directions like MPC-in-the-head, reveals a field full of potential but with no clear winner. The trade-offs between communication cost, computational cost, and protocol rounds remain a significant barrier to practical, large-scale deployment, especially in comparison to elliptic curve constructions.</p><p>To bridge this gap, we must move beyond simply building post-quantum blind signatures. We challenge our colleagues in academia and industry to develop complete, post-quantum-native protocols that address real-world needs. This includes supporting essential features like the per-origin rate-limiting required for ARC or the complex stateful credentials needed for ACT.</p><p>A critical bottleneck for all these approaches is the lack of efficient, standardized, and well-analyzed zero-knowledge-friendly hash functions. We need to research zero-knowledge-friendly primitives and build industry-wide confidence in them to enable efficient post-quantum privacy.</p><p>If you’re working on these problems, or you have experience in the management and deployment of classical credentials, now is the time to engage. The world is rapidly adopting credentials for everything from digital identity to bot management, and it is our collective responsibility to ensure these systems are private and secure for a post-quantum future. One thing is certain: there are more discussions to be had. And if you’re interested in helping to build this more secure and private digital world, we’re hiring 1,111 interns over the course of next year, and have open positions!</p> ]]></content:encoded>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[IETF]]></category>
            <category><![CDATA[European Union]]></category>
            <category><![CDATA[Elliptic Curves]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">JA04hlqr6TaeGhkvyutbt</guid>
            <dc:creator>Lena Heimberger</dc:creator>
            <dc:creator>Christopher Patton</dc:creator>
        </item>
        <item>
            <title><![CDATA[Securing agentic commerce: helping AI Agents transact with Visa and Mastercard]]></title>
            <link>https://blog.cloudflare.com/secure-agentic-commerce/</link>
            <pubDate>Fri, 24 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is partnering with Visa and Mastercard to help secure the future of agentic commerce. ]]></description>
            <content:encoded><![CDATA[ <p>The era of agentic commerce is coming, and it brings with it significant new challenges for security. That’s why Cloudflare is partnering with Visa and Mastercard to help secure automated commerce as AI agents search, compare, and purchase on behalf of consumers.</p><p>Through our collaboration, Visa developed the <a href="https://github.com/visa/trusted-agent-protocol"><u>Trusted Agent Protocol</u></a> and Mastercard developed <a href="https://www.mastercard.com/us/en/business/artificial-intelligence/mastercard-agent-pay.html"><u>Agent Pay</u></a> to help merchants distinguish legitimate, approved agents from malicious bots. Both Trusted Agent Protocol and Agent Pay leverage <a href="https://blog.cloudflare.com/web-bot-auth/"><u>Web Bot Auth</u></a> as the agent authentication layer to allow networks like Cloudflare to verify traffic from AI shopping agents that register with a payment network.</p>
    <div>
      <h2>The challenges with agentic commerce</h2>
      <a href="#the-challenges-with-agentic-commerce">
        
      </a>
    </div>
    <p>Agentic commerce is commerce driven by AI agents. As AI agents execute more transactions, merchants need to protect themselves and maintain trust with their customers. Merchants are beginning to see the promise of agentic commerce but face significant challenges: </p><ul><li><p>How can they distinguish a helpful, approved AI shopping agent from a malicious bot or web crawler? </p></li><li><p>Is the agent representing a known, repeat customer or someone entirely new? </p></li><li><p>Are there particular instructions the consumer gave to their agent that the merchant should respect?</p></li></ul><p>We are working with Visa and Mastercard, two of the most trusted consumer brands in payments, to address each of these challenges. </p>
    <div>
      <h2>Web Bot Auth is the foundation to securing agentic commerce</h2>
      <a href="#web-bot-auth-is-the-foundation-to-securing-agentic-commerce">
        
      </a>
    </div>
    <p>In May, we shared a new proposal called <a href="https://blog.cloudflare.com/web-bot-auth/"><u>Web Bot Auth</u></a> to cryptographically authenticate agent traffic. Historically, agent traffic has been classified using the user agent and IP address. However, these fields can be spoofed, leading to inaccurate classifications and incorrectly applied bot mitigations. Web Bot Auth allows an agent to provide a stable identifier by using <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>HTTP Message Signatures</u></a> with public key cryptography.</p><p>As we spent time collaborating with the teams at Visa and Mastercard, we found that we could use Web Bot Auth as the foundation to ensure that each commerce agent request is verifiable, time-bound, and non-replayable.</p><p>Visa’s Trusted Agent Protocol and Mastercard’s Agent Pay present three key solutions for merchants to manage agentic commerce transactions. First, merchants can identify a registered agent and distinguish whether a particular interaction is intended to browse or to pay. Second, merchants can link an agent to a consumer identity. Last, merchants can indicate to agents how a payment is expected, whether that is through a network token, browser-use guest checkout, or a micropayment.</p><p>This allows merchants that integrate with these protocols to instantly recognize a trusted agent during two key interactions: the initial browsing phase, to determine product details and final costs, and the final payment interaction, to complete a purchase. Ultimately, this provides merchants with the tools to verify these signatures, identify trusted interactions, and securely manage how these agents can interact with their site.</p>
    <div>
      <h2>How it works: leveraging HTTP message signatures </h2>
      <a href="#how-it-works-leveraging-http-message-signatures">
        
      </a>
    </div>
    <p>To make this work, an ecosystem of participants needs to be on the same page. It all starts with <i>agent developers</i>, who build the agents that shop on behalf of consumers. These agents then interact with <i>merchants</i>, who need a reliable way to assess whether a request is made on behalf of a consumer. Merchants rely on networks like Cloudflare to verify the agent's cryptographic signatures and ensure the interaction is legitimate. Finally, there are payment networks like Visa and Mastercard, who can link cardholder identity to agentic commerce transactions, helping ensure that transactions are verifiable and accountable.</p><p>When developing their protocols, Visa and Mastercard needed a secure way to authenticate each agent developer and securely transmit information from the agent to the merchant’s website. That’s where we came in and worked with their teams to build upon Web Bot Auth. The <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>Web Bot Auth</u></a> proposals specify how developers of bots and agents can attach cryptographic signatures to HTTP requests by using <a href="https://www.rfc-editor.org/rfc/rfc9421"><u>HTTP Message Signatures</u></a>. </p><p>Both Visa’s and Mastercard’s protocols require agents to register and publish their public keys (referenced as the <code>keyid</code> in the <code>Signature-Input</code> header) in a well-known directory, allowing merchants and networks to fetch the keys to validate these HTTP Message Signatures. To start, Visa and Mastercard will be hosting their own directories for Visa-registered and Mastercard-registered agents, respectively.</p><p>Registered agents then communicate their registration, identity, and payment details to the merchant using these HTTP Message Signatures. Both protocols build on Web Bot Auth by introducing a new tag that agents must supply in the <code>Signature-Input</code> header, which indicates whether the agent is browsing or purchasing. Merchants can use the tag to determine whether to interact with the agent. Agents must also include the <code>nonce</code> field, a unique sequence included in the signature, to protect against replay attacks.</p><p>An agent visiting a merchant’s website to browse a catalog would include an HTTP Message Signature in its request to verify that the agent is authorized to browse the merchant’s storefront on behalf of a specific Visa cardholder:</p>
            <pre><code>GET /path/to/resource HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 Chrome/113.0.0 MyShoppingAgent/1.1
Signature-Input: 
  sig2=("@authority" "@path"); 
  created=1735689600; 
  expires=1735693200; 
  keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U"; 
  alg="Ed25519"; 
  nonce="e8N7S2MFd/qrd6T2R3tdfAuuANngKI7LFtKYI/vowzk4IAZyadIX6wW25MwG7DCT9RUKAJ0qVkU0mEeLEIW1qg=="; 
  tag="web-bot-auth"
Signature: sig2=:jdq0SqOwHdyHr9+r5jw3iYZH6aNGKijYp/EstF4RQTQdi5N5YYKrD+mCT1HA1nZDsi6nJKuHxUi/5Syp3rLWBA==:</code></pre>
            <p>Trusted Agent Protocol and Agent Pay are designed so that merchants can benefit from their validation mechanisms without changing their infrastructure. Instead, merchants can set the rules for agent interactions on their site and rely upon Cloudflare as the validator. For these requests, Cloudflare will run <a href="https://blog.cloudflare.com/verified-bots-with-cryptography/#message-signature-verification-for-origins"><u>the following checks</u></a>:</p><ol><li><p>Confirm the presence of the <code>Signature-Input</code> and <code>Signature</code> headers.</p></li><li><p>Pull the <code>keyid</code> from the <code>Signature-Input</code>. If Cloudflare has not previously retrieved and cached the key, fetch it from the public key directory.</p></li><li><p>Confirm the current time falls between the <code>created</code> and <code>expires</code> timestamps.</p></li><li><p>Check <code>nonce</code> uniqueness in the cache. By checking whether a nonce has been recently used, Cloudflare can reject reused or expired signatures, ensuring the request is not a malicious copy of a prior, legitimate interaction.</p></li><li><p>Check the validity of the <code>tag</code>, as defined by the protocol. If the agent is browsing, the tag should be <code>agent-browser-auth</code>. If the agent is paying, the tag should be <code>agent-payer-auth</code>. </p></li><li><p>Reconstruct the canonical <a href="https://www.rfc-editor.org/rfc/rfc9421#name-creating-the-signature-base"><u>signature base</u></a> using the <a href="https://www.rfc-editor.org/rfc/rfc9421#covered-components"><u>components</u></a> from the <code>Signature-Input</code> header. </p></li><li><p>Perform the cryptographic <a href="https://www.rfc-editor.org/rfc/rfc9421#name-eddsa-using-curve-edwards25"><u>Ed25519 signature verification</u></a> using the key referenced by <code>keyid</code>.</p></li></ol><p>Here is an <a href="https://github.com/visa/trusted-agent-protocol"><u>example from Visa</u></a> of the flow for agent validation (a simplified code sketch of these checks follows the two diagrams):</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Preu2aFUSuW5o3UWE6281/caf5354a009fb89c8b01cfef10fc3e87/image3.png" />
          </figure><p>Mastercard’s Agent Pay validation flow is outlined below:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1vzpRtW4dRzsNGdc4Vnxf1/c780ef45a0b1fc263eb13b62b2af5457/image2.png" />
          </figure>
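            <p>To make these checks concrete, here is a simplified sketch of what the validation logic might look like. This is our illustration, not Cloudflare’s production code: it reconstructs only the two covered components from the example request above, assumes the runtime supports Ed25519 in WebCrypto, and uses a hypothetical <code>keyDirectory.fetchKey()</code> helper (returning raw public key bytes) plus an in-memory nonce set where a real deployment would use a distributed cache.</p>
            <pre><code>async function verifyAgentRequest(request, keyDirectory, seenNonces) {
  // 1. Both signature headers must be present.
  const sigInput = request.headers.get('Signature-Input');
  const sigHeader = request.headers.get('Signature');
  if (!sigInput || !sigHeader) return false;

  // 2. Look up the agent's public key by keyid (cached or from the directory).
  const keyid = /keyid="([^"]+)"/.exec(sigInput)?.[1];
  const publicKeyBytes = keyid &amp;&amp; await keyDirectory.fetchKey(keyid);
  if (!publicKeyBytes) return false;

  // 3. The signature must be inside its validity window.
  const now = Math.floor(Date.now() / 1000);
  const created = Number(/created=(\d+)/.exec(sigInput)?.[1]);
  const expires = Number(/expires=(\d+)/.exec(sigInput)?.[1]);
  if (!(created &lt;= now &amp;&amp; now &lt;= expires)) return false;

  // 4. Reject replayed nonces.
  const nonce = /nonce="([^"]+)"/.exec(sigInput)?.[1];
  if (!nonce || seenNonces.has(nonce)) return false;
  seenNonces.add(nonce);

  // 5. The tag must name one of the protocol's browse/pay intents.
  const tag = /tag="([^"]+)"/.exec(sigInput)?.[1];
  if (!['agent-browser-auth', 'agent-payer-auth'].includes(tag)) return false;

  // 6. Rebuild the canonical RFC 9421 signature base (simplified to the
  // two covered components used in the example request).
  const url = new URL(request.url);
  const params = sigInput.replace(/^sig2=/, '');
  const base = `"@authority": ${url.host}\n` +
               `"@path": ${url.pathname}\n` +
               `"@signature-params": ${params}`;

  // 7. Verify the Ed25519 signature over the base string.
  const sigBytes = Uint8Array.from(atob(/:(.+):/.exec(sigHeader)[1]),
                                   (ch) => ch.charCodeAt(0));
  const key = await crypto.subtle.importKey('raw', publicKeyBytes,
    { name: 'Ed25519' }, false, ['verify']);
  return crypto.subtle.verify('Ed25519', key, sigBytes,
    new TextEncoder().encode(base));
}</code></pre>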
    <div>
      <h3>What’s next: Cloudflare’s Agent SDK &amp; Managed Rules</h3>
      <a href="#whats-next-cloudflares-agent-sdk-managed-rules">
        
      </a>
    </div>
    <p>We recently introduced support for <a href="https://blog.cloudflare.com/x402/#cloudflares-mcp-servers-agents-sdk-and-x402-payments"><u>x402 transactions</u></a> into Cloudflare’s <a href="https://agents.cloudflare.com/"><u>Agent SDK</u></a>, allowing anyone building an agent to easily transact using the new x402 protocol. We will similarly be working with Visa and Mastercard over the coming months to bring support for their protocols directly to the Agents SDK. This will allow developers to manage their registered agent’s private keys and to easily create the correct HTTP message signatures to authorize their agent to browse and transact on a merchant website.</p><p>Conceptually, the requests in a Cloudflare Worker would look something like this:</p>
            <pre><code>/**
 * Illustrative example of a Cloudflare Worker acting as a trusted agent.
 * The headers follow RFC 9421 / Web Bot Auth; the exact parameters each
 * payment network requires may differ from this sketch.
 */

const b64encode = (buf) => btoa(String.fromCharCode(...new Uint8Array(buf)));
const b64decode = (str) => Uint8Array.from(atob(str), (ch) => ch.charCodeAt(0));

// Build the 'Signature-Input' and 'Signature' headers for a target URL.
async function createSignatureHeaders(targetUrl, credentials) {
    const url = new URL(targetUrl);

    // 1. Generate timestamps and a unique nonce for replay protection.
    const created = Math.floor(Date.now() / 1000);
    const expires = created + 300; // five-minute validity window
    const nonce = b64encode(crypto.getRandomValues(new Uint8Array(64)));

    // 2. Construct the signature parameters with all required fields.
    const params = `("@authority" "@path");created=${created};` +
        `expires=${expires};keyid="${credentials.keyId}";alg="Ed25519";` +
        `nonce="${nonce}";tag="web-bot-auth"`;

    // 3. Build the canonical signature base according to RFC 9421.
    const base = `"@authority": ${url.host}\n` +
        `"@path": ${url.pathname}\n` +
        `"@signature-params": ${params}`;

    // 4. Use the private key (PKCS#8, base64-encoded) to sign the base.
    const key = await crypto.subtle.importKey(
        'pkcs8', b64decode(credentials.privateKey),
        { name: 'Ed25519' }, false, ['sign']);
    const sig = await crypto.subtle.sign(
        'Ed25519', key, new TextEncoder().encode(base));

    // 5. Return the fully formed headers.
    const signedHeaders = new Headers();
    signedHeaders.set('Signature-Input', `sig2=${params}`);
    signedHeaders.set('Signature', `sig2=:${b64encode(sig)}:`);
    return signedHeaders;
}

export default {
    async fetch(request, env) {
        // 1. Load the final API endpoint and private signing credentials.
        const targetUrl = new URL(request.url).searchParams.get('target');
        if (!targetUrl) return new Response('missing ?target=', { status: 400 });
        const credentials = {
            privateKey: env.PAYMENT_NETWORK_PRIVATE_KEY,
            keyId: env.PAYMENT_NETWORK_KEY_ID
        };

        // 2. Generate the required signature headers using the helper.
        const signatureHeaders = await createSignatureHeaders(targetUrl, credentials);

        // 3. Attach the signature headers to a copy of the incoming headers.
        const signedRequestHeaders = new Headers(request.headers);
        signedRequestHeaders.set('Signature-Input', signatureHeaders.get('Signature-Input'));
        signedRequestHeaders.set('Signature', signatureHeaders.get('Signature'));

        // 4. Forward the fully signed request to the protected API.
        return fetch(targetUrl, { headers: signedRequestHeaders });
    },
};</code></pre>
            <p>We’ll also be creating new <a href="https://developers.cloudflare.com/waf/managed-rules/"><u>managed rulesets</u></a> for our customers that make it easy to allow agents that are using the Trusted Agent Protocol or Agent Pay. You might want to disallow most automated traffic to your storefront but not miss out on revenue opportunities from agents authorized to make a purchase on behalf of a cardholder. A managed rule would make this straightforward to implement. As the website owner, you could enable a managed rule that automatically allows all trusted agents registered with Visa or Mastercard to visit your site, letting them pass your other bot protection &amp; WAF rules. </p><p>These protocols will continue to evolve, and we will incorporate feedback to ensure that agent registration and validation work seamlessly across all networks and align with the Web Bot Auth proposal. American Express will also be leveraging Web Bot Auth as the foundation of their agentic commerce offering.</p>
    <div>
      <h2>How to get started today </h2>
      <a href="#how-to-get-started-today">
        
      </a>
    </div>
    <p>You can start building with Cloudflare’s <a href="https://agents.cloudflare.com/"><u>Agent SDK today</u></a>, see a sample implementation of the <a href="https://github.com/visa/trusted-agent-protocol"><u>Trusted Agent Protocol</u></a>, and view the <a href="https://developer.visa.com/capabilities/trusted-agent-protocol/trusted-agent-protocol-specifications"><u>Trusted Agent Protocol</u></a> and <a href="https://www.mastercard.com/us/en/business/artificial-intelligence/mastercard-agent-pay.html"><u>Agent Pay</u></a> docs. </p><p>We look forward to your contributions and feedback, whether that means engaging on GitHub, building apps, or joining mailing list discussions.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Agents]]></category>
            <category><![CDATA[AI Bots]]></category>
            <guid isPermaLink="false">7EMx28KsZIufcu4wEq5YtV</guid>
            <dc:creator>Rohin Lohe</dc:creator>
            <dc:creator>Will Allen</dc:creator>
        </item>
        <item>
            <title><![CDATA[To build a better Internet in the age of AI, we need responsible AI bot principles. Here’s our proposal.]]></title>
            <link>https://blog.cloudflare.com/building-a-better-internet-with-responsible-ai-bot-principles/</link>
            <pubDate>Wed, 24 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are proposing—as starting points—responsible AI bot principles that emphasize transparency, accountability, and respect for content access and use preferences. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare has a unique vantage point: we see not only how changes in technology shape the Internet, but also how new technologies can unintentionally impact different stakeholders. Take, for instance, the increasing reliance by everyday Internet users on AI–powered <a href="https://www.cloudflare.com/learning/bots/what-is-a-chatbot/"><u>chatbots</u></a> and <a href="https://www.pewresearch.org/short-reads/2025/07/22/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results/sr_25-07-22_ai_summaries_1/"><u>search summaries</u></a>. On the one hand, end users are getting information faster than ever before. On the other hand, web publishers, who have historically relied on human eyeballs on their websites to support their businesses, are seeing a <a href="https://www.forbes.com/sites/torconstantino/2025/04/14/the-60-problem---how-ai-search-is-draining-your-traffic/"><u>dramatic</u></a> <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/"><u>decrease</u></a> in those eyeballs, which can reduce their ability to create original, high-quality content. This cycle will ultimately hurt end users and AI companies (whose success relies on fresh, high-quality content to train models and provide services) alike.</p><p>We are indisputably at a point in time when the Internet needs clear “rules of the road” for AI bot behavior (a note on terminology: throughout this blog we refer to AI bots and crawlers interchangeably). We have had ongoing cross-functional conversations, both internally and with stakeholders and partners across the world, and it’s clear to us that the Internet at large needs key groups — publishers and content creators, bot operators, and Internet infrastructure and cybersecurity companies — to reach a consensus on certain principles that AI bots should follow.</p><p>Of course, agreeing on what exactly those principles are will take time and require continued discussion and collaboration, and a policy framework can’t perfectly capture every technical concern. Nevertheless, we think it’s important to start a conversation that we hope others will join. After all, a rough draft is better than a blank page.</p><p>That is why we are proposing the following responsible AI bot principles as starting points:</p><ol><li><p><b>Public disclosure: </b>Companies should publicly disclose information about their AI bots;</p></li><li><p><b>Self-identification: </b>AI bots should truthfully self-identify, eventually replacing less reliable methods, like user agent and IP address verification, with cryptographic verification;</p></li><li><p><b>Declared single purpose:</b> AI bots should have one distinct purpose and declare it;</p></li><li><p><b>Respect preferences: </b>AI bots should respect and comply with preferences expressed by website operators where proportionate and technically feasible;</p></li><li><p><b>Act with good intent:</b> AI bots must not flood sites with excessive traffic or engage in deceptive behavior.</p></li></ol><p>Each principle is discussed in greater detail <a href="#responsible-ai-bot-principles"><u>below</u></a>. These principles focus on AI bots because of the impact <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/"><u>generative AI</u></a> is having on the Internet, but we have already seen these practices in action with other types of (non-AI) bots as well. We believe these principles will help move the Internet in a better direction. 
That said, we acknowledge that they are a starting point for this conversation, which requires input from other stakeholders. The Internet has always been a collaborative place for innovation, and these principles should be seen as equally dynamic and evolving. </p>
    <div>
      <h2>Why Cloudflare is encouraging this conversation</h2>
      <a href="#why-cloudflare-is-encouraging-this-conversation">
        
      </a>
    </div>
    <p>Since declaring July 1st <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>Content Independence Day</u></a>, Cloudflare has strived to play a balanced and effective role in safeguarding the future of the Internet in the age of generative AI. We have enabled customers to <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/"><u>charge AI crawlers for access</u></a> or <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/"><u>block them with one click</u></a>, published and enforced our <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>verified bots policy</u></a> and developed the <a href="https://developers.cloudflare.com/bots/reference/bot-verification/web-bot-auth/"><u>Web Bot Auth</u></a> proposal, and unapologetically <a href="https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/#how-well-meaning-bot-operators-respect-website-preferences"><u>called out and stopped bad behavior</u></a>.</p><p>While we have recently focused our attention on AI crawlers, Cloudflare has long been a leader in the bot management space, helping our customers protect their websites from unwanted — and even malicious — traffic. We also want to make sure that anyone — whether they’re our customer or not — can see <a href="https://radar.cloudflare.com/ai-insights#ai-bot-best-practices"><u>which AI bots are abiding by all, some, or none of these best practices</u></a>. </p><p>But we aren’t ignorant of the fact that companies operating crawlers are also adapting to a new Internet landscape — and we genuinely believe that most players in this space want to do the right thing, while continuing to innovate and propel the Internet in an exciting direction. Our hope is that we can use our expertise and unique vantage point on the Internet to help bring seemingly incompatible parties together and find a path forward — continuing our mission of helping to build a better Internet for everyone.</p>
    <div>
      <h2>Responsible AI bot principles</h2>
      <a href="#responsible-ai-bot-principles">
        
      </a>
    </div>
    <p>The following principles are a launchpad for a larger conversation, and we recognize that there is work to be done to address many nuanced perspectives. We envision these principles applying to AI bots but understand that technical complexity may require flexibility. <b>Ultimately, our goal is to emphasize transparency, accountability, and respect for content access and use preferences.</b> If these principles fall short of that — or fail to consider other important priorities — we want to know.</p>
    <div>
      <h3>Principle #1: Public disclosure</h3>
      <a href="#principle-1-public-disclosure">
        
      </a>
    </div>
    <p><b>Companies should publicly disclose information about their AI bots.</b> The following information should be publicly available and easy to find:</p><ul><li><p><b>Identity:</b> information that helps external parties identify a bot, <i>e.g.</i>, user agent, relevant IP address(es), and/or individual cryptographic identification (more on this below, in <a href="#principle-2-self-identification"><i><u>Principle #2: Self-identification</u></i></a>);</p></li><li><p><b>Operator:</b> the legal entity responsible for the AI bot, including a point of contact (<i>e.g.</i>, for reporting abuse);</p></li><li><p><b>Purpose:</b> the purpose for which the accessed data will be used, <i>i.e.</i>, search, AI-input, or training (more on this below, in <a href="#principle-3-declared-single-purpose"><i><u>Principle #3: Declared Single Purpose</u></i></a>).</p></li></ul><p>OpenAI is an example of a leading AI company that clearly <a href="https://platform.openai.com/docs/bots"><u>discloses its bots</u></a>, complete with detailed explanations of each bot’s purpose. The benefits of this disclosure are apparent in the subsequent principles. It helps website operators validate that a given request is in fact coming from OpenAI and what its purpose is (<i>e.g.</i>, search indexing or AI model training). This, in turn, enables website operators to control access to and use of their content through preference expression mechanisms, like <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><u>robots.txt files</u></a>.</p>
    <div>
      <h3>Principle #2: Self-identification</h3>
      <a href="#principle-2-self-identification">
        
      </a>
    </div>
    <p><b>AI bots should truthfully self-identify.</b> Not only should information about bots be disclosed in a publicly accessible location, but this information should also be clearly communicated by bots themselves, <i>e.g.,</i> through an HTTP request that conveys the bot’s official user agent and comes from an IP address that the bot claims to send traffic from. Admittedly, this current approach is flawed, as we discuss in <a href="#a-note-on-cryptographic-verification-and-the-future-of-principle-2"><u>more detail below</u></a>. But until cryptographic verification is more widely adopted, we think relying on user agent and IP verification is better than nothing.</p><p>OpenAI’s <a href="https://radar.cloudflare.com/bots/directory/gptbot"><u>GPTBot</u></a> is an example of this principle in action. OpenAI <a href="https://platform.openai.com/docs/bots"><u>publicly shares</u></a> the expected full user-agent string for this bot and includes it in its requests. OpenAI also explains this bot’s purpose (“used to make [OpenAI’s] generative AI foundation models more useful and safe” and “to crawl content that may be used in training [their] generative AI foundation models”). And we have observed this bot sending traffic from IP addresses reported by OpenAI. Because site operators see GPTBot’s user agent and IP addresses matching what is publicly disclosed and expected, and they know information about the bot is publicly documented, they can confidently recognize the bot. This enables them to make informed decisions about whether they want to allow traffic from it.</p><p>Unfortunately, not all bots uphold this principle, making it difficult for website owners to know exactly which bot operators respect their crawl preferences, much less enforce them. For example, Anthropic publishes its user agent, but, absent other verifiable information, it’s unclear which requests are truly from Anthropic. And xAI’s bot, Grok, does not self-identify at all, making it impossible for website operators to block it. Anthropic’s and xAI’s lack of identification undermines trust between them and website owners, yet this could be fixed with minimal effort on their parts.</p>
    <div>
      <h2>A note on cryptographic verification and the future of Principle #2</h2>
      <a href="#a-note-on-cryptographic-verification-and-the-future-of-principle-2">
        
      </a>
    </div>
    <p>Truthful declaration of user agents and dedicated IP lists have historically been a functional way to verify bot identity. But in today’s rapidly evolving bot climate, bots are increasingly vulnerable to being spoofed by bad actors. These bad actors, in turn, ignore robots.txt, which communicates allow/disallow preferences only on a user agent basis (so a bad bot could spoof a permitted user agent and circumvent that domain’s preferences).</p><p><b>Ultimately, every AI bot should be cryptographically verified using an accepted standard.</b> This would protect bots against spoofing and ensure website operators have the accurate and reliable information they need to properly evaluate access by AI bots. At this time, we believe that <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>Web Bot Auth</u></a> is sufficient proof of compliance with Principle #2. We recognize that this standard is still in development, and, as a result, this principle may evolve accordingly.</p><p>Web Bot Auth <a href="https://blog.cloudflare.com/web-bot-auth/"><u>uses cryptography to verify bot traffic</u></a>; cryptographic signatures in HTTP messages serve as verification that a given request came from an automated bot. Our implementation relies on proposed IETF <a href="https://datatracker.ietf.org/doc/html/draft-meunier-http-message-signatures-directory"><u>directory</u></a> and <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>protocol</u></a> drafts. Initial reception of Web Bot Auth has been very positive, and we expect even more adoption. For example, a little over a month ago, Vercel <a href="https://vercel.com/changelog/vercels-bot-verification-now-supports-web-bot-auth"><u>announced</u></a> that its bot verification now supports Web Bot Auth. And OpenAI’s <a href="https://help.openai.com/en/articles/11845367-chatgpt-agent-allowlisting"><u>ChatGPT agent now signs its requests using Web Bot Auth</u></a>, in addition to using the HTTP Message Signatures <a href="https://datatracker.ietf.org/doc/html/rfc9421"><u>standard</u></a>.</p><p>We envision a future where cryptographic authentication becomes the norm, as we believe this will further strengthen the trustworthiness of bots.</p>
    <div>
      <h3>Principle #3: Declared single purpose </h3>
      <a href="#principle-3-declared-single-purpose">
        
      </a>
    </div>
    <p><b>AI bots should have one distinct purpose and declare it. </b>Today, <a href="https://blog.cloudflare.com/ai-crawler-traffic-by-purpose-and-industry"><u>some</u></a> bots self-identify their purpose as Training, Search, or User Action (<i>i.e.</i>, accessing a web page in response to a user’s query).</p><p>However, these purposes are sometimes combined without clear distinction. For example, content accessed for search purposes might also be used to train the AI model powering the search engine. When a bot’s purpose is unclear, website operators face a difficult decision: block it and risk undermining search engine optimization (SEO), or allow it and risk content being used in unwanted ways.</p><p>When operators deploy bots with distinct purposes, website owners are able to make clear decisions over who can access their content. What those purposes should be is up for debate, but we think the following breakdown is a starting point based on bot activity we see. We recognize this is an evolving space and changes may be required as innovation continues:</p><ul><li><p><b>Search:</b> building a search index and providing search results (<i>e.g.</i>, returning hyperlinks and short excerpts from your website’s contents). Search does <u>not</u> include providing AI-generated search summaries;</p></li><li><p><b>AI-input:</b> inputting content into one or more AI models, <i>e.g.</i>, retrieval-augmented generation (RAG), grounding, or other real-time taking of content for generative AI search answers; and</p></li><li><p><b>Training:</b> training or fine-tuning AI models.</p></li></ul><p>Relatedly, bots should not combine purposes in a way that prevents web operators from deliberately and effectively deciding whether to allow crawling.</p><p>Let’s consider two AI bots, OAI-SearchBot and Googlebot, from the perspective of Vinny, a website operator trying to make a living on the Internet. OAI-SearchBot has a single purpose: linking to and surfacing websites in ChatGPT’s search features. If Vinny takes OpenAI at face value (which we think it makes sense to do), he can trust that OAI-SearchBot does not crawl his content for training OpenAI’s generative AI models; rather, a separate bot (GPTBot, as discussed in <a href="#principle-2-self-identification"><i><u>Principle #2: Self-identification</u></i></a>) does. Vinny can decide how he wants his content used by OpenAI, <i>e.g.</i>, permitting its use for search but not for AI training, and feel confident that his choices are respected because OAI-SearchBot <i>only</i> crawls for search purposes, while GPTBot is not granted access to the content in the first place (and therefore cannot use it).</p><p>On the other hand, while Googlebot scrapes content for traditional search indexing (not model training), it also uses that content for inference purposes, such as for AI Overviews and AI Mode. Why is this a problem for Vinny? While he almost certainly wants his content appearing in search results, which drive the human eyeballs that fund his site, Vinny is forced to also accept that his content will appear in Google’s AI-generated summaries. 
If eyeballs are satisfied by the summary, then they never visit Vinny’s website, which leads to <a href="https://www.bain.com/insights/goodbye-clicks-hello-ai-zero-click-search-redefines-marketing/"><u>“zero-click” searches</u></a> and undermines Vinny’s ability to financially benefit from his content.</p><p>This is a vicious cycle: creating high-quality content, which typically leads to higher search rankings, now inadvertently also reduces the chances an eyeball will visit the site because that same valuable content is surfaced in an AI Overview (if it is even referenced as a source in the summary). To prevent this, Vinny must either opt out of search completely or use snippet controls (which risks degrading how his content appears in search results). This is because the only available signal to opt out of AI, disallowing <a href="https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers#google-extended"><u>Google-Extended</u></a>, is limited to training and does not apply to AI Overviews, which are attached to search. Whether by accident or by design, this setup forces an impossible choice onto website owners.</p><p>Finally, the prominent technical argument in favor of combining multiple purposes — that this reduces the crawler operator’s costs — needs to be debunked. To reason by analogy: it’s like arguing that placing one call to order two pizzas is cheaper than placing two calls to order two pizzas. In reality, the cost of the two pizzas (both of which take time and effort to make) remains the same. The extra phone call may be annoying, but its costs are negligible.</p><p>Similarly, whether one bot request is made for two purposes (<i>e.g.</i>, search indexing and AI model training) or a separate bot request is made for each of two purposes, the costs basically remain the same. For the crawler, the cost of compute is the same because the content still needs to be processed for each purpose. And the cost of two connections (<i>i.e.</i>, for two requests) is virtually the same as one. We know this because Cloudflare runs one of the largest networks in the world, handling on average 84 million requests per second, so we understand the cost of requests at Internet scale. (As an aside, while additional crawls incur costs on website operators, they have the ability to choose whether a crawl is worth the cost, especially when bots have a single purpose.)</p>
    <div>
      <h3>Principle # 4: Respect preferences</h3>
      <a href="#principle-4-respect-preferences">
        
      </a>
    </div>
    <p><b>AI bots should respect and comply with preferences expressed by website operators where proportionate and technically feasible.</b> There are multiple options for expressing preferences. Prominent examples include the longstanding and familiar robots.txt, as well as newly emerging HTTP headers.</p><p>Given the widespread use of robots.txt files, bots should make a good faith attempt to fetch a robots.txt file first, in accordance with <a href="https://datatracker.ietf.org/doc/html/rfc9309"><u>RFC 9309</u></a>, and abide by both the access and use preferences specified therein. AI bot operators should also stay up to date on how those preferences evolve as a result of a <a href="https://ietf-wg-aipref.github.io/drafts/draft-ietf-aipref-vocab.html"><u>draft vocabulary</u></a> currently under development by an IETF working group. The goal of the proposed vocabulary is to improve granularity in robots.txt files, so that website operators are empowered to control how their assets are used. </p><p>At the same time, new industry standards under discussion may involve the attachment of machine-readable preferences to different formats, such as individual files. AI bot operators should eventually be prepared to comply with these standards, too. One idea currently being explored is a way for site owners to list preferences via HTTP headers, which offer a server-level method of declaring how content should be used.</p>
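            <p>For example, a website operator who welcomes search crawling but opts out of AI training might publish a robots.txt like the following (a hypothetical illustration using RFC 9309 syntax; the user agent names are the ones the operators disclose publicly):</p>
            <pre><code># Allow OpenAI's search crawler, but disallow its training crawler.
User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Disallow: /

# Everyone else may crawl public pages, but not drafts.
User-agent: *
Disallow: /drafts/</code></pre>
            <p>The draft IETF vocabulary mentioned above aims to let operators express, in the same file, which <i>uses</i> of fetched content are acceptable, rather than forcing a binary allow/disallow per user agent.</p>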
    <div>
      <h3>Principle #5: Act with good intent</h3>
      <a href="#principle-5-act-with-good-intent">
        
      </a>
    </div>
    <p><b>AI bots must not flood sites with excessive traffic or engage in deceptive behavior.</b> AI bot behavior should be benign or helpful to website operators and their users. It is also incumbent on companies that operate AI bots to monitor their networks and resources for breaches and patch vulnerabilities. Jeopardizing a website’s security or performance or engaging in harmful tactics is unacceptable.</p><p>Nor is it appropriate to appear to comply with the principles, only to secretly circumvent them. Reaffirming a long-standing principle of acceptable bot behavior, AI bots must never engage in <a href="https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/"><u>stealth crawling</u></a> or use other stealth tactics to try and dodge detection, such as modifying their user agent, changing their source <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a> to hide their crawling activity, or ignoring robots.txt files. Doing so would undermine the preceding four principles, hurting website operators and worsening the Internet for all.</p>
    <div>
      <h2>The road ahead: multi-stakeholder efforts to bring these principles to life</h2>
      <a href="#the-road-ahead-multi-stakeholder-efforts-to-bring-these-principles-to-life">
        
      </a>
    </div>
    <p>As we continue working on these principles and soliciting feedback, we strive to find a balance: we want the wishes of content creators respected while still encouraging AI innovation. It’s a privilege to sit at the intersection of these important interests and to play a crucial role in developing an agreeable path forward.</p><p>We are continuing to engage with rights holders, AI companies, policy-makers, and regulators to shape global industry standards and regulatory frameworks accordingly. We believe that the influx of generative AI use need not threaten the Internet’s place as an open source of quality content. Protecting its integrity requires agreement on workable technical standards that reflect the interests of web publishers, content creators, and AI companies alike.</p><p>The whole ecosystem must continue to come together and collaborate towards a better Internet that truly works for everyone. Cloudflare advocates for neutral forums where all affected parties can discuss the impact of AI developments on the Internet. One such example is the IETF, which has current work focused on some of the technical aspects being considered. Those efforts attempt to address some, but not all, of the issues in an area that deserves holistic consideration. We believe the principles we have proposed are a step in the right direction — but we hope others will join this complex and important conversation, so that norms and behavior on the Internet can successfully adapt to this exciting new technological age.</p> ]]></content:encoded>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Generative AI]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1sZkiH7eUUcU8zs4jpo6F8</guid>
            <dc:creator>Leah Romm</dc:creator>
            <dc:creator>Sebastian Hufnagel</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Week 2025: Recap]]></title>
            <link>https://blog.cloudflare.com/ai-week-2025-wrapup/</link>
            <pubDate>Wed, 03 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ How do we embrace the power of AI without losing control? That was one of our big themes for AI Week 2025. Check out all of the products, partnerships, and features we announced. ]]></description>
            <content:encoded><![CDATA[ <p>How do we embrace the power of AI without losing control? </p><p>That was one of our big themes for AI Week 2025, which has now come to a close. We announced products, partnerships, and features to help companies successfully navigate this new era.</p><p>Everything we built was based on feedback from customers like you who want to get the most out of AI without sacrificing control and safety. Over the next year, we will double down on our efforts to deliver world-class features that augment and secure AI. Please keep an eye on our Blog, AI Avenue, the Product Change Log, and Cloudflare TV for more announcements.</p><p>This week, we focused on four core areas to help companies secure and deliver AI experiences:</p><ul><li><p><b>Securing AI environments and workflows</b></p></li><li><p><b>Protecting original content from misuse by AI</b></p></li><li><p><b>Helping developers build world-class, secure AI experiences</b></p></li><li><p><b>Making Cloudflare better for you with AI</b></p></li></ul><p>Thank you for following along with our first-ever AI Week at Cloudflare. This recap blog will summarize each announcement across these four core areas. For more information, check out our “<a href="http://thisweekinnet.com"><u>This Week in NET</u></a>” recap episode, featured at the end of this blog.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JQHvkcThqyE3f21FjM59I/20e41ab0d3c4aaecbedc6d51b5c1f9f8/BLOG-2933_2.png" />
          </figure>
    <div>
      <h2>Securing AI environments and workflows</h2>
      <a href="#securing-ai-environments-and-workflows">
        
      </a>
    </div>
    <p>These posts and features focused on helping companies control and understand their employees’ usage of AI tools.</p><table><tr><td><p><b>Blog</b></p></td><td><p><b>Recap</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-prompt-protection/">Beyond the ban: A better way to secure generative AI applications</a></p></td><td><p>Generative AI tools present a trade-off of productivity and data risk. Cloudflare One’s new AI prompt protection feature provides the visibility and control needed to govern these tools, allowing organizations to confidently embrace AI.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/shadow-AI-analytics/">Unmasking the Unseen: Your Guide to Taming Shadow AI with Cloudflare One</a></p></td><td><p>Don't let "Shadow AI" silently leak your data to unsanctioned AI. This new threat requires a new defense. Learn how to gain visibility and control without sacrificing innovation.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/confidence-score-rubric/">Introducing Cloudflare Application Confidence Score For AI Applications</a></p></td><td><p>Cloudflare will provide confidence scores within our application library for Gen AI applications, allowing customers to assess their risk for employees using shadow IT.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/casb-ai-integrations/">ChatGPT, Claude, &amp; Gemini security scanning with Cloudflare CASB</a></p></td><td><p>Cloudflare CASB now scans ChatGPT, Claude, and Gemini for misconfigurations, sensitive data exposure, and compliance issues, helping organizations adopt AI with confidence.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/zero-trust-mcp-server-portals/">Securing the AI Revolution: Introducing Cloudflare MCP Server Portals</a></p></td><td><p>Cloudflare MCP Server Portals are now available in Open Beta. MCP Server Portals are a new capability that enables you to centralize, secure, and observe every MCP connection in your organization.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/best-practices-sase-for-ai/">Best Practices for Securing Generative AI with SASE</a></p></td><td><p>This guide provides best practices for Security and IT leaders to securely adopt generative AI using Cloudflare’s SASE architecture as part of a strategy for AI Security Posture Management (AI-SPM).</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3q82P48XrTFDEWKBiIWlVC/d9c1bfa96d7b170df2f66577767d1ecc/BLOG-2933_3.png" />
          </figure>
    <div>
      <h2>Protecting original content from misuse by AI</h2>
      <a href="#protecting-original-content-from-misuse-by-ai">
        
      </a>
    </div>
    <p>Cloudflare is committed to helping content creators control access to their original work. These announcements focused on analysis of what we’re currently seeing on the Internet with respect to AI bots and crawlers, and on significant improvements to our existing control features.</p><table><tr><td><p><b>Blog</b></p></td><td><p><b>Recap</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-crawler-traffic-by-purpose-and-industry/">A deeper look at AI crawlers: breaking down traffic by purpose and industry</a></p></td><td><p>We are extending AI-related insights on Cloudflare Radar with new industry-focused data and a breakdown of bot traffic by purpose, such as training or user action.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/signed-agents/">The age of agents: cryptographically recognizing agent traffic</a></p></td><td><p>Cloudflare now lets websites and bot creators use Web Bot Auth to segment agents from verified bots, making it easier for customers to allow or disallow the many types of user- and partner-directed agents.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/conversational-search-with-nlweb-and-autorag/">Make Your Website Conversational for People and Agents with NLWeb and AutoRAG</a></p></td><td><p>With NLWeb, an open project by Microsoft, and Cloudflare AutoRAG, conversational search is now a one-click setup for your website.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/introducing-ai-crawl-control/">The next step for content creators in working with AI bots: Introducing AI Crawl Control</a></p></td><td><p>Cloudflare launches AI Crawl Control (formerly AI Audit) and introduces easily customizable 402 HTTP responses.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/crawlers-click-ai-bots-training/">The crawl-to-click gap: Cloudflare data on AI bots, training, and referrals</a></p></td><td><p>By mid-2025, training drives nearly 80% of AI crawling, while referrals to publishers (especially from Google) are falling and crawl-to-refer ratios show AI consumes far more than it sends back.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2XxME3f6wr64laagnl7fMR/d6929874d74637eec7d0227de0c33211/BLOG-2933_4.png" />
          </figure>
    <div>
      <h2>Helping developers build world-class, secure AI experiences</h2>
      <a href="#helping-developers-build-world-class-secure-ai-experiences">
        
      </a>
    </div>
    <p>At Cloudflare, we are committed to building the best platform for creating AI experiences, all with security by default.</p><table><tr><td><p><b>Blog</b></p></td><td><p><b>Recap</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/ai-gateway-aug-2025-refresh/">AI Gateway now gives you access to your favorite AI models, dynamic routing and more — through just one endpoint</a></p></td><td><p>A single AI Gateway endpoint now gives you access to your favorite AI models, dynamic routing, and more.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudflares-most-efficient-ai-inference-engine/">How we built the most efficient inference engine for Cloudflare’s network</a></p></td><td><p>Infire is an LLM inference engine that employs a range of techniques to maximize resource utilization, allowing us to serve AI models more efficiently with better performance for Cloudflare workloads.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/workers-ai-partner-models/">State-of-the-art image generation Leonardo models and text-to-speech Deepgram models now available in Workers AI</a></p></td><td><p>We're expanding Workers AI with new partner models from Leonardo.Ai and Deepgram. Start using state-of-the-art image generation models from Leonardo and real-time TTS and STT models from Deepgram.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/how-cloudflare-runs-more-ai-models-on-fewer-gpus/">How Cloudflare runs more AI models on fewer GPUs: A technical deep-dive</a></p></td><td><p>Cloudflare built an internal platform called Omni. This platform uses lightweight isolation and memory over-commitment to run multiple AI models on a single GPU.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/welcome-to-ai-avenue/">Cloudflare Launching AI Miniseries for Developers (and Everyone Else They Know)</a></p></td><td><p>In AI Avenue, we address people’s fears, show them the art of the possible, and highlight the positive human stories where AI is augmenting — not replacing — what people can do. And yes, we even let people touch AI themselves.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/block-unsafe-llm-prompts-with-firewall-for-ai/">Block unsafe prompts targeting your LLM endpoints with Firewall for AI</a></p></td><td><p>Cloudflare's AI security suite now includes unsafe content moderation, integrated into the Application Security Suite via Firewall for AI.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudflare-realtime-voice-ai/">Cloudflare is the best place to build realtime voice agents</a></p></td><td><p>Today, we're excited to announce new capabilities that make it easier than ever to build real-time, voice-enabled AI applications on Cloudflare's global network.</p></td></tr></table>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/69qL26BPP68czkSiBGVkuM/2e916e61473354bff2806ac0d8a2517a/BLOG-2933_5.png" />
          </figure>
    <div>
      <h2>Making Cloudflare better for you with AI</h2>
      <a href="#making-cloudflare-better-for-you-with-ai">
        
      </a>
    </div>
    <p>Searching Cloudflare logs and analytics can often feel like looking for a needle in a haystack; AI helps surface and alert on issues that need attention or review. Instead of spending hours sifting and searching for an issue, humans can focus on action and remediation while AI does the sifting.</p><table><tr><td><p><b>Blog</b></p></td><td><p><b>Recap</b></p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/background-removal/">Evaluating image segmentation models for background removal for Images</a></p></td><td><p>An inside look at how the Images team compared dichotomous image segmentation models to identify and isolate subjects in an image from the background.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/automating-threat-analysis-and-response-with-cloudy/">Automating threat analysis and response with Cloudy</a></p></td><td><p>Cloudy now supercharges analytics investigations and Cloudforce One threat intelligence! Get instant insights from threat events and APIs on APTs, DDoS, cybercrime &amp; more - powered by Workers AI!</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/cloudy-driven-email-security-summaries/">Cloudy Summarizations of Email Detections: Beta Announcement</a></p></td><td><p>We're now leveraging our internal LLM, Cloudy, to generate automated summaries within our Email Security product, helping SOC teams better understand what's happening within flagged messages.</p></td></tr><tr><td><p><a href="https://blog.cloudflare.com/AI-troubleshoot-warp-and-network-connectivity-issues/">Troubleshooting network connectivity and performance with Cloudflare AI</a></p></td><td><p>Troubleshoot network connectivity issues by using Cloudflare AI to quickly self-diagnose and resolve WARP client and network issues.</p></td></tr></table><p>We thank you for following along this week — and please stay tuned for exciting announcements coming during Cloudflare’s 15th birthday week in September!</p><p>Check out the full video recap, featuring insights from Kenny Johnson and host João Tomé, in our special This Week in NET episode (<a href="https://thisweekinnet.com">ThisWeekinNET.com</a>) covering everything announced during AI Week 2025.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[Generative AI]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[AI WAF]]></category>
            <category><![CDATA[AI Bots]]></category>
            <guid isPermaLink="false">6l0AjZFdEn4hrKgQlWOYiB</guid>
            <dc:creator>Kenny Johnson</dc:creator>
            <dc:creator>James Allworth</dc:creator>
        </item>
        <item>
            <title><![CDATA[The age of agents: cryptographically recognizing agent traffic]]></title>
            <link>https://blog.cloudflare.com/signed-agents/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare now lets websites and bot creators use Web Bot Auth to segment agents from verified bots, making it easier for customers to allow or disallow the many types of user- and partner-directed agents. ]]></description>
            <content:encoded><![CDATA[ <p>On the surface, the goal of handling bot traffic is clear: keep malicious bots away, while letting through the helpful ones. Some bots are evidently malicious — such as mass price scrapers or those testing stolen credit cards. Others are helpful, like the bots that index your website. Cloudflare has segmented this second category of helpful bot traffic through our <a href="https://developers.cloudflare.com/bots/concepts/bot/#verified-bots"><u>verified bots</u></a> program, <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>vetting</u></a> and validating bots that are transparent about who they are and what they do.</p><p>Today, the rise of <a href="https://agents.cloudflare.com/"><u>agents</u></a> has transformed how we interact with the Internet, often blurring the distinctions between benign and malicious bot actors. Bots are no longer directed only by the bot owners, but also by individual end users to act on their behalf. These bots directed by end users are often working in ways that website owners want to allow, such as planning a trip, ordering food, or making a purchase.</p><p>Our customers have asked us for easier, more granular ways to ensure specific <a href="https://www.cloudflare.com/learning/bots/what-is-a-bot/"><u>bots</u></a>, <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/"><u>crawlers</u></a>, and <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>agents</u></a> can reach their websites, while continuing to block bad actors. That’s why we’re excited to introduce <b>signed agents</b>, an extension of our verified bots program that gives a new bot classification in our security rules and in Radar. Cloudflare has long recognized agents — but we’re now endowing them with their own classification to make it even easier for our customers to set the traffic lanes they want for their website. </p>
    <div>
      <h2>The age of agents</h2>
      <a href="#the-age-of-agents">
        
      </a>
    </div>
    <p>Cloudflare has continuously expanded our verified bot categorization to include different functions as the market has evolved. For instance, we first announced our grouping of <a href="https://blog.cloudflare.com/ai-bots/"><u>AI crawler traffic as an official bot category</u></a> in 2023. And in 2024, when OpenAI announced a <a href="https://openai.com/index/searchgpt-prototype/"><u>new AI search prototype</u></a> and introduced <a href="https://platform.openai.com/docs/bots"><u>three different bots</u></a> with distinct purposes, we <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>added three new categories</u></a> to account for this innovation: AI Search, AI Assistant, and Archiver.</p><p>But the bot landscape is constantly evolving. Let's unpack a common type of verified AI bot — an AI crawler such as <a href="https://radar.cloudflare.com/bots/directory/gptbot"><u>GPTBot</u></a>. Even though the bot performs an array of tasks, the bot’s ultimate purpose is a singular, repetitive task on behalf of the operator of that bot: fetch and index information. Its intelligence is applied to performing that singular job on behalf of that bot owner. </p><p>Agents, though, are different. Think about an AI agent tasked by a user to "Book the best deal for a round-trip flight to New York City next month." These agents sometimes use remote browsing products like Cloudflare's <a href="https://developers.cloudflare.com/browser-rendering/"><u>Browser Rendering</u></a> and similar products from companies like Browserbase and Anchor Browser. And here is the key distinction: this particular type of bot isn’t operating on behalf of a single company, like OpenAI in the prior example, but rather the end users themselves. </p>
    <div>
      <h2>Introducing signed agents</h2>
      <a href="#introducing-signed-agents">
        
      </a>
    </div>
    <p>In May, we announced Web Bot Auth, a new method of <a href="https://blog.cloudflare.com/web-bot-auth/"><u>using cryptography to verify bot and agent traffic</u></a>. HTTP message signatures allow bots to authenticate themselves and allow customer origins to identify them. This is one of the authentication methods we use today for our verified bots program. </p><p>What, exactly, is a <a href="https://developers.cloudflare.com/bots/concepts/bot/signed-agents/"><u>signed agent</u></a>? First, signed agents are generally directed by an end user rather than by a single company or entity. Second, the infrastructure or remote browsing platform the agents use signs their HTTP requests via Web Bot Auth, with Cloudflare validating these message signatures. And last, they comply with our <a href="https://developers.cloudflare.com/bots/concepts/bot/signed-agents/policy/"><u>signed agent policy</u></a>.</p><p>The signed agents classification improves on our existing frameworks in a couple of ways:</p><ol><li><p><b>Increased precision and visibility:</b> we’ve updated the <i>Cloudflare bots and agents directory to include signed agents</i> in addition to verified bots. This allows us to verify the cryptographic signatures of a much wider set of automated traffic, and lets our customers apply their security preferences more granularly and more easily. Bot operators can now <i>submit signed agent applications from the Cloudflare dashboard</i>, allowing bot owners to specify to us how they think we should segment their automated traffic. </p></li><li><p><b>Easier controls from security rules</b>: similar to how they can take action on verified bots as a group, our Enterprise customers will be able to take action on <i>signed agents as a group when configuring their security rules</i>. This new field will be available in the Cloudflare dashboard under security rules soon.</p></li></ol><p>To apply to have an agent added to Cloudflare’s directory of bots and agents, customers should complete the <a href="https://dash.cloudflare.com?to=/:account/configurations/bot-submission-form"><u>Bot Submission Form</u></a> in the Cloudflare dashboard. Here, they can specify whether the submission should be considered for the signed agents list or the verified bots list. All signed agents will be recognized by their cryptographic signatures through <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>Web Bot Auth validation</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5caeGdhlmI3dO3GNZKeEUg/0dac239a94732404861b3876f6bdb8b6/BLOG-2930_2.png" />
          </figure><p><sub>The Bot Submission Form, available in the Cloudflare dashboard for bot owners to submit both verified bot and signed agent applications.</sub></p><p>We want to be clear: our verified bots program isn’t going anywhere. In fact, well-behaved and transparent applications that make use of signed agents can further qualify to be a verified bot, if their specific service adheres to our <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>policy</u></a>. For instance,<a href="https://radar.cloudflare.com/scan"> <u>Cloudflare Radar's URL Scanner</u></a>, which relies on Browser Rendering as a service to scan URLs, is a <a href="https://radar.cloudflare.com/bots/directory/cloudflare-radar-url-scanner"><u>verified bot</u></a>. While Browser Rendering itself does not qualify to be a verified bot, URL Scanner does, since the bot owner (in this case, Cloudflare Radar) directs the traffic sent by the bot and always identifies itself with a unique Web Bot Auth signature — distinct from <a href="https://developers.cloudflare.com/browser-rendering/reference/automatic-request-headers/"><u>Browser Rendering’s signature</u></a>. </p>
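    <p>For readers curious about the mechanics: at its core, Web Bot Auth validation is an Ed25519 signature check over an RFC 9421 signature base built from the signed request components. The sketch below (Python, using the <code>cryptography</code> package) shows only that final check; it is an illustration under stated assumptions, not Cloudflare’s implementation, and it assumes the signature base string has already been reconstructed per RFC 9421 and the public key fetched from the agent’s key directory:</p>
            <pre><code>import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_web_bot_auth(public_key_raw: bytes, signature_b64: str,
                        signature_base: str) -> bool:
    # public_key_raw: the agent's 32-byte raw Ed25519 public key
    # signature_b64:  the base64 value taken from the Signature header
    # signature_base: the RFC 9421 signature base rebuilt from the request
    key = Ed25519PublicKey.from_public_bytes(public_key_raw)
    try:
        key.verify(base64.b64decode(signature_b64),
                   signature_base.encode("utf-8"))
        return True
    except InvalidSignature:
        return False</code></pre>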
    <div>
      <h2>From an agent’s perspective… </h2>
      <a href="#from-an-agents-perspective">
        
      </a>
    </div>
    <p>Since the launch of Web Bot Auth, our own Browser Rendering product has been sending signed Web Bot Auth HTTP headers, and is always given a bot score of 1 for our Bot Management customers. As of today, Browser Rendering will now show up in this new signed agent category. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1F8Z0E6WqJTxLf9G3PLB3a/84e80539be402066fe02ab60c431100a/BLOG-2930_3.png" />
          </figure><p>We’re also excited to announce the first cohort of agents that we’re partnering with and will be classifying as signed agents: <a href="https://openai.com/index/introducing-chatgpt-agent/"><u>ChatGPT agent</u></a>, <a href="https://block.xyz/inside/block-open-source-introduces-codename-goose"><u>Goose</u></a> from Block, <a href="https://docs.browserbase.com/introduction/what-is-browserbase"><u>Browserbase</u></a>, and <a href="https://anchorbrowser.io/"><u>Anchor Browser</u></a>. They are perfect examples of this new classification because their remote browsers are used by their end customers, not necessarily the companies themselves. We’re thrilled to partner with these teams to take this critical step for the AI ecosystem:</p><blockquote><p>“<i>When we built Goose as an open source tool, we designed it to run locally with an extensible architecture that lets developers automate complex workflows. As Goose has evolved to interact with external services and third-party sites on users' behalf, Web Bot Auth enables those sites to trust Goose while preserving what makes it unique. </i><b><i>This authentication breakthrough unlocks entirely new possibilities for autonomous agents</i></b>." – <b>Douwe Osinga</b>, Staff Software Engineer, Block</p></blockquote><blockquote><p><i>"At Browserbase, we provide web browsing capabilities for some of the largest AI applications. We're excited to partner with Cloudflare to support the adoption of Web Bot Auth, a critical layer of identity for agents. </i><b><i>For AI to thrive, agents need reliable, responsible web access.</i></b><i>"</i>  – <b>Paul Klein</b>, CEO, Browserbase</p></blockquote><blockquote><p><i>“Anchor Browser has partnered with Cloudflare to let developers ship verified browser agents. This way </i><b><i>trustworthy bots get reliable access while sites stay protected</i></b><i>.”</i> – <b>Idan Raman</b>, CEO, Anchor Browser</p></blockquote>
    <div>
      <h2>Updated visibility on Radar</h2>
      <a href="#updated-visibility-on-radar">
        
      </a>
    </div>
    <p>We want everyone to be in the know about our bot classifications. Cloudflare began publishing verified bots on our Radar page <a href="https://radar.cloudflare.com/bots#verified-bots"><u>back in 2022</u></a>, meaning anyone on the Internet — Cloudflare customer or not — can see all of our <a href="https://radar.cloudflare.com/bots#verified-bots"><u>verified bots on Radar</u></a>. We dynamically update the list of bots, but show more than just a list: we announced on <a href="https://www.cloudflare.com/en-gb/press-releases/2025/cloudflare-just-changed-how-ai-crawlers-scrape-the-internet-at-large/"><u>Content Independence Day</u></a> that <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/#one-more-thing"><u>every verified bot would get its own page</u></a> in our public-facing directory on Radar, which includes the traffic patterns that we see for each bot.</p><p>Our directory has been updated to include <a href="https://radar.cloudflare.com/bots/directory"><b><u>both signed agents and verified bots</u></b></a> — we share exactly how Cloudflare classifies the bots that it recognizes, plus we surface all of the traffic that Cloudflare observes from these many recognized agents and bots. Through this updated directory, we’re not only giving better visibility to our customers, but also striving to set a higher standard for transparency of bot traffic on the Internet. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/65QPFjmbBde3EzHTOwElSL/cccc8f23c37716c251e0c21850855265/BLOG-2930_4.png" />
          </figure><p><sub>Cloudflare Radar’s Bots Directory, which lists verified bots and signed agents. This view is filtered to view only agent entries.</sub></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2wBz7UwrQQzT7rJJnXiF8C/16eed3f1afd95cac32c4bcb647c6e5e6/BLOG-2930_5.png" />
          </figure><p><sub>Cloudflare Radar’s signed agent page for ChatGPT agent, which includes its traffic patterns for the last 7 days, from August 21, 2025 to August 27, 2025. </sub></p>
    <div>
      <h2>What’s now, what’s next</h2>
      <a href="#whats-now-whats-next">
        
      </a>
    </div>
    <p>As of today, the Cloudflare bot directory supports both bots and agents in a more clear-cut way, and customers or agent creators can submit agents to be signed and recognized <a href="https://dash.cloudflare.com/?to=/:account/configurations/bot-submission-form"><u>through their account dashboard</u></a>. In addition, anyone can see our signed agents and their traffic patterns on Radar. Soon, customers will be able to take action on signed agents as a group within their firewall rules, the same way they can take action on our verified bots. </p><p>Agents are changing the way that humans interact with the Internet. Websites need to know what tools are interacting with them, and the builders of those tools need to be able to scale easily. Message signatures help achieve both of these goals, but this is only step one. Cloudflare will continue to make it easier for agents and websites to interact (or not!) at scale, in a seamless way. </p><p>
</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">1LQFWI1jzZnWAqR4iFMLLi</guid>
            <dc:creator>Jin-Hee Lee</dc:creator>
        </item>
        <item>
            <title><![CDATA[The next step for content creators in working with AI bots: Introducing AI Crawl Control]]></title>
            <link>https://blog.cloudflare.com/introducing-ai-crawl-control/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launches AI Crawl Control (formerly AI Audit) and introduces easily customizable 402 HTTP responses. ]]></description>
            <content:encoded><![CDATA[ <p><i>Empowering content creators in the age of AI with smarter crawling controls and direct communication channels</i></p><p>Imagine you run a regional news site. Last month an AI bot scraped three years of archives in minutes — with no payment and little to no referral traffic. As a small company, you may struggle to get the AI company's attention for a licensing deal. Do you block all crawler traffic, or do you let them in and settle for the few referrals they send? </p><p>It’s a choice between two bad options.</p><p>Cloudflare wants to help break that stalemate. On July 1st of this year, we declared <a href="https://www.cloudflare.com/press-releases/2025/cloudflare-just-changed-how-ai-crawlers-scrape-the-internet-at-large/"><u>Content Independence Day</u></a> based on a simple premise: creators deserve control of how their content is accessed and used. Today, we're taking the next step in that journey by releasing AI Crawl Control to general availability — giving content creators and AI crawlers an important new way to communicate.</p>
    <div>
      <h2>AI Crawl Control goes GA</h2>
      <a href="#ai-crawl-control-goes-ga">
        
      </a>
    </div>
    <p>Today, we're rebranding our AI Audit tool as <b>AI Crawl Control</b> and moving it from beta to <b>general availability</b>. This reflects the tool's evolution from simple monitoring to detailed insights and <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">control over how AI systems can access your content</a>. </p><p>The market response has been overwhelming: content creators across industries needed real agency, not just visibility. AI Crawl Control delivers that control.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/pIAbmCR0tTK71umann3w0/e570c5f898e3d399babf6d1f82c2f3d8/image3.png" />
          </figure>
    <div>
      <h2>Using HTTP 402 to help publishers license content to AI crawlers</h2>
      <a href="#using-http-402-to-help-publishers-license-content-to-ai-crawlers">
        
      </a>
    </div>
    <p>Many content creators have faced a binary choice: either block all AI crawlers and miss potential licensing opportunities and referral traffic, or allow them through without any compensation. Many content creators had no practical way to say "we're open for business, but let's talk terms first."</p><p>Our customers are telling us:</p><ul><li><p>We want to license our content, but crawlers don't know how to reach us. </p></li><li><p>Blanket blocking feels like we're closing doors on potential revenue and referral traffic. </p></li><li><p>We need a way to communicate our terms before crawling begins. </p></li></ul><p>To address these needs, we are making it easier than ever to send customizable <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/402">402 HTTP status codes</a>. </p><p>Our <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/#what-if-i-could-charge-a-crawler"><u>private beta launch of Pay Per Crawl</u></a> put the HTTP 402 (“Payment Required”) response code to use, working in tandem with Web Bot Auth to enable direct payments between agents and content creators. Today, we’re making customizable 402 response codes available to every paid Cloudflare customer — not just pay per crawl users.</p><p>Here's how it works: in AI Crawl Control, paying Cloudflare customers will be able to select individual bots to block and send 402 Payment Required responses with a configurable message parameter. Think: "To access this content, email partnerships@yoursite.com or call 1-800-LICENSE" or "Premium content available via API at api.yoursite.com/pricing."</p><p>On an average day, Cloudflare customers are already sending over one billion 402 response codes. This shows a deep desire to move beyond blocking to open communication channels and new monetization models. With the 402 HTTP status code, content creators can tell crawlers exactly how to properly license their content, creating a direct path from crawling to a commercial agreement. We are excited to make this easier than ever in the AI Crawl Control dashboard. </p>
    <div>
      <h2>How to customize your 402 status code with AI Crawl Control</h2>
      <a href="#how-to-customize-your-402-status-code-with-ai-crawl-control">
        
      </a>
    </div>
    <p><b>For Paid Plan Users:</b></p><ul><li><p>When you block individual crawlers from the AI Crawl Control dashboard, you can now choose to send 402 Payment Required status codes and customize your message. For example: <b>To access this content, email partnerships@yoursite.com or call 1-800-LICENSE</b>.</p></li></ul><p>The response will look like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5v5x41azcAK14DBhXjXPEX/8c0960b4bb556d62e88d19c9dd544f12/image4.png" />
          </figure><p>The message can be configured from Settings in the AI Crawl Control Dashboard:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KMdRYwoey9RdYIxmzmFO1/7b39fd82d43349ee1cc4832cb602eb56/image1.png" />
          </figure>
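    <p>As an illustrative example, a crawler blocked this way would receive a response along the following lines (the exact headers and body format may differ from what ships in the dashboard):</p>
            <pre><code>HTTP/1.1 402 Payment Required

To access this content, email partnerships@yoursite.com or call 1-800-LICENSE</code></pre>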
    <div>
      <h2>Beyond just blocking AI bots</h2>
      <a href="#beyond-just-blocking-ai-bots">
        
      </a>
    </div>
    <p>This is just the beginning. We're planning to add additional parameters that will let crawlers understand the content's value, freshness, and licensing terms directly in the 402 response. Imagine crawlers receiving structured data about content quality and update frequency, for example, in addition to contact information.</p><p>Meanwhile, <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/">pay per crawl</a> continues advancing through beta, giving content creators the infrastructure to automatically monetize crawler access with transparent, usage-based pricing.</p><p>What excites us most is the market shift we're seeing. We're moving to a world where content creators have clear monetization paths to become active participants in the development of rich AI experiences. </p><p>The 402 response is a bridge between two industries that want to work together: content creators whose work fuels AI development, and AI companies who need high-quality data. Cloudflare’s AI Crawl Control creates the infrastructure for these partnerships to flourish.</p>
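    <p>As a purely hypothetical sketch of what such a future 402 response could carry (none of these header names exist today, and the final design may look quite different):</p>
            <pre><code>HTTP/1.1 402 Payment Required
crawler-price: USD XX.XX
content-last-updated: 2025-08-28
content-license-contact: partnerships@yoursite.com</code></pre>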
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/31Np3qX2ssbeGaJnZHQodA/92246d3618778715c2e8b295b7acaa29/image5.png" />
          </figure><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">3UcNgGUfIUIm0EEtNwgLAT</guid>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Pulkita Kini</dc:creator>
            <dc:creator>Cam Whiteside</dc:creator>
        </item>
        <item>
            <title><![CDATA[Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives]]></title>
            <link>https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/</link>
            <pubDate>Mon, 04 Aug 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Perplexity is repeatedly modifying their user agent and changing IPs and ASNs to hide their crawling activity, in direct conflict with explicit no-crawl preferences expressed by websites. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We are observing stealth crawling behavior from Perplexity, an AI-powered answer engine. Although Perplexity initially crawls from their declared user agent, when they are presented with a network block, they appear to obscure their crawling identity in an attempt to circumvent the website’s preferences. We see continued evidence that Perplexity is repeatedly modifying their user agent and changing their source <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASNs</u></a> to hide their crawling activity, as well as ignoring — or sometimes failing to even fetch — <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><u>robots.txt</u> </a>files.</p><p>The Internet as we have known it for the past three decades is <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/"><u>rapidly changing</u></a>, but one thing remains constant: it is built on trust. There are clear preferences that crawlers should be transparent, serve a clear purpose, perform a specific activity, and, most importantly, follow website directives and preferences. Based on Perplexity’s observed behavior, which is incompatible with those preferences, we have de-listed them as a verified <a href="https://www.cloudflare.com/learning/bots/what-is-a-bot/">bot</a> and added heuristics to our managed rules that <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block this stealth crawling</a>.</p>
    <div>
      <h3>How we tested</h3>
      <a href="#how-we-tested">
        
      </a>
    </div>
    <p>We received complaints from customers who had both disallowed Perplexity crawling activity in their <code>robots.txt</code> files and created <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF rules</a> to specifically block both of Perplexity’s <a href="https://docs.perplexity.ai/guides/bots"><u>declared crawlers</u></a>: <code>PerplexityBot</code> and <code>Perplexity-User</code>. These customers told us that Perplexity was still able to access their content even when its bots were successfully blocked. We confirmed that Perplexity’s crawlers were in fact being blocked on the specific pages in question, and then performed several targeted tests to confirm what exact behavior we could observe.</p><p>We created multiple brand-new <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domains</a>, similar to <code>testexample.com</code> and <code>secretexample.com</code>. These domains were newly purchased and had not yet been indexed by any search engine nor made publicly accessible in any discoverable way. We implemented a <code>robots.txt</code> file with directives to stop any respectful bot from accessing any part of the website:  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/66QyzKuX9DQqQYPvCZpw4m/78e7bbd4ff79dd2f1523e70ef54dab9e/BLOG-2879_-_2.png" />
          </figure><p>We conducted an experiment by querying Perplexity AI with questions about these domains, and discovered Perplexity was still providing detailed information regarding the exact content hosted on each of these restricted domains. This response was unexpected, as we had taken all necessary precautions to prevent this data from being retrievable by their <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/"><u>crawlers</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/08ZLg0OE7vX8x35f9rDeg/a3086959793ac565b329fbbab5e52d1e/BLOG-2879_-_3.png" />
          </figure><p></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uHc0gooXlr98LB56KBb3g/b7dae5987a64f2442d1f89cf21e974ba/BLOG-2879_-_4.png" />
          </figure>
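    <p>For site owners who want to replicate this kind of test, the setup amounts to a blanket-disallow robots.txt plus a WAF custom rule matching Perplexity’s declared user agents. Both snippets below are illustrative sketches rather than our exact test configuration:</p>
            <pre><code># robots.txt: disallow all crawling, for every user agent
User-agent: *
Disallow: /</code></pre>
            <p>together with a custom rule expression (action set to Block) in Cloudflare’s rules language, along the lines of:</p>
            <pre><code>(http.user_agent contains "PerplexityBot") or (http.user_agent contains "Perplexity-User")</code></pre>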
    <div>
      <h3>Obfuscating behavior observed</h3>
      <a href="#obfuscating-behavior-observed">
        
      </a>
    </div>
    <p><b>Bypassing robots.txt with undeclared IPs and user agents</b></p><p>Our multiple test domains explicitly prohibited all automated access in their robots.txt files and had specific WAF rules that blocked crawling from <a href="https://docs.perplexity.ai/guides/bots"><u>Perplexity’s public crawlers</u></a>. We observed that Perplexity uses not only their declared user agent, but also a generic user agent intended to impersonate Google Chrome on macOS when their declared crawler is blocked. </p><table><tr><td><p><b>Crawler</b></p></td><td><p><b>User agent</b></p></td><td><p><b>Request volume</b></p></td></tr><tr><td><p>Declared</p></td><td><p>Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Perplexity-User/1.0; +https://perplexity.ai/perplexity-user)</p></td><td><p>20-25m daily requests</p></td></tr><tr><td><p>Stealth</p></td><td><p>Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36</p></td><td><p>3-6m daily requests</p></td></tr></table><p>Both their declared and undeclared crawlers were attempting to access the content for scraping, contrary to the web crawling norms outlined in RFC <a href="https://datatracker.ietf.org/doc/html/rfc9309"><u>9309</u></a>.</p><p>This undeclared crawler utilized multiple IPs not listed in <a href="https://docs.perplexity.ai/guides/bots"><u>Perplexity’s official IP range</u></a>, and would rotate through these IPs in response to the restrictive robots.txt policy and blocks from Cloudflare. In addition to rotating IPs, we observed requests coming from different ASNs in an attempt to further evade website blocks. This activity was observed across tens of thousands of domains and millions of requests per day. We were able to fingerprint this crawler using a combination of <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning</a> and network signals.</p><p>An example: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UKtFs1UPddDh9OCtMuwzC/bcdabf5fdd9b0d029581b14a90714d91/unnamed.png" />
          </figure><p>Of note: when the stealth crawler was successfully blocked, we observed that Perplexity uses other data sources — including other websites — to try to create an answer. However, these answers were less specific and lacked details from the original content, reflecting the fact that the block had been successful. </p>
    <div>
      <h2>How well-meaning bot operators respect website preferences</h2>
      <a href="#how-well-meaning-bot-operators-respect-website-preferences">
        
      </a>
    </div>
    <p>In contrast to the behavior described above, the Internet has expressed clear preferences on how good crawlers should behave. All well-intentioned crawlers acting in good faith should:</p><p><b>Be transparent</b>. Identify themselves honestly, using a unique user-agent, a declared list of IP ranges or <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>Web Bot Auth</u></a> integration, and provide contact information if something goes wrong.</p><p><b>Be well-behaved netizens</b>. Don’t flood sites with excessive traffic, <a href="https://www.cloudflare.com/learning/bots/what-is-data-scraping/"><u>scrape</u></a> sensitive data, or use stealth tactics to try to dodge detection.</p><p><b>Serve a clear purpose</b>. Whether it’s powering a voice assistant, checking product prices, or making a website more accessible, every bot has a reason to be there. The purpose should be clearly and precisely defined and easy for site owners to look up publicly.</p><p><b>Separate bots for separate activities</b>. Perform each activity from a unique bot. This makes it easy for site owners to decide which activities they want to allow. Don’t force site owners to make an all-or-nothing decision. </p><p><b>Follow the rules</b>. That means checking for and respecting website signals like <code>robots.txt</code>, staying within rate limits, and never bypassing security protections.</p><p>More details are outlined in our official <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/"><u>Verified Bots Policy Developer Docs</u></a>.</p><p>OpenAI is an example of a leading AI company that follows these best practices. They clearly <a href="https://platform.openai.com/docs/bots"><u>outline their crawlers</u></a> and give detailed explanations for each crawler’s purpose. They respect robots.txt and do not try to evade either a robots.txt directive or a network-level block. And <a href="https://openai.com/index/introducing-chatgpt-agent/"><u>ChatGPT Agent</u></a> is signing HTTP requests using the newly proposed open standard <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>Web Bot Auth</u></a>.</p><p>When we ran the same test as outlined above with ChatGPT, we found that ChatGPT-User fetched the robots.txt file and stopped crawling when it was disallowed. We did not observe follow-up crawls from any other user agents or third-party bots. When we removed the disallow directive from the robots.txt file, but presented ChatGPT with a block page, they again stopped crawling, and we saw no additional crawl attempts from other user agents. Both of these demonstrate the appropriate response to website owner preferences.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/HMJjS7DRmu4octZ99HX8K/753966a88476f80d7a981b1c135fd251/BLOG-2879_-_6.png" />
          </figure>
    <div>
      <h2>How can you protect yourself?</h2>
      <a href="#how-can-you-protect-yourself">
        
      </a>
    </div>
    <p>All the undeclared crawling activity that we observed from Perplexity’s hidden user agent was scored by our <a href="https://www.cloudflare.com/application-services/products/bot-management/">bot management system</a> as a bot and was unable to pass managed challenges. Any bot management customer who has an existing block rule in place is already protected. Customers who don’t want to block traffic can set up rules to <a href="https://developers.cloudflare.com/waf/custom-rules/use-cases/challenge-bad-bots/"><u>challenge requests</u></a>, giving real humans an opportunity to proceed. Customers with existing challenge rules are already protected. Lastly, we added signature matches for the stealth crawler into our <a href="https://developers.cloudflare.com/bots/concepts/bot/#ai-bots"><u>managed rule</u></a> that <a href="https://developers.cloudflare.com/bots/additional-configurations/block-ai-bots/"><u>blocks AI crawling activity</u></a>. This rule is available to all customers, including our free customers.</p>
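    <p>As a sketch, a challenge rule for Bot Management customers can be expressed in Cloudflare’s rules language along these lines (with the rule action set to Managed Challenge; the score threshold here is an illustrative assumption you would tune for your own traffic):</p>
            <pre><code>(cf.bot_management.score lt 30) and (not cf.bot_management.verified_bot)</code></pre>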
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>It's been just over a month since we announced <a href="https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/">Content Independence Day</a>, giving content creators and publishers more control over how their content is accessed. Today, over two and a half million websites have chosen to completely disallow AI training through our managed robots.txt feature or our <a href="https://developers.cloudflare.com/bots/concepts/bot/#ai-bots"><u>managed rule blocking AI Crawlers</u></a>. Every Cloudflare customer is now able to selectively decide which declared AI crawlers are able to access their content in accordance with their business objectives.</p><p>We expected a change in bot and crawler behavior based on these new features, and we expect that the techniques bot operators use to evade detection will continue to evolve. Once this post is live, the behavior we saw will almost certainly change, and the methods we use to stop them will keep evolving as well. </p><p>Cloudflare is actively working with technical and policy experts around the world, including through IETF efforts to standardize <a href="https://ietf-wg-aipref.github.io/drafts/draft-ietf-aipref-vocab.html?cf_target_id=_blank"><u>extensions to robots.txt</u></a>, to establish clear and measurable principles that well-meaning bot operators should abide by. We think this is an important next step in this quickly evolving space.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/25VWBDa33UWxDOtqEVEx5o/41eb4ddc262551b83179c1c23a9cb1e6/BLOG-2879_-_7.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Cloudforce One]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Generative AI]]></category>
            <guid isPermaLink="false">6XJtrSa1t6frcelkMGuYOV</guid>
            <dc:creator>Gabriel Corral</dc:creator>
            <dc:creator>Vaibhav Singhal</dc:creator>
            <dc:creator>Brian Mitchell</dc:creator>
            <dc:creator>Reid Tatoris</dc:creator>
        </item>
        <item>
            <title><![CDATA[Content Independence Day: no AI crawl without compensation!]]></title>
            <link>https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/</link>
            <pubDate>Tue, 01 Jul 2025 10:01:00 GMT</pubDate>
            <description><![CDATA[ It’s Content Independence Day: Cloudflare, along with a majority of the world's leading publishers and AI companies, is changing the default to block AI crawlers unless they pay creators for content. ]]></description>
            <content:encoded><![CDATA[ <p>Almost 30 years ago, two graduate students at Stanford University — Larry Page and Sergey Brin — began working on a research project they called Backrub. That, of course, was the project that resulted in Google. But also something more: it created the business model for the web.</p><p>The deal that Google made with content creators was simple: let us copy your content for search, and we'll send you traffic. You, as a content creator, could then derive value from that traffic in one of three ways: running ads against it, selling subscriptions for it, or just getting the pleasure of knowing that someone was consuming your stuff.</p><p>Google facilitated all of this. Search generated traffic. They acquired DoubleClick and built AdSense to help content creators serve ads. And they acquired Urchin to launch Google Analytics, letting you measure just who was viewing your content at any given moment in time.</p><p>For nearly thirty years, that relationship was what defined the web and allowed it to flourish.</p><p>But that relationship is changing. For the first time in more than a decade, the percentage of searches run on Google is <a href="https://searchengineland.com/google-search-market-share-drops-2024-450497"><u>declining</u></a>. What's taking its place? AI.</p><p>If you're like me, you've been amazed at the new AI systems that have launched over the last two years and find yourself turning to them to answer questions that, in the past, you would have taken to Google. While it's still early, it seems clear that the interface of the future of the web will look more like ChatGPT than a spartan search box and ten blue links.</p><p>Google itself has changed. While ten years ago they presented a list of links and said that success was getting you off their site as quickly as possible, today they've added an answer box and, more recently, AI Overviews, which answer users' questions without them having to leave Google.com. With the answer box, researchers have found that <a href="https://scrumdigital.com/blog/zero-click-search-trends-google-serp-analysis/"><u>75 percent</u></a> of mobile queries were answered without users leaving Google. With the more recent launch of AI Overviews it's even higher.</p><p>While Google’s users may like that, it's hurting content creators. Google still copies creators’ content, but over the last 10 years, because of the changes to the UI of “search”, it's gotten almost 10 times more difficult for a content creator to get the same volume of traffic. That means it's 10 times more difficult to generate value from ads, subscriptions, or the ego of knowing someone cares about what you created.</p><p>And that's the good news. It’s even worse with <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/#how-does-this-measurement-work"><u>today’s AI tools</u></a>. With OpenAI, it's 750 times more difficult to get traffic than it was with the Google of old. With Anthropic, it's 30,000 times more difficult. The reason is simple: increasingly we aren't consuming originals, we're consuming derivatives.</p><p>The problem is that whether you create content to sell ads, sell subscriptions, or just to know that people value what you've created, an AI-driven web doesn't reward content creators the way that the old search-driven web did. And that means the deal that Google made to take content in exchange for sending you traffic just doesn't make sense anymore.</p><p>Instead of being a fair trade, the web is being strip-mined by AI crawlers, with content creators seeing almost no traffic and therefore almost no value.</p><p>That changes today, July 1, which we’re calling Content Independence Day. Cloudflare, along with a majority of the world's leading publishers and AI companies, is changing the default to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block AI crawlers</a> unless they pay creators for their content. That content is the fuel that powers AI engines, and so it's only fair that content creators are compensated directly for it.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GFFa6knU0nKGjhJVh8Ar8/8a1b4c0661146596cc844cdd9dd900ea/BLOG-2860_2.png" />
          </figure><p>But that's just the beginning. Next, we'll work on a marketplace where content creators and AI companies, large and small, can come together. Traffic was always a poor proxy for value. We think we can do better. Let me explain.</p><p>Imagine an AI engine as a block of Swiss cheese. New, original content that fills one of the holes in the AI engine’s block of cheese is more valuable than the repetitive, low-value content that unfortunately dominates much of the web today.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vUAgbW7FzzHSKA8tB8f8c/ea78e7cb4858602a32a91523800b882c/BLOG-2860_3.png" />
          </figure><p>We believe that if we can begin to score and value content not on how much traffic it generates, but on how much it furthers knowledge — measured by how much it fills the current holes in AI engines’ “Swiss cheese” — we will not only help AI engines get better faster, but also potentially facilitate a new golden age of high-value content creation.</p><p>We don’t know all the answers yet, but we’re working with some of the leading economists and computer scientists to figure them out.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VNIoN0740jhfO8lu6XDpJ/98829d238884cde3bcd345779a15df89/BLOG-2860_4.png" />
          </figure><p>The web is changing. Its business model will change. And, in the process, we have an opportunity to learn from what was great about the web of the last 30 years and what we can make better for the web of the future.</p><p>Cloudflare's mission is to help build a better Internet. I'm proud of the role we're playing in doing exactly that as the web evolves. And I’m proud that we’re helping content creators stick up and demand value for the content they worked hard to create.</p><p>Happy Content Independence Day!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Xme0Af7HqeJpdQbapzApG/6ff9ea29b7506e10867ed9c7ac5a2280/BLOG-2860_5.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">1pmK0OnvzPIip01yjWXj0x</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing pay per crawl: Enabling content owners to charge AI crawlers for access]]></title>
            <link>https://blog.cloudflare.com/introducing-pay-per-crawl/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ Pay per crawl is a new feature to allow content creators to charge AI crawlers for access to their content.  ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>A changing landscape of consumption </h2>
      <a href="#a-changing-landscape-of-consumption">
        
      </a>
    </div>
    <p>Many publishers, content creators and website owners currently feel like they have a binary choice — either leave the front door wide open for AI to consume everything they create, or create their own walled garden. But what if there was another way?</p><p>At Cloudflare, we started from a simple principle: we wanted content creators to have control over who accesses their work. If a creator wants to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI crawlers</a> from their content, they should be able to do so. If a creator wants to allow some or all AI crawlers full access to their content for free, they should be able to do that, too. Creators should be in the driver’s seat.</p><p>After hundreds of conversations with news organizations, publishers, and large-scale social media platforms, we heard a consistent desire for a third path: They’d like to allow AI crawlers to access their content, but they’d like to get compensated. Currently, that requires knowing the right individual and striking a one-off deal, which is an insurmountable challenge if you don’t have scale and leverage. </p>
    <div>
      <h2>What if I could charge a crawler? </h2>
      <a href="#what-if-i-could-charge-a-crawler">
        
      </a>
    </div>
    <p>We believe your choice need not be binary — there should be a third, more nuanced option: <b>You can charge for access.</b> Instead of a blanket block or uncompensated open access, we want to empower content owners to monetize their content at Internet scale.</p><p>We’re excited to help dust off a mostly forgotten piece of the web: <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/402"><b><u>HTTP response code 402</u></b></a>.</p>
    <div>
      <h2>Introducing pay per crawl</h2>
      <a href="#introducing-pay-per-crawl">
        
      </a>
    </div>
    <p><a href="http://www.cloudflare.com/paypercrawl-signup/">Pay per crawl</a>, in private beta, is our first experiment in this area. </p><p>Pay per crawl integrates with existing web infrastructure, leveraging <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP status codes</a> and established authentication mechanisms to create a framework for paid content access. </p><p>Each time an AI crawler requests content, they either present payment intent via request headers for successful access (<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/200"><u>HTTP response code 200</u></a>), or receive a <code>402 Payment Required</code> response with pricing. Cloudflare acts as the Merchant of Record for pay per crawl and also provides the underlying technical infrastructure.</p>
    <div>
      <h3>Publisher controls and pricing</h3>
      <a href="#publisher-controls-and-pricing">
        
      </a>
    </div>
    <p>Pay per crawl grants domain owners full control over their monetization strategy. They can define a flat, per-request price across their entire site. Publishers will then have three distinct options for a crawler:</p><ul><li><p><b>Allow:</b> Grant the crawler free access to content.</p></li><li><p><b>Charge:</b> Require payment at the configured, domain-wide price.</p></li><li><p><b>Block:</b> Deny access entirely, with no option to pay.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PhxxI7f3Teb521mPRFQUL/1ecfd01f60f165b35c27ab9457f8b152/image3.png" />
          </figure><p>An important mechanism here is that even if a crawler doesn’t have a billing relationship with Cloudflare, and thus couldn’t be charged for access, a publisher can still choose to ‘charge’ them. This is the functional equivalent of a network level block (an HTTP <code>403 Forbidden</code> response where no content is returned) — but with the added benefit of telling the crawler there could be a relationship in the future. </p><p>While publishers currently can define a flat price across their entire site, they retain the flexibility to bypass charges for specific crawlers as needed. This is particularly helpful if you want to allow a certain crawler through for free, or if you want to negotiate and execute a content partnership outside the pay per crawl feature. </p><p>To ensure integration with each publisher’s existing security posture, Cloudflare enforces Allow or Charge decisions via a rules engine that operates only after existing WAF policies and <a href="https://www.cloudflare.com/learning/bots/what-is-bot-management/">bot management</a> or bot blocking features have been applied.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NI9GUkR8RmmApQyOgb1mI/4f77c199ccdc5ebc166204cdaec72c48/image2.png" />
          </figure>
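    <p>To make the three options concrete, below is a minimal TypeScript sketch of how such a per-crawler decision could be modeled. The type and function names are ours, for illustration only; this is not Cloudflare’s actual API.</p>
            <pre><code>// Hypothetical types and names: a sketch, not Cloudflare's actual API.
type CrawlerAction = "allow" | "charge" | "block";

interface SiteConfig {
  flatPriceUSD: number;                   // domain-wide, per-request price
  perCrawler: Map<string, CrawlerAction>; // per-crawler overrides
}

// What a verified crawler receives, given the publisher's configuration,
// whether it presented payment intent, and whether it can be billed.
function resolveAccess(
  config: SiteConfig,
  crawlerId: string,
  presentsPaymentIntent: boolean,
  hasBillingRelationship: boolean
): { status: 200 | 402 | 403; priceUSD?: number } {
  const action = config.perCrawler.get(crawlerId) ?? "charge";
  if (action === "allow") return { status: 200 };
  if (action === "block") return { status: 403 };
  // "charge": without payment intent and a billing relationship, this is
  // the functional equivalent of a block, but the 402 response signals
  // that a paid relationship is possible in the future.
  if (presentsPaymentIntent && hasBillingRelationship) {
    return { status: 200, priceUSD: config.flatPriceUSD };
  }
  return { status: 402, priceUSD: config.flatPriceUSD };
}</code></pre>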
    <div>
      <h3>Payment headers and access</h3>
      <a href="#payment-headers-and-access">
        
      </a>
    </div>
    <p>As we were building the system, we knew we had to solve an incredibly important technical challenge: ensuring we could charge a specific crawler while preventing anyone from spoofing it. Thankfully, there’s a way to do this using <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>Web Bot Auth</u></a> proposals.</p><p>For crawlers, <a href="https://blog.cloudflare.com/web-bot-auth/"><u>this involves:</u></a></p><ul><li><p>Generating an Ed25519 key pair and making the <a href="https://datatracker.ietf.org/doc/html/rfc7517"><u>JWK</u></a>-formatted public key available in a hosted directory.</p></li><li><p>Registering with Cloudflare to provide the URL of your key directory and user agent information.</p></li><li><p>Configuring your crawler to use <a href="https://datatracker.ietf.org/doc/rfc9421/"><u>HTTP Message Signatures</u></a> with each request.</p></li></ul><p>Once registration is accepted, crawler requests should always include <code>signature-agent</code>, <code>signature-input</code>, and <code>signature</code> headers to identify your crawler and discover paid resources.</p>
            <pre><code>GET /example.html
Signature-Agent: "https://signature-agent.example.com"
Signature-Input: sig2=("@authority" "signature-agent")
 ;created=1735689600
 ;keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U"
 ;alg="ed25519"
 ;expires=1735693200
;nonce="e8N7S2MFd/qrd6T2R3tdfAuuANngKI7LFtKYI/vowzk4lAZYadIX6wW25MwG7DCT9RUKAJ0qVkU0mEeLElW1qg=="
 ;tag="web-bot-auth"
Signature: sig2=:jdq0SqOwHdyHr9+r5jw3iYZH6aNGKijYp/EstF4RQTQdi5N5YYKrD+mCT1HA1nZDsi6nJKuHxUi/5Syp3rLWBA==:</code></pre>
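            <p>The first step in that list, generating the Ed25519 key pair and its JWK form, can be done with Node’s built-in crypto module. Here is a minimal TypeScript sketch, assuming Node 16 or later; how you serve the resulting key set from your hosted directory is up to you:</p>
            <pre><code>// Minimal sketch using Node's built-in crypto (no external dependencies).
import { generateKeyPairSync } from "node:crypto";

// 1. Generate an Ed25519 key pair.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// 2. Export the public key as a JWK (kty "OKP", crv "Ed25519").
//    This is the object to publish in your hosted key directory.
const jwk = publicKey.export({ format: "jwk" });
console.log(JSON.stringify({ keys: [jwk] }, null, 2));

// 3. Keep the private key secret; it signs every outgoing request.
const pem = privateKey.export({ type: "pkcs8", format: "pem" });</code></pre>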
            
    <div>
      <h3>Accessing paid content</h3>
      <a href="#accessing-paid-content">
        
      </a>
    </div>
    <p>Once a crawler is set up, whether content requires payment can be determined via one of two flows:</p>
    <div>
      <h4>Reactive (discovery-first)</h4>
      <a href="#reactive-discovery-first">
        
      </a>
    </div>
    <p>Should a crawler request a paid URL, Cloudflare returns an <code>HTTP 402 Payment Required</code> response, accompanied by a <code>crawler-price</code> header. This signals that payment is required for the requested resource.</p>
            <pre><code>HTTP 402 Payment Required
crawler-price: USD XX.XX</code></pre>
            <p> The crawler can then decide to retry the request, this time including a <code>crawler-exact-price</code> header to indicate agreement to pay the configured price.</p>
            <pre><code>GET /example.html
crawler-exact-price: USD XX.XX </code></pre>
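            <p>Putting this discovery-first flow together on the crawler side might look like the following sketch. The header names come from above; the fetch logic and budget policy are our own, and the Web Bot Auth signature headers are assumed to be attached elsewhere:</p>
            <pre><code>// Illustrative discovery-first flow: probe, read the price, then retry.
async function fetchPaidResource(url: string, budgetUSD: number): Promise<Response> {
  const probe = await fetch(url); // assumed to carry Web Bot Auth headers
  if (probe.status !== 402) return probe; // free, blocked, or error

  // Parse "USD XX.XX" from the crawler-price header.
  const priceHeader = probe.headers.get("crawler-price") ?? "";
  const price = Number(priceHeader.replace("USD", "").trim());
  if (!Number.isFinite(price) || price > budgetUSD) {
    throw new Error(`price "${priceHeader}" exceeds budget`);
  }

  // Retry, agreeing to pay the configured price.
  return fetch(url, {
    headers: { "crawler-exact-price": `USD ${price.toFixed(2)}` },
  });
}</code></pre>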
            
    <div>
      <h4>Proactive (intent-first)</h4>
      <a href="#proactive-intent-first">
        
      </a>
    </div>
    <p>Alternatively, a crawler can preemptively include a <code>crawler-max-price</code> header in its initial request.</p>
            <pre><code>GET /example.html
crawler-max-price: USD XX.XX</code></pre>
            <p>If the price configured for a resource is equal to or below this specified limit, the request proceeds, and the content is served with a successful <code>HTTP 200 OK</code> response, confirming the charge:</p>
            <pre><code>HTTP 200 OK
crawler-charged: USD XX.XX 
server: cloudflare</code></pre>
            <p>If the amount in a <code>crawler-max-price</code> request is greater than the content owner’s configured price, only the configured price is charged. However, if the resource’s configured price exceeds the maximum price offered by the crawler, an <code>HTTP 402 Payment Required</code> response is returned, indicating the specified cost. Only a single price declaration header, <code>crawler-exact-price</code> or <code>crawler-max-price</code>, may be used per request.</p><p>The <code>crawler-exact-price</code> or <code>crawler-max-price</code> headers explicitly declare the crawler's willingness to pay. If all checks pass, the content is served, and the crawl event is logged. If any aspect of the request is invalid, the edge returns an <code>HTTP 402 Payment Required</code> response.</p>
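            <p>These pricing rules reduce to a small comparison. Here is a sketch of how the edge might resolve a proactive offer (the names are ours, not Cloudflare’s):</p>
            <pre><code>// Sketch: resolving a proactive crawler-max-price offer at the edge.
type Outcome =
  | { status: 200; chargedUSD: number } // offer accepted
  | { status: 402; priceUSD: number };  // offer too low

function resolveMaxPriceOffer(configuredUSD: number, maxOfferUSD: number): Outcome {
  // The crawler is only ever charged the publisher's configured price,
  // even when its stated maximum is higher.
  if (maxOfferUSD >= configuredUSD) {
    return { status: 200, chargedUSD: configuredUSD };
  }
  // Otherwise respond 402 and disclose the configured price.
  return { status: 402, priceUSD: configuredUSD };
}</code></pre>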
    <div>
      <h3>Financial settlement</h3>
      <a href="#financial-settlement">
        
      </a>
    </div>
    <p>Crawler operators and content owners must configure pay per crawl payment details in their Cloudflare account. Billing events are recorded each time a crawler makes an authenticated request with payment intent and receives an HTTP 200-level response with a <code>crawler-charged</code> header. Cloudflare then aggregates all the events, charges the crawler, and distributes the earnings to the publisher.</p>
    <div>
      <h2>Content for crawlers today, agents tomorrow </h2>
      <a href="#content-for-crawlers-today-agents-tomorrow">
        
      </a>
    </div>
    <p>At its core, pay per crawl begins a technical shift in how content is controlled online. By providing creators with a robust, programmatic mechanism for valuing and controlling their digital assets, we empower them to continue creating the rich, diverse content that makes the Internet invaluable. </p><p>We expect pay per crawl to evolve significantly. It’s very early: we believe many different types of interactions and marketplaces can and should develop simultaneously. We are excited to support these various efforts and open standards.</p><p>For example, a publisher or news organization might want to charge different rates for different paths or content types. How do you introduce dynamic pricing based not only on demand, but also on how many users your AI application has? How do you introduce granular licenses at Internet scale, whether for training, <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/">inference</a>, search, or something entirely new?</p><p>The true potential of pay per crawl may emerge in an <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">agentic</a> world. What if an agentic paywall could operate entirely programmatically? Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho — and then giving that agent a budget to spend to acquire the best and most relevant content. By anchoring our first solution on <b>HTTP response code 402</b>, we enable a future where intelligent agents can programmatically negotiate access to digital resources. </p>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Pay per crawl is currently in private beta. We’d love to hear from you if you’re either a crawler interested in paying to access content or a content creator interested in charging for access. You can reach out to us at <a href="http://www.cloudflare.com/paypercrawl-signup/"><u>http://www.cloudflare.com/paypercrawl-signup/</u></a> or contact your Account Executive if you’re an existing Enterprise customer.</p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">7AJ8tUOFDvk5mCTrDjBPDq</guid>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Simon Newton</dc:creator>
        </item>
        <item>
            <title><![CDATA[From Googlebot to GPTBot: who’s crawling your site in 2025]]></title>
            <link>https://blog.cloudflare.com/from-googlebot-to-gptbot-whos-crawling-your-site-in-2025/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ From May 2024 to May 2025, crawler traffic rose 18%, with GPTBot growing 305% and Googlebot 96%. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/"><u>Web crawlers</u></a> are not new. The <a href="https://en.wikipedia.org/wiki/World_Wide_Web_Wanderer"><u>World Wide Web Wanderer</u></a> debuted in 1993, though the first web search engines to truly use crawlers and indexers were <a href="https://en.wikipedia.org/wiki/JumpStation"><u>JumpStation</u></a> and <a href="https://en.wikipedia.org/wiki/WebCrawler"><u>WebCrawler</u></a>. Crawlers are part of one of the backbones of the Internet’s success: search. Their main purpose has been to index the content of websites across the Internet so that those websites can appear in search engine results and direct users appropriately. In this blog post, we’re analyzing recent trends in web crawling, which now has a crucial and complex new role with the rise of AI.</p><p>Not all crawlers are the same. Bots, automated scripts that perform tasks across the Internet, come in many forms: those considered non-threatening or “<a href="https://www.cloudflare.com/learning/bots/how-to-manage-good-bots/"><u>good</u></a>” (such as API clients, search indexing bots like Googlebot, or health checkers) and those considered malicious or “<a href="https://www.cloudflare.com/learning/bots/how-to-manage-good-bots/"><u>bad</u></a>” (like those used for credential stuffing, spam, or <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scraping content without permission</a>). In fact, around 30% of global web traffic today, according to <a href="https://radar.cloudflare.com/traffic?dateRange=52w#bot-vs-human"><u>Cloudflare Radar data</u></a>, comes from bots, and even exceeds human Internet traffic in some locations.</p><p>A new category, AI crawlers, has emerged in recent years. These bots collect data from across the web to train AI models, improving tools and experiences, but also <a href="https://en.wikipedia.org/wiki/Artificial_intelligence_and_copyright"><u>raising issues around content rights</u></a>, unauthorized use, and infrastructure overload. We aimed to confirm the growth of both search and AI crawlers, examine specific AI crawlers, and understand broader crawler usage.</p><p>This is increasingly relevant with the rapid adoption of AI, growing content rights concerns, and data privacy discussions. Some sites and creators are looking to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">limit or block AI crawlers</a> using tools like <code>robots.txt</code> or <a href="https://blog.cloudflare.com/bringing-ai-to-cloudflare/#enabling-dynamic-updates-for-the-ai-bot-rule"><u>firewall rules</u></a>. Others, like Dutch indie maker and entrepreneur <a href="https://x.com/levelsio/status/1916626339924267319"><u>Pieter Levels</u></a>, have embraced them: “<i>I’m 100% fine with AI crawlers… very important to rank in LLMs [large language models]</i>”.</p><p>It’s important to note that crawlers serve different purposes. For example, the <code>facebookexternalhit</code> bot is not included in this analysis, as it is used by Facebook to fetch page content when generating previews for shared links. However, within this post, we are only focusing on AI and search crawlers that are indexing or scraping website content.</p>
    <div>
      <h2>AI-only crawlers perspective</h2>
      <a href="#ai-only-crawlers-perspective">
        
      </a>
    </div>
    <p>Let’s start with an AI-only crawler perspective that we currently have on <a href="https://radar.cloudflare.com/explorer?dataSet=ai.bots&amp;dt=12w"><u>Cloudflare Radar</u></a>, focused only on crawlers advertised as AI-related. To identify them, we’re using a <a href="https://github.com/ai-robots-txt/ai.robots.txt/blob/main/robots.json"><u>list</u></a> derived from an open-source project that helps website owners manage and control access to AI crawlers — especially those used to train large language models (LLMs). It also provides guidance on what to include in <code>robots.txt</code> files (more on that below). The data shown below is based on matching those crawler names with user-agent strings in HTTP requests. (Further details about this method, including one exception, can be found at the end of the blog post.)</p><p>The AI crawler landscape saw a significant shift between May 2024 and May 2025, with <code>GPTBot</code> (from OpenAI) emerging as the dominant force, surging from 5% to 30% share, and <code>Meta-ExternalAgent</code> (from Meta) making a strong new entry at 19%. This growth came at the expense of former leader <code>Bytespider</code>, which plummeted from 42% to 7%, as well as other AI crawlers like <code>ClaudeBot</code> and <code>Amazonbot</code>, which also saw declines. Our data clearly indicates a reordering of top AI crawlers, highlighting the increasing prominence of OpenAI and Meta in this category.</p><p><b>May 2024</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3W6ZVHbwe8r5R5pYrZE7Aw/20a6ef0f77c015ae932848861c04b556/image6.png" />
          </figure><p><b>May 2025</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5joaVYfpzHZe7K8VEfCZCV/729f22a39f51d54b80cae35dd38e42b4/image3.png" />
          </figure><table><tr><td><p><b>Rank</b></p></td><td><p><b>Bot Name</b></p></td><td><p><b>Share (May 2024)</b></p></td><td><p><b>Rank</b></p></td><td><p><b>Bot Name</b></p></td><td><p><b>Share (May 2025)</b></p></td></tr><tr><td><p>1</p></td><td><p>Bytespider</p></td><td><p>42%</p></td><td><p>1</p></td><td><p>GPTBot</p></td><td><p>30%</p></td></tr><tr><td><p>2</p></td><td><p>ClaudeBot</p></td><td><p>27%</p></td><td><p>2</p></td><td><p>ClaudeBot</p></td><td><p>21%</p></td></tr><tr><td><p>3</p></td><td><p>Amazonbot</p></td><td><p>21%</p></td><td><p>3</p></td><td><p>Meta-ExternalAgent</p></td><td><p>19%</p></td></tr><tr><td><p>4</p></td><td><p>GPTBot</p></td><td><p>5%</p></td><td><p>4</p></td><td><p>Amazonbot</p></td><td><p>11%</p></td></tr><tr><td><p>5</p></td><td><p>Applebot</p></td><td><p>4.1%</p></td><td><p>5</p></td><td><p>Bytespider</p></td><td><p>7.2%</p></td></tr></table><p>For additional context, the list below includes further information about the bots with the highest crawling shares seen above. This information comes from the same open-source <a href="https://github.com/ai-robots-txt/ai.robots.txt/blob/main/robots.json"><u>list</u></a> mentioned above and from publications by companies like <a href="https://platform.openai.com/docs/bots"><u>OpenAI</u></a>, which explain how their crawlers are used. </p><ul><li><p><b>GPTBot</b> – OpenAI’s crawler used to improve and train large language models like ChatGPT.</p></li><li><p><b>ClaudeBot</b> – Anthropic’s crawler for training and updating the Claude AI assistant.</p></li><li><p><b>Meta-ExternalAgent</b> – Meta’s bot likely used for collecting data to train or fine-tune LLMs.</p></li><li><p><b>Amazonbot</b> – Amazon’s crawler that gathers data for its search and AI applications.</p></li><li><p><b>Bytespider</b> – ByteDance’s AI data collector, often linked to training models like Ernie or TikTok-related AI.</p></li><li><p><b>Applebot</b> – Apple’s web crawler primarily for Siri and Spotlight search, possibly used in AI development.</p></li><li><p><b>OAI-SearchBot</b> – OpenAI’s search-focused crawler, likely used for retrieving real-time web info for models.</p></li><li><p><b>ChatGPT-User</b> – Represents API-based or browser usage of ChatGPT in connection with user interactions.</p></li><li><p><b>PerplexityBot</b> – Crawler from Perplexity.ai, which powers their AI answer engine using real-time web data.</p></li></ul><p>Webmasters can inform crawler operators whether they want these bots and crawlers to access their content by setting out rules in a file called <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><code><u>robots.txt</u></code></a>, which tells crawlers what pages they should or shouldn’t access. <a href="https://blog.cloudflare.com/ai-audit-enforcing-robots-txt/"><u>As we’ve seen recently</u></a>, crawlers honoring your <code>robots.txt</code> policies is voluntary, but Cloudflare announced tools like <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>AI Audit</u></a> to help content creators enforce it.</p><p>Now, as we’ve seen, the landscape of web crawling is evolving rapidly, driven by the merging roles of search engines and AI. AI is now deeply integrated into search, seen in Google’s AI Overviews and AI Mode, but also in social media platforms, like Meta AI on Instagram. So, let's broaden our analysis to include these wider AI-driven crawling activities.</p>
    <div>
      <h2>General AI and search crawling growth: +18%</h2>
      <a href="#general-ai-and-search-crawling-growth-18">
        
      </a>
    </div>
    <p>A broader view reveals the growth of crawling traffic from both search and AI crawlers over the first few months of 2025. To remove customer growth bias, we'll analyze trends using a fixed set of customers from specific weeks (a method we’ve used in our <a href="http://radar.cloudflare.com/year-in-review/"><u>Cloudflare Radar Year in Review</u></a>): the first week of May 2024, a week in November 2024, and the first week of April 2025. </p><p>Using that method, we found that AI and search crawler traffic grew by 18% from May 2024 to May 2025 (comparing full-month periods). The increase was even higher, at 48%, when including new Cloudflare customers added during that time. Peak AI and search crawling traffic occurred in April 2025, with a 32% increase compared to May 2024. This confirms that crawling traffic has clearly risen over the past year, but also that growth is not always constant. Google remains the dominant player, and its share is growing too, as we’ll see in the next section.</p><p>As the next chart shows, crawling traffic increased sharply in March and April 2025 and remained high, though slightly lower, in May.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/hePknXM0crXK4jX5e7LxZ/0956ac5024915734a9c0f20c8f15bc16/image4.png" />
          </figure><p>The crawling chart above also seems to reflect broader seasonal rhythms and general human Internet traffic patterns. In 2024, traffic dropped during the summer in the Northern Hemisphere, with August and September being the least active months. And like overall Internet traffic, it then rose in November, when people are typically more online due to shopping and seasonal habits, as we've seen in <a href="https://blog.cloudflare.com/from-deals-to-ddos-exploring-cyber-week-2024-internet-trends/"><u>past analyses</u></a>. </p>
    <div>
      <h2>Googlebot crawling grew 96% in one year</h2>
      <a href="#googlebot-crawling-grew-96-in-one-year">
        
      </a>
    </div>
    <p><a href="https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers"><code><u>Googlebot</u></code></a>, which indexes content for Google Search, was clearly the top crawler throughout the period and showed strong growth, up 96% from May 2024 to May 2025, reflecting increased crawling by Google. Crawling traffic peaked in April 2025, reaching 145% higher than in May 2024. It's also important to mention that Google made changes to its search and launched <a href="https://ahrefs.com/blog/google-ai-overviews/"><u>AI Overviews</u></a> in its search engine during this time — first in the US in May 2024, then in more countries later.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1qFVGagpgYIti7p741j8uW/77dc4bc61bec86faa6b80b293997dffd/image1.png" />
          </figure><p>Two trends stand out when looking at daily data for Google-related crawlers, as shown in the graph below. First, <a href="https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers"><code><u>Googlebot</u></code></a> and the more recent <code>GoogleOther</code> (a <a href="https://searchengineland.com/google-launches-new-googlebot-named-googleother-395827"><u>web crawler from 2023</u></a> for “research and development”) account for most of Google’s crawling activity. Second, there were two visible drops in crawling traffic: one on December 14, 2024 (around a Google Search <a href="https://status.search.google.com/incidents/V9nDKuo6nWKh2ThBALgA#:~:text=Incident%20began%20at%202024%2D12,Time"><u>update</u></a>), and another from May 20 to May 28, 2025. That May 20 drop occurred around the same time as the rollout of AI Mode on Google Search in the US, although the timing may be coincidental.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16kB3kDeprY3LMetEDPS10/8f2bafc7568579377624d6c0aaeb1751/image5.png" />
          </figure>
    <div>
      <h2>Breakdown of top 20 AI and search web crawlers </h2>
      <a href="#breakdown-of-top-20-ai-and-search-web-crawlers">
        
      </a>
    </div>
    <p>Ranking crawlers by their share of total requests gives a clearer picture of which bots are gaining or losing ground, especially among those focused on search and AI. The table below shows a clear trend: some AI bots have grown rapidly since last year (with growth beginning even earlier), while many traditional search crawlers have remained flat or lost share (as in the case of Bing and its <code>Bingbot</code> crawler). The main exception is <code>Googlebot</code>.</p><p>The table shows the percentage share of each crawler out of all crawling traffic generated by this specific cohort of over 30 AI &amp; search crawlers observed by Cloudflare in May 2024 and May 2025. It also includes the change in percentage points and the growth or decline in raw request volume. Crawlers are ranked by their share in May 2025. Key crawler shifts include <code>GPTBot</code> rising sharply (+305%), while <code>Bytespider</code> dropped dramatically (-85%).</p>
<div><table><thead>
  <tr>
    <th><span>Rank</span></th>
    <th><span>Bot name</span></th>
    <th><span>Share May 2024</span></th>
    <th><span>Share May 2025</span></th>
    <th><span>Δ percentage-point change</span></th>
    <th><span>Raw requests growth (May 2024 to May 2025)</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>1</span></td>
    <td><span>Googlebot</span></td>
    <td><span>30%</span></td>
    <td><span>50%</span></td>
    <td><span>+20 pp</span></td>
    <td><span>96%</span></td>
  </tr>
  <tr>
    <td><span>2</span></td>
    <td><span>Bingbot</span></td>
    <td><span>10%</span></td>
    <td><span>8.7%</span></td>
    <td><span>-1.3 pp</span></td>
    <td><span>2%</span></td>
  </tr>
  <tr>
    <td><span>3</span></td>
    <td><span>GPTBot</span></td>
    <td><span>2.2%</span></td>
    <td><span>7.7%</span></td>
    <td><span>+5.5 pp</span></td>
    <td><span>305%</span></td>
  </tr>
  <tr>
    <td><span>4</span></td>
    <td><span>ClaudeBot</span></td>
    <td><span>11.7%</span></td>
    <td><span>5.4%</span></td>
    <td><span>-6.3 pp</span></td>
    <td><span>-46%</span></td>
  </tr>
  <tr>
    <td><span>5</span></td>
    <td><span>GoogleOther</span></td>
    <td><span>4.4%</span></td>
    <td><span>4.3%</span></td>
    <td><span>-0.1 pp</span></td>
    <td><span>14%</span></td>
  </tr>
  <tr>
    <td><span>6</span></td>
    <td><span>Amazonbot</span></td>
    <td><span>7.6%</span></td>
    <td><span>4.2%</span></td>
    <td><span>-3.4 pp</span></td>
    <td><span>-35%</span></td>
  </tr>
  <tr>
    <td><span>7</span></td>
    <td><span>Googlebot-Image</span></td>
    <td><span>4.5%</span></td>
    <td><span>3.3%</span></td>
    <td><span>-1.2 pp</span></td>
    <td><span>-13%</span></td>
  </tr>
  <tr>
    <td><span>8</span></td>
    <td><span>Bytespider</span></td>
    <td><span>22.8%</span></td>
    <td><span>2.9%</span></td>
    <td><span>-19.8 pp</span></td>
    <td><span>-85%</span></td>
  </tr>
  <tr>
    <td><span>9</span></td>
    <td><span>Yandex</span></td>
    <td><span>2.8%</span></td>
    <td><span>2.2%</span></td>
    <td><span>-0.7 pp</span></td>
    <td><span>-10%</span></td>
  </tr>
  <tr>
    <td><span>10</span></td>
    <td><span>ChatGPT-User</span></td>
    <td><span>0.1%</span></td>
    <td><span>1.3%</span></td>
    <td><span>+1.2 pp</span></td>
    <td><span>2,825%</span></td>
  </tr>
  <tr>
    <td><span>11</span></td>
    <td><span>Applebot</span></td>
    <td><span>1.9%</span></td>
    <td><span>1.2%</span></td>
    <td><span>-0.7 pp</span></td>
    <td><span>-26%</span></td>
  </tr>
  <tr>
    <td><span>12</span></td>
    <td><span>Timpibot</span></td>
    <td><span>0.3%</span></td>
    <td><span>0.6%</span></td>
    <td><span>+0.3 pp</span></td>
    <td><span>133%</span></td>
  </tr>
  <tr>
    <td><span>13</span></td>
    <td><span>Baiduspider</span></td>
    <td><span>0.5%</span></td>
    <td><span>0.4%</span></td>
    <td><span>-0.1 pp</span></td>
    <td><span>7%</span></td>
  </tr>
  <tr>
    <td><span>14</span></td>
    <td><span>PerplexityBot</span></td>
    <td><span>&lt;0.01%</span></td>
    <td><span>0.2%</span></td>
    <td><span>+0.2 pp</span></td>
    <td><span>157,490%</span></td>
  </tr>
  <tr>
    <td><span>15</span></td>
    <td><span>DuckDuckBot</span></td>
    <td><span>0.2%</span></td>
    <td><span>0.1%</span></td>
    <td><span>-0.1 pp</span></td>
    <td><span>-16%</span></td>
  </tr>
  <tr>
    <td><span>16</span></td>
    <td><span>SeznamBot</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.1%</span></td>
    <td></td>
    <td><span>2%</span></td>
  </tr>
  <tr>
    <td><span>17</span></td>
    <td><span>Yeti</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.1%</span></td>
    <td></td>
    <td><span>47%</span></td>
  </tr>
  <tr>
    <td><span>18</span></td>
    <td><span>coccocbot</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.1%</span></td>
    <td></td>
    <td><span>-3%</span></td>
  </tr>
  <tr>
    <td><span>19</span></td>
    <td><span>Sogou</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.1%</span></td>
    <td></td>
    <td><span>-22%</span></td>
  </tr>
  <tr>
    <td><span>20</span></td>
    <td><span>Yahoo! Slurp</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.0%</span></td>
    <td><span>-0.1 pp</span></td>
    <td><span>-8%</span></td>
  </tr>
</tbody></table></div><p>Based on this data, two major shifts in web crawling occurred between May 2024 and May 2025:</p><p><b>1. Some AI crawlers rose sharply.
</b><code>GPTBot</code> (from OpenAI) increased its share from 2.2% to 7.7% (+5.5 pp), with a 305% rise in requests. This underscores the data demand for training large language models like ChatGPT. <code>GPTBot</code> jumped from #9 in May 2024 to #3 in May 2025.</p><p>Another OpenAI crawler, <code>ChatGPT-User</code>, saw requests surge by 2,825%, reaching a 1.3% share. This reflects a large rise in ChatGPT user activity or API-based interactions that involve accessing web content. <code>PerplexityBot</code> (from Perplexity.ai), despite a small 0.2% share, recorded the highest growth rate: a staggering 157,490% increase in raw requests.</p><p>Meanwhile, some AI crawlers saw steep declines. <code>ClaudeBot</code> (Anthropic) fell from 11.7% to 5.4% of total traffic and dropped 46% in requests. <code>Bytespider</code> plummeted 85% in request volume, falling from #2 to #8 in crawler share (now at just 2.9%).</p><p>Both <code>Amazonbot</code> and <code>Applebot</code>, also considered AI crawlers, saw decreases in share and in raw requests (–35% and –26%, respectively).</p><p><b>2. Google’s dominance expanded.
</b><code>Googlebot</code>’s share rose from 30% to 50%, supporting search indexing, but potentially also serving AI-related purposes (such as new AI Overviews in Google Search). And <code>GoogleOther</code> (the <a href="https://searchengineland.com/google-launches-new-googlebot-named-googleother-395827"><u>crawler introduced in 2023</u></a>) also increased its crawling traffic, by 14%. Other Google crawlers not in the top 20, like <code>Googlebot-News</code>, also grew significantly (+71% in requests). There’s a clear trend of growth in these Google-related web crawlers at a time when the company is investing heavily in combining AI with search.</p><p>Also in the search category, <code>Bingbot</code>’s share (from Microsoft) declined slightly from 10% to 8.7% (-1.3 pp), though its raw requests still grew modestly by 2%.</p><p>These trends show that web crawling is increasingly dominated by bots from Google and OpenAI, reflecting clear shifts over the course of a year. Google also appears to be adapting how it collects data to support both traditional search and AI-driven features.</p><p>Also worth noting is <code>FriendlyCrawler</code>, which no longer appears in the top 20 list as of May 2025 (now ranked #35). It was #14 in May 2024 with a 0.2% share, but saw a 100% drop in requests by May 2025. This bot is known to index and analyze website content, although its owner and <a href="https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler/"><u>purpose</u></a> remain unclear. Typically, crawlers like this are used for improving search results, market research, or analytics.</p>
    <div>
      <h2>robots.txt &amp; AI bots: GPTBot leads twice</h2>
      <a href="#robots-txt-ai-bots-gptbot-leads-twice">
        
      </a>
    </div>
    <p>Recent data from June 6, 2025, from <a href="https://radar.cloudflare.com/ai-insights?dateStart=2025-05-30&amp;dateEnd=2025-06-06"><u>Cloudflare Radar</u></a> shows that out of 3,816 domains (from the <a href="https://radar.cloudflare.com/domains"><u>top 10,000</u></a>) where we were able to find a <code>robots.txt</code> file, 546 (about 14%) had “allow” or “disallow” (fully or partially) directives targeting AI bots in particular.</p><p>This leaves many site owners in a gray area because it’s not always clear how effective <code>robots.txt</code> is in managing AI crawlers. Some site owners may not think to use it specifically for AI bots, while others might be unsure whether these bots even respect <code>robots.txt</code> rules, especially newer or less transparent crawlers. In other cases, sites use partial rules to fine-tune access, trying to balance visibility and protection without fully opting in or out.</p><p>The “disallow” rules appear far more often than “allow” rules. The most frequently blocked bot was <code>GPTBot</code>, disallowed by 312 domains (250 fully, 62 partially), followed by <code>CCBot</code> and <code>Google-Extended</code>, as shown in the following graph.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6CgnH5GZNCIgUAZEeMWTVK/fe608135d5376e936f0ac503e3e9564c/image2.png" />
          </figure><p>Although <code>GPTBot</code> was the most blocked, it was also the most explicitly allowed, with 61 domains granting access (18 fully, 43 partially). Still, very few sites openly and explicitly allow AI bots, and when they do, it’s usually for limited sections. Note that bots not listed in a site’s robots.txt are effectively allowed by default.</p><p>As AI crawling increases, more websites are moving from passive signals like <code>robots.txt</code> to active protections like <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/"><u>Web Application Firewalls</u></a>. The ecosystem is shifting, with a growing focus on enforceable controls.</p><p><i>Note: When we analyze crawler traffic, we compare user-agent tokens found in robots.txt files (like those for AI crawlers) with the actual user-agent strings in HTTP requests. It's important to note that some robots.txt tokens, such as Google-Extended, aren't user-agent substrings. As described in </i><a href="https://www.rfc-editor.org/rfc/rfc9309.html#name-the-user-agent-line"><i><u>RFC 9309</u></i></a><i>, one goal of these tokens may be to signal the purpose of the crawler. For instance, Google uses Google-Extended in robots.txt to see if your content can be used for AI training, but the traffic itself still comes from standard Google user-agents like Googlebot. Because of this, not every robots.txt entry will have a direct match in HTTP request logs.</i></p>
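    <p>For illustration, a <code>robots.txt</code> that fully disallows one AI crawler while partially allowing another might look like this (the <code>/blog/</code> path is a hypothetical example):</p>
            <pre><code>User-agent: GPTBot
Disallow: /

User-agent: CCBot
Allow: /blog/
Disallow: /</code></pre>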
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As AI crawlers reshape the Internet, websites face both new challenges and new opportunities in managing their online presence.</p><p>This analysis highlights the growing impact of AI on web crawling, showing a clear shift from traditional search indexing to data collection for training AI models. The detailed statistics, such as Googlebot’s continued growth and the rapid rise of AI-specific crawlers, offer context for understanding how this space is evolving and what it means for the future of web content access.</p><p>The trend toward stronger, enforceable blocking methods, something <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>Cloudflare has also invested in</u></a>, signals a key shift in how websites may control their interactions with AI systems going forward.</p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bots]]></category>
            <guid isPermaLink="false">7KJiiS1zdIyBiVgoT6SgKf</guid>
            <dc:creator>João Tomé</dc:creator>
            <dc:creator>Jorge Pacheco</dc:creator>
            <dc:creator>Carlos Azevedo</dc:creator>
        </item>
        <item>
            <title><![CDATA[Message Signatures are now part of our Verified Bots Program, simplifying bot authentication]]></title>
            <link>https://blog.cloudflare.com/verified-bots-with-cryptography/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ Bots can start authenticating to Cloudflare using public key cryptography, preventing them from being spoofed and allowing origins to have confidence in their identity. ]]></description>
            <content:encoded><![CDATA[ <p>As a site owner, how do you know which bots to allow on your site, and which you’d like to block? Existing identification methods rely on a combination of IP address range (which may be shared by other services, or change over time) and user-agent header (easily spoofable). These have limitations and deficiencies. In our <a href="https://blog.cloudflare.com/web-bot-auth/"><u>last blog post</u></a>, we proposed using HTTP Message Signatures: a way for developers of bots, agents, and crawlers to clearly identify themselves by cryptographically signing requests originating from their service. </p><p>Since we published the blog post on Message Signatures and the <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>IETF draft for Web Bot Auth</u></a> in May 2025, we’ve seen significant interest around implementing and deploying Message Signatures at scale. It’s clear that well-intentioned bot owners want a clear way to identify their bots to site owners, and site owners want a clear way to identify and manage bot traffic. Both parties seem to agree that deploying cryptography for the purposes of authentication is the right solution.     </p><p>Today, we’re announcing that we’re integrating HTTP Message Signatures directly into our <b>Verified Bots Program</b>. This announcement has two main parts: (1) for bots, crawlers, and agents, we’re simplifying enrollment into the Verified Bots program for those who sign requests using Message Signatures, and (2) we’re encouraging <i>all bot operators moving forward </i>to use Message Signatures over existing verification mechanisms. Because Verified Bots are considered authenticated, they do not face challenges from our Bot Management to identify as bots, given they’re already identified as such.</p><p>For site owners, no additional action is required – Cloudflare will automatically validate signatures on our edge, and if that validation is a success, that traffic will be marked as verified so that site owners can use the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/categories/"><u>verified bot fields</u></a> to create Bot Management and <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF rules</u></a> based on it.  </p><p>This isn't just about simplifying things for bot operators — it’s about giving website owners unparalleled accuracy in identifying trusted bot traffic, cutting down on the overhead for cryptographic verification, and fundamentally transforming how we manage authentication across the Cloudflare network.</p>
    <div>
      <h2>Become a Verified Bot with Message Signatures</h2>
      <a href="#become-a-verified-bot-with-message-signatures">
        
      </a>
    </div>
    <p>Cloudflare’s existing <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/"><u>Verified Bots program</u></a> is for bots that are transparent about who they are and what they do, like indexing sites for search or scanning for security vulnerabilities. You can see a list of these verified bots in <a href="https://radar.cloudflare.com/bots#verified-bots"><u>Cloudflare Radar</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2lMYno3QOwtwfTDDgeqFx8/c69088229dcf9fc08f5a76ce7e0a0354/1.png" />
          </figure><p><sup><i>A preview of the Verified Bots page on Cloudflare Radar. </i></sup></p><p>In the past, when you <a href="https://dash.cloudflare.com/?to=/:account/configurations/verified-bots"><u>applied</u></a> to be a verified bot, we asked for IP address ranges or reverse DNS names so that we could verify your identity. This required some manual steps like checking that the IP address range is valid and is associated with the appropriate <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASN</u></a>. </p><p>With the integration of Message Signatures, we’re aiming to streamline applications into our Verified Bot program. Bots applying with well-formed Message Signatures will be prioritized and approved more quickly! </p>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>In order to make generating Message Signatures as easy as possible, Cloudflare is providing two open source libraries: a <a href="https://crates.io/crates/web-bot-auth"><u>web-bot-auth library in Rust</u></a>, and a <a href="https://www.npmjs.com/package/web-bot-auth"><u>web-bot-auth npm package in TypeScript</u></a>. If you’re working on a different implementation, <a href="https://www.cloudflare.com/lp/verified-bots/"><u>let us know</u></a> – we’d love to add it to our <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>developer docs</u></a>!</p><p>At a high level, signing your requests with web bot auth consists of the following steps: </p><ul><li><p>Generate a valid signing key. See <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/#1-generate-a-valid-signing-key"><u>Signing Key section</u></a> for step-by-step instructions.</p></li><li><p>Host a JSON web key set containing your public key under <code>/.well-known/http-message-signature-directory</code> of your website.</p></li><li><p>Sign responses served from that directory URL using a Web Bot Auth library, with one signature for each key the directory contains, to prove you own the keys. See the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/#2-host-a-key-directory"><u>Hosting section</u></a> for step-by-step instructions.</p></li><li><p>Register that URL with us, using our Verified Bots form. This can be done directly in your Cloudflare account. See <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/overview/"><u>our documentation</u></a>.</p></li><li><p>Sign requests using a Web Bot Auth library. </p></li></ul><p>
As an example, <a href="https://radar.cloudflare.com/scan"><u>Cloudflare Radar's URL Scanner</u></a> lets you scan any URL and get a publicly shareable report with security, performance, technology, and network information. Here’s an example of what a well-formed signature looks like for requests coming from URL Scanner:</p>
            <pre><code>GET /path/to/resource HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
Signature-Agent: "https://web-bot-auth-directory.radar-cfdata-org.workers.dev"
Signature-Input: sig=("@authority" "signature-agent");\
             	 created=1700000000;\
             	 expires=1700011111;\
             	 keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U";\
             	 tag="web-bot-auth"
Signature: sig=:jdq0SqOwHdyHr9+r5jw3iYZH6aNGKijYp/EstF4RQTQdi5N5YYKrD+mCT1HA1nZDsi6nJKuHxUi/5Syp3rLWBA==:</code></pre>
            <p>Since we’ve already registered URL Scanner as a Verified Bot, Cloudflare will now automatically verify that the signature in the <code>Signature</code> header matches the request — more on that later.</p>
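            <p>Under the hood, RFC 9421 signing amounts to building a canonical “signature base” from the covered components and signing it with your private key. Below is a minimal TypeScript sketch using Node’s built-in crypto rather than the libraries above; it hard-codes the two covered components from the example, and the environment variable name is our own assumption:</p>
            <pre><code>import { sign, createPrivateKey } from "node:crypto";

// Build the RFC 9421 signature base for ("@authority" "signature-agent").
// Note: the Signature-Agent value includes its surrounding quotes.
function signatureBase(authority: string, agent: string, params: string): string {
  return [
    `"@authority": ${authority}`,
    `"signature-agent": ${agent}`,
    `"@signature-params": ${params}`,
  ].join("\n");
}

const params =
  '("@authority" "signature-agent");created=1700000000;expires=1700011111;' +
  'keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U";tag="web-bot-auth"';

const base = signatureBase(
  "www.example.com",
  '"https://web-bot-auth-directory.radar-cfdata-org.workers.dev"',
  params
);

// Ed25519: pass null as the algorithm so Node signs with pure Ed25519.
const privateKey = createPrivateKey(process.env.SIGNING_KEY_PEM!); // PKCS#8 PEM
const signature = sign(null, Buffer.from(base), privateKey).toString("base64");

console.log(`Signature-Input: sig=${params}`);
console.log(`Signature: sig=:${signature}:`);</code></pre>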
    <div>
      <h2>Register your bot</h2>
      <a href="#register-your-bot">
        
      </a>
    </div>
    <p>Access the <a href="https://dash.cloudflare.com/?to=/:account/configurations/verified-bots"><u>Verified Bots submission form</u></a> on your account. If that link does not immediately take you there, go to <i>your Cloudflare account</i> →  <i>Account Home</i>  → <i>the three dots next to your account name</i>  → <i>Configurations</i> → <i>Verified Bots.</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73yQcvLmiVDe19HJXYvBIc/ca2bdb2bb81addc29583568087c2ccc2/3.png" />
          </figure><p>If you do not have a Cloudflare account, you can <a href="https://dash.cloudflare.com/sign-up"><u>sign up for a free one</u></a>.</p><p>For the verification method, select "Request Signature", then enter the URL of your key directory in Validation Instructions. Specifying the User-Agent values is optional if you’re submitting a Request Signature bot. </p><p>Once your application has gone through our (now shortened) review process, you don’t need to take any further action.</p>
    <div>
      <h2>Message Signature verification for origins</h2>
      <a href="#message-signature-verification-for-origins">
        
      </a>
    </div>
    <p>Starting today, Cloudflare is ramping up verification of <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>cryptographic signatures provided by automated crawlers and bots</u></a>. This is currently available for all Free and Pro plans, and as we continue to test and validate at scale, will be released to all Business and Enterprise plans. This means that as time passes, the number of unauthenticated web crawlers should diminish, ensuring most bot traffic is authenticated before it reaches your website’s servers, helping to prevent spoofing attacks. </p><p>At a high level, signature verification works like this: </p><ol><li><p>A bot or agent sends a request to a website behind Cloudflare.</p></li><li><p>Cloudflare’s Message Signature verification service checks for the <code>Signature</code>, <code>Signature-Input</code>, and <code>Signature-Agent</code> headers.</p></li><li><p>It checks that the incoming request presents a <code>keyid</code> parameter in your Signature-Input that points to a key we already know.</p></li><li><p>It looks at the <code>expires</code> parameter in the incoming bot request. If the current time is after expiration, verification fails. This guards against replay attacks, preventing malicious agents from trying to pass as a bot by retrying messages they captured in the past.</p></li><li><p>It checks that you’ve specified a <code>tag</code> parameter set to <code>web-bot-auth</code>, indicating your intent that the message be handled using web bot authentication specifically.</p></li><li><p>It looks at all the <a href="https://www.rfc-editor.org/rfc/rfc9421#covered-components"><u>components</u></a> chosen in your <code>Signature-Input</code> header, and constructs <a href="https://www.rfc-editor.org/rfc/rfc9421#name-creating-the-signature-base"><u>a signature base</u></a> from it. </p></li><li><p>If all pre-flight checks pass, Cloudflare attempts to verify the signature base against the value in the <code>Signature</code> field using an <a href="https://www.rfc-editor.org/rfc/rfc9421#name-eddsa-using-curve-edwards25"><u>ed25519 verification algorithm</u></a> and the key supplied in <code>keyid</code>.</p></li><li><p>Verified Bots and other systems at Cloudflare use a successful verification as proof of your identity, and apply rules corresponding to that identity. </p></li></ol><p>If any of the above steps fail, Cloudflare falls back to existing bot identification and mitigation mechanisms. As the system matures, we will strengthen these requirements and limit the possibility of a soft downgrade.</p>
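    <p>In simplified form, the origin-side checks look something like the TypeScript sketch below, again using Node’s crypto. A real implementation parses the structured headers per RFC 9421 rather than using the regexes shown here, and key lookup against the registered directory is treated as a given:</p>
            <pre><code>import { verify, type KeyObject } from "node:crypto";

interface SignedHeaders {
  signatureAgent: string; // raw Signature-Agent header value
  signatureInput: string; // params after "sig=" in Signature-Input
  signatureB64: string;   // base64 between the colons in Signature
}

function verifyWebBotAuth(
  authority: string,
  h: SignedHeaders,
  lookupKey: (keyid: string) => KeyObject | undefined
): boolean {
  // Pre-flight checks: known key, correct tag, not expired.
  const keyid = /keyid="([^"]+)"/.exec(h.signatureInput)?.[1];
  const expires = Number(/expires=(\d+)/.exec(h.signatureInput)?.[1]);
  if (!keyid || !/tag="web-bot-auth"/.test(h.signatureInput)) return false;
  if (!Number.isFinite(expires) || Date.now() / 1000 > expires) return false;
  const key = lookupKey(keyid);
  if (!key) return false;

  // Rebuild the signature base and verify the Ed25519 signature.
  const base = [
    `"@authority": ${authority}`,
    `"signature-agent": ${h.signatureAgent}`,
    `"@signature-params": ${h.signatureInput}`,
  ].join("\n");
  return verify(null, Buffer.from(base), key, Buffer.from(h.signatureB64, "base64"));
}</code></pre>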
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/128Ox15wBqBPVKUUzvn4gA/acca9b9e6df243b8317b8964285ce57c/2.png" />
          </figure><p>As a site owner, you can segment your Verified Bot traffic by its type and purpose by adding the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/categories/"><u>Verified Bot Categories</u></a> field <code>cf.verified_bot_category</code> as a filter criterion in <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF Custom rules</u></a>, <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/"><u>Advanced Rate Limiting</u></a>, and <a href="https://developers.cloudflare.com/rules/transform/"><u>Transform Rules</u></a>. For instance, to allow the Bibliothèque nationale de France, the Library of Congress, and other institutions dedicated to academic research, you can add a rule that allows bots in the <code>Academic Research</code> category.</p>
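    <p>For example, a custom rule expression using that field might look like the following (pair it with a Skip or Allow action in the dashboard as appropriate):</p>
            <pre><code>(cf.verified_bot_category eq "Academic Research")</code></pre>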
    <div>
      <h2>Where we’re going next</h2>
      <a href="#where-were-going-next">
        
      </a>
    </div>
    <p>HTTP Message Signatures is a primitive that is useful beyond Cloudflare – the IETF standardized it as part of <a href="https://datatracker.ietf.org/doc/html/rfc9421"><u>RFC 9421</u></a>.</p><p>As discussed in our <a href="https://blog.cloudflare.com/web-bot-auth/#introducing-http-message-signatures"><u>previous blog post</u></a>, Cloudflare believes that making Message Signatures a core component of bot authentication on the web should follow the same path. The <a href="https://www.ietf.org/archive/id/draft-meunier-web-bot-auth-architecture-02.html"><u>specifications</u></a> for the protocol are being built in the open, and they have already evolved following feedback.</p><p>Moreover, due to widespread interest, the IETF is considering forming a working group around <a href="https://datatracker.ietf.org/wg/webbotauth/about/"><u>Web Bot Auth</u></a>. Should you be a crawler, an origin, or even a CDN, we invite you to provide feedback to ensure the solution gets stronger, and suits your needs.</p>
    <div>
      <h2>A better, more trusted Internet</h2>
      <a href="#a-better-more-trusted-internet">
        
      </a>
    </div>
    <p>For bot, agent, and crawler operators that act transparently and provide vital services for the Internet, we’re providing a faster and more automated path to being recognized as a Verified Bot, reducing manual processes. We believe this approach moves bot authentication from brittle and unreliable methods to a secure and reliable alternative. It should reduce the friction and hurdles genuinely useful bots face.</p><p>For site owners, Message Signatures provides better assurance that the bot traffic is legitimate — automatically recognized and allowed, minimizing disruption to essential services (e.g., search engine indexing, monitoring). In line with our commitments to making TLS/<a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>SSL</u></a> and <a href="https://blog.cloudflare.com/pt-br/post-quantum-zero-trust/"><u>Post-Quantum</u></a> certificates available for everyone, we’ll always offer the cryptographic verification of Message Signatures for all sites because we believe in a safer and more efficient Internet by fostering a trusted environment for both human and automated traffic.</p><p>If you have a feature request, feedback, or are interested in partnering with us, please <a href="https://www.cloudflare.com/lp/verified-bots/"><u>reach out</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">5K5btgE8vXWGaGxCrs5yFH</guid>
            <dc:creator>Mari Galicer</dc:creator>
            <dc:creator>Akshat Mahajan</dc:creator>
            <dc:creator>Gauri Baraskar</dc:creator>
            <dc:creator>Helen Du</dc:creator>
        </item>
        <item>
            <title><![CDATA[Forget IPs: using cryptography to verify bot and agent traffic]]></title>
            <link>https://blog.cloudflare.com/web-bot-auth/</link>
            <pubDate>Thu, 15 May 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Bots now browse like humans. We're proposing bots use cryptographic signatures so that website owners can verify their identity. Explanations and demonstration code can be found within the post. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>With the rise of traffic from <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">AI agents</a>, what’s considered a bot is no longer clear-cut. There are some clearly malicious bots, like ones that DoS your site or do <a href="https://www.cloudflare.com/learning/bots/what-is-credential-stuffing/">credential stuffing</a>, and ones that most site owners do want to interact with their site, like the bot that indexes your site for a search engine, or ones that fetch RSS feeds.      </p><p>Historically, Cloudflare has relied on two main signals to verify legitimate web crawlers from other types of automated traffic: user agent headers and IP addresses. The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/User-Agent"><code><u>User-Agent</u></code><u> header</u></a> allows bot developers to identify themselves, i.e. <code>MyBotCrawler/1.1</code>. However, user agent headers alone are easily spoofed and are therefore insufficient for reliable identification. To address this, user agent checks are often supplemented with <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/policy/#ip-validation"><u>IP address validation</u></a>, the inspection of published IP address ranges to confirm a crawler's authenticity. However, the logic around IP address ranges representing a product or group of users is brittle – connections from the crawling service might be shared by multiple users, such as in the case of <a href="https://blog.cloudflare.com/icloud-private-relay/"><u>privacy proxies</u></a> and VPNs, and these ranges, often maintained by cloud providers, change over time.</p><p>Cloudflare will always try to block malicious bots, but we think our role here is to also provide an affirmative mechanism to authenticate desirable bot traffic. By using well-established cryptography techniques, we’re proposing a better mechanism for legitimate agents and bots to declare who they are, and provide a clearer signal for site owners to decide what traffic to permit. </p><p><b>Today, we’re introducing two proposals – HTTP message signatures and request mTLS – for </b><a href="https://blog.cloudflare.com/friendly-bots/"><b><u>friendly bots</u></b></a><b> to authenticate themselves, and for customer origins to identify them. </b>In this blog post, we’ll share how these authentication mechanisms work, how we implemented them, and how you can participate in our closed beta.</p>
    <div>
      <h2>Existing bot verification mechanisms are broken </h2>
      <a href="#existing-bot-verification-mechanisms-are-broken">
        
      </a>
    </div>
    <p>Historically, if you’ve worked on ChatGPT, Claude, Gemini, or any other agent, you’ve had several options to identify your HTTP traffic to other services: </p><ol><li><p>You define a <a href="https://www.rfc-editor.org/rfc/rfc9110#name-user-agent"><u>user agent</u></a>, an HTTP header described in <a href="https://www.rfc-editor.org/rfc/rfc9110.html#name-user-agent"><u>RFC 9110</u></a>. The problem here is that this header is easily spoofable and there’s not a clear way for agents to identify themselves as semi-automated browsers — agents often use the Chrome user agent for this very reason, which is discouraged. The RFC <a href="https://www.rfc-editor.org/rfc/rfc9110.html#section-10.1.5-9"><u>states</u></a>: 
<i>“If a user agent masquerades as a different user agent, recipients can assume that the user intentionally desires to see responses tailored for that identified user agent, even if they might not work as well for the actual user agent being used.” </i> </p></li><li><p>You publish your IP address range(s). This has limitations because the same IP address might be shared by multiple users or multiple services within the same company, or even by multiple companies when hosting infrastructure is shared (like <a href="https://www.cloudflare.com/developer-platform/products/workers/">Cloudflare Workers</a>, for example). In addition, IP addresses are prone to change as underlying infrastructure changes, leading services to use ad-hoc sharing mechanisms like <a href="https://www.cloudflare.com/ips-v4"><u>CIDR lists</u></a>. </p></li><li><p>You go to every website and share a secret, like a <a href="https://www.rfc-editor.org/rfc/rfc6750"><u>Bearer</u></a> token. This is impractical at scale because it requires developers to maintain separate tokens for each website their bot will visit.</p></li></ol><p>We can do better! Instead of these arduous methods, we’re proposing that developers of bots and agents cryptographically sign requests originating from their service. When protecting origins, <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/">reverse proxies</a> such as Cloudflare can then validate those signatures to confidently identify the request source on behalf of site owners, allowing them to take action as they see fit. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yB6h6XcSWNQO5McRWIpL8/edf32f7938b01a4c8f5eedefee2b9328/image2.png" />
          </figure><p>A typical system has three actors:</p><ul><li><p>User: the entity that wants to perform some actions on the web. This may be a human, an automated program, or anything taking action to retrieve information from the web.</p></li><li><p>Agent: an orchestrated browser or software program. For example, Chrome on your computer, or OpenAI’s <a href="https://operator.chatgpt.com/"><u>Operator</u></a> with ChatGPT. Agents can interact with the web according to web standards (HTML rendering, JavaScript, subrequests, etc.).</p></li><li><p>Origin: the website hosting a resource. The user wants to access it through the browser. This is Cloudflare when your website is using our services, and it’s your own server(s) when exposed directly to the Internet.</p></li></ul><p>In the next section, we’ll dive into HTTP Message Signatures and request mTLS, two mechanisms a browser agent may implement to sign outgoing requests, with different levels of ease for an origin to adopt. </p>
    <div>
      <h2>Introducing HTTP Message Signatures</h2>
      <a href="#introducing-http-message-signatures">
        
      </a>
    </div>
    <p><a href="https://www.rfc-editor.org/rfc/rfc9421.html"><u>HTTP Message Signatures</u></a> is a standard that defines the cryptographic authentication of a request sender. It’s essentially a cryptographically sound way to say, “hey, it’s me!”. It’s not the only way that developers can sign requests from their infrastructure — for example, AWS has used <a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html"><u>Signature v4</u></a>, and Stripe has a framework for <a href="https://docs.stripe.com/webhooks#verify-webhook-signatures-with-official-libraries"><u>authenticating webhooks</u></a> — but Message Signatures is a published standard, and the cleanest, most developer-friendly way to sign requests.  </p><p>We’re working closely with the wider industry to support these standards-based approaches. For example, OpenAI has started to sign their requests. In their own words:   </p><blockquote><p><i>"Ensuring the authenticity of Operator traffic is paramount. With HTTP Message Signatures (</i><a href="https://www.rfc-editor.org/rfc/rfc9421.html"><i><u>RFC 9421</u></i></a><i>), OpenAI signs all Operator requests so site owners can verify they genuinely originate from Operator and haven’t been tampered with” </i>– Eugenio, Engineer, OpenAI</p></blockquote><p>Without further delay, let’s dive in how HTTP Messages Signatures work to identify bot traffic.</p>
    <div>
      <h3>Scoping standards to bot authentication</h3>
      <a href="#scoping-standards-to-bot-authentication">
        
      </a>
    </div>
    <p>Generating a message signature works like this: before sending a request, the agent signs the target origin with its private key. When fetching <code>https://example.com/path/to/resource</code>, it signs <code>example.com</code>. The corresponding public key is known to the origin, either because the agent is well known, because it has previously registered, or through some other method. Then, the agent writes a <b>Signature-Input</b> header with the following parameters:</p><ol><li><p>A validity window (<code>created</code> and <code>expires</code> timestamps)</p></li><li><p>A Key ID that uniquely identifies the key used in the signature. This is a <a href="https://www.rfc-editor.org/rfc/rfc7638.html"><u>JSON Web Key Thumbprint</u></a>. </p></li><li><p>A tag that shows websites the signature’s purpose and validation method, i.e. <code>web-bot-auth</code> for bot authentication.</p></li></ol><p>In addition, the <code>Signature-Agent</code> header indicates where the origin can find the public keys the agent used when signing the request, such as in a directory hosted by <code>signer.example.com</code>. This header is part of the signed content as well.</p><p>Here’s an example:</p>
            <pre><code>GET /path/to/resource HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 Chrome/113.0.0 MyBotCrawler/1.1
Signature-Agent: signer.example.com
Signature-Input: sig=("@authority" "signature-agent");\
                 created=1700000000;\
                 expires=1700011111;\
                 keyid="ba3e64==";\
                 tag="web-bot-auth"
Signature: sig=abc==</code></pre>
            <p>For those building bots, <a href="https://datatracker.ietf.org/doc/draft-meunier-web-bot-auth-architecture/"><u>we propose</u></a> signing the authority of the target URI (www.example.com in the example above) and, if present, the <a href="https://datatracker.ietf.org/doc/draft-meunier-http-message-signatures-directory/"><u>signature-agent</u></a> header, which tells origins where to retrieve the bot’s public key: <a href="http://crawler.search.google.com"><u>crawler.search.google.com</u></a> for Google Search, <a href="http://operator.openai.com"><u>operator.openai.com</u></a> for OpenAI Operator, or workers.dev for Cloudflare Workers.</p><p>The <code>User-Agent</code> in the example above indicates that the software making the request is Chrome, because the agent browses the web through an orchestrated Chrome. Note that <code>MyBotCrawler/1.1</code> is still present: the <code>User-Agent</code> header can contain multiple products, in decreasing order of significance, and since our agent makes its requests via Chrome, Chrome comes first.</p><p>At Internet-level scale, these signatures may add a notable amount of overhead to request processing. However, with the right cryptographic suite, and compared to the cost of existing bot mitigation, both technical and social, this seems to be a straightforward tradeoff. This is a metric we will monitor closely, and report on as adoption grows.</p>
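            <p>To make the discovery step concrete, here is a minimal sketch, ours rather than the draft’s reference code, of how an origin could resolve the keys advertised by <code>Signature-Agent</code>. It assumes the signer publishes a JWKS under the well-known path proposed in the directory draft, and it elides caching and error handling:</p>
            <pre><code>// Resolve the public keys advertised by a Signature-Agent header.
// Assumes the signer hosts a JWKS at the well-known path from the
// http-message-signatures-directory draft.
async function fetchSignerKeys(signatureAgent: string): Promise&lt;JsonWebKey[]&gt; {
  const url = `https://${signatureAgent}/.well-known/http-message-signatures-directory`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`key directory unavailable: ${res.status}`);
  const { keys } = (await res.json()) as { keys: JsonWebKey[] };
  // The RFC 7638 thumbprint of one of these keys should match the
  // `keyid` parameter of an incoming request signature.
  return keys;
}</code></pre>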
    <div>
      <h3>Generating request signatures</h3>
      <a href="#generating-request-signatures">
        
      </a>
    </div>
    <p>We’re making several examples of generating Message Signatures for bots and agents <a href="https://github.com/cloudflareresearch/web-bot-auth/"><u>available on GitHub</u></a> (though we encourage other implementations!), all of which are standards-compliant, to maximize interoperability. </p><p>Imagine you’re building an agent using a managed Chromium browser, and want to sign all outgoing requests. To achieve this, the <a href="https://github.com/w3c/webextensions"><u>webextensions standard</u></a> provides <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/onBeforeSendHeaders"><u>chrome.webRequest.onBeforeSendHeaders</u></a>, where you can modify HTTP headers before they are sent by the browser. The event is <a href="https://developer.chrome.com/docs/extensions/reference/api/webRequest#life_cycle_of_requests"><u>triggered</u></a> before any HTTP data is sent, once headers are available.</p><p>Here’s what that code would look like: </p>
            <pre><code>chrome.webRequest.onBeforeSendHeaders.addListener(
  function (details) {
    // Signature and header assignment logic goes here (shown below)
  },
  { urls: ["&lt;all_urls&gt;"] },
  ["blocking", "requestHeaders"] // requires "installation_mode": "force_installed"
);</code></pre>
            <p>Cloudflare provides a <a href="https://www.npmjs.com/package/web-bot-auth"><u>web-bot-auth</u></a> helper package on npm that helps you generate request signatures with the correct parameters. <code>onBeforeSendHeaders</code> is a Chrome extension hook that needs to run synchronously, so we use the synchronous helper: <code>import { signatureHeadersSync } from "web-bot-auth"</code>. Once the signature completes, both <code>Signature</code> and <code>Signature-Input</code> headers are assigned, and the request flow can continue.</p>
            <pre><code>const request = new URL(details.url);
const created = new Date();
// Give the signature a five-minute validity window
const expires = new Date(created.getTime() + 300_000);

// Perform request signature; `jwk` holds the bot's Ed25519 signing key
const headers = signatureHeadersSync(
  request,
  new Ed25519Signer(jwk),
  { created, expires }
);
// `headers` object now contains `Signature` and `Signature-Input` headers that can be used</code></pre>
            <p>This extension code is available on <a href="https://github.com/cloudflareresearch/web-bot-auth/"><u>GitHub</u></a>, alongside a debugging server, deployed at <a href="https://http-message-signatures-example.research.cloudflare.com"><u>https://http-message-signatures-example.research.cloudflare.com</u></a>. </p>
    <div>
      <h3>Validating request signatures </h3>
      <a href="#validating-request-signatures">
        
      </a>
    </div>
    <p>Using our <a href="https://http-message-signatures-example.research.cloudflare.com"><u>debug server</u></a>, we can now inspect and validate our request signatures from the perspective of the website we’d be visiting. We should now see the Signature and Signature-Input headers:  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/18P5OyGxu2fU0Dpyv70Gjz/d82d62355524ad1914deb41b601bcad2/image3.png" />
          </figure><p><sup><i>In this example, the homepage of the debugging server validates the signature from the RFC 9421 Ed25519 verifying key, which the extension uses for signing.</i></sup></p><p>The above demo and code walkthrough have been written entirely in TypeScript: the verification website runs on Cloudflare Workers, and the client is a Chrome browser extension. We are cognisant that this does not suit all clients and servers on the web. To demonstrate the proposal works in more environments, we have also implemented bot signature validation in Go with a <a href="https://github.com/cloudflareresearch/web-bot-auth/tree/main/examples/caddy-plugin"><u>plugin</u></a> for <a href="https://caddyserver.com/"><u>Caddy server</u></a>.</p>
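    <p>If you’d rather validate signatures on your own infrastructure, the shape of the check is small enough to sketch. The Worker-style TypeScript below is a simplified illustration, not the web-bot-auth package API: it rebuilds the RFC 9421 signature base for the two covered components from our example and verifies it against an already-known Ed25519 public key (Workers expose Ed25519 through WebCrypto). Key discovery, structured-field parsing, and expiry checks are elided:</p>
            <pre><code>// The bot's Ed25519 public key in JWK form ("x" is a placeholder here);
// in practice it would be fetched from the signer's key directory.
const PUBLIC_JWK: JsonWebKey = { kty: "OKP", crv: "Ed25519", x: "..." };

export default {
  async fetch(request: Request): Promise&lt;Response&gt; {
    const sigInput = request.headers.get("Signature-Input");
    const sig = request.headers.get("Signature");
    const agent = request.headers.get("Signature-Agent");
    if (!sigInput || !sig || !agent) return new Response("unsigned", { status: 401 });

    // Rebuild the signature base for ("@authority" "signature-agent").
    const params = sigInput.replace(/^sig=/, "");
    const base = [
      `"@authority": ${new URL(request.url).host}`,
      `"signature-agent": ${agent}`,
      `"@signature-params": ${params}`,
    ].join("\n");

    // Decode the base64 signature bytes (colons delimit a byte sequence).
    const b64 = sig.replace(/^sig=:?/, "").replace(/:$/, "");
    const sigBytes = Uint8Array.from(atob(b64), (c) =&gt; c.charCodeAt(0));

    const key = await crypto.subtle.importKey("jwk", PUBLIC_JWK, { name: "Ed25519" }, false, ["verify"]);
    const ok = await crypto.subtle.verify("Ed25519", key, sigBytes, new TextEncoder().encode(base));
    return ok ? new Response("verified bot") : new Response("invalid signature", { status: 403 });
  },
};</code></pre>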
    <div>
      <h2>Experimentation with request mTLS</h2>
      <a href="#experimentation-with-request-mtls">
        
      </a>
    </div>
    <p>HTTP is not the only way to convey signatures. For instance, one mechanism that has been used in the past to authenticate automated traffic against secured endpoints is <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/"><u>mTLS</u></a>, the “mutual” presentation of <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. As described in our <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/"><u>knowledge base</u></a>:</p><blockquote><p><i>Mutual TLS, or mTLS for short, is a method for</i><a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-authentication/"><i> </i><i><u>mutual authentication</u></i></a><i>. mTLS ensures that the parties at each end of a network connection are who they claim to be by verifying that they both have the correct private</i><a href="https://www.cloudflare.com/learning/ssl/what-is-a-cryptographic-key/"><i> </i><i><u>key</u></i></a><i>. The information within their respective</i><a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/"><i> </i><i><u>TLS certificates</u></i></a><i> provides additional verification.</i></p></blockquote><p>While mTLS seems like a good fit for bot authentication on the web, it has limitations. If a user is asked for authentication via the mTLS protocol but does not have a certificate to provide, they would get an inscrutable and unskippable error. Origin sites need a way to conditionally signal to clients that they accept or require mTLS authentication, so that only mTLS-enabled clients use it.</p>
    <div>
      <h3>A TLS flag for bot authentication</h3>
      <a href="#a-tls-flag-for-bot-authentication">
        
      </a>
    </div>
    <p>TLS flags are an efficient way to describe whether a feature, like mTLS, is supported by origin sites. Within the IETF, we have proposed a new TLS flag called <a href="https://datatracker.ietf.org/doc/draft-jhoyla-req-mtls-flag/"><code><u>req mTLS</u></code></a>, sent by the client during connection establishment to signal support for authentication via a client certificate. </p><p>This proposal leverages the <a href="https://www.ietf.org/archive/id/draft-ietf-tls-tlsflags-14.html"><u>tls-flags</u></a> proposal under discussion in the IETF. The TLS Flags draft allows clients and servers to send an array of one-bit flags to each other, rather than creating a new extension (with its associated overhead) for each piece of information they want to share. This is one of the first uses of this extension, and we hope that by using it here we can help drive adoption.</p><p>When a client sends the <a href="https://datatracker.ietf.org/doc/draft-jhoyla-req-mtls-flag/"><code><u>req mTLS</u></code></a> flag to the server, it signals that the client is able to respond with a certificate if requested. The server can then safely request a certificate without risk of blocking ordinary user traffic, because ordinary users will never set this flag. </p><p>Let’s take a look at what such a <code>req mTLS</code> exchange looks like in <a href="https://www.wireshark.org/"><u>Wireshark</u></a>, a network protocol analyser. You can follow along in the packet capture <a href="https://github.com/cloudflareresearch/req-mtls/tree/main/assets/demonstration-capture.pcapng"><u>here</u></a>.</p>
            <pre><code>Extension: req mTLS (len=12)
	Type: req mTLS (65025)
	Length: 12
	Data: 0b0000000000000000000001</code></pre>
            <p>The extension number is 65025, or 0xfe01. This corresponds to an unassigned block of <a href="https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#tls-extensiontype-values-1"><u>TLS extensions</u></a> that can be used to experiment with TLS Flags. Once the standard is adopted and published by the IETF, the number would be fixed. To use the <code>req mTLS</code> flag, the client needs to set the 80<sup>th</sup> bit to true. In the 12 bytes of extension data above, the first byte (0x0b) gives the length of the flag bitstring that follows, and the remaining 11 bytes end in 0x01, setting exactly bit 80, which is the case here. The server then responds with a certificate request, and the request follows its course.</p>
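            <p>The byte and bit arithmetic is easy to sketch. This small helper is illustrative rather than taken from the draft, but it matches the capture above: flag <i>n</i> lands in byte <i>n</i>/8, at bit <i>n</i>%8 counting from the least significant bit, so flag 80 yields an 11-byte bitstring whose final byte is 0x01:</p>
            <pre><code>// Illustrative: map a TLS flag index into the flags bitstring.
function flagBytes(flag: number): Uint8Array {
  const out = new Uint8Array(Math.floor(flag / 8) + 1);
  out[Math.floor(flag / 8)] |= 1 &lt;&lt; (flag % 8);
  return out; // for flag = 80: 11 bytes, last byte 0x01
}</code></pre>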
    <div>
      <h3>Request mTLS in action</h3>
      <a href="#request-mtls-in-action">
        
      </a>
    </div>
    <p><i>Code for this section is available on GitHub under </i><a href="https://github.com/cloudflareresearch/req-mtls"><i><u>cloudflareresearch/req-mtls</u></i></a></p><p>Because mutual TLS is already widely supported in TLS libraries, the parts we need to introduce to the client and server are:</p><ol><li><p>Sending/parsing of TLS Flags</p></li><li><p>Specific support for the <code>req mTLS</code> flag</p></li></ol><p>To the best of our knowledge, there is no complete public implementation of either scheme. Using them for bot authentication may provide the motivation to build one.</p><p>Using <a href="https://github.com/cloudflare/go"><u>our experimental fork of Go</u></a>, a TLS client could support req mTLS as follows:</p>
            <pre><code>config := &amp;tls.Config{
	TLSFlagsSupported: []tls.TLSFlag{0x50}, // req mTLS flag (bit 80)
	RootCAs:           rootPool,
	Certificates:      certs,
	NextProtos:        []string{"h2"},
}
trans := http.Transport{TLSClientConfig: config, ForceAttemptHTTP2: true}</code></pre>
            <p>This experimental fork allows you to configure Go to send the <code>req mTLS</code> flag (0x50) in the <code>TLS Flags</code> extension. If you’d like to test your implementation, you can prompt your client for certificates against <a href="http://req-mtls.research.cloudflare.com"><u>req-mtls.research.cloudflare.com</u></a> using the Cloudflare Research client <a href="https://github.com/cloudflareresearch/req-mtls"><u>cloudflareresearch/req-mtls</u></a>. For clients, setting the TLS flag associated with <code>req mTLS</code> is all that’s required: the existing code path for normal mTLS takes over from that point, with nothing new to implement.</p>
    <div>
      <h2>Two approaches, one goal</h2>
      <a href="#two-approaches-one-goal">
        
      </a>
    </div>
    <p>We believe that developers of agents and bots should have a public, standard way to authenticate themselves to CDNs and website hosting platforms, regardless of the technology they use or provider they choose. At a high level, both HTTP Message Signatures and request mTLS achieve a similar goal: they allow the owner of a service to authentically identify themselves to a website. That’s why we’re participating in the standardization effort for both of these protocols at the IETF, where many other authentication mechanisms we’ve discussed here, from TLS to OAuth Bearer tokens, have been developed by diverse sets of stakeholders and standardized as RFCs. </p><p>Evaluating both proposals against each other, we’re prioritizing <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>HTTP Message Signatures for Bots</u></a> because it relies on the previously adopted <a href="https://datatracker.ietf.org/doc/html/rfc9421"><u>RFC 9421</u></a>, which has several <a href="https://httpsig.org/"><u>reference implementations</u></a>, and works at the HTTP layer, making adoption simpler. <a href="https://datatracker.ietf.org/doc/draft-jhoyla-req-mtls-flag/"><u>Request mTLS</u></a> may be a better fit for site owners concerned about the additional bandwidth, but <a href="https://datatracker.ietf.org/doc/html/draft-ietf-tls-tlsflags"><u>TLS Flags</u></a> has fewer implementations, is still waiting for IETF adoption, and upgrading the TLS stack has proven more challenging than upgrading HTTP. Both approaches share similar discovery and key management concerns, as highlighted in a <a href="https://datatracker.ietf.org/doc/draft-meunier-web-bot-auth-glossary/"><u>glossary</u></a> draft at the IETF. We’re actively exploring both options, and would love to <a href="https://www.cloudflare.com/lp/verified-bots/"><u>hear</u></a> from both site owners and bot developers about how you’re evaluating their respective tradeoffs.</p>
    <div>
      <h2>The bigger picture </h2>
      <a href="#the-bigger-picture">
        
      </a>
    </div>
    <p>In conclusion, we think request signatures and mTLS are promising mechanisms for bot owners and developers of AI agents to authenticate themselves in a tamper-proof manner, forging a path forward that doesn’t rely on ever-changing IP address ranges or spoofable headers such as <code>User-Agent</code>. This authentication can be consumed by Cloudflare when acting as a reverse proxy, or directly by site owners on their own infrastructure. This means that as a bot owner, you can now go to content creators and discuss crawling agreements, with as much granularity as the number of bots you operate. You can start implementing these solutions today and test them against the research websites we’ve provided in this post.</p><p>Bot authentication also gives site owners small and large more control over the traffic they allow, enabling them to continue serving content on the public Internet while monitoring automated requests. Longer term, we will integrate these authentication mechanisms into our <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>AI Audit</u></a> and <a href="https://developers.cloudflare.com/bots/get-started/bot-management/"><u>Bot Management</u></a> products, to provide better visibility into the bots and agents that are willing to identify themselves.</p><p>Being able to solve problems for both origins and clients is key to helping build a better Internet, and we think identification of automated traffic is a step towards that. If you want us to start verifying your message signatures or client certificates, have a compelling use case you’d like us to consider, or have any questions, please <a href="https://www.cloudflare.com/lp/verified-bots/"><u>reach out</u></a>.</p>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2hUP3FdePgIYVDwhgJVLeV</guid>
            <dc:creator>Thibault Meunier</dc:creator>
            <dc:creator>Mari Galicer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Trapping misbehaving bots in an AI Labyrinth]]></title>
            <link>https://blog.cloudflare.com/ai-labyrinth/</link>
            <pubDate>Wed, 19 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ How Cloudflare uses generative AI to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re excited to announce AI Labyrinth, a new mitigation approach that uses AI-generated content to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives. When you opt in, Cloudflare will automatically deploy an AI-generated set of linked pages when we detect inappropriate bot activity, without the need for customers to create any custom rules.</p><p>AI Labyrinth is available on an opt-in basis to all customers, including the<a href="https://www.cloudflare.com/plans/free/"> Free plan</a>.</p>
    <div>
      <h3>Using Generative AI as a defensive weapon</h3>
      <a href="#using-generative-ai-as-a-defensive-weapon">
        
      </a>
    </div>
    <p>AI-generated content has exploded, reportedly accounting for <a href="https://www.thetimes.co.uk/article/why-ai-content-everywhere-how-to-detect-l2m2kdx9p"><u>four of the top 20 Facebook posts</u></a> last fall. Additionally, Medium estimates that <a href="https://www.wired.com/story/ai-generated-medium-posts-content-moderation/"><u>47% of all content</u></a> on their platform is AI-generated. Like any newer tool, it has both wonderful and <a href="https://www.npr.org/2024/12/24/nx-s1-5235265/how-to-protect-yourself-from-holiday-ai-scams"><u>malicious</u></a> uses.</p><p>At the same time, we’ve also seen an explosion of new crawlers used by AI companies to scrape data for model training. AI Crawlers generate more than 50 billion requests to the Cloudflare network every day, or just under 1% of all web requests we see. While Cloudflare has several tools for <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/"><u>identifying and blocking unauthorized AI crawling</u></a>, we have found that blocking malicious <a href="https://www.cloudflare.com/learning/bots/what-is-a-bot/">bots</a> can alert the attacker that you are on to them, leading to a shift in approach, and a never-ending arms race. So, we wanted to create a new way to thwart these unwanted bots, without letting them know they’ve been thwarted.</p><p>To do this, we decided to use a new offensive tool in the bot creator’s toolset that we haven’t really seen used defensively: AI-generated content. When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them. But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources. </p><p>As an added benefit, AI Labyrinth also acts as a next-generation honeypot. No real human would go four links deep into a maze of AI-generated nonsense. Any visitor that does is very likely to be a bot, so this gives us a brand-new tool to identify and fingerprint bad bots, which we add to our list of known bad actors. Here’s how we do it…</p>
    <div>
      <h3>How we built the labyrinth </h3>
      <a href="#how-we-built-the-labyrinth">
        
      </a>
    </div>
    <p>When AI crawlers follow these links, they waste valuable computational resources processing irrelevant content rather than extracting your legitimate website data. This significantly reduces their ability to gather enough useful information to train their models effectively.</p><p>To generate convincing human-like content, we used <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> with an open source model to create unique HTML pages on diverse topics. Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to<a href="https://www.cloudflare.com/learning/security/how-to-prevent-xss-attacks/"> prevent any XSS vulnerabilities</a>, and stores it in <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a> for faster retrieval. We found that generating a diverse set of topics first, then creating content for each topic, produced more varied and convincing results. It is important to us that we don’t generate inaccurate content that contributes to the spread of misinformation on the Internet, so the content we generate is real and related to scientific facts, just not relevant or proprietary to the site being crawled.</p><p>This pre-generated content is seamlessly integrated as hidden links on existing pages via our custom HTML transformation process, without disrupting the original structure or content of the page. Each generated page includes appropriate meta directives to protect SEO by preventing search engine indexing. We also ensured that these links remain invisible to human visitors through carefully implemented attributes and styling. To further minimize the impact to regular visitors, we ensured that these links are presented only to suspected AI scrapers, while allowing legitimate users and verified crawlers to browse normally.</p>
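    <p>To give a flavor of what such a transformation can look like, here is a minimal sketch using the Workers <code>HTMLRewriter</code> API. It is an illustration rather than our production pipeline: the decoy path is made up, and the production system additionally gates injection on bot suspicion and serves pre-generated content from R2 at the decoy URL:</p>
            <pre><code>export default {
  async fetch(request: Request): Promise&lt;Response&gt; {
    const response = await fetch(request);
    return new HTMLRewriter()
      .on("body", {
        element(el) {
          // Hidden from human visitors and excluded from search indexing;
          // only a crawler parsing the raw HTML will find and follow it.
          el.append(
            '&lt;a href="/maze/9f2c1a" rel="nofollow" aria-hidden="true" style="display:none"&gt;archive&lt;/a&gt;',
            { html: true }
          );
        },
      })
      .transform(response);
  },
};</code></pre>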
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PHSCXVMFipAhGJ5IheXW3/a46aad93f2e60f6d892d4c597a752a58/image4.png" />
          </figure><p><sup><i>A graph of daily requests over time, comparing different categories of AI Crawlers.</i></sup></p><p>What makes this approach particularly effective is its role in our continuously evolving bot detection system. When these links are followed, we know with high confidence that it's automated crawler activity, as human visitors and legitimate browsers would never see or click them. This provides us with a powerful identification mechanism, generating valuable data that feeds into our <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning models</a>. By analyzing which crawlers are following these hidden pathways, we can identify new bot patterns and signatures that might otherwise go undetected. This proactive approach helps us <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">stay ahead of AI scrapers</a>, continuously improving our detection capabilities without disrupting the normal browsing experience.</p><p>By building this solution on our developer platform, we've created a system that serves convincing decoy content instantly while maintaining consistent quality, all without impacting your site's performance or user experience.</p>
    <div>
      <h3>How to use AI Labyrinth to stop AI crawlers</h3>
      <a href="#how-to-use-ai-labyrinth-to-stop-ai-crawlers">
        
      </a>
    </div>
    <p>Enabling AI Labyrinth is simple and requires just a single toggle in your Cloudflare dashboard. Navigate to the bot management section within your zone, and toggle the new AI Labyrinth setting to on:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/q1ZQlnnMztSsK8PWD1h0S/ef02f081544dc751f754e9630f17261e/image1.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61qBVDv0WFh8YzrbVULtxq/13ec46d7651c59454f9fe3754e253b85/image3.png" />
          </figure><p>Once enabled, the AI Labyrinth begins working immediately with no additional configuration needed.</p>
    <div>
      <h3>AI honeypots, created by AI</h3>
      <a href="#ai-honeypots-created-by-ai">
        
      </a>
    </div>
    <p>The core benefit of AI Labyrinth is to confuse and distract bots. However, a secondary benefit is to serve as a next-generation honeypot. In this context, a honeypot is just an invisible link that a website visitor can’t see, but a bot parsing HTML would see and click on, therefore revealing itself to be a bot. Honeypots have been used to catch hackers as early as the <a href="https://medium.com/@jcart657/the-cuckoos-egg-9b502442ea67"><u>1986 Cuckoo’s Egg incident</u></a>. And in 2004, <a href="https://www.projecthoneypot.org/"><u>Project Honeypot</u></a> was created by Cloudflare founders (prior to founding Cloudflare) to let everyone easily deploy free email honeypots, and receive lists of crawler IPs in exchange for contributing to the database. But as bots have evolved, they now proactively look for honeypot techniques like hidden links, making this approach less effective.</p><p>AI Labyrinth won’t simply add invisible links, but will eventually create whole networks of linked URLs that are much more realistic, and not trivial for automated programs to spot. The content on the pages is obviously content no human would spend time consuming, but AI bots are programmed to crawl rather deeply to harvest as much data as possible. When bots hit these URLs, we can be confident they aren’t actual humans, and this information is recorded and automatically fed to our machine learning models to help improve our bot identification. This creates a beneficial feedback loop where each scraping attempt helps protect all Cloudflare customers.</p>
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>This is only our first iteration of using generative AI to thwart bots. Currently, while the content we generate is convincingly human, it won’t conform to the existing structure of every website. In the future, we’ll continue to work to make these links harder to spot and make them fit seamlessly into the existing structure of the website they’re embedded in. You can help us by opting in now.</p><p>To take the next step in the fight against bots, <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/detections/bot-traffic"><u>opt in to AI Labyrinth</u></a> today.</p>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Generative AI]]></category>
            <guid isPermaLink="false">1Zh4fm4BB1S3xuVwfETiTE</guid>
            <dc:creator>Reid Tatoris</dc:creator>
            <dc:creator>Harsh Saxena</dc:creator>
            <dc:creator>Luis Miglietti</dc:creator>
        </item>
        <item>
            <title><![CDATA[Grinch Bots strike again: defending your holidays from cyber threats]]></title>
            <link>https://blog.cloudflare.com/grinch-bot-2024/</link>
            <pubDate>Mon, 23 Dec 2024 14:01:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare observed a 4x increase in bot-related traffic on Black Friday in 2024. 29% of all traffic on our network on Black Friday was Grinch Bots wreaking holiday havoc. ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>Grinch Bots are still stealing Christmas</h2>
      <a href="#grinch-bots-are-still-stealing-christmas">
        
      </a>
    </div>
    <p>Back in 2021, we covered the antics of <a href="https://blog.cloudflare.com/grinch-bot/"><u>Grinch Bots</u></a> and how the combination of proposed regulation and technology could prevent these malicious programs from stealing holiday cheer.</p><p>Fast-forward to 2024 — the <a href="https://www.congress.gov/bill/117th-congress/senate-bill/3276/all-info#:~:text=%2F30%2F2021)-,Stopping%20Grinch%20Bots%20Act%20of%202021,or%20services%20in%20interstate%20commerce"><u>Stop Grinch Bots Act of 2021</u></a> has not passed, and bots are more active and powerful than ever, leaving businesses to fend off increasingly sophisticated attacks on their own. During Black Friday 2024, Cloudflare observed:</p><ul><li><p><b>29% of all traffic on Black Friday was Grinch Bots</b>. Humans still accounted for the majority of all traffic, but bot traffic was up 4x from three years ago in absolute terms. </p></li><li><p><b>1% of traffic on Black Friday came from AI bots. </b>The majority of it came from Claude, Meta, and Amazon. 71% of this traffic was given the green light to access the content requested. </p></li><li><p><b>63% of login attempts across our network on Black Friday were from bots</b>. While this number is high, it was down a few percentage points compared to a month prior, indicating that more humans accessed their accounts and holiday deals. </p></li><li><p><b>Human logins on e-commerce sites increased 7-8%</b> compared to the previous month. </p></li></ul><p>These days, holiday shopping doesn’t start on Black Friday and stop on Cyber Monday. Instead, it stretches through Cyber Week and beyond, including flash sales, pre-orders, and various other promotions. While this provides consumers more opportunities to shop, it also creates more openings for Grinch Bots to wreak havoc.</p>
    <div>
      <h2>Black Friday - Cyber Monday by the numbers</h2>
      <a href="#black-friday-cyber-monday-by-the-numbers">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/the-truth-about-black-friday-and-cyber-monday/">Black Friday and Cyber Monday</a> in 2024 brought record-breaking shopping — and grinching. In addition to looking across our entire network, we also analyzed traffic patterns specifically on a cohort of e-commerce sites. </p><p>Legitimate shoppers flocked to e-commerce sites, with requests reaching an astounding 405 billion on Black Friday, accounting for 81% of the day’s total traffic to e-commerce sites. Retailers reaped the rewards of their deals and advertising, seeing a 50% surge in shoppers week-over-week and a 61% increase compared to the previous month.</p><p>Unfortunately, Grinch Bots were equally active. Total e-commerce bot activity surged to 103 billion requests, representing up to 19% of all traffic to e-commerce sites. Nearly one in every five requests to an online store was not a real customer. That’s a lot of resources to waste on bogus traffic. Cyber Week was a battleground, with bots hoarding inventory, exploiting deals, and disrupting genuine shopping experiences.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XOfNPFabXiMzIrZrXGES2/9fa1eb6d1ca558ef9f405826999bcea2/image9.png" />
          </figure><p>The upside, if there is one, is that there was more human activity on e-commerce sites (81%) than observed on our network more broadly (71%). </p>
    <div>
      <h2>The Grinch Bot’s Modus Operandi</h2>
      <a href="#the-grinch-bots-modus-operandi">
        
      </a>
    </div>
    <p>Cloudflare saw 4x more bot requests than we observed in 2021. Being able to observe and score all this traffic at scale means we can help customers keep the grinches away. We also got to see patterns that help us better identify the concentration of these attacks: </p><ul><li><p>19% of traffic on e-commerce sites was Grinch Bots</p></li><li><p>1% of traffic to e-commerce sites was from AI Bots</p></li><li><p>63% of login attempt requests across our network were from bots</p></li><li><p>22% of bot activity originated from residential proxy networks</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Lo8y9dRRZGKujWxwfxBaf/4be5c93638b3dabe7acc823adac5ed6f/image3.png" />
          </figure><p>What are all of these bots up to? </p>
    <div>
      <h3><b>AI bots</b></h3>
      <a href="#ai-bots">
        
      </a>
    </div>
    <p>This year marked a breakthrough for AI-driven bots, agents, and models, with their impact spilling into Black Friday. AI bots went from zero to one, now making up 1% of all bot traffic on e-commerce sites. </p><p>AI-driven bots generated 29 billion requests on Black Friday alone, with Meta-external, Claudebot, and Amazonbot leading the pack. According to their owners, these bots crawl to augment training data sets for Llama, Claude, and Alexa, respectively. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jzXxH2fR5KWRWc252SbNH/ad40f4a4cb8cf1fdc36558dc4324b717/image10.png" />
          </figure><p>We looked at e-commerce sites specifically to find out if these bots were treating all content equally. While Meta-External and Amazonbot were still in the Top 3 of AI bots reaching e-commerce sites, Bytedance’s Bytespider crawled the most shopping sites.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jOsfPtliY01FSNVVgb2bZ/f444cee15d3818d24ef2e3e53731896c/image2.png" />
          </figure>
    <div>
      <h3><b>Account Takeover (ATO) bots</b></h3>
      <a href="#account-takeover-ato-bots">
        
      </a>
    </div>
    <p>In addition to <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scraping</a>, crawling, and shopping, bots also targeted customer accounts on Black Friday. We saw 14.1 billion requests from bots to /login endpoints, accounting for 63% of that day’s login attempts. </p><p>While this number seems high, intuitively it makes sense, given that humans don’t log in to accounts every day, but bots definitely try to crack accounts every day. Interestingly, while humans only accounted for 36% of traffic to login pages on Black Friday, this number was <b><i>up 7-8% compared to the prior month</i></b>. This suggests that more shoppers logged in to capitalize on deals and discounts on Black Friday than in preceding weeks. Human logins peaked at around 40% of all traffic to login pages on the Monday before Thanksgiving, and again on Cyber Monday. </p><p>Separately, we also saw a 37% increase in leaked passwords used in login requests compared to the prior month. During Birthday Week, we shared <a href="https://blog.cloudflare.com/a-safer-internet-with-cloudflare/#account-takeover-detection"><u>how 65% of internet users are at risk of ATO due to re-use of leaked passwords</u></a>. This surge, coinciding with heightened human and bot traffic, underscores a troubling pattern: both humans and bots continue to depend on common and compromised passwords, amplifying security risks.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1DDDcH41X7fhmfk2VdNaNV/de40dce017459752ba366e5fea331377/image7.png" />
          </figure><p><b>Proxy bots: </b>Regardless of whether they’re crawling your content or hoarding your wares, 22% of bot traffic originated from <a href="https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/"><u>residential proxy networks</u></a>. This obfuscation makes these requests look like legitimate customers browsing from their homes rather than large cloud networks. The large pool of IP addresses and the diversity of networks pose a challenge to traditional bot defense mechanisms that rely on IP reputation and rate limiting. </p><p>Moreover, the diversity of IP addresses enables attackers to rotate through them indefinitely. This shrinks the window of opportunity for bot detection systems to effectively detect and stop the attacks. The use of residential proxies is a trend we have been tracking for months now, and Black Friday traffic was within the range we’ve seen throughout this year.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Rk3c2sq0kLpl9Y71NdL8K/1186c343f760dda1f64adb4dd8716170/image1.png" />
          </figure><p>If you’re using Cloudflare’s <a href="https://www.cloudflare.com/en-gb/application-services/products/bot-management/"><u>Bot Management</u></a>, your site is already protected from these bots since we update our bot score based on these types of network fingerprints. In May 2024, we <a href="https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/"><u>introduced</u></a> our latest model optimized for detecting residential proxies. Early results show promising declines in this type of activity, indicating that bot operators may be reducing their reliance on residential proxies. </p>
    <div>
      <h2>The Christmas “Yule” log: how customers can protect themselves</h2>
      <a href="#the-christmas-yule-log-how-customers-can-protect-themselves">
        
      </a>
    </div>
    <p>29% of all traffic on Black Friday was Grinch Bots. To keep Grinch Bots at bay, businesses need year-round bot protection and proactive strategies tailored to the unique challenges of holiday shopping.</p><p>Here are 4 yules (aka “rules”) for the season:</p><p><b>(1) Block bots</b>: 22% of bot traffic originated from residential proxy networks. Our bot management <a href="https://blog.cloudflare.com/residential-proxy-bot-detection-using-machine-learning/"><u>score automatically adjusts based on these network signals</u></a>. Use our <a href="https://developers.cloudflare.com/bots/concepts/bot-score/"><u>Bot Score</u></a> in rules to challenge sensitive actions. </p>
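    <p>For example, a custom rule along these lines (the threshold and path here are illustrative, not a recommendation) targets likely-automated requests on a sensitive endpoint:</p>
            <pre><code>(cf.bot_management.score lt 30 and http.request.uri.path contains "/checkout")</code></pre>
            <p>Paired with the Managed Challenge action, an expression like this lets real customers proceed while filtering out automation.</p>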
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yNb5lIJ0kUuUkwrwRIFV4/21bd315b382663d5601d4629c8bec3a2/image8.png" />
          </figure><p><b>(2) Monitor potential Account Takeover (ATO) attacks</b>: Bots often test <a href="https://blog.cloudflare.com/helping-keep-customers-safe-with-leaked-password-notification/"><u>stolen credentials</u></a> in the months leading up to Cyber Week to refine their strategies. Re-use of stolen credentials makes businesses even more vulnerable. Our account abuse detections help customers monitor login paths for leaked credentials and traffic anomalies.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20ICo3eYcY6BTdqAJE5Kj9/90b1373280aaeef54dfbed3c41e5ea8c/Screenshot_2024-12-21_at_17.52.17.png" />
          </figure><p>Check out more examples of related <a href="https://developers.cloudflare.com/waf/detections/leaked-credentials/examples/"><u>rules</u></a> you can create.</p><p><b>(3) Rate limit account and purchase paths: </b>Apply rate-limiting <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/best-practices/"><u>best practices</u></a> on critical application paths. These include limiting new account access/creation from previously seen IP addresses, and leveraging other network fingerprints, to help prevent promo code abuse and inventory hoarding, as well as identifying account takeover attempts through the application of <a href="https://developers.cloudflare.com/bots/concepts/detection-ids/"><u>detection IDs</u></a> and <a href="https://developers.cloudflare.com/waf/managed-rules/check-for-exposed-credentials/"><u>leaked credential checks</u></a>.</p><p><b>(4) Block AI bots</b> abusing shopping features to maintain fair access for human users. If you’re using Cloudflare, you can quickly <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI bots</a> by <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>enabling our automatic AI bot blocking</u></a> feature.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vfiWs6fEkloTwGblzkltR/53c9a89a323a4bb9107ea8f3e83e9b49/image11.png" />
          </figure>
    <div>
      <h2>What to expect in 2025? </h2>
      <a href="#what-to-expect-in-2025">
        
      </a>
    </div>
    <p>Over the next year, e-commerce sites should expect to see more humans shopping for longer periods. As sale periods lengthen (like they did in 2024), we expect more peaks in human activity on e-commerce sites across November and December. This is great for consumers and great for merchants.</p><p>More AI bots and agents will be integrated into e-commerce journeys in 2025. AI bots will not only be crawling sites for training data, but will also integrate into the shopping experience. AI bots did not exist in 2021, but now make up 1% of all bot traffic. This is only the tip of the iceberg, and their growth will explode in the next year. We expect this to pose new risks as bots mimic and act on behalf of humans.</p><p>More sophisticated automation through network, device, and cookie cycling will also become a bigger threat. Bot operators will continue to employ advanced evasion tactics like rotating devices, IP addresses, and cookies to bypass detection.</p><p>Grinch Bots are evolving, and regulation may be slow, but businesses don’t have to face them alone. We remain resolute in our mission to help build a better Internet ... and holiday shopping experience.</p><p>Even though the holiday season is closing out soon, bots are never on vacation. It’s never too late or too early to start protecting your customers and your business from grinches that work all year round.</p><p>Wishing you all happy holidays and a bot-free new year!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1E192osHHmrXszsfbE2fIa/f7d04ea59ef6355eb78fed4e2fa15bd4/image6.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Grinch]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Application Security]]></category>
            <category><![CDATA[Application Services]]></category>
            <guid isPermaLink="false">5yiWFM9NumXY8HARoEP8x6</guid>
            <dc:creator>Avi Jaisinghani</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
            <dc:creator>Brian Mitchell</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Everywhere with the WAF Rule Builder Assistant, Cloudflare Radar AI Insights, and updated AI bot protection]]></title>
            <link>https://blog.cloudflare.com/bringing-ai-to-cloudflare/</link>
            <pubDate>Fri, 27 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ This year for Cloudflare’s birthday, we’ve extended our AI Assistant capabilities to help you build new WAF rules, added new AI bot & crawler traffic insights to Radar, and given customers new AI bot  ]]></description>
            <content:encoded><![CDATA[ <p>The continued growth of AI has fundamentally changed the Internet over the past 24 months. AI is increasingly ubiquitous, and Cloudflare is leaning into the new opportunities and challenges it presents in a big way. This year for Cloudflare’s birthday, we’ve extended our AI Assistant capabilities to help you build new WAF rules, added AI bot traffic insights on Cloudflare Radar, and given customers new <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">AI bot blocking capabilities</a>.  </p>
    <div>
      <h2>AI Assistant for WAF Rule Builder</h2>
      <a href="#ai-assistant-for-waf-rule-builder">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5RYC4wmCDbs0axY92FfkFk/a728906cb6a902dd1c78ec93a0f650c2/BLOG-2564_1.png" />
          </figure><p>At Cloudflare, we’re always listening to your feedback and striving to make our products as user-friendly and powerful as possible. One area where we've heard your feedback loud and clear is in the complexity of creating custom and rate-limiting rules for our Web Application Firewall (WAF). With this in mind, we’re excited to introduce a new feature that will make rule creation easier and more intuitive: the AI Assistant for WAF Rule Builder. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7avSjubqlfg7L8ymKEztgk/7c3c31e50879ec64bccc384bdfcd5524/BLOG-2564_2.png" />
          </figure><p>By simply entering a natural language prompt, you can generate a custom or rate-limiting rule tailored to your needs. For example, instead of manually configuring complex rule-matching criteria, you can now type something like, "Match requests with low bot score," and the assistant will generate the rule for you (an example of what that might produce is sketched below). It’s not about creating the perfect rule in one step, but giving you a strong foundation that you can build on. </p><p>The assistant will be available in the Custom and Rate Limit Rule Builder for all WAF users. We’re launching this feature in Beta for all customers, and we encourage you to give it a try. We’re looking forward to hearing your feedback (via the UI itself) as we continue to refine and enhance this tool to meet your needs.</p>
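    <p>As an illustration (this is our guess at plausible output, not a captured response from the assistant), that prompt maps to a short expression in the rules language, which you can then refine, for instance to exempt verified bots:</p>
            <pre><code>(cf.bot_management.score lt 30 and not cf.bot_management.verified_bot)</code></pre>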
    <div>
      <h2>AI bot traffic insights on Cloudflare Radar</h2>
      <a href="#ai-bot-traffic-insights-on-cloudflare-radar">
        
      </a>
    </div>
    <p>AI platform providers use bots to crawl and scrape websites, vacuuming up data to use for model training. This is frequently done without the permission of, or a business relationship with, the content owners and providers. In July, Cloudflare urged content owners and providers to <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>“declare their AIndependence”</u></a>, providing them with a way to block AI bots, <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scrapers</a>, and crawlers with a single click. In addition to this so-called “easy button” approach, sites can provide more specific guidance to these bots about what they are and are not allowed to access through directives in a <a href="https://www.cloudflare.com/en-gb/learning/bots/what-is-robots-txt/"><u>robots.txt</u></a> file. Regardless of whether a customer chooses to block or allow requests from AI-related bots, Cloudflare has insight into request activity from these bots, and associated traffic trends over time.</p><p>Tracking traffic trends for AI bots can help us better understand their activity over time — which are the most aggressive and have the highest volume of requests, which launch crawls on a regular basis, etc. The new <a href="https://radar.cloudflare.com/traffic#ai-bot-crawler-traffic"><b><u>AI bot &amp; crawler traffic </u></b><u>graph on Radar’s Traffic page</u></a> provides insight into these traffic trends gathered over the selected time period for the top known AI bots. The associated list of bots tracked here is based on the <a href="https://github.com/ai-robots-txt/ai.robots.txt"><u>ai.robots.txt list</u></a>, and will be updated with new bots as they are identified. <a href="https://developers.cloudflare.com/api/operations/radar-get-ai-bots-timeseries-group-by-user-agent"><u>Time series</u></a> and <a href="https://developers.cloudflare.com/api/operations/radar-get-ai-bots-summary-by-user-agent"><u>summary</u></a> data is available from the Radar API as well. (Traffic trends for the full set of AI bots &amp; crawlers <a href="https://radar.cloudflare.com/explorer?dataSet=ai.bots"><u>can be viewed in the new Data Explorer</u></a>.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5tYefQaBhTPYpqZPtE6KPu/f60694d0b24de2acba13fe0944589885/BLOG-2564_3.png" />
          </figure>
    <div>
      <h2>Blocking more AI bots</h2>
      <a href="#blocking-more-ai-bots">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UiFu8l6K4Pm3ulxTK3XU0/541d109e29a9ae94e4792fdf94f7e4aa/BLOG-2564_4.png" />
          </figure><p>For Cloudflare’s birthday, we’re following up on our previous blog post, <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><i><u>Declaring Your AIndependence</u></i></a>, with an update on the new detections we’ve added to stop AI bots. Customers who haven’t already done so can simply <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/bots/configure"><u>click the button</u></a> to block AI bots to gain more protection for their website. </p>
    <div>
      <h3>Enabling dynamic updates for the AI bot rule</h3>
      <a href="#enabling-dynamic-updates-for-the-ai-bot-rule">
        
      </a>
    </div>
    <p>The old button allowed customers to block <i>verified</i> AI crawlers, those that respect robots.txt and crawl rate, and don’t try to hide their behavior. We’ve added new crawlers to that list, but we’ve also expanded the previous rule to include 27 signatures (and counting) of AI bots that <i>don’t </i>follow the rules. We want to say “thank you” to everyone who took the time to use our “<a href="https://docs.google.com/forms/d/14bX0RJH_0w17_cAUiihff5b3WLKzfieDO4upRlo5wj8"><u>tip line</u></a>” to point us towards new AI bots. These tips have been extremely helpful in finding some bots that would not have been on our radar so quickly. </p><p>Each bot we’ve added is also being added to our “Definitely automated” definition. So, if you’re a self-service plan customer using <a href="https://blog.cloudflare.com/super-bot-fight-mode/"><u>Super Bot Fight Mode</u></a>, you’re already protected. Enterprise Bot Management customers will see more requests shift from the “Likely Bot” range to the “Definitely automated” range, which we’ll discuss more below.</p><p>Under the hood, we’ve converted this rule logic to a <a href="https://developers.cloudflare.com/waf/managed-rules/"><u>Cloudflare managed rule</u></a> (the same framework that powers our WAF). This enables our security analysts and engineers to safely push updates to the rule in real-time, similar to how new WAF rule changes are rapidly delivered to ensure our customers are protected against the latest CVEs. If you haven’t logged back into the Bots dashboard since the previous version of our AI bot protection was announced, click the button again to update to the latest protection. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2tI8Yqxt1S0UPapImb32J4/6cb9e9bf423c370383edb820e5722929/BLOG-2564_5.png" />
          </figure>
    <div>
      <h3>The impact of new fingerprints on the model </h3>
      <a href="#the-impact-of-new-fingerprints-on-the-model">
        
      </a>
    </div>
    <p>One hidden beneficiary of fingerprinting new AI bots is our ML model. <a href="https://blog.cloudflare.com/cloudflare-bot-management-machine-learning-and-more/"><u>As we’ve discussed before</u></a>, our global ML model uses supervised machine learning and greatly benefits from more sources of labeled bot data. Below, you can see how well our ML model recognized these requests as automated, before and after we updated the button, adding new rules. To keep things simple, we have shown only the top 5 bots by the volume of requests on the chart. With the introduction of our new managed rule, we have observed an improvement in our detection capabilities for the majority of these AI bots. Button v1 represents the old option that let customers block only verified AI crawlers, while Button v2 is the newly introduced feature that includes managed rule detections.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CZVGyDCp9ZtMrZdIi49fE/aacd04d240e9348b5a9b65bad4b470e2/BLOG-2564_6.jpg" />
          </figure><p>So how did we make our detections more robust? As we have mentioned before, sometimes <a href="https://blog.cloudflare.com/cloudflare-bot-management-machine-learning-and-more/"><i><u>a single attribute can give a bot away</u></i></a>. We developed a sophisticated set of heuristics tailored to these AI bots, enabling us to effortlessly and accurately classify them as such. Although our ML model was already detecting the vast majority of these requests, the integration of additional heuristics has resulted in a noticeable increase in detection rates for each bot, ensuring we score every request correctly 100% of the time. Transitioning from a purely machine learning approach to incorporating heuristics offers several advantages, including faster detection times and greater certainty in classification. While deploying a machine learning model is complex and time-consuming, new heuristics can be created in minutes. </p><p>The initial launch of the AI bots block button was well-received and is now used by over 133,000 websites, with significant adoption even among our Free tier customers. The newly updated button, launched on August 20, 2024, is rapidly gaining traction. Over 90,000 zones have already adopted the new rule, with approximately 240 new sites integrating it every hour. Overall, we are now helping to protect the intellectual property of more than 146,000 sites from AI bots, and we are currently blocking 66 million requests daily with this new rule. Additionally, we’re excited to announce that support for configuring AI bots protection via Terraform will be available by the end of this year, providing even more flexibility and control for managing your bot protection settings.</p>
    <div>
      <h3>Bot behavior</h3>
      <a href="#bot-behavior">
        
      </a>
    </div>
    <p>With the enhancements to our detection capabilities, it is essential to assess the impact of these changes on bot activity on the Internet. Since the launch of the updated AI bots block button, we have been closely monitoring for any shifts in bot activity and adaptation strategies. The most basic fingerprinting technique we use to identify AI bots is looking for simple user-agent matches. User-agent matches are important to monitor because they indicate the bot is transparently announcing who it is when crawling a website. </p><p>The graph below shows the volume of traffic we label as AI bots over the past two months. The blue line indicates the daily request count, while the red line represents the monthly average number of requests. In the past two months, we have seen an average reduction of nearly 30 million requests, with a decrease of 40 million in the most recent month. This decline coincides with the release of Button v1 and Button v2. Our hypothesis is that with the new AI bots blocking feature, Cloudflare is blocking a majority of these bots, which is discouraging them from crawling. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/23ULxmxBIRskEONlWVIvlA/1dbd3d03239047492c2d4f7307217d97/BLOG-2564_7.jpg" />
          </figure><p>This hypothesis is supported by the observed decline in requests from several top AI crawlers. Specifically, the Bytespider bot reduced its daily requests from approximately 100 million to just 50 million between the end of June and the end of August (see graph below). This reduction could be attributed to several factors, including our new AI bots block button and changes in the crawler's strategy.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UwtyZSXULrVzIqLcICGKd/fdf02c15d17e1d7ed248ba5f8a97eb54/BLOG-2564_8.jpg" />
          </figure><p>We have also observed an increase in the accountability of some AI crawlers. These crawlers are now more frequently identifying themselves with their declared user agents, reflecting a shift towards more transparent and responsible behavior. Notably, there has been a dramatic surge in the number of requests from the Perplexity user agent. This increase might be linked to <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">previous accusations</a> that Perplexity did not properly present its user agent, which could have prompted a shift in their approach to ensure better identification and compliance. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Hq2vUMqqdNCyaxNTCg3JD/610ad53d57203203c5176229245c8086/BLOG-2564_9.jpg" />
          </figure><p>These trends suggest that our updates are likely affecting how AI crawlers interact with content. We will continue to monitor AI bot activity to help users control who accesses their content and how. By keeping a close watch on emerging patterns, we aim to provide users with the tools and insights needed to make informed decisions about managing their traffic. </p>
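<p>For readers curious how a trend line like the monthly average in the graphs above can be computed, here is a generic sketch using a 30-day rolling mean over daily request counts. The data is synthetic and the method is an assumption for illustration, not the exact methodology behind our charts.</p>
<pre><code># Sketch: smoothing daily AI bot request counts with a 30-day rolling mean.
# The counts below are synthetic; real values would come from request logs.
import numpy as np
import pandas as pd

days = pd.date_range("2024-07-01", "2024-08-31", freq="D")
daily_requests = pd.Series(
    np.random.default_rng(0).integers(60_000_000, 120_000_000, len(days)),
    index=days,
    name="ai_bot_requests",
)

# Rolling mean over the trailing 30 days, analogous to a monthly-average line.
monthly_avg = daily_requests.rolling(window=30, min_periods=1).mean()
print(monthly_avg.tail())
</code></pre>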
    <div>
      <h2>Wrap up</h2>
      <a href="#wrap-up">
        
      </a>
    </div>
    <p>We’re excited to continue to explore the AI landscape, whether we’re finding more ways to make the Cloudflare dashboard more useful or identifying new threats to guard against. Our AI insights on Radar update in near real-time, so please join us in watching as new trends emerge and discussing them in the <a href="https://community.cloudflare.com/"><u>Cloudflare Community</u></a>. </p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Generative AI]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">6HqKUMoXg0wFIQg9howLMX</guid>
            <dc:creator>Adam Martinetti</dc:creator>
            <dc:creator>Harsh Saxena</dc:creator>
            <dc:creator>Gauri Baraskar</dc:creator>
            <dc:creator>Carlos Azevedo</dc:creator>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Start auditing and controlling the AI models accessing your content]]></title>
            <link>https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/</link>
            <pubDate>Mon, 23 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare customers on any plan can now audit and control how AI models access the content on their site.
 ]]></description>
            <content:encoded><![CDATA[ <p>Site owners have lacked the ability to determine how AI services use their content for training or other purposes. Today, Cloudflare is releasing a set of tools to make it easy for site owners, creators, and publishers to take back control over how their content is made available to AI-related bots and crawlers. All Cloudflare customers can now audit and control how AI models access the content on their site.</p><p>This launch starts with a detailed analytics view of the AI services that crawl your site and the specific content they access. Customers can review activity by AI provider, by type of bot, and which sections of their site are most popular. This data is available to every site on Cloudflare and does not require any configuration.</p><p>We expect that this new level of visibility will prompt teams to make a decision about their exposure to AI crawlers. To help give them time to make that decision, Cloudflare now provides <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>a one-click option</u></a> in our dashboard to immediately <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block</a> any AI crawlers from accessing any site. Teams can then use this “pause” to decide if they want to allow specific AI providers or types of bots to proceed. Once that decision is made, those administrators can use new filters in the Cloudflare dashboard to enforce those policies in just a couple of clicks.</p><p>Some customers have already made decisions to negotiate deals directly with AI companies. Many of those contracts include terms about the frequency of scanning and the type of content that can be accessed. We want those publishers to have the tools to measure the implementation of these deals.  As part of today’s announcement, Cloudflare customers can now generate a report with a single click that can be used to audit the activity allowed in these arrangements.</p><p>We also think that sites of any size should be able to determine how they want to be compensated for the usage of their content by AI models. Today’s announcement previews a new Cloudflare monetization feature which will give site owners the tools to set prices, control access, and capture value for the scanning of their content.</p>
    <div>
      <h3>What is the problem?</h3>
      <a href="#what-is-the-problem">
        
      </a>
    </div>
    <p>Until recently, bots and scrapers on the Internet mostly fell into two clean categories: good and bad. Good bots, like search engine crawlers, helped audiences discover your site and drove traffic to you. Bad bots tried to take down your site, jump the queue ahead of your customers, or scrape competitive data. We built the <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Cloudflare Bot Management</u></a> platform to give you the ability to distinguish between those two broad categories and to allow or block them.</p><p>The rise of AI Large Language Models (LLMs) and other generative tools created a murkier third category. Unlike malicious bots, the crawlers associated with these platforms are not actively trying to knock your site offline or to get in the way of your customers. They are not trying to steal sensitive data; they just want to scan what is already public on your site.</p><p>However, unlike helpful bots, these AI-related crawlers do not necessarily drive traffic to your site. AI Data Scraper bots scan the content on your site to train new LLMs. Your material is then put into a kind of blender, mixed up with other content, and used to answer questions from users without attribution or the need for users to visit your site. Another type of crawler, AI Search Crawler bots, scan your content and attempt to cite it when responding to a user’s search. The downside is that those users might just stay inside that interface, rather than visit your site, because an answer is assembled on the page in front of them.</p><p>This murkiness leaves site owners with a hard decision to make. The value exchange is unclear. And site owners are at a disadvantage while they play catch up. Many sites allowed these AI crawlers to scan their content because these crawlers, for the most part, looked like “good” bots — only for the result to mean less traffic to their site as their content is repackaged in AI-written answers.</p><p>We believe this poses a risk to an open Internet. Without the ability to control scanning and realize value, site owners will be discouraged from launching or maintaining Internet properties. Creators will stash more of their content behind paywalls, and the largest publishers will strike direct deals. AI model providers will in turn struggle to find and access the long tail of high-quality content on smaller sites.</p><p>Both sides lack the tools to create a healthy, transparent exchange of permissions and value. Starting today, Cloudflare equips site owners with the services they need to begin fixing this. We have broken out a series of steps we recommend all of our customers follow to get started.</p>
    <div>
      <h3>Step 1: Understand how AI models use your site</h3>
      <a href="#step-1-understand-how-ai-models-use-your-site">
        
      </a>
    </div>
    <p>Every site on Cloudflare now has access to a new analytics view that summarizes the crawling behavior of popular and known AI services. You can begin reviewing this information to understand the AI scanning of your content by selecting a site in your dashboard and navigating to the <b>AI Crawl Control </b><i><b>(formerly AI Audit)</b></i> tab in the left-side navigation bar.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7FknDZw445xutqps2fSSJt/597fca585cf0e7086ea5db567f258714/BLOG-2509_2.png" />
          </figure><p>When AI model providers access content on your site, they rely on automated tools called “bots” or “crawlers” to scan pages. The bot will request the content of your page, capture the response, and store it as part of a future training data set or remember it for AI search engine results.</p><p>These bots often identify themselves to your site (and Cloudflare’s network) by including an HTTP header in their request called a <code>User Agent</code>. In some cases, however, a bot from one of these AI services might not send the header, and Cloudflare instead relies on other heuristics, like IP address or behavior, to identify it.</p><p>When the bot does identify itself, the header will contain a string of text with the bot name. For example, <a href="https://support.anthropic.com/en/articles/8896518-does-anthropic-crawl-data-from-the-web-and-how-can-site-owners-block-the-crawler"><u>Anthropic sometimes crawls sites</u></a> on the Internet with a bot called <code>ClaudeBot</code>. When that service requests the content of a page from your site on Cloudflare, Cloudflare logs the <code>User Agent</code> as <code>ClaudeBot</code>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/55IhilRHrLYZo4kvLSPBuI/7a88476fb443e09a28fdbf7e9abd5b8d/BLOG-2509_3.png" />
          </figure><p>Cloudflare takes the logs gathered from visits to your site and looks for user agents that match known AI bots and crawlers. We summarize the activity of individual crawlers and also provide you with filters to review just the activities of specific AI platforms. Many AI firms rely on multiple crawlers that serve distinct purposes. When <a href="https://platform.openai.com/docs/bots"><u>OpenAI scans sites</u></a> for data scraping, they rely on <code>GPTBot</code>, but when they crawl sites for their new AI search engine, they use <code>OAI-SearchBot</code>.</p><p>And those differences matter. Scanning from different bot types can impact traffic to your site or the attribution of your content. AI search engines will often link to sites as part of their response, potentially sending visitors to your destination. In that case, you might be open to those types of bots crawling your Internet property. AI Data Scrapers, on the other hand, just exist to read as much of the Internet as possible to train future models or improve existing ones.</p><p>We think that you deserve to know why a bot is crawling your site in addition to when and how often. Today’s release gives you a filter to review bot activity by categories like AI Data Scraper, AI Search Crawler, and Archiver.</p>
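<p>To illustrate why the operator and the purpose of a crawler both matter, the sketch below maps a few publicly documented user agent tokens to a provider and a category. The catalog, including the category labels, is an assumption assembled for this example and is far from exhaustive.</p>
<pre><code># Sketch: distinguishing AI crawlers by operator and purpose, so that
# AI search crawlers can be treated differently from data scrapers.
# This catalog is illustrative and incomplete.
BOT_CATALOG = {
    "GPTBot":        ("OpenAI", "AI Data Scraper"),
    "OAI-SearchBot": ("OpenAI", "AI Search Crawler"),
    "ClaudeBot":     ("Anthropic", "AI Data Scraper"),
    "Bytespider":    ("ByteDance", "AI Data Scraper"),
}

def classify(user_agent: str):
    """Return (operator, category) for a declared crawler, or None."""
    for token, info in BOT_CATALOG.items():
        if token.lower() in user_agent.lower():
            return info
    return None

print(classify("Mozilla/5.0 (compatible; OAI-SearchBot/1.0)"))
# ('OpenAI', 'AI Search Crawler')
</code></pre>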
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/vd66ddf4Lii8LEr8Tt3Nt/ff85253f1d6894d4086a6696e14d250e/BLOG-2509_4.png" />
          </figure><p>With this data, you can begin analyzing how AI models access your site. That information might be overwhelming, especially if your team has not yet had time to decide how you want to handle AI scanning of your content. If you find yourself unsure how to respond, proceed to Step 2.</p>
    <div>
      <h3>Step 2: Give yourself a pause to decide what to do next</h3>
      <a href="#step-2-give-yourself-a-pause-to-decide-what-to-do-next">
        
      </a>
    </div>
    <p>We talked to several organizations that know their sites are valuable destinations for AI crawlers, but they do not yet know what to do about it. These teams need a “time out” so they can make an informed decision about how they make their data available to these services.</p><p>Cloudflare gives you that easy button right now. Any customer on any plan can choose to block all AI bots and crawlers to give themselves a pause while they decide what they do want to allow.</p><p>To implement that option, navigate to the Bots section under the Security tab of the Cloudflare Dashboard. Follow the blue link in the top right corner to configure how Cloudflare’s proxy handles bot traffic. Next, toggle the button in the “Block AI Scrapers and Crawlers” card to the “On” position.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5RQm7Zofbmkzekclq5OWsa/921660fa9b45018ab2ecf402c104960b/BLOG-2509_5.png" />
          </figure><p>The one-click option blocks known AI-related bots and crawlers from accessing your site based on a list that Cloudflare maintains. With a block in place, you and your team can make a less rushed decision about what to do next with your content.</p>
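<p>Teams that manage configuration programmatically may prefer to drive the same setting through Cloudflare’s API. The sketch below is an assumption based on the publicly documented zone-level bot management endpoint; the field name and accepted values shown here should be verified against the current API reference before use.</p>
<pre><code># Sketch (assumptions flagged): toggling the AI bots block via the API.
# The /bot_management endpoint is publicly documented, but the
# "ai_bots_protection" field and its values are assumptions here;
# verify them in the current Cloudflare API reference.
import os
import requests

zone_id = os.environ["CF_ZONE_ID"]      # your zone identifier
api_token = os.environ["CF_API_TOKEN"]  # an API token with zone edit rights

resp = requests.put(
    f"https://api.cloudflare.com/client/v4/zones/{zone_id}/bot_management",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"ai_bots_protection": "block"},  # assumed field name and value
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
</code></pre>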
    <div>
      <h3>Step 3: Control the bots you do want to allow</h3>
      <a href="#step-3-control-the-bots-you-do-want-to-allow">
        
      </a>
    </div>
    <p>The pause button buys time for your team to decide what you want the relationship to be between these crawlers and your content. Once your team has reached a decision, you can begin relying on Cloudflare’s network to implement that policy.</p><p>If that decision is “we are not going to allow any crawling,” then you can leave the block button discussed above toggled to “On”. If you want to allow some selective scanning, today’s release provides you with options to permit certain types of bots, or just bots from certain providers, to access your content.</p><p>For some teams, the decision will be to allow the bots associated with AI search engines to scan their Internet properties because those tools can still drive traffic to the site. Other organizations might sign deals with a specific model provider, and they want to allow any type of bot from that provider to access their content. Customers can now navigate to the WAF section of the Cloudflare dashboard to implement these types of policies.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7rtP61D3cneqk5pqQn1lFO/d297e6a10354b4dab193c67549252cb6/BLOG-2509_6.png" />
          </figure><p>Administrators can also create rules that would, for example, block all AI bots except for those from a specific platform. Teams can deploy these types of filters if they are skeptical of most AI platforms but comfortable with one AI model provider and its policies. These types of rules can also be used to implement contracts where a site owner has negotiated to allow scanning from a single provider. The site administrator would need to create a rule to block all types of AI-related bots and then add an exception that allows the specific bot or bots from their AI partner.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dc8LlDlamjuWXsRQgx45p/9212bba47cf6df8da8b57b3bac7d38fe/BLOG-2509_7.png" />
          </figure><p>In addition to applying these new filters, we also recommend that customers consider updating their Terms of Service to cover this new use case. We have <a href="https://developers.cloudflare.com/bots/reference/sample-terms/"><u>documented the steps</u></a> we suggest that “good citizen” bots and crawlers take with respect to robots.txt files. As an extension of those best practices, we are adding a new section to that documentation where we provide a sample Terms of Service section that site owners can consider using to establish that AI scanning needs to follow the policies defined in their robots.txt file.</p>
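<p>Returning to the allowlist pattern described above, the decision logic of a “block all AI bots except a contracted partner” rule can be pictured with the short sketch below. It is plain Python modeling the policy, not the syntax of Cloudflare’s rule language, and the provider name is a placeholder.</p>
<pre><code># Sketch: policy logic for "block all AI bots except one partner".
# In production this would be expressed as a WAF rule in the dashboard;
# this code only models the decision. Provider names are placeholders.
ALLOWED_AI_PROVIDERS = {"ExampleAIPartner"}  # hypothetical contracted partner

def decide(is_ai_bot: bool, provider: str) -> str:
    if not is_ai_bot:
        return "allow"   # ordinary visitors are unaffected
    if provider in ALLOWED_AI_PROVIDERS:
        return "allow"   # the negotiated exception
    return "block"       # default posture for AI crawlers

print(decide(True, "ExampleAIPartner"))   # allow
print(decide(True, "SomeOtherAICo"))      # block
print(decide(False, ""))                  # allow
</code></pre>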
    <div>
      <h3>Step 4: Audit your existing scanning arrangements</h3>
      <a href="#step-4-audit-your-existing-scanning-arrangements">
        
      </a>
    </div>
    <p>An increasing number of sites are signing agreements directly with model providers to license consumption of their content in exchange for payment. Many of those deals contain provisions that determine the rate of crawling for certain sections or entire sites. Cloudflare’s AI Crawl Control tab provides you with the tools to monitor those kinds of contracts.</p><p>The table at the bottom of the AI Crawl Control tool now lists the most popular content on your site, ranked by the number of scans during the time period selected in the filter at the top of the page. You can click the <code>Export to CSV</code> button to quickly download a file with the details presented here, which you can use to discuss any discrepancies with the AI platform that you are allowing to access your content.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3e0OE5WAARgyjcJKOOmnFT/f427935bd50b215bb603a96e3d529743/BLOG-2509_8.png" />
          </figure><p>Today, the data available to you represents the key metrics we have heard matter most to customers in these kinds of arrangements: requests against certain pages and requests against the entire site.</p>
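<p>Once exported, a few lines of analysis can turn that CSV into a contract audit. The column names in the sketch below (“bot” and “requests”) are assumptions for illustration; match them to the headers in your actual export, and the crawl budget is a hypothetical figure.</p>
<pre><code># Sketch: auditing exported crawl data against a contracted rate.
# Column names and the budget figure are assumptions; adjust them to
# match your export and your agreement.
import csv

CONTRACTED_LIMIT = 10_000  # hypothetical per-period crawl budget

totals = {}
with open("ai_crawl_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["bot"]] = totals.get(row["bot"], 0) + int(row["requests"])

for bot, count in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    status = "over budget" if count > CONTRACTED_LIMIT else "ok"
    print(f"{bot}: {count} requests ({status})")
</code></pre>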
    <div>
      <h3>Step 5: Prepare your site to capture value from AI scanning</h3>
      <a href="#step-5-prepare-your-site-to-capture-value-from-ai-scanning">
        
      </a>
    </div>
    <p>Not everyone has the time or contacts to negotiate deals with AI companies. Up to this point, only the largest publishers on the Internet have had the resources to set those kinds of terms and get paid for their content.</p><p>Everyone else has been left with two basic choices on how to handle their data: block all scanning or allow unrestricted access. Today’s releases give content creators more visibility and control than just those two options, but the long tail of sites on the Internet still lacks a pathway to monetization.</p><p>We think that sites of any size should be fairly compensated for the use of their content. Cloudflare plans to launch a new component of our dashboard that goes beyond just blocking and analyzing crawls. Site owners will have the ability to set a price for their site, or sections of their site, and to then charge model providers based on their scans and the price they have set. We’ll handle the rest so that you can focus on creating great content for your audience.</p><p>The fastest way to get ready to capture value through this new component is to make sure your sites use Cloudflare’s network. We plan to invite sites to participate in the beta based on the date they first joined Cloudflare. Interested in being notified when this is available? <a href="http://www.cloudflare.com/lp/ai-value-tool-waitlist"><u>Let us know here</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/581ZPDNlMumBmFzrVgnG9D/d2b9d5a2b96d572239d00a39da79c77a/BLOG-2509_9.png" />
          </figure> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[LLM]]></category>
            <guid isPermaLink="false">47pmgthPjmg2ZeYqTNmU8f</guid>
            <dc:creator>Sam Rhea</dc:creator>
        </item>
        <item>
            <title><![CDATA[Declare your AIndependence: block AI bots, scrapers and crawlers with a single click]]></title>
            <link>https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/</link>
            <pubDate>Wed, 03 Jul 2024 13:00:26 GMT</pubDate>
            <description><![CDATA[ To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/D59Fq5QkC4J7Jjo5lM4Fm/fcc55b665562d321bd84f88f53f46b22/image7-1.png" />
            
            </figure><p>To help preserve a safe Internet for content creators, we’ve just launched a brand new “easy button” to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI bots</a>. It’s available for all customers, including those on our free tier.</p><p>The popularity of <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/">generative AI</a> has made demand skyrocket for the content used to train models or run inference, and although some AI companies clearly identify their web scraping bots, not all of them are transparent. Google reportedly <a href="https://www.reuters.com/technology/reddit-ai-content-licensing-deal-with-google-sources-say-2024-02-22/">paid $60 million a year</a> to license Reddit’s user-generated content, and most recently, Perplexity has been <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">accused of impersonating legitimate visitors</a> in order to scrape content from websites. The value of original content in bulk has never been higher.</p><p>Last year, <a href="/ai-bots">Cloudflare announced the ability for customers to easily block AI bots</a> that behave well. These bots follow <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/">robots.txt</a>, and don’t use unlicensed content to train their models or run inference for <a href="https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/">RAG</a> applications using website data. Even though these AI bots follow the rules, Cloudflare customers overwhelmingly opt to <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">block them</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5aAA77Hl9OM2vtI611QcUI/0992e096262e348b451efd8be296fa27/image9.png" />
            
            </figure><p>We hear clearly that customers don’t want AI bots visiting their websites, especially those that do so dishonestly. To help, we’ve added a brand new one-click option to block all AI bots. It’s available for all customers, including those on the free tier. To enable it, simply navigate to the <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/bots/configure">Security &gt; Bots</a> section of the Cloudflare dashboard, and click the toggle labeled AI Scrapers and Crawlers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xD0lhy89vZb34dtIWukt1/3e0e1ed979a33d344e53d4da2a819e1e/image2.png" />
            
            </figure><p>This feature will automatically be updated over time as we identify new fingerprints of offending bots that are widely scraping the web for model training. To ensure we have a comprehensive understanding of all AI crawler activity, we surveyed traffic across our network.</p>
    <div>
      <h3>AI bot activity today</h3>
      <a href="#ai-bot-activity-today">
        
      </a>
    </div>
    <p>The graph below illustrates the most popular AI bots seen on Cloudflare’s network in terms of their request volume. We looked at common AI crawler user agents and aggregated the number of requests on our platform from these AI user agents over the last year:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/13pNq4MJJB92Dcs1ghxC6k/b7e7acc7e65e9e0958eed5d4b4cb0594/image6.png" />
            
            </figure><p>When looking at the number of requests made to Cloudflare sites, we see that <i>Bytespider</i>, <i>Amazonbot</i>, <i>ClaudeBot</i>, and <i>GPTBot</i> are the top four AI crawlers. Operated by ByteDance, the Chinese company that owns TikTok, <i>Bytespider</i> is reportedly used to gather training data for its large language models (LLMs), including those that support its ChatGPT rival, Doubao. <i>Amazonbot</i> and <i>ClaudeBot</i> follow <i>Bytespider</i> in request volume. <i>Amazonbot</i>, reportedly used to index content for Alexa’s question-answering, sent the second-highest number of requests, and <i>ClaudeBot</i>, used to train the Claude chatbot, has recently increased in request volume.</p><p>Among the top AI bots that we see, <i>Bytespider</i> not only leads in terms of number of requests but also in both the extent of its Internet property crawling and the frequency with which it is blocked. Following closely is <i>GPTBot</i>, which ranks second in both crawling and being blocked. <i>GPTBot</i>, managed by OpenAI, collects training data for its LLMs, which underpin AI-driven products such as ChatGPT. In the table below, “Share of websites accessed” refers to the proportion of websites protected by Cloudflare that were accessed by the named AI bot.</p>
<table><thead>
  <tr>
    <th><span>AI Bot</span></th>
    <th><span>Share of Websites Accessed</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>Bytespider</span></td>
    <td><span>40.40%</span></td>
  </tr>
  <tr>
    <td><span>GPTBot</span></td>
    <td><span>35.46%</span></td>
  </tr>
  <tr>
    <td><span>ClaudeBot</span></td>
    <td><span>11.17%</span></td>
  </tr>
  <tr>
    <td><span>ImagesiftBot</span></td>
    <td><span>8.75%</span></td>
  </tr>
  <tr>
    <td><span>CCBot</span></td>
    <td><span>2.14%</span></td>
  </tr>
  <tr>
    <td><span>ChatGPT-User</span></td>
    <td><span>1.84%</span></td>
  </tr>
  <tr>
    <td><span>omgili</span></td>
    <td><span>0.10%</span></td>
  </tr>
  <tr>
    <td><span>Diffbot</span></td>
    <td><span>0.08%</span></td>
  </tr>
  <tr>
    <td><span>Claude-Web</span></td>
    <td><span>0.04%</span></td>
  </tr>
  <tr>
    <td><span>PerplexityBot</span></td>
    <td><span>0.01%</span></td>
  </tr>
</tbody></table><p>While our analysis identified the most popular crawlers in terms of request volume and number of Internet properties accessed, many customers are likely not aware of the more popular AI crawlers actively crawling their sites. Our Radar team performed an analysis of the top robots.txt entries across the <a href="https://radar.cloudflare.com/domains">top 10,000 Internet domains</a> to identify the most commonly actioned AI bots, then looked at how frequently we saw these bots on sites protected by Cloudflare.</p><p>In the graph below, which looks at disallowed crawlers for these sites, we see that customers most often reference <i>GPTBot, CCBot</i>, and <i>Google</i> in robots.txt, but do not specifically disallow popular AI crawlers like <i>Bytespider</i> and <i>ClaudeBot</i>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6m4jV8g9sQ0BLR7OIonsoB/a4c3100a34160c96aea07c4ed4bc6a8d/image3.png" />
            
            </figure><p>With the Internet now flooded with these AI bots, we were curious to see how website operators have already responded. In June, AI bots accessed around 39% of the top one million Internet properties using Cloudflare, but only 2.98% of these properties took measures to block or challenge those requests. Moreover, the higher-ranked (more popular) an Internet property is, the more likely it is to be targeted by AI bots, and correspondingly, the more likely it is to block such requests.</p>
<table><thead>
  <tr>
    <th><span>Top N Internet properties by number of visitors seen by Cloudflare</span></th>
    <th><span>% accessed by AI bots</span></th>
    <th><span>% blocking AI bots</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>10</span></td>
    <td><span>80.0%</span></td>
    <td><span>40.0%</span></td>
  </tr>
  <tr>
    <td><span>100</span></td>
    <td><span>63.0%</span></td>
    <td><span>16.0%</span></td>
  </tr>
  <tr>
    <td><span>1,000</span></td>
    <td><span>53.2%</span></td>
    <td><span>8.8%</span></td>
  </tr>
  <tr>
    <td><span>10,000</span></td>
    <td><span>47.99%</span></td>
    <td><span>8.92%</span></td>
  </tr>
  <tr>
    <td><span>100,000</span></td>
    <td><span>44.53%</span></td>
    <td><span>6.36%</span></td>
  </tr>
  <tr>
    <td><span>1,000,000</span></td>
    <td><span>38.73%</span></td>
    <td><span>2.98%</span></td>
  </tr>
</tbody></table>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gCWVJMv9GajRT3H8BQ5EM/effb5e2b52c0bdecb99f5f4e339c8d1d/image4.png" />
            
            </figure><p>We see website operators completely block access to these AI crawlers using robots.txt. However, these blocks rely on the bot operator respecting robots.txt and adhering to <a href="https://www.rfc-editor.org/rfc/rfc9309.html#name-the-user-agent-line">RFC9309</a> (ensuring user agent variations all match the product token) to honestly identify who they are when they visit an Internet property. But user agents are trivial for bot operators to change.</p>
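<p>For readers who want to see what such a robots.txt block looks like in practice, here is a small sketch using Python’s standard-library parser to test whether a product token is disallowed. The file body is a representative example rather than any specific site’s policy.</p>
<pre><code># Sketch: checking whether a crawler's product token is disallowed by a
# robots.txt file. The file body below is a representative example.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for bot in ("GPTBot", "Bytespider", "Googlebot"):
    verdict = "allowed" if parser.can_fetch(bot, "https://example.com/post/1") else "disallowed"
    print(f"{bot}: {verdict}")
# Of course, this only constrains bots that honestly send these tokens.
</code></pre>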
    <div>
      <h3>How we find AI bots pretending to be real web browsers</h3>
      <a href="#how-we-find-ai-bots-pretending-to-be-real-web-browsers">
        
      </a>
    </div>
    <p>Sadly, we’ve observed bot operators attempting to appear as though they are real browsers by using spoofed user agents. We’ve monitored this activity over time, and we’re proud to say that our global machine learning model has always recognized this activity as a bot, even when operators lie about their user agent.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JpBRAGuQ1DOCTSFu9yHbH/9c11b569a30f68ddb1b4c197054ed1c8/image1.png" />
            
            </figure><p>Take one example of a specific bot that <a href="https://rknight.me/blog/perplexity-ai-is-lying-about-its-user-agent/">others</a> observed to be <a href="https://www.wired.com/story/perplexity-is-a-bullshit-machine/">hiding its activity</a>. We ran an analysis to see how our machine learning models scored traffic from this bot. In the diagram below, you can see that all <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">bot scores</a> are firmly below 30, indicating that our scoring thinks this activity is likely to be coming from a bot.</p><p>The diagram reflects scoring of the requests using <a href="/residential-proxy-bot-detection-using-machine-learning">our newest model</a>, where “hotter” colors indicate more requests falling in that band and “cooler” colors indicate fewer. We can see the vast majority of requests fell into the bottom two bands, showing that Cloudflare’s model gave the offending bot a score of 9 or less. The user agent changes have no effect on the score, because spoofing the user agent is the very first thing we expect bot operators to do.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1y0G6D2b512V1UAgR6sooD/4cc9b659f091e84facbec66a30baafad/image5.png" />
            
            </figure><p>Any customer with an existing WAF rule set to challenge visitors with a bot score below 30 (our recommendation) automatically blocked all of this AI bot traffic with no new action on their part. The same will be true for future AI bots that use similar techniques to hide their activity.</p><p>We leverage Cloudflare’s global signals to calculate our Bot Score, which, for AI bots like the one above, reflects that we correctly identify and score them as a “likely bot.”</p><p>When bad actors attempt to crawl websites at scale, they generally use tools and frameworks that we are able to fingerprint. For every fingerprint we see, we use Cloudflare’s network, which sees over 57 million requests per second on average, to understand how much we should trust this fingerprint. To power our models, we compute global aggregates across many signals. Based on these signals, our models were able to appropriately flag traffic from evasive AI bots, like the example mentioned above, as bots.</p><p>The upshot of this globally aggregated data is that we can immediately detect new scraping tools and their behavior without needing to manually fingerprint the bot, ensuring that customers stay protected from the newest waves of bot activity.</p><p>If you have a tip on an AI bot that’s not behaving, we’d love to investigate. There are two options you can use to report misbehaving AI crawlers:</p><p>1. Enterprise Bot Management customers can submit a False Negative <a href="https://developers.cloudflare.com/bots/concepts/feedback-loop/">Feedback Loop</a> report via Bot Analytics by simply selecting the segment of traffic where they noticed misbehavior:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/iwX6mvdnqg3KGRN0dMHou/a7c0a39275680db58f49c9292ca180c7/image8.png" />
            
            </figure><p>2. We’ve also set up a <a href="https://docs.google.com/forms/d/14bX0RJH_0w17_cAUiihff5b3WLKzfieDO4upRlo5wj8/edit">reporting tool</a> where any Cloudflare customer can submit reports of an AI bot scraping your website without permission.</p><p>We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection. We will continue to keep watch and add more bot blocks to our AI Scrapers and Crawlers rule and evolve our machine learning models to help keep the Internet a place where content creators can thrive and keep full control over which models their content is used to train or run inference on.</p> ]]></content:encoded>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Machine Learning]]></category>
            <category><![CDATA[Generative AI]]></category>
            <guid isPermaLink="false">4iUvyS3jKebfV9pHwg7pol</guid>
            <dc:creator>Alex Bocharov</dc:creator>
            <dc:creator>Santiago Vargas</dc:creator>
            <dc:creator>Adam Martinetti</dc:creator>
            <dc:creator>Reid Tatoris</dc:creator>
            <dc:creator>Carlos Azevedo</dc:creator>
        </item>
    </channel>
</rss>