
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Thu, 09 Apr 2026 11:16:06 GMT</lastBuildDate>
        <item>
            <title><![CDATA[An AI Index for all our customers]]></title>
            <link>https://blog.cloudflare.com/an-ai-index-for-all-our-customers/</link>
            <pubDate>Fri, 26 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare will soon automatically create an AI-optimized search index for your domain, and expose a set of ready-to-use standard APIs and tools including an MCP server, LLMs.txt, and a search API. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re announcing the <b>private beta</b> of <b>AI Index</b> for domains on Cloudflare, a new type of web index that gives content creators the tools to make their data discoverable by AI, and gives AI builders access to better data in exchange for fair compensation.</p><p>With AI Index enabled on your domain, we will automatically create an AI-optimized search index for your website, and expose a set of ready-to-use standard APIs and tools including an MCP server, LLMs.txt, and a search API. You will own and control that index and how it’s used, and you will be able to monetize access through <a href="https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/"><u>Pay per crawl</u></a> and the new <a href="https://blog.cloudflare.com/x402/"><u>x402 integrations</u></a>. You will be able to use it to build modern search experiences on your own site and, more importantly, to interact with external AI and agentic providers to make your content more discoverable while being fairly compensated.</p><p>For AI builders—whether developers creating agentic applications, or AI platform companies providing foundational LLM models—Cloudflare will offer a new way to discover and retrieve web content: direct <b>pub/sub connections</b> to individual websites with AI Index. Instead of indiscriminate crawling, builders will be able to subscribe to specific sites that have opted in for discovery, receive structured updates as soon as content changes, and pay fairly for each access. Access is always at the discretion of the site owner.</p><p>From the individual indexes, Cloudflare will also build an aggregated layer, the <b>Open Index</b>, that bundles together participating sites. Builders get a single place to search across collections or the broader web, while every site still retains control and can earn from participation. </p>
    <div>
      <h3>Why build an AI Index?</h3>
      <a href="#why-build-an-ai-index">
        
      </a>
    </div>
    <p>AI platforms are quickly becoming one of the main ways people discover information online. Whether asking a chatbot to summarize a news article or find a product recommendation, the path to that answer almost always starts with crawling original content and indexing or using that data for training. However, today, that process is largely controlled by platforms: what gets crawled, how often, and whether the site owner has any input in the matter.</p><p>Although Cloudflare now offers tools to monitor and control how AI services respect your access policies and how they access your content, it's still challenging to make new content visible. Content creators have no efficient way to signal to AI builders when a page is published or updated. On the other hand, for AI builders, crawling and recrawling unstructured content is costly and wasteful, especially when you don’t know the quality and cost of that content in advance.</p><p>We need a fairer and healthier ecosystem for content discovery and usage that bridges the gap between content creators and AI builders.</p>
    <div>
      <h3>How AI Index will work</h3>
      <a href="#how-ai-index-will-work">
        
      </a>
    </div>
    <p>When you onboard a domain to Cloudflare, or if you have an existing domain on Cloudflare, you will have the choice to enable an AI Index. If enabled, we will automatically create an AI-optimized search index for your domain that you own and control.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kV7Oru6D5jPWeGeWDQDsi/7d738250f24250cf98db2e96222319ec/image1.png" />
          </figure><p>As your site updates and grows, the index will evolve with it. New or updated pages will be processed in real-time using the same technology that powers Cloudflare <a href="https://developers.cloudflare.com/ai-search/"><u>AI Search (formerly AutoRAG)</u></a> and its <a href="https://developers.cloudflare.com/ai-search/configuration/data-source/website/"><u>Website</u></a> as a data source. Best of all, we will manage everything; you won't have to worry about each individual component of compute, storage resources, databases, embeddings, chunking, or AI models. Everything will happen behind the scenes, automatically.</p><p>Importantly, you will have control over what content to <b>include or exclude </b>from your website's index, and <b>who</b> can get access to your content via <b>AI</b> <b>Crawl Control</b>, ensuring that only the data you want to expose is made searchable and accessible. You also will be able to opt out of the AI Index completely; it will all be up to you.</p><p>When your AI Index is set up, you will get a set of ready-to-use APIs:                                                                                                                                                   </p><ul><li><p><b>An MCP Server: </b>Agentic applications will be able to connect directly to your site using the <a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/"><u>Model Context Protocol (MCP)</u></a>, making your content discoverable to agents in a standardized way. This includes support for <a href="https://developers.cloudflare.com/ai-search/how-to/nlweb/"><u>NLWeb</u></a> tools, an open project developed by Microsoft that defines a standard protocol for natural language queries on websites.</p></li><li><p><b>A flexible search API: </b>This endpoint will<b> </b>return relevant results in structured JSON. 
</p></li><li><p><b>LLMs.txt and LLMs-full.txt: </b>Standard files that provide LLMs with a machine-readable map of your site, following <a href="https://github.com/AnswerDotAI/llms-txt"><u>emerging open standards</u></a>. These will help models understand how to use your site’s content at inference time. An example of <a href="https://developers.cloudflare.com/llms.txt"><u>llms.txt</u></a> exists in the Cloudflare Developer Documentation.</p></li><li><p><b>A bulk data API: </b>An endpoint for transferring large amounts of content efficiently, available under the rules you set. Instead of querying for every document, AI providers will be able to ingest everything in one shot.</p></li><li><p><b>Pub-sub subscriptions: </b>AI platforms will be able to subscribe to your site’s index and receive events and content updates directly from Cloudflare in a structured format in real-time, making it easy for them to stay current without re-crawling.</p></li><li><p><b>Discoverability directives:</b> Entries in robots.txt and well-known URIs that allow AI agents and crawlers visiting your site to discover and use the available APIs automatically.</p></li></ul>
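Cloudflare has not yet published schemas for these endpoints, so as a purely hypothetical sketch, a builder consuming the flexible search API might work with it along these lines (the <code>/search</code> path, its parameters, and the JSON response shape are all illustrative assumptions):

```python
# Hypothetical sketch of consuming a site's AI Index search API.
# The /search path, query parameters, and JSON response shape are
# illustrative assumptions, not a published Cloudflare schema.
from urllib.parse import urlencode

def build_search_url(domain: str, query: str, limit: int = 5) -> str:
    """Build a request URL for a domain's (hypothetical) search endpoint."""
    return f"https://{domain}/search?" + urlencode({"q": query, "limit": limit})

def top_results(response: dict) -> list:
    """Order the structured JSON results by relevance score, highest first,
    and return just their URLs."""
    hits = sorted(response.get("results", []),
                  key=lambda r: r.get("score", 0), reverse=True)
    return [r["url"] for r in hits]
```

Whatever the final schema looks like, the point is the same: results come back as structured JSON rather than raw HTML, so builders can rank and consume them directly.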
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Hr3EhsMBH0oVwMVKywwre/2a01efbe03d67a8154123b63c05c000f/image3.png" />
          </figure><p>The index will integrate directly with <a href="https://developers.cloudflare.com/ai-crawl-control/"><u>AI Crawl Control</u></a>, so you will be able to see who’s accessing your content, set rules, and manage permissions. And with <a href="https://developers.cloudflare.com/ai-crawl-control/features/pay-per-crawl/what-is-pay-per-crawl/"><u>Pay per crawl</u></a> and <a href="https://blog.cloudflare.com/x402/"><u>x402 integrations</u></a>, you can choose to directly monetize access to your content. </p>
    <div>
      <h3>A feed of the web for AI builders</h3>
      <a href="#a-feed-of-the-web-for-ai-builders">
        
      </a>
    </div>
    <p>As an AI builder, you will be able to discover and subscribe to high-quality, permissioned web data through individual sites’ AI indexes. Instead of sending crawlers blindly across the open Internet, you will connect via a pub/sub model: participating websites will expose structured updates whenever their content changes, and you will be able to subscribe to receive those updates in real-time. With this model, your new workflow may look something like this:</p><ol><li><p><b>Discover websites that have opted in: </b>Browse and filter through a directory of websites that make their indexes available through Cloudflare.</p></li><li><p><b>Evaluate content with metadata and metrics: </b>Get metadata on various content metrics (e.g., uniqueness, depth, contextual relevance, popularity) before accessing it.</p></li><li><p><b>Pay fairly for access:</b> When content is valuable, platforms can compensate creators directly through Pay per crawl. These payments not only enable access but also support the continued creation of original content, helping to sustain a healthier ecosystem for discovery.</p></li><li><p><b>Subscribe to updates: </b>Use pub-sub subscriptions to receive events about changes made by the website, so you know when to retrieve or crawl for new content without wasting resources on constant re-crawling.</p></li></ol><p>By shifting from blind crawling to a permissioned pub/sub system for the web, AI builders save time, cut costs, and gain access to cleaner, high-quality data while content creators remain in control and are fairly compensated.</p>
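The message format for these subscriptions hasn't been published, but the subscriber side of step 4 might reduce to something as small as this sketch (the event fields <code>url</code> and <code>action</code> are hypothetical):

```python
# Minimal sketch of the subscriber side of step 4 above. The event
# shape (an "action" of created/updated/deleted plus a "url") is a
# hypothetical assumption, not Cloudflare's published format.
def pages_to_refetch(events):
    """From a batch of index-update events, return URLs worth re-fetching:
    created or updated pages. Deletions only need to be dropped from the
    subscriber's local copy, not fetched."""
    return [e["url"] for e in events
            if e.get("action") in ("created", "updated")]
```

The key design point is that the events tell the builder exactly which pages changed, so fetches replace blanket re-crawls.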
    <div>
      <h3>The aggregated Open Index</h3>
      <a href="#the-aggregated-open-index">
        
      </a>
    </div>
    <p>Individual indexes provide AI platforms with the ability to access data directly from specific sites, allowing them to subscribe for updates, evaluate value, and pay for full content access on a per-site basis. But when builders need to work at a larger scale, managing dozens or hundreds of separate subscriptions can become complex. The <b>Open Index</b> will provide an additional option: a bundled, opt-in collection of those indexes, with filters for content quality, uniqueness, originality, and depth, all accessible in one place.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6rjkK5UCh9BLSqceUuG0RI/92413aed318baced0ee8812bec511cfb/image2.png" />
          </figure><p>The Open Index is designed to make content discovery at scale easier:</p><ul><li><p><b>Get unified access: </b>Query and retrieve data across many participating sites simultaneously. This reduces integration overhead and enables builders to plug into a curated collection of data, or use it as a ready-made web search layer that can be accessed at query time.</p></li><li><p><b>Discover broader scopes: </b>Work with topic-specific bundles (e.g., news, documentation, scientific research) or a general discovery index covering the broader web. This makes it simple to explore new content sources you may not have identified individually.</p></li><li><p><b>Bottom-up monetization: </b>Results still originate from an individual site’s AI index, with monetization flowing back to that site through Pay per crawl, helping preserve fairness and sustainability at scale.</p></li></ul><p>Together, per-site AI indexes and the Open Index will provide flexibility and precise control when you want full content from individual sites (e.g., for training, AI agents, or search experiences), and broad search coverage when you need a unified search across the web.</p>
    <div>
      <h3>How you can participate in the shift</h3>
      <a href="#how-you-can-participate-in-the-shift">
        
      </a>
    </div>
    <p>With AI Index and the Cloudflare Open Index, we’re creating a model where websites decide how their content is accessed, and AI builders receive structured, reliable data at scale to build a fairer and healthier ecosystem for content discovery and usage on the Internet.</p><p>We’re starting with a <b>private beta</b>. If you want to enroll your website into the AI Index or access the pub/sub web feed as an AI builder, you can <a href="https://www.cloudflare.com/aiindex-signup/"><b><u>sign up today</u></b></a>.</p> ]]></content:encoded>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Search]]></category>
            <category><![CDATA[MCP]]></category>
            <guid isPermaLink="false">7rcW6x4j6v7O6ZEHir5fmK</guid>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Anni Wang</dc:creator>
        </item>
        <item>
            <title><![CDATA[The next step for content creators in working with AI bots: Introducing AI Crawl Control]]></title>
            <link>https://blog.cloudflare.com/introducing-ai-crawl-control/</link>
            <pubDate>Thu, 28 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launches AI Crawl Control (formerly AI Audit) and introduces easily customizable 402 HTTP responses. ]]></description>
            <content:encoded><![CDATA[ <p><i>Empowering content creators in the age of AI with smarter crawling controls and direct communication channels</i></p><p>Imagine you run a regional news site. Last month an AI bot scraped 3 years of archives in minutes — with no payment and little to no referral traffic. As a small company, you may struggle to get the AI company's attention for a licensing deal. Do you block all crawler traffic, or do you let them in and settle for the few referrals they send? </p><p>It’s picking between two bad options.</p><p>Cloudflare wants to help break that stalemate. On July 1st of this year, we declared <a href="https://www.cloudflare.com/press-releases/2025/cloudflare-just-changed-how-ai-crawlers-scrape-the-internet-at-large/"><u>Content Independence Day</u></a> based on a simple premise: creators deserve control of how their content is accessed and used. Today, we're taking the next step in that journey by releasing AI Crawl Control to general availability — giving content creators and AI crawlers an important new way to communicate.</p>
    <div>
      <h2>AI Crawl Control goes GA</h2>
      <a href="#ai-crawl-control-goes-ga">
        
      </a>
    </div>
    <p>Today, we're rebranding our AI Audit tool as <b>AI Crawl Control</b> and moving it from beta to <b>general availability</b>. This reflects the tool's evolution from simple monitoring to detailed insights and <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">control over how AI systems can access your content</a>. </p><p>The market response has been overwhelming: content creators across industries needed real agency, not just visibility. AI Crawl Control delivers that control.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/pIAbmCR0tTK71umann3w0/e570c5f898e3d399babf6d1f82c2f3d8/image3.png" />
          </figure>
    <div>
      <h2>Using HTTP 402 to help publishers license content to AI crawlers</h2>
      <a href="#using-http-402-to-help-publishers-license-content-to-ai-crawlers">
        
      </a>
    </div>
    <p>Many content creators have faced a binary choice: either block all AI crawlers, missing potential licensing opportunities and referral traffic, or allow them through without any compensation. They had no practical way to say "we're open for business, but let's talk terms first."</p><p>Our customers are telling us:</p><ul><li><p>We want to license our content, but crawlers don't know how to reach us. </p></li><li><p>Blanket blocking feels like we're closing doors on potential revenue and referral traffic. </p></li><li><p>We need a way to communicate our terms before crawling begins. </p></li></ul><p>To address these needs, we are making it easier than ever to send customizable <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/402">402 HTTP status codes</a>. </p><p>Our <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/#what-if-i-could-charge-a-crawler"><u>private beta launch of Pay Per Crawl</u></a> put the HTTP 402 (“Payment Required”) response code to use, working in tandem with Web Bot Auth to enable direct payments between agents and content creators. Today, we’re making customizable 402 response codes available to every paid Cloudflare customer — not just Pay Per Crawl users.</p><p>Here's how it works: in AI Crawl Control, paying Cloudflare customers will be able to select individual bots to block and send them 402 Payment Required responses with a configurable message parameter. Think: "To access this content, email partnerships@yoursite.com or call 1-800-LICENSE" or "Premium content available via API at api.yoursite.com/pricing."</p><p>On an average day, Cloudflare customers are already sending over one billion 402 response codes. This shows a deep desire to move beyond blocking to open communication channels and new monetization models. 
With the 402 HTTP status code, content creators can tell crawlers exactly how to properly license their content, creating a direct path from crawling to a commercial agreement. We are excited to make this easier than ever in the AI Crawl Control dashboard. </p>
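For illustration, a blocked crawler might see a response along these lines, using the example message from this post (the exact headers Cloudflare attaches are not specified here, so treat this as a sketch):

```http
HTTP/1.1 402 Payment Required
content-type: text/plain

To access this content, email partnerships@yoursite.com or call 1-800-LICENSE
```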
    <div>
      <h2>How to customize your 402 status code with AI Crawl Control</h2>
      <a href="#how-to-customize-your-402-status-code-with-ai-crawl-control">
        
      </a>
    </div>
    <p><b>For Paid Plan Users:</b></p><ul><li><p>When you block individual crawlers from the AI Crawl Control dashboard, you can now choose to send 402 Payment Required status codes and customize your message. For example: <b>To access this content, email partnerships@yoursite.com or call 1-800-LICENSE</b>.</p></li></ul><p>The response will look like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5v5x41azcAK14DBhXjXPEX/8c0960b4bb556d62e88d19c9dd544f12/image4.png" />
          </figure><p>The message can be configured from Settings in the AI Crawl Control Dashboard:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KMdRYwoey9RdYIxmzmFO1/7b39fd82d43349ee1cc4832cb602eb56/image1.png" />
          </figure>
    <div>
      <h2>Beyond just blocking AI bots</h2>
      <a href="#beyond-just-blocking-ai-bots">
        
      </a>
    </div>
    <p>This is just the beginning. We're planning to add additional parameters that will let crawlers understand the content's value, freshness, and licensing terms directly in the 402 response. Imagine crawlers receiving structured data about content quality and update frequency, for example, in addition to contact information.</p><p>Meanwhile, <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/">pay per crawl</a> continues advancing through beta, giving content creators the infrastructure to automatically monetize crawler access with transparent, usage-based pricing.</p><p>What excites us most is the market shift we're seeing. We're moving to a world where content creators have clear monetization paths to become active participants in the development of rich AI experiences. </p><p>The 402 response is a bridge between two industries that want to work together: content creators whose work fuels AI development, and AI companies who need high-quality data. Cloudflare’s AI Crawl Control creates the infrastructure for these partnerships to flourish.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/31Np3qX2ssbeGaJnZHQodA/92246d3618778715c2e8b295b7acaa29/image5.png" />
          </figure> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">3UcNgGUfIUIm0EEtNwgLAT</guid>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Pulkita Kini</dc:creator>
            <dc:creator>Cam Whiteside</dc:creator>
        </item>
        <item>
            <title><![CDATA[Content Independence Day: no AI crawl without compensation!]]></title>
            <link>https://blog.cloudflare.com/content-independence-day-no-ai-crawl-without-compensation/</link>
            <pubDate>Tue, 01 Jul 2025 10:01:00 GMT</pubDate>
            <description><![CDATA[ It’s Content Independence Day: Cloudflare, along with a majority of the world's leading publishers and AI companies, is changing the default to block AI crawlers unless they pay creators for content. ]]></description>
            <content:encoded><![CDATA[ <p>Almost 30 years ago, two graduate students at Stanford University — Larry Page and Sergey Brin — began working on a research project they called Backrub. That, of course, was the project that resulted in Google. But also something more: it created the business model for the web.</p><p>The deal that Google made with content creators was simple: let us copy your content for search, and we'll send you traffic. You, as a content creator, could then derive value from that traffic in one of three ways: running ads against it, selling subscriptions for it, or just getting the pleasure of knowing that someone was consuming your stuff.</p><p>Google facilitated all of this. Search generated traffic. They acquired DoubleClick and built AdSense to help content creators serve ads. And acquired Urchin to launch Google Analytics to let you measure just who was viewing your content at any given moment in time.</p><p>For nearly thirty years, that relationship was what defined the web and allowed it to flourish.</p><p>But that relationship is changing. For the first time in more than a decade, the percentage of searches run on Google is <a href="https://searchengineland.com/google-search-market-share-drops-2024-450497"><u>declining</u></a>. What's taking its place? AI.</p><p>If you're like me, you've been amazed at the new AI systems that have launched over the last two years and find yourself turning to them to answer questions that, in the past, you would have taken to Google. While it's still early, it seems clear that the interface of the future of the web will look more like ChatGPT than a spartan search box and ten blue links.</p><p>Google itself has changed. While ten years ago they presented a list of links and said that success was getting you off their site as quickly as possible, today they've added an answer box and more recently AI Overviews which answer users' questions without them having to leave Google.com. 
With the answer box, researchers have found that <a href="https://scrumdigital.com/blog/zero-click-search-trends-google-serp-analysis/"><u>75 percent</u></a> of mobile queries were answered without users leaving Google. With the more recent launch of AI Overviews, it's even higher.</p><p>While Google’s users may like that, it's hurting content creators. Google still copies creators’ content, but over the last 10 years, because of the changes to the UI of “search,” it's gotten almost 10 times more difficult for a content creator to get the same volume of traffic. That means it's 10 times more difficult to generate value from ads, subscriptions, or the ego of knowing someone cares about what you created.</p><p>And that's the good news. It’s even worse with <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/#how-does-this-measurement-work"><u>today’s AI tools</u></a>. With OpenAI, it's 750 times more difficult to get traffic than it was with the Google of old. With Anthropic, it's 30,000 times more difficult. The reason is simple: increasingly we aren't consuming originals, we're consuming derivatives.</p><p>The problem is that, whether you create content to sell ads, sell subscriptions, or just to know that people value what you've created, an AI-driven web doesn't reward content creators the way that the old search-driven web did. And that means the deal that Google made to take content in exchange for sending you traffic just doesn't make sense anymore.</p><p>Instead of being a fair trade, the web is being strip-mined by AI crawlers with content creators seeing almost no traffic and therefore almost no value.</p><p>That changes today, July 1, what we’re calling Content Independence Day. Cloudflare, along with a majority of the world's leading publishers and AI companies, is changing the default to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block AI crawlers</a> unless they pay creators for their content. 
That content is the fuel that powers AI engines, and so it's only fair that content creators are compensated directly for it.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GFFa6knU0nKGjhJVh8Ar8/8a1b4c0661146596cc844cdd9dd900ea/BLOG-2860_2.png" />
          </figure><p>But that's just the beginning. Next, we'll work on a marketplace where content creators and AI companies, large and small, can come together. Traffic was always a poor proxy for value. We think we can do better. Let me explain.</p><p>Imagine an AI engine like a block of swiss cheese. New, original content that fills one of the holes in the AI engine’s block of cheese is more valuable than repetitive, low-value content that unfortunately dominates much of the web today.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vUAgbW7FzzHSKA8tB8f8c/ea78e7cb4858602a32a91523800b882c/BLOG-2860_3.png" />
          </figure><p>We believe that if we can begin to score and value content not on how much traffic it generates, but on how much it furthers knowledge — measured by how much it fills the current holes in AI engines’ “swiss cheese” — we not only will help AI engines get better faster, but also potentially facilitate a new golden age of high-value content creation.</p><p>We don’t know all the answers yet, but we’re working with some of the leading economists and computer scientists to figure them out.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VNIoN0740jhfO8lu6XDpJ/98829d238884cde3bcd345779a15df89/BLOG-2860_4.png" />
          </figure><p>The web is changing. Its business model will change. And, in the process, we have an opportunity to learn from what was great about the web of the last 30 years and what we can make better for the web of the future.</p><p>Cloudflare's mission is to help build a better Internet. I'm proud of the role we're playing in doing exactly that as the web evolves. And I’m proud that we’re helping content creators stick up and demand value for the content they worked hard to create.</p><p>Happy Content Independence Day!</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Xme0Af7HqeJpdQbapzApG/6ff9ea29b7506e10867ed9c7ac5a2280/BLOG-2860_5.png" />
          </figure> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">1pmK0OnvzPIip01yjWXj0x</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[The crawl before the fall… of referrals: understanding AI’s impact on content providers]]></title>
            <link>https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Radar now shows how often a given AI model sends traffic to a site relative to how often it crawls that site. This helps site owners make decisions about which AI bots to allow or block.
 ]]></description>
            <content:encoded><![CDATA[ <p>Content publishers welcomed crawlers and bots from search engines because they helped drive traffic to their sites. The <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/"><u>crawlers</u></a> would see what was published on the site and surface that material to users searching for it. Site owners could monetize their material because those users still needed to click through to the page to access anything beyond a short title.</p><p><a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/"><u>Artificial Intelligence (AI)</u></a> bots also crawl the content of a site, but with an entirely different delivery model. These <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>Large Language Models (LLMs)</u></a> do their best to read the web to train a system that can repackage that content for the user, without the user ever needing to visit the original publication.</p><p>The AI applications might still try to cite the content, but we’ve found that very few users actually click through relative to how often the AI bot <a href="https://www.cloudflare.com/learning/bots/what-is-content-scraping/"><u>scrapes</u></a> a given website. We have discussed this challenge in smaller settings, and today we are excited to publish our findings as <a href="https://radar.cloudflare.com/ai-insights#crawl-to-refer-ratio"><u>a new metric shown on the AI Insights page on Cloudflare Radar</u></a>.</p><p>Visitors to Cloudflare Radar can now review how often a given AI model sends traffic to a site relative to how often it crawls that site. We are sharing this analysis with a broad audience so that site owners can have better information to help them make decisions about which AI bots to allow or block and so that users can understand how AI usage in aggregate impacts Internet traffic.</p>
    <div>
      <h2>How does this measurement work?</h2>
      <a href="#how-does-this-measurement-work">
        
      </a>
    </div>
    <p>As HTML pages are arguably the most valuable content for these crawlers, the ratios displayed are calculated by dividing the total number of requests from relevant user agents associated with a given search or AI platform where the response was of <code>Content-type: text/html</code> by the total number of requests for HTML content where the <code>Referer</code> header contained a hostname associated with a given search or AI platform.</p><p>The diagrams below illustrate two common crawling scenarios, and show that companies may use different user agents depending on the purpose of the crawler. The top one represents a simple transaction where the example AI platform is requesting content for the purposes of training an LLM, representing itself as <code>AIBot</code>. The bottom one represents a scenario where the example AI platform is requesting content to service a user request — looking for flight information, for example. In this case, it is representing itself as <code>AIBot-User</code>. Request traffic from both of these user agents would be aggregated under a single platform name for the purposes of our analysis. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3SOsmpe6TAWwqK6g9irLI2/cca037eadf97578f7851e24ba6b90af4/image9.png" />
</figure><p>When a user clicks on a link on a website or application, the client will often send a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Referer"><code><u>Referer:</u></code><u> header</u></a> as part of the request to the target site. In the diagram below, the example AI platform has returned content that contains links to external sites in response to a user interaction. When the user clicks on a link, a request is made to the content provider that includes <code>ai.example.com</code> in the <code>Referer:</code> header, letting them know where that request traffic came from. Hostnames are associated with their respective platforms for the purpose of our analysis.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5WqrD6q6k4ng8sBLbgzp42/b139464c5653d3cab533bf6413930a62/image10.png" />
          </figure>
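<p>The ratio calculation described above can be sketched as a single pass over HTML request logs. This is an illustrative sketch only: the user-agent and referrer mappings below are placeholders built from the <code>AIBot</code> and <code>ai.example.com</code> examples, not Cloudflare's actual platform lists.</p>

```python
from urllib.parse import urlparse

# Illustrative mappings only; Cloudflare maintains the real platform lists.
CRAWLER_UAS = {"AIBot": "ExamplePlatform", "AIBot-User": "ExamplePlatform"}
REFERRER_HOSTS = {"ai.example.com": "ExamplePlatform"}

def crawl_to_refer_ratios(requests):
    """requests: iterable of dicts with 'user_agent', 'referer', 'content_type'.
    Returns crawl:refer ratios normalized to a single referral request."""
    crawls, refers = {}, {}
    for req in requests:
        # Only HTML responses count toward either side of the ratio.
        if not req["content_type"].startswith("text/html"):
            continue
        platform = CRAWLER_UAS.get(req["user_agent"])
        if platform:
            crawls[platform] = crawls.get(platform, 0) + 1
        host = urlparse(req.get("referer") or "").hostname
        platform = REFERRER_HOSTS.get(host)
        if platform:
            refers[platform] = refers.get(platform, 0) + 1
    return {p: crawls.get(p, 0) / n for p, n in refers.items() if n}
```

<p>For example, four crawl requests from <code>AIBot</code> against two requests referred by <code>ai.example.com</code> would yield a 2:1 ratio for that platform.</p>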
    <div>
      <h2>Observations</h2>
      <a href="#observations">
        
      </a>
    </div>
    
    <div>
      <h3>Reviewing the ratios</h3>
      <a href="#reviewing-the-ratios">
        
      </a>
    </div>
    <p>The new metric is presented as a simple table, comparing the number of aggregate HTML page requests from crawlers (user agents) associated with a given platform to the number of HTML page requests from clients referred by a hostname associated with a given platform. The calculated ratio is always normalized to a single referral request.</p><p>The table below shows that for the period June 19-26, 2025, as an example, the ratios range from Anthropic’s 70,900:1 down to Mistral’s 0.1:1. This means that Anthropic’s AI platform Claude made nearly 71,000 HTML page requests for every HTML page referral, while Mistral sent 10x as many referrals as crawl requests. (However, traffic referred by Claude’s native app does not include a <code>Referer:</code> header, and we believe that the same holds true for traffic generated from other native apps as well. As such, because the referral counts only include traffic from the Web-based tools from these providers, these calculations may overstate the respective ratios, but it is unclear by how much.)</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JaUDnjXMlq5YMxuKZGh7b/31210c8cd80779974450adfb4909f1cd/image7.png" />
          </figure><p>Of course, due in part to changes in crawling patterns, these ratios will change over time. The table above also displays the ratio changes as compared to the previous period, with changes ranging from increases of over 6% for DuckDuckGo and Yandex to Google’s 19.4% decrease. The week-over-week drop in Google’s ratio is related to an observed drop in crawling traffic from <code>GoogleBot</code> starting on June 24, while Yandex’s week-over-week growth is related to an observed increase in <code>YandexBot</code> crawling activity that started on June 21, as seen in the graphs below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UThXDeJepqM6jQCzXMvvw/f2d75d2202c33711f9eaa0a38c01a9f3/image3.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4FDYlEWYztxZCJZMg5RPvf/b4a3dac2dc4a06b709e2ef8d74ea1bc0/image10.png" />
          </figure><p>Radar’s Data Explorer includes a <a href="https://radar.cloudflare.com/explorer?dataSet=bots.crawlers&amp;groupBy=crawl_refer_ratio&amp;dt=2025-05-01_2025-05-28"><u>time series view of how these ratios change over time</u></a>, such as in the Baidu example below. The time series data is also available through an <a href="https://developers.cloudflare.com/api/resources/radar/subresources/bots/subresources/web_crawlers/methods/timeseries_groups/"><u>API endpoint</u></a>.</p>
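<p>Programmatic access follows the usual Cloudflare API pattern of a Bearer token sent to <code>api.cloudflare.com</code>. The sketch below mostly just constructs the request; the endpoint path and parameter names are assumptions inferred from the linked API reference and may differ, so consult the docs before relying on them.</p>

```python
import urllib.parse
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def crawler_timeseries_url(date_range="28d", agg_interval="1d"):
    # Path and parameter names are assumptions based on the linked
    # Radar API reference; verify against the docs before calling.
    query = urllib.parse.urlencode(
        {"dateRange": date_range, "aggInterval": agg_interval}
    )
    return f"{API_BASE}/radar/bots/web_crawlers/timeseries_groups?{query}"

def fetch_json(url, api_token):
    # Network call; requires a valid API token with Radar read access.
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```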
    <div>
      <h3>Patterns in referral traffic</h3>
      <a href="#patterns-in-referral-traffic">
        
      </a>
    </div>
    <p>Changes and trends in the underlying activity can be seen in the <a href="https://radar.cloudflare.com/explorer?dataSet=bots.crawlers&amp;groupBy=referer&amp;timeCompare=1"><u>associated Data Explorer view</u></a>, as well as in the raw data available via API endpoints (<a href="https://developers.cloudflare.com/api/resources/radar/subresources/bots/subresources/web_crawlers/methods/timeseries_groups/"><u>timeseries</u></a>, <a href="https://developers.cloudflare.com/api/resources/radar/subresources/bots/subresources/web_crawlers/methods/summary/"><u>summary</u></a>). Note that the shares of both referral and crawl traffic are relative to the sets of referrers and crawlers included in the graphs, and not Cloudflare traffic overall.</p><p>For example, in the referrer-centric view below, covering nearly the first four weeks of June 2025, we can see that referral traffic is dominated by search platform Google, with a fairly consistent diurnal pattern visible in the data. (The <code>google.*</code> entry covers referral traffic from the main <a href="http://google.com"><u>google.com</u></a> site, as well as local sites, such as <a href="http://google.es"><u>google.es</u></a> or <a href="http://google.com.tw"><u>google.com.tw</u></a>.) Because of prefetching driven by the use of <a href="https://developer.chrome.com/blog/search-speculation-rules"><u>speculation rules</u></a>, referral traffic coming from Google’s ASN (AS15169) is specifically excluded from analysis here, as it doesn’t represent active user consumption of content.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5pNnqBHkfJEEGioN1dhpi5/65251de2ad63e0cef0ee2340e79f2f4b/image14.png" />
          </figure><p>Clear diurnal patterns are also visible in the referral request shares of other search platforms, although the request shares are a fraction of what is seen from Google.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5flVZwDhtYlseH5uYDk76U/a03e9957a10983e87e4fcd8f6a9e59bf/image4.png" />
          </figure><p>Throughout June, the share of traffic referred by AI platforms was significantly lower, even in aggregate, than the share of traffic referred by search platforms.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/705m9ac6GXGgT4qshubY70/3c6c0ca43be66114be53fa607bcb857d/image8.png" />
          </figure>
    <div>
      <h3>Changes in crawling traffic</h3>
      <a href="#changes-in-crawling-traffic">
        
      </a>
    </div>
    <p>As noted above, the change in ratio values over time can be driven by shifts in crawling activity. These shifts are visible in the <a href="https://radar.cloudflare.com/explorer?dataSet=bots.crawlers&amp;groupBy=user_agent&amp;timeCompare=1"><u>crawling traffic shares available in Data Explorer</u></a>, as well as in the raw data available via API endpoints (<a href="https://developers.cloudflare.com/api/resources/radar/subresources/bots/subresources/web_crawlers/methods/timeseries_groups/"><u>timeseries</u></a>, <a href="https://developers.cloudflare.com/api/resources/radar/subresources/bots/subresources/web_crawlers/methods/summary/"><u>summary</u></a>). In the crawler-centric view below, covering nearly the first four weeks of June 2025, we can see that the share of requests related to Google’s crawling activity for both their <code>Googlebot</code> and <code>GoogleOther</code> identifiers falls over the course of the month, with several peak/valley periods. A similar pattern <a href="https://radar.cloudflare.com/explorer?dataSet=http&amp;loc=as15169&amp;dt=2025-05-31_2025-06-27"><u>observed in HTTP request traffic from Google’s AS15169</u></a> during that same time period loosely matches this observed drop in share.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1K92yRMz57QrRH7iPvNH4V/0f7d7816fb3b22232dbee8359127b367/image11.png" />
          </figure><p>In addition, it appears that OpenAI’s <code>GPTBot</code> saw multiple periods where little-to-no crawling activity was observed throughout the month.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/sXdBr25Y4toS2t3nvPKMm/e1313d3356130bc333a2e03574e56661/image13.png" />
          </figure>
    <div>
      <h2>What this means for content providers</h2>
      <a href="#what-this-means-for-content-providers">
        
      </a>
    </div>
<p>These ratios directly impact the viability of content publication on the Internet. While they will vary over time, the trend continues to be more crawls for every referral. Legacy search index crawlers would scan your content a couple of times, or less, for each visitor sent. Making a site available to crawlers made its revenue model more viable, not less.</p><p>The new data we are observing suggests that is no longer the case. These models continue to consume more content, more frequently, while sending the same or less traffic back to the sources of that content.</p><p>We have <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>released new tools</u></a> over the last year to help site owners take control back. With a single click, publishers can <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block the kinds of AI crawlers that train against their content</a>. And today, <a href="https://blog.cloudflare.com/introducing-pay-per-crawl"><u>we announced new ways</u></a> to make the exchange of value fair for both sides of the equation. However, we continue to recommend that content creators audit and then enforce their preferred policies for AI crawlers.</p>
    <div>
      <h2>One more thing…</h2>
      <a href="#one-more-thing">
        
      </a>
    </div>
    <p>In addition to providing these new insights around crawling and referral traffic and associated trends, we’ve also taken the opportunity to launch expanded Verified Bots content. The <a href="https://radar.cloudflare.com/bots"><u>Bots page on Cloudflare Radar</u></a> includes a paginated list of <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/"><u>Verified Bots</u></a>, displaying the bot name, owner, category, and rank (based on request volume). This list has now been expanded into a <a href="https://radar.cloudflare.com/bots/directory"><u>standalone directory in a new Bots section</u></a>. The directory, shown below, displays a card for each Verified Bot, showing the bot name, a description, the bot owner and category, and verification status. Users can search the directory by bot name, owner, or description, and can also filter by category (selecting just <i>Monitoring &amp; Analytics</i> bots, for example).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nTytFwnB1NVuwnAeAduX8/40efad4c333d8046d28a7ee44a8d91ca/image2.png" />
          </figure><p>Clicking on a bot name within a card brings up a bot-specific page that includes metadata about the bot, information on how the bot’s user agent is represented in <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/User-Agent"><u>HTTP request headers</u></a> and how it should be <a href="https://datatracker.ietf.org/doc/html/rfc9309#name-the-user-agent-line"><u>specified in robots.txt directives</u></a>, and a traffic graph that shows associated HTTP request volume trends for the selected time period (with a default comparison to the previous period). Associated data is also available via the <a href="https://developers.cloudflare.com/api/resources/radar/subresources/bots/"><u>API</u></a>. As we add additional information to these bot-specific pages in the future, we will document the updates in <a href="https://developers.cloudflare.com/changelog/?product=radar"><u>Changelog entries</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1SY1pwRzVnvC1sFNANrPxx/003260c3fdd3792cdff55d3a95628592/image12.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[Internet Traffic]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bots]]></category>
            <guid isPermaLink="false">2pLY5VumUNgntdcfkU9Ua3</guid>
            <dc:creator>David Belson</dc:creator>
            <dc:creator>Sam Rhea</dc:creator>
        </item>
        <item>
            <title><![CDATA[Control content use for AI training with Cloudflare’s managed robots.txt and blocking for monetized content]]></title>
            <link>https://blog.cloudflare.com/control-content-use-for-ai-training/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is making it easier for publishers and content creators of all sizes to prevent their content from being scraped for AI training by managing robots.txt on their behalf.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare is giving all website owners two new tools to easily control whether AI bots are allowed to access their content for model training. First, customers can let Cloudflare <b>create and manage a robots.txt file</b>, creating the appropriate entries to let crawlers know not to access their site for AI training. Second, all customers can choose a new option to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block AI bots</a> <b>only on portions of their site that are monetized through ads</b>.</p>
    <div>
      <h2>The new generation of AI crawlers</h2>
      <a href="#the-new-generation-of-ai-crawlers">
        
      </a>
    </div>
<p>Creators that monetize their content by showing ads depend on traffic volume. Their livelihood is directly linked to the number of views their content receives. These creators have allowed crawlers on their sites for decades, for a simple reason: search crawlers such as <code>Googlebot</code> made their sites more discoverable, and drove more traffic to their content. Google benefitted from delivering better search results to their customers, and the site owners also benefitted through increased views, and therefore increased revenues.</p><p>But recently, a new generation of crawlers has appeared: bots that crawl sites to gather data for training AI models. While these crawlers operate in the same technical way as search crawlers, the relationship is no longer symbiotic. AI training crawlers use the data they ingest from content sites to answer questions for their own customers directly, within their own apps. They typically send much less traffic back to the site they crawled. Our <a href="https://radar.cloudflare.com/"><u>Radar</u></a> team did an analysis of crawls and referrals for sites behind Cloudflare. As HTML pages are arguably the most valuable content for these crawlers, we <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/"><u>calculated crawl ratios</u></a> by dividing the total number of requests from relevant user agents associated with a given search or AI platform where the response was of <code>Content-type: text/html</code> by the total number of requests for HTML content where the <code>Referer</code> header contained a hostname associated with a given search or AI platform. As of June 2025, we find that Google crawls websites about 14 times for every referral. But for AI companies, the <a href="https://radar.cloudflare.com/ai-insights#crawl-to-refer-ratio"><u>crawl-to-refer ratio</u></a> is orders of magnitude greater. In June 2025, <b>OpenAI’s crawl-to-referral ratio was 1,700:1, Anthropic’s 73,000:1</b>. This clearly breaks the “crawl in exchange for traffic” relationship that previously existed between search crawlers and publishers. (Please note that this calculation reflects our best estimate, recognizing that traffic referred by native apps may not always be attributed to a provider due to the lack of a <code>Referer</code> header, which may affect the ratio.)</p><p>And while sites can use robots.txt to tell these bots not to crawl their site, most don’t take this first step. We found that only about <a href="https://radar.cloudflare.com/ai-insights#ai-user-agents-found-in-robotstxt"><b><u>37% of the top 10,000 domains currently have a robots.txt file</u></b></a>, showing that robots.txt is underutilized in this age of evolving crawlers.</p><p>That’s where Cloudflare comes in. Our mission is to help build a better Internet, and a better Internet is one with a huge thriving ecosystem of independent publishers. So, we’re taking action to keep that ecosystem alive.</p>
    <div>
      <h2>Giving ALL customers full control</h2>
      <a href="#giving-all-customers-full-control">
        
      </a>
    </div>
    <p>Protecting content creators isn’t new for Cloudflare. In July 2024, we gave everyone on the Cloudflare network a simple way to <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>block all AI scrapers with a single click</u></a> for free. We’ve already seen <b>more than 1 million customers enable this feature</b>, which has given us some interesting data.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2B8KAmaP6DrMEMW5YSjLYP/d9eb0f67a998b730373a27aa707ade9d/image5.png" />
          </figure><p>Since our last update, we can see that <code><b>Bytespider</b></code><b>, our previous top bot, has seen traffic volume decline 71.45% since the first week of July 2024</b>. During the same time, we saw an increased number of <code>Bytespider</code> requests that customers chose to specifically block. In contrast, <code>GPTBot</code> traffic volume has grown significantly as it has become more popular, now even surpassing traffic we see from big traditional tech players like Amazon and ByteDance.</p><p>The share of sites accessed by particular crawlers has gone down across the board since our last update. Previously, <code>Bytespider</code> accessed &gt;40% of websites protected by Cloudflare, but that number has dropped to only 9.37%. <code><b>GPTBot</b></code><b> has taken the top spot for most sites accessed</b>, but while its request volume has grown significantly (noted above), the share of sites it crawls has actually decreased since last year from 35.46% to 28.97%, with an increase in customers blocking.</p><table><tr><td><p>AI Bot</p></td><td><p>Share of Websites Accessed</p></td></tr><tr><td><p>GPTBot</p></td><td><p>28.97%</p></td></tr><tr><td><p>Meta-ExternalAgent</p></td><td><p>22.16%</p></td></tr><tr><td><p>ClaudeBot</p></td><td><p>18.80%</p></td></tr><tr><td><p>Amazonbot</p></td><td><p>14.56%</p></td></tr><tr><td><p>Bytespider</p></td><td><p>9.37%</p></td></tr><tr><td><p>GoogleOther</p></td><td><p>9.31%</p></td></tr><tr><td><p>ImageSiftBot</p></td><td><p>4.45%</p></td></tr><tr><td><p>Applebot</p></td><td><p>3.77%</p></td></tr><tr><td><p>OAI-SearchBot</p></td><td><p>1.66%</p></td></tr><tr><td><p>ChatGPT-User</p></td><td><p>1.06%</p></td></tr></table><p>And while AI Search and AI Assistant crawling related activity has exploded in popularity in the last 6 months, we still see their total traffic pale in comparison to AI training crawl activity, which has seen a <b>65% increase in traffic over the past 6 months</b>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7nOWMQs8IzgS3RfrXHaVT1/b1b31024a92b70a3f39083b376bb3934/image4.png" />
          </figure><p>To this end, we launched <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>free granular auditing</u></a> in September 2024 to help customers understand which crawlers were accessing their content most often, and created simple templates to block all or specific crawlers. And in December 2024, we made it easy for publishers to automatically block <a href="https://blog.cloudflare.com/ai-audit-enforcing-robots-txt/"><u>crawlers that weren’t respecting robots.txt</u></a>. But we realized many sites didn’t have the time to create or manage their own robots.txt file. Today, we’re going two steps further.</p>
    <div>
      <h2>Step 1: fully managed robots.txt</h2>
      <a href="#step-1-fully-managed-robots-txt">
        
      </a>
    </div>
    <p>When it comes to managing your website’s visibility to search engine crawlers and other bots, the <code>robots.txt</code> file is a key player. This simple text file acts like a traffic controller, signaling to bots which parts of the website they should or should not access. We can think of <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><u>robots.txt</u></a> as a "Code of Conduct" sign posted at a community pool, listing general dos and don'ts, according to the pool owner’s wishes. While the sign itself does not enforce the listed directives, well-behaved visitors will still read the sign and follow the instructions they see. On the other hand, poorly-behaved visitors who break the rules risk <a href="https://blog.cloudflare.com/ai-audit-enforcing-robots-txt/"><u>getting themselves banned</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6oGxSRxy3sU88o4TZP7p42/aea1d7bbf5e57eb133ce8cdfae88dc37/image2.png" />
</figure><p>What do these files actually look like? Take Google’s as an example, visible to anyone at <a href="https://www.google.com/robots.txt"><u>https://www.google.com/robots.txt</u></a>. Parsing its contents, you'll notice four directives in the set of instructions: <b>User-agent</b>, <b>Disallow</b>, <b>Allow</b>, and <b>Sitemap</b>. In a <code>robots.txt</code> file, the <b>User-agent</b> directive specifies which bots the rules apply to. The <b>Disallow</b> directive tells those bots which parts of the website they should avoid. In contrast, the <b>Allow</b> directive grants specific bots permission to access certain areas. Finally, the <a href="https://www.sitemaps.org/index.html"><b>Sitemap</b> directive</a> points a bot to a list of the site’s pages, so that it won’t miss any important ones. The <a href="https://www.ietf.org/"><u>Internet Engineering Task Force (IETF)</u></a> formalized the definition and language for the Robots Exclusion Protocol in <a href="https://datatracker.ietf.org/doc/html/rfc9309"><u>RFC 9309</u></a>, specifying the exact syntax and precedence of these directives. It also outlines how crawlers should handle errors or redirects while stressing that compliance is <i>voluntary</i> and does not constitute access control. </p>
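<p>Putting the four directives together, a minimal robots.txt might look like the following (the paths, bot name, and sitemap URL are placeholders):</p>

```txt
# Rules for all crawlers
User-agent: *
Disallow: /admin/
Allow: /admin/public/

# Rules for one specific crawler
User-agent: AIBot
Disallow: /

Sitemap: https://www.example.com/sitemap.xml
```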
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/79JML5EIN1f4NVzRankehO/20a2c99ccaca62e7718c9d66bb8585d5/image10.png" />
          </figure><p>Website owners should have agency over AI bot activity on their websites. We mentioned that only 37% of the top 10,000 domains on Cloudflare even have a robots.txt file. Of those robots files that do exist, few include Disallow directives for the <a href="https://radar.cloudflare.com/ai-insights#ai-bot-crawler-traffic"><i><u>top</u></i><u> AI Bots</u></a> that we see on a daily basis.  For instance, as of publication, <a href="https://radar.cloudflare.com/explorer?dataSet=robots_txt&amp;groupBy=user_agents%2Fdirective&amp;filters=directive%253DDISALLOW"><code><u>GPTBot</u></code><u> is only disallowed in 7.8% of the robots.txt files</u></a> found for the top domains; <code>Google-Extended</code> only shows up in 5.6%; <code>anthropic-ai</code>, <code>PerplexityBot</code>, <code>ClaudeBot</code>, and <code>Bytespider</code> each show up in under 5%. Furthermore, the difference between the 7.8% of Disallow directives for <code>GPTBot</code> and the ~5% of Disallow directives for other major AI crawlers suggests a gap between the desire to <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">prevent your content from being used for AI model training</a> and the proper configuration that accomplishes this by calling out bots like <code>Google-Extended</code>. (After all, there’s more to stopping AI crawlers than disallowing <code>GPTBot</code>.)</p><p>Along with viewing the most active bots and crawlers, Cloudflare Radar also shares weekly updates on how websites are handling <a href="https://radar.cloudflare.com/ai-insights?cf_target_id=3D982CE3E88C4E32F9D4AA79E7869F7C#ai-user-agents-found-in-robotstxt"><u>AI bots in their robots.txt files</u></a>. 
We can examine two snapshots below, one from <a href="https://radar.cloudflare.com/ai-insights?dateStart=2025-06-23&amp;dateEnd=2025-06-24"><u>June 2025</u></a> and the other from <a href="https://radar.cloudflare.com/ai-insights?dateStart=2025-01-26&amp;dateEnd=2025-02-01"><u>January 2025</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30Wc2jLvDqSMBKF5QxU2yc/f18b44d8ba9d11687c0224b40cf12675/image6.png" />
          </figure><p><sub><i>Radar snapshot from the week of June 23, 2025, showing the top AI user agents mentioned in the Disallow directive in robots.txt files across the top 10,000 domains. The 3 bots with the highest number of Disallows are GPTBot, CCBot, and facebookexternalhit.</i></sub></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/T9krKSMLRud7sYgG7ahei/8632afeba6d22baa304ae9fd901e187a/image9.png" />
          </figure><p><sub><i>Radar snapshot from the week of January 26, 2025, showing the top AI user agents mentioned in the Disallow directive in robots.txt files across the top 10,000 domains. The 3 bots with the highest number of Disallows are GPTBot, CCBot, and anthropic-ai.</i></sub></p><p>From the above data, we also observe that fewer than 100 new robots.txt files have been added among the top domains between January and June. One visually striking change is the ratio of dark blue to light blue: compared to January, there is a steep decrease in “Partially Disallowed” permissions; websites are now flat-out choosing “Fully Disallowed” for the top AI crawlers, including <code>GPTBot</code>, <code>CCBot</code>, and <code>Google-Extended</code>. This underscores the changing landscape of web crawling, particularly the relationship of trust between website owners and AI crawlers.</p>
    <div>
      <h3>Putting up a guardrail with Cloudflare’s managed robots.txt</h3>
      <a href="#putting-up-a-guardrail-with-cloudflares-managed-robots-txt">
        
      </a>
    </div>
    <p>Many website owners have told us they’re in a tricky spot in this new era of AI crawlers. They’ve poured time and effort into creating original content, have published it on their own sites, and naturally want it to reach as many people as possible. To do that, website owners make their sites accessible to search engine crawlers, which index the content and make it discoverable in search results. But with the rise of AI-powered crawlers, that same content is now being scraped not just for indexing, but also to train AI models, often without the creator’s explicit consent. Take <code>Googlebot</code>, for example: it’s an absolute requirement for most website owners to allow for SEO. But Google crawls with user agent <code>Googlebot</code> for both SEO <i>and</i> AI training purposes. Specifically disallowing <a href="https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers#google-extended"><code><u>Google-Extended</u></code></a> (but not <code>Googlebot</code>) in your robots.txt file is what communicates to Google that you do not want your content to be crawled to feed AI training.</p><p>So, what if you don’t want your content to serve as training data for the next AI model, but don’t have the time to manually maintain an up-to-date robots.txt file? <b>Enter Cloudflare’s new managed robots.txt offering.</b> Once enabled, Cloudflare will automatically update your existing robots.txt or create a robots.txt file on your site that includes directives asking popular AI bot operators to not use your content for AI model training. For instance, <b>Cloudflare’s managed robots.txt signals your preference to </b><code><b>Google-Extended</b></code><b> and </b><a href="https://support.apple.com/en-us/119829"><code><b><u>Applebot-Extended</u></b></code></a><b>, amongst others, that they should not crawl your site for AI training,</b> while keeping your domain(s) SEO-friendly.</p>
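<p>Directives of roughly the following shape express that preference. This is an illustration only, and the actual contents of Cloudflare’s managed robots.txt may differ:</p>

```txt
# Example only: opt out of AI training crawls
# while leaving search crawlers like Googlebot untouched.
User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```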
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2SLxL9LMN1IK2WXOIq8ezP/786db3e1cbc24b1cce4c337b8136d3a7/image3.png" />
          </figure><p><sup><i>Cloudflare dashboard snapshot of the new managed robots.txt activation toggle </i></sup></p><p>This feature is available to all customers, meaning anyone can <a href="https://developers.cloudflare.com/bots/additional-configurations/managed-robots-txt/"><u>enable this today</u></a> from the Cloudflare dashboard. Once enabled, website owners who previously had no robots.txt file will now have Cloudflare’s managed bot directives live on their website. What about website owners who already have a robots.txt file? The contents of Cloudflare’s managed robots.txt will be <i>prepended</i> to site owners’ existing file. This way, their existing Block directives – and the time and rationale put into customizing this file – are honored, while still ensuring the website has AI crawler guardrails managed by Cloudflare.</p><p>As the AI bot landscape changes with new bots on the rise, Cloudflare will keep our customers a step ahead by updating the directives on our managed robots.txt, so they don’t have to worry about maintaining things on their own. Once enabled, customers won’t need to take any action in order for any updates of the managed robots.txt content to go live on their site. </p><p>We believe that managing crawling is key to protecting the open Internet, so we’ll also be encouraging every new site that onboards to Cloudflare to enable our managed robots.txt. When you onboard a new site, you’ll see the following options for managing AI crawlers:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6l4RpmHHf0OGP44XyDnZra/66c30bb8080d3107ab93af55dc6a8c6e/Screenshot_2025-06-30_at_3.59.54%C3%A2__PM.png" />
</figure><p>This makes it effortless to ensure that <b>every new customer or domain onboarded to Cloudflare gives clear directives for how they want their content used.</b></p>
    <div>
      <h3>Under the hood: technical implementation</h3>
      <a href="#under-the-hood-technical-implementation">
        
      </a>
    </div>
<p>To implement this feature, we developed a new module that intercepts all inbound HTTP requests for <code>/robots.txt</code>. For all such requests, we’ll check whether the zone has opted in to use Cloudflare’s managed robots.txt by reading a value from our <a href="https://blog.cloudflare.com/introducing-quicksilver-configuration-distribution-at-internet-scale/"><u>distributed key-value store</u></a>. If it has, the module responds with Cloudflare’s managed robots.txt directives, prepended to the origin’s robots.txt if there is an existing file. We prepend so we can add a generalized header that instructs all bots on the customer’s preferences for data use, as defined in the <a href="https://www.ietf.org/archive/id/draft-it-aipref-attachment-00.html#name-introduction"><u>IETF AI preferences proposal</u></a>. Note that in robots.txt, the <a href="https://datatracker.ietf.org/doc/html/rfc9309#section-2.2.2"><u>most specific match</u></a> <i>must</i> always be used, and since our disallow expressions are scoped to cover everything, we can ensure a directive we prepend will never conflict with a more targeted customer directive. If the customer has <i>not</i> enabled this feature, the request is forwarded to the origin server as usual, using whatever the customer has written in their own robots.txt file. (While caching the origin’s robots.txt could reduce latency by eliminating a round trip to the origin, the impact on overall page load times would be minimal, as robots.txt requests comprise a small fraction of total traffic. Adding cache update/invalidation would introduce complexity with limited benefit, so we prioritized functionality and reliability in our implementation.)</p>
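<p>The request flow described above can be sketched as follows (in Python for illustration; Cloudflare’s actual implementation is not public, and <code>kv_store</code> and <code>fetch_origin</code> are hypothetical stand-ins for the distributed key-value store and the origin fetch):</p>

```python
MANAGED_DIRECTIVES = (
    "# Illustrative managed directives; Cloudflare maintains the real content.\n"
    "User-agent: Google-Extended\n"
    "Disallow: /\n"
)

def handle_request(path, zone_id, kv_store, fetch_origin):
    """Serve /robots.txt with managed directives prepended when the
    zone has opted in; otherwise pass the request through unchanged."""
    if path != "/robots.txt":
        return fetch_origin(path)  # not robots.txt: normal handling
    if not kv_store.get(f"managed_robots_txt:{zone_id}"):
        return fetch_origin(path)  # feature off: serve the origin file as-is
    origin_body = fetch_origin(path) or ""  # origin may have no robots.txt
    # Prepend so existing customer directives are preserved; under RFC 9309
    # the most specific match wins, so these broad rules never override a
    # more targeted customer rule.
    return MANAGED_DIRECTIVES + "\n" + origin_body
```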
    <div>
      <h2>Step 2: block, but only where you show ads</h2>
      <a href="#step-2-block-but-only-where-you-show-ads">
        
      </a>
    </div>
    <p>Adding an entry to your robots.txt file is the first step to telling AI bots not to crawl you. But robots.txt is an honor system. Nothing forces bots to follow it. That’s why we introduced our <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>one-click managed rule</u></a> to block all AI bots across your zone. However, some customers want AI bots to visit certain pages, like developer or support documentation. For customers who are hesitant to block everywhere, we have a brand-new option: let us detect when ads are shown on a hostname, and we will block AI bots ONLY on that hostname. Here’s how we do it.</p><p>First, we use multiple techniques to identify whether a request is coming from an AI bot. The easiest technique is to identify well-behaved crawlers that publicly declare their user agent and use dedicated IP ranges. Often we work directly with these bot makers to add them to our <a href="https://radar.cloudflare.com/traffic/verified-bots"><u>Verified Bot list</u></a>.</p><p>Many bot operators act in good faith by publicly publishing their user agents, or even <a href="https://blog.cloudflare.com/verified-bots-with-cryptography/"><u>cryptographically verifying their bot requests</u></a> directly with Cloudflare. Unfortunately, some attempt to appear like a real browser by using a spoofed user agent. Our global machine learning models have long recognized this activity as bot traffic, even when operators lie about their user agent. When bad actors attempt to crawl websites at scale, they generally use tools and frameworks that we’re able to fingerprint, and we use the scale of Cloudflare’s network, which serves over 57 million requests per second on average, to understand how much we should trust each fingerprint. We compute global aggregates across many signals, and based on these signals, our models are able to consistently and <a href="https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/"><u>appropriately flag traffic from evasive AI bots</u></a>.</p><p>When we see a request from an AI bot, our system checks whether we have previously identified ads in the response served by the target page. To do this, we inspect the “response body”: the raw HTML code of the web page being sent back. After parsing the HTML document, we perform a comprehensive scan for code patterns commonly found in <a href="https://support.google.com/adsense/answer/9183549?hl=en#:~:text=An%20ad%20unit%20is%20one,flexibility%20in%20terms%20of%20customization."><u>ad units</u></a>, which signals to us that the page is serving an ad. Examples of such code would be:</p>
            <pre><code>&lt;div class="ui-advert" data-role="advert-unit" data-testid="advert-unit" data-ad-format="takeover" data-type="" data-label="" style=""&gt;
&lt;script&gt;
....
&lt;/script&gt;
&lt;/div&gt;</code></pre>
            <p>Here, the <code>div</code> container has the <code>ui-advert</code> class commonly used for advertising. Similarly, links to commonly used ad servers like Google Syndication are a good signal as well, such as the following:</p>
            <pre><code>&lt;link rel="dns-prefetch" href="https://pagead2.googlesyndication.com/"&gt;

&lt;script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-1234567890123456" crossorigin="anonymous"&gt;&lt;/script&gt;</code></pre>
            <p>By streaming and directly parsing small chunks of the response using our ultra-fast <a href="https://blog.cloudflare.com/html-parsing-2/#lol-html"><u>LOL HTML parser</u></a>, we can perform scans without adding any latency to the inspected response.</p><p>So as not to reinvent the wheel, we are adopting techniques similar to those that ad blockers have been using for years. Ad blockers fundamentally perform two separate tasks to block advertisements in a browser: they block the browser from fetching resources from ad servers, and they suppress displaying HTML elements that contain ads. For this, ad blockers rely on large filter lists such as <a href="https://easylist.to/index.html"><u>EasyList</u></a>, which contain both URL block filters (patterns that outgoing request URLs are matched against, with matching requests blocked) and CSS selectors designed to match HTML ad elements.</p><p>We can use both of these techniques to detect if an HTML response contains ads by checking external resources (e.g. content referenced by HREF or SCRIPT tags) against URL block filters, and the HTML elements themselves against CSS selectors. Because we do not need to block every single advertisement on a site, only to detect the overall presence of ads, we can shrink the filter set from more than 40,000 entries in EasyList to the 400 most commonly seen ones, improving computational efficiency without losing detection efficacy.</p><p>Because some sites load ads dynamically rather than directly in the returned HTML (partially to avoid ad blocking), we enrich this first information source with data from <a href="https://developers.cloudflare.com/fundamentals/reference/policies-compliances/content-security-policies/"><u>Content Security Policy (CSP)</u></a> reports. The Content Security Policy standard is a security mechanism that lets web developers control the resources (like scripts, stylesheets, and images) a browser is allowed to load for a specific web page. Browsers send reports about loaded resources to a CSP management system, which for many sites is Cloudflare’s <a href="https://developers.cloudflare.com/page-shield/"><u>Page Shield</u></a> product. These reports allow us to relate scripts loaded from ad servers directly with page URLs. Both of these information sources are consumed by our <a href="https://www.cloudflare.com/en-gb/learning/security/glossary/what-is-endpoint/"><u>endpoint management service</u></a>, which then matches incoming requests against hostnames that we already know are serving ads.</p><p>We do all of this on every request for any customer who opts in, even free customers. </p><p>To enable this feature, simply navigate to the <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/bots/configure"><u>Security &gt; Settings &gt; Bots</u></a> section of the Cloudflare dashboard, and choose either <code>Block on pages with Ads</code> or <code>Block Everywhere</code>.</p>
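<p>As a rough illustration of the filter-matching idea, a handful of EasyList-style patterns can be checked against a response body as below. This is a buffered Python sketch for clarity, not Cloudflare's implementation (which streams chunks through LOL HTML); the pattern list is a made-up sample, not the real 400-filter set.</p>

```python
import re

# A few illustrative EasyList-style filters (assumptions for this
# sketch; the production system uses ~400 commonly seen filters).
AD_URL_PATTERNS = [
    re.compile(r"googlesyndication\.com"),
    re.compile(r"doubleclick\.net"),
]
AD_CSS_CLASS_PATTERNS = [
    re.compile(r'class="[^"]*\badvert\b[^"]*"'),
    re.compile(r'class="[^"]*\bad-unit\b[^"]*"'),
]

def page_serves_ads(html):
    """Return True if the HTML references a known ad server or contains
    an element whose class matches an ad-style CSS selector."""
    # URL block filters: external resources pointing at ad servers.
    if any(p.search(html) for p in AD_URL_PATTERNS):
        return True
    # CSS-selector-style filters: ad container elements.
    return any(p.search(html) for p in AD_CSS_CLASS_PATTERNS)
```

<p>Because only the overall presence of ads matters here, a single match is enough to flag the hostname; there is no need to locate every ad element.</p>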
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/yoGKnsD7fuG9K8MysCMHl/91fb4bb69625d8c85a8dcf4cfb21f6de/unnamed__1_.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/64xCpJrlgY1WtsNI0CeeT5/975e6a329b605e11445faafa038181aa/unnamed__2_.png" />
          </figure>
    <div>
      <h2>The AI bot hunt: finding and identifying bots</h2>
      <a href="#the-ai-bot-hunt-finding-and-identifying-bots">
        
      </a>
    </div>
    <p>The AI bot landscape has exploded and continues to grow on an exponential trajectory as more and more operators come online. At Cloudflare, our team of security researchers is constantly identifying and classifying different AI-related crawlers and scrapers across our network. </p><p>There are two major ways in which we track AI bots and identify those that are poorly behaved:</p><p>1. Our customers play a crucial role by directly submitting reports of misbehaving AI bots that may not yet be classified by Cloudflare. (If you have an AI bot that comes to mind here, we’d love for you to let us know through our <a href="https://docs.google.com/forms/d/14bX0RJH_0w17_cAUiihff5b3WLKzfieDO4upRlo5wj8/"><u>bot submission form</u></a> today.) Once such a bot comes to our attention, our security analysts investigate to determine how it should be categorized.</p><p>2. We’re able to derive insights through analysis of the massive scale of our customers’ traffic that we observe. Specifically, we can see which AI agents visit which websites and when, drawing out trends or patterns that might make a website owner want to disallow a given AI bot. This bird’s-eye view of abusive AI bot behavior was paramount as we started to determine the content of a managed robots.txt.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Our new <a href="https://developers.cloudflare.com/bots/additional-configurations/managed-robots-txt/"><u>managed robots.txt</u></a> and blocking AI bots on pages with ads features are available to <i>all Cloudflare customers</i>, including everyone on a Free plan. We encourage customers to start using them today to take control over how the content on their websites gets used. Looking ahead, Cloudflare will monitor the <a href="https://ietf-wg-aipref.github.io/drafts/draft-ietf-aipref-vocab.html"><u>IETF’s pending proposal</u></a> allowing website publishers to control how automated systems use their content, and update our managed robots.txt accordingly. We will also continue to provide more granular control around AI bot management and investigate new distinguishing signals as AI bots become more and more sophisticated. And if you’ve seen suspicious behavior from an AI scraper, contribute to the Internet ecosystem by <a href="https://docs.google.com/forms/d/14bX0RJH_0w17_cAUiihff5b3WLKzfieDO4upRlo5wj8/"><u>letting us know</u></a>!</p>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Impact]]></category>
            <guid isPermaLink="false">44HBJInoaQRMqVRmSaqjg6</guid>
            <dc:creator>Jin-Hee Lee</dc:creator>
            <dc:creator>Dipunj Gupta</dc:creator>
            <dc:creator>Brian Mitchell</dc:creator>
            <dc:creator>Reid Tatoris</dc:creator>
            <dc:creator>Henry Clausen</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing pay per crawl: Enabling content owners to charge AI crawlers for access]]></title>
            <link>https://blog.cloudflare.com/introducing-pay-per-crawl/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ Pay per crawl is a new feature to allow content creators to charge AI crawlers for access to their content.  ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>A changing landscape of consumption </h2>
      <a href="#a-changing-landscape-of-consumption">
        
      </a>
    </div>
    <p>Many publishers, content creators and website owners currently feel like they have a binary choice — either leave the front door wide open for AI to consume everything they create, or create their own walled garden. But what if there was another way?</p><p>At Cloudflare, we started from a simple principle: we wanted content creators to have control over who accesses their work. If a creator wants to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">block all AI crawlers</a> from their content, they should be able to do so. If a creator wants to allow some or all AI crawlers full access to their content for free, they should be able to do that, too. Creators should be in the driver’s seat.</p><p>After hundreds of conversations with news organizations, publishers, and large-scale social media platforms, we heard a consistent desire for a third path: They’d like to allow AI crawlers to access their content, but they’d like to get compensated. Currently, that requires knowing the right individual and striking a one-off deal, which is an insurmountable challenge if you don’t have scale and leverage. </p>
    <div>
      <h2>What if I could charge a crawler? </h2>
      <a href="#what-if-i-could-charge-a-crawler">
        
      </a>
    </div>
    <p>We believe your choice need not be binary — there should be a third, more nuanced option: <b>You can charge for access.</b> Instead of a blanket block or uncompensated open access, we want to empower content owners to monetize their content at Internet scale.</p><p>We’re excited to help dust off a mostly forgotten piece of the web: <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/402"><b><u>HTTP response code 402</u></b></a>.</p>
    <div>
      <h2>Introducing pay per crawl</h2>
      <a href="#introducing-pay-per-crawl">
        
      </a>
    </div>
    <p><a href="http://www.cloudflare.com/paypercrawl-signup/">Pay per crawl</a>, in private beta, is our first experiment in this area. </p><p>Pay per crawl integrates with existing web infrastructure, leveraging <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP status codes</a> and established authentication mechanisms to create a framework for paid content access. </p><p>Each time an AI crawler requests content, it either presents payment intent via request headers for successful access (<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/200"><u>HTTP response code 200</u></a>), or receives a <code>402 Payment Required</code> response with pricing. Cloudflare acts as the Merchant of Record for pay per crawl and also provides the underlying technical infrastructure.</p>
    <div>
      <h3>Publisher controls and pricing</h3>
      <a href="#publisher-controls-and-pricing">
        
      </a>
    </div>
    <p>Pay per crawl grants domain owners full control over their monetization strategy. They can define a flat, per-request price across their entire site. Publishers will then have three distinct options for a crawler:</p><ul><li><p><b>Allow:</b> Grant the crawler free access to content.</p></li><li><p><b>Charge:</b> Require payment at the configured, domain-wide price.</p></li><li><p><b>Block:</b> Deny access entirely, with no option to pay.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2PhxxI7f3Teb521mPRFQUL/1ecfd01f60f165b35c27ab9457f8b152/image3.png" />
          </figure><p>An important mechanism here is that even if a crawler doesn’t have a billing relationship with Cloudflare, and thus couldn’t be charged for access, a publisher can still choose to ‘charge’ them. This is the functional equivalent of a network level block (an HTTP <code>403 Forbidden</code> response where no content is returned) — but with the added benefit of telling the crawler there could be a relationship in the future. </p><p>While publishers currently can define a flat price across their entire site, they retain the flexibility to bypass charges for specific crawlers as needed. This is particularly helpful if you want to allow a certain crawler through for free, or if you want to negotiate and execute a content partnership outside the pay per crawl feature. </p><p>To ensure integration with each publisher’s existing security posture, Cloudflare enforces Allow or Charge decisions via a rules engine that operates only after existing WAF policies and <a href="https://www.cloudflare.com/learning/bots/what-is-bot-management/">bot management</a> or bot blocking features have been applied.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NI9GUkR8RmmApQyOgb1mI/4f77c199ccdc5ebc166204cdaec72c48/image2.png" />
          </figure>
    <div>
      <h3>Payment headers and access</h3>
      <a href="#payment-headers-and-access">
        
      </a>
    </div>
    <p>As we were building the system, we knew we had to solve an incredibly important technical challenge: ensuring we could charge a specific crawler, but prevent anyone from spoofing that crawler. Thankfully, there’s a way to do this using <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>Web Bot Auth</u></a> proposals.</p><p>For crawlers, <a href="https://blog.cloudflare.com/web-bot-auth/"><u>this involves:</u></a></p><ul><li><p>Generating an Ed25519 key pair, and making the <a href="https://datatracker.ietf.org/doc/html/rfc7517"><u>JWK</u></a>-formatted public key available in a hosted directory.</p></li><li><p>Registering with Cloudflare to provide the URL of your key directory and user agent information.</p></li><li><p>Configuring your crawler to use <a href="https://datatracker.ietf.org/doc/rfc9421/"><u>HTTP Message Signatures</u></a> with each request.</p></li></ul><p>Once registration is accepted, crawler requests should always include <code>signature-agent</code>, <code>signature-input</code>, and <code>signature</code> headers to identify your crawler and discover paid resources.</p>
            <pre><code>GET /example.html
Signature-Agent: "https://signature-agent.example.com"
Signature-Input: sig2=("@authority" "signature-agent")
 ;created=1735689600
 ;keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U"
 ;alg="ed25519"
 ;expires=1735693200
 ;nonce="e8N7S2MFd/qrd6T2R3tdfAuuANngKI7LFtKYI/vowzk4lAZYadIX6wW25MwG7DCT9RUKAJ0qVkU0mEeLElW1qg=="
 ;tag="web-bot-auth"
Signature: sig2=:jdq0SqOwHdyHr9+r5jw3iYZH6aNGKijYp/EstF4RQTQdi5N5YYKrD+mCT1HA1nZDsi6nJKuHxUi/5Syp3rLWBA==:</code></pre>
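<p>To make the signing step concrete, here is a simplified sketch of the RFC 9421 signature base implied by the headers above. This is an illustrative reconstruction, not a conformant implementation: the nonce is omitted for brevity, and the Ed25519 signing of the resulting string (using the key pair registered earlier) is left to a crypto library.</p>

```python
# Build a (simplified) RFC 9421 signature base for the covered
# components shown above. The crawler signs this string with its
# Ed25519 private key to produce the Signature header value.

def signature_base(authority, signature_agent, sig_params):
    return "\n".join([
        f'"@authority": {authority}',
        f'"signature-agent": {signature_agent}',
        f'"@signature-params": {sig_params}',
    ])

sig_params = (
    '("@authority" "signature-agent")'
    ';created=1735689600'
    ';keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U"'
    ';alg="ed25519"'
    ';expires=1735693200'
    ';tag="web-bot-auth"'
)
base = signature_base("example.com",
                      '"https://signature-agent.example.com"',
                      sig_params)
```

<p>Cloudflare can then fetch the public key from the registered directory and verify the signature over this same base, which is what makes the crawler identity spoof-resistant.</p>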
            
    <div>
      <h3>Accessing paid content</h3>
      <a href="#accessing-paid-content">
        
      </a>
    </div>
    <p>Once a crawler is set up, determination of whether content requires payment can happen via two flows:</p>
    <div>
      <h4>Reactive (discovery-first)</h4>
      <a href="#reactive-discovery-first">
        
      </a>
    </div>
    <p>Should a crawler request a paid URL, Cloudflare returns an <code>HTTP 402 Payment Required</code> response, accompanied by a <code>crawler-price</code> header. This signals that payment is required for the requested resource.</p>
            <pre><code>HTTP 402 Payment Required
crawler-price: USD XX.XX</code></pre>
            <p>The crawler can then decide to retry the request, this time including a <code>crawler-exact-price</code> header to indicate agreement to pay the configured price.</p>
            <pre><code>GET /example.html
crawler-exact-price: USD XX.XX </code></pre>
            
    <div>
      <h4>Proactive (intent-first)</h4>
      <a href="#proactive-intent-first">
        
      </a>
    </div>
    <p>Alternatively, a crawler can preemptively include a <code>crawler-max-price</code> header in its initial request.</p>
            <pre><code>GET /example.html
crawler-max-price: USD XX.XX</code></pre>
            <p>If the price configured for a resource is equal to or below this specified limit, the request proceeds, and the content is served with a successful <code>HTTP 200 OK</code> response, confirming the charge:</p>
            <pre><code>HTTP 200 OK
crawler-charged: USD XX.XX 
server: cloudflare</code></pre>
            <p>If the amount in a <code>crawler-max-price</code> request is greater than the content owner’s configured price, only the configured price is charged. However, if the resource’s configured price exceeds the maximum price offered by the crawler, an <code>HTTP 402 Payment Required</code> response is returned, indicating the specified cost. Only a single price declaration header, <code>crawler-exact-price</code> or <code>crawler-max-price</code>, may be used per request.</p><p>The <code>crawler-exact-price</code> or <code>crawler-max-price</code> headers explicitly declare the crawler's willingness to pay. If all checks pass, the content is served, and the crawl event is logged. If any aspect of the request is invalid, the edge returns an <code>HTTP 402 Payment Required</code> response.</p>
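<p>The reactive and proactive flows above can be summarized in a small simulation. The header names come from the post, but the function below is an illustrative sketch of their semantics, not Cloudflare's edge logic; the prices are made up.</p>

```python
# Simulated pricing decision for a pay-per-crawl resource, following
# the header semantics described above (illustrative only).

def respond(configured_price, headers):
    """Return (status, response_headers) for a priced resource."""
    exact = headers.get("crawler-exact-price")
    max_price = headers.get("crawler-max-price")
    if exact is not None and max_price is not None:
        # Only one price declaration header may be used per request.
        return 402, {"crawler-price": f"USD {configured_price:.2f}"}
    if exact is not None and float(exact) == configured_price:
        return 200, {"crawler-charged": f"USD {configured_price:.2f}"}
    if max_price is not None and float(max_price) >= configured_price:
        # Only the configured price is charged, even if the cap is higher.
        return 200, {"crawler-charged": f"USD {configured_price:.2f}"}
    # No (acceptable) payment intent: quote the price via 402 instead.
    return 402, {"crawler-price": f"USD {configured_price:.2f}"}
```

<p>A crawler with no price header gets the 402 quote (reactive flow); a crawler sending a sufficient <code>crawler-max-price</code> is served immediately at the configured price (proactive flow).</p>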
    <div>
      <h3>Financial settlement</h3>
      <a href="#financial-settlement">
        
      </a>
    </div>
    <p>Crawler operators and content owners must configure pay per crawl payment details in their Cloudflare account. Billing events are recorded each time a crawler makes an authenticated request with payment intent and receives an HTTP 200-level response with a <code>crawler-charged</code> header. Cloudflare then aggregates all the events, charges the crawler, and distributes the earnings to the publisher.</p>
    <div>
      <h2>Content for crawlers today, agents tomorrow </h2>
      <a href="#content-for-crawlers-today-agents-tomorrow">
        
      </a>
    </div>
    <p>At its core, pay per crawl begins a technical shift in how content is controlled online. By providing creators with a robust, programmatic mechanism for valuing and controlling their digital assets, we empower them to continue creating the rich, diverse content that makes the Internet invaluable. </p><p>We expect pay per crawl to evolve significantly. It’s very early: we believe many different types of interactions and marketplaces can and should develop simultaneously. We are excited to support these various efforts and open standards.</p><p>For example, a publisher or news organization might want to charge different rates for different paths or content types. How do you introduce dynamic pricing based not only upon demand, but also how many users your AI application has? How do you introduce granular licenses at Internet scale, whether for training, <a href="https://www.cloudflare.com/learning/ai/inference-vs-training/">inference</a>, search, or something entirely new?</p><p>The true potential of pay per crawl may emerge in an <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/">agentic</a> world. What if an agentic paywall could operate entirely programmatically? Imagine asking your favorite deep research program to help you synthesize the latest cancer research or a legal brief, or just help you find the best restaurant in Soho, and then giving that agent a budget to spend to acquire the best and most relevant content. By anchoring our first solution on <b>HTTP response code 402</b>, we enable a future where intelligent agents can programmatically negotiate access to digital resources. </p>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Pay per crawl is currently in private beta. We’d love to hear from you if you’re either a crawler interested in paying to access content or a content creator interested in charging for access. You can reach out to us at <a href="http://www.cloudflare.com/paypercrawl-signup/"><u>http://www.cloudflare.com/paypercrawl-signup/</u></a> or contact your Account Executive if you’re an existing Enterprise customer.</p> ]]></content:encoded>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Bot Management]]></category>
            <guid isPermaLink="false">7AJ8tUOFDvk5mCTrDjBPDq</guid>
            <dc:creator>Will Allen</dc:creator>
            <dc:creator>Simon Newton</dc:creator>
        </item>
        <item>
            <title><![CDATA[From Googlebot to GPTBot: who’s crawling your site in 2025]]></title>
            <link>https://blog.cloudflare.com/from-googlebot-to-gptbot-whos-crawling-your-site-in-2025/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ From May 2024 to May 2025, crawler traffic rose 18%, with GPTBot growing 305% and Googlebot 96%. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/"><u>Web crawlers</u></a> are not new. The <a href="https://en.wikipedia.org/wiki/World_Wide_Web_Wanderer"><u>World Wide Web Wanderer</u></a> debuted in 1993, though the first web search engines to truly use crawlers and indexers were <a href="https://en.wikipedia.org/wiki/JumpStation"><u>JumpStation</u></a> and <a href="https://en.wikipedia.org/wiki/WebCrawler"><u>WebCrawler</u></a>. Crawlers are part of one of the backbones of the Internet’s success: search. Their main purpose has been to index the content of websites across the Internet so that those websites can appear in search engine results and direct users appropriately. In this blog post, we’re analyzing recent trends in web crawling, which now has a crucial and complex new role with the rise of AI.</p><p>Not all crawlers are the same. Bots, automated scripts that perform tasks across the Internet, come in many forms: those considered non-threatening or “<a href="https://www.cloudflare.com/learning/bots/how-to-manage-good-bots/"><u>good</u></a>” (such as API clients, search indexing bots like Googlebot, or health checkers) and those considered malicious or “<a href="https://www.cloudflare.com/learning/bots/how-to-manage-good-bots/"><u>bad</u></a>” (like those used for credential stuffing, spam, or <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scraping content without permission</a>). In fact, around 30% of global web traffic today, according to <a href="https://radar.cloudflare.com/traffic?dateRange=52w#bot-vs-human"><u>Cloudflare Radar data</u></a>, comes from bots, and even exceeds human Internet traffic in some locations.</p><p>A new category, AI crawlers, has emerged in recent years. 
These bots collect data from across the web to train AI models, improving tools and experiences, but also <a href="https://en.wikipedia.org/wiki/Artificial_intelligence_and_copyright"><u>raising issues around content rights</u></a>, unauthorized use, and infrastructure overload. We aimed to confirm the growth of both search and AI crawlers, examine specific AI crawlers, and understand broader crawler usage.</p><p>This is increasingly relevant with the rapid adoption of AI, growing content rights concerns, and data privacy discussions. Some sites and creators are looking to <a href="https://www.cloudflare.com/learning/ai/how-to-block-ai-crawlers/">limit or block AI crawlers</a> using tools like <code>robots.txt</code> or <a href="https://blog.cloudflare.com/bringing-ai-to-cloudflare/#enabling-dynamic-updates-for-the-ai-bot-rule"><u>firewall rules</u></a>. Others, like Dutch indie maker and entrepreneur <a href="https://x.com/levelsio/status/1916626339924267319"><u>Pieter Levels</u></a>, have embraced them: “<i>I’m 100% fine with AI crawlers… very important to rank in LLMs [large language models]</i>”.</p><p>It’s important to note that crawlers serve different purposes. For example, the <code>facebookexternalhit</code> bot is not included in this analysis, as it is used by Facebook to fetch page content when generating previews for shared links. However, within this post, we are only focusing on AI and search crawlers that are indexing or scraping website content.</p>
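<p>For site owners weighing the block-or-allow choice, Python's standard <code>urllib.robotparser</code> module shows how a compliant crawler evaluates <code>robots.txt</code> rules. The sample policy below is purely illustrative (blocking one AI crawler site-wide while allowing everything else), not a recommendation.</p>

```python
from urllib.robotparser import RobotFileParser

# Illustrative rules: block GPTBot site-wide, allow all other agents.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler consults these rules before fetching a URL.
print(parser.can_fetch("GPTBot", "https://example.com/post"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/post"))  # True
```

<p>As the post notes, honoring these rules is voluntary, which is exactly why enforcement tooling matters.</p>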
    <div>
      <h2>AI-only crawlers perspective</h2>
      <a href="#ai-only-crawlers-perspective">
        
      </a>
    </div>
    <p>Let’s start with an AI-only crawler perspective that we currently have on <a href="https://radar.cloudflare.com/explorer?dataSet=ai.bots&amp;dt=12w"><u>Cloudflare Radar</u></a>, focused only on crawlers advertised as AI-related. To identify them, we’re using here a <a href="https://github.com/ai-robots-txt/ai.robots.txt/blob/main/robots.json"><u>list</u></a> derived from an open-source project that helps website owners manage and control access to AI crawlers — especially those used to train large language models (LLMs). It also provides guidance on what to include in <code>robots.txt</code><i> </i>files (more on that below). The data shown below is based on matching those crawler names with user-agent strings in HTTP requests. (Further details, including one exception, about this method can be found at the end of the blog post.)</p><p>The AI crawler landscape saw a significant shift between May 2024 and May 2025, with <code>GPTBot</code> (from OpenAI) emerging as the dominant force, surging from 5% to 30% share, and <code>Meta-ExternalAgent</code> (from Meta) making a strong new entry at 19%. This growth came at the expense of former leader <code>Bytespider</code>, which plummeted from 42% to 7%, as well as other AI crawlers like <code>ClaudeBot</code> and <code>Amazonbot</code>, which also saw declines. Our data clearly indicates a reordering of top AI crawlers, highlighting the increasing prominence of OpenAI and Meta in this category.</p><p><b>May 2024</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3W6ZVHbwe8r5R5pYrZE7Aw/20a6ef0f77c015ae932848861c04b556/image6.png" />
          </figure><p><b>May 2025</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5joaVYfpzHZe7K8VEfCZCV/729f22a39f51d54b80cae35dd38e42b4/image3.png" />
          </figure><table><tr><td><p><b>Rank</b></p></td><td><p><b>Bot Name</b></p></td><td><p><b>Share (May 2024)</b></p></td><td><p><b>Rank</b></p></td><td><p><b>Bot Name</b></p></td><td><p><b>Share (May 2025)</b></p></td></tr><tr><td><p>1</p></td><td><p>Bytespider</p></td><td><p>42%</p></td><td><p>1</p></td><td><p>GPTBot</p></td><td><p>30%</p></td></tr><tr><td><p>2</p></td><td><p>ClaudeBot</p></td><td><p>27%</p></td><td><p>2</p></td><td><p>ClaudeBot</p></td><td><p>21%</p></td></tr><tr><td><p>3</p></td><td><p>Amazonbot</p></td><td><p>21%</p></td><td><p>3</p></td><td><p>Meta-ExternalAgent</p></td><td><p>19%</p></td></tr><tr><td><p>4</p></td><td><p>GPTBot</p></td><td><p>5%</p></td><td><p>4</p></td><td><p>Amazonbot</p></td><td><p>11%</p></td></tr><tr><td><p>5</p></td><td><p>Applebot</p></td><td><p>4.1%</p></td><td><p>5</p></td><td><p>Bytespider</p></td><td><p>7.2%</p></td></tr></table><p>For additional context, the list below includes further information about the bots with higher crawling shares seen above. This information comes from the same open-source <a href="https://github.com/ai-robots-txt/ai.robots.txt/blob/main/robots.json"><u>list</u></a> mentioned above and from publications by companies like <a href="https://platform.openai.com/docs/bots"><u>OpenAI</u></a>, which explain how their crawlers are used. 
</p><ul><li><p><b>GPTBot</b> – OpenAI’s crawler used to improve and train large language models like ChatGPT.</p></li><li><p><b>ClaudeBot</b> – Anthropic’s crawler for training and updating the Claude AI assistant.</p></li><li><p><b>Meta-ExternalAgent</b> – Meta’s bot likely used for collecting data to train or fine-tune LLMs.</p></li><li><p><b>Amazonbot</b> – Amazon’s crawler that gathers data for its search and AI applications.</p></li><li><p><b>Bytespider</b> – ByteDance’s AI data collector, often linked to training models like Ernie or TikTok-related AI.</p></li><li><p><b>Applebot</b> – Apple’s web crawler primarily for Siri and Spotlight search, possibly used in AI development.</p></li><li><p><b>OAI-SearchBot</b> – OpenAI’s search-focused crawler, likely used for retrieving real-time web info for models.</p></li><li><p><b>ChatGPT-User</b> – Represents API-based or browser usage of ChatGPT in connection with user interactions.</p></li><li><p><b>PerplexityBot</b> – Crawler from Perplexity.ai, which powers their AI answer engine using real-time web data.</p></li></ul><p>Webmasters can inform crawler operators of whether they want these bots and crawlers to access their content by setting out rules in a file called <a href="https://www.cloudflare.com/learning/bots/what-is-robots-txt/"><code><u>robots.txt</u></code></a>, which tells crawlers what pages they should or shouldn’t access. <a href="https://blog.cloudflare.com/ai-audit-enforcing-robots-txt/"><u>As we’ve seen recently</u></a>, crawlers honoring your <code>robots.txt</code> policies is voluntary, but Cloudflare announced tools like <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>AI Audit</u></a> to help content creators to enforce it.</p><p>Now, as we’ve seen, the landscape of web crawling is evolving rapidly, driven by the merging roles of search engines and AI. 
AI is now deeply integrated into search, seen in Google’s AI Overviews and AI Mode, but also in social media platforms, like Meta AI on Instagram. So, let's broaden our analysis to include these wider AI-driven crawling activities.</p>
    <div>
      <h2>General AI and search crawling growth: +18%</h2>
      <a href="#general-ai-and-search-crawling-growth-18">
        
      </a>
    </div>
    <p>A broader view reveals the growth of crawling traffic from both search and AI crawlers over the first few months of 2025. To remove customer growth bias, we'll analyze trends using a fixed set of customers from specific weeks (a method we’ve used in our <a href="http://radar.cloudflare.com/year-in-review/"><u>Cloudflare Radar Year in Review</u></a>): the first week of May 2024, a week in November 2024, and the first week of April 2025. </p><p>Using that method, we found that AI and search crawler traffic grew by 18% from May 2024 to May 2025 (comparing full-month periods). The increase was even higher, at 48%, when including new Cloudflare customers added during that time. Peak AI and search crawling traffic occurred in April 2025, with a 32% increase compared to May 2024. This confirms that crawling traffic has clearly risen over the past year, but also that growth is not always constant. Google remains the dominant player, and its share is growing too, as we’ll see in the next section.</p><p>As the next chart shows, crawling traffic increased sharply in March and April 2025 and remained high, though slightly lower, in May.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/hePknXM0crXK4jX5e7LxZ/0956ac5024915734a9c0f20c8f15bc16/image4.png" />
          </figure><p>The patterns on the above crawling chart also seem to reflect broader seasonal patterns and general human Internet traffic patterns. In 2024, traffic dropped during the summer in the Northern Hemisphere, with August and September being the least active months. And like overall Internet traffic, it then rose in November, when people are typically more online due to shopping and seasonal habits, as we've seen in <a href="https://blog.cloudflare.com/from-deals-to-ddos-exploring-cyber-week-2024-internet-trends/"><u>past analyses</u></a>. </p>
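<p>The fixed-cohort method used above can be sketched in a few lines: growth is computed only over customers present in every comparison period, so sign-ups during the year do not inflate the trend. A minimal sketch, where all site names and request counts are made up for illustration:</p>

```python
# Sketch of the fixed-cohort method: growth is computed only over customers
# present in every comparison period, so new sign-ups do not inflate the
# trend. All site names and request counts are hypothetical.
requests_by_period = {
    "2024-05": {"siteA": 1_000, "siteB": 500, "siteC": 200},
    "2024-11": {"siteA": 1_100, "siteB": 650, "siteD": 900},
    "2025-05": {"siteA": 1_300, "siteB": 700, "siteD": 1_500},
}

# The fixed cohort: customers observed in every period.
cohort = set.intersection(*(set(d) for d in requests_by_period.values()))

def cohort_total(period: str) -> int:
    return sum(requests_by_period[period][c] for c in cohort)

base, latest = cohort_total("2024-05"), cohort_total("2025-05")
growth_pct = 100 * (latest - base) / base
print(f"cohort={sorted(cohort)}, growth={growth_pct:.0f}%")
```

<p>Comparing totals that include <code>siteD</code> (a new customer) would overstate growth, which is why the 48% figure including new customers is reported separately above.</p>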
    <div>
      <h2>Googlebot crawling grew 96% in one year</h2>
      <a href="#googlebot-crawling-grew-96-in-one-year">
        
      </a>
    </div>
    <p><a href="https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers"><code><u>Googlebot</u></code></a>, which indexes content for Google Search, was clearly the top crawler throughout the period and showed strong growth, up 96% from May 2024 to May 2025, reflecting increased crawling by Google. Crawling traffic peaked in April 2025, reaching 145% higher than in May 2024. It's also important to mention that Google made changes to its search and launched <a href="https://ahrefs.com/blog/google-ai-overviews/"><u>AI Overviews</u></a> in its search engine during this time — first in the US in May 2024, then in more countries later.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1qFVGagpgYIti7p741j8uW/77dc4bc61bec86faa6b80b293997dffd/image1.png" />
          </figure><p>Two trends stand out when looking at daily data for Google-related crawlers, as shown in the graph below. First, <a href="https://developers.google.com/search/docs/crawling-indexing/google-common-crawlers"><code><u>Googlebot</u></code></a> and the more recent <code>GoogleOther</code> (a <a href="https://searchengineland.com/google-launches-new-googlebot-named-googleother-395827"><u>web crawler from 2023</u></a> for “research and development”) account for most of Google’s crawling activity. Second, there were two visible drops in crawling traffic: one on December 14, 2024 (around a Google Search <a href="https://status.search.google.com/incidents/V9nDKuo6nWKh2ThBALgA#:~:text=Incident%20began%20at%202024%2D12,Time"><u>update</u></a>), and another from May 20 to May 28, 2025. That May 20 drop occurred around the same time as the rollout of AI Mode on Google Search in the US, although the timing may be coincidental.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16kB3kDeprY3LMetEDPS10/8f2bafc7568579377624d6c0aaeb1751/image5.png" />
          </figure>
    <div>
      <h2>Breakdown of top 20 AI and search web crawlers </h2>
      <a href="#breakdown-of-top-20-ai-and-search-web-crawlers">
        
      </a>
    </div>
    <p>Ranking crawlers by their share of total requests gives a clearer picture of which bots are gaining or losing ground, especially among those focused on search and AI. The table below shows a clear trend: some AI bots have grown rapidly since last year (with growth beginning even earlier), while many traditional search crawlers have remained flat or lost share (as in the case of Bing and its <code>Bingbot</code> crawler). The main exception is <code>Googlebot</code>.</p><p>The table shows each crawler’s percentage share of all crawling traffic generated by this specific cohort of over 30 AI &amp; search crawlers observed by Cloudflare in May 2024 and May 2025, along with the change in percentage points and the growth or decline in raw request volume. Crawlers are ranked by their May 2025 share. Key shifts include <code>GPTBot</code> rising sharply (+305%) and <code>Bytespider</code> dropping dramatically (-85%).</p>
<div><table><thead>
  <tr>
    <th><span>Rank</span></th>
    <th><span>Bot name</span></th>
    <th><span>Share May 2024</span></th>
    <th><span>Share May 2025</span></th>
    <th><span>Δ percentage-point change</span></th>
    <th><span>Raw requests growth (May 2024 to May 2025)</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>1</span></td>
    <td><span>Googlebot</span></td>
    <td><span>30%</span></td>
    <td><span>50%</span></td>
    <td><span>+20 pp</span></td>
    <td><span>96%</span></td>
  </tr>
  <tr>
    <td><span>2</span></td>
    <td><span>Bingbot</span></td>
    <td><span>10%</span></td>
    <td><span>8.7%</span></td>
    <td><span>-1.3 pp</span></td>
    <td><span>2%</span></td>
  </tr>
  <tr>
    <td><span>3</span></td>
    <td><span>GPTBot</span></td>
    <td><span>2.2%</span></td>
    <td><span>7.7%</span></td>
    <td><span>+5.5 pp</span></td>
    <td><span>305%</span></td>
  </tr>
  <tr>
    <td><span>4</span></td>
    <td><span>ClaudeBot</span></td>
    <td><span>11.7%</span></td>
    <td><span>5.4%</span></td>
    <td><span>-6.3 pp</span></td>
    <td><span>-46%</span></td>
  </tr>
  <tr>
    <td><span>5</span></td>
    <td><span>GoogleOther</span></td>
    <td><span>4.4%</span></td>
    <td><span>4.3%</span></td>
    <td><span>-0.1 pp</span></td>
    <td><span>14%</span></td>
  </tr>
  <tr>
    <td><span>6</span></td>
    <td><span>Amazonbot</span></td>
    <td><span>7.6%</span></td>
    <td><span>4.2%</span></td>
    <td><span>-3.4 pp</span></td>
    <td><span>-35%</span></td>
  </tr>
  <tr>
    <td><span>7</span></td>
    <td><span>Googlebot-Image</span></td>
    <td><span>4.5%</span></td>
    <td><span>3.3%</span></td>
    <td><span>-1.2 pp</span></td>
    <td><span>-13%</span></td>
  </tr>
  <tr>
    <td><span>8</span></td>
    <td><span>Bytespider</span></td>
    <td><span>22.8%</span></td>
    <td><span>2.9%</span></td>
    <td><span>-19.8 pp</span></td>
    <td><span>-85%</span></td>
  </tr>
  <tr>
    <td><span>9</span></td>
    <td><span>Yandex</span></td>
    <td><span>2.8%</span></td>
    <td><span>2.2%</span></td>
    <td><span>-0.7 pp</span></td>
    <td><span>-10%</span></td>
  </tr>
  <tr>
    <td><span>10</span></td>
    <td><span>ChatGPT-User</span></td>
    <td><span>0.1%</span></td>
    <td><span>1.3%</span></td>
    <td><span>+1.2 pp</span></td>
    <td><span>2,825%</span></td>
  </tr>
  <tr>
    <td><span>11</span></td>
    <td><span>Applebot</span></td>
    <td><span>1.9%</span></td>
    <td><span>1.2%</span></td>
    <td><span>-0.7 pp</span></td>
    <td><span>-26%</span></td>
  </tr>
  <tr>
    <td><span>12</span></td>
    <td><span>Timpibot</span></td>
    <td><span>0.3%</span></td>
    <td><span>0.6%</span></td>
    <td><span>+0.3 pp</span></td>
    <td><span>133%</span></td>
  </tr>
  <tr>
    <td><span>13</span></td>
    <td><span>Baiduspider</span></td>
    <td><span>0.5%</span></td>
    <td><span>0.4%</span></td>
    <td><span>-0.1 pp</span></td>
    <td><span>7%</span></td>
  </tr>
  <tr>
    <td><span>14</span></td>
    <td><span>PerplexityBot</span></td>
    <td><span>&lt;0.01%</span></td>
    <td><span>0.2%</span></td>
    <td><span>+0.2 pp</span></td>
    <td><span>157,490%</span></td>
  </tr>
  <tr>
    <td><span>15</span></td>
    <td><span>DuckDuckBot</span></td>
    <td><span>0.2%</span></td>
    <td><span>0.1%</span></td>
    <td><span>-0.1 pp</span></td>
    <td><span>-16%</span></td>
  </tr>
  <tr>
    <td><span>16</span></td>
    <td><span>SeznamBot</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.1%</span></td>
    <td><span>0 pp</span></td>
    <td><span>2%</span></td>
  </tr>
  <tr>
    <td><span>17</span></td>
    <td><span>Yeti</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.1%</span></td>
    <td><span>0 pp</span></td>
    <td><span>47%</span></td>
  </tr>
  <tr>
    <td><span>18</span></td>
    <td><span>coccocbot</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.1%</span></td>
    <td><span>0 pp</span></td>
    <td><span>-3%</span></td>
  </tr>
  <tr>
    <td><span>19</span></td>
    <td><span>Sogou</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.1%</span></td>
    <td><span>0 pp</span></td>
    <td><span>-22%</span></td>
  </tr>
  <tr>
    <td><span>20</span></td>
    <td><span>Yahoo! Slurp</span></td>
    <td><span>0.1%</span></td>
    <td><span>0.0%</span></td>
    <td><span>-0.1 pp</span></td>
    <td><span>-8%</span></td>
  </tr>
</tbody></table></div><p>Based on this data, two major shifts in web crawling occurred between May 2024 and May 2025:</p><p><b>1. Some AI crawlers rose sharply.
</b><code>GPTBot</code> (from OpenAI) increased its share from 2.2% to 7.7% (+5.5 pp), with a 305% rise in requests. This underscores the data demand for training large language models like ChatGPT. <code>GPTBot</code> jumped from #9 in May 2024 to #3 in May 2025.</p><p>Another OpenAI crawler, <code>ChatGPT-User</code>, saw requests surge by 2,825%, reaching a 1.3% share. This reflects a large rise in ChatGPT user activity or API-based interactions that involve accessing web content. <code>PerplexityBot</code> (from Perplexity.ai), despite a small 0.2% share, recorded the highest growth rate: a staggering 157,490% increase in raw requests.</p><p>Meanwhile, some AI crawlers saw steep declines. <code>ClaudeBot</code> (Anthropic) fell from 11.7% to 5.4% of total traffic and dropped 46% in requests. <code>Bytespider</code> plummeted 85% in request volume, falling from #2 to #8 in crawler share (now at just 2.9%).</p><p>Both <code>Amazonbot</code> and <code>Applebot</code>, also considered AI crawlers, saw decreases in share and in raw requests (–35% and –26%, respectively).</p><p><b>2. Google’s dominance expanded.
</b><code>Googlebot</code>’s share rose from 30% to 50%, supporting search indexing but potentially also serving AI-related purposes (such as the new AI Overviews in Google Search). <code>GoogleOther</code> (the <a href="https://searchengineland.com/google-launches-new-googlebot-named-googleother-395827"><u>crawler introduced in 2023</u></a>) also increased its crawling traffic, by 14%. Other Google crawlers not in the top 20, like <code>Googlebot-News</code>, also grew significantly (+71% in requests). There’s a clear trend of growth in these Google-related web crawlers at a time when the company is investing heavily in combining AI with search.</p><p>Also in the search category, <code>Bingbot</code>’s share (from Microsoft) declined slightly from 10% to 8.7% (-1.3 pp), though its raw requests still grew modestly by 2%.</p><p>These trends show that web crawling is increasingly dominated by bots from Google and OpenAI, reflecting clear shifts over the course of a year. Google also appears to be adapting how it collects data to support both traditional search and AI-driven features.</p><p>Also worth noting is <code>FriendlyCrawler</code>, which no longer appears in the top 20 list as of May 2025 (now ranked #35). It was #14 in May 2024 with a 0.2% share, but saw a nearly 100% drop in requests by May 2025. This bot is known to index and analyze website content, although its owner and <a href="https://imho.alex-kunz.com/2024/01/25/an-update-on-friendly-crawler/"><u>purpose</u></a> remain unclear. Typically, crawlers like this are used for improving search results, market research, or analytics.</p>
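<p>The table’s metrics combine two views that can move in opposite directions: share of cohort traffic (and its percentage-point change) versus raw request growth. A minimal sketch with hypothetical request counts shows how a crawler can grow in raw requests while still losing share, as <code>Bingbot</code> did:</p>

```python
# Hypothetical request counts for three crawlers in two months.
may_2024 = {"Googlebot": 300_000, "Bingbot": 100_000, "Bytespider": 228_000}
may_2025 = {"Googlebot": 588_000, "Bingbot": 102_000, "Bytespider": 34_000}

def shares(counts):
    """Each crawler's percentage share of the cohort's total traffic."""
    total = sum(counts.values())
    return {bot: 100 * n / total for bot, n in counts.items()}

s24, s25 = shares(may_2024), shares(may_2025)
for bot in may_2024:
    pp_change = s25[bot] - s24[bot]          # percentage-point change
    raw_growth = 100 * (may_2025[bot] - may_2024[bot]) / may_2024[bot]
    print(f"{bot}: {s24[bot]:.1f}% -> {s25[bot]:.1f}% "
          f"({pp_change:+.1f} pp, {raw_growth:+.0f}% requests)")
```

<p>Because shares are relative to the whole cohort, a modest absolute increase can still be a relative decline when a competitor grows much faster.</p>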
    <div>
      <h2>robots.txt &amp; AI bots: GPTBot leads twice</h2>
      <a href="#robots-txt-ai-bots-gptbot-leads-twice">
        
      </a>
    </div>
    <p>Recent data from June 6, 2025, from <a href="https://radar.cloudflare.com/ai-insights?dateStart=2025-05-30&amp;dateEnd=2025-06-06"><u>Cloudflare Radar</u></a> shows that out of 3,816 domains (from the <a href="https://radar.cloudflare.com/domains"><u>top 10,000</u></a>) where we were able to find a<i> robots.txt</i> file, 546 (about 14%) had “allow” or “disallow” (fully or partially) directives targeting AI bots in particular.</p><p>This leaves many site owners in a gray area because it’s not always clear how effective <i>robots.txt</i> is in managing AI crawlers. Some site owners may not think to use it specifically for AI bots, while others might be unsure whether these bots even respect <i>robots.txt </i>rules, especially newer or less transparent crawlers. In other cases, sites use partial rules to fine-tune access, trying to balance visibility and protection without fully opting in or out.</p><p>The “disallow” rules appear far more often than “allow” rules. The most frequently blocked bot was <code>GPTBot</code>, disallowed by 312 domains (250 fully, 62 partially), followed by <code>CCBot</code> and <code>Google-Extended</code>, as shown in the following graph.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6CgnH5GZNCIgUAZEeMWTVK/fe608135d5376e936f0ac503e3e9564c/image2.png" />
          </figure><p>Although <code>GPTBot</code> was the most blocked, it was also the most explicitly allowed, with 61 domains granting access (18 fully, 43 partially). Still, very few sites explicitly allow AI bots, and when they do, it’s usually for limited sections. Note that bots not listed in a site’s robots.txt are effectively allowed by default.</p><p>As AI crawling increases, more websites are moving from passive signals like <i>robots.txt</i> to active protections like <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/"><u>Web Application Firewalls</u></a>. The ecosystem is shifting, with a growing focus on enforceable controls.</p><p><i>Note: When we analyze crawler traffic, we compare user-agent tokens found in robots.txt files (like those for AI crawlers) with the actual user-agent strings in HTTP requests. It's important to note that some robots.txt tokens, such as Google-Extended, aren't user-agent substrings. As described in </i><a href="https://www.rfc-editor.org/rfc/rfc9309.html#name-the-user-agent-line"><i><u>RFC 9309</u></i></a><i>, one goal of these tokens may be to signal the purpose of the crawler. For instance, Google uses Google-Extended in robots.txt to determine whether your content can be used for AI training, but the traffic itself still comes from standard Google user-agents like Googlebot. Because of this, not every robots.txt entry will have a direct match in HTTP request logs.</i></p>
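<p>The full/partial allow and disallow directives discussed above can be checked programmatically. A minimal sketch using Python’s standard <code>urllib.robotparser</code> (the file contents, bot mix, and paths are hypothetical):</p>

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: GPTBot fully disallowed, PerplexityBot
# partially allowed, and everything else allowed by default.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Allow: /blog/
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/blog/post"))         # False: fully blocked
print(rp.can_fetch("PerplexityBot", "https://example.com/blog/post"))  # True: within the allowed section
print(rp.can_fetch("PerplexityBot", "https://example.com/private"))    # False: outside it
print(rp.can_fetch("Googlebot", "https://example.com/anything"))       # True: falls back to the * record
```

<p>Whether a crawler actually consults these rules is, as noted above, voluntary; this only expresses the site owner’s intent.</p>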
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As AI crawlers reshape the Internet, websites face both new challenges and new opportunities in managing their online presence.</p><p>This analysis highlights the growing impact of AI on web crawling, showing a clear shift from traditional search indexing to data collection for training AI models. The detailed statistics, such as Googlebot’s continued growth and the rapid rise of AI-specific crawlers, offer context for understanding how this space is evolving and what it means for the future of web content access.</p><p>The trend toward stronger, enforceable blocking methods, something <a href="https://blog.cloudflare.com/cloudflare-ai-audit-control-ai-content-crawlers/"><u>Cloudflare has also invested in</u></a>, signals a key shift in how websites may control their interactions with AI systems going forward.</p>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Radar]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Bots]]></category>
            <guid isPermaLink="false">7KJiiS1zdIyBiVgoT6SgKf</guid>
            <dc:creator>João Tomé</dc:creator>
            <dc:creator>Jorge Pacheco</dc:creator>
            <dc:creator>Carlos Azevedo</dc:creator>
        </item>
        <item>
            <title><![CDATA[Message Signatures are now part of our Verified Bots Program, simplifying bot authentication]]></title>
            <link>https://blog.cloudflare.com/verified-bots-with-cryptography/</link>
            <pubDate>Tue, 01 Jul 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[ Bots can start authenticating to Cloudflare using public key cryptography, preventing them from being spoofed and allowing origins to have confidence in their identity. ]]></description>
            <content:encoded><![CDATA[ <p>As a site owner, how do you know which bots to allow on your site, and which you’d like to block? Existing identification methods rely on a combination of IP address range (which may be shared by other services, or change over time) and user-agent header (easily spoofable). These have limitations and deficiencies. In our <a href="https://blog.cloudflare.com/web-bot-auth/"><u>last blog post</u></a>, we proposed using HTTP Message Signatures: a way for developers of bots, agents, and crawlers to clearly identify themselves by cryptographically signing requests originating from their service. </p><p>Since we published the blog post on Message Signatures and the <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>IETF draft for Web Bot Auth</u></a> in May 2025, we’ve seen significant interest around implementing and deploying Message Signatures at scale. It’s clear that well-intentioned bot owners want a clear way to identify their bots to site owners, and site owners want a clear way to identify and manage bot traffic. Both parties seem to agree that deploying cryptography for the purposes of authentication is the right solution.     </p><p>Today, we’re announcing that we’re integrating HTTP Message Signatures directly into our <b>Verified Bots Program</b>. This announcement has two main parts: (1) for bots, crawlers, and agents, we’re simplifying enrollment into the Verified Bots program for those who sign requests using Message Signatures, and (2) we’re encouraging <i>all bot operators moving forward </i>to use Message Signatures over existing verification mechanisms. 
Because Verified Bots are already authenticated, our Bot Management does not challenge them to prove they are bots.</p><p>For site owners, no additional action is required – Cloudflare will automatically validate signatures at our edge, and if validation succeeds, that traffic will be marked as verified so that site owners can use the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/categories/"><u>verified bot fields</u></a> to create Bot Management and <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF rules</u></a> based on it.</p><p>This isn't just about simplifying things for bot operators — it’s about giving website owners unparalleled accuracy in identifying trusted bot traffic, cutting down on the overhead for cryptographic verification, and fundamentally transforming how we manage authentication across the Cloudflare network.</p>
    <div>
      <h2>Become a Verified Bot with Message Signatures</h2>
      <a href="#become-a-verified-bot-with-message-signatures">
        
      </a>
    </div>
    <p>Cloudflare’s existing <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/"><u>Verified Bots program</u></a> is for bots that are transparent about who they are and what they do, like indexing sites for search or scanning for security vulnerabilities. You can see a list of these verified bots in <a href="https://radar.cloudflare.com/bots#verified-bots"><u>Cloudflare Radar</u></a>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2lMYno3QOwtwfTDDgeqFx8/c69088229dcf9fc08f5a76ce7e0a0354/1.png" />
          </figure><p><sup><i>A preview of the Verified Bots page on Cloudflare Radar. </i></sup></p><p>In the past, when you <a href="https://dash.cloudflare.com/?to=/:account/configurations/verified-bots"><u>applied</u></a> to be a verified bot, we asked for IP address ranges or reverse DNS names so that we could verify your identity. This required some manual steps like checking that the IP address range is valid and is associated with the appropriate <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/"><u>ASN</u></a>. </p><p>With the integration of Message Signatures, we’re aiming to streamline applications into our Verified Bot program. Bots applying with well-formed Message Signatures will be prioritized and approved more quickly! </p>
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>In order to make generating Message Signatures as easy as possible, Cloudflare is providing two open source libraries: a <a href="https://crates.io/crates/web-bot-auth"><u>web-bot-auth library in Rust</u></a>, and a <a href="https://www.npmjs.com/package/web-bot-auth"><u>web-bot-auth npm package in TypeScript</u></a>. If you’re working on a different implementation, <a href="https://www.cloudflare.com/lp/verified-bots/"><u>let us know</u></a> – we’d love to add it to our <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/"><u>developer docs</u></a>!</p><p>At a high level, signing your requests with web bot auth consists of the following steps: </p><ul><li><p>Generate a valid signing key. See the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/#1-generate-a-valid-signing-key"><u>Signing Key section</u></a> for step-by-step instructions.</p></li><li><p>Host a JSON web key set containing your public key under <code>/.well-known/http-message-signature-directory</code> of your website.</p></li><li><p>Sign responses served from that directory URL using a Web Bot Auth library (one signature for each key in the set) to prove you control the keys. See the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/web-bot-auth/#2-host-a-key-directory"><u>Hosting section</u></a> for step-by-step instructions.</p></li><li><p>Register that URL with us, using our Verified Bots form. This can be done directly in your Cloudflare account. See <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/overview/"><u>our documentation</u></a>.</p></li><li><p>Sign requests using a Web Bot Auth library. </p></li></ul><p>
As an example, <a href="https://radar.cloudflare.com/scan"><u>Cloudflare Radar's URL Scanner</u></a> lets you scan any URL and get a publicly shareable report with security, performance, technology, and network information. Here’s an example of what a well-formed signature looks like for requests coming from URL Scanner:</p>
            <pre><code>GET /path/to/resource HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
Signature-Agent: "https://web-bot-auth-directory.radar-cfdata-org.workers.dev"
Signature-Input: sig=("@authority" "signature-agent");\
             	 created=1700000000;\
             	 expires=1700011111;\
             	 keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U";\
             	 tag="web-bot-auth"
Signature: sig=jdq0SqOwHdyHr9+r5jw3iYZH6aNGKijYp/EstF4RQTQdi5N5YYKrD+mCT1HA1nZDsi6nJKuHxUi/5Syp3rLWBA==:</code></pre>
            <p>Since we’ve already registered URLScanner as a Verified Bot, Cloudflare will now automatically verify that the signature in the <code>Signature</code> header matches the request — more on that later.</p>
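<p>The <code>keyid</code> in the example above is derived from the bot’s public key rather than chosen freely: under the Web Bot Auth draft, keys in the directory are identified by their JWK thumbprint (RFC 7638), the base64url-encoded SHA-256 hash of the key’s canonical JSON form. A sketch of that derivation for an Ed25519 key, using a made-up 32-byte public key rather than any real bot’s key:</p>

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWKs and thumbprints."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(ed25519_public_key: bytes) -> str:
    # RFC 7638: keep only the required JWK members, sort them
    # lexicographically, serialize with no whitespace, then SHA-256.
    jwk = {"crv": "Ed25519", "kty": "OKP", "x": b64url(ed25519_public_key)}
    canonical = json.dumps(jwk, separators=(",", ":"), sort_keys=True)
    return b64url(hashlib.sha256(canonical.encode()).digest())

# Placeholder key bytes: a real deployment would use the 32 raw bytes
# of the bot's Ed25519 verifying key.
keyid = jwk_thumbprint(bytes(range(32)))
print(keyid)  # a 43-character base64url string, like the keyid above
```

<p>Because the thumbprint is deterministic, any verifier can recompute it from the published key directory and match it against the <code>keyid</code> in <code>Signature-Input</code>.</p>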
    <div>
      <h2>Register your bot</h2>
      <a href="#register-your-bot">
        
      </a>
    </div>
    <p>Access the <a href="https://dash.cloudflare.com/?to=/:account/configurations/verified-bots"><u>Verified Bots submission form</u></a> on your account. If that link does not immediately take you there, go to <i>your Cloudflare account</i> →  <i>Account Home</i>  → <i>the three dots next to your account name</i>  → <i>Configurations</i> → <i>Verified Bots.</i></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73yQcvLmiVDe19HJXYvBIc/ca2bdb2bb81addc29583568087c2ccc2/3.png" />
          </figure><p>If you do not have a Cloudflare account, you can <a href="https://dash.cloudflare.com/sign-up"><u>sign up for a free one</u></a>.</p><p>For the verification method, select "Request Signature", then enter the URL of your key directory in Validation Instructions. Specifying the User-Agent values is optional if you’re submitting a Request Signature bot. </p><p>Once your application has gone through our (now shortened) review process, you don’t need to take any further action.</p>
    <div>
      <h2>Message Signature verification for origins</h2>
      <a href="#message-signature-verification-for-origins">
        
      </a>
    </div>
    <p>Starting today, Cloudflare is ramping up verification of <a href="https://datatracker.ietf.org/doc/html/draft-meunier-web-bot-auth-architecture"><u>cryptographic signatures provided by automated crawlers and bots</u></a>. This is currently available for all Free and Pro plans, and as we continue to test and validate at scale, will be released to all Business and Enterprise plans. This means that as time passes, the number of unauthenticated web crawlers should diminish, ensuring most bot traffic is authenticated before it reaches your website’s servers, helping to prevent spoofing attacks. </p><p>At a high level, signature verification works like this: </p><ol><li><p>A bot or agent sends a request to a website behind Cloudflare.</p></li><li><p>Cloudflare’s Message Signature verification service checks for the <code>Signature</code>, <code>Signature-Input</code>, and <code>Signature-Agent</code> headers.</p></li><li><p>It checks that the incoming request presents a <code>keyid</code> parameter in its <code>Signature-Input</code> header that points to a key we already know.</p></li><li><p>It looks at the <code>expires</code> parameter in the incoming bot request. If the current time is after expiration, verification fails. This guards against replay attacks, preventing malicious agents from trying to pass as a bot by retrying messages they captured in the past.</p></li><li><p>It checks that the request specifies a <code>tag</code> parameter of <code>web-bot-auth</code>, signaling the intent that the message be handled using web bot authentication specifically.</p></li><li><p>It looks at all the <a href="https://www.rfc-editor.org/rfc/rfc9421#covered-components"><u>components</u></a> chosen in the <code>Signature-Input</code> header, and constructs <a href="https://www.rfc-editor.org/rfc/rfc9421#name-creating-the-signature-base"><u>a signature base</u></a> from it. 
</p></li><li><p>If all pre-flight checks pass, Cloudflare attempts to verify the signature base against the value in the <code>Signature</code> field using an <a href="https://www.rfc-editor.org/rfc/rfc9421#name-eddsa-using-curve-edwards25"><u>Ed25519 verification algorithm</u></a> and the key identified by <code>keyid</code>.</p></li><li><p>Verified Bots and other systems at Cloudflare use a successful verification as proof of your identity, and apply rules corresponding to that identity. </p></li></ol><p>If any of the above steps fail, Cloudflare falls back to existing bot identification and mitigation mechanisms. As the system matures, we will strengthen these requirements and limit the possibility of a soft downgrade.</p>
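<p>Step 6 can be made concrete. For the URL Scanner request shown earlier, the RFC 9421 signature base is a newline-joined list of the covered components, each rendered as <code>"name": value</code>, terminated by an <code>@signature-params</code> line that repeats the component list and parameters; the Ed25519 signature is computed over exactly this string. A sketch, hand-rolled for illustration rather than a full RFC 9421 serializer:</p>

```python
# Covered components from the Signature-Input header of the earlier example.
components = [
    ("@authority", "www.example.com"),  # derived from the Host header
    ("signature-agent",
     '"https://web-bot-auth-directory.radar-cfdata-org.workers.dev"'),
]
params = ('created=1700000000;expires=1700011111;'
          'keyid="poqkLGiymh_W0uP6PZFw-dvez3QJT5SolqXBCW38r0U";'
          'tag="web-bot-auth"')

lines = [f'"{name}": {value}' for name, value in components]
component_list = " ".join(f'"{name}"' for name, _ in components)
lines.append(f'"@signature-params": ({component_list});{params}')
signature_base = "\n".join(lines)
print(signature_base)
```

<p>If any covered component differs between signer and verifier (a changed authority, an expired timestamp), the reconstructed base no longer matches and verification fails.</p>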
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/128Ox15wBqBPVKUUzvn4gA/acca9b9e6df243b8317b8964285ce57c/2.png" />
          </figure><p>As a site owner, you can segment your Verified Bot traffic by its type and purpose by adding the <a href="https://developers.cloudflare.com/bots/concepts/bot/verified-bots/categories/"><u>Verified Bot Categories</u></a> field <code>cf.verified_bot_category</code> as a filter criterion in <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>WAF Custom rules</u></a>, <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/"><u>Advanced Rate Limiting</u></a>, and Late <a href="https://developers.cloudflare.com/rules/transform/"><u>Transform rules</u></a>. For instance, to allow institutions dedicated to academic research, such as the Bibliothèque nationale de France and the Library of Congress, you can add a rule that allows bots in the <code>Academic Research</code> category.</p>
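<p>As a sketch, such a rule pairs a filter expression on that field with an allow or skip action. The expression below follows the general shape of the Cloudflare Rules language; check the WAF custom rules documentation for the exact category strings and action configuration:</p>

```
(cf.verified_bot_category eq "Academic Research")
```

<p>Traffic matching the expression can then be exempted from bot mitigation, while all other rules continue to apply.</p>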
    <div>
      <h2>Where we’re going next</h2>
      <a href="#where-were-going-next">
        
      </a>
    </div>
    <p>HTTP Message Signatures is a primitive that is useful beyond Cloudflare – the IETF standardized it as part of <a href="https://datatracker.ietf.org/doc/html/rfc9421"><u>RFC 9421</u></a>.</p><p>As discussed in our <a href="https://blog.cloudflare.com/web-bot-auth/#introducing-http-message-signatures"><u>previous blog post</u></a>, Cloudflare believes that making Message Signatures a core component of bot authentication on the web should follow the same path. The <a href="https://www.ietf.org/archive/id/draft-meunier-web-bot-auth-architecture-02.html"><u>specifications</u></a> for the protocol are being built in the open, and they have already evolved following feedback.</p><p>Moreover, due to widespread interest, the IETF is considering forming a working group around <a href="https://datatracker.ietf.org/wg/webbotauth/about/"><u>Web Bot Auth</u></a>. Whether you operate a crawler, an origin, or even a CDN, we invite you to provide feedback to ensure the solution gets stronger and suits your needs.</p>
    <div>
      <h2>A better, more trusted Internet</h2>
      <a href="#a-better-more-trusted-internet">
        
      </a>
    </div>
    <p>For bot, agent, and crawler operators that act transparently and provide vital services for the Internet, we’re providing a faster and more automated path to being recognized as a Verified Bot, with fewer manual steps. We believe this approach moves bot authentication from brittle, easily spoofed methods to a secure and reliable alternative, and should reduce the friction and hurdles genuinely useful bots face.</p><p>For site owners, Message Signatures provide better assurance that bot traffic is legitimate — automatically recognized and allowed, minimizing disruption to essential services (e.g., search engine indexing, monitoring). In line with our commitments to making TLS/<a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>SSL</u></a> and <a href="https://blog.cloudflare.com/pt-br/post-quantum-zero-trust/"><u>Post-Quantum</u></a> certificates available for everyone, we’ll always offer cryptographic verification of Message Signatures for all sites, because a trusted environment for both human and automated traffic makes for a safer and more efficient Internet.</p><p>If you have a feature request, feedback, or are interested in partnering with us, please <a href="https://www.cloudflare.com/lp/verified-bots/"><u>reach out</u></a>.</p>
            <category><![CDATA[Pay Per Crawl]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Bots]]></category>
            <category><![CDATA[Bot Management]]></category>
            <category><![CDATA[AI Bots]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">5K5btgE8vXWGaGxCrs5yFH</guid>
            <dc:creator>Mari Galicer</dc:creator>
            <dc:creator>Akshat Mahajan</dc:creator>
            <dc:creator>Gauri Baraskar</dc:creator>
            <dc:creator>Helen Du</dc:creator>
        </item>
    </channel>
</rss>