
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 11:24:08 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare Confidence Scorecards - making AI safer for the Internet]]></title>
            <link>https://blog.cloudflare.com/cloudflare-confidence-scorecards-making-ai-safer-for-the-internet/</link>
            <pubDate>Tue, 23 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare Confidence Scorecards are now live in the Application Library. Get transparent risk ratings for SaaS and Gen-AI apps. ]]></description>
            <content:encoded><![CDATA[ <p>Security and IT teams face an impossible balancing act: employees are adopting AI tools every day, but each tool carries unique risks tied to compliance, data privacy, and security practices. Employees using these tools without seeking prior approval creates a new type of <a href="https://www.cloudflare.com/learning/access-management/what-is-shadow-it/"><u>Shadow IT</u></a>, referred to as <a href="https://blog.cloudflare.com/shadow-AI-analytics/"><u>Shadow AI</u></a>. Preventing Shadow AI requires manually vetting each AI application to determine whether it should be approved or denied. This isn’t scalable. And blanket bans of AI applications only drive AI usage deeper underground, making it harder to secure.</p><p>That’s why today we are launching Cloudflare Application Confidence Scorecards, part of our new <a href="https://www.cloudflare.com/ai-security/">suite of AI Security features</a> within the Cloudflare One SASE platform. These scores bring scale and automation to the labor- and time-intensive task of evaluating generative AI and SaaS applications one by one. Instead of spending hours hunting for an AI application’s compliance certifications or data-handling practices, evaluators get a clear score that reflects the application’s safety and trustworthiness. With that signal, decision makers can confidently set policies, apply guardrails where needed, and block risky tools, so their organizations can embrace innovation without compromising security.</p><p>Our Cloudflare Application Confidence Scorecards rate both SaaS and AI-powered applications on a number of factors, including whether they’ve achieved industry-recognized certifications, whether they follow certain data management and security measures, and the maturity level of the company behind them. Meanwhile, amongst other considerations, our Generative AI confidence score awards higher scores to AI models that provide system cards describing testing for bias, ethics, and safety considerations, and that do not train on user inputs. We hope our emphasis on privacy, security, and safety helps drive <a href="https://blog.cloudflare.com/best-practices-sase-for-ai/">safer and more secure AI for everyone</a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FQPYW5ZI0vPO950CBJ0Di/3bd6f05703f522c84608882f347f3585/generative-AI-confidence-score.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/opTtg2dkqMc7ZeUevjZjS/77bacb0c4a888622024c7a1b808d41a5/app-confidence-score.png" />
          </figure>
    <div>
      <h2>Rapid increase in Shadow AI</h2>
      <a href="#rapid-increase-in-shadow-ai">
        
      </a>
    </div>
    <p>Over the last decade, SaaS adoption has reshaped how businesses work. Employees can now pick up a new tool in minutes with nothing more than a credit card or free trial link. Now with the growth of generative AI, entire workflows are moving outside corporate oversight. From writing assistants to image generators, employees are relying on these tools daily, without knowing whether they comply with corporate or regulatory requirements. </p><p>The risks of these tools are wide-ranging. Sensitive data can be stored or transmitted outside of company controls. Tools may lack certifications such as SOC2 or ISO 27001. Many providers retain user data indefinitely or use it to train external models. Others face financial or operational instability that could disrupt your business if they go bankrupt or suffer a breach. Models can produce biased outputs that can introduce compliance risks or lead to erroneous business decisions. Security leaders tell us they cannot keep up with auditing every new application.  </p>
    <div>
      <h2>We score them for you, at scale</h2>
      <a href="#we-score-them-for-you-at-scale">
        
      </a>
    </div>
    <p>To make this work, we needed two things: a rubric that could judge AI and SaaS applications, and a mechanism to score all of those applications at scale. Here’s how we did it.</p>
    <div>
      <h3>How the rubric works</h3>
      <a href="#how-the-rubric-works">
        
      </a>
    </div>
    <p>The Application Posture Score (5 points) evaluates a SaaS provider across five major categories:</p><ul><li><p><b>Security and Privacy Compliance (1.2 points):</b> Credit for SOC 2 and ISO 27001 certifications, which signal operational maturity.</p></li><li><p><b>Data Management Practices (1 point):</b> Retention windows and whether the provider shares data with third parties. Shorter retention and no sharing earns the highest marks.</p></li><li><p><b>Security Controls (1 point):</b> Support for MFA, SSO, TLS 1.3, role-based access, and session monitoring. These are the table stakes of modern SaaS security.</p></li><li><p><b>Security Reports and Incident History (1 point):</b> Availability of a trust or security page, bug bounty program, and incident response transparency. A recent material breach results in a full deduction.</p></li><li><p><b>Financial Stability (.8 points):</b> Public companies and heavily capitalized providers score highest, while startups with less funding or firms in distress score lower.</p></li></ul><p>The Gen-AI Posture Score (5 points) evaluates AI-specific risks:</p><ul><li><p><b>Compliance (1 point):</b> Presence of the ISO 42001 certification for AI management systems.</p></li><li><p><b>Deployment Security Model (1 point):</b> Whether access is authenticated and rate-limited or left publicly exposed.</p></li><li><p><b>System Card (1 point):</b> Publication of a model or system card that documents evaluations of safety, bias, and risk.</p></li><li><p><b>Training Data Governance (2 points):</b> Whether user data is explicitly excluded from model training or if there are available controls allowing opt-in/opt-out of training user data.</p></li></ul><p>Together, these scores give a transparent view of how much confidence you can place in a provider.</p>
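Conceptually, each score is a weighted checklist: points earned per category, capped at that category's maximum, summed to a 5-point total. The sketch below uses the category weights listed above, but the function itself is a hypothetical illustration, not Cloudflare's implementation:

```python
# Hypothetical sketch of how the rubric categories combine into the two
# 5-point scores. Weights come from the rubric above; everything else is
# illustrative.

APP_WEIGHTS = {
    "compliance": 1.2,            # SOC 2 / ISO 27001 certifications
    "data_management": 1.0,       # retention windows, third-party sharing
    "security_controls": 1.0,     # MFA, SSO, TLS 1.3, RBAC, session monitoring
    "reports_and_incidents": 1.0, # trust page, bug bounty, breach history
    "financial_stability": 0.8,
}

GENAI_WEIGHTS = {
    "iso_42001": 1.0,
    "deployment_security": 1.0,
    "system_card": 1.0,
    "training_governance": 2.0,   # weighted heaviest in the rubric
}

def posture_score(weights: dict, earned: dict) -> float:
    """Sum earned points, capping each category at its maximum weight."""
    return round(sum(min(earned.get(k, 0.0), w) for k, w in weights.items()), 1)

# Example: certified, solid controls, but a recent material breach
# (reports_and_incidents deducted to 0) and modest funding.
app = posture_score(APP_WEIGHTS, {
    "compliance": 1.2,
    "data_management": 0.9,
    "security_controls": 1.0,
    "reports_and_incidents": 0.0,
    "financial_stability": 0.5,
})
print(app)  # 3.6
```

The cap via `min()` reflects that a category can never contribute more than its stated weight, and the breach deduction is modeled simply by zeroing its category.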
    <div>
      <h3>How we score at scale</h3>
      <a href="#how-we-score-at-scale">
        
      </a>
    </div>
    <p>Just as it isn’t scalable for you to stay on top of every new AI and SaaS tool being created, our team quickly realized we would face the same problem. AI applications are being spun up so quickly that trying to keep pace manually would require a large team of people.</p><p>We knew we had to build a methodology to do it automatically, so we designed infrastructure that can crawl the Internet to answer the rubric questions at scale. We built a system that scrapes public trust centers, privacy policies, security pages, and compliance documents. Large language models parse those documents to identify relevant answers, and we hardened the process against hallucinations by requiring source validation and structured extraction.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6qKD3BGqJ4h4COX4GAYU5S/b0848f940e7c9e7bbdbd78ed09983c0c/image1.png" />
          </figure><p>Every score produced by automation is then reviewed and audited by Cloudflare analysts before it goes live in the Application Library. This combination of automated crawling/extraction and human validation makes sure that the scores are both comprehensive and trustworthy.</p>
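One common way to implement a source-validation guard like the one described above is to require the model to return a verbatim supporting quote alongside each structured answer, and to reject any answer whose quote cannot be found in the scraped document. A sketch of that general technique (field and function names are hypothetical; this is not Cloudflare's actual pipeline):

```python
# Illustrative hallucination guard for LLM-based extraction: accept an
# extracted answer only if its supporting quote appears verbatim in the
# scraped source document.
from dataclasses import dataclass

@dataclass
class Extraction:
    question: str   # rubric question, e.g. "Is the provider SOC 2 certified?"
    answer: str     # structured answer the model produced
    evidence: str   # verbatim quote the model claims supports the answer

def validate(extraction: Extraction, source_text: str) -> bool:
    """Reject extractions whose evidence is not literally in the source."""
    quote = " ".join(extraction.evidence.split())  # normalize whitespace
    doc = " ".join(source_text.split())
    return bool(quote) and quote in doc

doc = "Acme Corp maintains SOC 2 Type II certification, renewed annually."
good = Extraction("soc2", "yes", "maintains SOC 2 Type II certification")
bad = Extraction("iso27001", "yes", "holds ISO 27001 certification")
print(validate(good, doc), validate(bad, doc))  # True False
```

A quote-grounding check like this catches the most damaging failure mode, an answer invented from nothing, while leaving judgment calls to the human reviewers mentioned above.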
    <div>
      <h2>We make it easy to act on it</h2>
      <a href="#we-make-it-easy-to-act-on-it">
        
      </a>
    </div>
    <p>Confidence scores are built directly into the Application Library, making them actionable from day one. When you click on a score in your Cloudflare dashboard, you will see a detailed breakdown of how the app performed across each dimension of the rubric. Scores update as vendors improve their security and compliance, giving you a live view instead of a static report.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FwChyEBXFyDOHWX3WepFw/13802cc41464cc07ab4ea55f4e4d5caa/BLOG-2961-1.png" />
          </figure><p>This approach makes life easier for every stakeholder. IT and security teams can spot high-risk tools at a glance. Procurement and Governance, Risk &amp; Compliance (GRC) teams can accelerate vendor reviews, while developers and employees can make smarter choices without waiting weeks for approvals.</p>
    <div>
      <h2>And it’s getting even better</h2>
      <a href="#and-its-getting-even-better">
        
      </a>
    </div>
    <p>Visibility is just the start. Soon, these scores will also drive enforcement across your Cloudflare One environment. You will be able to use Gateway to block or warn employees about low-scoring apps or tie DLP policies directly to confidence scores. That way untrusted AI and SaaS providers never become a backdoor for sensitive information.</p><p>By embedding scores into both visibility and enforcement, we are turning them into a tool for keeping your corporate environment safer.</p>
    <div>
      <h2>Interested in these scores?</h2>
      <a href="#interested-in-these-scores">
        
      </a>
    </div>
    <p>Cloudflare Application Confidence Scorecards are now live in the Application Library. You can explore them today in the Cloudflare dashboard, use them to evaluate the tools your teams rely on, and soon enforce policies across the Cloudflare Zero Trust platform.</p><p>This is one more step in our mission to make the Internet safer, faster, and more reliable, not just for networks, but for the applications and AI tools that power modern work.</p><p>If you are a Cloudflare customer, you can check out the <a href="https://developers.cloudflare.com/cloudflare-one/applications/app-library/"><u>Application Library</u></a>, explore the confidence scores, and let us know what you think. And if you’re not — fear not! — application scores are freely available to all users, including those on free plans. You can <a href="https://dash.cloudflare.com/sign-up/zero-trust"><u>get started</u></a> by simply creating a free account and seeing the scores for yourself.</p><p>Finally, if you want to get involved in testing new functionality or sharing insights related to <a href="https://www.cloudflare.com/learning/ai/what-is-ai-security/">AI security</a>, we would love for you to express interest in <a href="https://www.cloudflare.com/lp/ai-security-user-research-program-2025/"><u>joining our user research program</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI-SPM]]></category>
            <guid isPermaLink="false">Z2wzT0u3Zixm6qdFEYWZo</guid>
            <dc:creator>Ayush Kumar</dc:creator>
            <dc:creator>Sharon Goldberg</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Cloudflare Application Confidence Score For AI Applications]]></title>
            <link>https://blog.cloudflare.com/confidence-score-rubric/</link>
            <pubDate>Tue, 26 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare will provide confidence scores within our application library for Gen AI applications, allowing customers to assess the risk of employees using shadow IT. ]]></description>
            <content:encoded><![CDATA[ 
    <div>
      <h2>Introduction</h2>
      <a href="#introduction">
        
      </a>
    </div>
    <p>The availability of SaaS and <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/"><u>Gen AI</u></a> applications is transforming how businesses operate, boosting collaboration and productivity across teams. However, with increased productivity comes increased risk, as employees turn to unapproved SaaS and Gen AI applications, often dumping sensitive data into them for quick productivity wins. </p><p>The prevalence of “Shadow IT” and “Shadow AI” creates multiple problems for security, IT, GRC and legal teams. For example:</p><ul><li><p>Gen AI applications may train their models on user inputs, which could expose proprietary corporate information to third parties, competitors, or even through clever attacks like <a href="https://genai.owasp.org/llmrisk/llm01-prompt-injection/"><u>prompt injection</u></a>. </p></li><li><p>Applications may retain user data for long periods, share data with <a href="https://www.malwarebytes.com/blog/news/2025/02/deepseek-found-to-be-sharing-user-data-with-tiktok-parent-company-bytedance#:~:text=PIPC%20said%20that%20DeepSeek%E2%80%94an,without%20disclosure%20or%20explicit%20consent."><u>third parties</u></a>, have <a href="https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers"><u>lax security practices</u></a>, suffer a <a href="https://www.wired.com/story/mcdonalds-ai-hiring-chat-bot-paradoxai/"><u>data breach</u></a>, or even go <a href="https://www.npr.org/2025/03/24/nx-s1-5338622/23andme-bankruptcy-genetic-data-privacy"><u>bankrupt</u></a>, leaving sensitive data exposed to the highest bidder.  
</p></li><li><p>Gen AI applications may produce outputs that are biased, unsafe or incorrect, leading to <a href="https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)769509"><u>compliance violations</u></a> or <a href="https://www.bbc.com/news/world-us-canada-65735769"><u>bad</u></a> <a href="https://www.theguardian.com/media/2023/oct/31/microsoft-accused-of-damaging-guardians-reputation-with-ai-generated-poll"><u>business</u></a> <a href="https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/"><u>decisions</u></a>.</p></li></ul><p>In spite of these problems, <a href="https://www.cloudflare.com/the-net/banning-ai/"><u>blanket bans of Gen AI</u></a> don't work. They stifle innovation and push employee usage underground. Instead, organizations need smarter controls.</p><p>Security, IT, legal and GRC teams therefore face a difficult challenge: how can you appropriately assess each third-party application, without auditing and crafting individual policies for every single one of them that your employees might decide to interact with? And with the rate at which they’re proliferating — how could you possibly hope to keep abreast of them all?</p><p>Today, we’re excited to announce that we’re helping these teams automate assessment of SaaS and Gen AI applications at scale with the introduction of our new <b>Cloudflare Application Confidence Scores. </b>Scores will soon be available as part of our new suite of <a href="https://blog.cloudflare.com/best-practices-sase-for-ai/"><u>AI Security Posture Management (AI-SPM)</u></a> features in the Cloudflare One SASE platform, enabling IT and Security administrators to identify confidence levels associated with third-party SaaS and AI applications, and ultimately write policies informed by those confidence scores. 
We’re starting by scoring AI applications, because that’s where the need is most urgent.</p><p>In this blog, we’ll cover the design of our Cloudflare Application Confidence Score, focusing specifically on the features of the score and our scoring rubric. We reveal the details of that rubric, which is designed to be as transparent and objective as possible — while simultaneously <a href="https://www.cloudflare.com/ai-security/">helping organizations of all sizes safely adopt AI</a> and encouraging the industry and AI providers to adopt <a href="https://www.cloudflare.com/learning/ai/what-is-ai-security/">best practices for AI safety and security</a>.</p><p>In the future, as part of our mission to help build a better Internet, we also plan to make Cloudflare Application Confidence Scores available for free to all our customer tiers. And even if you aren’t a Cloudflare customer, you will easily be able to browse these Scores by creating a free account on the Cloudflare <a href="https://dash.cloudflare.com/"><u>dashboard</u></a> and navigating to our new <a href="https://developers.cloudflare.com/changelog/2025-07-07-dashboard-app-library/"><u>Application Library</u></a>.</p>
    <div>
      <h2>Transparency, not vibes</h2>
      <a href="#transparency-not-vibes">
        
      </a>
    </div>
    <p>The Cloudflare Application Confidence Score is a transparent, understandable, and accountable metric that measures app safety, security, and data protection. It’s designed to give Security, IT, legal and GRC teams a fast way of assessing the rapidly burgeoning space of AI applications.</p><p>Scores are not based on vibes, black-box “learning algorithms”, or “artificial intelligence engines”. We avoid subjective judgments and large-scale red-teaming, as those can be tough to execute reliably and consistently over time. Instead, scores will be computed against an objective rubric that we describe in detail in this blog. Our rubric will be publicly maintained and kept up to date in the Cloudflare developer docs.</p><p>Many providers of the applications that we score are also our customers and partners, so our overarching goal is to be as fair and accountable as possible. We believe that transparency will build trust in our scoring rubric and guide the industry to adopt the best practices that our scoring rubric encourages.</p>
    <div>
      <h2>Principles behind our rubric</h2>
      <a href="#principles-behind-our-rubric">
        
      </a>
    </div>
    <p>Each component of our rubric requires a simple answer based on publicly available data like privacy policies, security documentation, compliance certifications, model cards and incident reports. If something isn't publicly disclosed, we assign zero points to that component of the rubric, with no further assumptions or guesswork. Scores are computed according to our rubric via an automated system that incorporates human oversight for accuracy. We use crawlers to collect public information (e.g. privacy policies, compliance documents), process it using AI to extract answers and compute the resulting scores, and then send them to human analysts for a final review.</p><p>Scores are reviewed on a periodic basis. If a vendor believes that we have mis-scored their application, they can submit supporting documentation via <a href="mailto:app-confidence-scores@cloudflare.com"><u>app-confidence-scores@cloudflare.com</u></a>, and we will update their score if appropriate.</p><p>Scores are on a scale from 1 to 5, with 5 being the highest confidence and 1 being the riskiest. We decided to use a <b>"confidence score"</b> instead of a <b>"risk score"</b> because we can express confidence in an application when it provides clear positive evidence of good security, compliance and safety practices. An application may have good practices internally, but we cannot express confidence in those practices if they are not publicly documented. Moreover, a confidence score allows us to give customers transparent information, so they can make their own informed decisions. For example, an application might get a low confidence score because it lacks a documented data retention policy. While that might be a concern for some, your organization might find it acceptable and decide to allow the application anyway.</p><p>We separately evaluate different account tiers for the same application provider, because different account tiers can present very different levels of enterprise risk. For instance, consumer plans (e.g.
ChatGPT Free) may involve training on user prompts and score lower, whereas enterprise plans (e.g. ChatGPT Enterprise) do not train on user prompts and thus score higher.</p><p>That said, we are quite opinionated about the components we selected for our rubric, drawing on the deep experience of our internal product, engineering, legal, GRC, and security teams. We prioritize factors like data retention policies and encryption standards because we believe they are foundational to protecting sensitive information in an AI-driven world. We included certifications, security frameworks and model cards because they provide evidence of maturity, stability, safety, and adherence to industry best practices.</p>
    <div>
      <h2>Actually, it’s really two Scores</h2>
      <a href="#actually-its-really-two-scores">
        
      </a>
    </div>
    <p>As AI applications emerge at an unprecedented pace, the problem of "Shadow AI" intensifies the traditional risks associated with Shadow IT. Shadow IT applications create risk when they retain user data for long periods, have lax security practices, are financially unstable, or widely share data with third parties. Meanwhile, AI tools create new risks when they retain and train on user prompts, or generate responses that are biased, toxic, inaccurate or unsafe.</p><p>To separate out these different risks, we provide two different Scores:</p><ul><li><p><b>Application Confidence Score</b> (5 points) covers general SaaS maturity, and</p></li><li><p><b>Gen-AI Confidence Score</b> (5 points) focuses on Gen AI-specific risks.</p></li></ul><p>We chose to focus on two separate areas to make our metric extensible (so that, in the future, we can apply it to applications that are not focused on Gen AI) and to make the Scores easier to understand and reason about.</p><p>Each Score is applied to each account tier of a given Gen AI provider. For example, here’s how we scored OpenAI's ChatGPT:</p><ul><li><p><b>ChatGPT Free (App Confidence 3.3, GenAI Confidence 1)</b> received a low score due to limited enterprise controls and higher data exposure risk, since by default, input data is used for model training.</p></li><li><p><b>ChatGPT Plus (App Confidence 3.3, GenAI Confidence 3)</b> scored slightly higher as it allows users to opt out of training on their input data.</p></li><li><p><b>ChatGPT Team (App Confidence 4.3, GenAI Confidence 3)</b> improved further with added collaboration safeguards and configurable data retention windows.</p></li><li><p><b>ChatGPT Enterprise (App Confidence 4.3, GenAI Confidence 4)</b> achieved the highest score, as training on input data is disabled by default while retaining the enhanced controls from the Team tier.</p></li></ul>
    <div>
      <h2>A detailed look at our rubric</h2>
      <a href="#a-detailed-look-at-our-rubric">
        
      </a>
    </div>
    <p>We now walk through the details of the rubric behind each of our Scores.</p>
    <div>
      <h3>Application Confidence Score (5.0 Points Total)</h3>
      <a href="#application-confidence-score-5-0-points-total">
        
      </a>
    </div>
    <p>This half evaluates the app's overall maturity as a SaaS service, drawing from enterprise best practices.</p><p><b>Regulatory Compliance:</b> Checks for key certifications that signal operational maturity. We selected these because they represent proven frameworks that demonstrate a commitment to widely-adopted security and data protection best practices.</p><ul><li><p><a href="https://www.aicpa-cima.com/topic/audit-assurance/audit-and-assurance-greater-than-soc-2"><u>SOC 2</u></a>: .4 points </p></li><li><p><a href="https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng"><u>GDPR</u></a>: .4 points </p></li><li><p><a href="https://www.iso.org/standard/27001"><u>ISO 27001</u></a>: .4 points </p></li></ul><p><b>Data Management Practices: </b>Focuses on how data is retained and shared to minimize exposure. These criteria were chosen as they directly impact the risk of data leaks or misuse, based on common vulnerabilities we've observed in SaaS environments and our own legal/GRC team’s experience assessing third-party SaaS applications at Cloudflare.</p><ul><li><p><b>Documented data retention window:</b>  Shorter retention limits risk.</p><ul><li><p>0 day retention: .5 points</p></li><li><p>30 day retention: .4 points</p></li><li><p>60 day retention: .3 points</p></li><li><p>90 day retention: .1 point</p></li><li><p>No documented retention window: 0 points</p></li></ul></li><li><p><b>Third-party sharing:</b> No sharing means less external exposure of enterprise data. 
Sharing for advertising purposes means high risk of third parties mining and using the data.</p><ul><li><p>No third-party sharing: .5 points.</p></li><li><p>Sharing only for troubleshooting/support: .25 points</p></li><li><p>Sharing for other reasons like advertising or end user targeting: 0 points</p></li></ul></li></ul><p><b>Security Controls:</b> We prioritized these because they form the foundational defenses against unauthorized access, drawing from best practices that have prevented incidents in cloud services.</p><ul><li><p>MFA support: .2 points.</p></li><li><p>Role-based access: .2 points.</p></li><li><p>Session monitoring: .2 points.</p></li><li><p>TLS 1.3: .2 points.</p></li><li><p>SSO support: .2 points.</p></li></ul><p><b>Security reports and incident history:</b> Rewards transparency and deducts for recent issues. This was included to emphasize accountability, as a history of breaches or proactive transparency often indicates how seriously a provider takes security.</p><ul><li><p>Published safety framework and bug bounty: 1 point.</p><ul><li><p>To get full points the company needs to have <b>both</b> of the following: </p><ul><li><p>A publicly accessible page (e.g., security, trust, or safety) that includes a comprehensive whitepaper, framework overview, OR detailed security documentation that covers:</p><ul><li><p>Encryption in transit and at rest</p></li><li><p>Authentication and authorization mechanisms</p></li><li><p>Network or infrastructure security design</p></li></ul></li><li><p>Incident Response Transparency - Published vulnerability disclosure or bug bounty policy OR a documented incident response process and security advisory archive.</p></li></ul></li><li><p>Example: Google has a <a href="https://bughunters.google.com/"><u>bug bounty program</u></a>, a whitepaper providing an overview of their <a href="https://cloud.google.com/docs/security/overview/whitepaper"><u>security posture</u></a>, as well as a <a 
href="https://transparencyreport.google.com/"><u>transparency report</u></a>.</p></li></ul></li><li><p>No or weak public security commitments: 0 points. A company that meets only one of the criteria above but lacks the other also receives no credit.</p><ul><li><p>Example: Lovable, which has a <a href="https://lovable.dev/security"><u>security page</u></a> but appears to lack many of the other criteria.</p></li></ul></li><li><p>Material breach in the last two years: full deduction to 0. This applies if the company has experienced a material cybersecurity incident that resulted in the unauthorized disclosure of customer data to external parties (e.g., data posted, sold, or otherwise made accessible outside the organization), and the incident was publicly acknowledged by the company through a trust center update, press release, incident notification page, or an official regulatory filing.</p><ul><li><p>Example: <a href="https://blog.23andme.com/articles/addressing-data-security-concerns"><u>23andMe</u></a> suffered a credential stuffing attack in 2023 that resulted in the exposure of user data.</p></li></ul></li></ul><p><b>Financial Stability:</b> Gauges the long-term viability of the company behind the application. We added this because a company’s financial health affects its ability to invest in ongoing security and support, and reduces the risk of sudden disruptions, corner-cutting, bankruptcy, or a sudden sale of user data to unknown third parties.</p><ul><li><p>Public company or private with &gt;$300M raised: .8 points.</p></li><li><p>Private with &gt;$100M raised: .5 points.</p></li><li><p>Private with &lt;$100M raised: .2 points.</p></li><li><p>Recent bankruptcy or distress (e.g. bankruptcy filings, major layoffs tied to funding shortfalls, failure to meet debt obligations): 0 points.</p></li></ul>
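As a concrete illustration, the Data Management Practices portion of this rubric maps naturally to a lookup. The post lists exact tiers (0/30/60/90 days) but doesn't say how windows between them are bucketed, so this sketch assumes a documented window is credited at the strictest tier it fits within; all names here are hypothetical:

```python
# Sketch of the Data Management Practices scoring described above
# (point values from the rubric; bucketing of in-between windows is an
# assumption, and the code is illustrative only).

RETENTION_POINTS = {0: 0.5, 30: 0.4, 60: 0.3, 90: 0.1}

def retention_score(documented_days):
    """Shorter documented retention earns more; undocumented earns 0."""
    if documented_days is None:
        return 0.0
    # Credit the strictest tier the documented window fits within.
    for limit, pts in sorted(RETENTION_POINTS.items()):
        if documented_days <= limit:
            return pts
    return 0.0  # longer than 90 days: no documented-retention credit

SHARING_POINTS = {"none": 0.5, "support_only": 0.25, "advertising": 0.0}

def data_management_score(days, sharing):
    """Combine the retention and third-party-sharing components (max 1.0)."""
    return round(retention_score(days) + SHARING_POINTS.get(sharing, 0.0), 2)

print(data_management_score(30, "none"))           # 0.9
print(data_management_score(None, "advertising"))  # 0.0
```

The same table-driven pattern extends to the other categories: each rubric question becomes a key with a fixed point value, which keeps the scoring objective and easy to audit.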
    <div>
      <h3>Gen-AI Confidence Score (5.0 Points Total)</h3>
      <a href="#gen-ai-confidence-score-5-0-points-total">
        
      </a>
    </div>
    <p>This Score zooms in on AI-specific risks, like data usage in training and input vulnerabilities.</p><p><b>Regulatory Compliance,  </b><a href="https://www.iso.org/standard/42001"><b><u>ISO 42001</u></b></a><b>:</b> ISO 42001 is a new certification for AI management systems. We chose this emerging standard because it specifically addresses <a href="https://www.cloudflare.com/the-net/building-cyber-resilience/ai-data-governance/"><u>AI governance</u></a>, filling a gap in traditional certifications and signaling forward-thinking risk management.</p><ul><li><p>ISO 42001 Compliant: 1 point.</p></li><li><p>Not ISO 42001 Compliant: 0 points.</p></li></ul><p><b>Deployment Security Model:</b> Stronger access controls get higher points. Authentication not only controls access but also enables monitoring and logging. This makes it easier to detect misuse and investigate incidents. Public, unauthenticated access is a red flag for shadow IT risk.</p><ul><li><p>Authenticated web portal or key-protected API with rate limiting: 1 point.</p></li><li><p>Unprotected public access: 0 points.</p></li></ul><p><b>Model Card:</b>  A model card is a concise document that provides essential information about an AI model, similar to a nutrition label for a food product. It is crucial for AI safety and security because it offers transparency into a model's design, training data, limitations, and potential biases, enabling developers and users to understand its risks and use it responsibly. Some leading AI providers have committed to providing model cards as public documentation of safety evaluations. We included this in our rubric to encourage the industry to broadly adopt model cards as a best practice. As the practice of model cards is further developed and standardized across the industry, we hope to incorporate more fine-grained details from model cards into our own risk scores. 
But for now, we only include the existence (or lack thereof) of a model card in our score.</p><ul><li><p>Has its own model card: 1 point.</p></li><li><p>Uses a model with a model card: .5 points.</p></li><li><p>None: 0 points.</p></li></ul><p><b>Training on user prompts:</b> This is one of the most important components of our score.  Models that train on user prompts are very risky because users might share sensitive corporate information in user prompts. We weighted this heavily because <a href="https://www.cloudflare.com/learning/ai/how-to-secure-training-data-against-ai-data-leaks/">control over training data</a> is central to preventing unintended data exposure, a core <a href="https://www.cloudflare.com/the-net/generative-ai-zero-trust/"><u>risk in generative AI</u></a> that can lead to major incidents.</p><ul><li><p>Explicit opt-in is required for training on user prompts: 2 points.</p></li><li><p>Opt-out of training on user prompts is explicitly available to users: 1 point.</p></li><li><p>No way to opt out of training on user prompts: 0 points.</p></li></ul><p>Here's an example of these Scores applied to a few popular AI providers.  As expected, enterprise tiers typically earn higher Confidence Scores than consumer tiers of the same AI provider.</p>
<table><thead>
  <tr>
    <th><span>Application</span></th>
    <th><span>Application Score</span></th>
    <th><span>Gen AI Score</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Gemini Free</span></td>
    <td><span>3.8</span></td>
    <td><span>4.0</span></td>
  </tr>
  <tr>
    <td><span>Gemini Pro</span></td>
    <td><span>3.8</span></td>
    <td><span>5.0</span></td>
  </tr>
  <tr>
    <td><span>Gemini Ultra</span></td>
    <td><span>4.1</span></td>
    <td><span>5.0</span></td>
  </tr>
  <tr>
    <td><span>Gemini Business</span></td>
    <td><span>4.7</span></td>
    <td><span>5.0</span></td>
  </tr>
  <tr>
    <td><span>Gemini Enterprise</span></td>
    <td><span>4.7</span></td>
    <td><span>5.0</span></td>
  </tr>
  <tr>
    <td><span>OpenAI Free</span></td>
    <td><span>3.3</span></td>
    <td><span>1.0</span></td>
  </tr>
  <tr>
    <td><span>OpenAI Plus</span></td>
    <td><span>3.3</span></td>
    <td><span>3.0</span></td>
  </tr>
  <tr>
    <td><span>OpenAI Pro</span></td>
    <td><span>3.3</span></td>
    <td><span>3.0</span></td>
  </tr>
  <tr>
    <td><span>OpenAI Team</span></td>
    <td><span>4.3</span></td>
    <td><span>3.0</span></td>
  </tr>
  <tr>
    <td><span>OpenAI Enterprise</span></td>
    <td><span>4.3</span></td>
    <td><span>4.0</span></td>
  </tr>
  <tr>
    <td><span>Anthropic Free</span></td>
    <td><span>3.9</span></td>
    <td><span>5.0</span></td>
  </tr>
  <tr>
    <td><span>Anthropic Pro</span></td>
    <td><span>3.9</span></td>
    <td><span>5.0</span></td>
  </tr>
  <tr>
    <td><span>Anthropic Max</span></td>
    <td><span>3.9</span></td>
    <td><span>5.0</span></td>
  </tr>
  <tr>
    <td><span>Anthropic Team</span></td>
    <td><span>4.9</span></td>
    <td><span>5.0</span></td>
  </tr>
  <tr>
    <td><span>Anthropic Enterprise</span></td>
    <td><span>4.9</span></td>
    <td><span>5.0</span></td>
  </tr>
</tbody></table><p><i>Note: Confidence scores are provided “as is” for informational purposes only and should not be considered a substitute for independent analysis or decision-making. All actions taken based on the scores are the sole responsibility of the user.</i></p>
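<p>The four Gen AI components above sum to a maximum of five points (1 + 1 + 1 + 2), which matches the 5.0 ceiling in the table. As a rough sketch of how the rubric composes — the function below is our illustrative assumption, not Cloudflare’s actual scoring implementation:</p>

```python
# Hypothetical sketch of the Gen AI score rubric described above.
# Component names and the direct five-point sum are our assumptions,
# not Cloudflare's published implementation.

MODEL_CARD_POINTS = {"own": 1.0, "inherited": 0.5, "none": 0.0}
TRAINING_POINTS = {"opt_in": 2.0, "opt_out": 1.0, "no_opt_out": 0.0}

def gen_ai_score(iso_42001: bool,
                 authenticated_access: bool,
                 model_card: str,       # "own" | "inherited" | "none"
                 training_policy: str   # "opt_in" | "opt_out" | "no_opt_out"
                 ) -> float:
    """Sum the four rubric components; the maximum is 5 points."""
    score = 0.0
    score += 1.0 if iso_42001 else 0.0             # ISO 42001 compliance
    score += 1.0 if authenticated_access else 0.0  # deployment security model
    score += MODEL_CARD_POINTS[model_card]         # model card availability
    score += TRAINING_POINTS[training_policy]      # training on user prompts
    return score

# A tier that checks every box earns the full 5.0:
print(gen_ai_score(True, True, "own", "opt_in"))        # 5.0
# A consumer tier with no model card and no opt-out scores far lower:
print(gen_ai_score(False, True, "none", "no_opt_out"))  # 1.0
```

<p>Under this reading, a provider that requires opt-in for training but lacks ISO 42001 certification can still reach 4.0, which is consistent with the spread between consumer and enterprise tiers in the table.</p>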
    <div>
      <h2>We’re just getting started…</h2>
    </div>
    <p>We’re actively refining our scoring methodology. To that end, we’re collaborating with a diverse group of experts in the AI ecosystem (including researchers, legal professionals, SOC teams, and more) to fine-tune our scores and optimize for transparency, accountability, and extensibility. If you have insights, suggestions, or want to get involved testing new functionality, we’d love for you to <a href="https://www.cloudflare.com/lp/ai-security-user-research-program-2025"><u>express interest in our user research program</u></a>. We’d very much welcome your feedback on this scoring rubric. </p><p>Today, we’re releasing just our scoring rubric in order to solicit feedback from the community. But soon, you’ll start seeing these Cloudflare Application Confidence Scores integrated into the Application Library in our SASE platform. Customers can simply click or hover over any score to reveal a detailed breakdown of the rubric and underlying components of the score. Again, if you see any issues with our scoring, please submit your feedback to <a href="mailto:app-confidence-scores@cloudflare.com"><u>app-confidence-scores@cloudflare.com</u></a>, and our team will review it and make adjustments if appropriate. </p><p>Looking even further ahead, we plan to enable integration of these scores directly into <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Cloudflare Gateway</u></a> and <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><u>Access</u></a>, allowing our customers to write policies that block or redirect traffic, apply <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/"><u>data loss prevention (DLP)</u></a> or <a href="https://developers.cloudflare.com/cloudflare-one/policies/browser-isolation/"><u>remote browser isolation (RBI)</u></a>, or otherwise control access to sites based directly on their Cloudflare Application Confidence Score. </p><p>This is just the beginning. 
By prioritizing transparency in our approach, we're not only bridging a critical gap in <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/">SASE capabilities</a> but also driving the industry toward stronger AI safety practices. Let us know what you think!</p><p>If you’re ready to manage risk more effectively with these Confidence Scores, <a href="https://www.cloudflare.com/products/zero-trust/plans/enterprise/?utm_medium=referral&amp;utm_source=blog&amp;utm_campaign=2025-q3-acq-gbl-connectivity-ge-ge-general-ai_week_blog"><u>reach out to Cloudflare experts for a conversation</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI-SPM]]></category>
            <guid isPermaLink="false">4U0WvN8BMpHUPypHmF1Xun</guid>
            <dc:creator>Ayush Kumar</dc:creator>
            <dc:creator>Sharon Goldberg</dc:creator>
        </item>
        <item>
            <title><![CDATA[ChatGPT, Claude, & Gemini security scanning with Cloudflare CASB]]></title>
            <link>https://blog.cloudflare.com/casb-ai-integrations/</link>
            <pubDate>Tue, 26 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare CASB now scans ChatGPT, Claude, and Gemini for misconfigurations, sensitive data exposure, and compliance issues, helping organizations adopt AI with confidence.
 ]]></description>
            <content:encoded><![CDATA[ <p>Starting today, all users of <a href="https://www.cloudflare.com/zero-trust/"><u>Cloudflare One</u></a>, our <a href="https://www.cloudflare.com/learning/access-management/what-is-sase/"><u>secure access service edge (SASE)</u></a> platform, can use our API-based <a href="https://www.cloudflare.com/zero-trust/products/casb/"><u>Cloud Access Security Broker (CASB)</u></a> to assess the security posture of their generative AI (GenAI) tools: specifically, OpenAI’s <a href="https://chatgpt.com/"><u>ChatGPT</u></a>, <a href="https://www.anthropic.com/claude"><u>Claude</u></a> by Anthropic, and Google’s <a href="https://gemini.google.com/"><u>Gemini</u></a>. Organizations can connect their GenAI accounts and, within minutes, start detecting misconfigurations, <a href="https://www.cloudflare.com/learning/access-management/what-is-dlp/"><u>Data Loss Prevention (DLP)</u></a> matches, data exposure and sharing, compliance risks, and more — all without having to install cumbersome software onto user devices.</p><p>As <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/"><u>Generative AI</u></a> adoption has exploded in the enterprise, IT and Security teams must hustle to keep abreast of newly emerging <a href="https://www.cloudflare.com/the-net/generative-ai-zero-trust/"><u> security and compliance challenges</u></a> that come alongside these powerful tools. In this rapidly changing landscape, IT and Security teams need tools that help <a href="https://www.cloudflare.com/ai-security/">enable AI adoption while still protecting the security and privacy of their enterprise networks and data</a>. </p><p>Cloudflare’s API CASB and inline CASB work together to help organizations safely adopt AI tools. The API CASB integrations provide out-of-band visibility into data at rest and security posture inside popular AI tools like ChatGPT, Claude, and Gemini. 
At the same time, Cloudflare Gateway provides <a href="https://blog.cloudflare.com/ai-prompt-protection"><u>in-line prompt controls</u></a> and <a href="https://blog.cloudflare.com/shadow-AI-analytics"><u>Shadow AI</u></a> identification. It applies policies and DLP to traffic as it moves to these AI providers. Together, these features give organizations a unified control plane for <a href="https://blog.cloudflare.com/best-practices-sase-for-ai/">securing their use of GenAI</a>.</p>
    <div>
      <h3>What’s new</h3>
    </div>
    <p>ChatGPT, Claude, and Gemini are now all live in the integrations supported by <a href="https://developers.cloudflare.com/cloudflare-one/applications/scan-apps/casb-integrations/"><u>Cloudflare’s API CASB</u></a>. These integrations are available to all Cloudflare One users. Account owners can easily connect their GenAI tenants, and CASB will scan for security issues across multiple domains:</p><ul><li><p><b>Agentless Connections:</b> Connect ChatGPT, Claude, and Gemini via agentless, API‑based integrations to scan posture and data risks; no endpoint software to install.</p></li><li><p><b>Posture Management:</b> Detect insecure settings and misconfigurations that can lead to data exposure or misuse.</p></li><li><p><b>DLP Detection:</b> Identify where <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/"><u>sensitive data</u></a> has been uploaded in chat attachments (prompts coming soon).</p></li><li><p><b>GenAI-specific Insights:</b> Surface risks associated with the unique capabilities of a given AI provider’s toolset.</p></li></ul><p>Admins can now answer questions like: What are our employees doing in ChatGPT? What data is being uploaded and used in Claude? Is Gemini configured correctly in Google Workspace?</p><p>Now let’s take a closer look at each integration.</p>
    <div>
      <h3>OpenAI ChatGPT</h3>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6dO0h3q9modcmRPAQeiCOH/d8d54f5233e0026a63569b53cbb8d9a6/image2.png" />
          </figure><p>Cloudflare’s CASB integration with OpenAI’s ChatGPT scans for several types of insights, including:</p><ul><li><p><b>Capability Activation</b>: Highlights capabilities that are specific to ChatGPT’s feature set, like <a href="https://platform.openai.com/docs/actions/introduction"><u>actions</u></a>, <a href="https://platform.openai.com/docs/guides/tools-code-interpreter"><u>code execution</u></a>, and <a href="https://help.openai.com/en/articles/9237897-chatgpt-search"><u>web access</u></a>.</p></li><li><p><b>External Exposure: </b>Finds chats and GPTs that are shared beyond the tenant, like GPTs shared publicly or listed on the <a href="https://openai.com/index/introducing-the-gpt-store/"><u>GPT Store</u></a>, and ties them back to their owners for quick triage.</p></li><li><p><b>Secrets, Keys and Invites</b>: Identifies API keys that aren’t rotated or are no longer used, helping maintain credential hygiene, and flags over‑privileged or stale invites.</p></li><li><p><b>Sensitive Content (via DLP)</b>: Detects sensitive data (e.g. credentials and secrets, financial/health information, source code, etc.) via <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/dlp-profiles/"><u>DLP profile</u></a> matches in uploaded chat attachments to enable targeted response.</p></li></ul>
    <div>
      <h3>Anthropic Claude</h3>
    </div>
    <p>For Claude, Cloudflare is able to provide the following out-of-band detections:</p><ul><li><p><b>Secrets, Keys and Invites:</b> Surfaces high‑risk invites and entitlement drift early so least‑privilege access control stays tight. Spots unused API keys and rotation gaps before they turn into forgotten open doors.</p></li><li><p><b>Sensitive Content (via DLP)</b>: Monitors for <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/dlp-profiles/predefined-profiles/"><u>sensitive data</u></a> in uploaded files to help organizations safely enable Claude usage while maintaining compliance. Security teams get this information as soon as CASB scans complete, giving them the visibility they need to help employees use Claude productively and securely with sensitive data.</p></li></ul><p>As Anthropic continues to expand Claude’s API capabilities and features, Cloudflare will add corresponding security detections to match new functionality as it becomes available.</p>
    <div>
      <h3>Google Gemini</h3>
    </div>
    <p>Cloudflare’s detections for Google Gemini appear as part of our API CASB integration for Google Workspace:</p><ul><li><p><b>Identity &amp; MFA</b>: Identifies Gemini users and admins without MFA, leaving them prime targets for compromise. Imagine if an IT admin relied on Gemini daily to process corporate data, but their Google Workspace account lacked multi-factor authentication. One successful phishing email could give an attacker privileged access to Gemini and the wider Google Workspace environment — turning a minor oversight into an organization-wide breach. </p></li><li><p><b>License Hygiene</b>: Flags suspended accounts still holding Gemini or <a href="https://support.google.com/a/answer/16345165"><u>AI Ultra</u></a> licenses to cut cost and reduce exposure. An AI Ultra user has access to more powerful and riskier features, like <a href="https://deepmind.google/models/project-mariner/"><u>Project Mariner</u></a>, a research prototype that acts as an autonomous agent, capable of automating up to 10 tasks simultaneously across web browsers. An attacker can cause more damage by compromising an AI Ultra user, which is why we include this in our set of detections.</p></li></ul><p>The Gemini integration has a narrower scope because Google has structured their product and API differently than OpenAI or Anthropic. For organizations, Gemini is delivered as a <a href="https://workspace.google.com/"><u>Google Workspace</u></a> add-on. Enterprises enable Gemini features in Gmail, Docs, Sheets, and other Google Workspace apps through add-on licenses such as Gemini Enterprise or AI Ultra. Our CASB detections focus on identity, MFA, and license hygiene, rather than posture issues like public sharing or custom assistant publishing because Gemini does not yet provide those API endpoints.</p>
    <div>
      <h3>The Future of GenAI Posture Management</h3>
    </div>
    <p>Like countless other organizations, Cloudflare is adopting GenAI, and we are on the same journey to make these environments even safer than they are today. We are excited to extend our posture management coverage to our customers so they can continue to innovate with GenAI. Looking ahead, we are also encouraged to see GenAI providers take concrete steps towards making security, compliance, and data privacy core tenets of their platforms.</p>
    <div>
      <h3>Secure GenAI beyond the reach of Inline Controls</h3>
    </div>
    <p>Generative AI adoption brings new security requirements. Cloudflare CASB delivers out-of-band visibility across these tools, surfacing insights on top of inline controls. With posture, access, and data under control, organizations can embrace GenAI confidently and securely.</p><p><b>How to get started:</b></p><ul><li><p><b>For existing Cloudflare One customers:</b> Contact your account manager or enable the integrations directly in your dashboard today.</p></li><li><p><b>New to Cloudflare One?</b> <a href="https://dash.cloudflare.com/sign-up/zero-trust"><u>Sign up now</u></a> for 50 free seats to begin securely using Gen AI immediately. For larger deployments, request a <a href="https://www.cloudflare.com/products/zero-trust/plans/enterprise/?utm_medium=referral&amp;utm_source=blog&amp;utm_campaign=2025-q3-acq-gbl-connectivity-ge-ge-general-ai_week_blog"><u>consultation with our experts</u></a>.</p></li></ul><p>If you want to preview other new functionality and help shape our roadmap, <a href="https://www.cloudflare.com/lp/ai-security-user-research-program-2025"><u>express interest in our user research program</u></a> for <a href="https://www.cloudflare.com/learning/ai/what-is-ai-security/">AI security</a>. </p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI-SPM]]></category>
            <category><![CDATA[CASB]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[SAAS Security]]></category>
            <guid isPermaLink="false">ZCOT8h5K8IwD7kDikj0G1</guid>
            <dc:creator>Alex Dunbrack</dc:creator>
        </item>
        <item>
            <title><![CDATA[Best Practices for Securing Generative AI with SASE]]></title>
            <link>https://blog.cloudflare.com/best-practices-sase-for-ai/</link>
            <pubDate>Tue, 26 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ This guide provides best practices for Security and IT leaders to securely adopt generative AI using Cloudflare’s SASE architecture as part of a strategy for AI Security Posture Management (AI-SPM). ]]></description>
            <content:encoded><![CDATA[ <p>As <a href="https://www.cloudflare.com/learning/ai/what-is-generative-ai/"><u>Generative AI</u></a> revolutionizes businesses everywhere, security and IT leaders find themselves in a tough spot. Executives are mandating speedy adoption of Generative AI tools to drive efficiency and stay abreast of competitors. Meanwhile, IT and Security teams must rapidly develop an <a href="https://www.cloudflare.com/ai-security/">AI Security Strategy</a>, even before the organization really understands exactly how it plans to adopt and deploy Generative AI. </p><p>IT and Security teams are no strangers to “building the airplane while it is in flight”. But this moment comes with new and complex security challenges. There is an explosion in new AI capabilities adopted by employees across all business functions — both sanctioned and unsanctioned. AI Agents are ingesting authentication credentials and autonomously interacting with sensitive corporate resources. Sensitive data is being shared with AI tools, even as security and compliance frameworks struggle to keep up.</p><p>While it demands strategic thinking from Security and IT leaders, the problem of governing the use of AI internally is far from insurmountable. <a href="https://www.cloudflare.com/zero-trust/"><u>SASE (Secure Access Service Edge)</u></a> is a popular cloud-based network architecture that combines networking and security functions into a single, integrated service that provides employees with secure and efficient access to the Internet and to corporate resources, regardless of their location. The SASE architecture can be effectively extended to meet the risk and security needs of organizations in a world of AI. </p><p>Cloudflare’s SASE Platform is uniquely well-positioned to help IT teams govern their AI usage in a secure and responsible way — without extinguishing innovation. 
What makes Cloudflare different in this space is that we are one of the few SASE vendors that operate not just in cybersecurity, but also in AI infrastructure. This ranges from providing AI infrastructure for developers (e.g. <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>, <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a>, <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>remote MCP servers</u></a>, <a href="https://realtime.cloudflare.com/"><u>Realtime AI Apps</u></a>) to securing public-facing LLMs (e.g. <a href="https://developers.cloudflare.com/waf/detections/firewall-for-ai/"><u>Firewall for AI</u></a> or <a href="https://blog.cloudflare.com/ai-labyrinth/"><u>AI Labyrinth</u></a>) to allowing content creators to <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/"><u>charge AI crawlers for access to their content</u></a>, and the list goes on. Our expertise in this space gives us a unique view into governing AI usage inside an organization. It also gives our customers the opportunity to plug different components of our platform together to build out their AI <i>and</i> AI cybersecurity infrastructure.</p><p>This week, we are taking this AI expertise and using it to help ensure you have what you need to implement a successful <a href="https://www.cloudflare.com/learning/ai/what-is-ai-security/">AI Security Strategy</a>. 
As part of this, we are announcing several new AI Security Posture Management (AI-SPM) features, including:</p><ul><li><p><a href="http://blog.cloudflare.com/shadow-AI-analytics/"><u>shadow AI reporting</u></a> to gain visibility into employees’ use of AI,</p></li><li><p><a href="http://blog.cloudflare.com/confidence-score-rubric/"><u>confidence scoring</u></a> of AI providers to manage risk, </p></li><li><p><a href="http://blog.cloudflare.com/ai-prompt-protection/"><u>AI prompt protection</u></a> to defend against malicious inputs and prevent data loss, </p></li><li><p>out-of-band <a href="http://blog.cloudflare.com/casb-ai-integrations/"><u>API CASB integrations </u></a>with AI providers to detect misconfigurations, </p></li><li><p>new tools that <a href="http://blog.cloudflare.com/zero-trust-mcp-server-portals/"><u>untangle and secure</u></a> <a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/"><u>Model Context Protocol (MCP)</u></a> deployments in the enterprise.</p></li></ul><p>All of these new AI-SPM features are built directly into Cloudflare’s powerful <a href="https://www.cloudflare.com/zero-trust/"><u>SASE</u></a> platform.</p><p>And we’re just getting started. In the coming months, you can expect to see additional valuable AI-SPM features launch across the <a href="https://www.cloudflare.com/"><u>Cloudflare platform</u></a>, as we continue investing in making Cloudflare the best place to protect, connect, and build with AI.</p>
    <div>
      <h3>What’s in this AI security guide?</h3>
    </div>
    <p>In this guide, we will cover best practices for adopting generative AI in your organization using Cloudflare’s <a href="https://www.cloudflare.com/zero-trust/"><u>SASE (Secure Access Service Edge)</u></a> platform. We start by covering how IT and Security leaders can formulate their AI Security Strategy. Then, we show how to implement this strategy using long-standing features of our SASE platform alongside the new AI-SPM features we launched this week. </p><p>The guide below is divided into three key pillars for dealing with (human) employee access to AI – Visibility, Risk Management, and Data Protection – followed by additional guidelines around deploying agentic AI in the enterprise using MCP. Our objective is to help you align your security strategy with your business goals while driving adoption of AI across all your projects and teams. </p><p>And we do this all using our single <a href="https://www.cloudflare.com/zero-trust/"><u>SASE</u></a> platform, so you don’t have to deploy and manage a complex hodgepodge of point solutions and security tools. In fact, we provide you with an overview of your AI security posture in a single dashboard, as you can see here:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5y6ZHDu9lwCSHZ1FuZsoWT/b3f6a9eb034a3cdb2b663cff428a2335/1.png" />
          </figure><p><i>AI Security Report in Cloudflare’s SASE platform</i></p>
    <div>
      <h2>Develop your AI Security Strategy</h2>
    </div>
    <p>The first step to securing AI usage is to establish your organization’s level of risk tolerance. This includes pinpointing your biggest security concerns for your users and your data, along with relevant legal and compliance requirements. Issues to consider include: </p><ul><li><p>Do you have specific <b>sensitive data that should not be shared</b> with certain AI tools? (Some examples include personally identifiable information (PII), personal health information (PHI), sensitive financial data, secrets and credentials, source code, or other proprietary business information.)</p></li><li><p>Are there <b>business decisions that your employees should not be making using assistance from AI</b>? (For instance, the EU AI Act prohibits the use of AI to evaluate or classify individuals based on their social behavior, personal characteristics, or personality traits.)</p></li><li><p>Are you subject to <b>compliance frameworks</b> that require you to produce records of the generative AI tools that your employees used, and perhaps even the prompts that your employees input into AI providers? (For example, HIPAA requires organizations to implement audit trails that record who accessed PHI and when; GDPR requires the same for PII, and SOC 2 requires the same for secrets and credentials.)</p></li><li><p>Do you have specific data protection requirements that require employees to use the <b>sanctioned, enterprise version of a certain generative AI provider</b>, and avoid certain AI tools or their consumer versions? (Enterprise AI tools often have more favorable terms of service, including shorter data retention periods, more limited data-sharing with third-parties, and/or a promise not to train AI models on user inputs.)</p></li><li><p>Do you require employees to completely <b>avoid the use of certain AI tools</b>, perhaps because they are unreliable, unreviewed, or headquartered in a risky geography? 
</p></li><li><p>What security protections are offered by your organization’s sanctioned AI providers, and to what extent do you plan to <b>protect against misconfigurations of AI tools</b> that can result in leaks of sensitive data? </p></li><li><p>What is your <a href="https://www.cloudflare.com/the-net/building-cyber-resilience/secure-govern-ai-agents/">policy around the use of autonomous AI agents</a>? What is your strategy for <b>adopting the </b><a href="https://www.cloudflare.com/learning/ai/what-is-model-context-protocol-mcp/"><b><u>Model Context Protocol (MCP)</u></b></a>? (The Model Context Protocol is a standard way to make information available to large language models (LLMs), similar to the way an application programming interface (API) works. It supports agentic AI that autonomously pursues goals and takes action.)</p></li></ul><p>While almost every organization has relevant compliance requirements that implicate their use of generative AI, there is no “one size fits all” for addressing these issues. </p><ul><li><p>Some organizations have mandates to broadly adopt AI tools of all stripes, while others require employees to interact with sanctioned AI tools only. </p></li><li><p>Some organizations are rapidly adopting the MCP, while others are not yet ready for agents to autonomously interact with their corporate resources. </p></li><li><p>Some organizations have robust requirements around data loss prevention (DLP), while others are still early in the process of deploying DLP in their organization.</p></li></ul><p>Even with this diversity of goals and requirements, Cloudflare SASE provides a flexible platform for the implementation of your organization’s AI Security Strategy.</p>
    <div>
      <h2>Build a solid foundation for AI Security </h2>
    </div>
    <p>To implement your AI Security Strategy, you first need a solid <a href="https://developers.cloudflare.com/reference-architecture/architectures/sase/"><u>SASE deployment</u></a>. </p><p>SASE provides a unified platform that consolidates security and networking, replacing a fragmented patchwork of point solutions with a single platform that controls application visibility, user authentication, <a href="https://www.cloudflare.com/learning/access-management/what-is-dlp/"><u>Data Loss Prevention (DLP)</u></a>, and other policies for access to the Internet and access to internal corporate resources.  SASE is the essential foundation for an effective AI Security Strategy. </p><p><a href="https://www.cloudflare.com/learning/access-management/what-is-sase/"><u>SASE architecture</u></a> allows you to execute your AI security strategy by discovering and inventorying the AI tools used by your employees. With this visibility, you can proactively manage risk and support compliance requirements by monitoring AI prompts and responses to understand what data is being shared with AI tools. Robust DLP allows you to scan and block sensitive data from being entered into AI tools, preventing data leakage and protecting your organization's most valuable information. Our <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Secure Web Gateway (SWG)</u></a> allows you to redirect traffic from unsanctioned AI providers to user education pages or to sanctioned enterprise AI providers. And our new integration of MCP tooling into our SASE platform helps you secure the deployment of agentic AI inside your organization.</p><p>If you're just starting your SASE journey, our <a href="https://developers.cloudflare.com/learning-paths/secure-internet-traffic/concepts/"><u>Secure Internet Traffic Deployment Guide</u></a> is the best place to begin. 
For this guide, however, we will skip these introductory details and dive right into using SASE to secure the use of Generative AI. </p>
    <div>
      <h2>Gain visibility into your AI landscape </h2>
    </div>
    <p>You can't protect what you can't see. The first step is to gain visibility into your AI landscape, which is essential for discovering and inventorying all the AI tools that your employees are using, deploying or experimenting with in your organization. </p>
    <div>
      <h3>Discover Shadow AI </h3>
    </div>
    <p>Shadow AI refers to the use of AI applications that haven't been officially sanctioned by your IT department. Shadow AI is not an uncommon phenomenon – Salesforce found that <a href="https://www.salesforce.com/news/stories/ai-at-work-research/?utm_campaign=amer_cbaw&amp;utm_content=Salesforce_World+Tour&amp;utm_medium=organic_social&amp;utm_source=linkedin"><u>over half of the knowledge workers it surveyed</u></a> admitted to using unsanctioned AI tools at work. Use of unsanctioned AI is not necessarily a sign of malicious intent; employees are often just trying to do their jobs better. As an IT or Security leader, your goal should be to discover Shadow AI and then apply the appropriate AI security policy. There are two powerful ways to do this: inline and out-of-band.</p>
    <div>
      <h4>Discover employee usage of AI, inline</h4>
    </div>
    <p>The most direct way to get visibility is by using <a href="https://www.cloudflare.com/zero-trust/products/gateway/"><u>Cloudflare's Secure Web Gateway (SWG)</u></a>. </p><p>SWG helps you get a clear picture of both sanctioned and unsanctioned AI and chat applications. By reviewing your detected usage, you'll gain insight into which AI apps are being used in your organization. This knowledge is essential for building policies that support approved tools and block or control risky ones. This feature requires you to deploy the WARP client in Gateway proxy mode on your end-user devices.</p><p>You can review your company’s AI app usage using our new Application Library and <a href="http://blog.cloudflare.com/shadow-AI-analytics/"><u>Shadow IT</u></a> dashboards. These tools allow you to: </p><ul><li><p>Review traffic from user devices to understand how many users engage with a specific application over time.</p></li><li><p>Denote an application’s status (e.g., Approved, Unapproved) inside your organization, and use that as input to a variety of SWG policies that control access to applications with that status. </p></li><li><p>Automate assessment of SaaS and Gen AI applications at scale with our soon-to-be-released <a href="http://blog.cloudflare.com/confidence-score-rubric/"><u>Cloudflare Application Confidence Scores</u></a>.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NFrOpJkBMH6tsPZVec02Q/37b54f7477082dedcac2adcba31e2c29/2.png" />
          </figure><p><sup><i>Shadow IT dashboard showing utilization of applications of different status (Approved, Unapproved, In Review, Unreviewed).</i></sup></p>
    <div>
      <h4>Discover employee usage of AI, out-of-band</h4>
      <a href="#discover-employee-usage-of-ai-out-of-band">
        
      </a>
    </div>
    <p>Even if your organization doesn't use a device client, you can still get valuable data on Shadow AI usage if you use Cloudflare's integrations for Cloud Access Security Broker (<a href="https://www.cloudflare.com/zero-trust/products/casb/"><u>CASB</u></a>) with services like Google Workspace, Microsoft 365, or GitHub. </p><p><a href="https://www.cloudflare.com/zero-trust/products/casb/"><u>Cloudflare CASB</u></a> provides high-fidelity detail about your SaaS environments, including sensitive data visibility and suspicious user activity. By integrating CASB with your SSO provider, you can see if your users have authenticated to any third-party AI applications, giving you a clear and non-invasive sense of app usage across your organization.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3HDUtSAX9f5XZasSyACTiV/367f80a5d745070fd8e0191d0e36e61d/3.png" />
          </figure><p><sup><i>An API CASB integration with Google Workspace, showing findings filtered to third-party integrations. The findings reveal multiple LLM integrations.</i></sup></p>
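<p>Conceptually, this out-of-band discovery amounts to cross-referencing SSO sign-in events against a catalog of known AI services. A minimal sketch of that idea in Python (the event fields, domain list, and sanctioned set are illustrative assumptions, not Cloudflare's actual CASB schema):</p>

```python
# Hypothetical sketch: surface Shadow AI by scanning SSO sign-in events
# for authentications to known AI services. Event fields are illustrative.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"gemini.google.com"}  # apps your org has approved

def find_shadow_ai(events):
    """Return (user, app) pairs for sign-ins to unsanctioned AI apps."""
    findings = []
    for ev in events:
        app = ev["app_domain"]
        if app in KNOWN_AI_DOMAINS and app not in SANCTIONED:
            findings.append((ev["user"], app))
    return findings

events = [
    {"user": "alice@example.com", "app_domain": "claude.ai"},
    {"user": "bob@example.com", "app_domain": "gemini.google.com"},
]
print(find_shadow_ai(events))  # [('alice@example.com', 'claude.ai')]
```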
    <div>
      <h2>Implement an AI risk management framework</h2>
      <a href="#implement-an-ai-risk-management-framework">
        
      </a>
    </div>
    <p>Now that you’ve gained visibility into your AI landscape, the next step is to proactively manage that risk. Cloudflare’s SASE platform allows you to monitor AI prompts and responses, enforce granular security policies, coach users on secure behavior, and prevent misconfigurations in your enterprise AI providers.</p>
    <div>
      <h3>Detect and monitor AI prompts and responses</h3>
      <a href="#detect-and-monitor-ai-prompts-and-responses">
        
      </a>
    </div>
    <p>If you have <a href="https://developers.cloudflare.com/learning-paths/replace-vpn/configure-device-agent/enable-tls-decryption/"><u>TLS decryption enabled</u></a> in your SASE platform, you can gain new and powerful insights into how your employees are using AI with our new <a href="http://blog.cloudflare.com/ai-prompt-protection/"><u>AI prompt protection</u></a> feature.  </p><p>AI Prompt Protection provides you with visibility into the exact prompts and responses from your employees’ interactions with supported AI applications. This allows you to go beyond simply knowing which tools are being used and gives you insight into exactly what kind of information is being shared.  </p><p>This feature also works with <a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/dlp-profiles/"><u>DLP profiles</u></a> to detect sensitive data in prompts. You can also choose whether to block the action or simply monitor it.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/JpNZiyklt6qBRjW4LZuSW/1ea4043b6d03f8de31ce24175aa6ca02/4.png" />
          </figure><p><sup><i>Log entry for a prompt detected using AI prompt protection.</i></sup></p>
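<p>The underlying idea, sketched minimally below, is pattern-based inspection of a captured prompt with a configurable block-or-monitor action. The SSN regex and action names are illustrative stand-ins, not Cloudflare's actual DLP engine:</p>

```python
import re

# Illustrative sketch of DLP-style scanning on a captured AI prompt.
# The SSN pattern and the block-vs-monitor flag mirror the idea of a
# DLP profile, not Cloudflare's actual detection engine.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def inspect_prompt(prompt: str, block_on_match: bool = True) -> str:
    """Return the action to take for a prompt: 'block', 'log', or 'allow'."""
    if SSN_RE.search(prompt):
        return "block" if block_on_match else "log"
    return "allow"

print(inspect_prompt("Summarize the case for SSN 123-45-6789"))  # block
print(inspect_prompt("Draft a polite follow-up email"))          # allow
```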
    <div>
      <h3>Build granular AI security policies</h3>
      <a href="#build-granular-ai-security-policies">
        
      </a>
    </div>
    <p>Once your monitoring tools give you a clear understanding of AI usage, you can begin building security policies to achieve your security goals. Cloudflare's Gateway allows you to create policies based on application categories, application approval status, users, user groups, and device status. For example, you can:</p><ul><li><p>create policies to explicitly allow approved AI applications while blocking unapproved AI applications;</p></li><li><p>create <a href="https://developers.cloudflare.com/changelog/2025-04-11-http-redirect-custom-block-page-redirect/"><u>policies that redirect users</u></a> from unapproved AI applications to an approved AI application;</p></li><li><p>limit access to certain applications to specific users or groups that have specific device security posture;</p></li><li><p>build policies to enable prompt capture (with<a href="http://blog.cloudflare.com/ai-prompt-protection/"><u> AI prompt protection</u></a>) for specific high-risk user groups, such as contractors or new employees, without affecting the rest of the organization; and</p></li><li><p>put certain applications behind <a href="https://developers.cloudflare.com/cloudflare-one/policies/browser-isolation/"><u>Remote Browser Isolation (RBI)</u></a>, to prevent end users from uploading files or pasting data into the application.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BCDxoKrUDRAOO13V8Qd4W/28e84e4529f3e040ba4a2c3c98c6eed7/5.png" />
          </figure><p><sup><i>Gateway application status policy selector</i></sup></p><p>All of these policies can be written in Cloudflare Gateway’s unified policy builder, making it easy to deploy your AI security strategy across your organization.</p>
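<p>As a mental model, the policy types listed above can be thought of as an ordered rule list evaluated per request, matching on application status, user group, and device posture. The sketch below is a hypothetical illustration of that evaluation order, not Gateway's actual policy schema:</p>

```python
# Illustrative sketch of how the signals above might combine in an
# ordered policy evaluation. Rule fields and action names are
# hypothetical, not Gateway's actual policy schema.
RULES = [
    {"app_status": "Unapproved", "action": "block"},
    {"app_status": "Approved", "group": "contractors", "action": "isolate"},
    {"app_status": "Approved", "posture": "managed", "action": "allow"},
]

def evaluate(app_status, group, posture):
    """Return the action of the first rule matching this request."""
    for rule in RULES:
        if rule.get("app_status", app_status) != app_status:
            continue
        if "group" in rule and rule["group"] != group:
            continue
        if "posture" in rule and rule["posture"] != posture:
            continue
        return rule["action"]
    return "block"  # default-deny when nothing matches

print(evaluate("Unapproved", "engineering", "managed"))  # block
print(evaluate("Approved", "contractors", "managed"))    # isolate
print(evaluate("Approved", "engineering", "managed"))    # allow
```

The default-deny fallthrough reflects the Zero Trust posture described elsewhere in the post: a request that matches no explicit allow rule is blocked.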
    <div>
      <h3>Control access to internal LLMs </h3>
      <a href="#control-access-to-internal-llms">
        
      </a>
    </div>
    <p>You can use <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><u>Cloudflare Access</u></a> to control your employees’ access to your organization’s internal LLMs, including any <a href="https://www.cloudflare.com/learning/ai/how-to-secure-training-data-against-ai-data-leaks/">proprietary models you train internally</a> and/or models that your organization runs on <a href="https://developers.cloudflare.com/workers-ai/"><u>Cloudflare Workers AI</u></a>.</p><p>Cloudflare Access allows you to gate access to these LLMs using fine-grained policies, including ensuring users are granted access based on their identity, user group, device posture, and other contextual signals. For example, you can use <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><u>Cloudflare Access</u></a> to write a policy that ensures that only certain data scientists at your organization can access a <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> model that is <a href="https://developers.cloudflare.com/workers-ai/guides/tutorials/fine-tune-models-with-autotrain/"><u>trained</u></a> on certain types of customer data.</p>
    <div>
      <h3>Manage the security posture of third-party AI providers</h3>
      <a href="#manage-the-security-posture-of-third-party-ai-providers">
        
      </a>
    </div>
    <p>As you define which AI tools are sanctioned, you can develop functional security controls for consistent usage. Cloudflare newly supports <a href="http://blog.cloudflare.com/casb-ai-integrations/"><u>API CASB integrations with popular AI tools</u></a> like OpenAI (ChatGPT), Anthropic (Claude), and Google Gemini. These "out-of-band" integrations provide immediate visibility into how users are engaging with sanctioned AI tools, allowing you to report on posture management findings, including:</p><ul><li><p>Misconfigurations related to sharing settings.</p></li><li><p>Gaps in API key management best practices.</p></li><li><p>DLP profile matches in uploaded attachments.</p></li><li><p>Riskier AI features (e.g., autonomous web browsing, code execution) that are toggled on.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/0a6FVjCwejeyUzdQR0pyb/79f29b0d92c27bcd400ed7ded8d4c4e3/6.png" />
          </figure><p><sup><i>OpenAI API CASB Integration showing riskier features that are toggled on, security posture risks like unused admin credentials, and an uploaded attachment with a DLP profile match.</i></sup></p>
    <div>
      <h2>Layer on data protection </h2>
      <a href="#layer-on-data-protection">
        
      </a>
    </div>
    <p>Robust data protection is the final pillar of securing your employees’ access to AI.</p>
    <div>
      <h3>Prevent data loss</h3>
      <a href="#prevent-data-loss">
        
      </a>
    </div>
    <p>Our SASE platform has long supported Data Loss Prevention (<a href="https://developers.cloudflare.com/cloudflare-one/policies/data-loss-prevention/"><u>DLP</u></a>) tools that scan and block sensitive data from being entered into AI tools, to prevent data leakage and protect your organization’s most valuable information. You can write policies that detect sensitive data while adapting to <a href="https://blog.cloudflare.com/improving-data-loss-prevention-accuracy-with-ai-context-analysis/"><u>organization-specific traffic patterns</u></a>, and use Cloudflare Gateway’s unified policy builder to apply these to your users' interactions with AI tools or other applications. For example, you could write a DLP policy that detects and blocks the upload of a Social Security number (SSN), phone number, or address.</p><p>As part of our new <a href="http://blog.cloudflare.com/ai-prompt-protection/"><u>AI prompt protection</u></a> feature, you can now also gain a semantic understanding of your users’ interactions with supported AI providers. Prompts are classified <i>inline</i> into meaningful, high-level topics that include PII, credentials and secrets, source code, financial information, code abuse / malicious code, and prompt injection / jailbreak. You can then build granular inline policies based on these high-level topic classifications. For example, you could create a policy that blocks a non-HR employee from submitting a prompt intended to elicit PII in the response, while allowing the HR team to do so during a compensation planning cycle.</p><p>Our <a href="http://blog.cloudflare.com/ai-prompt-protection/"><u>AI prompt protection</u></a> feature lets you apply smart, user-specific DLP rules that free your teams to get work done, all while strengthening your security posture. To use our most advanced DLP features, you'll need to enable TLS decryption to inspect traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dUnu8P5cMS18k9BxkGoHY/16fdccae7f8e99dc34ebfe7399db4b94/7.png" />
          </figure><p><sup><i>The above policy blocks all ChatGPT prompts that may receive PII back in the response for employees in engineering, marketing, product, and finance </i></sup><a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/identity-selectors/"><sup><i><u>user groups</u></i></sup></a><sup><i>. </i></sup></p>
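<p>The policy shown above reduces to a simple rule: block prompts classified under a sensitive topic unless the user belongs to an exempted group. A minimal sketch (topic labels follow the classes named earlier; the classifier itself is stubbed out, and the group names are illustrative):</p>

```python
# Illustrative sketch of topic-based prompt gating: block prompts
# classified under a sensitive topic unless the user belongs to an
# exempted group. Topic and group names are hypothetical examples.
BLOCKED_TOPICS = {"pii", "credentials_and_secrets"}
EXEMPT = {"pii": {"hr"}}  # HR may handle PII during comp planning

def gate_prompt(topic, user_groups):
    """Return 'block' or 'allow' for a prompt already classified by topic."""
    if topic in BLOCKED_TOPICS and not (user_groups & EXEMPT.get(topic, set())):
        return "block"
    return "allow"

print(gate_prompt("pii", {"engineering"}))          # block
print(gate_prompt("pii", {"hr"}))                   # allow
print(gate_prompt("source_code", {"engineering"}))  # allow
```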
    <div>
      <h2>Secure MCP — and Agentic AI </h2>
      <a href="#secure-mcp-and-agentic-ai">
        
      </a>
    </div>
    <p>MCP (Model Context Protocol) is an emerging AI standard, where MCP servers act as a translation layer for <a href="https://www.cloudflare.com/learning/ai/what-is-agentic-ai/"><u>AI agents</u></a>, allowing them to communicate with public and private APIs, understand datasets, and perform actions. Because these servers are a primary entry point for AI agents to engage with and manipulate your data, they are a new and critical security asset for your security team to manage.</p><p>Cloudflare already offers a robust set of developer tools for deploying <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>remote MCP servers</u></a>: cloud-based servers that act as a bridge between a user's data and tools and various AI applications. But now our customers are asking for help securing their enterprise MCP deployments.</p><p>That is why we’re making MCP security controls a core part of our SASE platform.</p>
    <div>
      <h4>Control MCP Authorization</h4>
      <a href="#control-mcp-authorization">
        
      </a>
    </div>
    <p>MCP servers typically use OAuth for authorization, where the server inherits the permissions of the authorizing user. While this adheres to least-privilege for the user, it can lead to <b>authorization sprawl </b>— where the agent accumulates an excessive number of permissions over time. This makes the agent a high-value target for attackers.</p><p><a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/mcp-servers"><u>Cloudflare Access</u></a> now helps you manage authorization sprawl by applying <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/"><u>Zero Trust principles</u></a> to MCP server access. A Zero Trust model assumes no user, device, or network can be trusted implicitly, so every request is continuously verified. This <a href="https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/mcp-servers"><u>approach </u></a>ensures secure authentication and management of these critical assets as your business adopts more agentic workflows. </p>
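<p>One way to make authorization sprawl concrete is to compare the OAuth scopes an agent's token was granted against the scopes it actually exercises; the difference is a candidate list for revocation. A minimal sketch with hypothetical scope names:</p>

```python
# Illustrative sketch of an authorization-sprawl audit for an MCP agent:
# compare the OAuth scopes a token was granted against the scopes the
# agent actually exercised. Scope names are hypothetical.
def audit_sprawl(granted, used):
    """Return scopes that were granted but never used (candidates to revoke)."""
    return granted - used

granted = {"files:read", "files:write", "calendar:write", "admin:users"}
used = {"files:read", "calendar:write"}
print(sorted(audit_sprawl(granted, used)))  # ['admin:users', 'files:write']
```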
    <div>
      <h4>Centralize management of MCP servers</h4>
      <a href="#centralize-management-of-mcp-servers">
        
      </a>
    </div>
    <p><a href="http://blog.cloudflare.com/zero-trust-mcp-server-portals/"><u>Cloudflare MCP Server Portal</u></a> is a new feature in Cloudflare’s SASE platform that centralizes the management, security, and observation of an organization’s MCP servers.</p><p>MCP Server Portal allows you to register all your MCP servers with Cloudflare and provide your end users with a single, unified Portal endpoint to configure in their MCP client. This approach simplifies the user experience, because it eliminates the need to configure a one-to-one connection between every MCP client and server. It also means that new MCP servers dynamically become available to users whenever they are added to the Portal. </p><p>Beyond these usability enhancements, MCP Server Portal addresses the significant security risks associated with MCP in the enterprise. The current decentralized approach of MCP deployments creates a tangle of unmanaged one-to-one connections that are difficult to secure. The lack of centralized controls creates a variety of risks including prompt injection, tool injection (where malicious code is part of the MCP server itself), supply chain attacks and data leakage. </p><p>MCP Server Portals solve this by routing all MCP traffic through Cloudflare, allowing for centralized policy enforcement, comprehensive visibility and logging, and a curated user experience based on the principle of least privilege. Administrators can review and approve MCP servers before making them available, and users are only presented with the servers and tools they are authorized to use, which prevents the use of unvetted or malicious third-party servers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/64a5Snga1xwRHeCmdbYrpj/f23dc4584618f0c37fb0be8f3399554b/8.png" />
          </figure><p><sup><i>An MCP Server Portal in the Cloudflare Dashboard</i></sup></p><p>All of these features are only the beginning of our MCP security roadmap, as we continue advancing our support for MCP infrastructure and security controls across the entire Cloudflare platform.</p>
    <div>
      <h2>Implement your AI security strategy in a single platform</h2>
      <a href="#implement-your-ai-security-strategy-in-a-single-platform">
        
      </a>
    </div>
    <p>As organizations rapidly develop and deploy their AI security strategies, Cloudflare’s SASE platform is ideally situated to implement policies that balance productivity with data and security controls.</p><p>Our SASE platform offers a full suite of features to protect employee interactions with AI. Some of these features are deeply integrated in our <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/"><u>Secure Web Gateway (SWG)</u></a>, including the ability to write fine-grained access policies, gain visibility into <a href="http://blog.cloudflare.com/shadow-AI-analytics/"><u>Shadow IT</u></a>, and introspect on interactions with AI tools using <a href="http://blog.cloudflare.com/ai-prompt-protection/"><u>AI prompt protection</u></a>. Apart from these inline controls, our <a href="https://developers.cloudflare.com/cloudflare-one/applications/casb/"><u>CASB</u></a> provides visibility and control using out-of-band API integrations. Our Cloudflare <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><u>Access</u></a> product can apply Zero Trust principles while protecting employee access to corporate LLMs that are hosted on <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> or elsewhere. We’re newly integrating controls for <a href="http://blog.cloudflare.com/zero-trust-mcp-server-portals/"><u>securing MCP</u></a> that can also be used alongside Cloudflare’s <a href="https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/"><u>Remote MCP Server</u></a> platform.</p><p>All of these features are integrated directly into Cloudflare’s unified SASE dashboard, giving you a single platform to implement your AI security strategy. You can even gain a holistic view of all of your AI-SPM controls using our newly released AI-SPM overview dashboard.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WzeNXp9TbX0h0QF8Nyby5/bcbeb8824e3eb5558826aed2cb17c11a/9.png" />
          </figure><p><sup><i>AI security report showing utilization of AI applications.</i></sup></p><p>As one of the few SASE vendors that also offers AI infrastructure, Cloudflare lets you deploy its SASE platform alongside products from our developer and application security platforms to holistically implement your AI security strategy alongside your AI infrastructure strategy (using, for example, <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a>, <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a>, <a href="https://developers.cloudflare.com/agents/guides/remote-mcp-server/"><u>remote MCP servers</u></a>, <a href="https://realtime.cloudflare.com/"><u>Realtime AI Apps</u></a>, <a href="https://developers.cloudflare.com/waf/detections/firewall-for-ai/"><u>Firewall for AI</u></a>, <a href="https://blog.cloudflare.com/ai-labyrinth/"><u>AI Labyrinth</u></a>, or <a href="https://blog.cloudflare.com/introducing-pay-per-crawl/"><u>pay per crawl</u></a>).</p>
    <div>
      <h2>Cloudflare is committed to helping enterprises securely adopt AI</h2>
      <a href="#cloudflare-is-committed-to-helping-enterprises-securely-adopt-ai">
        
      </a>
    </div>
    <p>Ensuring AI is scalable, safe, and secure is a natural extension of Cloudflare’s mission, given so much of our success relies on a safe Internet. As AI adoption continues to accelerate, so too does our mission to provide a market-leading set of controls for AI Security Posture Management (AI-SPM). Learn more about how <a href="https://developers.cloudflare.com/learning-paths/holistic-ai-security/concepts/"><u>Cloudflare helps secure AI</u></a> or start exploring our new AI-SPM features in Cloudflare’s SASE <a href="https://dash.cloudflare.com/"><u>dashboard </u></a>today!</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[AI-SPM]]></category>
            <category><![CDATA[DLP]]></category>
            <category><![CDATA[CASB]]></category>
            <category><![CDATA[Access]]></category>
            <category><![CDATA[MCP]]></category>
            <guid isPermaLink="false">55IAKy7DMqbZKAy8htcUiO</guid>
            <dc:creator>AJ Gerstenhaber</dc:creator>
            <dc:creator>Sharon Goldberg</dc:creator>
            <dc:creator>Corey Mahan</dc:creator>
            <dc:creator>Yumna Moazzam</dc:creator>
        </item>
        <item>
            <title><![CDATA[Welcome to AI Week 2025]]></title>
            <link>https://blog.cloudflare.com/welcome-to-ai-week-2025/</link>
            <pubDate>Sun, 24 Aug 2025 16:00:00 GMT</pubDate>
            <description><![CDATA[ We’re seeing AI fundamentally change how people work across every industry. Customer support agents can respond to ten times the tickets. Software engineers are reviewers of AI generated code instead ]]></description>
            <content:encoded><![CDATA[ <p>We are witnessing in real time as AI fundamentally changes how people work across every industry. Customer support agents can respond to ten times the tickets. Software engineers are reviewers of AI-generated code instead of spending hours pounding out boilerplate code. Salespeople can get back to focusing on building relationships instead of tedious follow-up and administration.</p><p>This technology feels magical, and Cloudflare is committed to helping companies build world-class AI-driven experiences for their employees and customers.</p><p>There is a catch, however. Any time a brand-new technology with such widespread appeal emerges, the technology often outpaces the tools in place to govern, secure, and control it. We're already starting to see stories of vibe-coded apps leaking all their users' details. LLM chats that were intended to be shared only between colleagues are actually out on the web, being indexed by search engines for all the world to see. AI agents are being given the keys to the application kingdom, enabling them to work autonomously across an organization — but without <a href="https://www.cloudflare.com/the-net/building-cyber-resilience/secure-govern-ai-agents/">proper tracking and control</a>. And then there’s the risk of a well-meaning employee uploading confidential company or customer data into an LLM, which then uses it to train future models.</p><p>Beyond internal data used for LLM training, content creators and media companies are also faced with a decision about how they want LLM scrapers and information retrieval bots to interact with their content.
Cloudflare has found that it can be <a href="https://blog.cloudflare.com/ai-search-crawl-refer-ratio-on-radar/#how-does-this-measurement-work"><u>hundreds, or even thousands, of times harder</u></a> to generate site traffic (and therefore ad revenue) from an AI response versus a search engine result.</p><p>We're hearing more and more of these stories from CISOs, CIOs, creators, and even CEOs. These leaders are faced with a difficult choice: clamping down on all AI usage and bots — or letting them run wild. There needs to be something in between. And for that to be a real option, the tools to manage and secure AI need to catch up to AI itself.</p><p>This week, that's what Cloudflare is focused on. Welcome to AI Week! Over the coming week, we will focus on four core areas to help companies secure and deliver AI experiences safely and securely:</p><ul><li><p><b>Securing AI environments and workflows:</b> AI is incredibly powerful. The problem is, innovation is outpacing control — we want to change that. And as one of the few zero trust providers also building out AI infrastructure for the web, we’re uniquely positioned to do so.</p></li><li><p><b>Protecting original content from misuse by AI: </b>AI companies are devouring organic content as quickly as it’s created… and creators aren’t seeing any benefit. We want to give content creators control over the content that they have worked so hard to develop.</p></li><li><p><b>Helping developers build world-class, secure AI experiences: </b>the possibilities for developers to create new applications on top of (or even building with) AI are endless. We want to allow developers to create AI-driven applications that are as close to users as possible, with security controls built in from day one.</p></li><li><p><b>Making Cloudflare better for you with AI: </b>AI is changing the nature of interfaces.
For example, finding and mitigating issues buried in thousands and millions of logs and events across website, employee, and email usage is something that used to be tedious — but now with AI, it can be made easy. We’re working day and night to integrate AI into Cloudflare itself to make things more efficient for ourselves and our customers.</p></li></ul>
    <div>
      <h3>Securing AI environments and workflows</h3>
      <a href="#securing-ai-environments-and-workflows">
        
      </a>
    </div>
    <p>As Artificial Intelligence innovation continues to accelerate at an unprecedented pace, the speed of its development is increasingly outpacing the implementation of robust security controls. This rapid advancement, while promising immense benefits, simultaneously introduces novel and complex security challenges that traditional measures are often ill-equipped to address. Organizations are finding themselves grappling with the inherent risks of adopting powerful AI tools without adequate safeguards, leading to vulnerabilities such as Shadow AI and the uncontrolled proliferation of AI models, making the development of <a href="https://www.cloudflare.com/learning/ai/what-is-ai-security/">specialized AI security</a> paramount.

As we look around the zero trust space, none of the other providers are moving fast enough to keep up with AI’s pace of innovation. This is something we know a thing or two about — and after this week, if you’re worried about governing AI usage inside your organization, we will have you covered. </p><p>We will be announcing new and powerful controls to detect Shadow AI and control unauthorized AI usage. Additionally, we’ve built options for teams to establish the “paved path” of AI tooling in an organization to supercharge employee productivity without sacrificing security. Finally, we’ll be announcing new ways of protecting your own models from poisoning or attacks.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5g62AFkZ0G3Q29EXKOtwrP/443371d60c8792dabb703373c9f36816/BLOG-2881_2.png" />
          </figure>
    <div>
      <h3>Protecting original content from AI</h3>
      <a href="#protecting-original-content-from-ai">
        
      </a>
    </div>
    <p>The explosion of Large Language Models (LLMs) has also created a new challenge for content creators: the <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">unauthorized scraping</a> and training of their valuable content. Cloudflare recognizes the critical need for creators to maintain control over their intellectual property. That's why we've introduced Crawl Control, a groundbreaking initiative designed to empower content owners to manage how their content is accessed and used by AI models.</p><p>In the past two months, we've seen incredible progress with Crawl Control. We've significantly expanded the number of participating content providers, allowing more creators to leverage this innovative protection. We've also refined our detection mechanisms to more accurately identify AI crawlers and ensure that only authorized access occurs. Furthermore, we've streamlined the integration process, making it easier for new publishers to onboard and begin protecting their content within minutes. Our goal remains to provide content creators with the tools they need to thrive in the age of AI, ensuring they are compensated and acknowledged for the content they produce.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/21wEPuSaH0qaAvMnE8g3J5/89933c6c1c286852a94e7acc5d5628ca/BLOG-2881_3.png" />
          </figure>
    <div>
      <h3>Helping you build world-class, secure AI experiences</h3>
      <a href="#helping-you-build-world-class-secure-ai-experiences">
        
      </a>
    </div>
    <p>We believe that AI experiences should have security controls by default. This is why we are heavily investing in both our developer platform’s AI Gateway and the associated security controls for those products. This two-pronged approach allows developers to iterate and test new ideas without the fear of painful or embarrassing security issues.</p><p>The Cloudflare AI Gateway allows developers to deploy AI-driven applications with unparalleled speed and efficiency, ensuring that these applications are as close to end-users as possible. This proximity minimizes latency and maximizes performance, delivering a seamless and responsive user experience that is critical in today's fast-paced digital landscape.</p><p>This week, we're announcing significant enhancements to the AI Gateway, further solidifying its position as the premier platform for AI application deployment. These improvements include advanced caching mechanisms that reduce redundant model calls, leading to faster response times and lower operational costs. We are also introducing expanded <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability features</a>, providing developers with deeper insights into their AI models' performance and usage patterns, which will enable more effective debugging and optimization. Furthermore, new integrations with popular AI frameworks and services will simplify the development workflow, allowing developers to leverage the AI Gateway's benefits with even greater ease. Our commitment is to provide developers with the tools to innovate and deliver cutting-edge AI experiences to their users.</p>
    <div>
      <h3>Making Cloudflare better with AI </h3>
      <a href="#making-cloudflare-better-with-ai">
        
      </a>
    </div>
    <p>We’re integrating AI across our entire product suite to enhance the Cloudflare experience itself. From intelligent threat detection that adapts to emerging attack patterns, to AI-powered optimizations that fine-tune network performance, our goal is to leverage AI to make our platform more intuitive, efficient, and secure. We envision a future where Cloudflare’s products proactively anticipate user needs, automate complex tasks, and deliver unparalleled insights, all powered by seamlessly embedded AI. This commitment to internal AI integration ensures that as the digital landscape evolves, Cloudflare remains at the forefront of innovation, continuously delivering superior value to our users.</p><p>We cannot wait to share these updates and announcements with you. Follow our <a href="https://www.cloudflare.com/innovation-week/ai-week-2025/"><u>AI Week hub page</u></a> for all the latest releases from our <a href="https://blog.cloudflare.com/"><u>blog</u></a> and <a href="https://cloudflare.tv/"><u>CloudflareTV</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[AI-SPM]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">7ygz3iUKcvkInoEdnjrjQp</guid>
            <dc:creator>Kenny Johnson</dc:creator>
            <dc:creator>James Allworth</dc:creator>
        </item>
    </channel>
</rss>