
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 08 Apr 2026 09:11:34 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Innovating to address streaming abuse — and our latest transparency report]]></title>
            <link>https://blog.cloudflare.com/h1-2025-transparency-report/</link>
            <pubDate>Fri, 19 Dec 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare's H1 2025 Transparency Report is here. We discuss our principles on content blocking and our innovative approach to combating unauthorized streaming and copyright abuse. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare's latest <a href="https://www.cloudflare.com/transparency/"><u>transparency report</u></a> — covering the first half of 2025 — is now live. As part of our commitment to transparency, Cloudflare publishes such reports twice a year, describing how we handle legal requests for customer information and reports of abuse of our services. Although we’ve been publishing these reports for over 10 years, we’ve continued to adapt our transparency reporting and our commitments to reflect Cloudflare’s growth and changes as a company. Most recently, we made <a href="https://blog.cloudflare.com/cloudflare-2024-transparency-reports-now-live-with-new-data-and-a-new-format/"><u>changes</u></a> to the format of our reports to make them even more comprehensive and understandable.</p><p>In general, we try to provide updates on our approach or the requests that we receive in the transparency report itself. To that end, we have some notable updates for the first half of 2025. But our transparency report can only go so far in explaining the numbers. </p><p>In this blog post, we’ll do a deeper dive on one topic: Cloudflare’s approach to streaming and claims of copyright violations. With increased access to AI tools and other systems for abuse, bad actors have become increasingly sophisticated in how they attempt to abuse systems to stream copyrighted content, often incorporating steps to hide their behavior. We’ve responded by experimenting with new ways to address allegations of streaming and copyright infringement: working closely with rightsholders to better identify domains and accounts that might be streaming, speeding up our processes so we can respond in real time, and improving our ability to identify possible risks.</p><p>This effort aligns with the interests of policymakers, rightsholders, and online service providers in preventing pirated streaming of sporting and other events over the Internet. 
Indeed, the same actors who infringe legitimate intellectual property rights with unauthorized streaming may seek to misuse Cloudflare’s services, impacting performance, costs, and reliability for other users. This shared interest in identifying and responding to unauthorized streaming has led to opportunities for partnerships and better information sharing. Preventing unauthorized streaming is a hard problem that requires those partnerships, with streamers constantly finding new ways to evade detection and preventive actions.</p>
    <div>
      <h3>Innovating to address abuse and identify new threats </h3>
      <a href="#innovating-to-address-abuse-and-identify-new-threats">
        
      </a>
    </div>
    <p>With approximately 20% of the web behind Cloudflare’s network, building smart and scalable abuse processes has never been optional. Even as a much smaller company with more limited services, we <a href="https://blog.cloudflare.com/out-of-the-clouds-and-into-the-weeds-cloudflares-approach-to-abuse-in-new-products/"><u>recognized</u></a> the importance of creating a system that efficiently got abuse reporting to those best positioned to action the reports, typically the website owner or hosting provider. Our view was that we could play an important role in ensuring that allegations of abuse reported to us went to those entities without compromising their security.</p><p>As we have developed new services, we have applied a service-specific <a href="https://www.cloudflare.com/trust-hub/abuse-approach/"><u>approach to abuse</u></a>, reflecting the nature of the services provided, legal requirements, and human rights considerations. This approach means that we treat hosted content differently than content on websites that use our security and CDN services, an approach reflected throughout our transparency report. </p><p>Beyond Cloudflare’s response to individual abuse reports, we also recognize the value of systems that learn from the abuse reports we receive. Not only do efforts to identify abuse patterns improve our ability to detect and mitigate abuse on our network, they enable us to better protect our customers from a wide range of cyber threats.</p><p>Rapid developments in AI and constantly improving technologies create new challenges and new opportunities. Bad actors have learned how to use AI to quickly stand up sophisticated phishing campaigns, or shift and divide unauthorized streaming traffic to evade detection. 
LLMs also enable misuse of abuse reporting systems, facilitating the creation of large volumes of low-quality or even malicious abuse reports.</p><p>At the same time, the ability to apply machine learning and AI to the reams of traffic and information behind Cloudflare’s network has enabled the development of new tools to detect and mitigate abusive conduct. Cloudflare has created automated systems that can keep up with the scale of the issue, all while more accurately identifying genuine abuse. In 2024, as reflected in the temporary surge in phishing actions reported in our <a href="https://cf-assets.www.cloudflare.com/slt3lc6tev37/7vust2n7oACblNR2Jk7jZx/5b84afdbb6fbdcc751d6a7ba9a7f938b/H2_2024_AbuseProcessesTransparencyReport_AQFinal.pdf"><u>abuse transparency report</u></a>, Cloudflare expanded the use of automated systems to respond to reports of technical abuse like phishing. Behind the scenes, Cloudflare has taken similar steps to identify new patterns of abusive behavior, to help prevent bad actors from using our services in the first place.</p><p>Knowing that bad actors aren’t likely to give up, Cloudflare has continued innovating in 2025. We’re exploring new ways to learn about and respond to abuse, with the goal of identifying and pursuing the strategies with the most promise for long-term impact.</p>
    <div>
      <h3>Technical responses to streaming abuse</h3>
      <a href="#technical-responses-to-streaming-abuse">
        
      </a>
    </div>
    <p>Cloudflare has always believed that, regardless of their size, websites deserve a secure, fast, reliable web presence. And because we didn’t think you should have to pay for coming under cyberattack, we’ve offered a <a href="https://www.cloudflare.com/plans/free/"><u>free plan</u></a> for websites since Cloudflare launched in 2010. That system — which protects websites around the world from cyberattack for free — works because typical websites do not consume much bandwidth.</p><p>Streaming is different. Every second of a typical video requires as much bandwidth as loading a full webpage. To ensure that we can continue to provide free services, we’ve always restricted use of our free services to deliver streaming video. Although most of our customers respect these limitations and understand the role they play in enabling our ability to provide these services for free, we sometimes have users attempt to misconfigure our service to stream video.</p><p>In the first half of 2025, Cloudflare worked with several large rightsholders on efforts to address unauthorized streaming. This included providing rightsholders with an API for streamlined reporting, giving feedback on the quality of reports to ensure rightsholders are giving us actionable information, and, after verifying reports against our own internal metrics, taking steps to respond to streaming reports at scale.</p><p>Those efforts bore results, helping us better identify and action unauthorized streaming. The engagement resulted in a significant increase in DMCA reports that Cloudflare received for websites using our hosted services, from approximately 11,000 in the second half of 2024 to approximately 125,000 in the first half of 2025. It also enabled us to speed up our notice and takedown process as we took action in response to 54,000 reports, compared to 1,000 reports in the second half of 2024. 
Using information from these reports, we identified additional signs of abusive behavior, leading us to terminate hosting services to another 21,000 accounts.</p><p>Cloudflare also relied on information provided by rightsholders to bolster our technical tools for preventing unauthorized streaming over Cloudflare’s network by websites using our non-hosted services. To maintain the ability to provide free and low-cost services to static websites, we may take action on websites using those services if they appear to be streaming, regardless of whether that content infringes on copyright. Over the years, we have built a variety of tools to identify and restrict this type of streaming. While rightsholders’ streaming reports are focused on infringement, we can use these reports as signals to help inform our technical tools and improve our response. Working closely with rightsholders has improved our response time on their specific abuse reports and has also helped us prevent thousands of similar websites from streaming in an unauthorized manner over our network before they have ever been identified as infringing.</p><p>The information about streamer tactics and techniques gleaned from these efforts is useful in our broader cybersecurity efforts. Earlier this year, for example, we used information from our streaming program to help a smaller customer whose services were being abused to host streaming content without their knowledge. Understanding how illegal streamers were accessing and abusing their services enabled us to provide them with guidance and tools to prevent the behavior.</p><p>While we have made significant progress on this issue, we fully expect that streamers will adjust their behavior in response to the steps we’ve taken. Cloudflare’s work is not done, and we will continue to look for innovative ways to prevent and address this type of abuse. </p>
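The streaming-identification tools mentioned above are not described in detail, but one simple signal a provider could use is the share of delivered bytes carried with video content types. The sketch below is purely illustrative: the field names, the HLS playlist check, and the 50% threshold are assumptions for demonstration, not Cloudflare's actual detection logic.

```typescript
// Illustrative only: flag a site as likely streaming when most of its
// delivered bytes are video. All names and thresholds are hypothetical.
interface TrafficSample {
  contentType: string; // MIME type of a served response
  bytes: number;       // bytes delivered for that response
}

function looksLikeStreaming(samples: TrafficSample[], threshold = 0.5): boolean {
  const total = samples.reduce((sum, s) => sum + s.bytes, 0);
  if (total === 0) return false;
  const video = samples
    .filter(s => s.contentType.startsWith("video/") ||
                 s.contentType === "application/vnd.apple.mpegurl") // HLS playlists
    .reduce((sum, s) => sum + s.bytes, 0);
  return video / total > threshold;
}
```

In practice a real system would combine many more signals (request patterns, segment sizes, rightsholder reports) rather than content type alone.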
    <div>
      <h3>Addressing blocking demands</h3>
      <a href="#addressing-blocking-demands">
        
      </a>
    </div>
    <p>As Cloudflare has been collaborating with rightsholders on technical solutions to streaming that address the issue in real time, many regulators and rightsholders have taken a clunkier approach: pursuing legally-mandated blocking of the Internet. Lack of technical expertise or sheer indifference can lead to significant overblocking of innocent websites, often without transparency or accountability for those responsible. We share the view of civil society groups like the <a href="https://www.internetsociety.org/resources/policybriefs/2025/perspectives-on-internet-content-blocking/"><u>Internet Society</u></a> that the best and most effective approach remains removing illegal content at the source.</p><p>One of the most notorious examples of overblocking has been actions by Spanish football league LaLiga. Working through ISPs in Spain, they have engaged in widespread blocking of IP addresses shared by many thousands of websites during matches, without any government oversight. This has caused severe Internet outages across Spain during the time of matches. The disproportionate effect of IP address blocking is <a href="https://blog.cloudflare.com/consequences-of-ip-blocking/"><u>well known</u></a>. LaLiga has nonetheless been unapologetic about causing the blocking of countless unrelated websites, suggesting that their commercial interests should trump the rights of Spanish Internet users to access the broader Internet during match times. Although this approach ignores well-established legal principles requiring that any blocking be proportionate to the problem, the Spanish government has not acted to protect the rights of Spanish Internet users. 
Balanced against these clear harms and lack of government willingness to provide sufficient oversight, we have seen no concrete evidence that such blunt force blocking efforts meaningfully solve the issue.</p><p>Cloudflare believes that regulators and rightsholders have a responsibility to seek out proportionate ways to prevent online infringement, and that working collaboratively with service providers offers the best way to effectively address abuse without fundamentally damaging the Internet. For reasons illustrated by the LaLiga example, blocking at the infrastructure layer is often overbroad, non-transparent, and ineffective.</p><p>Although we have real concerns about blocking, and particularly the way blocking has been co-opted by rightsholders to further their commercial interests over the rights of ordinary Internet users to access lawful content, Cloudflare has examined ways that blocking might be applied as a more targeted or proportionate response. In general, Cloudflare has found that blocking is of limited effectiveness, as determined users will find ways to circumvent restrictions. Nonetheless, Cloudflare has taken steps to comply with valid orders related to our CDN services that satisfy human rights principles relating to proportionality, due process, free expression, and transparency. In countries with laws that provide for blocking access to online content and provide appropriate oversight, Cloudflare may geoblock websites to limit access in the relevant jurisdiction to those websites through Cloudflare’s CDN services.</p><p>Cloudflare has never blocked through our public DNS resolver. As we have previously <a href="https://blog.cloudflare.com/latest-copyright-decision-in-germany-rejects-blocking-through-global-dns-resolvers/"><u>described</u></a>, we believe demands to block through public DNS are at odds with the desire for an open Internet and would require the creation of new tools that are contrary to the design of our resolver. 
We continue to litigate against efforts to require us to build such capabilities. Cloudflare has sometimes taken action to geoblock access to websites through Cloudflare’s CDN and security services, in response to DNS blocking orders.</p><p>In the first half of 2025, Cloudflare saw a marked increase in the number of blocking orders it received in Europe. Private rightsholders obtained multiple orders directing Cloudflare to block access to websites in Belgium, France, and Italy. While Cloudflare has challenged aspects of those orders, we have taken steps to comply with them by geoblocking access to the websites at issue in the relevant countries through Cloudflare’s CDN and security services. </p><p>Cloudflare also began giving effect to UK court orders directing other service providers to block websites identified as being dedicated to copyright infringement. Based on a voluntary agreement with rightsholders, Cloudflare is geoblocking websites subject to these orders through our pass-through CDN and security services. When we take action on domains pursuant to these orders, we post an interstitial page that returns a <a href="https://developers.cloudflare.com/support/troubleshooting/http-status-codes/4xx-client-error/error-451/"><u>451 status code</u></a> that directs the visitor to the specific order, which includes a process for affected parties to contest the blocking action.</p>
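The interstitial behavior described above can be sketched as a small request handler. This is an illustrative sketch only: the blocklist, order URL, and response body are hypothetical, and the handler is modeled loosely on a CDN edge check. The `Link` header uses the `blocked-by` relation defined alongside status 451 in RFC 7725.

```typescript
// Hypothetical blocklist and order URL, for illustration only.
const BLOCKED_HOSTS = new Set(["blocked.example"]);
const ORDER_URL = "https://courts.example/order-1234";

interface BlockResult {
  status: number;
  headers: Record<string, string>;
  body: string;
}

function respond(hostname: string): BlockResult {
  if (BLOCKED_HOSTS.has(hostname)) {
    // 451 Unavailable For Legal Reasons; the "blocked-by" Link relation
    // (RFC 7725) points the visitor at the source of the blocking demand.
    return {
      status: 451,
      headers: { Link: `<${ORDER_URL}>; rel="blocked-by"` },
      body: "Unavailable for legal reasons in this region.",
    };
  }
  return { status: 200, headers: {}, body: "ok" };
}
```

Serving an explicit 451 with a pointer to the order, rather than a generic failure, is what gives affected parties a path to contest the block.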
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74JGtoTseoNdxz0xLIW4AK/55545650ab85002692ca9bb07ba6a2a9/image3.png" />
          </figure><p><sup>Example of a 451 error page in the UK.</sup></p><p>Our efforts in the UK to block content based on a finding of infringement in an order directed to a third party reflect our desire to experiment with more targeted approaches than the overblocking we have seen in other countries in Europe, as well as our understanding that the UK’s regime includes important protections around proportionality, due process, and transparency, including an opportunity for affected parties to seek redress. We are currently monitoring the impact of this approach, and have taken these steps with the understanding that we can change course if we see the system being abused. </p><p>Finally, in the first half of 2025, we have seen an expansion of areas for which blocking has been demanded. We received official government notices in France and Belgium that websites using our hosted services were offering gambling services illegally in those jurisdictions. In both cases, we were able to share the notice with our customer, and they took action themselves to address it. This illustrates the benefit of connecting our customer directly with the government regulator so that they can address issues with their websites, rather than proceeding directly to a blocking demand. </p>
    <div>
      <h3>Looking forward</h3>
      <a href="#looking-forward">
        
      </a>
    </div>
    <p>Cloudflare will continue to look for ways to work with rightsholders and regulators to find effective and proportionate ways to address online abuse. As a company that values transparency, we use our biannual transparency reports to describe the principles we apply in doing this work, and in responding to abuse reports or requests for customer information more generally. We invite you to dive into the numbers and <a href="https://www.cloudflare.com/transparency/"><u>learn more here</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Transparency]]></category>
            <guid isPermaLink="false">5mt8quFYw1l3UpRAh6JsHU</guid>
            <dc:creator>Justin Paine</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare is using automation to tackle phishing head on]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-is-using-automation-to-tackle-phishing/</link>
            <pubDate>Mon, 17 Mar 2025 05:00:00 GMT</pubDate>
            <description><![CDATA[ How Cloudflare is using threat intelligence and our Developer Platform products to automate phishing abuse reports. ]]></description>
            <content:encoded><![CDATA[ <p>Phishing attacks have grown both in volume and in sophistication over recent years. Today’s threat isn’t just about sending out generic <a href="https://www.cloudflare.com/learning/email-security/what-is-email/"><u>emails</u></a> — bad actors are using advanced phishing techniques like two-factor-defeating <a href="https://bolster.ai/blog/man-in-the-middle-phishing"><u>man-in-the-middle</u></a> (MitM) attacks, <a href="https://blog.cloudflare.com/how-cloudflare-cloud-email-security-protects-against-the-evolving-threat-of-qr-phishing/"><u>QR codes</u></a> to bypass detection rules, and <a href="https://www.malwarebytes.com/blog/news/2025/01/ai-supported-spear-phishing-fools-more-than-50-of-targets"><u>artificial intelligence (AI)</u></a> to craft personalized and targeted phishing messages at scale. Industry organizations such as the Anti-Phishing Working Group (APWG) <a href="https://docs.apwg.org/reports/apwg_trends_report_q2_2024.pdf"><u>have shown</u></a> that phishing incidents continue to climb year over year.</p><p>To combat both the increase in phishing attacks and their growing complexity, we have built advanced automation tooling to both detect and take action. </p><p>In the first half of 2024, Cloudflare resolved 37% of phishing reports using automated means, and the median time to take action on hosted phishing reports was 3.4 days. In the second half of 2024, after deployment of our new tooling, we were able to expand our automated systems to resolve 78% of phishing reports with a median time to take action on hosted phishing reports of under an hour.</p><p>In this post we dig into some of the details of how we implemented these improvements.</p>
    <div>
      <h3>The phishing site problem</h3>
      <a href="#the-phishing-site-problem">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/dispelling-the-generative-ai-fear-how-cloudflare-secures-inboxes-against-ai-enhanced-phishing/"><u>Cloudflare has observed a similar increase</u></a> in the volume of phishing activity throughout 2023 and 2024. We receive <a href="https://abuse.cloudflare.com/"><u>abuse reports</u></a> from anyone on the Internet that may have seen potentially abusive behaviors from websites using Cloudflare services. Our Trust &amp; Safety investigators and engineers have been tasked with responding to these complaints, and more recently have been using the data from these reports to improve our threat intelligence, brand protection, and email security product offerings.</p><p>Cloudflare has always believed in using the vast amounts of traffic that flows through our network to improve threat detection and customer security. This has been at the core of how we protect our customers from <a href="https://www.cloudflare.com/learning/ddos/glossary/denial-of-service/"><u>DoS attacks</u></a> and other <a href="https://www.cloudflare.com/learning/security/what-is-cyber-security/"><u>cybersecurity</u></a> threats. We've been applying the same concepts our internal teams use to mitigate <a href="https://www.cloudflare.com/learning/email-security/how-to-prevent-phishing/"><u>phishing</u></a> to improve detection of phishing on our network and our ability to detect and notify our customers about potential risks to their brand.</p><p>Prior to last year, phishing abuse reported to Cloudflare relied on manual, human review and intervention to remediate. 
Trust &amp; Safety (T&amp;S) investigators would have to look at each complaint, the allegations made by the reporter, and the content on the reported websites to make assessments as quickly as possible about whether the website was phishing or <a href="https://www.cloudflare.com/learning/ddos/glossary/malware/"><u>malware</u></a>.</p><p>Given the growing scale of our customer base and phishing across the Internet, this became unsustainable. By collecting a group of internal experts on abuse, we were able to tackle this problem by using insights across our network, internal data from our <a href="https://developers.cloudflare.com/cloudflare-one/email-security/"><u>Email Security</u></a> product, external feeds from trusted sources, and years of abuse report processing data to automatically assess risk of likely phishing and recommend appropriate action.</p>
    <div>
      <h3>Turning our intelligence inward</h3>
      <a href="#turning-our-intelligence-inward">
        
      </a>
    </div>
    <p>We built our automated phishing identification on the <a href="https://www.cloudflare.com/developer-platform/products/"><u>Cloudflare Developer Platform</u></a> so that we could meet our scanning demand without concern for how we might scale. This allowed us to focus more on creating a great phishing detection engine and less on the infrastructure required to meet that demand. </p><p>Each URL submitted to our phishing detection <a href="https://workers.cloudflare.com/"><u>Worker</u></a> begins with an initial scan by the <a href="https://radar.cloudflare.com/scan"><u>Cloudflare URL Scanner</u></a>. The scan provides us with the rendered HTML, network requests, and attributes of the site. After scanning, we collect reputational information about the site by submitting the HTML and page resources to our in-house <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/"><u>machine learning</u></a> classifiers; meanwhile, the <a href="https://www.cloudflare.com/learning/security/what-are-indicators-of-compromise/"><u>indicators of compromise (IOCs)</u></a> are sent to our suite of <a href="https://www.cloudflare.com/learning/security/glossary/threat-intelligence-feed/"><u>threat feeds</u></a> and domain categorization tools to highlight any known malicious sites or site categorizations.</p><p>Once we have all of this information collected, we expose it to a set of rules and heuristics that identify the URL as phishing or not based on how T&amp;S investigators have traditionally responded to similar abuse reports and patterns of bad behaviors we’ve observed. Rules will suggest decisions to make against the reports, and remediations to make against harmful content. It is through this process that we were able to convert the manual reviews by T&amp;S investigators into an automated flow of phishing identification. We also recognize that reporters make mistakes or even deliberately try to weaponize abuse processes. 
Our rules must therefore consider the possibility of false positives, in which reports are created against legitimate websites (intentionally or unintentionally). False positives can erode the trust of our customers and create incidents, so automation must include processes to disregard erroneous reports.</p><p>The magic of all of this was the powerful suite of tools on the Cloudflare Developer Platform. Whether it was using <a href="https://developers.cloudflare.com/kv/"><u>KV</u></a> to store report summaries that could scale indefinitely or <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> to keep running counters of an unlimited number of attributes that could be tracked or leveraged over time, we were able to integrate these solutions quickly, allowing us to easily add or remove new enrichments with little effort. We also made use of <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> to access the internal Postgres database that stores our abuse reports, <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> to manage the scanning jobs, <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> to run machine learning classifiers, and <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a> to store detection logs for efficacy and evaluation review. To tie it all together, the team also deployed a <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-remix-site/"><u>Remix Pages UI</u></a> to present all the phishing detection engine’s analysis to T&amp;S investigators for follow-on investigations and evaluations of inconclusive results.</p>
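As a rough illustration of the rules-and-heuristics stage described above, the function below combines a few enrichment signals into a suggested decision, including a guard against false positives from low-reputation reporters. Every signal name, weight, and threshold here is hypothetical; Cloudflare's real rules are not public.

```typescript
// Illustrative sketch of a rules stage that turns enrichment signals into a
// suggested action. All fields, weights, and cutoffs are invented for the
// example and do not reflect Cloudflare's actual heuristics.
interface ReportSignals {
  classifierScore: number;    // 0..1 output of an ML phishing classifier
  onThreatFeed: boolean;      // an IOC matched a trusted threat feed
  reporterReputation: number; // 0..1 history of accurate reports
  domainAgeDays: number;      // newly registered domains are riskier
}

type Verdict = "auto_mitigate" | "needs_review" | "dismiss";

function assessReport(s: ReportSignals): Verdict {
  // Strong, corroborated signal: safe to act automatically.
  if (s.onThreatFeed && s.classifierScore > 0.9) return "auto_mitigate";

  // False-positive guard: weak evidence from a low-reputation reporter
  // is dismissed rather than actioned against a legitimate site.
  if (s.reporterReputation < 0.2 && s.classifierScore < 0.3) return "dismiss";

  // Otherwise blend the remaining signals into a risk score.
  const ageFactor = s.domainAgeDays < 30 ? 0.2 : 0;
  const risk = 0.6 * s.classifierScore + (s.onThreatFeed ? 0.3 : 0) + ageFactor;
  return risk > 0.8 ? "auto_mitigate" : "needs_review";
}
```

The inconclusive middle band is exactly what the investigator UI mentioned above exists for: automation handles the clear cases at either end and routes the rest to humans.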
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7MQYa4u71uKm9J6AaNxQNy/0cce686f51988ece4a1a46d87dae6df9/image1.png" />
          </figure><p><sup><i>Architecture of Trust &amp; Safety’s phishing automation detection pipeline</i></sup></p>
    <div>
      <h3>Moving forward</h3>
      <a href="#moving-forward">
        
      </a>
    </div>
    <p>The same intelligence we’re gathering to expedite and refine abuse report processing isn’t just for abuse response; it’s also used to empower our customers. By analyzing patterns and trends of abusive behaviors — such as identifying common phrases used in phishing attempts, recognizing infrastructure used by malicious actors or spotting coordinated campaigns across multiple domains — we enhance the efficacy of our application security, email security, and threat intelligence products.</p><p>For our <a href="https://developers.cloudflare.com/learning-paths/application-security/security-center/brand-protection/"><u>Brand Protection</u></a> customers, this translates into a significant advantage: the ability to easily report suspected abuse directly from the Cloudflare dashboard. This feature ensures that potential phishing sites are addressed rapidly, minimizing the risk to your customers and brand reputation. Furthermore, the Trust and Safety team can use this information to take action on similar threats across the Cloudflare network, protecting all customers, even those who aren't Brand Protection users.</p><p>Alongside our network-wide efforts, we’ve also been partnering with our customers, as well as experts outside of Cloudflare, to understand trends they are seeing in their own phishing mitigation efforts. By soliciting intelligence regarding the abuse issues that affect the attack’s targets, we can better identify and prevent abuse of Cloudflare products. We’ve been able to use these partnerships and discussions with external organizations to craft highly targeted rules that head off emerging patterns of phishing activity. </p>
    <div>
      <h3>It takes a village: if you see something, say something</h3>
      <a href="#it-takes-a-village-if-you-see-something-say-something">
        
      </a>
    </div>
    <p>If you believe you’ve identified phishing activity that is passing through Cloudflare’s network, please report it via our <a href="https://abuse.cloudflare.com/"><u>abuse reporting form</u></a>. For technical users who might be interested in a programmatic way to report to us, please review our <a href="https://developers.cloudflare.com/api/resources/abuse_reports/"><u>abuse reporting API</u></a> documentation.</p><p>We invite all of our customers to join us in helping make the Internet safer:</p><ol><li><p>Enterprise customers should speak with their Customer Success Manager about enabling <a href="https://blog.cloudflare.com/safeguarding-your-brand-identity-logo-matching-for-brand-protection/"><u>Brand Protection</u></a>, included by default for all enterprise customers. </p></li><li><p>For existing users of the Brand Protection product, update your <a href="https://developers.cloudflare.com/security-center/brand-protection/"><u>brand's assets</u></a>, so we can better identify the legitimate websites and logos of our customers vs. possible phishing activity.</p></li><li><p>As a Cloudflare customer, make sure your <a href="https://developers.cloudflare.com/fundamentals/setup/account/account-security/abuse-contact/"><u>abuse contact</u></a> is up-to-date in the Cloudflare dashboard.</p></li></ol><p></p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[Phishing]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">3Bb3gcZ92DhVXA44P3XF7x</guid>
            <dc:creator>Javier Castro</dc:creator>
            <dc:creator>Justin Paine</dc:creator>
            <dc:creator>Rachael Truong</dc:creator>
        </item>
        <item>
            <title><![CDATA[First Half 2019 Transparency Report and an Update on a Warrant Canary]]></title>
            <link>https://blog.cloudflare.com/first-half-2019-transparency-report-and-an-update-on-a-warrant-canary/</link>
            <pubDate>Fri, 20 Dec 2019 21:49:36 GMT</pubDate>
            <description><![CDATA[ Today, we are releasing Cloudflare’s transparency report for the first half of 2019. We recognize the importance of keeping the reports current, but it’s taken us a little longer ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are releasing <a href="https://www.cloudflare.com/transparency/">Cloudflare’s transparency report</a> for the first half of 2019. We recognize the importance of keeping the reports current, but it’s taken us a little longer than usual to put it together. We have a few notable updates.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4xY1LkLltSH3mLIdmrOzEJ/d090e2f5d85f1dadc2ddd868242a6d58/canary-1.png" />
            
            </figure>
    <div>
      <h3>Pulling a warrant canary</h3>
      <a href="#pulling-a-warrant-canary">
        
      </a>
    </div>
    <p>Since we issued our very first transparency report in 2014, we’ve maintained a number of commitments - known as warrant canaries - about what actions we will take and how we will respond to certain types of law enforcement requests. We supplemented those initial commitments <a href="/cloudflare-transparency-update-joining-cloudflares-flock-of-warrant-canaries-2/">earlier this year</a>, so that our current warrant canaries state that Cloudflare has never:</p><ol><li><p>Turned over our encryption or authentication keys or our customers' encryption or authentication keys to anyone.</p></li><li><p>Installed any law enforcement software or equipment anywhere on our network.</p></li><li><p>Terminated a customer or taken down content due to political pressure*</p></li><li><p>Provided any law enforcement organization a feed of our customers' content transiting our network.</p></li><li><p>Modified customer content at the request of law enforcement or another third party.</p></li><li><p>Modified the intended destination of DNS responses at the request of law enforcement or another third party.</p></li><li><p>Weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.</p></li></ol><p>These commitments serve as a statement of values to remind us what is important to us as a company, to convey not only what we do, but what we believe we should do. For us to maintain these commitments, we have to believe not only that we’ve met them in the past, but that we can continue to meet them.</p><p>Unfortunately, there is one warrant canary that no longer meets the test for remaining on our website. After Cloudflare terminated the Daily Stormer’s service in 2017, Matthew <a href="/why-we-terminated-daily-stormer/">observed</a>:</p><p><i>"We're going to have a long debate internally about whether we need to remove the bullet about not terminating a customer due to political pressure. 
It's powerful to be able to say you've never done something. And, after today, make no mistake, it will be a little bit harder for us to argue against a government somewhere pressuring us into taking down a site they don't like."</i></p><p>We addressed this issue in our subsequent transparency reports by retaining the statement, but adding an asterisk identifying the Daily Stormer debate and the criticism that we had received in the wake of our decision to terminate services. Our goal was to signal that we remained committed to the principle that we should not terminate a customer due to political pressure, while not ignoring the termination. We also sought to be public about the termination and our reasons for the decision, ensuring that it would not go unnoticed.</p><p>Although that termination sparked significant debate about whether infrastructure companies should be making decisions about what content remains online, we haven’t yet seen politically accountable actors put forth real alternatives to address deeply troubling content and behavior online. Since that time, we’ve seen even more real-world consequences from the vitriol and hateful content spread online, from the screeds posted in connection with the terror attacks in Christchurch, Poway and El Paso to the posting of video glorifying those attacks. Indeed, in the absence of true public policy initiatives to address those concerns, the pressure on tech companies -- even deep Internet infrastructure companies like Cloudflare -- to make judgments about what stays online has only increased.</p><p>In August 2019, Cloudflare terminated service to 8chan based on their failure to moderate their hate-filled platform in a way that inspired murderous acts. 
Although we don’t think removing cybersecurity services to force a site offline is the right public policy approach to the hate festering online, a site’s failure to take responsibility to prevent or mitigate the harm caused by its platform leaves service providers like us with few choices. We’ve come to recognize that the prolonged and persistent lawlessness of others might require action by those further down the technical stack. Although we’d prefer that governments recognize that need, and build mechanisms for due process, if they fail to act, infrastructure companies may be required to take action to prevent harm.</p><p>And that brings us back to our warrant canary. If we believe we might have an obligation to terminate customers, even in a limited number of cases, retaining a commitment that we will never terminate a customer “due to political pressure” is untenable. We could, in theory, argue that terminating a lawless customer like 8chan was not a termination “due to political pressure.” But that seems wrong. We shouldn’t be parsing specific words of our commitments to explain to people why we don’t believe we’ve violated the standard.</p><p>We remain committed to the principle that providing cybersecurity services to everyone, regardless of content, makes the Internet a better place. Although we’re removing the warrant canary from our website, we believe that to earn and maintain our users’ trust, we must be transparent about the actions we take. We therefore commit to reporting on any action that we take to terminate a user that could be viewed as a termination “due to political pressure.”</p>
    <div>
      <h3>UK/US Cloud agreement</h3>
      <a href="#uk-us-cloud-agreement">
        
      </a>
    </div>
    <p>As we’ve described <a href="/digital-evidence-across-borders-and-engagement-with-non-us-authorities/">previously</a>, governments have been working to find ways to improve law enforcement access to digital evidence across borders. Those efforts resulted in a new U.S. law, the Clarifying Lawful Overseas Use of Data (CLOUD) Act, premised on the idea that law enforcement around the world should be able to get access to electronic content related to their citizens when conducting law enforcement investigations, wherever that data is stored, as long as they are bound by sufficient procedural safeguards to ensure due process.</p><p>On October 3, 2019, the US and UK signed the first Executive Agreement under this law. According to the requirements of U.S. law, that Agreement will go into effect in 180 days, in March 2020, unless Congress takes action to block it. There is an ongoing debate as to whether the agreement includes sufficient due process and privacy protections. We’re going to take a wait and see approach, and will closely monitor any requests we receive after the agreement goes into effect.</p><p>For the time being, Cloudflare intends to comply with appropriately scoped and targeted requests for data from UK law enforcement, provided that those requests are consistent with the law and international human rights standards. Information about the legal requests that Cloudflare receives from non-U.S. governments pursuant to the CLOUD Act will be included in future transparency reports.</p> ]]></content:encoded>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Trust & Safety]]></category>
            <category><![CDATA[Transparency]]></category>
            <guid isPermaLink="false">26p8e8McNC9PBOC8HjH5ql</guid>
            <dc:creator>Alissa Starzak</dc:creator>
            <dc:creator>Justin Paine</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing the CSAM Scanning Tool, Free for All Cloudflare Customers]]></title>
            <link>https://blog.cloudflare.com/the-csam-scanning-tool/</link>
            <pubDate>Wed, 18 Dec 2019 18:02:42 GMT</pubDate>
            <description><![CDATA[ Two weeks ago we wrote about Cloudflare's approach to dealing with child sexual abuse material (CSAM). We first began working with the National Center for Missing and Exploited Children (NCMEC), the US-based organization that acts as a clearinghouse for removing this abhorrent content ]]></description>
            <content:encoded><![CDATA[ <p>Two weeks ago we wrote about <a href="/cloudflares-response-to-csam-online/">Cloudflare's approach to dealing with child sexual abuse material (CSAM)</a>. We first began working with the National Center for Missing and Exploited Children (NCMEC), the US-based organization that acts as a clearinghouse for removing this abhorrent content, within months of our public launch in 2010. Over the last nine years, our Trust &amp; Safety team has worked with <a href="http://www.missingkids.com/">NCMEC</a>, <a href="https://www.interpol.int/en/Crimes/Crimes-against-children">Interpol</a>, and nearly 60 other public and private agencies around the world to design our program. And we are proud of the work we've done to remove CSAM from the Internet.</p><p>The most repugnant cases, in some ways, are the easiest for us to address. While Cloudflare is not able to remove content hosted by others, we will take steps to terminate services to a website when it becomes clear that the site is dedicated to sharing CSAM or if the operators of the website and its host fail to take appropriate steps to take down CSAM content. When we terminate websites, we purge our caches — something that takes effect within seconds globally — and we block the website from ever being able to use Cloudflare's network again.</p>
    <div>
      <h3>Addressing the Hard Cases</h3>
      <a href="#addressing-the-hard-cases">
        
      </a>
    </div>
    <p>The hard cases are when a customer of ours runs a service that allows user-generated content (such as a discussion forum) and a user uploads CSAM, or if they’re hacked, or if they have a malicious employee who is storing CSAM on their servers. We've seen many instances of these cases where services intending to do the right thing are caught completely off guard by CSAM that ended up on their sites. Despite the absence of intent or malice in these cases, there’s still a need to identify and remove that content quickly.</p><p>Today we're proud to take a step to help deal with those hard cases. Beginning today, every Cloudflare customer can log in to their dashboard and enable access to the CSAM Scanning Tool. As the CSAM Scanning Tool moves through development to production, the tool will check all Internet properties that have enabled CSAM Scanning for this illegal content. Cloudflare will automatically send a notice to you when it flags CSAM material, block that content from being accessed (with a 451 “blocked for legal reasons” status code), and take steps to support proper reporting of that content in compliance with legal obligations.</p>
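<p>For illustration, the 451 status mentioned above is defined in RFC 7725 (“Unavailable For Legal Reasons”). A minimal, hypothetical sketch of what such a blocking response might contain (this is not Cloudflare's actual implementation):</p>

```python
# Hypothetical sketch of an HTTP 451 "blocked for legal reasons" response,
# as defined in RFC 7725. Illustrative only, not Cloudflare's implementation.
from http import HTTPStatus

def legal_block_response() -> dict:
    status = HTTPStatus.UNAVAILABLE_FOR_LEGAL_REASONS  # 451
    return {
        "status": int(status),
        "reason": status.phrase,
        "body": "This content has been blocked for legal reasons.",
    }

resp = legal_block_response()
print(resp["status"], resp["reason"])  # 451 Unavailable For Legal Reasons
```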
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4AbFvKTllB8PtLhuXka8Cd/83db523c6bab38856bdf3c8aa757c39e/csam-tool.png" />
            
            </figure><p>CSAM Scanning will be available via the Cloudflare dashboard at no cost for all customers regardless of their plan level. You can find this tool under the “Caching” tab in your dashboard. We're hopeful that by opening this tool to all our customers for free we can help do even more to counter CSAM online and help protect our customers from the legal and reputational risk that CSAM can pose to their businesses.</p><p>It has been a long journey to get to the point where we could commit to offering this service to our millions of users. To understand what we're doing and why it has been challenging from a technical and policy perspective, you need to understand a bit about the state of the art of tracking CSAM.</p>
    <div>
      <h3>Finding Similar Images</h3>
      <a href="#finding-similar-images">
        
      </a>
    </div>
    <p>Around the same time as Cloudflare was first conceived in 2009, a Dartmouth professor named Hany Farid was working on software that could compare images against a list of hashes maintained by NCMEC. Microsoft took the lead in creating a tool, PhotoDNA, that used Prof. Farid’s work to identify CSAM automatically.</p><p>In its earliest days, Microsoft used PhotoDNA for their services internally and, in late 2009, <a href="http://blogs.msdn.com/b/microsoftuseducation/archive/2009/12/17/microsoft-donates-photodna-technology-to-make-the-internet-safer-for-kids.aspx">donated the technology to NCMEC</a> to help manage its use by other organizations. Social networks were some of the first adopters. In 2011, <a href="http://www.huffingtonpost.com/2011/05/20/facebook-photodna-microsoft-child-pornography_n_864695.html">Facebook rolled out an implementation</a> of the technology as part of their abuse process. <a href="https://www.theguardian.com/technology/2013/jul/22/twitter-photodna-child-abuse">Twitter incorporated it in 2014</a>.</p><p>The process is known as a fuzzy hash. Traditional hash algorithms like MD5, SHA1, and SHA256 take a file (such as an image or document) of arbitrary length and output a fixed length number that is, effectively, the file’s digital fingerprint. For instance, if you take the MD5 of this picture then the resulting fingerprint is <b>605c83bf1bba62e85f4f5fccc56bc128</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56l9Sl2FmrYei4gHB1ejSi/edb419fe336a2f3b81dcd7964f93bf3f/base-image.jpg" />
            
            </figure><p>The base image</p><p>If we change a single pixel in the picture to be slightly off white rather than pure white, it's visually identical but the fingerprint changes completely to <b>42ea4fb30a440d8787477c6c37b9daed</b>. As you can see from the two fingerprints, a small change to the image results in a massive and unpredictable change to the output of a traditional hash.</p>
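<p>This “avalanche” behavior of traditional hashes is easy to demonstrate in a few lines of Python (using MD5 purely as an illustration):</p>

```python
import hashlib

# Two inputs differing in a single byte -- analogous to changing one pixel.
original = b"example image bytes \xff"
altered  = b"example image bytes \xfe"

d1 = hashlib.md5(original).hexdigest()
d2 = hashlib.md5(altered).hexdigest()

print(d1)
print(d2)
# The two digests share no structural similarity, so a traditional hash
# tells you nothing about how "close" two inputs are to each other.
```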
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dbf5FVy09FFVrTDvhTbkm/9a64cea0427199fe630f212da2c53dd2/base-image-pixel-changed.jpg" />
            
            </figure><p>The base image with a single pixel changed</p><p>This is great for some uses of hashing where you want to definitively identify if the document you're looking at is exactly the same as the one you've seen before. For example, if an extra zero is added to a digital contract, you want the hash of the document used in its signature to no longer be valid.</p>
    <div>
      <h3>Fuzzy Hashing</h3>
      <a href="#fuzzy-hashing">
        
      </a>
    </div>
    <p>However, in the case of CSAM, this characteristic of traditional hashing is a liability. In order to avoid detection, the criminals producing CSAM resize, add noise, or otherwise alter the image in such a way that it looks the same but it would result in a radically different hash.</p><p>Fuzzy hashing works differently. Instead of determining if two photos are exactly the same it instead attempts to get at the essence of a photograph. This allows the software to calculate hashes for two images and then compare the "distance" between the two. While the fuzzy hashes may still be different between two photographs that have been altered, unlike with traditional hashing, you can compare the two and see how similar the images are.</p><p>So, in the two photos above, the fuzzy hash of the first image is</p>
            <pre><code>00e308346a494a188e1042333147267a
653a16b94c33417c12b433095c318012
5612442030d1484ce82c613f4e224733
1dd84436734e4a5c6e25332e507a8218
6e3b89174e30372d</code></pre>
            <p>and the second image is</p>
            <pre><code>00e308346a494a188e1042333147267a
653a16b94c33417c12b433095c318012
5612442030d1484ce82c613f4e224733
1dd84436734e4a5c6e25332e507a8218
6e3b89174e30372d</code></pre>
            <p>There's only a slight difference between the two in terms of pixels and the fuzzy hashes are identical.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1XABj7ngnOZ2SrnRyOl3mI/1adc7f88b188d15057b155b006d686b8/base-image-altered-2.jpeg.jpeg" />
            
            </figure><p>The base image after increasing the saturation, changing to sepia, adding a border and then adding random noise.</p><p>Fuzzy hashing is designed to be able to identify images that are substantially similar. For example, we modified the image of dogs by first enhancing its color, then changing it to sepia, then adding a border and finally adding random noise.  The fuzzy hash of the new image is</p>
            <pre><code>00d9082d6e454a19a20b4e3034493278
614219b14838447213ad3409672e7d13
6e0e4a2033de545ce731664646284337
1ecd4038794a485d7c21233f547a7d2e
663e7c1c40363335</code></pre>
            <p>This looks quite different from the hash of the unchanged image above, but fuzzy hashes are compared by seeing how close they are to each other.</p><p>The largest possible distance between two images is about 5 million units. These two fuzzy hashes are just 4,913 units apart (the smaller the number, the more similar the images), indicating that they are substantially the same image.</p><p>Compare that with two unrelated photographs. The photograph below has a fuzzy hash of</p>
            <pre><code>011a0d0323102d048148c92a4773b60d
0d343c02120615010d1a47017d108b14
d36fff4561aebb2f088a891208134202
3e21ff5b594bff5eff5bff6c2bc9ff77
1755ff511d14ff5b</code></pre>
            
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59LaVGZVc82yuxKulGMcWM/7e8064c951ef89935735b82048761440/image-1.jpg" />
            
            </figure><p>The photograph below has a fuzzy hash of</p>
            <pre><code>062715154080356b8a52505955997751
9d221f4624000209034f1227438a8c6a
894e8b9d675a513873394a2f3d000722
781407ff475a36f9275160ff6f231eff
465a17f1224006ff</code></pre>
            
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67Np3uJW3La9k5dzGJ3tEC/b896e80908d3c9079bf9e3320724b040/D18OTNFaeIrwcpN2MmNhljU9mau-GND77Cu9qV8lWo8Na3ciZlQ7pTg-wP9bqDK4ILo5-6yCM96uVlvKkDnrxbOCK3-XhlAQx-ln7AJp-k6_5YClzo9jXvkCzRb.jpeg" />
            
            </figure><p>The distance between the two hashes is calculated as 713,061. Through experimentation, it's possible to set a distance threshold under which you can consider two photographs to be likely related.</p>
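<p>PhotoDNA's internals are deliberately not public, but the distance-based comparison described above can be sketched with a toy “average hash” over downscaled grayscale pixels. This is illustrative only, and far simpler than any production system:</p>

```python
# Toy "average hash": one bit per pixel, set when the pixel is brighter than
# the image's mean. Similar images yield hashes with a small Hamming distance.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (e.g. a downscaled thumbnail)."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def distance(h1, h2):
    """Hamming distance: count of bits that differ between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

image     = [[10, 200, 30], [220, 40, 250], [15, 180, 25]]
tweaked   = [[12, 198, 30], [220, 42, 250], [15, 180, 27]]  # tiny pixel edits
unrelated = [[255, 5, 255], [5, 255, 5], [255, 5, 255]]

print(distance(average_hash(image), average_hash(tweaked)))    # small
print(distance(average_hash(image), average_hash(unrelated)))  # large
```

<p>A match is then declared when the distance falls below an experimentally chosen threshold, exactly as with the 4,913 vs. 713,061 comparison above.</p>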
    <div>
      <h3>Fuzzy Hashing's Intentionally Black Box</h3>
      <a href="#fuzzy-hashings-intentionally-black-box">
        
      </a>
    </div>
    <p>How does it work? While much work on fuzzy hashing has been published, the innards of the process are intentionally a bit of a mystery. The New York Times recently <a href="https://www.nytimes.com/interactive/2019/11/09/us/internet-child-sex-abuse.html">wrote a story</a> that was probably the most public discussion of how such technology works. The challenge is that if criminal producers and distributors of CSAM knew exactly how such tools worked, they might be able to alter their images in ways that defeat detection. To be clear, Cloudflare will be running the CSAM Scanning Tool on behalf of the website operator from within our secure points of presence. We will not be distributing the software directly to users. We will remain vigilant for potential attempted abuse of the platform, and will take prompt action as necessary.</p>
    <div>
      <h3>Tradeoff Between False Negatives and False Positives</h3>
      <a href="#tradeoff-between-false-negatives-and-false-positives">
        
      </a>
    </div>
    <p>We have been working with a number of authorities on how we can best roll out this functionality to our customers. One of the challenges for a network with as diverse a set of customers as Cloudflare's is what the appropriate threshold should be for the comparison distance between the fuzzy hashes.</p><p>If the threshold is too strict — meaning that it's closer to a traditional hash and two images need to be virtually identical to trigger a match — then you're more likely to have many false negatives (i.e., CSAM that isn't flagged). If the threshold is too loose, then it's possible to have many false positives. False positives may seem like the lesser evil, but there are legitimate concerns that increasing the possibility of false positives at scale could waste limited resources and further <a href="https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html">overwhelm the existing ecosystem</a>. We will work to iterate the CSAM Scanning Tool to provide more granular control to the website owner while supporting the ongoing effectiveness of the ecosystem. Today, we believe we can offer a good first set of options for our customers that will allow us to more quickly flag CSAM without overwhelming the resources of the ecosystem.</p>
    <div>
      <h3>Different Thresholds for Different Customers</h3>
      <a href="#different-thresholds-for-different-customers">
        
      </a>
    </div>
    <p>The same desire for a granular approach was reflected in our conversations with our customers. When we asked what was appropriate for them, the answer varied radically based on the type of business, how sophisticated its existing abuse process was, and its likely exposure level and tolerance for the risk of CSAM being posted on their site.</p><p>For instance, a mature social network using Cloudflare with a sophisticated abuse team may want the threshold set quite loose, but not want the material to be automatically blocked because they have the resources to manually review whatever is flagged.</p><p>A new startup dedicated to providing a forum to new parents may want the threshold set quite loose and want any hits automatically blocked because they haven't yet built a sophisticated abuse team and the risk to their brand is so high if CSAM material is posted -- even if that will result in some false positives.</p><p>A commercial financial institution may want to set the threshold quite strict because they're less likely to have user generated content and would have a low tolerance for false positives, but then automatically block anything that's detected because if somehow their systems are compromised to host known CSAM they want to stop it immediately.</p>
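<p>These customer profiles suggest a simple two-knob policy: a match-distance threshold plus an auto-block flag. A hypothetical sketch (names and threshold values are invented for illustration, not Cloudflare's actual configuration):</p>

```python
# Hypothetical per-customer scanning policy; thresholds are invented for
# illustration and do not reflect Cloudflare's real settings.
from dataclasses import dataclass

@dataclass
class ScanPolicy:
    match_threshold: int  # max hash distance still treated as a match
    auto_block: bool      # block automatically, or only notify for review

social_network = ScanPolicy(match_threshold=50_000, auto_block=False)  # loose, manual review
startup_forum  = ScanPolicy(match_threshold=50_000, auto_block=True)   # loose, auto-block
bank           = ScanPolicy(match_threshold=5_000,  auto_block=True)   # strict, auto-block

def handle_match(policy: ScanPolicy, dist: int) -> str:
    if dist > policy.match_threshold:
        return "no action"
    return "block" if policy.auto_block else "flag for review"

print(handle_match(bank, 4_913))            # block
print(handle_match(social_network, 4_913))  # flag for review
```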
    <div>
      <h3>Different Requirements for Different Jurisdictions</h3>
      <a href="#different-requirements-for-different-jurisdictions">
        
      </a>
    </div>
    <p>There also may be challenges based on where our customers are located and the laws and regulations that apply to them. Depending on where a customer's business is located and where they have users, they may choose to use one, more than one, or all the different available hash lists.</p><p>In other words, one size does not fit all and, ideally, we believe allowing individual site owners to set the parameters that make the most sense for their particular site will result in lower false negative rates (i.e., more CSAM being flagged) than if we try to set one global standard for every one of our customers.</p>
    <div>
      <h3>Improving the Tool Over Time</h3>
      <a href="#improving-the-tool-over-time">
        
      </a>
    </div>
    <p>Over time, we are hopeful that we can improve CSAM screening for our customers. We expect that we will add additional lists of hashes from numerous global agencies for our customers with users around the world to subscribe to. We're committed to enabling this flexibility without overly burdening the ecosystem that is set up to fight this horrible crime.</p><p>Finally, we believe there may be an opportunity to help build the next generation of fuzzy hashing. For example, the software can only scan images that are at rest in memory on a machine, not those that are streaming. We're talking with Hany Farid, the former Dartmouth professor who now teaches at UC Berkeley, about ways that we may be able to build a more flexible fuzzy hashing system in order to flag images before they're even posted.</p>
    <div>
      <h3>Concerns and Responsibility</h3>
      <a href="#concerns-and-responsibility">
        
      </a>
    </div>
    <p>One question we asked ourselves back when we began to consider offering CSAM scanning was whether we were the right place to be doing this at all. We share the universal concern about the distribution of depictions of horrific crimes against children, and believe it should have no place on the Internet. However, Cloudflare is a network infrastructure provider, not a content platform.</p><p>But we thought there was an appropriate role for us to play in this space. Fundamentally, Cloudflare delivers tools to our more than 2 million customers that were previously reserved for only the Internet giants. Without us, the security, performance, and reliability services that we offer, often for free, would have been extremely expensive or limited to Internet giants like Facebook and Google.</p><p>Today there are startups that are working to build the next Internet giant and compete with Facebook and Google because they can use Cloudflare to be secure, fast, and reliable online. But, as the regulatory hurdles around dealing with incredibly difficult issues like CSAM continue to increase, many of them lack access to sophisticated tools to scan proactively for CSAM. You have to get big to get into the club that gives you access to these tools, and, concerningly, being in the club is increasingly a prerequisite to getting big.</p><p>If we want more competition for the Internet giants, we need to make these tools available more broadly and to smaller organizations. From that perspective, we think it makes perfect sense for us to help democratize this powerful tool in the fight against CSAM.</p><p>We hope this will help enable our customers to build more sophisticated content moderation teams appropriate for their own communities and will allow them to scale in a responsible way to compete with the Internet giants of today. 
That is directly aligned with our mission of helping build a better Internet, and it's why we're announcing that we will be making this service available for free for all our customers.</p> ]]></content:encoded>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Abuse]]></category>
            <guid isPermaLink="false">54YgxLpRSXSe6u6jDEfW2r</guid>
            <dc:creator>Justin Paine</dc:creator>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s Response to CSAM Online]]></title>
            <link>https://blog.cloudflare.com/cloudflares-response-to-csam-online/</link>
            <pubDate>Fri, 06 Dec 2019 14:06:00 GMT</pubDate>
            <description><![CDATA[ Responding to incidents of child sexual abuse material (CSAM) online has been a priority at Cloudflare from the beginning. The stories of CSAM victims are tragic, and bring to light an appalling corner of the Internet.  ]]></description>
            <content:encoded><![CDATA[ <p>Responding to incidents of child sexual abuse material (CSAM) online has been a priority at Cloudflare from the beginning. The stories of CSAM victims are tragic, and bring to light an appalling corner of the Internet. When it comes to CSAM, our position is simple: We don’t tolerate it. We abhor it. It’s a crime, and we do what we can to support the processes to identify and remove that content.</p><p>In 2010, within months of Cloudflare’s launch, we connected with the <a href="http://www.missingkids.com/">National Center for Missing and Exploited Children</a> (NCMEC) and started a collaborative process to understand our role and how we could cooperate with them. Over the years, we have been in regular communication with a number of government and advocacy groups to determine what Cloudflare should and can do to respond to reports about CSAM that we receive through our abuse process, or how we can provide information supporting investigations of websites using Cloudflare’s services.</p><p>Recently, <a href="https://twitter.com/mhkeller/status/1196818679683530752">36 tech companies</a>, including Cloudflare, received <a href="https://storage.googleapis.com/blog-cloudflare-com-assets/2019/12/senatorletter.pdf">this letter</a> from a group of U.S. Senators asking for more information about how we handle CSAM content. The Senators referred to influential New York Times stories published in late September and early November that conveyed the disturbing number of images of child sexual abuse on the Internet, with graphic detail about the horrific photos and how the recirculation of imagery retraumatizes the victims. 
The stories focused on shortcomings and challenges in bringing violators to justice, as well as efforts, or lack thereof, by a group of tech companies including Amazon, Facebook, Google, Microsoft, and Dropbox, to eradicate as much of this material as possible through existing processes or new tools like PhotoDNA that could proactively identify CSAM material.  </p><p>We think it is important to share our response to the Senators (copied at the end of this blog post), talk publicly about what we’ve done in this space, and address what else we believe can be done.</p>
    <div>
      <h2>How Cloudflare Responds to CSAM</h2>
      <a href="#how-cloudflare-responds-to-csam">
        
      </a>
    </div>
    <p>From our work with NCMEC, we know that they are focused on doing everything they can to validate the legitimacy of CSAM reports and then work as quickly as possible to have website operators, platform moderators, or website hosts remove that content from the Internet. Even though Cloudflare is not in a position to remove content from the Internet for users of our core services, we have worked continually over the years to understand the best ways we can contribute to these efforts.</p>
    <div>
      <h3>Addressing Reports</h3>
      <a href="#addressing-reports">
        
      </a>
    </div>
    <p>The first prong of Cloudflare’s response to CSAM is proper reporting of any allegation we receive. Every report we receive about content on a website using Cloudflare’s services filed under the “child pornography” category on our <a href="https://www.cloudflare.com/abuse/">abuse report page</a> leads to three actions:</p><ol><li><p>We forward the report to NCMEC. In addition to the content of the report made to Cloudflare, we provide NCMEC with information identifying the hosting provider of the website, contact information for that hosting provider, and the origin IP address where the content at issue can be located.</p></li><li><p>We forward the report to both the website operator and hosting provider so they can take steps to remove the content, and we provide the origin IP of where the content is located on the system so they can locate the content quickly. (Since 2017, we have given reporting parties the opportunity to file an anonymous report if they would prefer that either the host or the website operator not be informed of their identity).</p></li><li><p>We provide anyone who makes a report information about the identity of the hosting provider and contact information for the hosting provider in case they want to follow up directly.</p></li></ol><p>Since our founding, Cloudflare has forwarded 5,208 reports to NCMEC. Over the last three years, we have provided 1,111 reports in 2019 (to date), 1,417 in 2018, and 627 in 2017.  </p><p>Reports filed under the “child pornography” category account for about 0.2% of the abuse complaints Cloudflare receives. These reports are treated as the highest priority for our Trust &amp; Safety team and they are moved to the front of the abuse response queue. We are generally able to respond by filing the report with NCMEC and providing the additional information within a matter of minutes regardless of time of day or day of the week.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5gldDA1mBLXQkHbABJIv9e/71dbbefd701e70d89f2a98630ee47f6d/form-fill-report_2x.png" />
            
            </figure>
    <div>
      <h3>Requests for Information</h3>
      <a href="#requests-for-information">
        
      </a>
    </div>
    <p>The second main prong of our response to CSAM is operation of our “trusted  reporter” program to provide relevant information to support the investigations of nearly 60 child safety organizations around the world. The "trusted reporter" program was established in response to our ongoing work with these organizations and their requests for both information about the hosting provider of the websites at issue as well as information about the origin IP address of the content at issue. Origin IP information, which is generally sensitive security information because it would allow hackers to circumvent certain security protections for a website, like DDoS protections, is provided to these organizations through dedicated channels on an expedited basis.</p><p>Like NCMEC, these organizations are responsible for investigating reports of CSAM on websites or hosting providers operated out of their local jurisdictions, and they seek the resources to identify and contact those parties as quickly as possible to have them remove the content. Participants in the “trusted reporter” program include groups like the <a href="https://www.iwf.org.uk/">Internet Watch Foundation</a> (IWF), the <a href="https://www.inhope.org/">INHOPE Association</a>, the <a href="https://www.esafety.gov.au/">Australian eSafety Commission</a>, and <a href="https://www.meldpunt-kinderporno.nl/">Meldpunt</a>. Over the past five years, we have responded to more than 13,000 IWF requests, and more than 5,000 requests from Meldpunt. We respond to such requests on the same day, and usually within a couple of hours. In a similar way, Cloudflare also receives and responds to law enforcement requests for information as part of investigations related to CSAM or exploitation of a minor.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7waacDQO5qvlvRoG95aGeP/5460dea950b94dfde9ebab57e74f5f50/trusted-reporter_2x.png" />
            
            </figure><p>Among this group, the Canadian Centre for Child Protection has been engaged in a unique effort that is worthy of specific mention. The Centre’s <a href="https://www.cybertip.ca/app/en/">Cybertip</a> program operates its Project Arachnid initiative, a novel approach that employs an automated web crawler that proactively searches the Internet to identify images that match a known CSAM hash, and then alerts hosting providers when there is a match. Based on our ongoing work with Project Arachnid, we have responded to more than 86,000 reports by providing information about the hosting provider and the origin IP address, which we understand they use to contact that hosting provider directly with that report and any subsequent reports.</p><p>Although we typically process these reports within a matter of hours, we’ve heard from participants in our “trusted reporter” program that the non-instantaneous response from us causes friction in their systems. They want to be able to query our systems directly to get the hosting provider and origin IP information, or better yet, to build extensions on their automated systems that could interface with the data in our systems to remove any delay whatsoever. This is particularly relevant for participants in the Canadian Centre’s Project Arachnid, who want to make our information a part of their automated system. After scoping out this solution, we’re now confident that we have a way forward, and we informed some trusted reporters in November that we will be making available an API that will allow them to obtain instantaneous information in response to requests made pursuant to their investigations. We expect this functionality to be online in the first quarter of 2020.</p>
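The planned lookup described above can be pictured as a simple query-and-response shape. The sketch below is purely illustrative: the record structure, field names, and sample data are all invented here, since the actual interface of the API has not been published.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReporterResponse:
    """Hypothetical shape of an answer to a trusted-reporter query."""
    domain: str
    hosting_provider: str
    origin_ip: str

# Invented example records standing in for internal data.
_RECORDS = {
    "reported-site.example": ("Example Hosting Inc.", "203.0.113.7"),
}

def lookup(domain: str) -> Optional[ReporterResponse]:
    """Return hosting details for a reported domain, or None if unknown."""
    record = _RECORDS.get(domain)
    if record is None:
        return None
    provider, origin_ip = record
    return ReporterResponse(domain, provider, origin_ip)
```

The point of the instantaneous version is that a crawler like Project Arachnid could call such a lookup inline, instead of filing a report and waiting hours for a human-mediated reply.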
    <div>
      <h3>Termination of Services</h3>
      <a href="#termination-of-services">
        
      </a>
    </div>
    <p>Cloudflare takes steps in appropriate circumstances to terminate its services to a site when it becomes clear that the site is dedicated to sharing CSAM or if the operators of the website and its host fail to take appropriate steps to take down CSAM content. In most circumstances, CSAM reports involve individual images that are posted on user-generated content sites and are removed quickly by responsible website operators or hosting providers. In other circumstances, when operators or hosts fail to take action, Cloudflare is unable on its own to delete or remove the content, but will take steps to terminate services to the website. We follow up on reports from NCMEC or other organizations when they report to us that they have completed their initial investigation and confirmed the legitimacy of the complaint, but have not been able to have the website operator or host take down the content. We also work with Interpol to identify and discontinue services to sites that it has determined have not taken steps to address CSAM.</p><p>Based upon these determinations and interactions, we have terminated service to 5,428 domains over the past 8 years.</p><p>In addition, Cloudflare has introduced new products, including Cloudflare Stream and Cloudflare Workers, where we do serve as the host of content and would be in a position to remove it from the Internet. Although these products have limited adoption to date, we expect their utilization will increase significantly over the next few years. Therefore, we will be conducting scans of the content that we host for users of these products using PhotoDNA (or similar tools) that make use of NCMEC’s image hash list. If content is flagged, we will remove it immediately. We are working on that functionality now, and expect it will be in place in the first half of 2020.</p>
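The scanning step described above reduces to matching each stored object's fingerprint against a known hash list. A minimal sketch of that flow follows; note that PhotoDNA is a proprietary perceptual hash designed to survive resizing and re-encoding, so the ordinary SHA-256 below is only a stand-in to show the shape of the check, not the real algorithm.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Stand-in for a perceptual hash such as PhotoDNA.
    return hashlib.sha256(content).hexdigest()

def flagged(content: bytes, known_hashes: set) -> bool:
    """True if the content's fingerprint appears on the known hash list."""
    return fingerprint(content) in known_hashes
```

A real deployment would run this check against every object stored for a hosting product and remove anything flagged; a perceptual hash also catches slightly altered copies, which an exact cryptographic hash cannot.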
    <div>
      <h2>Part of an Organized Approach to Addressing CSAM</h2>
      <a href="#part-of-an-organized-approach-to-addressing-csam">
        
      </a>
    </div>
    <p>Cloudflare’s approach to addressing CSAM operates within a comprehensive legal and policy backdrop. Congress and the law enforcement and child protection communities have long collaborated on how best to combat the exploitation of children. Recognizing the importance of combating the online spread of CSAM, NCMEC first created the <a href="http://www.missingkids.org/gethelpnow/cybertipline">CyberTipline</a> in 1998 to provide a centralized reporting system for members of the public and online providers to report the exploitation of children online.</p><p>In 2006, Congress conducted a year-long <a href="https://www.govinfo.gov/content/pkg/CPRT-109HPRT31737/html/CPRT-109HPRT31737.htm">investigation</a> and then passed a number of laws to address the sexual abuse of children. Those laws attempted to calibrate the various interests at stake and coordinate the ways various parties should respond. The policy balance Congress struck on addressing CSAM on the Internet had a number of elements for online service providers.</p><p>First, Congress formalized NCMEC’s role as the central clearinghouse for reporting and investigation, through the CyberTipline. The law adds a <a href="https://uscode.house.gov/view.xhtml?req=18+USC+2258A&amp;f=treesort&amp;fq=true&amp;num=3&amp;hl=true&amp;edition=prelim&amp;granuleId=USC-prelim-title18-section2258A">requirement</a>, backed up by fines, for online providers to forward to NCMEC any reports of CSAM they receive. The law specifically notes that, to preserve privacy, Congress was not creating a requirement to monitor content or affirmatively search or screen content to identify possible reports.</p><p>Second, Congress responded to the many stories of child victims who emphasized the continuous harm done by the transmission of imagery of their abuse. 
As described by <a href="http://www.missingkids.com/theissues/sexualabuseimagery">NCMEC</a>, “not only do these images and videos document victims’ exploitation and abuse, but when these files are shared across the internet, child victims suffer re-victimization each time the image of their sexual abuse is viewed” even when viewed for ostensibly legitimate investigative purposes. To help address this concern, the law <a href="https://uscode.house.gov/view.xhtml?hl=false&amp;edition=prelim&amp;path=&amp;req=granuleid%3AUSC-prelim-title18-section2258B&amp;f=treesort&amp;fq=true&amp;num=0&amp;saved=%7CMTggVVNDIDIyNThB%7CdHJlZXNvcnQ%3D%7CdHJ1ZQ%3D%3D%7C3%7Ctrue%7Cprelim">directs</a> providers to minimize the number of employees provided access to any visual depiction of child sexual abuse.  </p><p>Finally, to ensure that child safety and law enforcement organizations had the records necessary to conduct an investigation, the law <a href="https://uscode.house.gov/view.xhtml?hl=false&amp;edition=prelim&amp;path=&amp;req=granuleid%3AUSC-prelim-title18-section2258A&amp;f=treesort&amp;fq=true&amp;num=0&amp;saved=%7CMTggVVNDIDIyNThB%7CdHJlZXNvcnQ%3D%7CdHJ1ZQ%3D%3D%7C3%7Ctrue%7Cprelim">directs</a> providers to preserve not only the report to NCMEC, but also “any visual depictions, data, or other digital files that are reasonably accessible and may provide context or additional information about the reported material or person” for a period of 90 days.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NhDSyBzr2XtU8bm1NSCuL/8dd8e1d02dd9d3c3f04ebf088d25676e/stats-_2x.png" />
            
            </figure><p>Because Cloudflare’s services are used so extensively—by more than 20 million Internet properties, and based on <a href="https://w3techs.com/technologies/history_overview/proxy/all/q">data from W3Techs</a>, more than 10% of the world’s top 10 million websites—we have worked hard to understand these policy principles in order to respond appropriately in a broad variety of circumstances. The processes described in this blogpost were designed to make sure that we comply with these principles, as completely and quickly as possible, and take other steps to support the system’s underlying goals.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We are under no illusion that our work in this space is done. We will continue to work with groups that are dedicated to fighting this abhorrent crime and provide tools to more quickly get them information to take CSAM content down and investigate the criminals who create and distribute it.</p><p><a href="https://storage.googleapis.com/blog-cloudflare-com-assets/2019/12/cloudflareresponse.pdf"><b>Cloudflare's Senate Response (PDF)</b></a></p> ]]></content:encoded>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Community]]></category>
            <category><![CDATA[Trust & Safety]]></category>
            <guid isPermaLink="false">5Not63XGszOE0baXeqaEzN</guid>
            <dc:creator>Doug Kramer</dc:creator>
            <dc:creator>Justin Paine</dc:creator>
        </item>
        <item>
            <title><![CDATA[Out of the Clouds and into the weeds: Cloudflare’s approach to abuse in new products]]></title>
            <link>https://blog.cloudflare.com/out-of-the-clouds-and-into-the-weeds-cloudflares-approach-to-abuse-in-new-products/</link>
            <pubDate>Wed, 27 Feb 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ In a blogpost yesterday, we addressed the principles we rely upon when faced with numerous and various requests to address the content of websites that use our services.  ]]></description>
    <content:encoded><![CDATA[ <p>In a <a href="/unpacking-the-stack-and-addressing-complaints-about-content/">blogpost</a> yesterday, we addressed the principles we rely upon when faced with numerous and various requests to address the content of websites that use our services. We believe the building blocks that we provide for other people to share and access content online should be provided in a content-neutral way. We also believe that our users should understand the policies we have in place to address complaints and law enforcement requests, the type of requests we receive, and the way we respond to those requests. In this post, we do the dirty work of addressing how those principles are put into action, specifically with regard to Cloudflare’s expanding set of features and products.</p>
    <div>
      <h3>Abuse reports and new products</h3>
      <a href="#abuse-reports-and-new-products">
        
      </a>
    </div>
    <p>Currently, we receive abuse reports and law enforcement requests on fewer than one percent of the more than thirteen million domains that use Cloudflare’s network. Although the reports we receive run the gamut -- from phishing, malware or other technical abuses of our network to complaints about content -- the overwhelming majority are allegations of copyright violations or violations of other intellectual property rights. Most of the complaints that we receive do not identify concerns with particular Cloudflare services or products.</p><p>In the last year or so, we’ve also launched a variety of new products, including our video product (<a href="https://www.cloudflare.com/products/stream-delivery/">Cloudflare Stream</a>), a serverless edge computing platform (<a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a>), a <a href="https://www.cloudflare.com/products/registrar/">self-serve registrar service</a>, and a privacy-focused recursive resolver (<a href="https://1.1.1.1/">1.1.1.1</a>), among others. Each of these services raises its own complex set of questions.  </p><p>There is no one-size-fits-all solution to address possible abuse of our products. Different types of services come with different expectations, as well as different legal and contractual obligations. Yet as we discussed in relation to our focus on transparency on <a href="/cloudflare-transparency-update-joining-cloudflares-flock-of-warrant-canaries-2/">Monday</a>, being fully transparent means being consistent and predictable so our users can anticipate how we will respond to new situations.</p>
    <div>
      <h3>Developing an approach to abuse</h3>
      <a href="#developing-an-approach-to-abuse">
        
      </a>
    </div>
    <p>To help us sort through how to address both complaints and law enforcement requests, when we introduce new products or features, we ask ourselves four basic sets of questions about the relationship between the service we’re providing and potential complaints about content:</p><ul><li><p>First, how are Cloudflare’s services interacting with the website content? For example, are we doing anything more than providing security and acting as a reliable conduit from one location to another? Are we providing definitive storage of content? Did we provide the website its domain name through our registrar service? Is the Cloudflare service or product doing anything that could be seen as organizing, analyzing, or promoting content?</p></li><li><p>Second, what type of action might a law enforcement agency or a private complainant want us to take, and what are the consequences of it? What sort of information might law enforcement request -- private information about the user, content of what was sent over the Internet, or logs that would track activity? Will third parties request information about a website; would they request removal of content from the Internet? Would removing our services address the problem presented?</p></li><li><p>Third, what laws, regulations, or contractual requirements apply? Does the nature of our interaction with the online content impact our legal obligations? Has the law enforcement request or regulation satisfied basic principles of the rule of law or due process?</p></li><li><p>Fourth, will our response to the matter presented scale to address the variety of different requests or complaints we may receive over time, covering a variety of different subject matters and viewpoints? Can we craft a principled and content-neutral process to respond to the request? 
Would our response have an overbroad impact, either by impacting more than the problematic content or changing the Internet in jurisdictions beyond the one that has issued the law or regulation at issue?</p></li></ul><p>Although those preliminary questions help us determine what actions we must take, we also do our best to think about the broader implications on the Internet of any steps we might take to address complaints.</p>
    <div>
      <h2>So how does this work in practice?</h2>
      <a href="#so-how-does-this-work-in-practice">
        
      </a>
    </div>
    
    <div>
      <h3>Response to abuse complaints for customers using our proxy and CDN services</h3>
      <a href="#response-to-abuse-complaints-for-customers-using-our-proxy-and-cdn-services">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fYyp9YRicdb7b4tQSIBnS/6ae08708e364e32a5c907f04d1b2459c/image5.png" />
            
            </figure><p>People often come to Cloudflare with abuse complaints because our network sits in front of our customers’ sites in order to protect them from cyber attacks and to improve the performance of their website.</p><p>There aren’t a lot of laws or regulations that impose obligations to address content on those providing security or <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN services</a>, for good reason. Most people complaining about content are looking for someone who can take that content off the Internet entirely. As we’ve talked about on <a href="/thoughts-on-abuse/">other</a> <a href="/anonymity-and-abuse-reports/">occasions</a>, Cloudflare is unable to remove content that we don’t host, so we therefore try to make sure that the complaint gets to its intended audience -- the hosting provider who has the ability to remove the material from the Internet. As described on <a href="https://www.cloudflare.com/abuse/">our abuse page</a>,  complaining parties automatically receive information about how to contact the hosting provider, and unless the complaining party requests otherwise, abuse complaints are automatically forwarded to both the website owner and the hosting company to allow them to take action.</p><p>This approach has another benefit, consistent with the fourth set of questions we ask ourselves. It prevents addressing content with an unnecessarily blunt tool. Cloudflare is unable to remove its security and CDN services from only a sliver of problematic content on a website.  If we remove our services, it has to be from an entire domain or subdomain, which may cause considerable collateral damage. For example, think of the vast array of sites that allow individual independent users to upload content (“user generated content”). 
A website owner or host may be able to curate or deal with specific content, but if companies like Cloudflare had to respond to allegations of abuse by a single user’s upload of a single piece of concerning content by removing our core services from an entire site, and making it vulnerable to a cyberattack, those sites would be much more difficult to operate and the content contributed by all other users would be put at risk.</p><p>Similarly, there are a number of different infrastructure services that cooperate to make sure each connection on the Internet can happen successfully – DNS, <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a>, registries, security, etc.  If each of the providers of those services, any one of which could put the entire transmission at risk, is applying blunt tools to address content, then the aperture of what content will stay online will get smaller and smaller. Those are bad results for the Internet. Actions to address troubling content online should focus narrowly on the actual concern to avoid unintended collateral consequences.</p><p>While we are unable to remove content we do not host, we are able to take steps to address abuse of our services, such as phishing and malware attacks. Phishing attacks typically fall into two buckets -- a website that has been compromised (unintentional phishing) or a website solely dedicated to intentionally misleading others to gather information (intentional phishing). These buckets are treated differently.</p><p>We discussed earlier that we aim to use the most precise tools possible when addressing abuse, and we take a similar approach for unintentional phishing content. If a website has been compromised (typically an outdated CMS) we can place a warning interstitial page in front of that specific phishing content to protect users from accidentally falling victim to the attack. 
In the majority of situations, this action is taken at a URL level of granularity.</p><p>In the case of intentional phishing attacks, such as a domain like my-totally-secure-login-page{.}com, once our Trust &amp; Safety team confirms the presence of phishing content on the website, we take broader action, including a domain-wide interstitial warning page (effectively *my-totally-secure-login-page{.}com/*), and in some cases we may terminate our services to the intentionally malicious domain. To be clear, though, this does not remove the phishing content, which remains hosted by the website’s hosting provider. Ultimately, action still needs to be taken by the website owner or hosting provider to fully remove the underlying issue.</p>
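The two levels of granularity described above can be sketched as a simple decision: exact-URL matches for compromised sites, hostname matches for domains dedicated to phishing. The lists and domains below are invented examples, not actual blocklist data or production logic.

```python
from urllib.parse import urlparse

# Invented example entries: one exact phishing URL on a compromised site,
# and one domain dedicated entirely to phishing.
URL_LEVEL = {"https://hacked-shop.example/old-cms/login.html"}
DOMAIN_LEVEL = {"my-totally-secure-login-page.com"}

def show_interstitial(url: str) -> bool:
    """Decide whether a warning page should front this request."""
    host = urlparse(url).hostname or ""
    if host in DOMAIN_LEVEL:
        return True          # domain-wide: every path on the domain matches
    return url in URL_LEVEL  # otherwise only the exact reported URL matches
```

The URL-level branch is what keeps the rest of a compromised site reachable while the single bad page carries a warning.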
    <div>
      <h3>Response to complaints about content stored definitively on our network</h3>
      <a href="#response-to-complaints-about-content-stored-definitively-on-our-network">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Mz81IWy2rQJhZgHnVwXJ9/df8e0f2ec7ca2a0d1240131009164bbc/image4.png" />
            
            </figure><p>We think our approach requires a different set of responses for the small, but growing, number of Cloudflare products that include some sort of storage. Cloudflare Stream, for example, allows users to store, transcode, distribute and playback their videos. And Cloudflare Workers may allow users to store certain content at the edge of our network without a core host server. Although we are not a website hosting provider, these products mean we may be the only place where a certain piece of content is stored in some cases.  </p><p>When we are the definitive repository for content through any of our services, Cloudflare will carefully review any complaints about that content and may disable access to it in response to a valid legal takedown request from either government or private actors. Most often, these legal takedown requests are from individuals alleging copyright infringement.  Under the U.S. Digital Millennium Copyright Act, there is a specific process online storage providers follow to remove or disable access to content alleged to infringe copyright and provide an opportunity for those who post the material to contest that it is infringing. We have already begun implementing this process for content stored on our network.  That’s why we’ve begun a new section of our <a href="https://cloudflare.invisionapp.com/share/RUPOO3MPDKH#/screens">transparency report</a> on requests for content takedown pursuant to U.S. copyright law for content that is stored on our network.  </p><p>We haven’t received any government requests yet to take down content stored on our network. Given the significant potential impact on freedom of expression from a government ordering that content be removed, if we do receive those requests in the future, we will carefully analyze the factual basis and legal authority for the request.  
If we determine that the order is valid and requires Cloudflare action, we will do our best to address the request as narrowly as possible, for example, by clarifying overbroad requests or limiting blocking of access to the content to those areas where it violates local law, a practice known as “geo-blocking”. We will also update our transparency report on any government requests that we receive in the future and any actions we take.</p>
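Geo-blocking as described above reduces to a per-jurisdiction check at request time. Here is a minimal sketch under invented data; a real system would first map the client IP to a country, a step omitted here.

```python
def is_served(content_id: str, request_country: str, blocking_orders: dict) -> bool:
    """Serve content unless a legal order blocks it in the requester's country."""
    blocked_countries = blocking_orders.get(content_id, set())
    return request_country not in blocked_countries
```

With, say, blocking_orders = {"video-123": {"XX"}} (a hypothetical order covering one country), the content stays reachable everywhere except the jurisdiction named in the order, which is the narrow response the text describes.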
    <div>
      <h3>Response to complaints about our registrar service</h3>
      <a href="#response-to-complaints-about-our-registrar-service">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FxcoT7686OkzBPJTPM7tN/ed90c776932edafbc6b95d59377d1703/registrar.png" />
            
            </figure><p>If you sign up for our self-serve registrar service, you’re legally bound by the terms of our contract with the Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit organization responsible for coordinating unique Internet identifiers across the world, as well as our contract with the relevant domain name registry.  </p><p>Our registrar-focused <a href="https://www.cloudflare.com/products/registrar/abuse/">web page</a> for abuse reporting does not reference abuse complaints about a website’s content.  In our role as a domain registrar, Cloudflare has no control or ability to remove particular content from a domain. We would be limited to simply revoking or suspending the domain registration altogether which would remove the website owner’s control over the <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a>. Such actions would typically only be done at the direction of the relevant domain name registry, in accordance with their registration rules associated with the <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">Top Level Domain</a>, or more usually to address incidents of abuse as raised by the registry or ICANN. We therefore treat content-related complaints submitted based on our registrar services the same way we treat complaints about content for sites using our CDN or proxy services.  We forward them to the website owner and the website hosting company to allow them to take action or we work in tandem with the relevant registry and at their direction.</p><p>Running a registrar service comes with other legal obligations. As an ICANN accredited registrar, part of our contractual obligations include adhering to third party dispute resolution processes regarding trademark disputes, as handled by providers such as the World Intellectual Property Organization (WIPO) and the National Arbitration  Forum. 
Also, we continue to be part of the ICANN community discussions on how best to handle the collection, publication and provision of access to personal data in the WHOIS database in a manner consistent with the EU’s General Data Protection Regulation (GDPR) and other privacy frameworks. We will provide more updates on that front when the discussions have ripened.</p>
    <div>
      <h3>Response to complaints about IPFS</h3>
      <a href="#response-to-complaints-about-ipfs">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5T3SHdqfJMZSvtb0C4LBbo/84cd4798a1cb309eeae75972d2a3ca8e/ipfs.png" />
            
            </figure><p>Back in September, we <a href="/distributed-web-gateway/">announced</a> that Cloudflare would be providing a gateway to the InterPlanetary File System (IPFS). Cloudflare’s IPFS gateway is a way to access content stored on the IPFS peer-to-peer network. Because Cloudflare is not acting as the definitive storage for the IPFS network, we do not have the ability to remove content from that network. We simply operate as a cache in front of IPFS, much as we do for our more traditional customers.</p><p>Because content is stored on potentially dozens of nodes in IPFS, if one node that was caching content goes down, the network will just look for the same content on another node. That fact makes IPFS exceptionally resilient. That same resilience, however, means that unlike with our traditional customers, with IPFS, there is no single host to inform of a complaint about content stored on the IPFS network.  Cloudflare often has no knowledge of who the owner is of content being accessed through the gateway, and this makes it impossible to notify the specific owner when we receive a complaint.</p><p>The law hasn’t yet quite caught up with distributed networks like IPFS, and there’s a notable debate among IPFS users about how best to deal with abuse. Some argue that having problematic content stored on IPFS will discourage adoption of the protocol, and advocate for the development of lists of problematic hashes that  IPFS gateways could choose to block. Others point out that any mechanism intended to block IPFS content will itself be subject to abuse. We don’t have the answer to that debate, but it does demonstrate to us the importance of being thoughtful about how we proceed.</p><p>For the time being, our plan is to respond to U.S. court orders that require us to clear our cache of content stored on IPFS. 
More importantly, however, we intend to report in future transparency reports on any law enforcement requests we receive to clear our IPFS cache, to ensure continued public discussion.</p>
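The hash-list idea debated above works because IPFS is content-addressed: an object's identifier (its CID) is derived from the bytes themselves, so identical content resolves to the same identifier on every node. The sketch below uses a bare SHA-256 digest in place of a real multihash-encoded CID, purely to illustrate the mechanism.

```python
import hashlib

def content_id(data: bytes) -> str:
    # Stand-in for a real IPFS CID, which wraps the digest in a multihash.
    return hashlib.sha256(data).hexdigest()

def gateway_serves(data: bytes, cleared: set) -> bool:
    """A gateway declines content whose identifier is on its cache-clear list."""
    return content_id(data) not in cleared
```

Because the identifier follows the content, clearing one hash affects exactly one object on this gateway, while that object may remain reachable through other gateways and nodes, which is what makes the network resilient and the blocklist debate unresolved.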
    <div>
      <h3>Cloudflare Resolvers: 1.1.1.1 and Resolver for Firefox</h3>
      <a href="#cloudflare-resolvers-1-1-1-1-and-resolver-for-firefox">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/atuUDCyhmzyh4RqbtOd6U/76647f964b85043f8d1296e5dd038dfd/1111-1.gif" />
            
            </figure><p>In April of last year, we <a href="/announcing-1111/">launched</a> our first DNS resolver, 1.1.1.1. In June, we partnered with Mozilla to provide direct DNS resolution from within the Firefox browser using the Cloudflare Resolver for Firefox. Our goal with both resolvers was to develop fast DNS services that were focused on user privacy.</p><p>We often get questions about how we deal with both abuse complaints and law enforcement requests related to our resolvers. Both of our resolvers are intended to provide only direct DNS resolution. In other words, Cloudflare does not block or filter content through either 1.1.1.1 or the Cloudflare Resolver for Firefox. If Cloudflare were to receive a request from a law enforcement or government agency to block access to domains or content through one of our resolvers, Cloudflare would fight that request. At this point, we have not yet received any government requests to block content through our resolvers. Cloudflare would also document any request to block content from our resolvers in our semi-annual transparency report, unless we were legally prohibited from doing so.</p><p>Similarly, Cloudflare has not received any government requests for data about the users of our resolvers, and would fight such a request if necessary. Given our public commitment not to retain any personally identifiable information for more than 24 hours, we believe it is unlikely that we would have any information even if asked. Nonetheless, if we were to receive a government request for data about a resolver user, we would document the request in our transparency report, unless legally prohibited from doing so.</p>
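For context on how little data a resolver query actually carries, here is the wire format of a plain DNS question as defined in RFC 1035, built by hand (constructed only, never sent). A query names a domain and a record type and nothing more; this is the data a resolver could log, and which the 24-hour retention commitment above covers.

```python
import struct

def build_query(name: str, query_id: int = 0x1234) -> bytes:
    """Build a standard DNS query for an A record, per RFC 1035."""
    header = struct.pack(">HHHHHH",
                         query_id,    # transaction ID
                         0x0100,      # flags: standard query, recursion desired
                         1, 0, 0, 0)  # one question, no other records
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
```

A query for example.com comes to 29 bytes: a 12-byte header, the encoded name, and four bytes of type and class.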
    <div>
      <h3>The long road ahead</h3>
      <a href="#the-long-road-ahead">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/52nr5Co31KS2aVzil4x90h/c2d650f2d18ca8c78d0a13a9148a9603/road.png" />
            
            </figure><p>Although new products offered by Cloudflare in the future, as well as the legal and regulatory landscape, may change over the years, we expect that our approach to thinking about new products will stand the test of time. We’re guided by some central principles -- allowing our infrastructure to be as neutral as possible, following the rule of law or requiring due process, being open about what we’re doing, and making sure that we’re consistent regardless of the wide variety of issues we face. And we will work hard to make sure that doesn’t change, because even the smallest tweaks to the way we do things can have a significant impact at the scale we operate.</p> ]]></content:encoded>
            <category><![CDATA[Freedom of Speech]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Politics]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Due Process]]></category>
            <category><![CDATA[Community]]></category>
            <guid isPermaLink="false">3TokDJcXCygYPTjnifbwUM</guid>
            <dc:creator>Justin Paine</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare Transparency Update: Joining Cloudflare’s Flock of (Warrant) Canaries]]></title>
            <link>https://blog.cloudflare.com/cloudflare-transparency-update-joining-cloudflares-flock-of-warrant-canaries-2/</link>
            <pubDate>Mon, 25 Feb 2019 14:00:00 GMT</pubDate>
            <description><![CDATA[ Today, Cloudflare is releasing its transparency report for the second half of 2018. We have been publishing biannual Transparency Reports since 2013. ]]></description>
            <content:encoded><![CDATA[ <p>Today, Cloudflare is releasing its <a href="https://www.cloudflare.com/transparency/updates/">transparency report</a> for the second half of 2018. We have been <a href="https://www.cloudflare.com/transparency/">publishing</a> biannual Transparency Reports since 2013.</p><p>We believe an essential part of earning the trust of our customers is being transparent about our features and services, what we do – and do not do – with our users’ data, and generally how we conduct ourselves in our engagement with third parties such as law enforcement authorities.  We also think that an important part of being fully transparent is being rigorously consistent and anticipating future circumstances, so our users not only know how we have behaved in the past, but are able to anticipate with reasonable certainty how we will act in the future, even in difficult cases.</p><p>As part of that effort, we have set forth certain ‘warrant canaries’ – statements of things we have never done as a company. As described in greater detail below, the report published today adds three new ‘warrant canaries’, which is the first time we’ve added to that list since 2013. The transparency report also adds new reporting on requests for user information from foreign law enforcement, and on requests for user information that we receive from government agencies that are not part of law enforcement.</p><p>This is the first in a series of blog posts this week that will describe our process and the commitments we make in relation to the handling of user data and abuse queries, our interactions with law enforcement and the security communities, and our essential red lines when it comes to how we operate as a company. 
The specific updates will include:</p><ul><li><p>Monday: This blog post on the updated transparency report and new warrant canaries.</p></li><li><p>Tuesday: An updated discussion of how we address requests for content moderation.</p></li><li><p>Wednesday: How we plan to deal with abuse of new products.</p></li><li><p>Thursday: Dealing with requests from non-US law enforcement.</p></li></ul><p>This is an exciting time of growth for Cloudflare and we are only just getting started, so we do expect more complexity over the years. However, the fundamentals remain constant for us: transparency, due process, openness, integrity, and a commitment to improving the Internet for all. We are excited to share more with you this week!</p>
    <div>
      <h3>New Warrant Canaries</h3>
      <a href="#new-warrant-canaries">
        
      </a>
    </div>
    <p>From the beginning, and consistent with our mission of “helping build a better Internet,” Cloudflare has relied on a set of values that inform how we work with our customers, with law enforcement, and with other third parties. Maintaining the privacy and trust of our users and supporting a secure, well-functioning, and content-neutral Internet is essential to us.</p><p>It’s not enough for us to be transparent about the things we do willingly, because tech companies are pressured every day to take the easy way out, avoiding controversy or conflict by quietly doing seemingly small things that are corrosive to these values. So, for many years, we have published a list of “things we have never done” in our transparency report to demonstrate our commitment to these values.</p><p>The rationale behind including “warrant canaries” in our transparency report is twofold. On one hand, if Cloudflare is asked by law enforcement or a third party to act against one of the warrant canaries and not disclose it publicly, we will still have to remove it from our list. The removal of the warrant canary, like the silence of a canary in the coal mine, will signal to our customers that something is not right. On the other hand, these statements serve as a signal to groups that may ask us to take actions contravening our values that such actions are not so easy for us to take. We have said before and re-commit here: if Cloudflare were asked to take an action violating one of the warrant canaries, we would pursue legal remedies challenging the request in order to protect our customers from what we believe are improper, illegal, or unconstitutional requests.</p>
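The coal-mine mechanism lends itself to simple automated monitoring: a reader can keep their own copy of the canary statements and raise an alert if any of them disappears from a future report. A minimal sketch of that idea in Python, using illustrative placeholder wording rather than the exact published statements:

```python
# Sketch: detect a removed warrant canary by checking that each expected
# statement still appears in the published report text. The statement list
# and report text below are illustrative placeholders, not exact wording.

CANARIES = [
    "never turned over our encryption or authentication keys",
    "never installed any law enforcement software or equipment",
    "never terminated a customer or taken down content due to political pressure",
    "never provided any law enforcement organization a feed",
]

def missing_canaries(report_text, canaries=CANARIES):
    """Return the canary statements that no longer appear in the report."""
    text = report_text.lower()
    return [c for c in canaries if c.lower() not in text]

# Example: a report that silently dropped one statement.
report = """We have never turned over our encryption or authentication keys to anyone.
We have never installed any law enforcement software or equipment on our network.
We have never provided any law enforcement organization a feed of customer content."""

print(missing_canaries(report))  # -> the one silently dropped statement
```

The point of the sketch is that the reader, not the company, holds the reference list: a company can be compelled to stay silent, but it cannot easily be compelled to keep publishing a statement, which is exactly the signal the canary relies on.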
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2xOkIYGjQYv3DaGruYxMAS/17c2644547861ee34c7a4840c1514f68/canary-1.png" />
            
            </figure>
    <div>
      <h3>Why add new warrant canaries?</h3>
      <a href="#why-add-new-warrant-canaries">
        
      </a>
    </div>
    <p>We have not added warrant canaries since we put out our first transparency report in 2013. The original canaries are as follows:</p><ul><li><p>Cloudflare has never turned over our SSL keys or our customers' SSL keys to anyone.</p></li><li><p>Cloudflare has never installed any law enforcement software or equipment anywhere on our network.</p></li><li><p>Cloudflare has never terminated a customer or taken down content due to political pressure.</p></li><li><p>Cloudflare has never provided any law enforcement organization a feed of our customers' content transiting our network.</p></li></ul><p>So, why change that this year? Though the company develops new products each year, the addition of new types of services in 2018, notably Cloudflare Workers and DNS Resolver 1.1.1.1, expanded our capabilities in a way that we believe is worth addressing. Similarly, regulation of technology has been changing globally, and we feel it is pertinent to respond to these developments.</p><p>The new canaries, and the issues they are intended to address, are outlined below.  To be clear, we haven’t necessarily received law enforcement requests to do any of these things at this point.  We just want to make sure we lay out our commitments as clearly as possible before we get a request.</p>
    <div>
      <h3>The new canaries</h3>
      <a href="#the-new-canaries">
        
      </a>
    </div>
    <p><b>Cloudflare has never modified customer content at the request of law enforcement or another third party.</b></p><p>The Internet has come a long way since the early days when every visitor to a website saw precisely the same content. Cookies and other techniques allow developers to customize the user experience. In the last year and a half, Cloudflare launched Workers, which allows website developers to customize their websites using edge-side code. Using Workers, our customers can serve different versions of their website to different types of visitors or to visitors in different locations. Although the ability to alter the version of a website particular visitors see, or which application runs for different visitors, is a powerful new tool for our customers, we recognize that it also holds the potential for mischief and abuse. Governments or malicious actors could in theory use edge-side code to modify the content of a website, make changes only for particular viewers, or collect information about the visitors to a site.</p><p>We believe that only those who are empowered to change the site itself should be empowered to make changes by running code at the edge. We will therefore fight requests to make modifications, either by adding apps or modifying content, at the request of a third party without the customer’s consent.</p><p><b>Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.</b></p><p>The privacy and security of DNS Resolver 1.1.1.1 are very important to us, and were front of mind when designing the service, as described <a href="/announcing-1111/">here</a>. At Cloudflare we believe that part of helping to build a better Internet is to ensure that users are routed to the website they intend to visit.</p><p>DNS spoofing, or cache poisoning, exploits the functioning of DNS resolvers in order to route unsuspecting visitors incorrectly. 
If we think of DNS as the phonebook of the Internet, DNS spoofing is similar to someone taking the phonebooks from people’s doors and replacing them with fakes. In the fake copy, the attacker has changed ordinary people’s numbers to the numbers of phone scammers. When a user with one of the affected books looks up and calls the number of, say, a landscaping service, or even a friend, they end up dialing a scammer instead. In DNS spoofing, a person looking up an affected website would be directed to a fake website, or somewhere different entirely, rather than the intended destination.</p><p>We saw a concrete example of this type of DNS spoofing earlier this month. On February 10, 2019, Venezuelan opposition leader Juan Guaido asked Venezuelans to volunteer to help international humanitarian organizations deliver aid into the country. A day after this public announcement, however, a similarly named website was set up, and users in Venezuela trying to visit the original and official website were redirected -- using DNS spoofing -- to the fake website. The fake website had a form to register personal data, such as name, email address, and cell phone number.</p><p>According to <a href="https://motherboard.vice.com/en_us/article/d3mdxm/venezuela-government-hack-activists-phishing">Motherboard</a>:</p><blockquote><p>While studying the fake website, researchers found phishing sites hosted on the same IP address. 
And there’s evidence that the people behind the second, apparently fake and malicious, website were working for the <a href="https://www.nytimes.com/2019/01/23/world/americas/venezuela-protests-guaido-maduro.html"><b>government</b></a> of Maduro, according to security firm CrowdStrike and independent researchers.</p></blockquote><blockquote><p>“It’s clearly the work of the Venezuelan government trying to identify the people working against them, so that they can put a stop to it,” Adam Meyers, the vice president of intelligence at CrowdStrike, a firm that’s analyzed the attacks, told Motherboard in a phone call.</p></blockquote><p>This type of DNS spoofing can be done for any number of purposes, from gaining sensitive information to preventing access to websites with controversial content. Making a commitment not to modify the intended destination of DNS responses at the request of law enforcement or a third party is an affirmation of our desire to ensure the reliability of 1.1.1.1 and do our best to maintain confidence in the DNS and Internet infrastructure more generally.</p><p>Occasionally, law enforcement uses Cloudflare for domains they have seized from <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">domain registrars</a> using legal process. Because law enforcement has obtained legal control of the website in those circumstances (through seizure), that service does not involve modification of DNS responses.</p><p><b>Cloudflare has never weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.</b></p><p>We believe encryption is critical to a trustworthy and secure Internet. 
Encryption prevents the theft of private data, making it safer to bank, shop, and communicate online.</p><p>Because of the importance of encryption to the Internet ecosystem, we have a team constantly working on new ways to increase encryption on the Internet, whether that means providing <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificates for free</a> to all our users, <a href="/esni/">pioneering eSNI</a> or supporting <a href="/dns-resolver-1-1-1-1/">DNS over TLS and DNS over HTTPS</a> on 1.1.1.1.</p><p>Because encryption can complicate efforts to obtain access to digital evidence, however, law enforcement agencies have pushed for tools to gain access to encrypted material. These efforts range from the FBI’s attempt to get a court order to require Apple to assist them in obtaining encrypted data from an iPhone in February 2015, to Australia’s new Assistance and Access law, passed last fall. We’re concerned that these types of efforts will raise questions about the security of encryption products. As one Cloudflare employee put it after Australia’s law passed, “tech companies now have to do code reviews of everything coming out of Australia” to ensure there are no vulnerabilities.</p><p>We added the new commitment to prevent this uncertainty. Our intent is to continue focusing on ways to improve current encryption methods and deployment of these methods, not weaken them.</p><p><b>Cloudflare has never turned over our encryption or authentication keys or our customers' encryption or authentication keys to anyone.</b></p><p>This is a slight modification to a previous commitment.  
The wording previously referred to “SSL keys” rather than “encryption and authentication keys.” Given the deprecation of SSL, we wanted to be absolutely clear that we were referring to all encryption and authentication keys, not just those from a deprecated security protocol.</p><p>Our goal in modifying this canary is to provide additional security for our customers. We therefore believe it makes sense to distill the language to encompass the crux of what we will not do, which is provide our customers’ keys to third parties.</p> ]]></content:encoded>
            <category><![CDATA[Transparency]]></category>
            <category><![CDATA[Trust & Safety]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1fwUBKWTTfPKSqz9W3e3kR</guid>
            <dc:creator>Alissa Starzak</dc:creator>
            <dc:creator>Justin Paine</dc:creator>
            <dc:creator>Erin Walk</dc:creator>
        </item>
        <item>
            <title><![CDATA[DDoS Ransom: An Offer You Can Refuse]]></title>
            <link>https://blog.cloudflare.com/ddos-ransom-an-offer-you-can-refuse/</link>
            <pubDate>Mon, 06 Feb 2017 21:43:03 GMT</pubDate>
            <description><![CDATA[ Cloudflare has covered DDoS ransom groups in the past. First, we reported on the copycat group claiming to be the Armada Collective and then not too long afterwards, we covered the "new" Lizard Squad. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare has covered DDoS ransom groups several times in the past. First, we reported on the copycat group claiming to be the <a href="/empty-ddos-threats-meet-the-armada-collective/">Armada Collective</a> and then not too long afterwards, we covered the "new" <a href="/lizard-squad-ransom-threats-new-name-same-faux-armada-collective-m-o-2/">Lizard Squad</a>. While in both cases the groups made threats that were ultimately empty, these types of security events can send teams scrambling to determine the correct response. Teams in this situation can choose from three responses: pay the ransom and enable these groups to continue their operations, not pay and hope for the best, or prepare an action plan to get protected.</p>
    <div>
      <h3>Breaking the Ransom Cycle</h3>
      <a href="#breaking-the-ransom-cycle">
        
      </a>
    </div>
    <p>We can’t stress enough that you should never pay the ransom. We fully understand that in the moment when your website is being attacked it might seem like a reasonable solution, but by paying the ransom, you only perpetuate the DDoS ransom group’s activities and entice other would-be ransomers to start making similar threats. In fact, we have seen reports of victim organizations receiving multiple subsequent threats after they have paid the ransom. It would seem these groups are sharing lists of organizations that pay, and those organizations are more likely to be targeted again in the future. Victim organizations pay the ransom often enough that we see new “competitors” pop up every few months. As of a few weeks ago, a new group, intentionally left unnamed, has emerged and begun targeting financial institutions around the world. This group follows a similar modus operandi as previous groups, but with a significant twist.</p>
    <div>
      <h3>Mostly Bark and Little Bite</h3>
      <a href="#mostly-bark-and-little-bite">
        
      </a>
    </div>
    <p>The main difference between previous copycats and this new group is that this group actually sends a small demonstration attack before sending the ransom email to the typical role-based email accounts. The hope is to demonstrate to the target that the group will follow through with the ransom threat and convince them to pay the amount requested before the deadline passes. Unsurprisingly though, if the ransom amount is not paid before the deadline expires, the group does not launch a second attack.</p><p>When targeting an organization, the group sends two variations of a ransom email. The first variation is a standard threat:</p>
            <pre><code>Subject: ddos attack
 
Hi!
  
If you dont pay 8 bitcoin until 17. january your network will be hardly ddosed! Our attacks are super powerfull. And if you dont pay until 17.
january ddos attack will start and price to stop will double!

We are not kidding and we will do small demo now on [XXXXXXXX] to show we are serious.

Pay and you are safe from us forever.
 
OUR BITCOIN ADDRESS: [XXXXXXXX]
 
Dont reply, we will ignore! Pay and we will be notify you payed and you are safe.
 
Cheers!</code></pre>
            <p>Interestingly, the second email variation makes reference to "mirai" -- the IoT-based botnet that has been in the news recently as having contributed to many <a href="https://krebsonsecurity.com/2016/09/krebsonsecurity-hit-with-record-ddos/">significant</a> <a href="https://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/">attacks</a>. It is important to note -- while the second variation of ransom email references “mirai” there is no actual evidence that these demonstration attacks have anything to do with the Mirai botnet.</p>
            <pre><code>Subject: DDoS Attack on XXXXXXXX!
 
Hi!
 
If you dont pay 6 bitcoin in 24 hours your servers will be hardly ddosed!
 
Our attacks are super powerfull. And if you dont pay in 24 hours ddos attack will start and price to stop will double and keep go up!
 
IMPORTANT - You think you protected by CloudFlare but we pass CloudFlare and attack your servers directly.
 
We are not kidding and we will do small demo now to show we are serious.
 
We dont want to make damage now so we will run small attack on 2 not important your IPs - XXXXXXXX and XXXXXXXX. Just small UDP flood for 1 hour to prove us. But dont ignore our demand as we then launch heavy attack by Mirai on all your servers!!
 
Pay and you are safe from us forever.
 
OUR BITCOIN ADDRESS: [XXXXXXXX]
 
Dont reply, we will ignore! Pay and we will be notify you payed and you are safe.
 
Cheers!</code></pre>
            <p>While no two attacks are identical, the group’s demonstration attacks do generally follow a pattern. The attacks usually peak around 10 Gbps, last for less than an hour and use either DNS amplification or NTP reflection as the attack method. Without detailing specifics so as not to tip off the bad people, there are also specific characteristics about the demonstration attacks that support the theory that the attacks are using a booter/stresser type of service to carry out the attacks. Neither of these attack types is new, and Cloudflare successfully mitigates attacks that are substantially larger in volume many times a week.</p><p>While in this instance not paying the ransom doesn’t lead to a subsequent attack, this outcome isn’t guaranteed. Not only can your site possibly go down during the demonstration attack, but there is still nothing stopping either the original ransomer or a different attacker from launching a future attack. Regardless of an attacker’s true intent, taking no action is a suboptimal plan.</p>
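Some rough arithmetic helps explain why a booter/stresser service can reach a 10 Gbps demonstration attack: a reflection attack multiplies the attacker's own outbound bandwidth by the protocol's amplification factor, since a small spoofed request elicits a much larger response aimed at the victim. A back-of-the-envelope sketch in Python, using commonly cited approximate factors (e.g. from US-CERT's advisory on UDP-based amplification), not measurements from these specific attacks:

```python
# Sketch: estimated attack volume from a reflection/amplification attack.
# The factors below are commonly cited approximations, not measurements
# of the demonstration attacks described in the post.

AMPLIFICATION = {
    "dns": 50.0,   # open-resolver DNS amplification, roughly 28x-54x
    "ntp": 556.0,  # NTP "monlist" reflection, roughly 556x
}

def reflected_gbps(attacker_mbps, protocol):
    """Estimated flood volume in Gbps from a given outbound budget in Mbps."""
    return attacker_mbps * AMPLIFICATION[protocol] / 1000.0

# ~200 Mbps of spoofed DNS queries is enough for a ~10 Gbps flood...
print(reflected_gbps(200, "dns"))  # 10.0
# ...while the same budget reflected via NTP monlist would be far larger.
print(reflected_gbps(200, "ntp"))  # 111.2
```

This is why rented booter services, despite modest upstream capacity of their own, can plausibly produce the short ~10 Gbps bursts described above.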
    <div>
      <h4>Building an Action Plan</h4>
      <a href="#building-an-action-plan">
        
      </a>
    </div>
    <p>Scrambling to build an action plan while actively under attack is not only stressful, but this is often when avoidable mistakes happen. We recommend doing your research about what protection is right for you ahead of time. DDoS protection, as well as other application-level protections, doesn’t have to be a hassle to implement; with Cloudflare, it can be done in under an hour. Having a plan and implementing protection before a security event occurs can keep your site running smoothly. However, if you find yourself under attack and without an action plan, it’s important to remember that many of these groups are bluffing. Even when these groups are not bluffing, paying the ransom will only encourage them to continue their efforts. If you have received one of these emails, we encourage you to <a href="https://www.cloudflare.com/under-attack-hotline/">reach out</a> so that we can discuss the specifics of your situation, and whether or not the specific group in question is known to follow through with their threats.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">7HIGwtnbKGeg5Nfup35Nzq</guid>
            <dc:creator>Justin Paine</dc:creator>
        </item>
        <item>
            <title><![CDATA[Lizard Squad Ransom Threats: New Name, Same Faux Armada Collective M.O.]]></title>
            <link>https://blog.cloudflare.com/lizard-squad-ransom-threats-new-name-same-faux-armada-collective-m-o-2/</link>
            <pubDate>Fri, 29 Apr 2016 23:21:17 GMT</pubDate>
            <description><![CDATA[ CloudFlare recently wrote about the group of cyber criminals claiming to be the "Armada Collective." In that article, we stressed that this group had not followed through on any of the ransom threats they had made. ]]></description>
            <content:encoded><![CDATA[ <p>CloudFlare <a href="/empty-ddos-threats-meet-the-armada-collective/">recently wrote about</a> the group of cyber criminals claiming to be the "Armada Collective." In that article, we stressed that this group had not followed through on any of the ransom threats they had made. Quite simply, this copycat group of cyber criminals had not actually carried out a single DDoS attack—they were only trying to make easy money through fear by using the name of the original “Armada Collective” group from late 2015.</p><p>Since we published that article earlier this week, this copycat group claiming to be "Armada Collective" has stopped sending ransom threats to website owners. Extorting companies proves to be challenging when the group’s email actively encourages target companies to search for the phrase “Armada Collective” on Google. The first search result for this phrase now returns CloudFlare’s article outing this group as a fraud.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71TqcOf8YWebhY3xMVCRIm/f5507f1e043a0bd644b53af933c24869/armada-collective-google-search.png" />
            
            </figure><p>Beginning late Thursday evening (Pacific Standard Time) several CloudFlare customers began to receive threatening emails from a "new" group calling itself the “Lizard Squad”. These emails have a similar modus operandi to the previous ransom emails. This group was threatening DDoS attacks unless a ransom amount was paid to a Bitcoin address before a deadline. Based on discussions with other security vendors, we can confirm that at least 500 of these emails have been sent out by this group claiming to be the “Lizard Squad.”</p><p>Each of these emails is identical, including a Bitcoin address that has been re-used. As we discussed in our previous article, re-using the Bitcoin address means the group of cyber criminals has no way of identifying which company has paid their ransom. If this group were legitimate, you’d expect to see a unique Bitcoin address for each individual target company.</p><p>Included below is an example email from the "Lizard Squad" compared to the Armada Collective:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BUyTDyYsQrRItfvJIl67V/809136bcc9ebfb7691d256af90c6865d/lizard-squad-ransom-email-1.png" />
            
            </figure><p>While the emails have some differences, they are ultimately identical in their goal and how they go about attempting to extort money from the target companies. As with the group claiming to be the "Armada Collective", there is a general consensus within the security community that this group claiming to be the "Lizard Squad" is not actually the group it claims to be. This is another copycat.</p><p>Unsurprisingly, we haven’t seen any example of the "Lizard Squad" actually following through on their threats. CloudFlare will continue to monitor the situation, and we’ll provide an update if anything further develops.</p><p>CloudFlare would like to continue to stress the importance of not paying ransom if you receive a threat. Paying the ransom only emboldens these cyber criminals and provides them with funding to attack other companies. If you receive a threat, please <a href="https://www.cloudflare.com/under-attack-hotline/">reach out to CloudFlare</a>, and our team would be happy to discuss whether an attacker is known to carry through on their threats. While the threats made by these imposter groups are unlikely to result in an actual attack, we do encourage companies to use a service like CloudFlare to proactively protect their infrastructure against these types of attacks when there is a legitimate threat.</p> ]]></content:encoded>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[eCommerce]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">gxrpMeFicnRP3Ybu626lR</guid>
            <dc:creator>Justin Paine</dc:creator>
        </item>
    </channel>
</rss>