
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Fri, 10 Apr 2026 00:08:59 GMT</lastBuildDate>
        <item>
            <title><![CDATA[A simpler path to a safer Internet: an update to our CSAM scanning tool]]></title>
            <link>https://blog.cloudflare.com/a-simpler-path-to-a-safer-internet-an-update-to-our-csam-scanning-tool/</link>
            <pubDate>Wed, 24 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare has made it even easier to enable our free child safety tooling for all customers. ]]></description>
<content:encoded><![CDATA[ <p>Launching a website or an online community brings people together to create and share. The operators of these platforms, sadly, also have to navigate what happens when bad actors attempt to misuse them to spread the most heinous content, such as child sexual abuse material (CSAM).</p><p>We are committed to helping anyone on the Internet protect their platform from this kind of misuse. We <a href="https://blog.cloudflare.com/the-csam-scanning-tool/"><u>first launched</u></a> a CSAM Scanning Tool several years ago to give any website on the Internet the ability to programmatically scan content uploaded to their platform for instances of CSAM, in partnership with the National Center for Missing and Exploited Children (NCMEC), Interpol, and dozens of other organizations committed to protecting children. That release took technology that was only available to the largest social media platforms and provided it to any website.</p><p>However, the tool we offered still required setup work that added friction to its adoption. To file reports to NCMEC, our customers needed to create their own credentials and share them with us, a step that was too confusing or too much work for many small site owners. We did our best to help them with secondary reports, but we needed a method that made this seamless to encourage adoption.</p><p>Today’s announcement makes that process significantly easier for site owners, helping them contribute to keeping the Internet safer with even less manual effort. The tool no longer requires website operators to create and provide their own unique NCMEC credentials. The result is that we have seen monthly adoption of the tool increase by 1,600% since the introduction of this change in February.</p>
    <div>
      <h3>How does it work?</h3>
    </div>
<p>Services that attempt to flag and stop the spread of CSAM rely on partner organizations, like NCMEC, who maintain lists of hashes of known CSAM. These hashes are numerical representations of images: an algorithm creates a kind of digital fingerprint for each photo. Partners who operate these tools, like Cloudflare, check the hashes of submitted content against the lists maintained by organizations like NCMEC to see if there is a match. You can read about the operation in detail in our previous announcement <a href="https://blog.cloudflare.com/the-csam-scanning-tool/#finding-similar-images"><u>here</u></a>.</p><p>We rely on fuzzy hashing, a technique that goes beyond simple one-to-one matches. With a traditional hash, if a photo of CSAM is altered even slightly, by adding a filter, cropping it, or adding some noise, the fingerprint changes completely.</p><p>A fuzzy hash, on the other hand, creates a "perceptual fingerprint." Even if an image is modified, its fuzzy hash will remain similar to the original. This allows our tool to identify matches with a high degree of confidence, even if the abuser tries to disguise the content.</p><p>Removing the requirement to share a credential with Cloudflare eliminates one more step in deploying and enabling our tool, but site operators are still expected to continue filing their own reports with NCMEC or their regional equivalent.</p>
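<p>To make the fuzzy-matching idea concrete, here is a minimal, illustrative sketch of one simple perceptual-hashing scheme (an "average hash") compared by Hamming distance. This is a toy stand-in, not the vetted algorithm used for real CSAM scanning; the image data and the 5-bit threshold are assumptions chosen purely for illustration.</p>

```python
# Toy illustration of perceptual ("fuzzy") hashing: an "average hash" over an
# 8x8 grayscale image, compared by Hamming distance. NOT the real algorithm
# used by Cloudflare or NCMEC; data and threshold are illustrative only.

def average_hash(pixels):
    """64-bit fingerprint of an 8x8 grayscale image (64 values, 0-255).

    Each bit records whether the corresponding pixel is brighter than the
    image's mean brightness.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of bits on which two fingerprints disagree."""
    return bin(h1 ^ h2).count("1")

def is_match(candidate, known_hashes, threshold=5):
    """Near matches count: small edits flip only a few bits of the hash."""
    return any(hamming_distance(candidate, k) <= threshold for k in known_hashes)

# A toy "image" and a slightly edited copy (one pixel darkened a little).
original = [i * 4 for i in range(64)]
edited = list(original)
edited[32] -= 30

h_orig, h_edit = average_hash(original), average_hash(edited)
print(hamming_distance(h_orig, h_edit))   # 1: the fingerprints stay close
print(is_match(h_edit, {h_orig}))         # True
```

<p>A cryptographic hash such as SHA-256, by contrast, would produce completely unrelated digests for these two images, which is why simple one-to-one matching is easy to evade.</p>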
    <div>
      <h3>What is the process now?</h3>
    </div>
<p>The process for using the tool is now straightforward:</p><p><b>Enable the Tool:</b> Activate the CSAM Scanning Tool on your Cloudflare zone and verify your notification email address.</p><p><b>Scan and Detect: </b>Our tool scans your cached content for potential CSAM, creating a fuzzy hash of each image. If a match is found with a known bad hash, a detection event is created.</p><p><b>Remediate: </b>Cloudflare blocks the URLs of any identified matches and notifies you so that you may take further action.</p>
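<p>The scan, detect, and remediate steps above can be sketched roughly as follows. This is a hypothetical illustration of the flow, not Cloudflare's implementation: the function and field names are invented, and a real deployment hashes cached content inside Cloudflare's network with a true perceptual hash.</p>

```python
# Hypothetical sketch of the scan -> detect -> remediate flow described above.
# Names and data structures are invented for illustration.

def fuzzy_hash(image_bytes: bytes) -> int:
    """Toy stand-in for a real perceptual hashing algorithm."""
    return sum(image_bytes) % 256  # NOT a real fuzzy hash

def scan_cached_image(url, image_bytes, known_bad, events):
    """Scan and detect: hash the image; record a detection event on a match."""
    h = fuzzy_hash(image_bytes)
    if h in known_bad:
        events.append({"url": url, "hash": h})

def remediate(events, blocked, notifications):
    """Remediate: block matched URLs and queue a notification for the operator."""
    for event in events:
        blocked.add(event["url"])
        notifications.append(f"CSAM match detected at {event['url']}")

events, blocked, notifications = [], set(), []
known_bad = {fuzzy_hash(b"bad-image-bytes")}

scan_cached_image("https://example.com/upload/1.png", b"bad-image-bytes", known_bad, events)
scan_cached_image("https://example.com/upload/2.png", b"benign-bytes", known_bad, events)
remediate(events, blocked, notifications)
print(blocked)  # {'https://example.com/upload/1.png'}
```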
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6cTjykOBheTnzbmcwjKoSI/63fb00a39807897c8b2feda9af373ec0/unnamed.png" />
          </figure>
    <div>
      <h3>What is next?</h3>
    </div>
<p>We believe that the tools for a safer Internet should be available to everyone, not just a few large companies.</p><p>We invite you to enable the CSAM Scanning Tool on your website today. For more technical details on how it works, please visit our <a href="https://developers.cloudflare.com/cache/reference/csam-scanning/"><u>developer documentation</u></a>. We also welcome you to join our community to discuss the technology and help us continue to build a better Internet.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Trust & Safety]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Legal]]></category>
            <guid isPermaLink="false">4SD2BwOE3yemddmMT25cnO</guid>
            <dc:creator>Rachael Truong</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare is using automation to tackle phishing head on]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-is-using-automation-to-tackle-phishing/</link>
            <pubDate>Mon, 17 Mar 2025 05:00:00 GMT</pubDate>
            <description><![CDATA[ How Cloudflare is using threat intelligence and our Developer Platform products to automate phishing abuse reports. ]]></description>
<content:encoded><![CDATA[ <p>Phishing attacks have grown both in volume and in sophistication over recent years. Today’s threat isn’t just about sending out generic <a href="https://www.cloudflare.com/learning/email-security/what-is-email/"><u>emails</u></a>: bad actors are using advanced phishing techniques like <a href="https://bolster.ai/blog/man-in-the-middle-phishing"><u>man-in-the-middle</u></a> (MitM) attacks that can intercept two-factor authentication, <a href="https://blog.cloudflare.com/how-cloudflare-cloud-email-security-protects-against-the-evolving-threat-of-qr-phishing/"><u>QR codes</u></a> to bypass detection rules, and <a href="https://www.malwarebytes.com/blog/news/2025/01/ai-supported-spear-phishing-fools-more-than-50-of-targets"><u>artificial intelligence (AI)</u></a> to craft personalized and targeted phishing messages at scale. Industry organizations such as the Anti-Phishing Working Group (APWG) <a href="https://docs.apwg.org/reports/apwg_trends_report_q2_2024.pdf"><u>have shown</u></a> that phishing incidents continue to climb year over year.</p><p>To combat both the increase in volume and the growing sophistication of these attacks, we have built advanced automation tooling to detect phishing and take action against it.</p><p>In the first half of 2024, Cloudflare resolved 37% of phishing reports using automated means, and the median time to take action on hosted phishing reports was 3.4 days. In the second half of 2024, after deployment of our new tooling, we were able to expand our automated systems to resolve 78% of phishing reports with a median time to take action on hosted phishing reports of under an hour.</p><p>In this post, we dig into some of the details of how we implemented these improvements.</p>
    <div>
      <h3>The phishing site problem</h3>
    </div>
    <p><a href="https://blog.cloudflare.com/dispelling-the-generative-ai-fear-how-cloudflare-secures-inboxes-against-ai-enhanced-phishing/"><u>Cloudflare has observed a similar increase</u></a> in the volume of phishing activity throughout 2023 and 2024. We receive <a href="https://abuse.cloudflare.com/"><u>abuse reports</u></a> from anyone on the Internet that may have seen potentially abusive behaviors from websites using Cloudflare services. Our Trust &amp; Safety investigators and engineers have been tasked with responding to these complaints, and more recently have been using the data from these reports to improve our threat intelligence, brand protection, and email security product offerings.</p><p>Cloudflare has always believed in using the vast amounts of traffic that flows through our network to improve threat detection and customer security. This has been at the core of how we protect our customers from <a href="https://www.cloudflare.com/learning/ddos/glossary/denial-of-service/"><u>DoS attacks</u></a> and other <a href="https://www.cloudflare.com/learning/security/what-is-cyber-security/"><u>cybersecurity</u></a> threats. We've been applying the same concepts our internal teams use to mitigate <a href="https://www.cloudflare.com/learning/email-security/how-to-prevent-phishing/"><u>phishing</u></a> to improve detection of phishing on our network and our ability to detect and notify our customers about potential risks to their brand.</p><p>Prior to last year, phishing abuse reported to Cloudflare relied on manual, human review and intervention to remediate. 
Trust &amp; Safety (T&amp;S) investigators would have to look at each complaint, the allegations made by the reporter, and the content on the reported websites to make assessments as quickly as possible about whether the website was phishing or distributing <a href="https://www.cloudflare.com/learning/ddos/glossary/malware/"><u>malware</u></a>.</p><p>Given the growing scale of our customer base and of phishing across the Internet, this became unsustainable. By bringing together a group of internal abuse experts, we were able to tackle this problem using insights from across our network, internal data from our <a href="https://developers.cloudflare.com/cloudflare-one/email-security/"><u>Email Security</u></a> product, external feeds from trusted sources, and years of abuse report processing data to automatically assess the risk of likely phishing and recommend appropriate action.</p>
    <div>
      <h3>Turning our intelligence inward</h3>
    </div>
<p>We built our automated phishing identification on the <a href="https://www.cloudflare.com/developer-platform/products/"><u>Cloudflare Developer Platform</u></a> so that we could meet our scanning demand without concern for how we might scale. This allowed us to focus more on creating a great phishing detection engine and less on the infrastructure required to meet that demand. </p><p>Each URL submitted to our phishing detection <a href="https://workers.cloudflare.com/"><u>Worker</u></a> begins with an initial scan by the <a href="https://radar.cloudflare.com/scan"><u>Cloudflare URL Scanner</u></a>. The scan provides us with the rendered HTML, network requests, and attributes of the site. After scanning, we collect reputational information about the site by submitting the HTML and page resources to our in-house <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/"><u>machine learning</u></a> classifiers; meanwhile, the <a href="https://www.cloudflare.com/learning/security/what-are-indicators-of-compromise/"><u>indicators of compromise (IOCs)</u></a> are sent to our suite of <a href="https://www.cloudflare.com/learning/security/glossary/threat-intelligence-feed/"><u>threat feeds</u></a> and domain categorization tools to highlight any known malicious sites or site categorizations.</p><p>Once we have all of this information collected, we expose it to a set of rules and heuristics that identify the URL as phishing or not, based on how T&amp;S investigators have traditionally responded to similar abuse reports and patterns of bad behaviors we’ve observed. Rules suggest decisions to make on each report, and remediations to apply to harmful content. It is through this process that we were able to convert the manual reviews by T&amp;S investigators into an automated flow of phishing identification. We also recognize that reporters make mistakes or even deliberately try to weaponize abuse processes. 
Our rules must therefore consider the possibility of false positives, in which reports are created against legitimate websites (intentionally or unintentionally). False positives can erode the trust of our customers and create incidents, so automation must include processes to disregard erroneous reports.</p><p>The magic of all of this was the powerful suite of tools on the Cloudflare Developer Platform. Whether it was using <a href="https://developers.cloudflare.com/kv/"><u>KV</u></a> to store report summaries that could scale indefinitely or <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> to keep running counters of an unlimited number of attributes that could be tracked or leveraged over time, we were able to integrate these solutions quickly, allowing us to add or remove new enrichments with little effort. We also made use of <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> to access the internal Postgres database that stores our abuse reports, <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> to manage the scanning jobs, <a href="https://developers.cloudflare.com/workers-ai/"><u>Workers AI</u></a> to run machine learning classifiers, and <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a> to store detection logs for efficacy and evaluation review. To tie it all together, the team also deployed a <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-remix-site/"><u>Remix Pages UI</u></a> to present all the phishing detection engine’s analysis to T&amp;S investigators for follow-on investigations and evaluations of inconclusive results.</p>
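<p>As a rough illustration of how such a rules-and-heuristics layer might combine enrichment signals into a suggested decision, consider the sketch below. The signal names, weights, and thresholds are hypothetical, not Cloudflare's actual rules; the false-positive guard reflects the need, described above, to disregard erroneous or weaponized reports.</p>

```python
# Illustrative sketch of a rules layer that turns enrichment signals into a
# suggested decision. Signal names, weights, and thresholds are hypothetical.

def assess_report(signals: dict) -> str:
    """Suggest a decision for one phishing report.

    Assumed signals:
      ml_phishing_score  - classifier confidence that the page is phishing (0-1)
      threat_feed_hits   - number of threat feeds flagging the reported URL
      reporter_accuracy  - historical accuracy of this reporter (0-1)
    """
    score = signals.get("ml_phishing_score", 0.0) * 0.6
    score += min(signals.get("threat_feed_hits", 0), 3) / 3 * 0.4

    # Guard against mistaken or weaponized reports: distrust historically
    # inaccurate reporters unless the independent signals are very strong.
    if signals.get("reporter_accuracy", 1.0) < 0.3 and score < 0.8:
        return "escalate_to_human"

    if score >= 0.7:
        return "auto_remediate"
    if score <= 0.2:
        return "auto_dismiss"
    return "escalate_to_human"

print(assess_report({"ml_phishing_score": 0.95, "threat_feed_hits": 2}))    # auto_remediate
print(assess_report({"ml_phishing_score": 0.9, "reporter_accuracy": 0.1}))  # escalate_to_human
```

<p>In a real pipeline, a function like this would run after all enrichments return, with inconclusive results surfaced to investigators rather than acted on automatically.</p>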
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7MQYa4u71uKm9J6AaNxQNy/0cce686f51988ece4a1a46d87dae6df9/image1.png" />
          </figure><p><sup><i>Architecture of Trust &amp; Safety’s phishing automation detection pipeline</i></sup></p>
    <div>
      <h3>Moving forward</h3>
    </div>
    <p>The same intelligence we’re gathering to expedite and refine abuse report processing isn’t just for abuse response; it’s also used to empower our customers. By analyzing patterns and trends of abusive behaviors — such as identifying common phrases used in phishing attempts, recognizing infrastructure used by malicious actors or spotting coordinated campaigns across multiple domains — we enhance the efficacy of our application security, email security, and threat intelligence products.</p><p>For our <a href="https://developers.cloudflare.com/learning-paths/application-security/security-center/brand-protection/"><u>Brand Protection</u></a> customers, this translates into a significant advantage: the ability to easily report suspected abuse directly from the Cloudflare dashboard. This feature ensures that potential phishing sites are addressed rapidly, minimizing the risk to your customers and brand reputation. Furthermore, the Trust and Safety team can use this information to take action on similar threats across the Cloudflare network, protecting all customers, even those who aren't Brand Protection users.</p><p>Alongside our network-wide efforts, we’ve also been partnering with our customers, as well as experts outside of Cloudflare, to understand trends they are seeing in their own phishing mitigation efforts. By soliciting intelligence regarding the abuse issues that affect the attack’s targets, we can better identify and prevent abuse of Cloudflare products. We’ve been able to use these partnerships and discussions with external organizations to craft highly targeted rules that head off emerging patterns of phishing activity. </p>
    <div>
      <h3>It takes a village: if you see something, say something</h3>
    </div>
    <p>If you believe you’ve identified phishing activity that is passing through Cloudflare’s network, please report it via our <a href="https://abuse.cloudflare.com/"><u>abuse reporting form</u></a>. For technical users who might be interested in a programmatic way to report to us, please review our <a href="https://developers.cloudflare.com/api/resources/abuse_reports/"><u>abuse reporting API</u></a> documentation.</p><p>We invite all of our customers to join us in helping make the Internet safer:</p><ol><li><p>Enterprise customers should speak with their Customer Success Manager about enabling <a href="https://blog.cloudflare.com/safeguarding-your-brand-identity-logo-matching-for-brand-protection/"><u>Brand Protection</u></a>, included by default for all enterprise customers. </p></li><li><p>For existing users of the Brand Protection product, update your <a href="https://developers.cloudflare.com/security-center/brand-protection/"><u>brand's assets</u></a>, so we can better identify the legitimate websites and logos of our customers vs. possible phishing activity.</p></li><li><p>As a Cloudflare customer, make sure your <a href="https://developers.cloudflare.com/fundamentals/setup/account/account-security/abuse-contact/"><u>abuse contact</u></a> is up-to-date in the Cloudflare dashboard.</p></li></ol><p></p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Threat Intelligence]]></category>
            <category><![CDATA[Phishing]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">3Bb3gcZ92DhVXA44P3XF7x</guid>
            <dc:creator>Javier Castro</dc:creator>
            <dc:creator>Justin Paine</dc:creator>
            <dc:creator>Rachael Truong</dc:creator>
        </item>
        <item>
            <title><![CDATA[Applying Human Rights Frameworks to our approach to abuse]]></title>
            <link>https://blog.cloudflare.com/applying-human-rights-frameworks-to-our-approach-to-abuse/</link>
            <pubDate>Thu, 15 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched its first Human Rights Policy in 2021, formally stating our commitment to respect human rights under the UN Guiding Principles on Business and Human Rights (UNGPs) ]]></description>
<content:encoded><![CDATA[ <p>Last year, we launched Cloudflare’s first Human Rights Policy, formally stating our commitment to respect human rights under the UN Guiding Principles on Business and Human Rights (UNGPs) and articulating how we planned to meet that commitment as a business. Our Human Rights Policy describes many of the concrete steps we take to implement these commitments, from protecting the privacy of personal data to respecting the rights of our diverse workforce.</p><p>We also look to our human rights commitments in considering how to approach complaints of abuse by those using our services. Cloudflare has long taken positions that reflect our belief that we must consider the implications of our actions for both Internet users and the Internet as a whole. The UNGPs guide that understanding by encouraging us to think systematically about how the decisions Cloudflare makes may affect people, with the goal of building processes to incorporate those considerations.</p><p>Human rights frameworks have also been adopted by policymakers seeking to regulate content and behavior online in a rights-respecting way. The Digital Services Act recently passed by the European Union, for example, includes a variety of requirements for intermediaries like Cloudflare that come from human rights principles. So using human rights principles to help guide our actions is not only the right thing to do, it is likely to be required by law at some point down the road.</p><p>So what does it mean to apply human rights frameworks to our response to abuse? As we’ll talk about in more detail below, we use human rights concepts like access to fair process, proportionality (the idea that actions should be carefully calibrated to minimize any effect on rights), and transparency.</p>
    <div>
      <h3>Human Rights online</h3>
    </div>
    <p>The first step is to understand the integral role the Internet plays in human rights. We use the Internet not only to find and share information, but for education, commerce, employment, and social connection. Not only is the Internet essential to our rights of freedom of expression, opinion and association, the UN <a href="https://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf">considers it</a> an enabler of all of our human rights.</p><p>The Internet allows activists and human rights defenders to expose abuses across the globe. It allows collective causes to grow into global movements. It provides the foundation for large-scale organizing for political and social change in ways that have never been possible before. But all of that depends on having access to it.</p><p>And as we’ve seen, access to a free, open, and interconnected Internet is not guaranteed.  Authoritarian governments take advantage of the critical role it plays by denying access to it altogether and using other tactics to intimidate their populations. As described by a <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/G22/341/55/PDF/G2234155.pdf?OpenElement">recent UN report</a>, government-mandated Internet “shutdowns complement other digital measures used to suppress dissent, such as intensified censorship, systematic content filtering and mass surveillance, as well as the use of government-sponsored troll armies, cyberattacks and targeted surveillance against journalists and human rights defenders.” Online access is limited by the failure to invest in infrastructure or lack of individual resources. Private interests looking to leverage Internet infrastructure to solve commercial content problems result in overblocking of unrelated websites. Cyberattacks make even critical infrastructure inaccessible. 
Gatekeepers limit entry for business reasons, risking the silencing of those without financial or political clout.</p><p>If we want to maintain an Internet that is for everyone, we need to develop rules within companies that don’t take access to it for granted. Processes that could limit Internet access should be thoughtful and well-grounded in human rights principles.</p>
    <div>
      <h3>The impact of free services</h3>
    </div>
<p>Cloudflare is unique among our competitors because we offer a variety of services that entities can sign up for online, free of charge. Our free services make it possible for everyone - nonprofits, <a href="https://www.cloudflare.com/small-business/">small businesses</a>, developers, and vulnerable voices around the world - to have access to security services they otherwise might be unable to afford.</p><p>Cloudflare’s approach of providing free and low-cost security services online is consistent with human rights and the push for greater access to the Internet for everyone. Having a free plan removes barriers to the Internet. It means you don’t have to be a big company, a government, or an organization with a popular cause to protect yourself from those who might want to silence you through a cyberattack.</p><p>Making access to security services easily available for free also has the potential to relegate DDoS attacks to the dustbin of history. If we can <a href="https://www.cloudflare.com/learning/ddos/how-to-prevent-ddos-attacks/">stop DDoS attacks</a> from being effective, we may yet be able to deter attackers from using them. Ridding the world of the scourge of DDoS attacks would benefit everyone. In particular, though, it would benefit vulnerable entities doing good for the world who do not otherwise have the means to defend themselves.</p><p>But that same free services model that empowers vulnerable groups and has the potential to eliminate DDoS attacks once and for all means that we at Cloudflare are often not picking our customers; they are picking us. And that comes with its own risk. 
For every dissenting voice challenging an oppressive regime that signs up for our service, there may also be a bad actor doing things online that are inconsistent with our values.</p><p>To reflect that reality, we need an abuse framework that satisfies our goals of expanding access to the global Internet and getting rid of cyberattacks, while also finding ways, both as a company and together with the broader Internet community, to address human rights harms.</p>
    <div>
      <h3>Applying the UNGP framework to online activity</h3>
    </div>
    <p>As we’ve described <a href="/cloudflare-and-human-rights-joining-the-global-network-initiative-gni/">before</a>, the UNGPs assign businesses and governments different obligations when it comes to human rights. Governments are required to <i>protect</i> human rights within their territories, taking appropriate steps to prevent, investigate, punish and redress harms. Companies, on the other hand, are expected to <i>respect</i> human rights. That means that companies should conduct due diligence to avoid taking actions that would infringe on the rights of others, and remedy any harms that do occur.</p><p>It can be challenging to apply that UNGP protect/respect/remedy framework to online activities. Because the Internet serves as an enabler of a variety of human rights, decisions that alter access to the Internet - from serving a particular market to changing access to particular services - can affect the rights of many different people, sometimes in competing ways.</p><p>Access to the Internet is also not typically provided by a single company. When you visit a website online, you’re experiencing the services of many different providers. 
Just for that single website, there’s probably a website owner who created the website, a website host storing the content, a <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">domain name registrar</a> providing the domain name, a domain name registry running the <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">top level domain</a> like .com or <a href="https://www.cloudflare.com/application-services/products/registrar/buy-org-domains/">.org</a>, a reverse proxy helping keep the website online in case of attack, a <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">content delivery network</a> improving the efficiency of Internet transmissions, a transit provider transmitting the website content across the Internet, the ISPs delivering the content to the end user, and a browser to make the website’s content intelligible to you.</p><p>And that description doesn’t even include the captcha provider that helps make sure the site is visited by humans rather than bots, the open source software developer whose code was used to build the site, the various plugins that enable the site to show video or accept payments, or the many other providers online who might play an important role in your user experience. So our ability to exercise our human rights online is dependent on the actions of many providers, acting as part of an ecosystem to bring us the Internet.</p><p>Trying to understand the appropriate role for companies is even more complicated when it comes to questions of online abuse. Online abuse is not generally caused by one of the many infrastructure providers who facilitate access to the Internet; the harm is caused by a third party. Because of the variety of providers mentioned above, a company may have limited options at its disposal to do anything that would help address the online harm in a targeted way, consistent with human rights principles. 
For example, blocking access to parts of the Internet, or stepping aside to allow a site to be subjected to a cyberattack, has the potential to have profound negative impact on others’ access to the Internet and thus human rights.</p><p>To help work through those competing human rights concerns, Cloudflare strives to build processes around online abuse that incorporate human rights principles. Our approach focuses on three recognized human rights principles: (1) fair process for both complainants and users, (2) proportionality, and (3) transparency. And we have engaged, and continue to engage, extensively with human rights focused groups like the <a href="https://globalnetworkinitiative.org/">Global Network Initiative</a> and the <a href="https://www.ohchr.org/en/business-and-human-rights/b-tech-project">UN’s B-Tech Project</a>, as well as our Project Galileo partners and many other stakeholders, to understand the impact of our policies.</p>
    <div>
      <h3>Fair abuse processes - Grievance mechanisms for complainants</h3>
    </div>
    <p>Human rights law, and the UNGPs in particular, stress that individuals and communities who are harmed should have mechanisms for remediation of the harm. Those mechanisms - which include both legal processes like going to court and more informal private processes - should be applied equitably and fairly, in a predictable and transparent way. A company like Cloudflare can help by establishing grievance mechanisms that give people an opportunity to raise their concerns about harm, or to challenge deprivation of rights.</p><p>To address online abuse by entities that might be using Cloudflare services, Cloudflare has an <a href="https://www.cloudflare.com/trust-hub/reporting-abuse/">abuse reporting form</a> that is open to anyone online. Our website includes a detailed description of how to report problematic activity. Individuals worried about retaliation, such as those submitting complaints of threatening or harassing behavior, can choose to submit complaints anonymously, although it may limit the ability to follow up on the complaint.</p><p>Cloudflare uses the information we receive through that abuse reporting process to respond to complaints about online abuse based on the types of services we may be providing as well as the nature of the complaint.</p><p>Because of the way Cloudflare <a href="https://www.cloudflare.com/products/zero-trust/threat-defense/">protects entities from cyberattack</a>, a complainant may not know who is hosting the content that is the source of the alleged harm. To make sure that someone who might have been harmed has an opportunity to remediate that harm, Cloudflare has created an abuse process to get complaints to the right place. If the person submitting the complaint is seeking to remove content, something that Cloudflare cannot do if it is providing only performance or security services, Cloudflare will forward the complaint to the website owner and hosting provider for appropriate action.</p>
    <div>
      <h3>Fair abuse processes - Notice and Appeal for Cloudflare users</h3>
      <a href="#fair-abuse-processes-notice-and-appeal-for-cloudflare-users">
        
      </a>
    </div>
    <p>Trying to build a fair policy around abuse requires understanding that complaints are not always submitted in good faith, and that abuse processes can themselves be abused. Cloudflare, for example, has received abuse complaints that appear to be intended to intimidate journalists reporting on government corruption, to silence political opponents, and to disrupt competitors.</p><p>A fair abuse process therefore also means being fair to Cloudflare users or website owners who might suffer consequences of a complaint. Cloudflare generally provides notice to our users of potential complaints so that they can respond to allegations of abuse, although individual circumstances and anonymous complaints sometimes make that difficult.</p><p>We also strive to provide users with notice of potential actions we might take, as well as an opportunity to provide additional information that might inform our decisions about appropriate action. Users can also seek reconsideration of decisions.</p>
    <div>
      <h3>Proportionality - Differentiating our products</h3>
      <a href="#proportionality-differentiating-our-products">
        
      </a>
    </div>
    <p>Proportionality is a core principle of human rights. In human rights law, proportionality means that any interference with rights should be as limited and narrow as possible in seeking to address the harm. In other words, the goal of proportionality is to minimize the collateral effect of an action on other human rights.</p><p>Proportionality is an important principle for Internet infrastructure because of the dependencies among different providers required to access the Internet. A government demand that a single ISP shut off or throttle access to the Internet can have dramatic real-life <a href="https://documents-dds-ny.un.org/doc/UNDOC/GEN/G22/341/55/PDF/G2234155.pdf?OpenElement">effects</a>, “depriving thousands or even millions of their only means of reaching their loved ones, continuing their work or participating in political debates or decision-making.” Voluntary action by individual providers can have a similar broad cascading effect, completely eliminating access to certain services or swaths of content.</p><p>To avoid these kinds of consequences, we apply the concept of proportionality to address abuse on our network, particularly when a complaint implicates other rights, like freedom of expression. Complaints about content are best addressed by those able to take the most targeted action possible. A complaint about a single image or post, for example, should not result in an entire website being taken down.</p><p>The principle of proportionality is the basis for our use of <a href="/cloudflares-abuse-policies-and-approach/">different approaches</a> to address abuse for different types of products. If we’re hosting content with products like Cloudflare Pages, Cloudflare Images, or Cloudflare Stream, we’re able to take more granular, specific action. In those cases, we have an acceptable hosting policy that enables us to take action on particular pieces of content. 
We follow a notice-and-takedown process that gives the Cloudflare user an opportunity to remove the content themselves and to contest the takedown if they believe it is inappropriate.</p><p>But when we’re only providing security services that prevent the site from being removed from the Internet by a cyberattack, Cloudflare can’t take targeted action on particular pieces of content. Nor do we generally see termination of DDoS protection services as the right or most effective remedy for addressing a website with harmful content. Termination of security services only resolves the concerns if the site is removed from the Internet by DDoS attack, an act which is illegal in most jurisdictions. From a human rights standpoint, making content inaccessible through a vigilante cyberattack is not only inconsistent with the principle of proportionality, but with the principles of notice and due process. It also provides no opportunities for remediation of harm in the event of a mistake.</p><p>Likewise, when we’re providing core Internet technology services like DNS, we do not have the ability to take granular action. Our only options are blunt instruments.</p><p>In those circumstances, there are actors in the broader Internet ecosystem who can take targeted action, even if we can’t. Typically, that would be a website owner or hosting provider that has the ability to remove individual pieces of content. Proportionality therefore sometimes means recognizing that we can’t and shouldn’t try to solve every problem, particularly when we are not the right party to take action. But we can still play an important role in helping complainants identify the right provider, so they can have their concerns addressed.</p><p>The EU recently formally embraced the concept of proportionality in abuse processes in the Digital Services Act. 
The DSA notes that when intermediaries must be involved to address illegal content, requests “should, as a general rule, be directed to the specific provider that has the technical and operational ability to act against specific items of illegal content, to prevent and minimize any possible negative effects on the availability and accessibility of information that is not illegal content.” [DSA, Recital 27]</p>
    <div>
      <h3>Transparency - Reporting on abuse</h3>
      <a href="#transparency-reporting-on-abuse">
        
      </a>
    </div>
    <p>Human rights law emphasizes the importance of transparency - from both governments and companies - on decisions that have an effect on human rights. Transparency allows for public accountability and improves trust in the overall system.</p><p>This human rights principle is one that has always made sense to us, because transparency is a core value to Cloudflare as well. And if you believe, as we do, that the way different providers tackle questions of abuse will have long term ripple effects, we need to make sure people understand the trade-offs with decisions we make that could impact human rights. We have never taken the easy option of making a difficult decision quietly. We try to blog about the difficult decisions we have made, and then use those blogs to engage with external stakeholders to further our own learning.</p><p>In addition to our blogs, we have worked to build up more systematic reporting of our evaluation process and decision-making. Last year, we published a page on our website describing our <a href="https://www.cloudflare.com/trust-hub/abuse-approach/">approach to abuse</a>. We continue to take steps to expand information in our <a href="https://www.cloudflare.com/transparency/updates/">biannual transparency report</a> about our full range of responses to abuse, from removal of content in our storage products to reports on child sexual abuse material to the National Center for Missing and Exploited Children (NCMEC).</p>
    <div>
      <h3>Transparency - Reporting on the circumstances when we terminate services</h3>
      <a href="#transparency-reporting-on-the-circumstances-when-we-terminate-services">
        
      </a>
    </div>
    <p>We’ve also sought to be transparent about the limited number of circumstances where we will terminate even DDoS protection services, consistent with our respect for human rights and our view that opening a site up to DDoS attack is almost never a proportional response to address content. Most of the circumstances in which we terminate all services are tied to legal obligations, reflecting the judgment of policymakers and impartial decision makers about when barring entities from access to the Internet is appropriate.</p><p>Even in those circumstances, we try to provide users with notice and, where appropriate, an opportunity to address the harm themselves. The legal areas that can result in termination of all services are described in more detail below.</p><p><i>Child Sexual Abuse Material:</i> As described in more detail <a href="/cloudflares-response-to-csam-online/">here</a>, Cloudflare has a policy to report any allegation of child sexual abuse material (CSAM) to the National Center for Missing and Exploited Children (NCMEC) for additional investigation and response. When we have reason to believe, in conjunction with those working in child safety, that a website is solely dedicated to CSAM or that a website owner is deliberately ignoring legal requirements to remove CSAM, we may terminate services. We recently began reporting on those terminations in our biannual transparency report.</p><p><i>Sanctions:</i> The United States has a legal regime that prohibits companies from doing business with any entity or individual on a public list of sanctioned parties, called the Specially Designated Nationals (SDN) list. The US provides entities on the SDN list, which includes designated terrorist organizations, human rights violators, and others, with notice of the determination and an opportunity to challenge the designation. 
Cloudflare will terminate services to entities or individuals that it can identify as having been added to the SDN list.</p><p>The US sanctions regime also restricts companies from doing business with certain sanctioned countries and regions - specifically Cuba, North Korea, Syria, Iran, and the Crimea, Luhansk and Donetsk regions of Ukraine. Cloudflare may terminate certain services if it identifies users as coming from those countries or regions. Those country and regional sanctions, however, generally have a number of legal exceptions (known as general licenses) that allow Cloudflare to offer certain kinds of services even when individuals and entities come from the sanctioned regions.</p><p><i>Court orders:</i> Cloudflare occasionally receives third-party orders in the United States directing Cloudflare and other service providers to terminate services to websites due to copyright or other prohibited content. Because we have no ability to remove content from the Internet that we do not host, we don’t believe that termination of Cloudflare’s security services is an effective means for addressing such content. Our experience has borne that out. Because other service providers are better positioned to address the issues, most of the domains that we have been ordered to terminate are no longer using Cloudflare’s services by the time Cloudflare must take action. Cloudflare nonetheless may terminate services to repeat copyright infringers and others in response to valid orders that are consistent with due process protections and comply with relevant laws.</p><p><i>SESTA/FOSTA:</i> In 2018, the United States passed the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), for the purpose of fighting online sex trafficking. 
The law’s broad establishment of criminal penalties for the provision of online services that facilitate prostitution or sex trafficking, however, means that companies that provide any online services to sex workers are at risk of breaking the law. To be clear, we think the law is profoundly misguided and poorly drafted. Research has <a href="https://www.antitraffickingreview.org/index.php/atrjournal/article/view/448/364">shown</a> that the law has had detrimental effects on the financial stability, safety, access to community and health outcomes of online sex workers, while being <a href="https://www.gao.gov/assets/gao-21-385.pdf">largely ineffective</a> for addressing human trafficking. But to avoid the risk of criminal liability, we may take steps to terminate services to domains that appear to fall under the ambit of the law. Since the law’s passage, we have terminated services to a few domains due to SESTA/FOSTA. We intend to incorporate any SESTA/FOSTA terminations in our biannual transparency report.</p><p><i>Technical abuse:</i> Cloudflare sometimes receives reports of websites involved in phishing or malware attacks using our services. As a security company, our preference when we receive those reports is to do what we can to prevent the sites from causing harm. When we confirm the abuse, we will therefore place a warning interstitial page to protect users from accidentally falling victim to the attack or to disrupt the attack. Potential phishing victims also benefit from learning that they nearly fell victim to a <a href="https://www.cloudflare.com/learning/access-management/phishing-attack/">phishing attack</a>. 
In cases when we believe a user to be intentionally phishing or distributing malware and the security interests appear to support additional action, however, we may opt to terminate services to the intentionally malicious domain.</p><p><i>Voluntary terminations:</i> In three well-publicized instances, Cloudflare has taken steps to voluntarily terminate services or block access to sites whose users were intentionally causing harm to others. In 2017, we terminated the neo-Nazi troll site <a href="/why-we-terminated-daily-stormer/">The Daily Stormer</a>. In 2019, we terminated the conspiracy theory forum <a href="/terminating-service-for-8chan/">8chan</a>. And earlier this year, we blocked access to <a href="/kiwifarms-blocked/">Kiwi Farms</a>. Each of those circumstances had its own unique set of facts. But part of our consideration for the actions in those cases was that the sites had inspired physical harm to people in the offline world. And notwithstanding the real-world threats and harm, neither law enforcement nor other service providers who could take more targeted action had effectively addressed the harm.</p><p>We continue to believe that there are more effective, long term solutions to address online activity that leads to real-world physical threats than seeking to take sites offline by DDoS and cyberattack. And we have been heartened to see jurisdictions like the EU try to grapple with a regulatory response to illegal online activity that preserves human rights online. Looking forward, we hope to see a day when states have developed rights-respecting ways to successfully protect human rights offline based on online activity, and remedy does not depend on vigilante justice through cyberattack.</p>
    <div>
      <h3>Continuous learning</h3>
      <a href="#continuous-learning">
        
      </a>
    </div>
    <p>Addressing abuse online is a long term and ever-shifting challenge for the entire Internet ecosystem. We continuously refine our abuse processes based on the reports we receive, the many conversations we have with stakeholders affected by online abuse, and our engagement with policymakers, other industry participants, and civil society. Make no mistake, the process can sometimes be a bumpy one, where perspectives on the right approach collide. But the one thing we can promise is that we will continue to try to engage, learn, and adapt. Because, together, we think we can build abuse frameworks that reflect respect for human rights and help build a better Internet.</p> ]]></content:encoded>
            <category><![CDATA[Impact Week]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Human Rights]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1Ops3w32z5G5njKgs2Iy0J</guid>
            <dc:creator>Alissa Starzak</dc:creator>
        </item>
        <item>
            <title><![CDATA[Blocking Kiwifarms]]></title>
            <link>https://blog.cloudflare.com/kiwifarms-blocked/</link>
            <pubDate>Sat, 03 Sep 2022 22:15:35 GMT</pubDate>
            <description><![CDATA[ We have blocked Kiwifarms. Visitors to any of the Kiwifarms sites that use any of Cloudflare's services will see a Cloudflare block page and a link to this post.  ]]></description>
            <content:encoded><![CDATA[ <p>We have blocked Kiwifarms. Visitors to any of the Kiwifarms sites that use any of Cloudflare's services will see a Cloudflare block page and a link to this post. Kiwifarms may move their sites to other providers and, in doing so, come back online, but we have taken steps to block their content from being accessed through our infrastructure.</p><p>This is an extraordinary decision for us to make and, given Cloudflare's role as an Internet infrastructure provider, a dangerous one that we are not comfortable with. However, the rhetoric on the Kiwifarms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike we have previously seen from Kiwifarms or any other customer before.</p>
    <div>
      <h3>Escalating threats</h3>
      <a href="#escalating-threats">
        
      </a>
    </div>
    <p>Kiwifarms has frequently been host to revolting content. Revolting content alone does not create an emergency situation that necessitates the action we are taking today. Beginning approximately two weeks ago, a pressure campaign started with the goal of deplatforming Kiwifarms. That pressure campaign targeted Cloudflare as well as other providers utilized by the site.</p><p>Cloudflare provided security services to Kiwifarms, protecting them from DDoS and other cyberattacks. We have never been their hosting provider. <a href="/cloudflares-abuse-policies-and-approach/">As we outlined last Wednesday</a>, we do not believe that terminating security services is appropriate, even for revolting content. In a law-respecting world, the answer to even illegal content is not to use other illegal means like DDoS attacks to silence it.</p><p>We are also not taking this action directly because of the pressure campaign. While we have empathy for its organizers, we are committed as a security provider to protecting our customers even when they run deeply afoul of popular opinion or even our own morals. The <a href="/cloudflares-abuse-policies-and-approach/">policy we articulated last Wednesday remains our policy</a>. We continue to believe that the best way to relegate cyberattacks to the dustbin of history is to give everyone the tools to prevent them.</p><p>However, as the pressure campaign escalated, so did the rhetoric on the Kiwifarms site. Feeling attacked, users of Kiwifarms became even more aggressive. Over the last two weeks, we have proactively reached out to law enforcement in multiple jurisdictions highlighting what we believe are potential criminal acts and imminent threats to human life that were posted to the site.</p>
    <div>
      <h3>Legal process</h3>
      <a href="#legal-process">
        
      </a>
    </div>
    <p>While law enforcement in these areas are working to investigate what we and others reported, unfortunately the process is moving more slowly than the escalating risk. While we believe that in every other situation we have faced — including the Daily Stormer and 8chan — it would have been appropriate as an infrastructure provider for us to wait for legal process, in this case the imminent and emergency threat to human life which continues to escalate causes us to take this action.</p><p>Hard cases make bad law. This is a hard case and we would caution anyone from seeing it as setting precedent. The <a href="/cloudflares-abuse-policies-and-approach/">policies we articulated last Wednesday remain our policies</a>. For an infrastructure provider like Cloudflare, legal process is still the correct way to deal with revolting and potentially illegal content online.</p><p>But we need a mechanism when there is an emergency threat to human life for infrastructure providers to work expediently with legal authorities in order to ensure the decisions we make are grounded in due process. Unfortunately, that mechanism does not exist and so we are making this uncomfortable emergency decision alone.</p>
    <div>
      <h3>Not the end</h3>
      <a href="#not-the-end">
        
      </a>
    </div>
    <p>Finally, we are aware and concerned that our action may only fan the flames of this emergency. Kiwifarms itself will most likely find other infrastructure that allows them to come back online, as the Daily Stormer and 8chan did themselves after we terminated them. And, even if they don't, the individuals that used the site to increasingly terrorize will feel even more isolated and attacked and may lash out further. There is real risk that by taking this action today we may have further heightened the emergency.</p><p>We will continue to work proactively with law enforcement to help with their investigations into the site and the individuals who have posted what may be illegal content to it. And we recognize that while our blocking Kiwifarms temporarily addresses the situation, it by no means solves the underlying problem. That solution will require much more work across society. We are hopeful that our action today will help provoke conversations toward addressing the larger problem. And we stand ready to participate in that conversation.</p> ]]></content:encoded>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">7tWDjvEz0pDvEf8xc8Zk0H</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare's abuse policies & approach]]></title>
            <link>https://blog.cloudflare.com/cloudflares-abuse-policies-and-approach/</link>
            <pubDate>Wed, 31 Aug 2022 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare launched nearly twelve years ago. Over that time, our set of services has become much more complicated. With that complexity we have developed policies around how we handle abuse of different features Cloudflare provides ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6KFpKT5IYgDuwxdCYL4S1s/babd5693105204319201da5b58e6b98b/The-Cloudflare-Blog-1.png" />
            
            </figure><p>Cloudflare launched nearly twelve years ago. We’ve grown to operate a network that spans more than 275 cities in over 100 countries. We have millions of customers: from small businesses and individual developers to approximately 30 percent of the Fortune 500. Today, more than 20 percent of the web relies directly on Cloudflare’s services.</p><p>Over the time since we launched, our set of services has become much more complicated. With that complexity we have developed policies around how we handle abuse of different Cloudflare features. Just as a broad platform like Google has different abuse policies for search, Gmail, YouTube, and Blogger, Cloudflare has <a href="/out-of-the-clouds-and-into-the-weeds-cloudflares-approach-to-abuse-in-new-products/">developed different abuse policies</a> as we have introduced new products.</p><p>We published our updated approach to abuse last year at:</p><p><a href="https://www.cloudflare.com/trust-hub/abuse-approach/">https://www.cloudflare.com/trust-hub/abuse-approach/</a></p><p>However, as questions have arisen, we thought it made sense to describe those policies in more detail here.  </p><p>The policies we built reflect ideas and recommendations from human rights experts, activists, academics, and regulators. Our guiding principles require abuse policies to be specific to the service being used. This is to ensure that any actions we take both reflect the ability to address the harm and minimize unintended consequences. We believe that someone with an abuse complaint must have access to an abuse process to reach those who can most effectively and narrowly address their complaint — anonymously if necessary. And, critically, we strive always to be transparent about both our policies and the actions we take.</p>
    <div>
      <h3>Cloudflare's products</h3>
      <a href="#cloudflares-products">
        
      </a>
    </div>
    <p>Cloudflare provides a broad range of products that fall generally into three buckets: hosting products (e.g., Cloudflare Pages, Cloudflare Stream, Workers KV, Custom Error Pages), security services (e.g., DDoS Mitigation, Web Application Firewall, Cloudflare Access, Rate Limiting), and core Internet technology services (e.g., Authoritative DNS, Recursive DNS/1.1.1.1, WARP). For a complete list of our products and how they map to these categories, you can see our <a href="https://www.cloudflare.com/trust-hub/abuse-approach/">Abuse Hub</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/0jGLSWqF5X7h8ZGsARPIe/50f3abc20a250a34dbd27647f721de1b/pasted-image-0--2--1.png" />
            
            </figure><p>As described below, our policies take a different approach on a product-by-product basis in each of these categories.</p>
    <div>
      <h3>Hosting products</h3>
      <a href="#hosting-products">
        
      </a>
    </div>
    <p>Hosting products are those products where Cloudflare is the ultimate host of the content. This is different from products where we are merely providing security or temporary caching services and the content is hosted elsewhere. Although many people confuse our security products with hosting services, we have distinctly different policies for each. Because the vast majority of Cloudflare customers do not yet use our hosting products, abuse complaints and actions involving these products are currently relatively rare.</p><p>Our decision to disable access to content in hosting products fundamentally results in that content being taken offline, at least until it is republished elsewhere. Hosting products are subject to our <a href="https://www.cloudflare.com/trust-hub/abuse-approach/">Acceptable Hosting Policy</a>. Under that policy, for these products, we may remove or disable access to content that we believe:</p><ul><li><p>Contains, displays, distributes, or encourages the creation of child sexual abuse material, or otherwise exploits or promotes the exploitation of minors.</p></li><li><p>Infringes on intellectual property rights.</p></li><li><p>Has been determined by appropriate legal process to be defamatory or libelous.</p></li><li><p>Engages in the unlawful distribution of controlled substances.</p></li><li><p>Facilitates human trafficking or prostitution in violation of the law.</p></li><li><p>Contains, installs, or disseminates any active malware, or uses our platform for exploit delivery (such as part of a command and control system).</p></li><li><p>Is otherwise illegal, harmful, or violates the rights of others, including content that discloses sensitive personal information, incites or exploits violence against people or animals, or seeks to defraud the public.</p></li></ul><p>We maintain discretion in how our Acceptable Hosting Policy is enforced, and generally seek to apply content restrictions as narrowly as possible. 
For instance, if a shopping cart platform with millions of customers uses Cloudflare Workers KV and one of their customers violates our Acceptable Hosting Policy, we will not automatically terminate the use of Cloudflare Workers KV for the entire platform.</p><p>Our guiding principle is that organizations closest to content are best at determining when the content is abusive. It also recognizes that overbroad takedowns can have significant unintended impact on access to content online.</p>
    <div>
      <h3>Security services</h3>
      <a href="#security-services">
        
      </a>
    </div>
    <p>The overwhelming majority of Cloudflare's millions of customers use only our security services. Cloudflare made a decision early in our history that we wanted to make security tools as widely available as possible. This meant that we provided many tools for free, or at minimal cost, to best limit the impact and effectiveness of a wide range of cyberattacks. Most of our customers pay us nothing.</p><p>Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online. We believe cyberattacks, in any form, should be relegated to the dustbin of history.</p><p>The decision to provide security tools so widely has meant that we've had to think carefully about when, or if, we ever terminate access to those services. We recognized that we needed to think through what the effect of a termination would be, and whether there was any way to set standards that could be applied in a fair, transparent and non-discriminatory way, consistent with human rights principles.</p><p>This is true not just for the content where a complaint may be filed, but also for the precedent the takedown sets. Our conclusion — informed by all of the many conversations we have had and the thoughtful discussion in the broader community — is that voluntarily terminating access to services that protect against cyberattack is not the correct approach.</p>
    <div>
      <h3>Avoiding an abuse of power</h3>
      <a href="#avoiding-an-abuse-of-power">
        
      </a>
    </div>
    <p>Some argue that we should terminate these services to content we find reprehensible so that others can launch attacks to knock it offline. That is the equivalent argument in the physical world that the fire department shouldn't respond to fires in the homes of people who do not possess sufficient moral character. Both in the physical world and online, that is a dangerous precedent, and one that is over the long term most likely to disproportionately harm vulnerable and marginalized communities.</p><p>Today, more than 20 percent of the web uses Cloudflare's security services. When considering our policies we need to be mindful of the impact we have and precedent we set for the Internet as a whole. Terminating security services for content that our team personally feels is disgusting and immoral would be the popular choice. But, in the long term, such choices make it more difficult to protect content that supports oppressed and marginalized voices against attacks.</p>
    <div>
      <h3>Refining our policy based on what we’ve learned</h3>
      <a href="#refining-our-policy-based-on-what-weve-learned">
        
      </a>
    </div>
    <p>This isn't hypothetical. Thousands of times per day we receive requests to terminate security services based on content that someone reports as offensive. Most of these don’t make news. Most of the time these decisions don’t conflict with our moral views. Yet twice in the past we decided to terminate security services for content we found reprehensible. In 2017, we terminated the neo-Nazi troll site <a href="/why-we-terminated-daily-stormer/">The Daily Stormer</a>. And in 2019, we terminated the conspiracy theory forum <a href="/terminating-service-for-8chan/">8chan</a>.</p><p>In a deeply troubling response, after both terminations we saw a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations — often citing the language from our own justification back to us.</p><p>Since those decisions, we have had significant discussions with policy makers worldwide. From those discussions we concluded that the power to terminate security services for the sites was not a power Cloudflare should hold. Not because the content of those sites wasn't abhorrent — it was — but because security services most closely resemble Internet utilities.</p><p>Just as the telephone company doesn't terminate your line if you say awful, racist, bigoted things, we have concluded in consultation with politicians, policy makers, and experts that turning off security services because we think what you publish is despicable is the wrong policy. To be clear, just because we did it in a limited set of cases before doesn’t mean we were right when we did. Or that we will ever do it again.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tBErj7SMPOb8RTPTfKVam/f785844a18b57a059bdd25a08fe47e54/pasted-image-0--4--3.png" />
            
            </figure><p>But that doesn’t mean that Cloudflare can’t play an important role in protecting those targeted by others on the Internet. We have long supported human rights groups, journalists, and other uniquely vulnerable entities online through <a href="https://www.cloudflare.com/galileo/">Project Galileo</a>. Project Galileo offers free cybersecurity services to nonprofits and advocacy groups that help strengthen our communities.</p><p>Through the <a href="https://www.cloudflare.com/athenian/">Athenian Project</a>, we also play a role in protecting election systems throughout the United States and abroad. Elections are one of the areas where the systems that administer them need to be fundamentally trustworthy and neutral. Making choices on what content is deserving or not of security services, especially in any way that could in any way be interpreted as political, would undermine our ability to provide trustworthy protection of election infrastructure.</p>
    <div>
      <h3>Regulatory realities</h3>
      <a href="#regulatory-realities">
        
      </a>
    </div>
    <p>Our policies also respond to regulatory realities. Internet content regulation laws passed over the last five years around the world have largely drawn a line between services that host content and those that provide security and conduit services. Even when these regulations impose obligations on platforms or hosts to moderate content, they exempt security and conduit services from playing the role of moderator without legal process. This is sensible regulation borne of a thorough regulatory process.</p><p>Our policies follow this well-considered regulatory guidance. We prevent security services from being used by sanctioned organizations and individuals. We also terminate security services for content which is illegal in the United States — where Cloudflare is headquartered. This includes Child Sexual Abuse Material (CSAM) as well as content subject to the Fight Online Sex Trafficking Act (FOSTA). But, otherwise, we believe that cyberattacks are something that everyone should be free of, even if we fundamentally disagree with the content.</p><p>Out of respect for the rule of law and due process, we follow legal process governing security services. We will restrict content in geographies where we have received legal orders to do so. For instance, if a court in a country prohibits access to certain content, then, following that court's order, we generally will restrict access to that content in that country. That, in many cases, will limit the ability for the content to be accessed in the country. However, we recognize that just because content is illegal in one jurisdiction does not make it illegal in another, so we narrowly tailor these restrictions to align with the jurisdiction of the court or legal authority.</p><p>While we follow legal process, we also believe that transparency is critically important. To that end, wherever these content restrictions are imposed, we attempt to link to the particular legal order that required the content be restricted. 
This transparency is necessary for people to participate in the legal and legislative process. We find it deeply troubling when ISPs comply with court orders by invisibly blackholing content — not giving those who try to access it any idea of what legal regime prohibits it. Speech can be curtailed by law, but proper application of the Rule of Law requires whoever curtails it to be transparent about why they have done so.</p>
    <div>
      <h3>Core Internet technology services</h3>
      <a href="#core-internet-technology-services">
        
      </a>
    </div>
    <p>While we will generally follow legal orders to restrict security and conduit services, we have a higher bar for core Internet technology services like Authoritative DNS, Recursive DNS/1.1.1.1, and WARP. The challenge with these services is that restrictions on them are global in nature. You cannot easily restrict them in just one jurisdiction, so the most restrictive law ends up applying globally.</p><p>We have generally challenged or appealed legal orders that attempt to restrict access to these core Internet technology services, even when a ruling only applies to our free customers. In doing so, we attempt to suggest to regulators or courts more tailored ways to restrict the content they may be concerned about.</p><p>Unfortunately, these cases are becoming more common, with copyright holders in particular attempting to get a ruling in one jurisdiction and have it applied worldwide to terminate core Internet technology services and effectively wipe content offline. Again, we believe this is a dangerous precedent to set, placing the control of what content is allowed online in the hands of whatever jurisdiction is willing to be the most restrictive.</p><p>So far, we’ve largely been successful in arguing that this is not the right way to regulate the Internet and getting these cases overturned. We believe holding this line is fundamental for the healthy operation of the global Internet. But each showing of discretion across our security or core Internet technology services weakens our argument in these important cases.</p>
    <div>
      <h3>Paying versus free</h3>
      <a href="#paying-versus-free">
        
      </a>
    </div>
    <p>Cloudflare provides both free and paid services across all the categories above. Again, the majority of our customers use our free services and pay us nothing.</p><p>Although most of the concerns we see in our abuse process relate to our free customers, we do not have different moderation policies based on whether a customer is free versus paid. We do, however, believe that in cases where our values are diametrically opposed to a paying customer's, we should not only decline to profit from the customer, but use any proceeds to further our company's values and oppose theirs.</p><p>For instance, when a site that opposed LGBTQ+ rights signed up for a paid version of our DDoS mitigation service, we worked with our Proudflare employee resource group to identify an organization that supported LGBTQ+ rights and donate 100 percent of the fees for our services to them. We don't and won't talk about these efforts publicly because we don't do them for marketing purposes; we do them because they are aligned with what we believe is morally correct.</p>
    <div>
      <h3>Rule of Law</h3>
      <a href="#rule-of-law">
        
      </a>
    </div>
    <p>While we believe we have an obligation to restrict the content that we host ourselves, we do not believe we have the political legitimacy to determine generally what is and is not online by restricting security or core Internet services. If that content is harmful, the right place to restrict it is legislatively.</p><p>We also believe that an Internet where cyberattacks are used to silence what's online is a broken Internet, no matter how much we may have empathy for the ends. As such, we will look to legal process, not popular opinion, to guide our decisions about when to terminate our security services or our core Internet technology services.</p><p>In spite of what some may claim, we are not free speech absolutists. We do, however, believe in the Rule of Law. Different countries and jurisdictions around the world will determine what content is and is not allowed based on their own norms and laws. In assessing our obligations, we look to whether those laws are limited to the jurisdiction and consistent with our obligations to respect human rights under the <a href="https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf">United Nations Guiding Principles on Business and Human Rights</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3xMuKqx9FMSlG0dQaQB7tY/28a0b309ad48f14256f4200dd852794a/pasted-image-0--3--2.png" />
            
            </figure><p>There remain many injustices in the world, and unfortunately much content online that we find reprehensible. We can solve some of these injustices, but we cannot solve them all. But, in the process of working to improve the security and functioning of the Internet, we need to make sure we don’t cause it long-term harm.</p><p>We will continue to have conversations about these challenges, and how best to approach securing the global Internet from cyberattack. We will also continue to cooperate with legitimate law enforcement to help investigate crimes, to <a href="https://www.cloudflare.com/galileo/">donate funds and services</a> to support equality, human rights, and other causes we believe in, and to participate in policy making around the world to help preserve the free and open Internet.</p> ]]></content:encoded>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Freedom of Speech]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <guid isPermaLink="false">1dO5CZvpkSasLMSaW3LabY</guid>
            <dc:creator>Matthew Prince</dc:creator>
            <dc:creator>Alissa Starzak</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing the CSAM Scanning Tool, Free for All Cloudflare Customers]]></title>
            <link>https://blog.cloudflare.com/the-csam-scanning-tool/</link>
            <pubDate>Wed, 18 Dec 2019 18:02:42 GMT</pubDate>
            <description><![CDATA[ Two weeks ago we wrote about Cloudflare's approach to dealing with child sexual abuse material (CSAM). We first began working with the National Center for Missing and Exploited Children (NCMEC), the US-based organization that acts as a clearinghouse for removing this abhorrent content ]]></description>
            <content:encoded><![CDATA[ <p>Two weeks ago we wrote about <a href="/cloudflares-response-to-csam-online/">Cloudflare's approach to dealing with child sexual abuse material (CSAM)</a>. We first began working with the National Center for Missing and Exploited Children (NCMEC), the US-based organization that acts as a clearinghouse for removing this abhorrent content, within months of our public launch in 2010. Over the last nine years, our Trust &amp; Safety team has worked with <a href="http://www.missingkids.com/">NCMEC</a>, <a href="https://www.interpol.int/en/Crimes/Crimes-against-children">Interpol</a>, and nearly 60 other public and private agencies around the world to design our program. And we are proud of the work we've done to remove CSAM from the Internet.</p><p>The most repugnant cases, in some ways, are the easiest for us to address. While Cloudflare is not able to remove content hosted by others, we will take steps to terminate services to a website when it becomes clear that the site is dedicated to sharing CSAM or if the operators of the website and its host fail to take appropriate steps to take down CSAM content. When we terminate websites, we purge our caches — something that takes effect within seconds globally — and we block the website from ever being able to use Cloudflare's network again.</p>
    <div>
      <h3>Addressing the Hard Cases</h3>
      <a href="#addressing-the-hard-cases">
        
      </a>
    </div>
    <p>The hard cases are when a customer of ours runs a service that allows user generated content (such as a discussion forum) and a user uploads CSAM, or if they're hacked, or if they have a malicious employee who stores CSAM on their servers. We've seen many instances of these cases where services intending to do the right thing are caught completely off guard by CSAM that ended up on their sites. Despite the absence of intent or malice in these cases, there’s still a need to identify and remove that content quickly.</p><p>Today we're proud to take a step to help deal with those hard cases. Beginning today, every Cloudflare customer can log in to their dashboard and enable access to the CSAM Scanning Tool. As the CSAM Scanning Tool moves through development to production, the tool will check all Internet properties that have enabled CSAM Scanning for this illegal content. Cloudflare will automatically send a notice to you when it flags CSAM material, block that content from being accessed (with a 451 “blocked for legal reasons” status code), and take steps to support proper reporting of that content in compliance with legal obligations.</p>
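Concretely, "blocking with a 451" means refusing to serve the flagged resource with the HTTP 451 "Unavailable For Legal Reasons" status code (RFC 7725). A minimal sketch of that behavior — not Cloudflare's actual implementation, and with a hypothetical flagged path — might look like:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical set of paths flagged by a CSAM scan; in practice this would
# be populated by the scanning pipeline, not hard-coded.
FLAGGED_PATHS = {"/uploads/flagged-image.jpg"}

class BlockingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in FLAGGED_PATHS:
            # RFC 7725: 451 Unavailable For Legal Reasons
            self.send_response(451)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Blocked for legal reasons.")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"OK")

# To serve: HTTPServer(("127.0.0.1", 8451), BlockingHandler).serve_forever()
```

In Cloudflare's case the equivalent decision happens at the edge, before the request ever reaches the origin; the sketch above only illustrates the response semantics.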
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4AbFvKTllB8PtLhuXka8Cd/83db523c6bab38856bdf3c8aa757c39e/csam-tool.png" />
            
            </figure><p>CSAM Scanning will be available via the Cloudflare dashboard at no cost for all customers regardless of their plan level. You can find this tool under the “Caching” tab in your dashboard. We're hopeful that by opening this tool to all our customers for free we can help do even more to counter CSAM online and help protect our customers from the legal and reputational risk that CSAM can pose to their businesses.</p><p>It has been a long journey to get to the point where we could commit to offering this service to our millions of users. To understand what we're doing and why it has been challenging from a technical and policy perspective, you need to understand a bit about the state of the art of tracking CSAM.</p>
    <div>
      <h3>Finding Similar Images</h3>
      <a href="#finding-similar-images">
        
      </a>
    </div>
    <p>Around the same time as Cloudflare was first conceived in 2009, a Dartmouth professor named Hany Farid was working on software that could compare images against a list of hashes maintained by NCMEC. Microsoft took the lead in creating a tool, PhotoDNA, that used Prof. Farid’s work to identify CSAM automatically.</p><p>In its earliest days, Microsoft used PhotoDNA for their services internally and, in late 2009, <a href="http://blogs.msdn.com/b/microsoftuseducation/archive/2009/12/17/microsoft-donates-photodna-technology-to-make-the-internet-safer-for-kids.aspx">donated the technology to NCMEC</a> to help manage its use by other organizations. Social networks were some of the first adopters. In 2011, <a href="http://www.huffingtonpost.com/2011/05/20/facebook-photodna-microsoft-child-pornography_n_864695.html">Facebook rolled out an implementation</a> of the technology as part of their abuse process. <a href="https://www.theguardian.com/technology/2013/jul/22/twitter-photodna-child-abuse">Twitter incorporated it in 2014</a>.</p><p>The process is known as a fuzzy hash. Traditional hash algorithms like MD5, SHA1, and SHA256 take a file (such as an image or document) of arbitrary length and output a fixed length number that is, effectively, the file’s digital fingerprint. For instance, if you take the MD5 of this picture then the resulting fingerprint is <b>605c83bf1bba62e85f4f5fccc56bc128</b>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56l9Sl2FmrYei4gHB1ejSi/edb419fe336a2f3b81dcd7964f93bf3f/base-image.jpg" />
            
            </figure><p>The base image</p><p>If we change a single pixel in the picture to be slightly off white rather than pure white, it's visually identical but the fingerprint changes completely to <b>42ea4fb30a440d8787477c6c37b9daed</b>. As you can see from the two fingerprints, a small change to the image results in a massive and unpredictable change to the output of a traditional hash.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dbf5FVy09FFVrTDvhTbkm/9a64cea0427199fe630f212da2c53dd2/base-image-pixel-changed.jpg" />
            
            </figure><p>The base image with a single pixel changed</p><p>This is great for some uses of hashing where you want to definitively identify if the document you're looking at is exactly the same as the one you've seen before. For example, if an extra zero is added to a digital contract, you want the hash of the document used in its signature to no longer be valid.</p>
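The avalanche behavior described above is easy to see in a few lines of code. This sketch hashes two hypothetical byte strings that differ by a single byte, standing in for the one-pixel change (the byte values are illustrative, not the actual image data):

```python
import hashlib

# Two stand-ins for image files: identical except for a single byte,
# analogous to nudging one pixel from pure white to slightly off white.
original = bytes([255] * 64)
altered = bytes([254] + [255] * 63)

fp_original = hashlib.md5(original).hexdigest()
fp_altered = hashlib.md5(altered).hexdigest()

print(fp_original)
print(fp_altered)

# Count hex positions where the two fingerprints happen to agree; for a
# traditional hash, a one-byte input change leaves almost nothing in common.
matching = sum(a == b for a, b in zip(fp_original, fp_altered))
print(f"{matching} of 32 hex digits match")
```

The same holds for SHA1 or SHA256: any single-bit change to the input scrambles the entire fingerprint, which is exactly the property that makes traditional hashes unsuitable for finding near-duplicate images.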
    <div>
      <h3>Fuzzy Hashing</h3>
      <a href="#fuzzy-hashing">
        
      </a>
    </div>
    <p>However, in the case of CSAM, this characteristic of traditional hashing is a liability. In order to avoid detection, the criminals producing CSAM resize, add noise, or otherwise alter the image in such a way that it looks the same but would result in a radically different hash.</p><p>Fuzzy hashing works differently. Instead of determining whether two photos are exactly the same, it attempts to capture the essence of a photograph. This allows the software to calculate hashes for two images and then compare the "distance" between the two. While the fuzzy hashes may still be different between two photographs that have been altered, unlike with traditional hashing, you can compare the two and see how similar the images are.</p><p>So, in the two photos above, the fuzzy hash of the first image is</p>
            <pre><code>00e308346a494a188e1042333147267a
653a16b94c33417c12b433095c318012
5612442030d1484ce82c613f4e224733
1dd84436734e4a5c6e25332e507a8218
6e3b89174e30372d</code></pre>
            <p>and the second image is</p>
            <pre><code>00e308346a494a188e1042333147267a
653a16b94c33417c12b433095c318012
5612442030d1484ce82c613f4e224733
1dd84436734e4a5c6e25332e507a8218
6e3b89174e30372d</code></pre>
            <p>There's only a slight difference between the two images in terms of pixels, and the fuzzy hashes are identical.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1XABj7ngnOZ2SrnRyOl3mI/1adc7f88b188d15057b155b006d686b8/base-image-altered-2.jpeg.jpeg" />
            
            </figure><p>The base image after increasing the saturation, changing to sepia, adding a border and then adding random noise.</p><p>Fuzzy hashing is designed to be able to identify images that are substantially similar. For example, we modified the image of dogs by first enhancing its color, then changing it to sepia, then adding a border and finally adding random noise.  The fuzzy hash of the new image is</p>
            <pre><code>00d9082d6e454a19a20b4e3034493278
614219b14838447213ad3409672e7d13
6e0e4a2033de545ce731664646284337
1ecd4038794a485d7c21233f547a7d2e
663e7c1c40363335</code></pre>
            <p>This looks quite different from the hash of the unchanged image above, but fuzzy hashes are compared by seeing how close they are to each other.</p><p>The largest possible distance between two images is about 5 million units. These two fuzzy hashes are just 4,913 units apart (the smaller the number, the more similar the images), indicating that they are substantially the same image.</p><p>Compare that with two unrelated photographs. The photograph below has a fuzzy hash of</p>
            <pre><code>011a0d0323102d048148c92a4773b60d
0d343c02120615010d1a47017d108b14
d36fff4561aebb2f088a891208134202
3e21ff5b594bff5eff5bff6c2bc9ff77
1755ff511d14ff5b</code></pre>
            
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59LaVGZVc82yuxKulGMcWM/7e8064c951ef89935735b82048761440/image-1.jpg" />
            
            </figure><p>The photograph below has a fuzzy hash of</p>
            <pre><code>062715154080356b8a52505955997751
9d221f4624000209034f1227438a8c6a
894e8b9d675a513873394a2f3d000722
781407ff475a36f9275160ff6f231eff
465a17f1224006ff</code></pre>
            
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67Np3uJW3La9k5dzGJ3tEC/b896e80908d3c9079bf9e3320724b040/D18OTNFaeIrwcpN2MmNhljU9mau-GND77Cu9qV8lWo8Na3ciZlQ7pTg-wP9bqDK4ILo5-6yCM96uVlvKkDnrxbOCK3-XhlAQx-ln7AJp-k6_5YClzo9jXvkCzRb.jpeg" />
            
            </figure><p>The distance between the two hashes is calculated as 713,061. Through experimentation, it's possible to set a distance threshold under which you can consider two photographs to be likely related.</p>
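The exact distance metric used by production tools is intentionally not public, so as a rough illustration the sketch below assumes a simple byte-wise absolute-difference (L1) distance over hex-encoded hashes and applies a hypothetical threshold. The hash fragments are the first blocks of the fuzzy hashes shown above.

```python
def fuzzy_distance(hex_a: str, hex_b: str) -> int:
    """Byte-wise L1 distance between two equal-length hex-encoded hashes.

    A stand-in metric for illustration only; the real comparison used by
    tools like PhotoDNA is intentionally not published.
    """
    a, b = bytes.fromhex(hex_a), bytes.fromhex(hex_b)
    return sum(abs(x - y) for x, y in zip(a, b))

def likely_match(hex_a: str, hex_b: str, threshold: int = 10_000) -> bool:
    # Hypothetical threshold: distances below it are treated as
    # "substantially the same image" and flagged.
    return fuzzy_distance(hex_a, hex_b) < threshold

# First 16-byte blocks of the fuzzy hashes shown earlier in the post.
base    = "00e308346a494a188e1042333147267a"
altered = "00d9082d6e454a19a20b4e3034493278"

print(fuzzy_distance(base, base))     # 0: identical hashes
print(fuzzy_distance(base, altered))  # small: similar images
print(likely_match(base, altered))    # True under this toy threshold
```

Whatever the real metric, the workflow is the same: compute a distance, then compare it against a tunable threshold that trades false positives against false negatives.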
    <div>
      <h3>Fuzzy Hashing's Intentionally Black Box</h3>
      <a href="#fuzzy-hashings-intentionally-black-box">
        
      </a>
    </div>
    <p>How does it work? While a lot of work on fuzzy hashing has been published, the innards of the process are intentionally a bit of a mystery. The New York Times recently <a href="https://www.nytimes.com/interactive/2019/11/09/us/internet-child-sex-abuse.html">wrote a story</a> that was probably the most public discussion of how such technology works. The challenge is that if the criminal producers and distributors of CSAM knew exactly how such tools worked, they might be able to craft their image alterations to defeat detection. To be clear, Cloudflare will be running the CSAM Scanning Tool on behalf of the website operator from within our secure points of presence. We will not be distributing the software directly to users. We will remain vigilant for potential attempted abuse of the platform, and will take prompt action as necessary.</p>
    <div>
      <h3>Tradeoff Between False Negatives and False Positives</h3>
      <a href="#tradeoff-between-false-negatives-and-false-positives">
        
      </a>
    </div>
    <p>We have been working with a number of authorities on how we can best roll out this functionality to our customers. One of the challenges for a network with as diverse a set of customers as Cloudflare's is what the appropriate threshold should be for the comparison distance between fuzzy hashes.</p><p>If the threshold is too strict — meaning that it's closer to a traditional hash and two images need to be virtually identical to trigger a match — then you're more likely to have many false negatives (i.e., CSAM that isn't flagged). If the threshold is too loose, then it's possible to have many false positives. False positives may seem like the lesser evil, but there are legitimate concerns that increasing the possibility of false positives at scale could waste limited resources and further <a href="https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html">overwhelm the existing ecosystem</a>. We will work to iterate the CSAM Scanning Tool to provide more granular control to the website owner while supporting the ongoing effectiveness of the ecosystem. Today, we believe we can offer a good first set of options for our customers that will allow us to more quickly flag CSAM without overwhelming the resources of the ecosystem.</p>
    <div>
      <h3>Different Thresholds for Different Customers</h3>
      <a href="#different-thresholds-for-different-customers">
        
      </a>
    </div>
    <p>The same desire for a granular approach was reflected in our conversations with our customers. When we asked what was appropriate for them, the answer varied radically based on the type of business, how sophisticated its existing abuse process was, and its likely exposure level and tolerance for the risk of CSAM being posted on their site.</p><p>For instance, a mature social network using Cloudflare with a sophisticated abuse team may want the threshold set quite loose, but not want the material to be automatically blocked because they have the resources to manually review whatever is flagged.</p><p>A new startup dedicated to providing a forum to new parents may want the threshold set quite loose and want any hits automatically blocked because they haven't yet built a sophisticated abuse team and the risk to their brand is so high if CSAM material is posted, even if that will result in some false positives.</p><p>A commercial financial institution may want to set the threshold quite strict because they're less likely to have user generated content and would have a low tolerance for false positives, but then automatically block anything that's detected because if somehow their systems are compromised to host known CSAM they want to stop it immediately.</p>
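The three scenarios above reduce to two per-site knobs: how strict the match threshold is, and whether a match is blocked automatically or merely flagged. A hypothetical sketch of such a configuration — the names and values here are illustrative, not Cloudflare's actual settings or API:

```python
from dataclasses import dataclass

@dataclass
class ScanPolicy:
    match_threshold: int  # higher (looser) values flag more near-matches
    auto_block: bool      # True: block with HTTP 451; False: flag for manual review

# Illustrative policies for the three customer profiles described above.
policies = {
    # Sophisticated abuse team: loose threshold, manual review of flags.
    "mature_social_network":   ScanPolicy(match_threshold=500_000, auto_block=False),
    # No abuse team yet, high brand risk: loose threshold, block automatically.
    "parenting_forum_startup": ScanPolicy(match_threshold=500_000, auto_block=True),
    # Little user content, low false-positive tolerance: strict threshold, auto-block.
    "financial_institution":   ScanPolicy(match_threshold=5_000,   auto_block=True),
}

for name, p in policies.items():
    action = "auto-block (451)" if p.auto_block else "flag for review"
    print(f"{name}: threshold={p.match_threshold}, on match: {action}")
```

The point of the sketch is that threshold and blocking behavior vary independently, which is why a single global setting cannot serve all three customer types.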
    <div>
      <h3>Different Requirements for Different Jurisdictions</h3>
      <a href="#different-requirements-for-different-jurisdictions">
        
      </a>
    </div>
    <p>There also may be challenges based on where our customers are located and the laws and regulations that apply to them. Depending on where a customer's business is located and where they have users, they may choose to use one, more than one, or all of the different available hash lists.</p><p>In other words, one size does not fit all and, ideally, we believe allowing individual site owners to set the parameters that make the most sense for their particular site will result in lower false negative rates (i.e., more CSAM being flagged) than if we try to set one global standard for every one of our customers.</p>
    <div>
      <h3>Improving the Tool Over Time</h3>
      <a href="#improving-the-tool-over-time">
        
      </a>
    </div>
    <p>Over time, we are hopeful that we can improve CSAM screening for our customers. We expect that we will add additional lists of hashes from numerous global agencies for our customers with users around the world to subscribe to. We're committed to enabling this flexibility without overly burdening the ecosystem that is set up to fight this horrible crime.</p><p>Finally, we believe there may be an opportunity to help build the next generation of fuzzy hashing. For example, the software can only scan images that are at rest in memory on a machine, not those that are streaming. We're talking with Hany Farid, the former Dartmouth professor who now teaches at the University of California, Berkeley, about ways that we may be able to build a more flexible fuzzy hashing system in order to flag images before they're even posted.</p>
    <div>
      <h3>Concerns and Responsibility</h3>
      <a href="#concerns-and-responsibility">
        
      </a>
    </div>
    <p>One question we asked ourselves back when we began to consider offering CSAM scanning was whether we were the right place to be doing this at all. We share the universal concern about the distribution of depictions of horrific crimes against children, and believe such material should have no place on the Internet. However, Cloudflare is a network infrastructure provider, not a content platform.</p><p>But we thought there was an appropriate role for us to play in this space. Fundamentally, Cloudflare delivers tools to our more than 2 million customers that were previously reserved for only the Internet giants. Without us, the security, performance, and reliability services that we offer, often for free, would have been extremely expensive or limited to Internet giants like Facebook and Google.</p><p>Today there are startups that are working to build the next Internet giant and compete with Facebook and Google because they can use Cloudflare to be secure, fast, and reliable online. But, as the regulatory hurdles around dealing with incredibly difficult issues like CSAM continue to increase, many of them lack access to sophisticated tools to scan proactively for CSAM. You have to get big to get into the club that gives you access to these tools, and, concerningly, being in the club is increasingly a prerequisite to getting big.</p><p>If we want more competition for the Internet giants, we need to make these tools available more broadly and to smaller organizations. From that perspective, we think it makes perfect sense for us to help democratize this powerful tool in the fight against CSAM.</p><p>We hope this will help enable our customers to build more sophisticated content moderation teams appropriate for their own communities and will allow them to scale in a responsible way to compete with the Internet giants of today. 
That is directly aligned with our mission of helping build a better Internet, and it's why we're announcing that we will be making this service available for free for all our customers.</p> ]]></content:encoded>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Abuse]]></category>
            <guid isPermaLink="false">54YgxLpRSXSe6u6jDEfW2r</guid>
            <dc:creator>Justin Paine</dc:creator>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Digital Evidence Across Borders and Engagement with Non-U.S. Authorities]]></title>
            <link>https://blog.cloudflare.com/digital-evidence-across-borders-and-engagement-with-non-us-authorities/</link>
            <pubDate>Thu, 28 Feb 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Since we first started reporting in 2013, our transparency report has focused on requests from U.S. law enforcement. Previous versions of the report noted that, as a U.S. company, we ask non-U.S. law enforcement agencies to obtain formal U.S. legal process before providing customer data.  ]]></description>
            <content:encoded><![CDATA[ <p>Since we first started reporting in 2013, our transparency report has focused on requests from U.S. law enforcement. Previous versions of the report noted that, as a U.S. company, we ask non-U.S. law enforcement agencies to obtain formal U.S. legal process before providing customer data.</p><p>As more countries pass laws that seek to extend beyond their national borders and as we expand into new markets, the question of how to handle requests from non-U.S. law enforcement has become more complicated. It seems timely to talk about our engagement with non-U.S. law enforcement and how our practice is changing. But first, some background on the changes that we’ve seen over the last year.</p>
    <div>
      <h3>Law enforcement access to data across borders</h3>
      <a href="#law-enforcement-access-to-data-across-borders">
        
      </a>
    </div>
    <p>The explosion of cloud services, and the fact that data may be stored outside the countries of residence of those who generated it, has been a challenge for governments conducting law enforcement investigations. A number of U.S. laws, like the Stored Communications Act or the Electronic Communications Privacy Act, restrict companies from providing particular types of data, such as the content of communications, to any person or entity, including foreign law enforcement agencies, without U.S. legal process. To get access to electronic data stored outside their home borders, law enforcement agencies around the world have long used Mutual Legal Assistance Treaties (MLATs) that allow one country to ask for another country’s help to get access to evidence. Unfortunately, the MLAT process can be slow and cumbersome.</p><p>Countries frustrated by the inability of law enforcement to quickly gather evidence held outside their borders have taken matters into their own hands. Some have proposed laws mandating that important data about their citizens remain in country, where it can be easily accessed when requested. Others have proposed laws that would allow law enforcement to get access to data wherever it is stored, which puts companies in the position of potentially violating one country’s laws in order to comply with another’s.</p><p>In short, a new paradigm that allows law enforcement to access appropriate digital evidence across borders, with sufficient procedural safeguards to protect our users’ privacy and ensure due process, is long overdue.</p>
    <div>
      <h3>U.S. CLOUD Act</h3>
      <a href="#u-s-cloud-act">
        
      </a>
    </div>
    <p>In March 2018, the U.S. Congress passed the Clarifying Lawful Overseas Use of Data (CLOUD) Act as part of a large bill funding the government. The idea behind the law is that governments that protect their citizens’ due process rights and civil liberties should be able to get access to electronic content related to their citizens when conducting law enforcement investigations, wherever that data is stored.</p><p>The CLOUD Act anticipates that the U.S. government will enter into agreements with other countries’ governments to give each of the participating governments access to data stored in other participating countries for the purpose of investigating and prosecuting certain crimes. Under the law, the U.S. government will have to determine that a country has “robust substantive and procedural protections for privacy and civil liberties” before entering into an agreement with that country. After a country enters a formal agreement with the United States, U.S. companies would no longer be restricted by U.S. law from providing that country’s law enforcement with access to content data in response to a valid law enforcement request.</p><p>From a practical standpoint, the CLOUD Act envisions that U.S. companies like Cloudflare will be providing information directly to governments that have entered into agreements with the U.S. government. The idea is to change the relevant question away from “where is the data stored?” to “is the person being investigated a citizen or resident of the country asking for the information?”, recognizing every government’s right to investigate crimes that occur within its borders or affect its citizens.</p>
    <div>
      <h3>Movement in Europe</h3>
      <a href="#movement-in-europe">
        
      </a>
    </div>
    <p>Governments outside the United States have also moved forward with proposals that would provide law enforcement agencies authority to obtain information related to their citizens across borders. The United Kingdom, for example, has been working to update their laws and negotiate a bilateral agreement with the United States for access to data maintained by U.S. companies, consistent with the framework established in the CLOUD Act.</p><p>The European Union has also been active in moving forward with a framework on obtaining electronic evidence across borders. Much like the U.S. CLOUD Act, the European Commission’s eEvidence Regulation would allow EU Member States to seek digital evidence outside of their national borders provided that fundamental rights are protected. The European Commission also envisions entering into negotiations with U.S. authorities on data sharing arrangements under the mandate of EU Member States.</p>
    <div>
      <h3>So where does all of this leave us?</h3>
      <a href="#so-where-does-all-of-this-leave-us">
        
      </a>
    </div>
    <p>As a U.S. company that stores customer records inside the United States, Cloudflare has long held the view that non-U.S. governments should have to follow U.S. due process requirements in order to obtain any records about our customers. When non-U.S. governments have come to us requesting records, we have explained the nature of our service and, to the extent they were interested in obtaining data, encouraged them to submit a request to the U.S. Department of Justice through the MLAT process.</p><p>But these processes serve an important function and are not just intended to delay the efforts of foreign law enforcement. They have helped us address some of the more challenging requests that we have seen. Let’s say, for example, law enforcement from an otherwise-respected nation sent us a court order demanding information about a website run by a vocal group of dissenters or even the organizers of a separatist referendum and also asked us to redirect that website to a location of their choosing. In that case, we would direct that foreign agency to submit an MLAT request. In situations like this, we might not receive subsequent legal process from the U.S. government, either because the government declined to ask the Department of Justice for an MLAT related to activity that could be viewed as political or because the Department of Justice declined to process it.</p><p>With the changing legal and policy landscape, as well as our increased presence in non-U.S. locations, we think it’s time to take a step towards the new framework that is taking shape.</p>
    <div>
      <h3>What type of information could we provide to non-US law enforcement?</h3>
      <a href="#what-type-of-information-could-we-provide-to-non-us-law-enforcement">
        
      </a>
    </div>
    <p>The overwhelming majority of information that U.S. law enforcement seeks from Cloudflare through legal process is what we consider to be basic subscriber data -- the type of information that customers give us when they sign up for service. That includes things like name, email address, physical address, phone number, the means and source of payment, and non-content information about a customer’s account, such as data about login times and IP addresses used to log in to the account.</p><p>Although we consider this account information to be private customer data, worthy of protection, we share the commonly held view that it is less sensitive than information considered to be content, such as email communications or documents created by users. In fact, U.S. law allows law enforcement to compel us to provide basic subscriber data with a subpoena, a type of legal process that does not require prior judicial review.</p><p>Recent policy discussions have convinced us that there may be situations where it is appropriate to provide this type of basic subscriber information to non-U.S. law enforcement in response to non-U.S. legal process similar to a subpoena, a view in line with that of many other tech companies. We may therefore respond to requests for subscriber information if a government is seeking information about a crime in its country or about its citizens, we have employees in the country, and appropriate due process requirements and international standards have been met. We will also consider whether the country has signed a CLOUD Act agreement with the United States.</p><p>The CLOUD Act and other existing U.S. laws govern the provision of more sensitive content data to non-U.S. law enforcement. U.S. companies are legally prohibited from providing content data to a non-U.S. government absent a U.S. CLOUD Act agreement with that country. Given the nature of our service, however, we rarely have records that constitute content that we could provide to law enforcement regardless of jurisdiction.</p>
    <div>
      <h3>Overall Principles We Follow</h3>
      <a href="#overall-principles-we-follow">
        
      </a>
    </div>
    <p>When we talk about our relationship with law enforcement, we often say that it is not Cloudflare's intent to make law enforcement's work any harder or any easier. We respect both that law enforcement agencies have a job to do and that our customers have rights relating to how their data is shared with law enforcement.</p><p>Regardless of what government is asking, there are certain standards we believe must be followed before we turn over customer data. Our goal is to maintain a healthy and open relationship with law enforcement officials so that they understand and respect our positions on each of these standards. The principles which remain important to us are as follows:</p><ul><li><p><b>Require Due Process.</b> Cloudflare requires government entities seeking access to personal customer information to obtain appropriate legal process, including prior independent judicial review of any request for content.</p></li><li><p><b>Provide Notice.</b> We believe our customers deserve to be notified when we receive legal requests for their information, whether the requests come from law enforcement or private parties involved in civil litigation. We will provide that notice before we disclose the information, unless prohibited by law.</p></li><li><p><b>Protect Privacy and User Rights.</b> Whether inside or outside the United States, Cloudflare will fight law enforcement requests that we believe are overbroad, illegal, or wrongly issued. This includes requests to delay or prevent notice that appear unnecessarily broad, given the government interests at stake.</p></li><li><p><b>Be Transparent.</b> We believe the ability to report on the numbers and types of requests that we get from law enforcement, as well as how we respond, is critical to building trust with our customers. 
We will fight requests that unnecessarily restrict our ability to be transparent with our users.</p></li></ul><p>Consistent with the last standard, we also intend to update our transparency report to reflect any requests that we receive from non-U.S. law enforcement authorities, whether for user information or anything else.</p> ]]></content:encoded>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Politics]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Due Process]]></category>
            <category><![CDATA[Community]]></category>
            <guid isPermaLink="false">4YcHdL78G4t1QL1hKNYsbS</guid>
            <dc:creator>Caroline Greer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Out of the Clouds and into the weeds: Cloudflare’s approach to abuse in new products]]></title>
            <link>https://blog.cloudflare.com/out-of-the-clouds-and-into-the-weeds-cloudflares-approach-to-abuse-in-new-products/</link>
            <pubDate>Wed, 27 Feb 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ In a blogpost yesterday, we addressed the principles we rely upon when faced with numerous and various requests to address the content of websites that use our services.  ]]></description>
            <content:encoded><![CDATA[ <p>In a <a href="/unpacking-the-stack-and-addressing-complaints-about-content/">blogpost</a> yesterday, we addressed the principles we rely upon when faced with numerous and various requests to address the content of websites that use our services. We believe the building blocks that we provide for other people to share and access content online should be provided in a content-neutral way. We also believe that our users should understand the policies we have in place to address complaints and law enforcement requests, the type of requests we receive, and the way we respond to those requests. In this post, we do the dirty work of addressing how those principles are put into action, specifically with regard to Cloudflare’s expanding set of features and products.</p>
    <div>
      <h3>Abuse reports and new products</h3>
      <a href="#abuse-reports-and-new-products">
        
      </a>
    </div>
    <p>Currently, we receive abuse reports and law enforcement requests on fewer than one percent of the more than thirteen million domains that use Cloudflare’s network. Although the reports we receive run the gamut -- from phishing, malware or other technical abuses of our network to complaints about content -- the overwhelming majority are allegations of copyright violations or violations of other intellectual property rights. Most of the complaints that we receive do not identify concerns with particular Cloudflare services or products.</p><p>In the last year or so, we’ve also launched a variety of new products, including our video product (<a href="https://www.cloudflare.com/products/stream-delivery/">Cloudflare Stream</a>), a serverless edge computing platform (<a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a>), a <a href="https://www.cloudflare.com/products/registrar/">self-serve registrar service</a>, and a privacy-focused recursive resolver (<a href="https://1.1.1.1/">1.1.1.1</a>), among others. Each of these services raises its own complex set of questions.  </p><p>There is no one-size-fits-all solution to address possible abuse of our products. Different types of services come with different expectations, as well as different legal and contractual obligations. Yet as we discussed in relation to our focus on transparency on <a href="/cloudflare-transparency-update-joining-cloudflares-flock-of-warrant-canaries-2/">Monday</a>, being fully transparent means being consistent and predictable so our users can anticipate how we will respond to new situations.</p>
    <div>
      <h3>Developing an approach to abuse</h3>
      <a href="#developing-an-approach-to-abuse">
        
      </a>
    </div>
    <p>To help us sort through how to address both complaints and law enforcement requests, when we introduce new products or features, we ask ourselves four basic sets of questions about the relationship between the service we’re providing and potential complaints about content:</p><ul><li><p>First, how are Cloudflare’s services interacting with the website content? For example, are we doing anything more than providing security and acting as a reliable conduit from one location to another?  Are we providing definitive storage of content? Did we provide the website its domain name through our registrar service? Is the Cloudflare service or product doing anything that could be seen as organizing, analyzing, or promoting content?</p></li><li><p>Second, what type of action might a law enforcement or private complainant want us to take and what are the consequences of it?  What sort of information might law enforcement request -- private information about the user, content of what was sent over the Internet, or logs that would track activity?  Will third parties request information about a website; would they request removal of content from the Internet? Would removing our services address the problem presented?</p></li><li><p>Third, what laws, regulations or contractual requirements apply? Does the nature of our interaction with the online content impact our legal obligations? Has the law enforcement request or regulation satisfied basic principles of the rule of law or due process?</p></li><li><p>Fourth, will our response to the matter presented scale to address the variety of different requests or complaints we may receive over time, covering a variety of different subject matters and viewpoints? Can we craft a principled and content-neutral process to respond to the request? 
Would our response have an overbroad impact, either by impacting more than the problematic content or changing the Internet in jurisdictions beyond the one that has issued the law or regulation at issue?</p></li></ul><p>Although those preliminary questions help us determine what actions we must take, we also do our best to think about the broader implications on the Internet of any steps we might take to address complaints.</p>
    <div>
      <h2>So how does this work in practice?</h2>
      <a href="#so-how-does-this-work-in-practice">
        
      </a>
    </div>
    
    <div>
      <h3>Response to abuse complaints for customers using our proxy and CDN services</h3>
      <a href="#response-to-abuse-complaints-for-customers-using-our-proxy-and-cdn-services">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fYyp9YRicdb7b4tQSIBnS/6ae08708e364e32a5c907f04d1b2459c/image5.png" />
            
            </figure><p>People often come to Cloudflare with abuse complaints because our network sits in front of our customers’ sites in order to protect them from cyber attacks and to improve the performance of their website.</p><p>There aren’t a lot of laws or regulations that impose obligations to address content on those providing security or <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN services</a>, for good reason. Most people complaining about content are looking for someone who can take that content off the Internet entirely. As we’ve talked about on <a href="/thoughts-on-abuse/">other</a> <a href="/anonymity-and-abuse-reports/">occasions</a>, Cloudflare is unable to remove content that we don’t host, so we therefore try to make sure that the complaint gets to its intended audience -- the hosting provider who has the ability to remove the material from the Internet. As described on <a href="https://www.cloudflare.com/abuse/">our abuse page</a>,  complaining parties automatically receive information about how to contact the hosting provider, and unless the complaining party requests otherwise, abuse complaints are automatically forwarded to both the website owner and the hosting company to allow them to take action.</p><p>This approach has another benefit, consistent with the fourth set of questions we ask ourselves. It prevents addressing content with an unnecessarily blunt tool. Cloudflare is unable to remove its security and CDN services from only a sliver of problematic content on a website.  If we remove our services, it has to be from an entire domain or subdomain, which may cause considerable collateral damage. For example, think of the vast array of sites that allow individual independent users to upload content (“user generated content”). 
A website owner or host may be able to curate or deal with specific content, but if companies like Cloudflare had to respond to allegations that a single user uploaded a single piece of concerning content by removing our core services from an entire site, leaving it vulnerable to cyberattack, those sites would be much more difficult to operate, and the content contributed by all other users would be put at risk.</p><p>Similarly, there are a number of different infrastructure services that cooperate to make sure each connection on the Internet can happen successfully -- DNS, <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">registrars</a>, registries, security, etc. If each of the providers of those services, any one of which could put the entire transmission at risk, applies blunt tools to address content, then the aperture of what content will stay online will get smaller and smaller. Those are bad results for the Internet. Actions to address troubling content online should focus narrowly on the actual concern to avoid unintended collateral consequences.</p><p>While we are unable to remove content we do not host, we are able to take steps to address abuse of our services, such as phishing and malware attacks. Phishing attacks typically fall into two buckets -- a website that has been compromised (unintentional phishing) or a website solely dedicated to intentionally misleading others to gather information (intentional phishing). These buckets are treated differently.</p><p>We discussed earlier that we aim to use the most precise tools possible when addressing abuse, and we take a similar approach for unintentional phishing content. If a website has been compromised (typically through an outdated CMS), we can place a warning interstitial page in front of that specific phishing content to protect users from accidentally falling victim to the attack.
In the majority of situations, this action is taken at a URL level of granularity.</p><p>In the case of intentional phishing attacks, such as a domain like my-totally-secure-login-page{.}com, once our Trust &amp; Safety team is able to confirm the presence of phishing content on the website, we take broader action, including a domain-wide interstitial warning page (effectively *my-totally-secure-login-page{.}com/*), and in some cases we may terminate our services to the intentionally malicious domain. To be clear, though, this does not remove the phishing content, which remains hosted by the website’s hosting provider. Ultimately, action still needs to be taken by the website owner or hosting provider to fully remove the underlying issue.</p>
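The difference between URL-level and domain-wide interstitials can be sketched in a few lines of Python. This is an illustrative model only, not Cloudflare's actual system; the blocklist entries, domains, and function name are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical blocklist entries (illustrative only).
# Intentional phishing: the whole domain is malicious, so match any path on it.
DOMAIN_WIDE = {"intentional-phish.example"}
# Unintentional phishing: a compromised site, so match only the specific URL.
URL_LEVEL = {"https://compromised-shop.example/wp-content/fake-login.html"}

def needs_interstitial(url: str) -> bool:
    """Decide whether a warning interstitial should be shown for this URL."""
    host = urlparse(url).hostname or ""
    if host in DOMAIN_WIDE:
        return True          # domain-wide: effectively host/*
    return url in URL_LEVEL  # URL-level: only the exact compromised page
```

The key point the sketch captures is scope: a domain-wide entry covers every path on the malicious domain, while a URL-level entry leaves the rest of a compromised site untouched.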
    <div>
      <h3>Response to complaints about content stored definitively on our network</h3>
      <a href="#response-to-complaints-about-content-stored-definitively-on-our-network">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Mz81IWy2rQJhZgHnVwXJ9/df8e0f2ec7ca2a0d1240131009164bbc/image4.png" />
            
            </figure><p>We think our approach requires a different set of responses for the small, but growing, number of Cloudflare products that include some sort of storage. Cloudflare Stream, for example, allows users to store, transcode, distribute, and play back their videos. And Cloudflare Workers may allow users to store certain content at the edge of our network without a core host server. Although we are not a website hosting provider, these products mean we may be the only place where a certain piece of content is stored in some cases.</p><p>When we are the definitive repository for content through any of our services, Cloudflare will carefully review any complaints about that content and may disable access to it in response to a valid legal takedown request from either government or private actors. Most often, these legal takedown requests are from individuals alleging copyright infringement. Under the U.S. Digital Millennium Copyright Act, there is a specific process online storage providers follow to remove or disable access to content alleged to infringe copyright and provide an opportunity for those who post the material to contest that it is infringing. We have already begun implementing this process for content stored on our network. That’s why we’ve added a new section to our <a href="https://cloudflare.invisionapp.com/share/RUPOO3MPDKH#/screens">transparency report</a> covering requests for content takedown pursuant to U.S. copyright law for content stored on our network.</p><p>We haven’t received any government requests yet to take down content stored on our network. Given the significant potential impact on freedom of expression from a government ordering that content be removed, if we do receive those requests in the future, we will carefully analyze the factual basis and legal authority for the request.
If we determine that the order is valid and requires Cloudflare action, we will do our best to address the request as narrowly as possible, for example, by clarifying overbroad requests or limiting blocking of access to the content to those areas where it violates local law, a practice known as “geo-blocking”. We will also update our transparency report on any government requests that we receive in the future and any actions we take.</p>
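Geo-blocking narrows a takedown by keying the decision on both the content and the requester's jurisdiction. The sketch below is a hypothetical illustration; the table, country code, and function name are invented and do not describe Cloudflare's actual mechanism.

```python
# Hypothetical geo-blocking table: each blocked path maps to the set of
# country codes where a legal order actually applies (illustrative only;
# "XX" stands in for the jurisdiction that issued the order).
GEO_BLOCKS = {
    "/contested-article": {"XX"},
}

def is_blocked(path: str, requester_country: str) -> bool:
    """Withhold content only in countries where the order is legally valid."""
    return requester_country in GEO_BLOCKS.get(path, set())
```

Everywhere else, the same content remains reachable, which is exactly the "as narrowly as possible" outcome the paragraph above describes.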
    <div>
      <h3>Response to complaints about our registrar service</h3>
      <a href="#response-to-complaints-about-our-registrar-service">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FxcoT7686OkzBPJTPM7tN/ed90c776932edafbc6b95d59377d1703/registrar.png" />
            
            </figure><p>If you sign up for our self-serve registrar service, you’re legally bound by the terms of our contract with the Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit organization responsible for coordinating unique Internet identifiers across the world, as well as our contract with the relevant domain name registry.  </p><p>Our registrar-focused <a href="https://www.cloudflare.com/products/registrar/abuse/">web page</a> for abuse reporting does not reference abuse complaints about a website’s content.  In our role as a domain registrar, Cloudflare has no control or ability to remove particular content from a domain. We would be limited to simply revoking or suspending the domain registration altogether which would remove the website owner’s control over the <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a>. Such actions would typically only be done at the direction of the relevant domain name registry, in accordance with their registration rules associated with the <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">Top Level Domain</a>, or more usually to address incidents of abuse as raised by the registry or ICANN. We therefore treat content-related complaints submitted based on our registrar services the same way we treat complaints about content for sites using our CDN or proxy services.  We forward them to the website owner and the website hosting company to allow them to take action or we work in tandem with the relevant registry and at their direction.</p><p>Running a registrar service comes with other legal obligations. As an ICANN accredited registrar, part of our contractual obligations include adhering to third party dispute resolution processes regarding trademark disputes, as handled by providers such as the World Intellectual Property Organization (WIPO) and the National Arbitration  Forum. 
Also, we continue to be part of the ICANN community discussions on how best to handle the collection, publication and provision of access to personal data in the WHOIS database in a manner consistent with the EU’s General Data Protection Regulation (GDPR) and other privacy frameworks. We will provide more updates on that front when the discussions have ripened.</p>
    <div>
      <h3>Response to complaints about IPFS</h3>
      <a href="#response-to-complaints-about-ipfs">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5T3SHdqfJMZSvtb0C4LBbo/84cd4798a1cb309eeae75972d2a3ca8e/ipfs.png" />
            
            </figure><p>Back in September, we <a href="/distributed-web-gateway/">announced</a> that Cloudflare would be providing a gateway to the InterPlanetary File System (IPFS). Cloudflare’s IPFS gateway is a way to access content stored on the IPFS peer-to-peer network. Because Cloudflare is not acting as the definitive storage for the IPFS network, we do not have the ability to remove content from that network. We simply operate as a cache in front of IPFS, much as we do for our more traditional customers.</p><p>Because content is stored on potentially dozens of nodes in IPFS, if one node that was caching content goes down, the network will just look for the same content on another node. That fact makes IPFS exceptionally resilient. That same resilience, however, means that unlike with our traditional customers, with IPFS, there is no single host to inform of a complaint about content stored on the IPFS network.  Cloudflare often has no knowledge of who the owner is of content being accessed through the gateway, and this makes it impossible to notify the specific owner when we receive a complaint.</p><p>The law hasn’t yet quite caught up with distributed networks like IPFS, and there’s a notable debate among IPFS users about how best to deal with abuse. Some argue that having problematic content stored on IPFS will discourage adoption of the protocol, and advocate for the development of lists of problematic hashes that  IPFS gateways could choose to block. Others point out that any mechanism intended to block IPFS content will itself be subject to abuse. We don’t have the answer to that debate, but it does demonstrate to us the importance of being thoughtful about how we proceed.</p><p>For the time being, our plan is to respond to U.S. court orders that require us to clear our cache of content stored on IPFS. 
More importantly, however, we intend to report in future transparency reports on any law enforcement requests we receive to clear our IPFS cache, to ensure continued public discussion.</p>
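Because IPFS addresses content by a hash of its bytes, a gateway-side blocklist can match identical content no matter which node serves it. The sketch below uses a plain SHA-256 digest as a stand-in for IPFS's real multihash-based CIDs; the blocklist and function names are hypothetical.

```python
import hashlib

def content_id(data: bytes) -> str:
    # Stand-in for an IPFS CID: identical bytes always yield the same identifier.
    return hashlib.sha256(data).hexdigest()

# Hypothetical list of identifiers a gateway has chosen not to serve.
BLOCKLIST = {content_id(b"known bad payload")}

def gateway_should_serve(data: bytes) -> bool:
    """Refuse known-bad content regardless of which IPFS node supplied it."""
    return content_id(data) not in BLOCKLIST
```

This is why hash lists are attractive for gateways: the network's resilience (the same content on many nodes) does not defeat the check, since every copy hashes to the same identifier.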
    <div>
      <h3>Cloudflare Resolvers: 1.1.1.1 and Resolver for Firefox</h3>
      <a href="#cloudflare-resolvers-1-1-1-1-and-resolver-for-firefox">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/atuUDCyhmzyh4RqbtOd6U/76647f964b85043f8d1296e5dd038dfd/1111-1.gif" />
            
            </figure><p>In April of last year, we <a href="/announcing-1111/">launched</a> our first DNS resolver, 1.1.1.1. In June, we partnered with Mozilla to provide direct DNS resolution from within the Firefox browser using the Cloudflare Resolver for Firefox. Our goal with both resolvers was to develop fast DNS services focused on user privacy.</p><p>We often get questions about how we deal with both abuse complaints and law enforcement requests related to our resolvers. Both of our resolvers are intended to provide only direct DNS resolution. In other words, Cloudflare does not block or filter content through either 1.1.1.1 or the Cloudflare Resolver for Firefox. If Cloudflare were to receive a request from a law enforcement or government agency to block access to domains or content through one of our resolvers, Cloudflare would fight that request. At this point, we have not yet received any government requests to block content through our resolvers. Cloudflare would also document any request to block content from our resolvers in our semi-annual transparency report, unless we were legally prohibited from doing so.</p><p>Similarly, Cloudflare has not received any government requests for data about the users of our resolvers, and would fight such a request if necessary. Given our public commitment not to retain any personally identifiable information for more than 24 hours, we believe it is unlikely that we would have any information even if asked. Nonetheless, if we were to receive a government request for data about a resolver user, we would document the request in our transparency report, unless legally prohibited from doing so.</p>
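The distinction between direct resolution and a filtering resolver can be illustrated with a toy lookup table. This is purely illustrative: the upstream data and function names are invented, and a real resolver like 1.1.1.1 performs recursive resolution over the network rather than table lookups.

```python
from typing import Optional

# Toy stand-in for upstream authoritative DNS answers (illustrative only).
UPSTREAM = {"example.com": "93.184.216.34", "blocked.example": "203.0.113.9"}
FILTER_LIST = {"blocked.example"}

def direct_resolve(name: str) -> Optional[str]:
    # A direct resolver returns whatever upstream DNS says, unmodified.
    return UPSTREAM.get(name)

def filtering_resolve(name: str) -> Optional[str]:
    # A filtering resolver suppresses answers for names on its filter list,
    # behaving as if those names did not exist.
    if name in FILTER_LIST:
        return None
    return UPSTREAM.get(name)
```

A direct resolver returns the upstream answer for every name; a filtering one silently drops answers for listed names, which is the behavior described above that Cloudflare's resolvers do not implement.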
    <div>
      <h3>The long road ahead</h3>
      <a href="#the-long-road-ahead">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/52nr5Co31KS2aVzil4x90h/c2d650f2d18ca8c78d0a13a9148a9603/road.png" />
            
            </figure><p>Although new products offered by Cloudflare in the future, as well as the legal and regulatory landscape, may change over the years, we expect that our approach to thinking about new products will stand the test of time. We’re guided by some central principles -- allowing our infrastructure to be as neutral as possible, following the rule of law or requiring due process, being open about what we’re doing, and making sure that we’re consistent regardless of the wide variety of issues we face. And we will work hard to make sure that doesn’t change, because even the smallest tweaks to the way we do things can have a significant impact at the scale we operate.</p> ]]></content:encoded>
            <category><![CDATA[Freedom of Speech]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Politics]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Due Process]]></category>
            <category><![CDATA[Community]]></category>
            <guid isPermaLink="false">3TokDJcXCygYPTjnifbwUM</guid>
            <dc:creator>Justin Paine</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Curious Case of the Garden State Imposter]]></title>
            <link>https://blog.cloudflare.com/the-curious-case-of-the-garden-state-imposter/</link>
            <pubDate>Wed, 13 Feb 2019 22:44:49 GMT</pubDate>
            <description><![CDATA[ Dealing with abuse complaints isn’t easy, for any Internet company. The variety of subject matters at issue, the various legal and regulatory requirements, and the uncertain intentions of complaining parties combine to create a ridiculously complex situation. ]]></description>
            <content:encoded><![CDATA[ <p>Dealing with abuse complaints isn’t easy, for any Internet company. The variety of subject matters at issue, the various legal and regulatory requirements, and the uncertain intentions of complaining parties combine to create a ridiculously complex situation. We often suggest to those who propose easy answers to this challenge that they spend a few hours tracking the terminal of a member of our Trust &amp; Safety team to get a feel for how difficult it can be. Yet even we were a bit surprised by an unusual abuse report we’ve been dealing with recently.</p><p>Last week, we received what looked like a notable law enforcement request: a complaint from an entity that identified itself as the “New Jersey Office of the Attorney General” and claimed to be a notice that Cloudflare was “serving files consisting of 3D printable firearms in violation of NJ Stat. Ann. § 2C:39-9 3(I)(2).” The complaint further asked us to “delete all files described within 24 hours” and threatened “to press charges in order to preserve the safety of the citizens of New Jersey.”</p><p>Because we are generally not the host of information, and are unable to remove content from the Internet that we don’t host, our abuse process is specifically set up to forward complaints about content to the website host. Cloudflare also provides the contact information for the hosting provider to the person filing the complaint so that they can address their report with the host of the content in question. That is what we did in this case.</p><p>We took no action with respect to the underlying allegation. As a preliminary matter, we confirmed we were not hosting the allegedly infringing content, and any action we might have taken would not have impacted the availability of the content online. 
Perhaps even more importantly, in order for an Internet infrastructure provider like Cloudflare to take action on content, we believe due process requires more than a threat of legal action.</p>
    <div>
      <h3>Complaint Oddities</h3>
      <a href="#complaint-oddities">
        
      </a>
    </div>
    <p>A few days after we forwarded the complaint, we saw news reports indicating that the website operator and a number of other entities had sued the State of New Jersey over the complaint we had forwarded. That lawsuit prompted us to take a closer look at the complaint. We immediately noticed a few anomalies with the complaint.</p><p>First, when law enforcement agencies contact us, they typically reach out directly, through a dedicated email line. Indeed, we specifically encourage law enforcement to contact us directly on our abuse page, because it facilitates a personalized review and response. The NJ-related request did not come in through this channel, but was instead submitted through our general abuse form. This was one data point that raised our skepticism as to the legitimacy of this report.</p><p>Second, the IP address linked to the complaint was geo-located to the Slovak Republic, which seemed like an unlikely location for the New Jersey Attorney General to be submitting an abuse report from. This particular data point was a strong indicator that this might be a fraudulent report.</p><p>Third, while the contact information provided in the complaint appeared to be a legitimate, publicly available email address operated by the State of NJ, it was one intended for public reporting of tips of criminal misconduct, as advertised <a href="https://www.nj.gov/lps/dcj/email.htm">here</a>. It seems unlikely that a state attorney general would use such an email to threaten criminal prosecution. On occasion, we see this technique used when an individual would like to have Cloudflare’s response to an abuse report sent to some type of presumably interested party. 
The person filing this misattributed abuse report likely hopes that the party who controls that email address will then initiate some type of investigation or action based on that abuse report.</p><p>All of these factors — which were all part of the complaint passed on to the website owner and operator — made us skeptical that the complaint was legitimate. Nonetheless, we observed that the New Jersey Attorney General’s office was aware of and participating in the litigation, which gave us pause and made us expect that someone from New Jersey would contact us.</p><p>On Friday, we were contacted by the New Jersey Attorney General’s office, and in response to a request, including legal process, we provided additional information about the complaint. Yesterday, the New Jersey Attorney General’s office solved the mystery for us in a <a href="https://www.dropbox.com/s/qnftyw4oaa8c0yu/19cv4753_9.pdf?dl=0">submission to the court</a> confirming the complaint was a fake.</p><p>We have investigated other abuse reports submitted from this IP address and identified a clear pattern of fake abuse reports, although this was the first time the address had been used to impersonate law enforcement. We have taken steps to block this IP address from submitting any further fake abuse reports.</p>
    <div>
      <h3>Why does a fake complaint matter?</h3>
      <a href="#why-does-a-fake-complaint-matter">
        
      </a>
    </div>
    <p>Abusing the abuse process by filing fake abuse reports can be a highly effective way to silence speech on the Internet. It is effectively a form of denial-of-service attack. A fake abuse report can potentially result in a hosting provider taking their customer offline based on an unconfirmed allegation. In certain contexts such as copyright claims, the hosting provider is incentivized to act first and ask questions later so as to reduce their potential liability as the host of the problematic content. That sense of urgency to block the identified content is what makes a fake abuse complaint so effective. The content owner can submit a counter-notice to have access to the content restored, but that can be a daunting task if the potentially fake abuse report was sent by a well-funded organization or by law enforcement.</p><p>YouTube was recently targeted by exactly this problem, as reported by <a href="https://www.theverge.com/2019/2/11/18220032/youtube-copystrike-blackmail-three-strikes-copyright-violation">The Verge</a>. Bad actors are abusing their “copyright strikes” system by sending ransom demands to seemingly innocent content creators. This type of attack can best be summarized as “pay me or I’ll file an abuse complaint and get you taken down”.</p><p>We don’t know who submitted the complaint or what their motivation might have been, but the incident does remind us of the importance of proceeding carefully when we receive complaints and requests from law enforcement. Dealing with them is never easy. And although many complaints are legitimate, this complaint was a good reminder that at least some legal demands are just attempts to game our abuse process. We’ll continue to explore ways of minimizing the possibility that our abuse process can itself be abused by bad actors.</p>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Abuse]]></category>
            <guid isPermaLink="false">3tgN65spatkAUvTE66XDm6</guid>
            <dc:creator>Alissa Starzak</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why Some Phishing Emails Are Mysteriously Disappearing]]></title>
            <link>https://blog.cloudflare.com/combatting-phishing-with-dns/</link>
            <pubDate>Tue, 12 Dec 2017 14:00:00 GMT</pubDate>
            <description><![CDATA[ Phishing is the absolute worst.

Unfortunately, sometimes phishing campaigns use Cloudflare for the very convenient, free DNS.  ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4cYYmZBGDpwiXQwvj2UxUo/6373c09bfee5ab68bf6c239487f3e758/Artboard-30-2.png" />
            
            </figure><p>Phishing is the absolute worst.</p><p>Unfortunately, sometimes phishing campaigns use Cloudflare for the very convenient, free DNS. To be clear –– there’s a difference between a compromised server being leveraged to send phishing emails and an intentionally malicious website dedicated to this type of activity. The latter clearly violates our terms of service.</p><p>In the past, our Trust and Safety team would kick these intentional phishers off the platform, but now we have a new trick up our sleeve and a way for their malicious emails to mysteriously disappear into the ether.</p>
    <div>
      <h3>Background: How Email Works</h3>
      <a href="#background-how-email-works">
        
      </a>
    </div>
    <p>SMTP - the protocol used for sending email - was <a href="/the-history-of-email/">finalized in 1982</a>, when it was just a <a href="https://blog.ted.com/what-the-internet-looked-like-in-1982-a-closer-look-at-danny-hillis-vintage-directory-of-users/">small community</a> online. Many of them knew and trusted each other, and so the protocol was built entirely on trust. In an SMTP message, the MAIL FROM field can be arbitrarily defined. That means you could send an email from any email address, even one you don’t own.</p><p>This is great for phishers, and bad for everyone else.</p><p>The solution to <a href="https://www.cloudflare.com/learning/email-security/how-to-prevent-phishing/">prevent email spoofing</a> was to create the Sender Policy Framework (SPF). SPF allows the domain owner to specify which servers are allowed to send email from that domain. That policy is stored in a DNS TXT record like this one from cloudflare.com:</p>
            <pre><code>$ dig cloudflare.com txt
"v=spf1 ip4:199.15.212.0/22 ip4:173.245.48.0/20 include:_spf.google.com include:spf1.mcsv.net include:spf.mandrillapp.com include:mail.zendesk.com include:customeriomail.com include:stspg-customer.com -all"</code></pre>
            <p>This says that receiving mail servers should only accept cloudflare.com emails if they come from an IP in the ranges 199.15.212.0/22, 173.245.48.0/20, or one of the IP ranges found in the SPF records for the other domains listed. So if a receiving mail server gets an email from <a>someone@cloudflare.com</a> sent by the server at 185.12.80.67, it would check the SPF records of all the allowed domains until it finds that 185.12.80.67 is allowed because 185.12.80.0/22 is listed in mail.zendesk.com’s SPF record:</p>
            <pre><code>$ dig txt mail.zendesk.com
"v=spf1 ip4:192.161.144.0/20 ip4:185.12.80.0/22 ip4:96.46.150.192/27 ip4:174.137.46.0/24 ip4:188.172.128.0/20 ip4:216.198.0.0/18 ~all"</code></pre>
            <p>Additional methods for <a href="https://www.cloudflare.com/zero-trust/solutions/email-security-services/">securing email</a> were created after SPF. SPF only validates the email sender but doesn’t do anything about verifying the content of the email. (While SMTP can be sent over an encrypted connection, SMTP is <a href="https://blog.filippo.io/the-sad-state-of-smtp-encryption/">notoriously easy to downgrade</a> to plaintext with an on-path attacker.)</p><p>To verify the content, domain owners can sign email messages using DKIM. The email sender includes the message signature in an email header called DKIM-Signature and stores the key in a DNS TXT record.</p>
            <pre><code>$ dig txt smtpapi._domainkey.cloudflare.com
"k=rsa\; t=s\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDPtW5iwpXVPiH5FzJ7Nrl8USzuY9zqqzjE0D1r04xDN6qwziDnmgcFNNfMewVKN2D1O+2J9N14hRprzByFwfQW76yojh54Xu3uSbQ3JP0A7k8o8GutRF8zbFUA8n0ZH2y0cIEjMliXY4W4LwPA7m4q0ObmvSjhd63O9d8z1XkUBwIDAQAB"</code></pre>
            <p>There’s one more mechanism for controlling email spoofing: DMARC. DMARC sets the overarching email policy for a domain, indicates what to do with messages that fail the SPF and DKIM checks, and sets a reporting email address for logging invalid mail attempts. Cloudflare’s DMARC record says that noncompliant emails should be quarantined (sent to junk mail), that 100% of messages are subject to the policy, and that failure reports should be sent to the two email addresses below.</p>
            <pre><code>$ dig txt _dmarc.cloudflare.com
"v=DMARC1\; p=quarantine\; pct=100\; rua=mailto:rua@cloudflare.com, mailto:gjqhulld@ag.dmarcian.com"</code></pre>
            <p>When an email server receives an email from <a>someone@cloudflare.com</a>, it first checks SPF, DKIM and DMARC records to know whether the email is valid, and if not, how to route it.</p>
    <div>
      <h3>Stopping Phishy Behavior</h3>
      <a href="#stopping-phishy-behavior">
        
      </a>
    </div>
    <p>For known phishing campaigns using the Cloudflare platform for evil, we have a DNS trick for getting their phishing campaigns to stop. If you remember, there are three DNS records required for sending email: SPF, DKIM and DMARC. The last one is the one that defines the overarching email policy for the domain.</p><p>What we do is rewrite the DMARC record so that the overarching email policy instructs email clients to reject all emails from that sender. We also remove the other DNS record types used for sending email.</p>
            <pre><code>"v=DMARC1; p=reject"</code></pre>
            <p>When a receiving mail server gets a phishing email from the domain, the rewritten DNS records instruct it to reject the message, and the phishing email is never delivered.</p><p>You can see it in action on our fake phish domain, astronautrentals.com.</p><p>astronautrentals.com is configured with an SPF record, a DKIM record, and a DMARC record with a policy to accept all email.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4eFm0QSHY505LXsU2rds8C/9093fc93ccec9cd31535e997f80b4b1c/Screen-Shot-2017-12-11-at-7.58.27-PM.png" />
            
            </figure><p>However, because it is a known (fake) phishing domain, when you query DNS for these records, SPF will be missing:</p>
            <pre><code>$ dig astronautrentals.com txt
astronautrentals.com.	3600	IN	SOA	art.ns.cloudflare.com. dns.cloudflare.com. 2026351035 10000 2400 604800 3600</code></pre>
            <p>DKIM will be missing:</p>
            <pre><code>$ dig _domainkey.astronautrentals.com txt
astronautrentals.com.	3600	IN	SOA	art.ns.cloudflare.com. dns.cloudflare.com. 2026351035 10000 2400 604800 3600</code></pre>
            <p>And DMARC policy will be rewritten to reject all emails:</p>
            <pre><code>$ dig _dmarc.astronautrentals.com txt
"v=DMARC1\; p=reject"</code></pre>
            <p>If we try to send an email from @astronautrentals.com, the email never reaches the recipient because the receiving server sees the DMARC policy and rejects the email.</p><p>This DMARC alteration happens on the fly –– it's a computation we do at the moment when we answer the DNS query –– so the original DNS records are still shown to the domain owner in the Cloudflare DNS editor. This adds some mystery as to why the phishing attempts are failing.</p>
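<p>As a purely hypothetical illustration of that query-time rewrite, the sketch below swaps answers for flagged domains while leaving the stored records untouched. All names and data here are invented for the example; this is the shape of the idea, not Cloudflare's actual implementation.</p>

```python
from typing import Optional

FLAGGED = {"astronautrentals.com"}  # known phishing domains (example data)

# What the domain owner configured and still sees in their DNS editor.
STORED_TXT = {
    ("astronautrentals.com", "spf"):   "v=spf1 ip4:203.0.113.0/24 -all",
    ("astronautrentals.com", "dmarc"): "v=DMARC1; p=none",
}

def answer_txt(domain: str, kind: str) -> Optional[str]:
    """Return the TXT answer to serve, rewriting flagged domains on the fly."""
    if domain in FLAGGED:
        if kind == "dmarc":
            return "v=DMARC1; p=reject"  # tell receivers to reject all mail
        return None                      # drop SPF/DKIM answers entirely
    return STORED_TXT.get((domain, kind))

# Stored records are untouched, but queries see the rewritten policy:
print(answer_txt("astronautrentals.com", "dmarc"))  # v=DMARC1; p=reject
```

The stored dictionary never changes, which mirrors why the original records still appear in the DNS editor while queries see something else.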
    <div>
      <h3>Using DNS To Combat Phishing</h3>
      <a href="#using-dns-to-combat-phishing">
        
      </a>
    </div>
    <p>Phishing is the absolute worst, and the problem is that it sometimes succeeds. Last year Verizon reported that <a href="https://www.prnewswire.com/news-releases/verizons-2016-data-breach-investigations-report-finds-cybercriminals-are-exploiting-human-nature-300258134.html">30% of phishing emails</a> are opened, and 13% of those opened end with the receiver clicking on the phishing link.</p><p>Keeping people safe on the internet means decreasing the number of successful phishing attempts. We're glad to be able to fight phish using the DNS.</p> ]]></content:encoded>
            <category><![CDATA[Phishing]]></category>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <guid isPermaLink="false">3EGw4u6PtzahSALx0hsmcI</guid>
            <dc:creator>Dani Grant</dc:creator>
        </item>
        <item>
            <title><![CDATA[Anonymity and Abuse Reports]]></title>
            <link>https://blog.cloudflare.com/anonymity-and-abuse-reports/</link>
            <pubDate>Sun, 07 May 2017 19:35:34 GMT</pubDate>
            <description><![CDATA[ Last Thursday, ProPublica published an article critiquing our handling of some abuse reports that we receive. Feedback from the article caused us to reevaluate how we handle abuse reports. As a result, we've decided to update our abuse reporting system. ]]></description>
            <content:encoded><![CDATA[ <p>Last Thursday, ProPublica published <a href="https://www.propublica.org/article/how-cloudflare-helps-serve-up-hate-on-the-web">an article</a> critiquing our handling of some abuse reports that we receive. Feedback from the article caused us to reevaluate how we handle abuse reports. As a result, we've decided to update our abuse reporting system to allow individuals reporting threats and child sexual abuse material to do so anonymously. We are rolling this change out and expect it to be available by the end of the week.</p><p>I appreciate the feedback we received. How we handle abuse reports has evolved over the last six and a half years of Cloudflare's history. I wanted to take this opportunity to walk through some of the rationale that got us to this point and caused us to have a blindspot to the case that was highlighted in the article.</p>
    <div>
      <h3>What Is Cloudflare?</h3>
      <a href="#what-is-cloudflare">
        
      </a>
    </div>
    <p>Cloudflare is not a hosting provider. We do not store the definitive copy of any of the content that someone may want to file an abuse claim about. If we terminate a customer it doesn’t make the content go away. Instead, we are more akin to a specialized network. One of the functions of the network that we provide is to add security to the content providers that use us. Part of doing that inherently involves hiding the location of the actual hosting provider. If we didn't do this, a malicious attacker could simply bypass Cloudflare by attacking the host directly.</p><p>That created an early question on what we should do when someone reported abusive content that was passing through our network. The first principle was we believed it was important for us to not stand in the way of valid abuse reports being submitted. The litmus test that we came up with was that the existence of Cloudflare ideally shouldn't make it any harder, or any easier, to report and address abuse.</p>
    <div>
      <h3>Mistakes of Early Abuse Reporting</h3>
      <a href="#mistakes-of-early-abuse-reporting">
        
      </a>
    </div>
    <p>The majority (83% over the last week) of the abuse reports that we get involve allegedly copyrighted material transiting our network. Our early abuse policy specified that if we received an abuse report alleging copyrighted material we'd turn over the IP address of the hosting provider so the person filing the abuse report could report the abuse directly.</p><p>It didn't take long for malicious attackers to realize this provided an effective way to bypass our protections. They would submit a fake report alleging some image on a legitimate site had been illegally copied, we'd turn over the IP address of our customer, and they'd attack it directly. Clearly that wasn't a workable model.</p><p>As a result, we revised our policy to instead act as a proxy for abuse reports that were submitted to us. If a report was submitted then we'd proxy the report through and forward it to the site owner as well as the site's host. We provided the contact information so the parties could address the issue between themselves.</p><p>While we have a Trust &amp; Safety team that is staffed around the clock, for the most part abuse handling is automated. Various firms that specialize in finding and taking down copyrighted material generate such a flood, often submitting hundreds of reports for the same allegedly copyrighted item, that manual review of every report would be infeasible.</p>
    <div>
      <h3>Violent Threats and Child Sexual Abuse</h3>
      <a href="#violent-threats-and-child-sexual-abuse">
        
      </a>
    </div>
    <p>We've always treated reports of violent threats and child sexual abuse material with additional care. Understandably, from the perspective of the individuals in the ProPublica article, it seems callous and absurd that we would ever forward these reports to the site owner. However, we had a different perspective.</p><p>The vast majority of times that violent threats or child sexual abuse material were reported to us occurred on sites that were not dedicated to those topics. Imagine a social network like Facebook was a Cloudflare customer. Somewhere on the site something was posted that included a violent threat. That post was then reported to Cloudflare as the network that sits in front of the Facebook-like site.</p><p>In our early days, it seemed reasonable and responsible to pass the complaint on to the Facebook-like customer who could then follow up directly. That also met the litmus test of being what would happen if Cloudflare didn't exist. What the policy didn't account for was site owners who could not be trusted to act responsibly with abuse reports including contact information.</p>
    <div>
      <h3>Anonymous Reporting</h3>
      <a href="#anonymous-reporting">
        
      </a>
    </div>
    <p>Beginning in 2014, we saw limited, but very concerning, reports of retaliation based on submitted abuse reports. As a result, we adjusted our process to make it so complaints about violent threats and child sexual abuse material would be sent only to the host, not to the site owner.</p><p>We’ve confirmed that in the cases reported to the site mentioned in the ProPublica article we followed this procedure. That change largely addressed the problem of people reporting abuse getting harassed. What we didn’t anticipate is that some hosts would themselves pass the full complaint, including the reporter’s contact information, on to the site owner. We assume this is what happened in the ProPublica cases.</p><p>Another change we made in 2015 was to clarify exactly what would happen when someone submitted a report by adding disclaimers to our <a href="https://www.cloudflare.com/abuse">abuse form</a>. These disclaimers appear in multiple places throughout the abuse submission flow:</p><p><i>“Cloudflare will forward all abuse reports that appear to be legitimate to the responsible hosting provider and to the website owner.”</i></p><p><i>"By submitting this report, you consent to the above information potentially being released by Cloudflare to third parties such as the website owner, the responsible hosting provider, law enforcement, and/or entities like Chilling Effects."</i></p><p>In a world without Cloudflare, if you wanted to anonymously report something, you would use a disposable email and a fake name and submit a report to the site's hosting provider or the site itself. We didn't do anything to check that the contact information used in reports was valid so we assumed, with the disclaimer in place, if people wanted to submit reports anonymously they'd do the same thing as they would have if Cloudflare didn't exist.</p><p>That was a bad assumption. 
As the ProPublica article made clear, many people did not read or understand the disclaimer and were surprised that we forwarded their full abuse report to the host who then, in some cases, could forward it to the site owner.</p>
    <div>
      <h3>Determining Bad Actors</h3>
      <a href="#determining-bad-actors">
        
      </a>
    </div>
    <p>In reevaluating our policy, a key question was when it is appropriate to pass along the full report and when it is not. Again, from the perspective of the author of the ProPublica article, that may seem like an easy distinction. The reality is that requiring that an individual on our Trust &amp; Safety team understand the nature of every site that is on Cloudflare is untenable. Moreover, adding more human intervention that slows down the process of reporting abuse, especially in cases of violent threats and child sexual abuse material, where time may be of the essence, strikes us as a step backward.</p><p>Instead, we took the suggestions from many of the comments we received and are implementing a policy where reporters of these types of abuse can choose to submit them and not have their contact information included in what we forward. The person making the abuse report seems in the best position to judge whether or not they want their information to be relayed. Making this change requires some engineering work on our part, but we have prioritized it. By the end of this week, someone submitting an abuse report for one of these categories will have the choice of whether to do so anonymously.</p>
    <div>
      <h3>Ongoing Improvements</h3>
      <a href="#ongoing-improvements">
        
      </a>
    </div>
    <p>We are under no illusion that this latest iteration of our abuse process is perfect. In fact, we already have concerns about challenges the new system will create. Anonymous reporting opens a new vector for malicious actors to submit false reports and harass Cloudflare customers. In addition, for responsible Cloudflare customers who want to act on reports, anonymous reports may make it harder to gather additional information from the reporter, and thus harder to take well-informed action to address the issue.</p><p>We appreciate the feedback on where our previous process broke down. As new problems arise, we anticipate that we'll continue to need to make changes to how we handle abuse reports.</p>
    <div>
      <h3>Final Thoughts on Censoring the Internet</h3>
      <a href="#final-thoughts-on-censoring-the-internet">
        
      </a>
    </div>
    <p>While we clearly had a significant blindspot in how we handled one type of abuse report, we remain committed to our belief that it is not Cloudflare's role to make determinations on what content should and should not be online. That belief comes from a number of principles.</p><p>Cloudflare is more akin to a network than a hosting provider. I'd be deeply troubled if my ISP started restricting what types of content I can access. As a network, we don't think it's appropriate for Cloudflare to be making those restrictions either.</p><p>That is not to say we support all the content that passes through Cloudflare's network. We, both as an organization and as individuals, have political beliefs and views of what is right and wrong. There are institutions — law enforcement, legislatures, and courts — that have a social and political legitimacy to determine what content is legal and illegal. We follow the lead of those institutions in all the jurisdictions in which we operate. But, as more and more of the Internet sits behind fewer and fewer private companies, we're concerned that the political beliefs and biases of those companies will determine what can and cannot be online.</p><p>If you're interested, I gave <a href="https://www.youtube.com/watch?v=SWFX-zEYwN0">a talk a few years ago</a> about how we think about our role in policing online content. It's about an hour long, but if you're interested in the topic, I encourage you to watch it in order to better understand our perspective.</p><p>From time to time an organization will sign up for Cloudflare that we find revolting because they stand for something that is the opposite of what we think is right. Usually, those organizations don't pay us. Every once in a while one of them does. When that happens it's one of the greatest pleasures of my job to quietly write the check for 100% of what they pay us to an organization that opposes them. 
The best way to fight hateful speech is with more speech.</p><p>I appreciate the feedback on how we can improve our abuse process. We are implementing the changes that were recommended. They take engineering, so they aren't available immediately, but will be live by the end of this week. We continue to iterate and improve on our mission of helping build a better Internet.</p> ]]></content:encoded>
            <category><![CDATA[Freedom of Speech]]></category>
            <category><![CDATA[Community]]></category>
            <category><![CDATA[Support]]></category>
            <category><![CDATA[Abuse]]></category>
            <guid isPermaLink="false">mkz4Fq2t9fSCDbDvk2L11</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Of Phishing Attacks and WordPress 0days]]></title>
            <link>https://blog.cloudflare.com/of-phishing-attacks-and-wordpress-0days/</link>
            <pubDate>Fri, 24 Apr 2015 00:17:28 GMT</pubDate>
            <description><![CDATA[ Proxying around 5% of the Internet’s requests gives us an interesting vantage point from which to observe malicious behavior. However, it also makes us a target.  ]]></description>
            <content:encoded><![CDATA[ <p>Proxying around 5% of the Internet’s requests gives us an interesting vantage point from which to observe malicious behavior. However, it also makes us a target. Aside from the many and varied denial of service (DDoS) attacks that break against our defenses, we also see a huge number of phishing campaigns. In this blog post I'll dissect a recent phishing attack that we detected and neutralized with the help of our friends at Bluehost.</p><p>This attack is particularly interesting because it appears to use a brand-new WordPress 0day.</p>
    <div>
      <h3>A Day Out Phishing</h3>
      <a href="#a-day-out-phishing">
        
      </a>
    </div>
    <p>The first signs we typically see that indicate a new phishing campaign is underway are the phishing emails themselves. Generally, there's a constant background noise from a few of these emails targeting individual customers every day. However, when a larger campaign starts up, that trickle typically turns into a flood of similar messages.</p><p>Here's an example we've recently received:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ievhyb1PeFa6DJpZUERYP/e3340bc4f8c7c727a7797f060592aca2/Screen-Shot-2015-04-20-at-9-28-35-PM.png" />
            
            </figure><p><b>Note</b> — CloudFlare will never send you an email like this. If you see one like it, it is fake and should be reported to our abuse team by forwarding it to <a>support@cloudflare.com</a>.</p><p>In terms of the phishing campaign timeline, these emails aren’t the first event. Much like a spider looking to trap flies, a phisher first has to build a web to trap his or her victims. One way is through landing pages.</p><p>Looking like the legitimate login page of a target domain, these landing pages have one goal - to collect your credentials. Since these landing pages are quickly identified, the phisher will often go to great lengths to ensure that he or she can put up tens or even hundreds of pages during the lifetime of a campaign, all while being extra careful that these pages can't be traced back to him or her. Generally, this means compromising a large number of vulnerable websites in order to inject a phishing toolkit.</p><p>It's no surprise that the first step in most phishing campaigns is usually the mass compromise of a large number of vulnerable websites. This is why you will often see a notable uptick in the volume of phishing emails whenever a major vulnerability comes out for one of the popular CMS platforms. This is also why protecting the Internet’s back-office is a critical step in building a better Internet. If vulnerable CMS sites are protected, not only can they flourish, but the thousands of potential victims who could be targeted if that infrastructure were hijacked for malicious purposes are also protected.</p><p>This is why, at CloudFlare, we feel that providing free, basic security to every website is so important and why ultimately it could be such a game changer in building a better Internet.</p>
    <div>
      <h3>Back to the phish</h3>
      <a href="#back-to-the-phish">
        
      </a>
    </div>
    <p>Returning to our phishing attack, we see that it's no different. Analyzing the “load.cloudflare.com” hyperlink on the message, we see that it's actually a link pointing to a compromised WordPress site hosted by Bluehost.</p><p><b>Note</b>: This is not a reflection on Bluehost, <i>every</i> hosting provider gets targeted at some point. What's more important is how those hosting providers subsequently respond to reports of compromised sites. In fact, Bluehost should be commended for the speed with which they responded to our requests and the way they handled the affected sites we reported.</p><p>Every other email in this particular campaign followed the same pattern. Here is the source for another one of those links that uses “activate.cloudflare.com”:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ULCZgjAD8uyU95cBQbqaN/b2b2f208706b31f20baa566c2a51dc44/Screen-Shot-2015-04-21-at-12-05-54-PM-1.png" />
            
            </figure><p>As you can see, while the message displays that you are going to “activate.cloudflare.com”, in reality, anyone who clicks on the link will be diverted to the victim website, which, unsurprisingly, is running an old, vulnerable version of WordPress.</p><p>Every phishing email from this campaign has followed exactly the same pattern: a basic email template addressed to $customer informing them that their site has been locked, and inviting them to click on a link that takes them to a compromised WordPress site on Bluehost.</p><p>It looks like this attacker harvested a large number of target domains using public DNS and email records identifying administrative email addresses. This became the victim list. The attacker then targeted a convenient, vulnerable CMS platform and injected his or her phishing kit into every innocent domain that had been compromised. Finally, once that was complete, the attacker sent out the phishing emails to the victim list.</p><p>As phishing attacks go, this one is remarkably unsophisticated. All a savvy user had to do to reveal the true nature of this link was a quick mouse-over. As soon as you mouse over, the link you will see -- “activate.cloudflare.com” -- does not match the true destination.</p>
    <div>
      <h3>More advanced phishing techniques</h3>
      <a href="#more-advanced-phishing-techniques">
        
      </a>
    </div>
    <p>A clever phisher could have used one of the many well-known tricks to obfuscate the URL. Below are some of those techniques, so you will recognize them if you see them.</p><ul><li><p><b>Image maps.</b> Instead of using a traditional hyperlink as above, phishers have been known to put an image map in their emails. The image, of course, is of a link to a trusted site such as “<a href="http://www.safesite.com">www.safesite.com</a>”. When an unsuspecting user clicks within the coordinates of the image map, they are diverted to the phishing site.</p></li></ul><p>Here's an example of this technique taken from an old eBay phishing email:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hUdm0YKA9SAePdOwS8svI/7f5004c93f8005e532a354701a686e0c/Screen-Shot-2015-04-21-at-12-22-59-PM.png" />
            
            </figure><p>In order to fool Bayesian filters looking for phishing spam like this, the phisher also added some legitimate-sounding words in white font, invisible to the reader. The user experience, however, is the same as with the earlier phishing email. As soon as you mouse over the image map, you will see the true destination.</p><ul><li><p><b>Misspelled domain names and homoglyphs.</b> Misspelled domains can look very similar to their legitimate counterparts, and by using a homoglyph -- a look-alike character -- an attacker can make a misspelling even less obvious. Examples include “microsft.com” or “g00gle.com”. These domains look so similar to the advertised link in the phishing email that many people will miss the discrepancy when they mouse over the link.</p></li><li><p><b>Reflection, redirection, and JavaScript.</b> Many websites -- even sites like answers.usa.gov -- have search features, offsite links, or vulnerable pages that have historically been abused by phishers. If the offsite link can be manipulated, typically with a cross-site scripting vulnerability, it's possible for the phisher to present a link from the target domain that takes the victim to a page under the phisher's control. Below is an example of a historic flaw of this nature that existed on the answers.usa.gov site:</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WjPNtZuHPsd99hwLkYDWF/128bfd7419ccbfe97951a06b03e05f3a/Screen-Shot-2015-04-21-at-12-42-07-PM.png" />
            
            </figure><p>In this case, the URL looks like a legitimate “answers.usa.gov” URL, but if you clicked on it, you would activate a cross-site scripting flaw that executes the JavaScript in your browser. The attacker could easily turn a page with this sort of flaw into a malicious credential harvester, all while continuing to use a link to the legitimate site.</p><p><b>Note</b>: All those extra %20’s are encoded spaces that push the JavaScript far enough away that it won’t be visible on mouse-over.</p><p>A slightly different flaw, also on the USA.gov site, involved its URL shortening service. Because the service was open to anyone, phishers quickly discovered that they could use it to create shortened URLs that looked important because of the .gov domain. A victim who might be reluctant to click on an unsolicited bit.ly link might be less reluctant if faced with a .gov link. Here's an example of an email from a campaign abusing that service:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ZYcFpcPq9WSpiGmzpzCNo/224cd6d4e3e89131942380392064a635/page-threats-govt-spam1.jpg" />
            
            </figure><ul><li><p><b>URL obfuscation.</b> Historically, this has been one of the most popular and varied techniques. The concept is simple: use any of the available URL encoding methods to disguise the true nature of the destination URL. I'll describe a couple of historic techniques below.</p></li></ul><p><b>Note</b>: many modern browsers now warn against some of these techniques.</p><p><b>First</b> is username:password@url abuse. This notation, now deprecated because embedding credentials directly in a URL is a terrible security practice, was designed to allow seamless access to password-protected areas. Abuse is easy; for example:</p><p><a>www.safesite.com@www.evilsite.com</a></p><p><b>Next</b> is IP address obfuscation. You are probably familiar with IP addresses written as dotted quads, e.g. 123.123.123.123. IP addresses can also be expressed in a number of other formats which browsers will accept. By combining this with the “username:password@” trick above, an attacker can effectively hide the true destination. Below are four different methods for presenting one of Google’s IP addresses, 74.125.131.105:</p><ul><li><p><a href="http://www.safesite.com@74.125.131.105">http://www.safesite.com@74.125.131.105</a></p></li><li><p><a href="http://www.safesite.com@1249739625/">http://www.safesite.com@1249739625/</a></p></li><li><p><a href="http://www.safesite.com@0x4a.0x7d.0x83.0x69/">http://www.safesite.com@0x4a.0x7d.0x83.0x69/</a></p></li><li><p><a href="http://www.safesite.com@0112.0175.0203.0151/">http://www.safesite.com@0112.0175.0203.0151/</a></p></li></ul><p>All of these URLs go to 74.125.131.105.</p><p><b>Finally</b> we have Punycode and homoglyph-based obfuscation. Punycode was created as a way to map international characters to valid characters for DNS, e.g., “café.com”. Using Punycode, this would be represented as xn--caf-dma.com. As mentioned at the start, homoglyphs are symbols which closely resemble other symbols, like 0 and O, or I and l.</p><p>By combining these two methods we can create URLs like:</p><p><a href="http://www.safesite.com⁄login.html.evilsite.com">www.safesite.com⁄login.html.evilsite.com</a></p><p>The secret to this obfuscated URL is a non-standard character which happens to be a homoglyph for /. The result? Instead of a page on safesite.com, you are actually taken to a subdomain of the following Punycode domain:</p><p><a href="http://www.safesite.xn--comlogin-g03d.html.evilsite.com">www.safesite.xn--comlogin-g03d.html.evilsite.com</a></p><p>New obfuscation techniques like these appear all the time. Phishing is both the most common and arguably the most effective method of attack for medium- to low-skill attackers. Staying up to date with these techniques can be extremely useful when it comes to spotting potential phishing attempts.</p>
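<p>The alternate IP spellings and the Punycode form above are easy to reproduce with Python's standard library. This is a sketch for understanding the encodings, not a complete URL checker:</p>

```python
import ipaddress

ip = ipaddress.IPv4Address("74.125.131.105")

# The single 32-bit integer form, as in http://1249739625/
decimal_form = str(int(ip))

# Per-octet hexadecimal and octal forms that browsers have accepted
hex_form = ".".join("0x%02x" % o for o in ip.packed)
octal_form = ".".join("0%03o" % o for o in ip.packed)

print(decimal_form)  # 1249739625
print(hex_form)      # 0x4a.0x7d.0x83.0x69
print(octal_form)    # 0112.0175.0203.0151

# Punycode (IDNA) encoding of an internationalized domain name
punycode = "café.com".encode("idna").decode("ascii")
print(punycode)      # xn--caf-dma.com
```

<p>Decoding a suspicious URL back through the same routines is a quick way to see where it really leads.</p>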
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>After further analysis, it quickly became clear that all of the endpoints in this campaign were compromised WordPress sites running WordPress 4.0 - 4.1.1.</p><p>The most likely scenario is that a new critical vulnerability has surfaced in WordPress 4.1.1 and earlier. Given that 4.1.1 was, at the time of writing, the most current version of WordPress, this can only mean one thing -- a WordPress 0day in the wild.</p><p>Checking the WordPress site confirms that a few hours ago they announced a new critical cross-site scripting vulnerability:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1eZiSBTqEnOdf34Krrxj06/e21be77b781c78d7fa3ac4d4d68d0498/Screen-Shot-2015-04-21-at-5-11-55-PM.png" />
            
            </figure><p>While we can’t confirm for certain that this is the vulnerability our phisher was using, it seems highly likely given the version numbers compromised.</p><p>Over the last few hours, we've worked closely with our friends at Bluehost to identify the remaining sites compromised by this phisher so they could take them offline. A quick response like this essentially renders all remaining phishing emails in this current campaign harmless. The need to quickly neutralize phishing sites is why CloudFlare engineers developed our own process for rapidly identifying and tagging suspected compromised sites. When a site on our network is flagged as a phishing site, we impose an interstitial page that serves both to warn potential visitors and to give the site owner time to fix the issue.</p><p>You can read more about our own process in this <a href="/127760418/">blog post</a>.</p>
    <div>
      <h3>How customers can stay safe</h3>
      <a href="#how-customers-can-stay-safe">
        
      </a>
    </div>
    <p>By enabling CloudFlare's WAF, CloudFlare customers have some protection against the sort of cross-site scripting vulnerability involved in this attack. However, anyone can still fall victim to a phishing email. Below are seven tips to help you stay safe:</p><ul><li><p>NEVER click on links in unsolicited emails or advertisements.</p></li><li><p>Be vigilant: poor spelling and strange URLs are dead giveaways.</p></li><li><p>Mouse over the URL and see if it matches what’s presented in the email.</p></li><li><p>Type URLs in manually where possible.</p></li><li><p>Keep your software up to date and make sure you are running a current antivirus client — yes, even if you're using a Mac.</p></li><li><p>It’s possible to set traps for phishers: use unique, specific email addresses for each account you set up. That way, if you get an email to your Bank of America email address asking for your Capital One password, you immediately know it's a phishing attack.</p></li><li><p>Finally, where possible, enable two-factor authentication. While not foolproof, it makes things much harder for attackers.</p></li></ul> ]]></content:encoded>
            <category><![CDATA[WordPress]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Abuse]]></category>
            <guid isPermaLink="false">6T3SXr5Tn4n7xTnOtmpmJk</guid>
            <dc:creator>Marc Rogers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Updating Policies]]></title>
            <link>https://blog.cloudflare.com/updating-policies/</link>
            <pubDate>Mon, 20 Aug 2012 20:39:00 GMT</pubDate>
            <description><![CDATA[ In 2009, CloudFlare's service began to take shape. While in the early days I had contributed to CloudFlare's early code, we quickly hired engineers to join Lee's team who were far smarter than I.  ]]></description>
            <content:encoded><![CDATA[ <p>Back in late 2009, CloudFlare's service began to take shape and our website first went online. While in the early days I had contributed to CloudFlare's early code, we quickly hired engineers to join Lee's team who were far smarter than I. That left me to turn my attention to another area of the site more appropriate for a recovering lawyer: our <a href="http://www.cloudflare.com/terms">Terms of Service</a> and <a href="http://www.cloudflare.com/security-policy">Privacy Policy</a>.</p><p>Generally, these documents have held up pretty well since December 5, 2009 when we first published them. However, today we're making some updates to address some issues that have come up over the last two years. I wanted to take the time to walk through the changes here so everyone is clear why we made the updates we have.</p>
    <div>
      <h3>Apps</h3>
      <a href="#apps">
        
      </a>
    </div>
    <p>Many of the changes to the Terms of Service and Privacy Policy are the result of CloudFlare's <a href="http://www.cloudflare.com/apps">Apps Marketplace</a>. From early in our history, we realized we had an opportunity to help webmasters install services to enhance their sites. Oliver Roup, a friend of mine from business school, approached us about allowing CloudFlare's users to automatically incorporate the service of a company he'd started: <a href="http://www.cloudflare.com/apps/viglink">Viglink</a>. Viglink's service automatically adds an affiliate code to appropriate links on your site so you can make money when people click on a link and then go on to purchase something.</p><p>It seemed like a no-brainer that we offer Viglink as an option to our users. We always thought it would be a service that people could turn on or off, but I wanted to make sure our Terms of Service included the possibility that if someone had the service on then affiliate codes could be added. I included the following sentence in our terms: "[CloudFlare may] Add tracking codes or affiliate codes to links that do not previously have tracking or affiliate codes." That has, over time, caused endless confusion, customer service inquiries, and even conspiracy theories.</p><p>We're building a platform that, through Apps, can allow you to update your site in a wide number of ways. While we want to acknowledge that, we also want to make something clear: it is always your choice as to what apps are enabled. As a result, we updated this key section to now read:</p><p>You retain full copyrights in any materials served through CloudFlare. Depending on the features you select or Apps you enable, CloudFlare may modify the content of your site. For example, CloudFlare may detect any email addresses and replace them with a script in order to keep it from being harvested, or CloudFlare may insert code to improve page load performance or enable a Third Party App.
Depending on the features you enable, you acknowledge CloudFlare may:</p><ol><li><p>Intercept requests determined to be threats and present them with a challenge page.</p></li><li><p>Add cookies to your domain to track visitors, such as those who have successfully passed the CAPTCHA on a challenge page.</p></li><li><p>Add script to your pages to, for example, add services, Apps, or perform additional performance tracking.</p></li><li><p>Other changes to increase performance or security of your website.</p></li></ol><p>CloudFlare will make it clear whenever a feature will modify your content and, whenever possible, provide you a mechanism to allow you to disable the feature.</p><p>We've made updates elsewhere to also reflect that we allow you to install third party apps. For example, our Privacy Policy now acknowledges that you should check the Terms of Service and Privacy Policies of these app providers since they may be different from CloudFlare's. The idea of the Apps Marketplace is something that really came into focus after our initial launch, so it's appropriate now for us to update our policies to account for it.</p>
    <div>
      <h3>Abuse</h3>
      <a href="#abuse">
        
      </a>
    </div>
    <p>Section 11 of our old Terms of Service included a long list of things that, if you did them on our network, we could terminate you for. The history of this section is that I searched a number of other major services to see what they had prohibited and then included just about everything that had ever been listed. This list was largely pulled from hosting providers and similar sites that actually hosted content.</p><p>This list may be appropriate for a hosting service, but it isn't as appropriate for a network provider -- and CloudFlare is much more akin to a network provider. People also interpreted the list as if it were self-executing computer code. Someone would find a site that told people how to build a grenade, or whatever, and write to us saying we had to terminate it. We, on the other hand, saw the list as reasons we could terminate people, not reasons we must terminate them.</p><p>Given the confusion the list created, we simplified it. Today our policy remains as it was before, just without the list. If you're using CloudFlare in a way we deem inappropriate, we will, at our sole discretion, terminate your use of the CloudFlare network. As I've <a href="/thoughts-on-abuse">written about before</a>, our general position is that CloudFlare is building a better Internet and it's not our role to determine what content should or should not be allowed to be published. That said, if you're using our network solely as a file locker, distributing malware or phishing, or otherwise causing per se harm, then we will terminate use.</p><p>We also updated our abuse process to reflect what we've learned about running an abuse desk in front of hundreds of thousands of websites. What we learned was that as our technical defenses improved, hackers turned to abusing our abuse process to determine the identity of sites on our network. That, effectively, was a mechanism to bypass our technical protections.
Our new abuse process allows legitimate rights holders to file complaints that we relay to the owners of sites with alleged violations without compromising the technical protections we offer our customers.</p>
    <div>
      <h3>Miscellaneous Other Cleanup</h3>
      <a href="#miscellaneous-other-cleanup">
        
      </a>
    </div>
    <p>There was a lot of other cruft in our terms that we cleaned up. For example, we previously included the following paragraph:</p><blockquote><p>You are granted a limited, revocable, and nonexclusive right to create a hyperlink to any non-password protected directories, so long as the link does not portray CloudFlare, its affiliated websites, or its services in a false, misleading, derogatory, or otherwise offensive matter. You may not use any of CloudFlare's proprietary graphics or trademarks as part of the link without express written permission.</p></blockquote><p>While most Terms of Service you'll find around the Internet include such paragraphs, they really are silly. We've deleted the paragraph so you can go ahead and link to our site, even if what you say is false, misleading, derogatory and offensive.</p><p>When we first started CloudFlare we also had something called the Automated Setup Tool that would log in to your DNS provider and <a href="https://www.cloudflare.com/products/registrar/">Registrar</a> and make the changes for you if you gave us your username and password. While it was very cool and made the signup process even faster than it is today, we decided it was a very bad security practice to ask for people's username/password for a third party service. Much like we got rid of the Automated Setup Tool, we've now gotten rid of the section that covered how it worked. (Section 6 is now about Apps.) We also now provide software (e.g., mod_cloudflare and Railgun), so the terms were updated in various places to include that.</p><p>While I'm a recovering lawyer, I'm not a big believer that the legal system is the best way to resolve disputes. As a result, we added an arbitration clause. Should a dispute arise in the future, it seems like a more civilized way to resolve one. We also had some problems with machine-translated versions of the Terms of Service containing oddities.
As a result, we added a section to make it clear that the English version of the terms is the one that is controlling. We also moved from Palo Alto, CA to San Francisco, CA more than a year ago so we finally updated the jurisdiction information.</p><p>That's the gist of the updates. For those who are interested, we'll keep the old versions of the <a href="http://www.cloudflare.com/terms-old">Terms of Service</a> and <a href="http://www.cloudflare.com/security-policy-old">Privacy Policies</a> available for a few months. While I'm sure we'll have to make additional updates to the Terms of Service and Privacy Policies in the future as we learn more about running a global network, I am confident that we will continue to operate as we always have: respecting our publishers and their visitors' privacy, operating a responsible network, and working toward building a faster, safer, smarter web for everyone.</p> ]]></content:encoded>
            <category><![CDATA[Abuse]]></category>
            <category><![CDATA[Policy & Legal]]></category>
            <category><![CDATA[Freedom of Speech]]></category>
            <guid isPermaLink="false">3KH2BSA9Wjj61sqYKrFZl2</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Thoughts on Abuse]]></title>
            <link>https://blog.cloudflare.com/thoughts-on-abuse/</link>
            <pubDate>Fri, 13 Jul 2012 21:47:00 GMT</pubDate>
            <description><![CDATA[ One of the behind the scenes topics we think about a lot at CloudFlare is how to handle abuse of our network. I realized that we hadn't exposed our thoughts on this clearly enough. In the next few days, we'll be making some minor updates to our Terms of Service. ]]></description>
            <content:encoded><![CDATA[ <p>One of the behind-the-scenes topics we think about a lot at CloudFlare is how to handle abuse of our network. I realized that we hadn't exposed our thoughts on this clearly enough. In the next few days, we'll be making some minor updates to our Terms of Service to better align them with how we handle abuse complaints. However, I wanted to take the time to write up a post on how we think about abuse. Make sure you're comfy; this is going to be a bit of a marathon post because it's an important and complicated issue.</p><p>CloudFlare sits in front of nearly half a million websites. Those websites include banks, national governments, Fortune 500 companies, universities, media publications, blogs, <a href="https://www.cloudflare.com/ecommerce/">ecommerce companies</a>, and just about everything else you can find online. Every day we process more page views through our network than Amazon.com, Wikipedia, Twitter, Zynga, Aol, eBay, PayPal, Apple, and Instagram — combined. That's dumbfounding given that CloudFlare is only a year and a half old from our public launch.</p>
    <div>
      <h3>Problem Sites</h3>
      <a href="#problem-sites">
        
      </a>
    </div>
    <p>While the vast majority of sites on CloudFlare are not problematic, just like on the Internet itself there are inevitably some unsavory organizations on our network. Almost exactly a year ago, I blogged about the notorious hacking group LulzSec using CloudFlare's services and our <a href="/58611873">decision not to terminate their service</a>. As I wrote a year ago:</p><blockquote><p>CloudFlare is firm in our belief that our role is not that of Internet censor. There are tens of thousands of websites currently using CloudFlare's network. Some of them contain information I find troubling. Such is the nature of a free and open network and, as an organization that aims to make the whole Internet faster and safer, such inherently will be our ongoing struggle. While we will respect the laws of the jurisdictions in which we operate, we do not believe it is our decision to determine what content may and may not be published. That is a slippery slope down which we will not tread.</p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pJCfxKInGkuuMMBjFI4RM/f6d041493f3c65850a2fd39c69c07b47/lulzsec.jpg.scaled500.jpg" />
            
            </figure><p>Today there are hundreds of thousands of sites using CloudFlare and we remain concerned about the slippery slope. To be clear, this isn't a financial decision for us. LulzSec and other problematic customers tend to sign up for our free service and we don't make a dime off of them. When they upgrade they usually pay with stolen credit cards, which causes us significant headaches. The decision to err on the side of not terminating sites is a philosophical one: we are rebuilding the Internet, and we don't believe that we or anyone else should have the right to tell people what content they can and cannot publish online.</p>
    <div>
      <h3>Who We Terminate</h3>
      <a href="#who-we-terminate">
        
      </a>
    </div>
    <p>There is no more thankless job than running an abuse desk. In the last week, our abuse team has had to deal with "senior Iranian officials" threatening us over the fact that a pro-Israeli website was on our network while, at the same time, dealing with threats from an Israeli group that was extremely upset that a website supporting the Iranian regime was also on our network. We didn't terminate either of those sites.</p><p>No matter how repugnant an idea may be to one person or another, we don't believe we are qualified to act as judge. There are, however, at least two clear cases where we believe our network can cause harm and therefore we do take action: spreading malware or powering phishing sites.</p><p>Originally, when we would receive reports of phishing or malware we would terminate the customers immediately. The challenge was that this didn't actually solve the problem. Since we're just a proxy, not the host, our terminating the customer doesn't make the harmful content disappear. Terminating the site effectively just kicked the problem further down the road, moving it off our network and onto someone else's.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/16j1KgSQAxHuTLoNMFf04s/cc0369bf86fa7bb04df696e2511349b5/kick_the_can.jpg.scaled500.jpg" />
            
            </figure><p>photo credit: <a href="http://www.flickr.com/photos/35604385@N08/">Erectus Bee</a></p><p>This was unsatisfying to our abuse team, so we reached out to the experts on the issue of malware and phishing at <a href="http://stopbadware.org/">StopBadware</a>. StopBadware is the organization Google trusts to explain phishing and malware when problems are detected on pages that appear in the company's search index. We worked with StopBadware to design a <a href="/protecting-cloudflare-sites-from-phishing/">Google-like block page that we can display on pages where malware or phishing are detected</a>. This solution actually eliminates the known malware and phishing from our network and, at the same time, teaches visitors who may have been fooled by the malicious content about its risks.</p><p>This sounds easy — and, as a matter of policy, it was easy — but, technically, it was actually extremely tricky to implement. To give you some sense, we average about 150,000 requests per second through our network and we're doubling every 3 months or so. To make the block pages work, we needed to check every one of those requests against regular expressions that match known phishing or malware sites. All without slowing down requests. It took us longer than I would have liked to find a solution that could scale, but now that it is in place we are actively adding data sources to ensure we promptly remediate any malware and phishing sites on our network.</p>
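<p>To make the matching step concrete, here is a toy version of that check (the pattern strings are hypothetical, and CloudFlare's production matcher is far more elaborate). The known-bad URL patterns are compiled into a single alternation so each request needs only one scan:</p>

```python
import re

# Hypothetical known phishing/malware URL patterns (illustrative only)
patterns = [
    r"^badhost\.example/wp-content/uploads/.*/login\.html$",
    r"^other-victim\.example/~tmp/update/.*$",
]

# One compiled alternation: each request is checked with a single pass
blocklist = re.compile("|".join("(?:%s)" % p for p in patterns))

def should_block(host_and_path):
    """True when a request matches a known-bad URL pattern."""
    return blocklist.search(host_and_path) is not None

print(should_block("badhost.example/wp-content/uploads/2012/login.html"))  # True
print(should_block("example.com/index.html"))  # False
```

<p>A real deployment at 150,000 requests per second needs far more careful engineering than this, but the sketch shows the shape of the check.</p>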
    <div>
      <h3>The Rock and the Hard Place</h3>
      <a href="#the-rock-and-the-hard-place">
        
      </a>
    </div>
    <p>While we believe we have found a good solution for malware and phishing abuse reports, other abuse requests still present a vexing issue. Originally, when CloudFlare received a DMCA complaint for an alleged copyright infringement, our practice was to turn over the IP address of the site's host to the person filing the complaint. This allowed them to then take the issue up with the hosting provider.</p><p>CloudFlare has become very, very good at stopping online attacks, including DDoS attacks. As a result, people launching those attacks have begun looking for ways to bypass our protection. Starting about a year ago, we saw a spike in what turned out to be illegitimate DMCA requests. They would look technically correct and include all the required information, but the complainant wasn't the actual copyright holder but rather an individual looking to attack the site. As soon as we turned over the origin IP address they would launch an attack, completely bypassing CloudFlare's protection. In other words, attackers were abusing our abuse process — a problem I wrote about when discussing how <a href="/sopa-could-create-new-denial-of-service-attac">SOPA could make things even worse</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78n7debh71SopBhpfCcG8N/9259a751d7877afb60ebe25b0e8b2771/rock_and_hard_place.jpg.scaled500.jpg" />
            
            </figure><p>Photo credit: <a href="http://rojakdaily.wordpress.com/tag/suspended-rock/">Rojak Daily</a></p><p>If there is a way to reliably tell the difference between a legitimate and an illegitimate DMCA abuse complaint, we haven't found it. As a result, we adjusted our abuse process in order to meet the requirements of the law and allow legitimate complainants to serve notice to infringers, but not expose our customers to attacks.</p><p>In many ways, our abuse flow today is also a sort of reverse proxy. When we receive a complaint, after some checks to ensure its validity to the extent possible, we forward a copy of the complaint to the site owner via email. We also send a copy of the complaint to the site's hosting provider, including the site's origin IP address and instructions on how they can test to ensure that the site is, in fact, hosted on their network. We then respond to the complainant explaining how CloudFlare works, how we've relayed their complaint, and providing the identity of the site's actual host (although not the site's actual IP address).</p><p>We are continuing to refine the process over time to maximize two goals: ensuring our customers are protected from attacks, and ensuring that we don't stand in the way of legitimate complainants. If you have suggestions on how we can improve the process while balancing these interests, we welcome your input.</p>
            <category><![CDATA[Malware]]></category>
            <category><![CDATA[Abuse]]></category>
            <guid isPermaLink="false">BgmPjJbaSEosbs2r0jJA0</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Protecting CloudFlare sites from phishing]]></title>
            <link>https://blog.cloudflare.com/protecting-cloudflare-sites-from-phishing/</link>
            <pubDate>Sun, 01 Jul 2012 19:28:00 GMT</pubDate>
            <description><![CDATA[ As the internet has grown, phishing attacks have continued to be a problem. While better awareness and focus on security has helped reduce their number, the RSA and other services tracking phishing still show that somewhere between 20-30,000 phishing attacks generally occur every month.  ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5VH028wOfpfTboLi31AMt2/2f7d63e2ef9625d5e21712d6a6fd2ec0/Safari-2.jpg.scaled500.jpg" />
            
            </figure><p>Photo credit: <a href="http://www.flickr.com/photos/damonbillian/7139602929/">dbillian, on Flickr</a></p><p><i><b>Editor's Note:</b></i> This post was co-authored with <a href="https://www.cloudflare.com/people">Ray Bejjani</a>, the CloudFlare engineer who led this project.</p><p>As the Internet has grown, phishing attacks have continued to be a problem. While better awareness and a greater focus on security have helped reduce their number, <a href="http://www.rsa.com/phishing_reports.aspx%20">RSA</a> and other services tracking phishing still show that somewhere between <a href="http://www.rsa.com/phishing_reports.aspx%20">20,000 and 30,000 phishing attacks</a> generally occur <i><b>every month</b></i>. In the past, criminals launched most of these phishing attacks through email, but they have now switched to hacking sites to better hide their tracks and dupe more unsuspecting consumers into divulging their personal information to fraudsters. This can become a difficult problem for less technical users who wish to participate in and contribute to the World Wide Web.</p><p>Given our position in the Internet ecosystem, we often receive phishing reports via our abuse channel. Previously, this meant a manual process to notify the appropriate parties, especially the site owner. We felt we could serve our customers better, and their customers in turn, with a solution unique to CloudFlare. When we identify a phishing URL, we notify the site owner and provide summary information and notifications when they log in. We also begin serving a warning page in place of the bad URL. This page can be bypassed by the visitor at their preference.</p>
    <div>
      <h3>Why did CloudFlare create this new anti-phishing process?</h3>
      <a href="#why-did-cloudflare-create-this-new-anti-phishing-process">
        
      </a>
    </div>
    <ul><li><p>Our mission is to help make the web faster and safer.</p></li><li><p>We can block phishing pages as soon as they are reported to us. This provides the following benefits to sites using CloudFlare:</p><ul><li><p>Stops site visitors from potentially falling victim to identity theft</p></li><li><p>Stops site owners from being penalized for unknowingly hosting phishing content. Many search engines will blocklist your site if you're hosting malicious content, which only compounds the issue for site owners who don't know that they have been compromised</p></li><li><p>We can notify site owners about the issue quickly so they can clean up the malicious files.</p></li></ul></li></ul>
    <div>
      <h3>How do I protect my site from future hacks and phishing attempts?</h3>
      <a href="#how-do-i-protect-my-site-from-future-hacks-and-phishing-attempts">
        
      </a>
    </div>
    <p>If you're interested in protecting your site, whether it has been hacked or not, you can take the following steps to secure it:</p><ol><li><p>Use <a href="https://www.cloudflare.com/features-security">SSL</a> on your site to encrypt information between your visitors and your site.</p></li><li><p>In addition to the <a href="https://www.cloudflare.com/features-security">layer of security</a> CloudFlare already provides, we have partnered with a number of <a href="https://www.cloudflare.com/apps">app providers</a> that can help further protect site owners from malicious intrusions and provide additional site monitoring.</p></li><li><p>Always update your site's CMS platform, plugins, and server software. If your provider notifies you that a software update is available, that update probably fixes known exploits discovered since the last release of the plugin or platform. Since these updates often take only a few minutes, you can save yourself from a potential world of hurt by doing them "now" instead of "later".</p></li></ol>
    <div>
      <h4>What should I do if CloudFlare has notified me of phishing pages on my site?</h4>
      <a href="#what-should-i-do-if-cloudflare-has-notified-me-of-phishing-pages-on-my-site">
        
      </a>
    </div>
    <p>CloudFlare will send you an email identifying the pages on your site that contain phishing content. You will also see a message on your 'My Websites' page when a domain has pages blocked for a phishing report, with a link to more information about the report.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4NSKQEaIFoFW7KxZ17M4Vy/5cbe53d4aed5b44f2a9962f46e479569/Unknown.jpeg.scaled500.jpg" />
            
            </figure><p><a href="http://www.flickr.com/photos/damonbillian/7133443373/">Photo by dbillian, on Flickr</a></p>
    <div>
      <h4>Steps you should take if you receive an email from us or see a message on your dashboard:</h4>
      <a href="#steps-you-should-take-if-you-receive-an-email-from-us-or-see-a-message-on-your-dashboard">
        
      </a>
    </div>
    <ol><li><p>If you are an experienced web administrator, chances are you already know how to remove the page(s) from your site. Remove the pages in question and then request a review that will be processed by the abuse team.</p></li><li><p>If you are not an experienced web administrator, we would recommend that you contact your hosting provider for assistance in removing the pages. You should then request the review so we can confirm the phishing pages have been removed.</p></li></ol>
    <div>
      <h3>Where should I report a site on CloudFlare that has a phishing page?</h3>
      <a href="#where-should-i-report-a-site-on-cloudflare-that-has-a-phishing-page">
        
      </a>
    </div>
    <p>If you see a site on the CloudFlare network that has a phishing page, please report the site to us via our <a href="https://www.cloudflare.com/abuse/">abuse form</a>.</p><p>Please be sure to include the following in the report:</p><ul><li><p>The domain in question</p></li><li><p>The URL of the actual page where the phishing content is located</p></li></ul><p>We have more exciting things on the way to help protect Internet users from phishing and malware. Please stay tuned to CloudFlare updates for more developments.</p> ]]></content:encoded>
            <category><![CDATA[Phishing]]></category>
            <category><![CDATA[Abuse]]></category>
            <guid isPermaLink="false">7MzLEJUOfxNBzChii91VhR</guid>
            <dc:creator>Damon Billian</dc:creator>
        </item>
    </channel>
</rss>