Announcing the CSAM Scanning Tool, Free for All Cloudflare Customers

12/18/2019

10 min read

Two weeks ago we wrote about Cloudflare's approach to dealing with child sexual abuse material (CSAM). We first began working with the National Center for Missing and Exploited Children (NCMEC), the US-based organization that acts as a clearinghouse for removing this abhorrent content, within months of our public launch in 2010. Over the last nine years, our Trust & Safety team has worked with NCMEC, Interpol, and nearly 60 other public and private agencies around the world to design our program. And we are proud of the work we've done to remove CSAM from the Internet.

The most repugnant cases, in some ways, are the easiest for us to address. While Cloudflare is not able to remove content hosted by others, we will take steps to terminate services to a website when it becomes clear that the site is dedicated to sharing CSAM or if the operators of the website and its host fail to take appropriate steps to take down CSAM content. When we terminate websites, we purge our caches — something that takes effect within seconds globally — and we block the website from ever being able to use Cloudflare's network again.

Addressing the Hard Cases

The hard cases are when a customer of ours runs a service that allows user-generated content (such as a discussion forum) and a user uploads CSAM, or when the customer is hacked, or when a malicious employee stores CSAM on their servers. We've seen many instances where services intending to do the right thing are caught completely off guard by CSAM that ended up on their sites. Despite the absence of intent or malice in these cases, there’s still a need to identify and remove that content quickly.

Today we're proud to take a step to help deal with those hard cases. Beginning today, every Cloudflare customer can log in to their dashboard and enable access to the CSAM Scanning Tool. As the CSAM Scanning Tool moves through development to production, it will check all Internet properties that have enabled CSAM Scanning for this illegal content. Cloudflare will automatically send a notice to you when it flags CSAM, block that content from being accessed (with a 451 “blocked for legal reasons” status code), and take steps to support proper reporting of that content in compliance with legal obligations.

CSAM Scanning will be available via the Cloudflare dashboard at no cost for all customers regardless of their plan level. You can find this tool under the “Caching” tab in your dashboard. We're hopeful that by opening this tool to all our customers for free we can help do even more to counter CSAM online and help protect our customers from the legal and reputational risk that CSAM can pose to their businesses.

It has been a long journey to get to the point where we could commit to offering this service to our millions of users. To understand what we're doing and why it has been challenging from a technical and policy perspective, you need to understand a bit about the state of the art of tracking CSAM.

Finding Similar Images

Around the same time as Cloudflare was first conceived in 2009, a Dartmouth professor named Hany Farid was working on software that could compare images against a list of hashes maintained by NCMEC. Microsoft took the lead in creating a tool, PhotoDNA, that used Prof. Farid’s work to identify CSAM automatically.

In its earliest days, Microsoft used PhotoDNA for its own services internally and, in late 2009, donated the technology to NCMEC to help manage its use by other organizations. Social networks were some of the first adopters. In 2011, Facebook rolled out an implementation of the technology as part of their abuse process. Twitter incorporated it in 2014.

The process PhotoDNA uses is known as fuzzy hashing. Traditional hash algorithms like MD5, SHA-1, and SHA-256 take a file (such as an image or document) of arbitrary length and output a fixed-length number that is, effectively, the file’s digital fingerprint. For instance, if you take the MD5 of this picture then the resulting fingerprint is 605c83bf1bba62e85f4f5fccc56bc128.

The base image

If we change a single pixel in the picture to be slightly off white rather than pure white, it's visually identical but the fingerprint changes completely to 42ea4fb30a440d8787477c6c37b9daed. As you can see from the two fingerprints, a small change to the image results in a massive and unpredictable change to the output of a traditional hash.

The base image with a single pixel changed

This is great for some uses of hashing where you want to definitively identify if the document you're looking at is exactly the same as the one you've seen before. For example, if an extra zero is added to a digital contract, you want the hash of the document used in its signature to no longer be valid.
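To make the contrast concrete, here is a minimal sketch in Python, using the hashlib module and the Pillow imaging library, that reproduces the effect described above. The generated image is a stand-in rather than the photo shown in this post, so the digests will differ from the fingerprints above, but the behavior is the same: flip a single pixel and the traditional hash changes completely.

```python
# A minimal sketch of how a one-pixel change alters a traditional hash.
# The all-white image below is a stand-in, not the photo from this post,
# so the digests will differ from the fingerprints shown above.
import hashlib
from PIL import Image

# Create a small all-white image and hash its raw pixel bytes.
original = Image.new("RGB", (64, 64), color=(255, 255, 255))
original_digest = hashlib.md5(original.tobytes()).hexdigest()

# Change a single pixel to be slightly off white rather than pure white.
modified = original.copy()
modified.putpixel((0, 0), (254, 255, 255))
modified_digest = hashlib.md5(modified.tobytes()).hexdigest()

print(original_digest)  # one fingerprint...
print(modified_digest)  # ...and a completely different one
```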

Fuzzy Hashing

However, in the case of CSAM, this characteristic of traditional hashing is a liability. In order to avoid detection, the criminals producing CSAM resize, add noise, or otherwise alter the image in such a way that it looks the same but results in a radically different hash.

Fuzzy hashing works differently. Instead of determining whether two photos are exactly the same, it attempts to capture the essence of a photograph. This allows the software to calculate hashes for two images and then compare the "distance" between them. While the fuzzy hashes of two altered photographs may still be different, unlike with traditional hashing, you can compare the two and see how similar the images are.

So, in the two photos above, the fuzzy hash of the first image is

00e308346a494a188e1042333147267a
653a16b94c33417c12b433095c318012
5612442030d1484ce82c613f4e224733
1dd84436734e4a5c6e25332e507a8218
6e3b89174e30372d

and the second image is

00e308346a494a188e1042333147267a
653a16b94c33417c12b433095c318012
5612442030d1484ce82c613f4e224733
1dd84436734e4a5c6e25332e507a8218
6e3b89174e30372d

There's only a slight difference between the two images in terms of pixels, and the fuzzy hashes are identical.

The base image after increasing the saturation, changing to sepia, adding a border and then adding random noise.

Fuzzy hashing is designed to be able to identify images that are substantially similar. For example, we modified the image of the dogs by first increasing its saturation, then changing it to sepia, then adding a border, and finally adding random noise. The fuzzy hash of the new image is

00d9082d6e454a19a20b4e3034493278
614219b14838447213ad3409672e7d13
6e0e4a2033de545ce731664646284337
1ecd4038794a485d7c21233f547a7d2e
663e7c1c40363335

This looks quite different from the hash of the unchanged image above, but fuzzy hashes are compared by seeing how close they are to each other.

The largest possible distance between two images is about 5 million units. These two fuzzy hashes are just 4,913 units apart (the smaller the number, the more similar the images), indicating that they are substantially the same image.

Compare that with two unrelated photographs. The photograph below has a fuzzy hash of

011a0d0323102d048148c92a4773b60d
0d343c02120615010d1a47017d108b14
d36fff4561aebb2f088a891208134202
3e21ff5b594bff5eff5bff6c2bc9ff77
1755ff511d14ff5b

The photograph below has a fuzzy hash of

062715154080356b8a52505955997751
9d221f4624000209034f1227438a8c6a
894e8b9d675a513873394a2f3d000722
781407ff475a36f9275160ff6f231eff
465a17f1224006ff

The distance between the two hashes is calculated as 713,061. Through experimentation, it's possible to set a distance threshold under which you can consider two photographs to be likely related.
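The exact metric used to compare these fuzzy hashes isn't public (more on that below), but the thresholding logic itself is simple. Purely as an illustration, the sketch below assumes a straightforward sum of absolute byte differences between two hex-encoded hashes and an arbitrary example threshold; the real comparison function and the thresholds used in production will differ.

```python
# Illustrative only: the metric actually used to compare these hashes is not
# public. This sketch assumes a simple L1 distance (sum of absolute byte
# differences) and an arbitrary example threshold.
def fuzzy_distance(hash_a: str, hash_b: str) -> int:
    """Compare two hex-encoded fuzzy hashes byte by byte."""
    a, b = bytes.fromhex(hash_a), bytes.fromhex(hash_b)
    assert len(a) == len(b), "hashes must be the same length"
    return sum(abs(x - y) for x, y in zip(a, b))

MATCH_THRESHOLD = 10_000  # hypothetical cut-off, tuned through experimentation

def likely_same_image(hash_a: str, hash_b: str) -> bool:
    # Smaller distances mean more similar images.
    return fuzzy_distance(hash_a, hash_b) < MATCH_THRESHOLD
```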

Fuzzy Hashing's Intentionally Black Box

How does it work? While there has been a lot of published work on fuzzy hashing, the innards of the process are intentionally a bit of a mystery. The New York Times recently wrote a story that was probably the most public discussion of how such technology works. The challenge is that if the criminals who produce and distribute CSAM knew exactly how such tools worked, they might be able to craft alterations to their images specifically to defeat them. To be clear, Cloudflare will be running the CSAM Scanning Tool on behalf of the website operator from within our secure points of presence. We will not be distributing the software directly to users. We will remain vigilant for potential attempted abuse of the platform, and will take prompt action as necessary.
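While we can't show PhotoDNA itself, open-source perceptual hashes give a feel for the general approach. The sketch below uses the publicly available imagehash Python package (with Pillow) and placeholder file names purely as an analogy; it is not the algorithm the CSAM Scanning Tool runs.

```python
# An open-source analogy only: the `imagehash` package (pip install imagehash
# pillow) implements perceptual hashes such as pHash. It is not PhotoDNA and
# not what the CSAM Scanning Tool uses; the file names below are placeholders.
import imagehash
from PIL import Image

hash_original = imagehash.phash(Image.open("photo.jpg"))
hash_altered = imagehash.phash(Image.open("photo_resized_with_noise.jpg"))

# Subtracting two hashes gives the Hamming distance between them;
# small distances indicate visually similar images.
print(hash_original - hash_altered)
```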

Tradeoff Between False Negatives and False Positives

We have been working with a number of authorities on how we can best roll out this functionality to our customers. One of the challenges for a network with as diverse a set of customers as Cloudflare's is what the appropriate threshold should be for the comparison distance between the fuzzy hashes.

If the threshold is too strict, meaning that it's closer to a traditional hash and two images need to be virtually identical to trigger a match, then you're more likely to have many false negatives (i.e., CSAM that isn't flagged). If the threshold is too loose, then it's possible to have many false positives. False positives may seem like the lesser evil, but there are legitimate concerns that increasing the possibility of false positives at scale could waste limited resources and further overwhelm the existing ecosystem. We will work to iterate the CSAM Scanning Tool to provide more granular control to the website owner while supporting the ongoing effectiveness of the ecosystem. Today, we believe we can offer a good first set of options for our customers that will allow us to more quickly flag CSAM without overwhelming the resources of the ecosystem.

Different Thresholds for Different Customers

The same desire for a granular approach was reflected in our conversations with our customers. When we asked what was appropriate for them, the answer varied radically based on the type of business, how sophisticated its existing abuse process was, and its likely exposure level and tolerance for the risk of CSAM being posted on its site.

For instance, a mature social network using Cloudflare with a sophisticated abuse team may want the threshold set quite loose, but not want the material to be automatically blocked because they have the resources to manually review whatever is flagged.

A new startup dedicated to providing a forum to new parents may want the threshold set quite loose and want any hits automatically blocked because they haven't yet built a sophisticated abuse team and the risk to their brand is so high if CSAM is posted, even if that will result in some false positives.

A commercial financial institution may want to set the threshold quite strict because they're less likely to have user-generated content and would have a low tolerance for false positives, but then automatically block anything that's detected because, if their systems were somehow compromised to host known CSAM, they would want to stop it immediately.
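To illustrate how these different preferences might be expressed, here is a hypothetical configuration sketch for the three customers described above. The field names, values, and structure are illustrative only; they are not the actual settings exposed in the Cloudflare dashboard.

```python
# Hypothetical configuration sketch: these field names and values are
# illustrative, not Cloudflare's actual dashboard or API settings.
from dataclasses import dataclass

@dataclass
class CSAMScanningConfig:
    match_threshold: int  # looser (higher) catches more, with more false positives
    auto_block: bool      # serve a 451 automatically on a match
    notify_email: str     # where match notices are sent

# Mature social network with a large abuse team: loose threshold, manual review.
social_network = CSAMScanningConfig(match_threshold=100_000, auto_block=False,
                                    notify_email="abuse@example-social.com")

# New parenting forum: loose threshold, block automatically, accept false positives.
parenting_forum = CSAMScanningConfig(match_threshold=100_000, auto_block=True,
                                     notify_email="founders@example-forum.com")

# Financial institution: strict threshold, block anything detected immediately.
bank = CSAMScanningConfig(match_threshold=5_000, auto_block=True,
                          notify_email="security@example-bank.com")
```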

Different Requirements for Different Jurisdictions

There also may be challenges based on where our customers are located and the laws and regulations that apply to them. Depending on where a customer's business is located and where they have users, they may choose to use one, more than one, or all of the different available hash lists.

In other words, one size does not fit all and, ideally, we believe allowing individual site owners to set the parameters that make the most sense for their particular site will result in lower false negative rates (i.e., more CSAM being flagged) than if we try to set one global standard for every one of our customers.

Improving the Tool Over Time

Over time, we are hopeful that we can improve CSAM screening for our customers. We expect that we will add additional lists of hashes from numerous global agencies for our customers with users around the world to subscribe to. We're committed to enabling this flexibility without overly burdening the ecosystem that is set up to fight this horrible crime.

Finally, we believe there may be an opportunity to help build the next generation of fuzzy hashing. For example, the software can only scan images that are at rest in memory on a machine, not those that are streaming. We're talking with Hany Farid, the former Dartmouth professor who now teaches at the University of California, Berkeley, about ways that we may be able to build a more flexible fuzzy hashing system in order to flag images before they're even posted.

Concerns and Responsibility

One question we asked ourselves back when we began to consider offering CSAM scanning was whether we were the right place to be doing this at all. We share the universal concern about the distribution of depictions of horrific crimes against children and believe it should have no place on the Internet. However, Cloudflare is a network infrastructure provider, not a content platform.

But we thought there was an appropriate role for us to play in this space. Fundamentally, Cloudflare delivers tools to our more than 2 million customers that were previously reserved for only the Internet giants. Without us, the security, performance, and reliability services that we offer, often for free, would have been extremely expensive or limited to Internet giants like Facebook and Google.

Today there are startups that are working to build the next Internet giant and compete with Facebook and Google because they can use Cloudflare to be secure, fast, and reliable online. But, as the regulatory hurdles around dealing with incredibly difficult issues like CSAM continue to increase, many of them lack access to sophisticated tools to scan proactively for CSAM. You have to get big to get into the club that gives you access to these tools, and, concerningly, being in the club is increasingly a prerequisite to getting big.

If we want more competition for the Internet giants we need to make these tools available more broadly and to smaller organizations. From that perspective, we think it makes perfect sense for us to help democratize this powerful tool in the fight against CSAM.

We hope this will help enable our customers to build more sophisticated content moderation teams appropriate for their own communities and will allow them to scale in a responsible way to compete with the Internet giants of today. That is directly aligned with our mission of helping build a better Internet, and it's why we're announcing that we will be making this service available for free for all our customers.
