
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 06:53:25 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Improving the trustworthiness of Javascript on the Web]]></title>
            <link>https://blog.cloudflare.com/improving-the-trustworthiness-of-javascript-on-the-web/</link>
            <pubDate>Thu, 16 Oct 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ There's no way to audit a site’s client-side code as it changes, making it hard to trust sites that use cryptography. We preview a specification we co-authored that adds auditability to the web. ]]></description>
<content:encoded><![CDATA[ <p>The web is the most powerful application platform in existence. As long as you have the <a href="https://developer.mozilla.org/en-US/docs/Web/API"><u>right API</u></a>, you can safely run anything you want in a browser.</p><p>Well… anything but cryptography.</p><p>It is as true today as it was in 2011 that <a href="https://web.archive.org/web/20200731144044/https://www.nccgroup.com/us/about-us/newsroom-and-events/blog/2011/august/javascript-cryptography-considered-harmful/"><u>Javascript cryptography is Considered Harmful</u></a>. The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates <a href="https://www.cloudflare.com/learning/ssl/what-is-a-cryptographic-key/"><u>cryptographic keys</u></a> in the client’s browser that let users view and send <a href="https://en.wikipedia.org/wiki/End-to-end_encryption"><u>end-to-end encrypted</u></a> messages to each other. If the application is compromised, what would stop the attacker from simply modifying the Javascript to exfiltrate messages?</p><p>Notably, smartphone apps don’t have this issue. This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide <b>integrity</b>, ensuring that apps being delivered are not tampered with, <b>consistency</b>, ensuring all users get the same app, and <b>transparency</b>, ensuring that the record of versions of an app is truthful and publicly visible.</p><p>It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. 
For example, many web-based confidential <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/"><u>LLMs</u></a>, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains.</p><p>In this post, we will provide an early look at such a system, called <a href="https://github.com/beurdouche/explainers/blob/main/waict-explainer.md"><u>Web Application Integrity, Consistency, and Transparency</u></a> (WAICT), which we have helped author. WAICT is a <a href="https://www.w3.org/"><u>W3C</u></a>-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web. We will discuss the problem we need to solve, and build up to a solution resembling the <a href="https://github.com/rozbb/draft-waict-transparency"><u>current transparency specification draft</u></a>. We hope to build even wider consensus on the solution design in the near future.</p>
    <div>
      <h2>Defining the Web Application</h2>
      <a href="#defining-the-web-application">
        
      </a>
    </div>
    <p>In order to talk about security guarantees of a web application, it is first necessary to define precisely what the application <i>is</i>. A smartphone application is essentially just a zip file. But a website is made up of interlinked assets, including HTML, Javascript, WASM, and CSS, that can each be locally or externally hosted. Further, if any asset changes, it could drastically change the functioning of the application. A coherent definition of an application thus requires the application to commit to precisely the assets it loads. This is done using integrity features, which we describe now.</p>
    <div>
      <h3>Subresource Integrity</h3>
      <a href="#subresource-integrity">
        
      </a>
    </div>
    <p>An important building block for defining a single coherent application is <b>subresource integrity</b> (SRI). SRI is a feature built into most browsers that permits a website to specify the cryptographic hash of external resources, e.g.,</p>
            <pre><code>&lt;script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.13.7/underscore-min.js" integrity="sha512-dvWGkLATSdw5qWb2qozZBRKJ80Omy2YN/aF3wTUVC5+D1eqbA+TjWpPpoj8vorK5xGLMa2ZqIeWCpDZP/+pQGQ=="&gt;&lt;/script&gt;</code></pre>
            <p>This causes the browser to fetch <code>underscore.js</code> from <code>cdnjs.cloudflare.com</code> and verify that its SHA-512 hash matches the given hash in the tag. If they match, the script is loaded. If not, an error is thrown and nothing is executed.</p><p>If every external script, stylesheet, etc. on a page comes with an SRI integrity attribute, then the whole page is defined by just its HTML. This is close to what we want, but a web application can consist of many pages, and there is no way for a page to enforce the hash of the pages it links to.</p>
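<p>To make this concrete, here is a minimal Python sketch of how an SRI value is computed and checked. This is an illustration of the format, not the browser’s actual implementation:</p>

```python
import base64
import hashlib

def sri_value(data: bytes, alg: str = "sha512") -> str:
    # An SRI integrity value is the algorithm name, a dash, then the
    # base64-encoded raw digest of the asset bytes.
    digest = hashlib.new(alg, data).digest()
    return f"{alg}-{base64.b64encode(digest).decode()}"

def sri_check(data: bytes, integrity: str) -> bool:
    # Browser-side check: recompute the digest of the fetched bytes and
    # compare it to the integrity attribute from the tag.
    alg = integrity.split("-", 1)[0]
    return sri_value(data, alg) == integrity

script = b'console.log("hello");\n'
integrity = sri_value(script)                   # what a site operator publishes
assert sri_check(script, integrity)             # the untampered asset loads
assert not sri_check(script + b"x", integrity)  # any modification is rejected
```

<p>Real SRI also allows multiple space-separated values, with the browser preferring the strongest supported algorithm; those details are omitted here.</p>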
    <div>
      <h3>Integrity Manifest</h3>
      <a href="#integrity-manifest">
        
      </a>
    </div>
    <p>We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain. For this, WAICT defines an <b>integrity manifest</b>, a configuration file that websites can provide to clients. One important item in the manifest is the <b>asset hashes dictionary</b>, which maps the hash of each asset the browser might load from that domain to the path of that asset. Assets that may occur at any path, e.g., an error page, map to the empty string:</p>
            <pre><code>"hashes": {
  "81db308d0df59b74d4a9bd25c546f25ec0fdb15a8d6d530c07a89344ae8eeb02": "/assets/js/main.js",
  "fbd1d07879e672fd4557a2fa1bb2e435d88eac072f8903020a18672d5eddfb7c": "/index.html",
  "5e737a67c38189a01f73040b06b4a0393b7ea71c86cf73744914bbb0cf0062eb": "/vendored/main.css",
  "684ad58287ff2d085927cb1544c7d685ace897b6b25d33e46d2ec46a355b1f0e": "",
  "f802517f1b2406e308599ca6f4c02d2ae28bb53ff2a5dbcddb538391cb6ad56a": ""
}
</code></pre>
            <p>The other main component of the manifest is the <b>integrity policy</b>, which tells the browser which asset types are being enforced and how strictly. For example, the policy below will:</p><ol><li><p>Block any script before it runs, if it is missing an SRI tag and does not appear in the hashes dictionary</p></li><li><p>Reject any WASM, possibly only after it has started running, if it is missing an SRI tag and does not appear in the hashes dictionary</p></li></ol>
            <pre><code>"integrity-policy": "blocked-destinations=(script), checked-destinations=(wasm)"</code></pre>
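<p>As an illustration of how a client might interpret this value, here is a hypothetical parser for the policy syntax shown above (the real grammar is fixed by the spec, not by this sketch):</p>

```python
def parse_integrity_policy(policy: str) -> dict:
    # Illustrative parser for "key=(item1 item2), key2=(...)" policy strings.
    parsed = {}
    for directive in policy.split(","):
        key, _, value = directive.strip().partition("=")
        parsed[key] = value.strip("()").split()
    return parsed

policy = parse_integrity_policy(
    "blocked-destinations=(script), checked-destinations=(wasm)")
assert policy == {"blocked-destinations": ["script"],
                  "checked-destinations": ["wasm"]}
```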
            <p>Put together, these make up the integrity manifest:</p>
            <pre><code>"manifest": {
  "version": 1,
  "integrity-policy": ...,
  "hashes": ...
}
</code></pre>
            <p>Thus, when both SRI and an integrity manifest are used, the entire site and its interpretation by the browser are uniquely determined by the hash of the integrity manifest. This is exactly what we wanted: we have distilled the problem of endowing a web application with authenticity, consistent distribution, and so on, into the problem of endowing a single hash with those same properties.</p>
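<p>The browser-side manifest check described above can be sketched in a few lines. The hash function and hex encoding here are illustrative assumptions:</p>

```python
import hashlib

# Toy manifest: maps asset hashes to their expected paths, with "" meaning
# "may occur at any path" (e.g., an error page), as described above.
manifest_hashes = {
    hashlib.sha256(b"console.log('app');").hexdigest(): "/assets/js/main.js",
    hashlib.sha256(b"Not Found").hexdigest(): "",
}

def asset_allowed(path: str, body: bytes) -> bool:
    expected_path = manifest_hashes.get(hashlib.sha256(body).hexdigest())
    if expected_path is None:
        return False                  # hash absent: reject (or report) the asset
    return expected_path in ("", path)

assert asset_allowed("/assets/js/main.js", b"console.log('app');")
assert asset_allowed("/any/path/at/all", b"Not Found")
assert not asset_allowed("/assets/js/main.js", b"evil();")
```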
    <div>
      <h2>Achieving Transparency</h2>
      <a href="#achieving-transparency">
        
      </a>
    </div>
    <p>Recall that a transparent web application is one whose code is stored in a publicly accessible, append-only log. This is helpful in two ways: 1) if a user is served malicious code and they learn about it, there is a public record of the code they ran, and so they can prove it to external parties, and 2) if a user is served malicious code and they don’t learn about it, there is still a chance that an external auditor may comb through the historical web application code and find the malicious code anyway. Of course, transparency does not help detect malicious code or even prevent its distribution, but it at least makes it <i>publicly auditable</i>.</p><p>Now that we have a single hash that commits to an entire website’s contents, we can talk about ensuring that this hash ends up in a public log. We have several important requirements here:</p><ol><li><p><b>Do not break existing sites.</b> This one is a given. Whatever system gets deployed, it should not interfere with the correct functioning of existing websites. Participation in transparency should be strictly opt-in.</p></li><li><p><b>No added round trips.</b> Transparency should not cause extra network round trips between the client and the server. Otherwise there will be a network latency penalty for users who want transparency.</p></li><li><p><b>User privacy.</b> A user should not have to identify themselves to any party more than they already do. That means no connections to new third parties, and no sending identifying information to the website.</p></li><li><p><b>User statelessness.</b> A user should not have to store site-specific data. We do not want solutions that rely on storing or gossiping per-site cryptographic information.</p></li><li><p><b>Non-centralization.</b> There should not be a single point of failure in the system: if any single party experiences downtime, the system should still be able to make progress. 
Similarly, there should be no single point of trust: if a user distrusts any single party, the user should still receive all the security benefits of the system.</p></li><li><p><b>Ease of opt-in.</b> The barrier to entry for transparency should be as low as possible. A site operator should be able to start logging their site cheaply and without being an expert.</p></li><li><p><b>Ease of opt-out.</b> It should be easy for a website to stop participating in transparency. Further, to avoid accidental lock-in like the <a href="https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning#Criticism_and_decline"><u>defunct HPKP spec</u></a>, it should be possible for this to happen even if all cryptographic material is lost, e.g., in the seizure or selling of a domain.</p></li><li><p><b>Opt-out is transparent.</b> Because transparency is opt-in, it is possible for an attacker to disable the site’s transparency, serve malicious content, then enable transparency again. We must make sure this kind of attack is detectable, i.e., the act of disabling transparency must itself be logged somewhere.</p></li><li><p><b>Monitorability.</b> A website operator should be able to efficiently monitor the transparency information being published about their website. In particular, they should not have to run a high-network-load, always-on program just to notify them if their site has been hijacked.</p></li></ol><p>With these requirements in place, we can move on to construction. We introduce a data structure that will be essential to the design.</p>
    <div>
      <h3>Hash Chain</h3>
      <a href="#hash-chain">
        
      </a>
    </div>
    <p>Almost everything in transparency is built on an append-only log: a data structure that acts like a list and can produce two kinds of proofs. An <b>inclusion proof</b> shows that an element occurs at a particular index in the list. A <b>consistency proof</b> shows that a list is an extension of a previous version of the list, i.e., that no elements were modified or deleted, only added.</p><p>The simplest possible append-only log is a <b>hash chain</b>, a list-like data structure wherein each subsequent element is hashed into the running <i>chain hash</i>. The final chain hash is a succinct representation of the entire list.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nVcIeoKTYEd0hydT9jdpj/fd90a78cba46c1893058a7ff40a42fac/BLOG-2875_2.png" />
          </figure><p><sub>A hash chain. The green nodes represent the </sub><sub><i>chain hash</i></sub><sub>, i.e., the hash of the element below it, concatenated with the previous chain hash. </sub></p><p>The proof structures are quite simple. To prove inclusion of the element at index <code>i</code>, the prover provides the chain hash before <code>i</code> and all the elements after <code>i</code>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tYbCRVVTV3osE3lWs8YjG/be76d6022420ffa3d78b0180ef69bb1a/BLOG-2875_3.png" />
          </figure><p><sub>Proof of inclusion for the second element in the hash chain. The verifier knows only the final chain hash. It checks equality of the final computed chain hash with the known final chain hash. The light green nodes represent hashes that the verifier computes. </sub></p><p>Similarly, to prove consistency between the chains of size <code>i</code> and <code>j</code>, the prover provides the elements between <code>i</code> and <code>j</code>:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/rR4DMJIVxNxePI6DARtFD/e9da2930043864a4add3a74b699535d6/BLOG-2875_4.png" />
          </figure><p><sub>Proof of consistency of the chain of size one and chain of size three. The verifier has the chain hashes from the starting and ending chains. It checks equality of the final computed chain hash with the known ending chain hash. The light green nodes represent hashes that the verifier computes. </sub></p>
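<p>Both proofs can be sketched in a few lines of Python. SHA-256 and the all-zero empty-chain hash are illustrative choices, not spec requirements:</p>

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

EMPTY = b"\x00" * 32  # chain hash of the empty chain

def append(chain_hash: bytes, element: bytes) -> bytes:
    # New chain hash = H(previous chain hash || H(element))
    return h(chain_hash + h(element))

def verify_consistency(old_hash: bytes, new_hash: bytes, added: list) -> bool:
    # The prover supplies the elements appended between the two chain states;
    # the verifier replays them starting from the old chain hash.
    acc = old_hash
    for elem in added:
        acc = append(acc, elem)
    return acc == new_hash

def verify_inclusion(final_hash, prefix_hash, element, suffix: list) -> bool:
    # The prover supplies the chain hash just before the element, plus every
    # later element; inclusion is consistency from that point onward.
    return verify_consistency(append(prefix_hash, element), final_hash, suffix)

# A chain of three manifests.
chain, hashes = EMPTY, []
for m in [b"manifest-v1", b"manifest-v2", b"manifest-v3"]:
    chain = append(chain, m)
    hashes.append(chain)

# manifest-v2 is at index 1: prefix hash is hashes[0], suffix is [v3].
assert verify_inclusion(chain, hashes[0], b"manifest-v2", [b"manifest-v3"])
# The size-3 chain extends the size-1 chain.
assert verify_consistency(hashes[0], chain, [b"manifest-v2", b"manifest-v3"])
```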
    <div>
      <h3>Building Transparency</h3>
      <a href="#building-transparency">
        
      </a>
    </div>
    <p>We can use hash chains to build a transparency scheme for websites.</p>
    <div>
      <h4>Per-Site Logs</h4>
      <a href="#per-site-logs">
        
      </a>
    </div>
    <p>As a first step, let’s give every site its own log, instantiated as a hash chain (we will discuss how these all come together into one big log later). The items of the log are just the manifest of the site at a particular point in time:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/35o9mKoVPkOExKRFHRgWTu/305d589e0a584a3200670aab5b060c2b/BLOG-2875_5.png" />
          </figure><p><sub>A site’s hash chain-based log, containing three historical manifests. </sub></p><p>In reality, the log does not store the manifest itself, but the manifest hash. Sites designate an <b>asset host</b> that knows how to map hashes to the data they reference. This is a content-addressable storage backend, and can be implemented using strongly cached static hosting solutions.</p><p>A log on its own is not very trustworthy. Whoever runs the log can add and remove elements at will and then recompute the hash chain. To maintain the append-only-ness of the chain, we designate a trusted third party, called a <b>witness</b>. Given a hash chain consistency proof and a new chain hash, a witness:</p><ol><li><p>Verifies the consistency proof with respect to its old stored chain hash, and the new provided chain hash.</p></li><li><p>If successful, signs the new chain hash along with a signature timestamp.</p></li></ol><p>Now, when a user navigates to a website with transparency enabled, the sequence of events is:</p><ol><li><p>The site serves its manifest, an inclusion proof showing that the manifest appears in the log, and all the signatures from all the witnesses who have validated the log chain hash.</p></li><li><p>The browser verifies the signatures from whichever witnesses it trusts.</p></li><li><p>The browser verifies the inclusion proof. The manifest must be the newest entry in the chain (we discuss how to serve old manifests later).</p></li><li><p>The browser proceeds with the usual manifest and SRI integrity checks.</p></li></ol><p>At this point, the user knows that the given manifest has been recorded in a log whose chain hash has been saved by a trustworthy witness, so they can be reasonably sure that the manifest won’t be removed from history. 
Further, assuming the asset host functions correctly, the user knows that a copy of all the received code is readily available.</p><p><b>The need to signal transparency.</b> The above algorithm works, but we have a problem: if an attacker takes control of a site, they can simply stop serving transparency information and thus implicitly disable transparency without detection. So we need an explicit mechanism that keeps track of every website that has enrolled into transparency.</p>
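<p>The witness logic from the two numbered steps above can be sketched as follows. The HMAC is only a stand-in for a real signature scheme, and the signed message format is made up for illustration:</p>

```python
import hashlib
import hmac
import json

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def append(chain_hash: bytes, element: bytes) -> bytes:
    return h(chain_hash + h(element))

EMPTY = b"\x00" * 32

class Witness:
    def __init__(self, key: bytes):
        self._key = key      # stand-in for a real signing key
        self.last_seen = {}  # site -> last chain hash we signed

    def witness_update(self, site: str, new_hash: bytes, added: list, ts: int):
        # 1. Verify the consistency proof against our stored chain hash.
        acc = self.last_seen.get(site, EMPTY)
        for elem in added:
            acc = append(acc, elem)
        if acc != new_hash:
            return None      # history was rewritten: refuse to sign
        # 2. Record and sign the new chain hash along with a timestamp.
        self.last_seen[site] = new_hash
        msg = json.dumps({"site": site, "hash": new_hash.hex(), "ts": ts})
        return hmac.new(self._key, msg.encode(), hashlib.sha256).hexdigest()

w = Witness(b"secret")
h1 = append(EMPTY, b"manifest-v1")
assert w.witness_update("example.com", h1, [b"manifest-v1"], ts=1000)
h2 = append(h1, b"manifest-v2")
assert w.witness_update("example.com", h2, [b"manifest-v2"], ts=2000)
# A chain that drops v2 and substitutes something else is refused.
assert w.witness_update("example.com", append(h1, b"evil"), [], ts=3000) is None
```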
    <div>
      <h3>The Transparency Service</h3>
      <a href="#the-transparency-service">
        
      </a>
    </div>
    <p>To store all the sites enrolled into transparency, we want a global data structure that maps a site domain to the site log’s chain hash. One efficient way of representing this is a <b>prefix tree</b> (a.k.a., a <i>trie</i>). Every leaf in the tree corresponds to a site’s domain, and its value is the chain hash of that site’s log, the current log size, and the site’s asset host URL. For a site to prove validity of its transparency data, it will have to present an inclusion proof for its leaf. Fortunately, these proofs are efficient for prefix trees.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26ieMXRdvWIhLKv6J6Cdd7/29814a4a51c8ca8e3279e9b5756d0c67/BLOG-2875_6.png" />
          </figure><p><sub>A prefix tree with four elements. Each leaf’s path corresponds to a domain. Each leaf’s value is the chain hash of its site’s log. </sub></p><p>To add itself to the tree, a site proves possession of its domain to the <b>transparency service</b>, i.e., the party that operates the prefix tree, and provides an asset host URL. To update the entry, the site sends the new entry to the transparency service, which will compute the new chain hash. And to unenroll from transparency, the site just requests to have its entry removed from the tree (an adversary can do this too; we discuss how to detect this below).</p>
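<p>The transparency service’s three operations can be sketched as follows. A plain dictionary stands in for the prefix tree, and domain validation is omitted; field names are illustrative, not normative:</p>

```python
class TransparencyService:
    # Per-site entries, keyed by domain. The real structure is a prefix tree
    # with efficient inclusion proofs; a dict stands in for it here.
    def __init__(self):
        self.leaves = {}

    def enroll(self, domain: str, chain_hash: bytes, asset_host: str):
        # Domain-ownership validation is omitted from this sketch.
        self.leaves[domain] = {"chain_hash": chain_hash, "log_size": 1,
                               "asset_host": asset_host}

    def update(self, domain: str, new_chain_hash: bytes, new_size: int):
        leaf = self.leaves[domain]
        assert new_size > leaf["log_size"], "site logs may only grow"
        leaf["chain_hash"], leaf["log_size"] = new_chain_hash, new_size

    def unenroll(self, domain: str):
        del self.leaves[domain]  # later refined into a tombstone entry

svc = TransparencyService()
svc.enroll("example.com", b"\x01" * 32, "https://assets.example.com")
svc.update("example.com", b"\x02" * 32, 2)
assert svc.leaves["example.com"]["log_size"] == 2
svc.unenroll("example.com")
assert "example.com" not in svc.leaves
```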
    <div>
      <h4>Proving to Witnesses and Browsers</h4>
      <a href="#proving-to-witnesses-and-browsers">
        
      </a>
    </div>
    <p>Now witnesses only need to look at the prefix tree instead of individual site logs, and thus they must verify whole-tree updates. The most important thing to ensure is that every site’s log is append-only. So whenever the tree is updated, it must produce a “proof” containing every new/deleted/modified entry, as well as a consistency proof for each entry showing that the site log corresponding to that entry has been properly appended to. Once the witness has verified this prefix tree update proof, it signs the root.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2bq4UBxoOgKPcysPPxnKD8/9e8c2a8a3b092fffae853b8d477efb07/BLOG-2875_7.png" />
          </figure><p><sub>The sequence of updating a site’s assets and serving the site with transparency enabled.</sub></p><p>The client-side verification procedure is as in the previous section, with two modifications:</p><ol><li><p>The client now verifies two inclusion proofs: one for the manifest’s membership in the site log, and one for the site log’s membership in a prefix tree.</p></li><li><p>The client verifies the signature over the prefix tree root, since the witness no longer signs individual chain hashes. As before, the acceptable public keys are whichever witnesses the client trusts.</p></li></ol><p><b>Signaling transparency.</b> Now that there is a single source of truth, namely the prefix tree, a client can know a site is enrolled in transparency by simply fetching the site’s entry in the tree. This alone would work, but it violates our requirement of “no added round trips,” so we instead require that client browsers ship with the list of sites included in the prefix tree. We call this the <b>transparency preload list</b>. </p><p>If a site appears in the preload list, the browser will expect it to provide an inclusion proof in the prefix tree, or else a proof of non-inclusion in a newer version of the prefix tree, thereby showing it has unenrolled. The site must provide one of these proofs until the last preload list it appears in has expired. Finally, even though the preload list is derived from the prefix tree, there is nothing enforcing this relationship. Thus, the preload list should also be published transparently.</p>
    <div>
      <h4>Filling in Missing Properties</h4>
      <a href="#filling-in-missing-properties">
        
      </a>
    </div>
    <p>Remember we still have the requirements of monitorability, opt-out being transparent, and no single point of failure/trust. We fill in those details now.</p><p><b>Adding monitorability.</b> So far, in order for a site operator to ensure their site was not hijacked, they would have to constantly query every transparency service for its domain and verify that it hasn’t been tampered with. This is certainly better than the <a href="https://ct.cloudflare.com/"><u>500k events per hour</u></a> that Certificate Transparency (CT) monitors have to ingest, but it still requires the monitor to be constantly polling the prefix tree, and it imposes a constant load on the transparency service.</p><p>We add a field to the prefix tree leaf structure: the leaf now stores a “created” timestamp recording when the leaf was created. Witnesses ensure that the “created” field remains the same over all leaf updates (and it is deleted when the leaf is deleted). To monitor, a site operator need only keep the last observed “created” and “log size” fields of its leaf. If it fetches the latest leaf and sees both unchanged, it knows that no changes occurred since the last check.</p><p><b>Adding transparency of opt-out.</b> We must also do the same thing as above for leaf deletions. When a leaf is deleted, a monitor should be able to learn when the deletion occurred within some reasonable time frame. Thus, rather than outright removing a leaf, the transparency service responds to unenrollment requests by replacing the leaf with a <b>tombstone</b> value, containing just a “created” timestamp. 
As before, witnesses ensure that this field remains unchanged until the leaf is permanently deleted (after some visibility period) or re-enrolled.</p><p><b>Permitting multiple transparency services.</b> Since we require that there be no single point of failure or trust, we imagine an ecosystem where there are a handful of non-colluding, reasonably trustworthy transparency service providers, each with their own prefix tree. As with Certificate Transparency, this set should not be too large. It must be small enough that reasonable levels of trust can be established, and so that independent auditors can reasonably handle the load of verifying all of them.</p><p>Ok, that’s the end of the most technical part of this post. We’re now going to talk about how to tweak this system to provide all kinds of additional nice properties.</p>
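<p>To make the monitoring check concrete, here is a sketch; leaf field names are illustrative:</p>

```python
def leaf_unchanged(last_seen: dict, current: dict) -> bool:
    # If "created" matches, the leaf was never deleted and re-created; if
    # "log_size" also matches, nothing was appended (witnesses guarantee the
    # log never shrinks). Together: no change since the operator's last check.
    return (current is not None
            and not current.get("tombstone")
            and current["created"] == last_seen["created"]
            and current["log_size"] == last_seen["log_size"])

baseline = {"created": 1700000000, "log_size": 7}
assert leaf_unchanged(baseline, {"created": 1700000000, "log_size": 7})
# An append, or a delete-and-recreate, is immediately visible:
assert not leaf_unchanged(baseline, {"created": 1700000000, "log_size": 8})
assert not leaf_unchanged(baseline, {"created": 1700099999, "log_size": 7})
# As is unenrollment, which leaves a tombstone rather than nothing:
assert not leaf_unchanged(baseline, {"created": 1700050000, "tombstone": True})
```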
    <div>
      <h2>(Not) Achieving Consistency</h2>
      <a href="#not-achieving-consistency">
        
      </a>
    </div>
    <p>Transparency would be useless if, every time a site updates, it serves 100,000 new versions of itself. Any auditor would have to go through every single version of the code in order to ensure no user was targeted with malware. This is bad even if the velocity of versions is lower. If a site publishes just one new version per week, but every version from the past ten years is still servable, then users can still be served extremely old, potentially vulnerable versions of the site, without anyone knowing. Thus, in order to make transparency valuable, we need <b>consistency</b>, the property that every browser sees the same version of the site at a given time.</p><p>We will not achieve the strongest version of consistency, but it turns out that weaker notions are sufficient for us. If, unlike the above scenario, a site had 8 valid versions of itself at a given time, then that would be pretty manageable for an auditor. So even though it’s true that users don’t all see the same version of the site, they will all still benefit from transparency, as desired.</p><p>We describe two types of inconsistency and how we mitigate them.</p>
    <div>
      <h3>Tree Inconsistency</h3>
      <a href="#tree-inconsistency">
        
      </a>
    </div>
    <p>Tree inconsistency occurs when transparency services’ prefix trees disagree on the chain hash of a site, thus disagreeing on the history of the site. One way to fully eliminate this is to establish a consensus mechanism for prefix trees. A simple one is majority voting: if there are five transparency services, a site must present three tree inclusion proofs to a user, showing the chain hash is present in three trees. This, of course, triples the tree inclusion proof size, and lowers the fault tolerance of the entire system (if three log operators go down, then no transparent site can publish any updates).</p><p>Instead of consensus, we opt to simply limit the amount of inconsistency by limiting the number of transparency services. In 2025, Chrome <a href="https://www.gstatic.com/ct/log_list/v3/log_list.json"><u>trusts</u></a> eight Certificate Transparency logs. A similar number of transparency services would be fine for our system. Plus, it is still possible to detect and prove the existence of inconsistencies between trees, since roots are signed by witnesses. So if publishing the same chain hash to every tree becomes the norm, social pressure can be applied to sites that deviate.</p>
    <div>
      <h3>Temporal Inconsistency</h3>
      <a href="#temporal-inconsistency">
        
      </a>
    </div>
    <p>Temporal inconsistency occurs when a user gets a newer or older version of the site (both still unexpired), depending on some external factors such as geographic location or cookie values. In the extreme, as stated above, if a signed prefix root is valid for ten years, then a site can serve a user any version of the site from the last ten years.</p><p>As with tree inconsistency, this can be resolved using consensus mechanisms. If, for example, the latest manifest were published on a blockchain, then a user could fetch the latest blockchain head and ensure they got the latest version of the site. However, this incurs an extra network round trip for the client, and requires sites to wait for their hash to get published on-chain before they can update. More importantly, building this kind of consensus mechanism into our specification would drastically increase its complexity. We’re aiming for v1.0 here.</p><p>We mitigate temporal inconsistency by requiring reasonably short validity periods for witness signatures. Making prefix root signatures valid for, e.g., one week would drastically limit the number of simultaneously servable versions. The cost is that site operators must now query the transparency service at least once a week for the new signed root and inclusion proof, even if nothing in the site changed. The sites cannot skip this, and the transparency service must be able to handle this load. This parameter must be tuned carefully.</p>
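<p>Client-side, the mitigation is just a freshness check on the witness signature. One week is the example period from above, not a fixed parameter:</p>

```python
WITNESS_SIG_VALIDITY = 7 * 24 * 3600  # example: one week, in seconds

def root_signature_fresh(sig_timestamp: int, now: int) -> bool:
    # Reject roots signed too long ago (stale site versions can no longer be
    # served) or in the future (a misbehaving or badly skewed witness clock).
    age = now - sig_timestamp
    return 0 <= age <= WITNESS_SIG_VALIDITY

now = 1_700_000_000
assert root_signature_fresh(now - 3600, now)               # an hour old: fine
assert not root_signature_fresh(now - 8 * 24 * 3600, now)  # eight days: stale
assert not root_signature_fresh(now + 60, now)             # future-dated
```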
    <div>
      <h2>Beyond Integrity, Consistency, and Transparency</h2>
      <a href="#beyond-integrity-consistency-and-transparency">
        
      </a>
    </div>
    <p>Providing integrity, consistency, and transparency is already a huge endeavor, but there are some additional app store-like security features that can be integrated into this system without too much work.</p>
    <div>
      <h3>Code Signing</h3>
      <a href="#code-signing">
        
      </a>
    </div>
    <p>One problem that WAICT doesn’t solve is that of <i>provenance</i>: where did the code the user is running come from, precisely? In settings where audits of code happen frequently, this is not so important, because some third party will be reading the code regardless. But for smaller self-hosted deployments of open-source software, this may not be viable. For example, if Alice hosts her own version of <a href="https://cryptpad.org/"><u>Cryptpad</u></a> for her friend Bob, how can Bob be sure the code matches the real code in Cryptpad’s GitHub repo?</p><p><b>WEBCAT.</b> The folks at the Freedom of the Press Foundation (FPF) have built a solution to this, called <a href="https://securedrop.org/news/introducing-webcat-web-based-code-assurance-and-transparency/"><u>WEBCAT</u></a>. This protocol allows site owners to announce the identities of the developers that have signed the site’s integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user. Users with the WEBCAT plugin can then see the developers’ <a href="https://www.sigstore.dev/"><u>Sigstore</u></a> signatures, and trust the code based on that.</p><p>We’ve made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components. Concretely, we permit manifests to hold additional metadata, which we call <b>extensions</b>. In this case, the extension holds a list of developers’ Sigstore identities. To be useful, browsers must expose an API for browser plugins to access these extension values. With this API, independent parties can build plugins for whatever feature they wish to layer on top of WAICT.</p>
    <div>
      <h2>Cooldown</h2>
      <a href="#cooldown">
        
      </a>
    </div>
    <p>So far we have not built anything that can prevent attacks in the moment. An attacker who breaks into a website can still delete any code-signing extensions, or just unenroll the site from transparency entirely, and continue with their attack as normal. The unenrollment will be logged, but the malicious code will not be, and by the time anyone sees the unenrollment, it may be too late.</p><p>To prevent spontaneous unenrollment, we can enforce <b>unenrollment cooldown</b> client-side. Suppose the cooldown period is 24 hours. Then the rule is: if a site appears on the preload list, then the client will require that either 1) the site have transparency enabled, or 2) the site have a tombstone entry that is at least 24 hours old. Thus, an attacker will be forced to either serve a transparency-enabled version of the site, or serve a broken site for 24 hours.</p><p>Similarly, to prevent spontaneous extension modifications, we can enforce <b>extension cooldown</b> on the client. We will take code signing as an example, saying that any change in developer identities requires a 24-hour waiting period to be accepted. First, we require that the <code>dev-ids</code> extension have a preload list of its own, letting the client know which sites have opted into code signing (if a preload list doesn’t exist then any site can delete the extension at any time). The client rule is as follows: if the site appears in the preload list, then both 1) <code>dev-ids</code> must exist as an extension in the manifest, and 2) <code>dev-ids-inclusion</code> must contain an inclusion proof showing that the current value of <code>dev-ids</code> was in a prefix tree that is at least 24 hours old. With this rule, a client will reject values of <code>dev-ids</code> that are newer than a day. 
If a site wants to delete <code>dev-ids</code>, they must 1) request that it be removed from the preload list, and 2) in the meantime, replace the dev-ids value with the empty string and update <code>dev-ids-inclusion</code> to reflect the new value.</p>
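    <p>To make the unenrollment-cooldown rule concrete, here is a small Rust sketch of the client-side check. This is our own illustrative rendering, not normative spec text; the function name and the representation of the tombstone’s age are assumptions.</p>

```rust
// Illustrative sketch of the unenrollment-cooldown rule (not normative):
// a preloaded site must either still serve transparency, or present a
// tombstone that has already aged past the cooldown window.
const COOLDOWN_HOURS: u64 = 24;

fn accept_site(
    on_preload_list: bool,
    transparency_enabled: bool,
    tombstone_age_hours: Option<u64>, // None = no tombstone entry
) -> bool {
    if !on_preload_list {
        // The site never opted in; no transparency requirement applies.
        return true;
    }
    if transparency_enabled {
        return true;
    }
    // Unenrolled: only accept once the tombstone is old enough.
    matches!(tombstone_age_hours, Some(age) if age >= COOLDOWN_HOURS)
}

fn main() {
    // An attacker who just unenrolled the site is rejected for 24 hours.
    assert!(!accept_site(true, false, Some(1)));
    assert!(accept_site(true, false, Some(48)));
    assert!(accept_site(true, true, None));
    println!("cooldown rule behaves as expected");
}
```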
    <div>
      <h2>Deployment Considerations</h2>
      <a href="#deployment-considerations">
        
      </a>
    </div>
    <p>There are a lot of distinct roles in this ecosystem. Let’s sketch out the trust and resource requirements for each role.</p><p><b>Transparency service.</b> These parties store metadata for every transparency-enabled site on the web. If there are 100 million domains, and each entry is 256B (a few hashes, plus a URL), this comes out to about 26GB for a single tree, not including the intermediate hashes. To prevent size blowup, there would probably have to be a pruning rule that unenrolls sites after a long inactivity period. Transparency services should have largely uncorrelated downtime, since, if all services go down, no transparency-enabled site can make any updates. Thus, transparency services must have a moderate amount of storage, be relatively highly available, and have downtime periods uncorrelated with each other.</p><p>Transparency services require some trust, but their behavior is narrowly constrained by witnesses. Theoretically, a service can replace any leaf’s chain hash with its own, and the witness will validate it (as long as the consistency proof is valid). But such changes are detectable by anyone who monitors that leaf.</p><p><b>Witness.</b> These parties verify prefix tree updates and sign the resulting roots. Their storage costs are similar to those of a transparency service, since they must keep a full copy of a prefix tree for every transparency service they witness. Also like the transparency services, they must have high uptime. Witnesses must also be trusted to keep their signing key secret for a long period of time, at least long enough to permit browser trust stores to be updated when a new key is created.</p><p><b>Asset host.</b> These parties carry little trust. They cannot serve bad data, since any query response is hashed and compared to a known hash. The only malicious behavior an asset host can engage in is refusing to respond to queries. Asset hosts can also do this by accident due to downtime.</p><p><b>Client.</b> This is the most trust-sensitive part. The client is the software that performs all the transparency and integrity checks. This is, of course, the web browser itself. We must trust this.</p><p>We at Cloudflare would like to contribute what we can to this ecosystem. It should be possible for us to run both a transparency service and a witness. Of course, our witness should not monitor our own transparency service. Rather, we can witness other organizations’ transparency services, and our transparency service can be witnessed by other organizations.</p>
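    <p>The reason asset hosts carry so little trust comes down to a few lines of client-side code: every response is checked against a hash the client already knows from the integrity manifest. Below is a minimal self-contained sketch; note that Rust’s <code>DefaultHasher</code> stands in for a real cryptographic hash like SHA-256 purely to keep the example dependency-free.</p>

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest: a real client would use SHA-256. std's DefaultHasher is
// NOT cryptographic; it is used here only to keep the sketch self-contained.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// The client knows the expected hash from the integrity manifest, so a
// malicious or corrupted asset-host response is simply rejected.
fn verify_asset(response: &[u8], expected: u64) -> Result<(), &'static str> {
    if digest(response) == expected {
        Ok(())
    } else {
        Err("asset does not match manifest hash; refuse to run it")
    }
}

fn main() {
    let asset = b"console.log('hello')";
    let expected = digest(asset);
    assert!(verify_asset(asset, expected).is_ok());
    assert!(verify_asset(b"tampered", expected).is_err());
    println!("asset verified against manifest hash");
}
```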
    <div>
      <h3>Supporting Alternate Ecosystems</h3>
      <a href="#supporting-alternate-ecosystems">
        
      </a>
    </div>
    <p>WAICT should be compatible with non-standard ecosystems, ones where the large players do not really exist, or at least not in the way they usually do. We are working with the FPF on defining transparency for alternate ecosystems with different network and trust environments. The primary example we have is that of the Tor ecosystem.</p><p>A paranoid Tor user may not trust existing transparency services or witnesses, and there might not be any other trusted party with the resources to self-host these functionalities. For this use case, it may be reasonable to put the <a href="https://github.com/freedomofpress/webcat-infra-chain"><u>prefix tree on a blockchain somewhere</u></a>. This makes the usual domain validation impossible (there’s no validator server to speak of), but this is fine for onion services. Since an onion address is just a public key, a signature is sufficient to prove ownership of the domain.</p><p>One consequence of a consensus-backed prefix tree is that witnesses are now unnecessary, and there is only need for a single, canonical transparency service. This mostly solves the problem of tree inconsistency, at the expense of update latency.</p>
    <div>
      <h2>Next Steps</h2>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>We are still very early in the standardization process. One of the more immediate next steps is to get subresource integrity working for more data types, particularly WASM and images. After that, we can begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon.</p><p>In the meantime, you can follow along with our <a href="https://github.com/rozbb/draft-waict-transparency"><u>transparency specification draft</u></a>, check out the open problems, and share your ideas. Pull requests and issues are always welcome!</p>
    <div>
      <h2>Acknowledgements</h2>
      <a href="#acknowledgements">
        
      </a>
    </div>
    <p>Many thanks to Dennis Jackson from Mozilla for the lengthy back-and-forth meetings on design, to Giulio B and Cory Myers from FPF for their immensely helpful influence and feedback, and to Richard Hansen for great feedback.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Malicious JavaScript]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">6GRxDReQHjD7T0xpfjf7Iq</guid>
            <dc:creator>Michael Rosenberg</dc:creator>
        </item>
        <item>
            <title><![CDATA[Orange Me2eets: We made an end-to-end encrypted video calling app and it was easy]]></title>
            <link>https://blog.cloudflare.com/orange-me2eets-we-made-an-end-to-end-encrypted-video-calling-app-and-it-was/</link>
            <pubDate>Thu, 26 Jun 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Orange Meets, our open-source video calling web application, now supports end-to-end encryption using the MLS protocol with continuous group key agreement. ]]></description>
            <content:encoded><![CDATA[ <p>Developing a new video conferencing application often begins with a peer-to-peer setup using <a href="https://webrtc.org/"><u>WebRTC</u></a>, facilitating direct data exchange between clients. While effective for small demonstrations, this approach hits scalability hurdles as participants increase: each client must send its media to every other client (n-1 of them), so each client’s upload load grows in proportion to the number of users.</p><p>Selective Forwarding Units (SFUs) are essential to scaling video conferencing applications. Essentially a media stream routing hub, an SFU receives media and data flows from participants and intelligently determines which streams to forward. By strategically distributing media based on network conditions and participant needs, this mechanism minimizes bandwidth usage and greatly enhances scalability. Nearly every video conferencing application today uses SFUs.</p><p>In 2024, we announced <a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/"><u>Cloudflare Realtime</u></a> (then called Cloudflare Calls), our suite of WebRTC products, and we also released <a href="https://github.com/cloudflare/orange"><u>Orange Meets</u></a>, an open source video chat application built on top of our SFU.</p><p>We also realized that use of an SFU often comes with a privacy cost, as there is now a centralized hub that could see and listen to all the media contents, even though its sole job is to forward media bytes between clients as a data plane.</p><p>We believe end-to-end encryption should be the industry standard for secure communication, and that’s why today we’re excited to share that we’ve implemented and open sourced end-to-end encryption in Orange Meets. Our generic implementation is client-only, so it can be used with any WebRTC infrastructure. Finally, we verified our new <i>designated committer</i> distributed algorithm in a bounded model checker to confirm that it handles edge cases gracefully.</p>
    <div>
      <h2>End-to-end encryption for video conferencing is different than for text messaging</h2>
      <a href="#end-to-end-encryption-for-video-conferencing-is-different-than-for-text-messaging">
        
      </a>
    </div>
    <p>End-to-end encryption describes a secure communication channel whereby only the intended participants can read, see, or listen to the contents of the conversation, not anybody else. WhatsApp and iMessage, for example, are end-to-end-encrypted, which means that the companies operating those apps, and any infrastructure in between, can’t see the contents of your messages. </p><p>Whereas encrypted group chats are usually long-lived, highly asynchronous, low-bandwidth sessions, video and audio calls are short-lived, highly synchronous, and require high bandwidth. This difference comes with plenty of interesting tradeoffs, which influenced the design of our system.</p><p>The ephemeral nature of calls, compared to the persistent nature of group text messages, also influenced the way we designed E2EE for Orange Meets. In chat, users must be able to decrypt messages sent to them while they were offline (e.g. while taking a flight). This is not a problem for real-time communication.</p><p>The bandwidth limitations around audio/video communication and the use of an SFU prevented us from using some of the E2EE technologies already available for text messages. Apple’s iMessage, for example, encrypts a message N-1 times for an N-user group chat. We can't encrypt the video for each recipient, as that could saturate the upload capacity of Internet connections as well as slow down the client. Media has to be encrypted once and decrypted by each client, while ensuring that only the current participants of the call can do so.</p>
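    <p>Some back-of-the-envelope arithmetic shows why per-recipient encryption is a non-starter for video. The figures below (a 1 Mbps stream, a 10-person call) are our own illustrative numbers, not measurements from Orange Meets.</p>

```rust
// Back-of-the-envelope upload cost (illustrative numbers, not measurements).
fn upload_mbps(stream_mbps: f64, participants: u32, per_recipient: bool) -> f64 {
    if per_recipient {
        // iMessage-style: one ciphertext per other participant (N-1 copies).
        stream_mbps * (participants - 1) as f64
    } else {
        // Encrypt once; the SFU forwards the same ciphertext to everyone.
        stream_mbps
    }
}

fn main() {
    // A 1 Mbps stream in a 10-person call:
    println!("per-recipient: {} Mbps up", upload_mbps(1.0, 10, true)); // 9 Mbps
    println!("encrypt-once:  {} Mbps up", upload_mbps(1.0, 10, false)); // 1 Mbps
}
```

With per-recipient encryption, upload cost scales linearly with call size; encrypting once keeps it constant no matter how many participants join.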
    <div>
      <h2>Messaging Layer Security (MLS)</h2>
      <a href="#messaging-layer-security-mls">
        
      </a>
    </div>
    <p>Around the same time we were working on Orange Meets, we saw a lot of excitement around new apps being built with <a href="https://messaginglayersecurity.rocks/"><u>Messaging Layer Security</u></a> (MLS), an IETF-standardized protocol that describes how you can do a group key exchange in order to establish end-to-end-encryption for group communication. </p><p>Previously, the only way to achieve these properties was to essentially run your own fork of the <a href="https://signal.org/docs/"><u>Signal protocol</u></a>, which itself is more of a living protocol than a solidified standard. Since MLS is standardized, we’ve now seen multiple high-quality implementations appear, and we’re able to use them to achieve Signal-level security with far less effort.</p><p>Implementing MLS here wasn’t easy: it required a moderate amount of client modification, and the development and verification of an encrypted room-joining protocol. Nonetheless, we’re excited to be pioneering a standards-based approach that any customer can run on our network, and to share more details about how our implementation works. </p><p>We did not have to make any changes to the SFU to get end-to-end encryption working. Cloudflare’s SFU doesn’t care about the contents of the data forwarded on our data plane and whether it’s encrypted or not.</p>
    <div>
      <h2>Orange Meets: the basics </h2>
      <a href="#orange-meets-the-basics">
        
      </a>
    </div>
    <p>Orange Meets is a video calling application built on <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a> that uses the <a href="https://developers.cloudflare.com/realtime/calls-vs-sfus/"><u>Cloudflare Realtime SFU service</u></a> as the data plane. The roles played by the three main entities in the application are as follows:</p><ul><li><p>The <i>user</i> is a participant in the video call. They connect to the Orange Meets server and SFU, described below.</p></li><li><p>The <i>Orange Meets Server </i>is a simple service run on a Cloudflare Worker that runs the small-scale coordination logic of Orange Meets, which is concerned with which user is in which video call — called a <i>room </i>— and what the state of the room is. Whenever something in the room changes, like a participant joining or leaving, or someone muting themselves, the app server broadcasts the change to all room participants. You can use any backend server for this component; we just chose Cloudflare Workers for its convenience.</p></li><li><p>Cloudflare Realtime <i>Selective Forwarding Unit</i> (SFU) is a service that Cloudflare runs, which takes everyone’s audio and video and broadcasts it to everyone else. These connections are potentially lossy, using UDP for transmission. This is done because a dropped video frame from five seconds ago is not very important in the context of a video call, and so should not be re-sent, as it would be in a TCP connection.</p></li></ul>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61htaksggj580PqX02XoVB/3b0f1ed34ee681e41b2009257fdc8525/image2.png" />
          </figure><p><sup><i>The network topology of Orange Meets</i></sup></p><p>Next, we have to define what we mean by end-to-end encryption in the context of video chat.</p>
    <div>
      <h2>End-to-end encrypting Orange Meets </h2>
      <a href="#end-to-end-encrypting-orange-meets">
        
      </a>
    </div>
    <p>The most immediate way to end-to-end encrypt Orange Meets is to simply have the initial users agree on a symmetric encryption/decryption key at the beginning of a call, and just encrypt every video frame using that key. This is sufficient to hide calls from Cloudflare’s SFU. Some source-encrypted video conferencing implementations, such as <a href="https://jitsi.org/e2ee-in-jitsi/"><u>Jitsi Meet</u></a>, work this way.</p><p>The issue, however, is that kicking a malicious user from a call does not invalidate their key, since the keys are negotiated just once. A joining user learns the key that was used to encrypt video from before they joined. These failures are more formally referred to as failures of <i>post-compromise security</i> and <i>perfect forward secrecy</i>. When a protocol successfully implements these in a group setting, we call the protocol a <b>continuous group key agreement protocol</b>.</p><p>Fortunately for us, MLS is a continuous group key agreement protocol that works out of the box, and the nice folks at <a href="https://phnx.im/"><u>Phoenix R&amp;D</u></a> and <a href="https://cryspen.com/"><u>Cryspen</u></a> have a well-documented <a href="https://github.com/openmls/openmls/tree/main"><u>open-source Rust implementation</u></a> of most of the MLS protocol. </p><p>All we needed to do was write an MLS client and compile it to WASM, so we could decrypt video streams in-browser. We’re using WASM since that’s one way of running Rust code in the browser. If you’re running a video conferencing application on a desktop or mobile native environment, there are other MLS implementations in your preferred programming language.</p><p>Our setup for encryption is as follows:</p><p><b>Make a web worker for encryption.</b> We wrote a web worker in Rust that accepts a WebRTC video stream, broken into individual frames, and encrypts each frame. This code is quite simple, as it’s just an MLS encryption:</p>
            <pre><code>group.create_message(
	&amp;self.mls_provider,
	self.my_signing_keys.as_ref()?,
	frame,
)</code></pre>
            <p><b>Postprocess outgoing audio/video.</b> We take our normal stream and, using some newer features of the <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API"><u>WebRTC API</u></a>, add a transform step to it. This transform step simply sends the stream to the worker:</p>
            <pre><code>const senderStreams = sender.createEncodedStreams()
const { readable, writable } = senderStreams
this.worker.postMessage(
  {
    type: 'encryptStream',
    in: readable,
    out: writable,
  },
  [readable, writable]
)</code></pre>
            <p>And the same for decryption:</p>
            <pre><code>const receiverStreams = receiver.createEncodedStreams()
const { readable, writable } = receiverStreams
this.worker.postMessage(
  {
    type: 'decryptStream',
    in: readable,
    out: writable,
  },
  [readable, writable]
)</code></pre>
            <p>Once we do this for both audio and video streams, we’re done.</p>
    <div>
      <h2>Handling different codec behaviors</h2>
      <a href="#handling-different-codec-behaviors">
        
      </a>
    </div>
    <p>The streams are now encrypted before sending and decrypted before rendering, but the browser doesn’t know this. To the browser, the stream is still an ordinary video or audio stream. This can cause errors to occur in the browser’s depacketizing logic, which expects to see certain bytes in certain places, depending on the codec. This results in some extremely cypherpunk artifacts every dozen seconds or so:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72baLJkLPZPdbjHjGVxSU5/2ea34b02826aacc2b23086b463a4938f/image3.png" />
          </figure><p>Fortunately, this exact issue was discovered by engineers at Discord, who handily documented it in their <a href="https://github.com/discord/dave-protocol/blob/main/protocol.md"><u>DAVE</u></a> E2EE videocalling protocol. For the VP8 codec, which we use by default, the solution is simple: split off the first 1–10 bytes of each packet, and send them unencrypted:</p>
            <pre><code>fn split_vp8_header(frame: &amp;[u8]) -&gt; Option&lt;(&amp;[u8], &amp;[u8])&gt; {
    // If this is a keyframe, keep 10 bytes unencrypted. Otherwise, 1 is enough
    let first = *frame.first()?; // empty frames yield None instead of panicking
    let is_keyframe = first &gt;&gt; 7 == 0;
    let unencrypted_prefix_size = if is_keyframe { 10 } else { 1 };
    frame.split_at_checked(unencrypted_prefix_size)
}</code></pre>
            <p>These bytes are not particularly important to encrypt, since they only contain versioning info, whether or not this frame is a keyframe, some constants, and the width and height of the video.</p><p>And that’s truly it for the stream encryption part! The only thing remaining is to figure out how we will let new users join a room.</p>
    <div>
      <h2>“Join my Orange Meet” </h2>
      <a href="#join-my-orange-meet">
        
      </a>
    </div>
    <p>Usually, the only way to join a call is to click a link. And since the call is end-to-end encrypted, a joining user needs to have some cryptographic information in order to decrypt any messages. How do they receive this information, though? There are a few options.</p><p>DAVE does it by using an MLS feature called <i>external proposals</i>. In short, the Discord server registers itself as an <i>external sender</i>, i.e., a party that can send administrative messages to the group, but cannot receive any. When a user wants to join a room, they provide their own cryptographic material, called a <i>key package</i>, and the server constructs and sends an MLS <a href="https://www.rfc-editor.org/rfc/rfc9420.html#section-12.1.8"><u>External Add message</u></a> to the group to let them know about the new user joining. Eventually, a group member will <i>commit</i> this External Add, sending the joiner a <i>Welcome</i> message containing all information necessary to send and receive video.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1gQm3r3Bai8Rks4M82JuSh/87ff851a12505f5c17c241e3f1eade6a/image4.png" />
          </figure><p><sup><i>A user joining a group via MLS external proposals. Recall the Orange Meets app server functions as a broadcast channel for the whole group. We consider a group of 3 members. We write member #2 as the one committing to the proposal, but this can be done by any member. Member #2 also sends a Commit message to the other members, but we omit this for space.</i></sup><sup>  </sup></p><p>This is a perfectly viable way to implement room joining, but implementing it would require us to extend the Orange Meets server logic to have some concept of MLS. Since part of our goal is to keep things as simple as possible, we would like to do all our cryptography client-side.</p><p>So instead we do what we call the <i>designated committer</i> algorithm. When a user joins a group, they send their cryptographic material to one group member, the <i>designated committer</i>, who then constructs and sends the Add message to the rest of the group. Similarly, when notified of a user’s exit, the designated committer constructs and sends a Remove message to the rest of the group. With this setup, the server’s job remains nothing more than broadcasting messages! It’s quite simple too—the full implementation of the designated committer state machine comes out to <a href="https://github.com/cloudflare/orange/blob/66e80d6d9146e2aedd4668e581810c0ee6aeb4a0/rust-mls-worker/src/mls_ops.rs#L90-L446"><u>300 lines of Rust</u></a>, including the MLS boilerplate, and it’s about as efficient.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3k3U7kFcYTwY81XzSrggt8/c27945dec311f251493826542704d370/image1.png" />
          </figure><p><sup><i>A user joining a group via the designated committer algorithm.</i></sup></p><p>One cool property of the designated committer algorithm is that something like this isn’t possible in a text group chat setting, since any given user (in particular, the designated committer) may be offline for an arbitrary period of time. Our method works because it leverages the fact that video calls are an inherently synchronous medium.</p>
    <div>
      <h3>Verifying the Designated Committer Algorithm with TLA<sup>+</sup></h3>
      <a href="#verifying-the-designated-committer-algorithm-with-tla">
        
      </a>
    </div>
    <p>The designated committer algorithm is a pretty neat simplification, but it comes with some non-trivial edge cases that we need to make sure we handle, such as:</p><ul><li><p><i>How do we make sure there is only one designated committer at a time?</i> The designated committer is the alive user with the smallest index in the MLS group state, which all users share.</p></li><li><p><i>What happens if the designated committer exits?</i> Then the next user will take its place. Every user keeps track of pending Adds and Removes, so it can continue where the previous designated committer left off.</p></li><li><p><i>If a user has not caught up to all messages, could they think they’re the designated committer?</i> No, they have to believe first that all prior eligible designated committers are disconnected.</p></li></ul><p>To make extra sure that this algorithm was correct, we formally modeled it and put it through the <a href="https://lamport.azurewebsites.net/tla/high-level-view.html"><u>TLA</u><u><sup>+</sup></u></a> model checker. To our surprise, it caught some low-level bugs! In particular, it found that, if the designated committer dies while adding a user, the protocol does not recover. We fixed these by breaking up MLS operations and enforcing a strict ordering on messages locally (e.g., a Welcome is always sent before its corresponding Add).</p><p>You can find an explainer, lessons learned, and the full <a href="https://learntla.com/core/index.html"><u>PlusCal</u></a> program (a high-level language that compiles to TLA<sup>+</sup>) <a href="https://github.com/cloudflareresearch/orange-e2ee-model-check"><u>here</u></a>. The caveat, as with any use of a bounded model checker, is that the checking is, well, bounded. We verified that no invalid protocol states are possible in a group of up to five users. We think this is good evidence that the protocol is correct for an arbitrary number of users. 
Because there are only two distinct roles in the protocol (designated committer and other group member), any weird behavior ought to be reproducible with two or three users, max.</p>
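    <p>The first rule above, that the designated committer is the alive user with the smallest index in the shared group state, is simple enough to sketch in a few lines of Rust. This is our own simplification for illustration; the real state machine in the Orange Meets repo also tracks pending Adds and Removes.</p>

```rust
// A member slot in the shared MLS-group view: index plus liveness.
struct Member {
    index: u32,
    alive: bool,
}

// Illustrative sketch: the designated committer is the alive member with the
// smallest index in the shared group state. Because every client sees the
// same state, every client agrees on who the committer is.
fn designated_committer(members: &[Member]) -> Option<u32> {
    members.iter().filter(|m| m.alive).map(|m| m.index).min()
}

fn main() {
    let members = vec![
        Member { index: 0, alive: false }, // previous committer disconnected
        Member { index: 1, alive: true },  // takes over, continues pending ops
        Member { index: 2, alive: true },
    ];
    assert_eq!(designated_committer(&members), Some(1));
    println!("member 1 is the designated committer");
}
```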
    <div>
      <h2>Preventing Monster-in-the-Middle attacks</h2>
      <a href="#preventing-monster-in-the-middle-attacks">
        
      </a>
    </div>
    <p>One important concern to address in any end-to-end encryption setup is how to prevent the service provider from replacing users’ key packages with their own. If the Orange Meets app server did this, and colluded with a malicious SFU to decrypt and re-encrypt video frames on the fly, then the SFU could see all the video sent through the network, and nobody would know.</p><p>To resolve this, like DAVE, we include a <i>safety number</i> in the corner of the screen for all calls. This number uniquely represents the cryptographic state of the group. If you check out-of-band (e.g., in a Signal group chat) that everyone agrees on the safety number, then you can be sure nobody’s key material has been secretly replaced.</p><p>In fact, you could also read the safety number aloud in the video call itself, but doing this is not provably secure. Reading a safety number aloud is an <i>in-band verification</i> mechanism, i.e., one where a party authenticates a channel within that channel. If a malicious app server colluding with a malicious SFU were able to construct believable video and audio of the user reading the safety number aloud, it could bypass this safety mechanism. So if your threat model includes adversaries that are able to break into a Worker and Cloudflare’s SFU, and simultaneously generate real-time deep-fakes, you should use out-of-band verification 😄.</p>
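    <p>To see why a safety number works, note that it is derived deterministically from the group’s cryptographic state, so every honest client computes the same digits, and any substituted key package changes them. Here is a toy Rust sketch of that idea; the derivation below (std’s non-cryptographic <code>DefaultHasher</code>, an 8-digit code) is a self-contained stand-in of our own, not the actual Orange Meets derivation.</p>

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy derivation: hash the serialized group state down to a short decimal
// code. (A real implementation derives this from MLS key material with a
// proper KDF; DefaultHasher only keeps the sketch dependency-free.)
fn safety_number(group_state: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    group_state.hash(&mut h);
    format!("{:08}", h.finish() % 100_000_000)
}

fn main() {
    let state = b"epoch 7 | member key packages ...";
    // Everyone hashing the same state gets the same 8-digit code; any
    // substituted key package changes the state, and therefore the code.
    assert_eq!(safety_number(state), safety_number(state));
    assert_ne!(safety_number(state), safety_number(b"tampered state"));
    println!("safety number: {}", safety_number(state));
}
```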
    <div>
      <h2>Future work</h2>
      <a href="#future-work">
        
      </a>
    </div>
    <p>There are some areas we could improve on:</p><ul><li><p>There is another attack vector for a malicious app server: it is possible to simply serve users malicious JavaScript. This problem, more generally called the <a href="https://web.archive.org/web/20200731144044/https://www.nccgroup.com/us/about-us/newsroom-and-events/blog/2011/august/javascript-cryptography-considered-harmful/"><u>JavaScript Cryptography Problem</u></a>, affects any in-browser application where the client wants to hide data from the server. Fortunately, we are working on a standard to address this, called <a href="https://github.com/beurdouche/explainers/blob/main/waict-explainer.md"><u>Web Application Manifest Consistency, Integrity, and Transparency</u></a>. In short, like our <a href="https://blog.cloudflare.com/key-transparency/"><u>Code Verify</u></a> solution for WhatsApp, this would allow every website to commit to the JavaScript it serves, and have a third party create an auditable log of the code. With transparency, malicious JavaScript can still be distributed, but at least now there is a log that records the code.</p></li><li><p>We can make out-of-band authentication easier by placing trust in an identity provider. Using <a href="https://www.bastionzero.com/openpubkey"><u>OpenPubkey</u></a>, it would be possible for a user to get the identity provider to sign their cryptographic material, and then present that. Then all the users would check the signature before using the material. Transparency would also help here to ensure no signatures were made in secret.</p></li></ul>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We built end-to-end encryption into the Orange Meets video chat app without a lot of engineering time, and by modifying just the client code. To do so, we built a WASM (compiled from Rust) <a href="https://github.com/cloudflare/orange/blob/e2ee/rust-mls-worker"><u>service worker</u></a> that sets up an <a href="https://www.rfc-editor.org/rfc/rfc9420.html"><u>MLS</u></a> group and does stream encryption and decryption, and designed a new joining protocol for groups, called the <i>designated committer algorithm</i>, and <a href="https://github.com/cloudflareresearch/orange-e2ee-model-check"><u>formally modeled it in TLA</u><u><sup>+</sup></u></a>. We left comments in the code for all kinds of optimizations that remain to be done, so please send us a PR if you’re so inclined!</p><p>Try using Orange Meets with E2EE enabled at <a href="https://e2ee.orange.cloudflare.dev/"><u>e2ee.orange.cloudflare.dev</u></a>, or deploy your own instance using the <a href="https://github.com/cloudflare/orange"><u>open source repository</u></a> on GitHub.</p>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Cloudflare Realtime]]></category>
            <guid isPermaLink="false">6X6FQzpKaqVyTLVk7rw6xm</guid>
            <dc:creator>Michael Rosenberg</dc:creator>
            <dc:creator>Kevin Kipp</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
            <dc:creator>Felipe Astroza Araya</dc:creator>
            <dc:creator>Mari Galicer</dc:creator>
        </item>
    </channel>
</rss>