
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 05:33:56 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Building a serverless, post-quantum Matrix homeserver]]></title>
            <link>https://blog.cloudflare.com/serverless-matrix-homeserver-workers/</link>
            <pubDate>Tue, 27 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ As a proof of concept, we ported a Matrix homeserver to Cloudflare Workers — delivering encrypted messaging at the edge with automatic post-quantum cryptography. ]]></description>
            <content:encoded><![CDATA[ <p><sup><i>* This post was updated at 11:45 a.m. Pacific time to clarify that the use case described here is a proof of concept and a personal project. Some sections have been updated for clarity.</i></sup></p><p>Matrix is the gold standard for decentralized, end-to-end encrypted communication. It powers government messaging systems, open-source communities, and privacy-focused organizations worldwide. </p><p>For the individual developer, however, the appeal is often closer to home: bridging fragmented chat networks (like Discord and Slack) into a single inbox, or simply ensuring your conversation history lives on infrastructure you control. Functionally, Matrix operates as a decentralized, eventually consistent state machine. Instead of a central server pushing updates, homeservers exchange signed JSON events over HTTP, using a conflict resolution algorithm to merge these streams into a unified view of the room's history.</p><p><b>But there is a "tax" to running it. </b>Traditionally, operating a Matrix <a href="https://matrix.org/homeserver/about/"><u>homeserver</u></a> has meant accepting a heavy operational burden. You have to provision virtual private servers (VPS), tune PostgreSQL for heavy write loads, manage Redis for caching, configure <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxies</u></a>, and handle rotation for <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. It’s a stateful, heavy beast that demands to be fed time and money, whether you’re using it a lot or a little.</p><p>We wanted to see if we could eliminate that tax entirely.</p><p><b>Spoiler: We could.</b> In this post, we’ll explain how we ported a Matrix homeserver to <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a>. 
The resulting proof of concept is a serverless architecture where operations disappear, costs scale to zero when idle, and every connection is protected by <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography</u></a> by default. You can view the source code and <a href="https://github.com/nkuntz1934/matrix-workers"><u>deploy your own instance directly from GitHub</u></a>.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/nkuntz1934/matrix-workers"><img src="https://deploy.workers.cloudflare.com/button" /></a>
    <div>
      <h2>From Synapse to Workers</h2>
      <a href="#from-synapse-to-workers">
        
      </a>
    </div>
    <p>Our starting point was <a href="https://github.com/matrix-org/synapse"><u>Synapse</u></a>, the Python-based reference Matrix homeserver designed for traditional deployments: PostgreSQL for persistence, Redis for caching, a filesystem for media.</p><p>Porting it to Workers meant questioning every storage assumption we’d taken for granted.</p><p>The challenge was storage. Traditional homeservers assume strong consistency via a central SQL database. Cloudflare <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> offers a powerful alternative. This primitive gives us the strong consistency and atomicity required for Matrix state resolution, while still allowing the application to run at the edge.</p><p>We ported the core Matrix protocol logic — event authorization, room state resolution, cryptographic verification — to TypeScript using the Hono framework. D1 replaces PostgreSQL, KV replaces Redis, R2 replaces the filesystem, and Durable Objects handle real-time coordination.</p><p>Here’s how the mapping worked out:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JTja38UZRbFygluawrnz1/9bce290e3070155c734e874c17051551/BLOG-3101_2.png" />
          </figure>
    <div>
      <h2>From monolith to serverless</h2>
      <a href="#from-monolith-to-serverless">
        
      </a>
    </div>
    <p>Moving to Cloudflare Workers brings several advantages for a developer: simple deployment, lower costs, low latency, and built-in security.</p><p><b>Easy deployment: </b>A traditional Matrix deployment requires server provisioning, PostgreSQL administration, Redis cluster management, <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">TLS certificate renewal</a>, load balancer configuration, monitoring infrastructure, and on-call rotations.</p><p>With Workers, deployment is simply: wrangler deploy. Workers handles TLS, load balancing, DDoS protection, and global distribution. </p><p><b>Usage-based costs: </b>Traditional homeservers cost money whether anyone is using them or not. Workers pricing is request-based, so you pay when you’re using it, but costs drop to near zero when everyone’s asleep. </p><p><b>Lower latency globally:</b> A traditional Matrix homeserver in us-east-1 adds 200ms+ latency for users in Asia or Europe. Workers, meanwhile, run in 300+ locations worldwide. When a user in Tokyo sends a message, the Worker executes in Tokyo. </p><p><b>Built-in security: </b>Matrix homeservers can be high-value targets: They handle encrypted communications, store message history, and authenticate users. Traditional deployments require careful hardening: firewall configuration, rate limiting, DDoS mitigation, WAF rules, IP reputation filtering.</p><p>Workers provide all of this by default. </p>
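<p>As a sketch of what that single <code>wrangler deploy</code> implies, here is a hypothetical <code>wrangler.toml</code> wiring up the bindings this post describes. All names and IDs below are illustrative placeholders, not taken from the project:</p>

```toml
# Hypothetical configuration; binding names and IDs are placeholders.
name = "matrix-homeserver"
main = "src/index.ts"
compatibility_date = "2026-01-01"

# D1 replaces PostgreSQL for the persistent data model
[[d1_databases]]
binding = "DB"
database_name = "matrix"
database_id = "<your-d1-database-id>"

# KV replaces Redis for ephemeral, TTL-based state
[[kv_namespaces]]
binding = "KV"
id = "<your-kv-namespace-id>"

# R2 replaces the filesystem for media
[[r2_buckets]]
binding = "MEDIA"
bucket_name = "matrix-media"

# Durable Objects for strongly consistent coordination
[[durable_objects.bindings]]
name = "ROOM"
class_name = "RoomObject"
```

<p>Everything else — TLS, load balancing, DDoS protection, global distribution — is handled by the platform rather than configured here.</p>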
    <div>
      <h3>Post-quantum protection </h3>
      <a href="#post-quantum-protection">
        
      </a>
    </div>
    <p>Cloudflare deployed post-quantum hybrid key agreement across all <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.3</u></a> connections in <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>October 2022</u></a>. Every connection to our Worker automatically negotiates X25519MLKEM768 — a hybrid combining classical X25519 with ML-KEM, the post-quantum algorithm standardized by NIST.</p><p>Classical cryptography relies on mathematical problems that are hard for traditional computers but trivial for quantum computers running Shor’s algorithm. ML-KEM is based on lattice problems that remain hard even for quantum computers. The hybrid approach means both algorithms must fail for the connection to be compromised.</p>
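<p>The “both must fail” property comes from how a hybrid scheme derives its session secret: the shared secrets from the classical and post-quantum key agreements are combined in a key-derivation step, so an attacker must recover both inputs. A toy sketch of that idea (an illustration only, not the actual X25519MLKEM768 construction):</p>

```typescript
import { createHash, randomBytes } from "node:crypto";

// Toy model: combine two independently derived shared secrets into one
// session secret. An attacker must recover BOTH inputs to reproduce it.
function combineSecrets(classical: Buffer, postQuantum: Buffer): Buffer {
  return createHash("sha256").update(classical).update(postQuantum).digest();
}

const x25519Secret = randomBytes(32); // stand-in for the X25519 shared secret
const mlkemSecret = randomBytes(32);  // stand-in for the ML-KEM shared secret

const sessionSecret = combineSecrets(x25519Secret, mlkemSecret);

// Knowing only one input (say, a quantum attacker broke X25519) does not
// reproduce the session secret:
const attackerGuess = combineSecrets(x25519Secret, randomBytes(32));
console.log(sessionSecret.equals(attackerGuess)); // false
```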
    <div>
      <h3>Following a message through the system</h3>
      <a href="#following-a-message-through-the-system">
        
      </a>
    </div>
    <p>Understanding where encryption happens matters for security architecture. When someone sends a message through our homeserver, here’s the actual path:</p><p>The sender’s client takes the plaintext message and encrypts it with Megolm — Matrix’s end-to-end encryption. This encrypted payload then gets wrapped in TLS for transport. On Cloudflare, that TLS connection uses X25519MLKEM768, making it quantum-resistant.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wGGYZ4LYspufH1c4psmL1/28acad8ab8e6535525dda413669c2d74/BLOG-3101_3.png" />
          </figure><p>The Worker terminates TLS, but what it receives is still encrypted — the Megolm ciphertext. We store that ciphertext in D1, index it by room and timestamp, and deliver it to recipients. But we never see the plaintext. The message “Hello, world” exists only on the sender’s device and the recipient’s device.</p><p>When the recipient syncs, the process reverses. They receive the encrypted payload over another quantum-resistant TLS connection, then decrypt locally with their Megolm session keys.</p>
    <div>
      <h3>Two layers, independent protection</h3>
      <a href="#two-layers-independent-protection">
        
      </a>
    </div>
    <p>This protects via two encryption layers that operate independently:</p><p>The <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>transport layer (TLS)</u></a> protects data in transit. It’s encrypted at the client and decrypted at the Cloudflare edge. With X25519MLKEM768, this layer is now post-quantum.</p><p>The <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/"><u>application layer</u></a> (Megolm E2EE) protects message content. It’s encrypted on the sender’s device and decrypted only on recipient devices. This uses classical Curve25519 cryptography.</p>
    <div>
      <h3>Who sees what</h3>
      <a href="#who-sees-what">
        
      </a>
    </div>
    <p>Any Matrix homeserver operator — whether running Synapse on a VPS or this implementation on Workers — can see metadata: which rooms exist, who’s in them, when messages were sent. But no one in the infrastructure chain can see the message content, because the E2EE payload is encrypted on sender devices before it ever hits the network. Cloudflare terminates TLS and passes requests to your Worker, but both see only Megolm ciphertext. Media in encrypted rooms is encrypted client-side before upload, and private keys never leave user devices.</p>
    <div>
      <h3>What traditional deployments would need</h3>
      <a href="#what-traditional-deployments-would-need">
        
      </a>
    </div>
    <p>Achieving post-quantum TLS on a traditional Matrix deployment would require upgrading OpenSSL or BoringSSL to a version supporting ML-KEM, configuring cipher suite preferences correctly, testing client compatibility across all Matrix apps, monitoring for TLS negotiation failures, staying current as PQC standards evolve, and handling clients that don’t support PQC gracefully.</p><p>With Workers, it’s automatic. Chrome, Firefox, and Edge all support X25519MLKEM768. Mobile apps using platform TLS stacks inherit this support. The security posture improves as Cloudflare’s <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/"><u>PQC</u></a> deployment expands — no action required on our part.</p>
    <div>
      <h2>The storage architecture that made it work</h2>
      <a href="#the-storage-architecture-that-made-it-work">
        
      </a>
    </div>
    <p>The key insight from porting Tuwunel was that different data needs different consistency guarantees. We use each Cloudflare primitive for what it does best.</p>
    <div>
      <h3>D1 for the data model</h3>
      <a href="#d1-for-the-data-model">
        
      </a>
    </div>
    <p>D1 stores everything that needs to survive restarts and support queries: users, rooms, events, device keys. Over 25 tables covering the full Matrix data model. </p>
            <pre><code>CREATE TABLE events (
	event_id TEXT PRIMARY KEY,
	room_id TEXT NOT NULL,
	sender TEXT NOT NULL,
	event_type TEXT NOT NULL,
	state_key TEXT,
	content TEXT NOT NULL,
	origin_server_ts INTEGER NOT NULL,
	depth INTEGER NOT NULL
);
</code></pre>
            <p><a href="https://www.cloudflare.com/developer-platform/products/d1/">D1’s SQLite foundation</a> meant we could port Tuwunel’s queries with minimal changes. Joins, indexes, and aggregations work as expected.</p><p>We learned one hard lesson: D1’s eventual consistency breaks foreign key constraints. A write to rooms might not be visible when a subsequent write to events checks the foreign key. We removed all foreign keys and enforce referential integrity in application code.</p>
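<p>Concretely, the check the database used to enforce moves into the request path: look up the parent row first, and reject the write if it is missing. A minimal sketch of the pattern, using an in-memory stand-in for the D1 tables (names are illustrative):</p>

```typescript
// In-memory stand-in for the D1 tables, to illustrate the pattern.
const rooms = new Set<string>();
const events: { eventId: string; roomId: string }[] = [];

// Application-level referential integrity: verify the room exists before
// accepting an event, since the schema no longer enforces the foreign key.
function insertEvent(eventId: string, roomId: string): boolean {
  if (!rooms.has(roomId)) {
    return false; // would surface as a 4xx error in the real handler
  }
  events.push({ eventId, roomId });
  return true;
}

rooms.add("!lobby:example.org");
console.log(insertEvent("$evt1", "!lobby:example.org"));   // true
console.log(insertEvent("$evt2", "!missing:example.org")); // false
```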
    <div>
      <h3>KV for ephemeral state</h3>
      <a href="#kv-for-ephemeral-state">
        
      </a>
    </div>
    <p>KV holds short-lived state with built-in expiry: OAuth authorization codes live for 10 minutes, while refresh tokens last for a session.</p>
            <pre><code>// Store OAuth code with 10-minute TTL
kv.put(&amp;format!("oauth_code:{}", code), &amp;token_data)?
	.expiration_ttl(600)
	.execute()
	.await?;</code></pre>
            <p>KV’s global distribution means OAuth flows work fast regardless of where users are located.</p>
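<p>The one-time, expiring nature of an authorization code maps naturally onto KV’s TTL semantics: write with a TTL, delete on first use. The shape of that flow, sketched against an in-memory stand-in for a KV namespace:</p>

```typescript
// In-memory stand-in for a KV namespace with TTL support.
const store = new Map<string, { value: string; expiresAt: number }>();

function kvPut(key: string, value: string, ttlSeconds: number): void {
  store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
}

// Consume an OAuth code: valid at most once, and only before expiry.
function consumeOauthCode(code: string): string | null {
  const entry = store.get(`oauth_code:${code}`);
  if (!entry || Date.now() > entry.expiresAt) return null;
  store.delete(`oauth_code:${code}`); // single use
  return entry.value;
}

kvPut("oauth_code:abc123", '{"user":"@alice:example.org"}', 600);
console.log(consumeOauthCode("abc123") !== null); // true: first use succeeds
console.log(consumeOauthCode("abc123"));          // null: already consumed
```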
    <div>
      <h3>R2 for media</h3>
      <a href="#r2-for-media">
        
      </a>
    </div>
    <p>Matrix media maps directly to R2, so you can upload an image, get back a content-addressed URL – and egress is free.</p>
    <div>
      <h3>Durable Objects for atomicity</h3>
      <a href="#durable-objects-for-atomicity">
        
      </a>
    </div>
    <p>Some operations can’t tolerate eventual consistency. When a client claims a one-time encryption key, that key must be atomically removed. If two clients claim the same key, encrypted session establishment fails.</p><p>Durable Objects provide single-threaded, strongly consistent storage:</p>
            <pre><code>#[durable_object]
pub struct UserKeysObject {
	state: State,
	env: Env,
}

impl UserKeysObject {
	async fn claim_otk(&amp;self, algorithm: &amp;str) -&gt; Result&lt;Option&lt;Key&gt;&gt; {
    	// Atomic within single DO - no race conditions possible
    	// Storage::get errors when the key is absent, so treat that as an empty list
    	let mut keys: Vec&lt;Key&gt; = self.state.storage()
        	.get("one_time_keys")
        	.await
        	.ok()
        	.unwrap_or_default();

    	if let Some(idx) = keys.iter().position(|k| k.algorithm == algorithm) {
        	let key = keys.remove(idx);
        	self.state.storage().put("one_time_keys", &amp;keys).await?;
        	return Ok(Some(key));
    	}
    	Ok(None)
	}
}</code></pre>
            <p>We use UserKeysObject for E2EE key management, RoomObject for real-time room events like typing indicators and read receipts, and UserSyncObject for to-device message queues. The rest flows through D1.</p>
    <div>
      <h3>Complete end-to-end encryption, complete OAuth</h3>
      <a href="#complete-end-to-end-encryption-complete-oauth">
        
      </a>
    </div>
    <p>Our implementation supports the full Matrix E2EE stack: device keys, cross-signing keys, one-time keys, fallback keys, key backup, and dehydrated devices.</p><p>Modern Matrix clients use OAuth 2.0/OIDC instead of legacy password flows. We implemented a complete OAuth provider, with dynamic client registration, PKCE authorization, RS256-signed JWT tokens, token refresh with rotation, and standard OIDC discovery endpoints.
</p>
            <pre><code>curl https://matrix.example.com/.well-known/openid-configuration
{
  "issuer": "https://matrix.example.com",
  "authorization_endpoint": "https://matrix.example.com/oauth/authorize",
  "token_endpoint": "https://matrix.example.com/oauth/token",
  "jwks_uri": "https://matrix.example.com/.well-known/jwks.json"
}
</code></pre>
            <p>Point Element or any Matrix client at the domain, and it discovers everything automatically.</p>
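<p>Of those pieces, PKCE is the easiest to show end-to-end: the client generates a random verifier, sends the S256 challenge (the base64url-encoded SHA-256 of the verifier) with the authorization request, and later proves possession by revealing the verifier. A sketch of both sides of that exchange, following RFC 7636:</p>

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE S256: challenge = BASE64URL(SHA256(verifier)), per RFC 7636.
function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Client side: random verifier plus its derived challenge.
function makePkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32)); // 43-character verifier
  const challenge = base64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}

// Server side: recompute the challenge from the revealed verifier.
function verifyPkce(verifier: string, storedChallenge: string): boolean {
  return base64url(createHash("sha256").update(verifier).digest()) === storedChallenge;
}

const { verifier, challenge } = makePkcePair();
console.log(verifyPkce(verifier, challenge)); // true
```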
    <div>
      <h2>Sliding Sync for mobile</h2>
      <a href="#sliding-sync-for-mobile">
        
      </a>
    </div>
    <p>Traditional Matrix sync transfers megabytes of data on initial connection, draining mobile battery and data plans.</p><p>Sliding Sync lets clients request exactly what they need. Instead of downloading everything, clients get the 20 most recent rooms with minimal state. As users scroll, they request more ranges. The server tracks position and sends only deltas.</p><p>Combined with edge execution, mobile clients can connect and render their room list in under 500ms, even on slow networks.</p>
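<p>The core of the idea is a server-side window over a recency-sorted room list, plus deltas as the window grows. A toy sketch of that windowing (the real protocol tracks far more state per connection):</p>

```typescript
// Rooms sorted by recency; the client asks for index ranges, not everything.
type Room = { id: string; lastActivity: number };

function roomWindow(rooms: Room[], start: number, end: number): string[] {
  const sorted = [...rooms].sort((a, b) => b.lastActivity - a.lastActivity);
  return sorted.slice(start, end + 1).map((r) => r.id);
}

// Delta between what the client already has and the newly requested range.
function windowDelta(previous: string[], next: string[]): string[] {
  const known = new Set(previous);
  return next.filter((id) => !known.has(id));
}

const roomList: Room[] = [
  { id: "!ops:example.org", lastActivity: 300 },
  { id: "!lobby:example.org", lastActivity: 100 },
  { id: "!dev:example.org", lastActivity: 200 },
];

const first = roomWindow(roomList, 0, 1); // the two most recent rooms
console.log(first); // ["!ops:example.org", "!dev:example.org"]
const expanded = roomWindow(roomList, 0, 2); // user scrolls: widen the range
console.log(windowDelta(first, expanded)); // ["!lobby:example.org"]
```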
    <div>
      <h2>The comparison</h2>
      <a href="#the-comparison">
        
      </a>
    </div>
    <p>For a homeserver serving a small team:</p><table><tr><th><p> </p></th><th><p><b>Traditional (VPS)</b></p></th><th><p><b>Workers</b></p></th></tr><tr><td><p>Monthly cost (idle)</p></td><td><p>$20-50</p></td><td><p>&lt;$1</p></td></tr><tr><td><p>Monthly cost (active)</p></td><td><p>$20-50</p></td><td><p>$3-10</p></td></tr><tr><td><p>Global latency</p></td><td><p>100-300ms</p></td><td><p>20-50ms</p></td></tr><tr><td><p>Time to deploy</p></td><td><p>Hours</p></td><td><p>Seconds</p></td></tr><tr><td><p>Maintenance</p></td><td><p>Weekly</p></td><td><p>None</p></td></tr><tr><td><p>DDoS protection</p></td><td><p>Additional cost</p></td><td><p>Included</p></td></tr><tr><td><p>Post-quantum TLS</p></td><td><p>Complex setup</p></td><td><p>Automatic</p></td></tr></table><p><sup>*</sup><sup><i>Based on public rates and metrics published by DigitalOcean, AWS Lightsail, and Linode as of January 15, 2026.</i></sup></p><p>The economics improve further at scale. Traditional deployments require capacity planning and over-provisioning. Workers scale automatically.</p>
    <div>
      <h2>The future of decentralized protocols</h2>
      <a href="#the-future-of-decentralized-protocols">
        
      </a>
    </div>
    <p>We started this as an experiment: could Matrix run on Workers? It can, and the approach can work for other stateful protocols, too.</p><p>By mapping traditional stateful components to Cloudflare’s primitives — Postgres to D1, Redis to KV, mutexes to Durable Objects — we can see that complex applications don’t need complex infrastructure. We stripped away the operating system, the database management, and the network configuration, leaving only the application logic and the data itself.</p><p>Workers offers the sovereignty of owning your data, without the burden of owning the infrastructure.</p><p>I have been experimenting with the implementation and welcome contributions from others interested in this kind of service. </p><p>Ready to build powerful, real-time applications on Workers? Get started with<a href="https://developers.cloudflare.com/workers/"> <u>Cloudflare Workers</u></a> and explore<a href="https://developers.cloudflare.com/durable-objects/"> <u>Durable Objects</u></a> for your own stateful edge applications. Join our<a href="https://discord.cloudflare.com"> <u>Discord community</u></a> to connect with other developers building at the edge.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Encryption]]></category>
            <guid isPermaLink="false">6VOVAMNwIZ18hMaUlC6aqp</guid>
            <dc:creator>Nick Kuntz</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automatically Secure: how we upgraded 6,000,000 domains by default to get ready for the Quantum Future]]></title>
            <link>https://blog.cloudflare.com/automatically-secure/</link>
            <pubDate>Wed, 24 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ After a year since we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security. ]]></description>
            <content:encoded><![CDATA[ <p>The Internet is in constant motion. Sites scale, traffic shifts, and attackers adapt. Security that worked yesterday may not be enough tomorrow. That’s why the technologies that protect the web — such as Transport Layer Security (TLS) and emerging post-quantum cryptography (PQC) — must also continue to evolve. We want to make sure that everyone benefits from this evolution automatically, so we enabled the strongest protections by default.</p><p>During <a href="https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/"><u>Birthday Week 2024</u></a>, we announced Automatic SSL/TLS: a service that scans origin server configurations of domains behind Cloudflare, and automatically upgrades them to the most secure encryption mode they support. In the past year, <b>this system has quietly strengthened security for more than 6 million domains </b>— ensuring Cloudflare can always connect to origin servers over the safest possible channel, without customers lifting a finger.</p><p>Now, a year after we started enabling Automatic SSL/TLS, we want to talk about these results, why they matter, and how we’re preparing for the next leap in Internet security.</p>
    <div>
      <h2>The Basics: TLS protocol</h2>
      <a href="#the-basics-tls-protocol">
        
      </a>
    </div>
    <p>Before diving in, let’s review the basics of Transport Layer Security (<a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a>). The protocol allows two strangers (like a client and server) to communicate securely.</p><p>Every secure web session begins with a TLS handshake. Before a single byte of your data moves across the Internet, servers and clients need to agree on a shared secret key that will protect the confidentiality and integrity of your data. The key agreement handshake kicks off with a TLS <i>ClientHello</i> message. This message is the browser/client announcing, “Here’s who I want to talk to (via <a href="https://www.cloudflare.com/learning/ssl/what-is-sni/"><u>SNI</u></a>), and here are the key agreement methods I understand.” The server then proves who it is with its own credentials in the form of a certificate, and together they establish a shared secret key that will protect everything that follows. </p><p>TLS 1.3 added a clever shortcut: instead of waiting to be told which method to use for the shared key agreement, the browser can guess what key agreement the server supports, and include one or more <a href="https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/"><u>keyshares</u></a> right away. If the guess is correct, the handshake skips an extra round trip and the secure connection is established more quickly. If the guess is wrong, the server responds with a <i>HelloRetryRequest</i> (HRR), telling the browser which key agreement method to retry with. This speculative guessing is a major reason TLS 1.3 is so much faster than TLS 1.2.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/W2t0sZjiliwZ0FGfFFG6k/113c44b54da1c0355d5bf76fba3080fa/1-2.png" />
          </figure><p>Once both sides agree, the chosen keyshare is used to create a shared secret that encrypts the messages they exchange and allows only the right parties to decrypt them.</p>
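<p>The guess-or-retry flow above can be sketched as a tiny negotiation: if the client’s speculative keyshare matches a group the server supports, key agreement completes in one round trip; otherwise the server sends a HelloRetryRequest naming the group to retry with. A toy model that only counts round trips (no real cryptography; the group names are just labels):</p>

```typescript
type Group = "X25519MLKEM768" | "X25519" | "P-256";

// Toy negotiation: returns the agreed group and how many round trips the
// key agreement took (1 if the client's guess landed, 2 after an HRR).
function negotiate(
  clientShares: Group[],
  serverSupported: Group[]
): { group: Group; roundTrips: number } {
  for (const g of serverSupported) {
    if (clientShares.includes(g)) {
      return { group: g, roundTrips: 1 }; // guess was right: no HRR needed
    }
  }
  // HelloRetryRequest: server names a group, client retries with that share.
  return { group: serverSupported[0], roundTrips: 2 };
}

// Browser speculatively includes a post-quantum hybrid share:
console.log(negotiate(["X25519MLKEM768", "X25519"], ["X25519MLKEM768", "X25519"]));
// { group: "X25519MLKEM768", roundTrips: 1 }

// No overlap with the server's groups forces a retry:
console.log(negotiate(["X25519MLKEM768"], ["P-256"]));
// { group: "P-256", roundTrips: 2 }
```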
    <div>
      <h3>The nitty-gritty details of key agreement</h3>
      <a href="#the-nitty-gritty-details-of-key-agreement">
        
      </a>
    </div>
    <p>Up until recently, most of these handshakes have relied on <a href="https://blog.cloudflare.com/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/"><u>elliptic curve cryptography</u></a> (ECC) using a curve known as X25519. But looming on the horizon are quantum computers, which could one day break ECC algorithms like X25519 and others. To prepare, the industry is shifting toward post-quantum key agreement with <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a>, deployed in a hybrid mode (<a href="https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/"><u>X25519 + ML-KEM</u></a>). This ensures that even if quantum machines arrive, <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvested traffic today</u></a> can’t be decrypted tomorrow. X25519 + ML-KEM is <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>steadily rising to become the most popular</u></a> key agreement for connections to Cloudflare.</p><p>The TLS handshake model is the foundation for how we encrypt web communications today. The history of TLS is really the story of <i>iteration under pressure</i>. It’s a protocol that had to keep evolving, so trust on the web could keep pace with how Internet traffic has changed. It’s also what makes technologies like <b>Cloudflare’s Automatic SSL/TLS</b> possible, by abstracting decades of protocol battles and crypto engineering into a single click, so customer websites can be secured by default without requiring every operator to be a cryptography expert.</p>
    <div>
      <h2>History Lesson: Stumbles and Standards</h2>
      <a href="#history-lesson-stumbles-and-standards">
        
      </a>
    </div>
    <p>Early versions of TLS (then called SSL) in the 1990s suffered from weak keys, limited protection against attacks like <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack"><u>man-in-the-middle</u></a>, and low adoption on the Internet. To stabilize things, the <a href="https://www.ietf.org/"><u>IETF</u></a> stepped in and released <a href="https://www.ietf.org/rfc/rfc2246.txt"><u>TLS 1.0</u></a>, followed by TLS <a href="https://datatracker.ietf.org/doc/html/rfc4346"><u>1.1</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc5246"><u>1.2</u></a> through the 2000s. These versions added stronger ciphers and patched new attack vectors, but years of fixes and extensions left the protocol bloated and hard to evolve.</p><p>The early 2010s marked a turning point. After the <a href="https://iapp.org/news/a/the-snowden-disclosures-10-years-on"><u>Snowden disclosures</u></a>, the Internet doubled down on encryption by default. Initiatives like <a href="https://en.wikipedia.org/wiki/Let%27s_Encrypt"><u>Let’s Encrypt</u></a>, the mass adoption of <a href="https://en.wikipedia.org/wiki/HTTPS"><u>HTTPS</u></a>, and Cloudflare’s own commitment to offer <a href="https://www.cloudflare.com/application-services/products/ssl/"><u>SSL/TLS for free</u></a> turned encryption from optional, expensive, and complex into an easy baseline requirement for a safer Internet.</p><p>All of this momentum led to <a href="https://datatracker.ietf.org/doc/html/rfc8446"><u>TLS 1.3</u></a> (2018), which cut away legacy baggage, locked in modern cipher suites, and made encrypted connections nearly as fast as the underlying transport protocols like TCP—and sometimes even faster with <a href="https://en.wikipedia.org/wiki/QUIC"><u>QUIC</u></a>.</p>
    <div>
      <h2>The CDN Twist</h2>
      <a href="#the-cdn-twist">
        
      </a>
    </div>
    <p>As Content Delivery Networks (CDNs) rose to prominence, they reshaped how TLS was deployed. Instead of a browser talking directly to a distant server hosting content (what Cloudflare calls an origin), it now spoke to the nearest edge data center, which may in turn speak to an origin server on the client’s behalf.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CTywdNaDxUXcGHVg5i1MP/975f9b0a74b2b5c5fb59ecb64d3268bb/2.png" />
          </figure><p>This created <b>two distinct TLS layers</b>:</p><ul><li><p><b>Edge ↔ Browser TLS:</b> The front door, built to quickly take on new improvements in security and performance. Edges and browsers adopt modern protocols (TLS 1.3, QUIC, session resumption) to cut down on latency.</p></li><li><p><b>Edge ↔ Origin TLS:</b> The backhaul, which must be more flexible. Origins might be older, more poorly maintained, run legacy TLS stacks, or require custom certificate handling.</p></li></ul><p>In practice, CDNs became <i>translators</i>: modernizing encryption at the edge while still bridging to legacy origins. It’s why you can have a blazing-fast TLS 1.3 session from your phone, even if the origin server behind the CDN hasn’t been upgraded in years. </p><p>This is where Automatic SSL/TLS sits in the story of how we secure Internet communications. </p>
    <div>
      <h2>Automatic SSL/TLS </h2>
      <a href="#automatic-ssl-tls">
        
      </a>
    </div>
    <p>Automatic SSL/TLS grew out of Cloudflare’s mission to ensure the web was as encrypted as possible. While we had initially spent years developing secure connections for the “front door” (from browsers to Cloudflare’s edge) with <a href="https://blog.cloudflare.com/introducing-universal-ssl/"><u>Universal SSL</u></a>, we knew that the “back door” (from Cloudflare’s edge to origin servers) would be slower and harder to upgrade. </p><p>One option we offered was <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a>, where a lightweight agent runs near the origin server and tunnels traffic securely back to Cloudflare. This approach ensures the connection always uses modern encryption, without requiring changes on the origin itself.</p><p>But not every customer uses Tunnel. Many connect origins directly to Cloudflare’s edge, where encryption depends on the origin server’s configuration. Traditionally this meant customers had to either manually select an <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> that worked for their origin server or rely on the default chosen by Cloudflare. </p><p>To improve the experience of choosing an encryption mode, we introduced our <a href="https://blog.cloudflare.com/ssl-tls-recommender/"><u>SSL/TLS Recommender</u></a> in 2021.</p><p>The Recommender scanned customer origin servers and then provided recommendations for their most secure encryption mode. 
For example, if the Recommender detected that an origin server was using a certificate signed by a trusted Certificate Authority (CA) such as Let’s Encrypt, rather than a self-signed certificate, it would recommend upgrading from <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><b><u>Full</u></b><u> encryption mode</u></a> to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><b><u>Full (Strict)</u></b><u> encryption mode</u></a>.</p><p>Based on how the origin responded, Recommender would tell customers if they could improve their SSL/TLS encryption mode to be more secure. The following encryption modes represent what the SSL/TLS Recommender could recommend to customers based on their origin responses: </p><table><tr><td><p><b>SSL/TLS mode</b></p></td><td><p><b>HTTP from visitor</b></p></td><td><p><b>HTTPS from visitor</b></p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/off/"><u>Off</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTP to Origin</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/flexible/"><u>Flexible</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTP to Origin</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><u>Full</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTPS to Origin without certification validation check</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Full (strict)</u></a></p></td><td><p>HTTP to Origin</p></td><td><p>HTTPS to Origin with certificate validation check</p></td></tr><tr><td><p><a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/"><u>Strict (SSL-only origin pull)</u></a></p></td><td><p>HTTPS to Origin with certificate validation 
check</p></td><td><p>HTTPS to Origin with certificate validation check</p></td></tr></table><p>However, in the three years after launching our Recommender, we discovered something troubling: of the more than two million domains using the Recommender, <b>only 30% of the recommendations that the system provided were followed</b>. A significant number of users never completed the next step of clicking the button to confirm that Cloudflare could communicate with their origin over a more secure setting. </p><p>We were seeing sub-optimal settings that our customers could upgrade from without risk of breaking their sites, but for various reasons, our users did not follow through with the recommendations. So we built a system that works with the Recommender and applies its recommendations by default. </p>
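    <p>The table above is essentially a small lookup from encryption mode and visitor-side scheme to the origin-facing connection. As a rough sketch (hypothetical helper names, not Cloudflare’s implementation), it could be expressed as:</p>

```typescript
// The five SSL/TLS encryption modes from the table above.
type EncryptionMode = "off" | "flexible" | "full" | "full_strict" | "strict_origin_pull";

// Returns how Cloudflare connects to the origin for a given mode and
// visitor-side scheme, mirroring the table row by row.
function originConnection(mode: EncryptionMode, visitorScheme: "http" | "https"): string {
  switch (mode) {
    case "off":
    case "flexible":
      return "http"; // the origin is always contacted over plaintext HTTP
    case "full":
      return visitorScheme === "https" ? "https (no certificate validation)" : "http";
    case "full_strict":
      return visitorScheme === "https" ? "https (certificate validated)" : "http";
    case "strict_origin_pull":
      return "https (certificate validated)"; // always HTTPS to the origin
  }
}
```

    <p>In these terms, the Recommender’s job was to find the highest mode for which the origin-facing connection still succeeds.</p>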
    <div>
      <h2>How does Automatic SSL/TLS work? </h2>
      <a href="#how-does-automatic-ssl-tls-work">
        
      </a>
    </div>
    <p>Automatic SSL/TLS works by crawling websites, looking for content over both HTTP and HTTPS, then comparing the results for compatibility. It also performs checks against the TLS certificate presented by the origin and looks at the type of content that is served to ensure they match. If the downloaded content matches, Automatic SSL/TLS elevates the domain’s encryption level to the stronger, compatible mode, without risk of breaking the site.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49AaKdddEOgWXk1Oxlg2Qp/be44b863e2f4c797fa58c8b81f93f51a/3.png" />
          </figure><p>More specifically, these are the steps that Automatic SSL/TLS takes to upgrade a domain’s security: </p><ol><li><p>Each domain is scheduled for a scan <b>once per month</b>, until it reaches the maximum supported encryption mode.</p></li><li><p>The scan evaluates the current <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> for the domain. If it’s lower than what the Recommender thinks the domain can support based on the <a href="https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/#:~:text=When%20the%20Recommender,recommendation%20is%20followed."><u>results</u></a> of its probes and content scans, the system begins a gradual upgrade.</p></li><li><p>Automatic SSL/TLS begins to upgrade the domain by connecting with origins over the more secure mode, starting with just 1% of its traffic.</p></li><li><p>If connections to the origin succeed, the result is logged as successful.</p><ol><li><p>If they fail, the system records the failure to Cloudflare’s control plane and aborts the upgrade. 
Traffic is immediately downgraded back to the previous SSL/TLS setting to ensure seamless operation.</p></li></ol></li><li><p>If no issues are found, the new SSL/TLS encryption mode is applied to traffic in 10% increments until 100% of traffic uses the recommended mode.</p></li><li><p>Once 100% of traffic has been successfully upgraded with no TLS-related errors, the domain’s SSL/TLS setting is permanently updated.</p></li><li><p><b>Special handling for Flexible → Full/Strict:</b> These upgrades are more cautious because customers’ <a href="https://developers.cloudflare.com/cache/how-to/cache-keys/"><u>cache keys</u></a> are changed (from <code>http</code> to <code>https</code> origin scheme).</p><ol><li><p>In this situation, traffic ramps up from 1% to 10% in 1% increments, allowing customers’ caches to warm up.</p></li><li><p>After 10%, the system resumes the standard 10% increments until 100%.</p></li></ol></li></ol><p>We know that transparency and visibility are critical, especially when automated systems make changes. To keep customers informed, Automatic SSL/TLS sends a weekly digest to account <a href="https://developers.cloudflare.com/fundamentals/manage-members/roles/"><u>Super Administrators</u></a> whenever updates are made to domain encryption modes. This way, you always have visibility into what changed and when. </p><p>In short, Automatic SSL/TLS automates what used to be trial and error: finding the strongest SSL/TLS mode your site can support while keeping everything working smoothly.</p>
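    <p>The ramp-up described in the steps above can be modeled as a simple schedule. The sketch below is an illustrative model of those percentages, not Cloudflare’s actual implementation:</p>

```typescript
// Simplified model of the ramp-up schedule described above. Upgrades start
// at 1% of traffic and climb in 10% steps; Flexible -> Full / Full (strict)
// upgrades additionally walk from 1% to 10% in 1% steps so caches can warm up.
function rampSchedule(fromFlexible: boolean): number[] {
  const steps: number[] = fromFlexible
    ? [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] // cache keys change, so ramp gently
    : [1, 10];                        // 1% canary, then standard increments
  for (let pct = 20; pct <= 100; pct += 10) {
    steps.push(pct); // standard 10% increments up to 100%
  }
  return steps;
}
```

    <p>Any TLS failure observed at a step aborts the upgrade and traffic returns to the previous mode; only after 100% of traffic completes cleanly is the new setting made permanent.</p>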
    <div>
      <h2>How are we doing so far?  </h2>
      <a href="#how-are-we-doing-so-far">
        
      </a>
    </div>
    <p>So far we have onboarded <b>all Free, Pro, and Business domains to use Automatic SSL/TLS</b>. We have also enabled this for <b>all new domains</b> that onboard onto Cloudflare, regardless of plan type. Soon, we will start onboarding Enterprise customers as well. If you already have an Enterprise domain and want to try out Automatic SSL/TLS, we encourage you to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#automatic-ssltls-default"><u>enable it in the SSL/TLS section</u></a> of the dashboard or via the API. </p><p>As of the publication of this blog post, we’ve upgraded over <b>6 million domains</b> to be more secure without the website operators needing to manually configure anything on Cloudflare. </p><table><tr><td><p><b>Previous Encryption Mode</b></p></td><td><p><b>Upgraded Encryption Mode</b></p></td><td><p><b>Number of domains</b></p></td></tr><tr><td><p>Flexible</p></td><td><p>Full</p></td><td><p>~ 2,200,000</p></td></tr><tr><td><p>Flexible</p></td><td><p>Full (strict)</p></td><td><p>~ 2,000,000</p></td></tr><tr><td><p>Full</p></td><td><p>Full (strict)</p></td><td><p>~ 1,800,000</p></td></tr><tr><td><p>Off</p></td><td><p>Full</p></td><td><p>~ 7,000</p></td></tr><tr><td><p>Off</p></td><td><p>Full (strict)</p></td><td><p>~ 5,000</p></td></tr></table><p>We’re most excited about the over 4 million domains that moved from <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/flexible/"><u>Flexible</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/off/"><u>Off</u></a>, which use HTTP to origin servers, to <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/"><u>Full</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Strict</u></a>, which use HTTPS. 
</p><p>If you have a reason to use a particular encryption mode (e.g., on a test domain that isn’t production-ready), you can always disable Automatic SSL/TLS and manually set the encryption mode that works best for your use case.</p><p>Today, the SSL/TLS mode applies at the domain level, which can feel blunt. This means that one suboptimal subdomain can keep the entire domain in a less secure TLS setting, to ensure availability. Our long-term goal is to make these controls more precise, so that Automatic SSL/TLS and encryption modes can optimize security per origin or subdomain, rather than treating every hostname the same.</p>
    <div>
      <h2>Impact on origin-facing connections</h2>
      <a href="#impact-on-origin-facing-connections">
        
      </a>
    </div>
    <p>Since we began onboarding domains to <b>Automatic SSL/TLS</b> in late 2024 and early 2025, we’ve been able to measure how origin connections across our network are shifting toward stronger security. Looking at the ratios across all origin requests, the trends are clear:</p><ul><li><p><b>Encryption is rising.</b> Plaintext connections are steadily declining, a reflection of Automatic SSL/TLS helping millions of domains move to HTTPS by default. We’ve seen <b>a correlated 7-8% reduction in plaintext origin-bound connections.</b> Still, some origins remain on outdated configurations, and these should be upgraded to keep pace with modern security expectations.</p></li><li><p><b>TLS 1.3 is surging.</b> Since late 2024, TLS 1.3 adoption has climbed sharply, now making up the majority of encrypted origin traffic (almost 60%). While Automatic SSL/TLS doesn’t control which TLS version an origin supports, this shift is an encouraging sign for both performance and security.</p></li><li><p><b>Older versions are fading.</b> Month after month, TLS 1.2 continues to shrink, while TLS 1.0 and 1.1 are now so rare they barely register.</p></li></ul><p>The decline in plaintext connections is encouraging, but it also highlights a long tail of servers still relying on outdated packages or configurations. Sites like <a href="https://www.ssllabs.com/ssltest/"><u>SSL Labs</u></a> can be used, for instance, to check a server’s TLS configuration. However, simply copy-pasting settings to achieve a high rating can be risky, so we encourage customers to review their origin TLS configurations carefully. In addition, <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/"><u>Cloudflare origin CA</u></a> or <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnel</u></a> can help provide guidance for upgrading origin security.</p>
    <div>
      <h3>Upgraded domain results</h3>
      <a href="#upgraded-domain-results">
        
      </a>
    </div>
    <p>Instead of focusing on the entire network of origin-facing connections from Cloudflare, we’re now going to drill into specific changes that we’ve seen from domains that have been upgraded by <b>Automatic SSL/TLS</b>. </p><p>By January 2025, most domains had been enrolled in Automatic SSL/TLS, and the results were dramatic: a near 180-degree shift from plaintext to encrypted communication with origins. After that milestone, traffic patterns leveled off into a steady plateau, reflecting a more stable baseline of secure connections across the network. There is a small drop in encrypted traffic, which may represent some of the originally upgraded domains manually turning off Automatic SSL/TLS.</p><p>But the story doesn’t end there. In the past two months (July and August 2025), we’ve observed another noticeable uptick in encrypted origin traffic. This likely reflects customers upgrading outdated origin packages and enabling stronger TLS support—evidence that Automatic SSL/TLS not only raised the floor on encryption but continues nudging the long tail of domains toward better security.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nJe12swMSMXBQsgzEhXtq/78debf8e0c3efbaf66bce8cf6e623c80/4.png" />
          </figure><p>To further explore the “encrypted” line above, we wanted to see what the delta was between TLS 1.2 and 1.3. We had originally wanted to include all TLS versions we support, but the levels of TLS 1.0 and 1.1 were so small that they skewed the graph, so we left them out. We see a noticeable rise in support for both TLS 1.2 and 1.3 between Cloudflare and origin servers. What is also interesting to note is that while TLS 1.2 is decreasing network-wide, it shows a general increase among the automatically upgraded domains, potentially signifying origin TLS stacks that could be updated further.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/BoRlq4irKWuvuXs5E4e8l/3971165f5029a03ae64dac79235a8671/5.png" />
          </figure><p>Finally, for Full (Strict) mode, we wanted to investigate the number of successful certificate validations we performed. This line shows a dramatic increase, of approximately 40%, in successful certificate validations for customers upgraded by Automatic SSL/TLS. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nNyiMNQ4xtOubbrhnDuRY/af16c0792a73de71fa0176e6c1cfeb0b/6.png" />
          </figure><p>We’ve seen a largely successful rollout of Automatic SSL/TLS so far, with millions of domains upgraded to stronger encryption by default. We’ve seen Automatic SSL/TLS improve origin-facing security, safely pushing connections to stronger modes whenever possible, without risking site breakage. Looking ahead, we’ll continue to expand this capability to more customer use cases as we help to build a more encrypted Internet.</p>
    <div>
      <h2>What will we build next for Automatic SSL/TLS? </h2>
      <a href="#what-will-we-build-next-for-automatic-ssl-tls">
        
      </a>
    </div>
    <p>We’re expanding Automatic SSL/TLS with new features that give customers more visibility and control, while keeping the system safe by default. First, we’re building an <b>ad-hoc scan</b> option that lets you rescan your origin earlier than the standard monthly cadence. This means if you’ve just rotated certificates, upgraded your origin’s TLS configuration, or otherwise changed how your server handles encryption, you won’t need to wait for the next scheduled pass—Cloudflare will be able to re-evaluate and move you to a stronger mode right away.</p><p>In addition, we’re working on <b>error surfacing</b> that will highlight origin connection problems directly in the dashboard and provide actionable guidance for remediation. Instead of discovering after the fact that an upgrade failed, or a change on the origin resulted in a less secure setting than what was set previously, customers will be able to see where the issue lies and how to fix it. </p><p>Finally, for <b>newly onboarded domains</b>, we plan to add clearer guidance on when to finish configuring the origin before Cloudflare runs its first scan and sets an encryption mode. Together, these improvements are designed to reduce surprises, give customers more agency, and ensure smoother upgrades. We expect all three features to roll out by June 2026.</p>
    <div>
      <h2>Post Quantum Era</h2>
      <a href="#post-quantum-era">
        
      </a>
    </div>
    <p>Looking ahead, quantum computers introduce a serious risk: data encrypted today can be harvested and decrypted years later once quantum attacks become practical. To counter this <a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later"><u>harvest-now, decrypt-later</u></a> threat, the industry is moving towards post-quantum cryptography (PQC)—algorithms designed to withstand quantum attacks. We have extensively written on this subject <a href="https://blog.cloudflare.com/tag/post-quantum/"><u>in our previous blogs</u></a>.</p><p>In August 2024, NIST finalized its PQC standards: <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.203.pdf"><u>ML-KEM</u></a> for key agreement, and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.204.pdf"><u>ML-DSA</u></a> and <a href="https://nvlpubs.nist.gov/nistpubs/fips/nist.fips.205.pdf"><u>SLH-DSA</u></a> for digital signatures. In collaboration with industry partners, Cloudflare has helped drive the development and deployment of PQC. We have deployed the hybrid key agreement, combining ML-KEM (post-quantum secure) and X25519 (classical), to secure TLS 1.3 traffic to our servers and internal systems. As of mid-September 2025, <a href="https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption"><u>around 43%</u></a> of human-generated connections to Cloudflare are already protected with the hybrid post-quantum secure key agreement – a huge milestone in preparing the Internet for the quantum era.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2hgIUNO8TM50kvAOvzQ8rg/cdbe5b3d64390fc4b946036e2f37471d/6.png" />
          </figure><p>But things look different on the other side of the network. When Cloudflare connects to origins, we act as the client, navigating a fragmented landscape of hosting providers, software stacks, and middleboxes. Each origin may support a different set of cryptographic features, and not all are ready for hybrid post-quantum handshakes.</p><p>To manage this diversity without the risk of breaking connections, we relied on <i>HelloRetryRequest</i>. Instead of sending a post-quantum keyshare immediately in the <i>ClientHello</i>, we only advertise support for it. If the origin server supports the post-quantum key agreement, it uses <i>HelloRetryRequest</i> to request the keyshare from Cloudflare, and the post-quantum connection is established. The downside is that this extra round trip (from the retry) cancels out the performance gains of TLS 1.3 and makes the connection feel closer to TLS 1.2 for uncached requests.</p><p>Back in 2023, <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/pqc-to-origin/"><u>we launched an API endpoint</u></a> so customers could manually opt their origins into preferring post-quantum connections. If set, we avoid the extra round trip and try to create a post-quantum connection at the start of the TLS session. Similarly, we extended post-quantum protection to <a href="https://blog.cloudflare.com/post-quantum-tunnel/"><u>Cloudflare Tunnel</u></a>, making it one of the easiest ways to get origin-facing PQ today.</p><p><b>Starting Q4 2025, we’re taking the next step – making it </b><b><i>automatic</i></b><b>. </b>Just as we’ve done with SSL/TLS upgrades, Automatic SSL/TLS will begin testing, ramping, and enabling post-quantum handshakes with origins—without requiring customers to change a thing, as long as their origins support post-quantum key agreement.</p><p>Behind the scenes, we’re already scanning active origins about every 24 hours to test support and preferences for both classical and post-quantum key agreements. 
We’ve worked directly with vendors and customers to identify compatibility issues, and this new scanning system will be fully integrated into <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/#automatic-ssltls-default"><u>Automatic SSL/TLS</u></a>.</p><p>And the benefits won’t stop at post-quantum. Even for classical handshakes, optimization matters. Today, the X25519 algorithm is used by default, but <b>our scanning data shows that more than 6% of origins currently prefer a different key agreement algorithm, </b>which leads to unnecessary <i>HelloRetryRequests</i> and wasted round trips. By folding this scanning data into Automatic SSL/TLS, we’ll improve connection establishment for classical TLS as well—squeezing out extra speed and reliability across the board.</p><p>As enterprises and hosting providers adopt PQC, our preliminary scanning pipeline has already found that <b>around 4% of origins could benefit from a post-quantum-preferred key agreement even today</b>, as shown below. This is an 8x increase since <a href="https://blog.cloudflare.com/post-quantum-to-origins/"><u>we started our scans in 2023</u></a>. We expect this number to grow at a steady pace as the industry continues to migrate to post-quantum protocols.</p>
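    <p>The round-trip cost described above can be captured in a toy model (illustrative only, not the actual TLS stack): a handshake needs an extra round trip whenever the keyshare the client sends is not for the group the server prefers.</p>

```typescript
// Key agreement groups relevant here: the hybrid post-quantum group and
// two classical curves.
type KeyAgreement = "X25519MLKEM768" | "X25519" | "P-256";

// A TLS 1.3 client sends a keyshare for one group up front. If the server
// prefers a group it did not receive a keyshare for, it answers with
// HelloRetryRequest, adding a full extra round trip to the handshake.
function handshakeRoundTrips(clientKeyshare: KeyAgreement, serverPreference: KeyAgreement): number {
  return clientKeyshare === serverPreference ? 1 : 2; // 2 = HelloRetryRequest path
}
```

    <p>Scanning origins ahead of time lets Cloudflare send the keyshare an origin actually prefers, keeping both classical and post-quantum handshakes at a single round trip.</p>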
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3g2Um1vTz6cuCtoYWtMU4C/7551fb50305a8512fa7cc22844024b99/8.png" />
          </figure><p>As part of this change, we will also<b> phase out</b> support for the pre-standard X25519Kyber768 in favor of the final ML-KEM standard, again used in a hybrid, for edge-to-origin connections.</p><p>With Automatic SSL/TLS, we will soon, by default, proactively scan your origins and send the most preferred keyshare directly to your origin, removing the need for any extra round trip and improving both the security and performance of your origin connections.</p><p>At Cloudflare, we’ve always believed security is a right, not a privilege. From Universal SSL to post-quantum cryptography, our mission has been to make the strongest protections free and available to everyone. <b>Automatic SSL/TLS</b> is the next step—upgrading every domain to the best protocols automatically. Check the SSL/TLS section of your dashboard to ensure it’s enabled and join the millions of sites already secured for today and ready for tomorrow.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[CDN]]></category>
            <guid isPermaLink="false">7nO4wFW304Eh2r48934ugz</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>Yawar Jamal</dc:creator>
        </item>
        <item>
            <title><![CDATA[Orange Me2eets: We made an end-to-end encrypted video calling app and it was easy]]></title>
            <link>https://blog.cloudflare.com/orange-me2eets-we-made-an-end-to-end-encrypted-video-calling-app-and-it-was/</link>
            <pubDate>Thu, 26 Jun 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Orange Meets, our open-source video calling web application, now supports end-to-end encryption using the MLS protocol with continuous group key agreement. ]]></description>
            <content:encoded><![CDATA[ <p>Developing a new video conferencing application often begins with a peer-to-peer setup using <a href="https://webrtc.org/"><u>WebRTC</u></a>, facilitating direct data exchange between clients. While effective for small demonstrations, this method encounters scalability hurdles with increased participants. The data transmission load for each client escalates significantly in proportion to the number of users, as each client is required to send data to every other client except themselves (n-1).</p><p>In the scaling of video conferencing applications, Selective Forwarding Units (SFUs) are essential.  Essentially a media stream routing hub, an SFU receives media and data flows from participants and intelligently determines which streams to forward. By strategically distributing media based on network conditions and participant needs, this mechanism minimizes bandwidth usage and greatly enhances scalability. Nearly every video conferencing application today uses SFUs.</p><p>In 2024, we announced <a href="https://blog.cloudflare.com/cloudflare-calls-anycast-webrtc/"><u>Cloudflare Realtime</u></a> (then called Cloudflare Calls), our suite of WebRTC products, and we also released <a href="https://github.com/cloudflare/orange"><u>Orange Meets</u></a>, an open source video chat application built on top of our SFU.</p><p>We also realized that use of an SFU often comes with a privacy cost, as there is now a centralized hub that could see and listen to all the media contents, even though its sole job is to forward media bytes between clients as a data plane.</p><p>We believe end-to-end encryption should be the industry standard for secure communication and that’s why today we’re excited to share that we’ve implemented and open sourced end-to-end encryption in Orange Meets. Our generic implementation is client-only, so it can be used with any WebRTC infrastructure. 
Finally, our new <i>designated committer</i> distributed algorithm was run through a bounded model checker to verify that it handles edge cases gracefully.</p>
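    <p>The scaling argument above is easy to quantify. As a small illustrative sketch (hypothetical helper, not part of Orange Meets):</p>

```typescript
// Upstream media streams each client must send in an n-participant call.
function uploadStreamsPerClient(n: number, topology: "p2p" | "sfu"): number {
  // Full-mesh P2P: each client sends its media to the other n - 1 clients.
  // With an SFU, each client uploads a single stream and the SFU fans it out.
  return topology === "p2p" ? n - 1 : 1;
}
```

    <p>A two-person call costs the same either way, but at ten participants a full-mesh client uploads nine copies of its media while an SFU client still uploads one.</p>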
    <div>
      <h2>End-to-end encryption for video conferencing is different than for text messaging</h2>
      <a href="#end-to-end-encryption-for-video-conferencing-is-different-than-for-text-messaging">
        
      </a>
    </div>
    <p>End-to-end encryption describes a secure communication channel whereby only the intended participants can read, see, or listen to the contents of the conversation, not anybody else. WhatsApp and iMessage, for example, are end-to-end encrypted, which means that the companies that operate those apps or any other infrastructure can’t see the contents of your messages. </p><p>Whereas encrypted group chats are usually long-lived, highly asynchronous, and low-bandwidth sessions, video and audio calls are short-lived, highly synchronous, and require high bandwidth. This difference comes with plenty of interesting tradeoffs, which influenced the design of our system.</p><p>We had to consider how factors like the ephemeral nature of calls, compared to the persistent nature of group text messages, also influenced the way we designed E2EE for Orange Meets. In chat messages, users must be able to decrypt messages sent to them while they were offline (e.g., while taking a flight). This is not a problem for real-time communication.</p><p>The bandwidth limitations around audio/video communication and the use of an SFU prevented us from using some of the E2EE technologies already available for text messages. Apple’s iMessage, for example, encrypts a message N-1 times for an N-user group chat. We can’t encrypt the video for each recipient, as that could saturate the upload capacity of Internet connections as well as slow down the client. Media has to be encrypted once and decrypted by each client while preserving secrecy around only the current participants of the call.</p>
    <div>
      <h2>Messaging Layer Security (MLS)</h2>
      <a href="#messaging-layer-security-mls">
        
      </a>
    </div>
    <p>Around the same time we were working on Orange Meets, we saw a lot of excitement around new apps being built with <a href="https://messaginglayersecurity.rocks/"><u>Messaging Layer Security</u></a> (MLS), an IETF-standardized protocol that describes how you can do a group key exchange in order to establish end-to-end-encryption for group communication. </p><p>Previously, the only way to achieve these properties was to essentially run your own fork of the <a href="https://signal.org/docs/"><u>Signal protocol</u></a>, which itself is more of a living protocol than a solidified standard. Since MLS is standardized, we’ve now seen multiple high-quality implementations appear, and we’re able to use them to achieve Signal-level security with far less effort.</p><p>Implementing MLS here wasn’t easy: it required a moderate amount of client modification, and the development and verification of an encrypted room-joining protocol. Nonetheless, we’re excited to be pioneering a standards-based approach that any customer can run on our network, and to share more details about how our implementation works. </p><p>We did not have to make any changes to the SFU to get end-to-end encryption working. Cloudflare’s SFU doesn’t care about the contents of the data forwarded on our data plane and whether it’s encrypted or not.</p>
    <div>
      <h2>Orange Meets: the basics </h2>
      <a href="#orange-meets-the-basics">
        
      </a>
    </div>
    <p>Orange Meets is a video calling application built on <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a> that uses the <a href="https://developers.cloudflare.com/realtime/calls-vs-sfus/"><u>Cloudflare Realtime SFU service</u></a> as the data plane. The roles played by the three main entities in the application are as follows:</p><ul><li><p>The <i>user</i> is a participant in the video call. They connect to the Orange Meets server and SFU, described below.</p></li><li><p>The <i>Orange Meets Server </i>is a simple service run on a Cloudflare Worker that runs the small-scale coordination logic of Orange Meets, which is concerned with which user is in which video call — called a <i>room </i>— and what the state of the room is. Whenever something in the room changes, like a participant joining or leaving, or someone muting themselves, the app server broadcasts the change to all room participants. You can use any backend server for this component; we just chose Cloudflare Workers for its convenience.</p></li><li><p>Cloudflare Realtime <i>Selective Forwarding Unit</i> (SFU) is a service that Cloudflare runs, which takes everyone’s audio and video and broadcasts it to everyone else. These connections are potentially lossy, using UDP for transmission. This is done because a dropped video frame from five seconds ago is not very important in the context of a video call, and so should not be re-sent, as it would be in a TCP connection.</p></li></ul>
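    <p>The app server’s coordination role can be sketched as follows (an illustrative model with hypothetical names; the real server runs on Cloudflare Workers): a room holds the shared state and broadcasts every change to all connected clients.</p>

```typescript
// Minimal sketch of room-state fan-out: each change (join, leave, mute)
// triggers a broadcast of the new state to every subscribed client.
type RoomState = { participants: string[] };

class Room {
  private state: RoomState = { participants: [] };
  private listeners: ((state: RoomState) => void)[] = [];

  // A connected client registers to be notified of room changes.
  subscribe(onChange: (state: RoomState) => void): void {
    this.listeners.push(onChange);
  }

  join(user: string): void {
    this.state.participants.push(user);
    this.broadcast();
  }

  leave(user: string): void {
    this.state.participants = this.state.participants.filter((p) => p !== user);
    this.broadcast();
  }

  private broadcast(): void {
    for (const notify of this.listeners) notify(this.state);
  }
}
```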
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61htaksggj580PqX02XoVB/3b0f1ed34ee681e41b2009257fdc8525/image2.png" />
          </figure><p><sup><i>The network topology of Orange Meets</i></sup></p><p>Next, we have to define what we mean by end-to-end encryption in the context of video chat.</p>
    <div>
      <h2>End-to-end encrypting Orange Meets </h2>
      <a href="#end-to-end-encrypting-orange-meets">
        
      </a>
    </div>
    <p>The most immediate way to end-to-end encrypt Orange Meets is to simply have the initial users agree on a symmetric encryption/decryption key at the beginning of a call, and just encrypt every video frame using that key. This is sufficient to hide calls from Cloudflare’s SFU. Some source-encrypted video conferencing implementations, such as <a href="https://jitsi.org/e2ee-in-jitsi/"><u>Jitsi Meet</u></a>, work this way.</p><p>The issue, however, is that kicking a malicious user from a call does not invalidate their key, since the keys are negotiated just once. A joining user learns the key that was used to encrypt video from before they joined. These failures are more formally referred to as failures of <i>post-compromise security</i> and <i>perfect forward secrecy</i>. When a protocol successfully implements these in a group setting, we call the protocol a <b>continuous group key agreement protocol</b>.</p><p>Fortunately for us, MLS is a continuous group key agreement protocol that works out of the box, and the nice folks at <a href="https://phnx.im/"><u>Phoenix R&amp;D</u></a> and <a href="https://cryspen.com/"><u>Cryspen</u></a> have a well-documented <a href="https://github.com/openmls/openmls/tree/main"><u>open-source Rust implementation</u></a> of most of the MLS protocol. </p><p>All we needed to do was write an MLS client and compile it to WASM, so we could decrypt video streams in-browser. We’re using WASM since that’s one way of running Rust code in the browser. If you’re running a video conferencing application on a desktop or mobile native environment, there are other MLS implementations in your preferred programming language.</p><p>Our setup for encryption is as follows:</p><p><b>Make a web worker for encryption.</b> We wrote a web worker in Rust that accepts a WebRTC video stream, broken into individual frames, and encrypts each frame. This code is quite simple, as it’s just an MLS encryption:</p>
            <pre><code>// Encrypt one outgoing frame under the group's current MLS epoch key
group.create_message(
	&amp;self.mls_provider,
	self.my_signing_keys.as_ref()?,
	frame,
)</code></pre>
            <p><b>Postprocess outgoing audio/video.</b> We take our normal stream and, using some newer features of the <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API"><u>WebRTC API</u></a>, add a transform step to it. This transform step simply sends the stream to the worker:</p>
            <pre><code>const senderStreams = sender.createEncodedStreams()
const { readable, writable } = senderStreams
this.worker.postMessage(
	{
    	    type: 'encryptStream',
    	    in: readable,
    	    out: writable,
	},
	[readable, writable]
)</code></pre>
            <p>And the same for decryption:</p>
            <pre><code>const receiverStreams = receiver.createEncodedStreams()
const { readable, writable } = receiverStreams
this.worker.postMessage(
	{
    	    type: 'decryptStream',
    	    in: readable,
    	    out: writable,
	},
	[readable, writable]
)</code></pre>
            <p>Once we do this for both audio and video streams, we’re done.</p>
    <div>
      <h2>Handling different codec behaviors</h2>
      <a href="#handling-different-codec-behaviors">
        
      </a>
    </div>
    <p>The streams are now encrypted before sending and decrypted before rendering, but the browser doesn’t know this. To the browser, the stream is still an ordinary video or audio stream. This can cause errors to occur in the browser’s depacketizing logic, which expects to see certain bytes in certain places, depending on the codec. This results in some extremely cypherpunk artifacts every dozen seconds or so:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72baLJkLPZPdbjHjGVxSU5/2ea34b02826aacc2b23086b463a4938f/image3.png" />
          </figure><p>Fortunately, this exact issue was discovered by engineers at Discord, who handily documented it in their <a href="https://github.com/discord/dave-protocol/blob/main/protocol.md"><u>DAVE</u></a> E2EE video calling protocol. For the VP8 codec, which we use by default, the solution is simple: split off the first 1–10 bytes of each frame, and send them unencrypted:</p>
            <pre><code>fn split_vp8_header(frame: &amp;[u8]) -&gt; Option&lt;(&amp;[u8], &amp;[u8])&gt; {
    // The low bit of the first byte of the VP8 frame tag is 0 for keyframes.
    // If this is a keyframe, keep 10 bytes unencrypted. Otherwise, 1 is enough
    let is_keyframe = frame.first()? &amp; 1 == 0;
    let unencrypted_prefix_size = if is_keyframe { 10 } else { 1 };
    frame.split_at_checked(unencrypted_prefix_size)
}</code></pre>
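            <p>To make the behavior concrete, here is the splitter as a self-contained program with a quick check. Per the VP8 bitstream format (RFC 6386), the keyframe flag is the low bit of the first byte of the frame tag, with 0 meaning keyframe. The sample frames below are illustrative byte patterns, not captured VP8 data.</p>

```rust
// Self-contained sketch of the VP8 header splitter described above.
// Keyframes (low bit of the first byte clear) keep a 10-byte clear prefix;
// interframes keep only 1 byte in the clear.
fn split_vp8_header(frame: &[u8]) -> Option<(&[u8], &[u8])> {
    let is_keyframe = frame.first()? & 1 == 0;
    let unencrypted_prefix_size = if is_keyframe { 10 } else { 1 };
    frame.split_at_checked(unencrypted_prefix_size)
}

fn main() {
    // A 12-byte keyframe: 10 bytes stay in the clear, 2 get encrypted.
    let keyframe = [0x00u8; 12];
    assert_eq!(
        split_vp8_header(&keyframe),
        Some((&keyframe[..10], &keyframe[10..]))
    );

    // An interframe (low bit set): only 1 byte stays in the clear.
    let interframe = [0x81u8, 0xAA, 0xBB];
    let (clear, rest) = split_vp8_header(&interframe).unwrap();
    assert_eq!((clear.len(), rest.len()), (1, 2));

    // A frame shorter than its prefix cannot be split.
    assert!(split_vp8_header(&[0x00u8; 5]).is_none());
    println!("ok");
}
```

<p>Note that <code>split_at_checked</code> returns <code>None</code> for frames shorter than the required prefix, so a truncated frame is skipped rather than causing a panic.</p>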
            <p>These bytes are not particularly important to encrypt, since they only contain versioning info, whether or not this frame is a keyframe, some constants, and the width and height of the video.</p><p>And that’s truly it for the stream encryption part! The only thing remaining is to figure out how we will let new users join a room.</p>
    <div>
      <h2>“Join my Orange Meet” </h2>
      <a href="#join-my-orange-meet">
        
      </a>
    </div>
    <p>Usually, the only way to join the call is to click a link. And since the protocol is encrypted, a joining user needs to have some cryptographic information in order to decrypt any messages. How do they receive this information, though? There are a few options.</p><p>DAVE does it by using an MLS feature called <i>external proposals</i>. In short, the Discord server registers itself as an <i>external sender</i>, i.e., a party that can send administrative messages to the group, but cannot receive any. When a user wants to join a room, they provide their own cryptographic material, called a <i>key package</i>, and the server constructs and sends an MLS <a href="https://www.rfc-editor.org/rfc/rfc9420.html#section-12.1.8"><u>External Add message</u></a> to the group to let them know about the new user joining. Eventually, a group member will <i>commit</i> this External Add, sending the joiner a <i>Welcome</i> message containing all information necessary to send and receive video.
</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1gQm3r3Bai8Rks4M82JuSh/87ff851a12505f5c17c241e3f1eade6a/image4.png" />
          </figure><p><sup><i>A user joining a group via MLS external proposals. Recall the Orange Meets app server functions as a broadcast channel for the whole group. We consider a group of 3 members. We show member #2 as the one committing to the proposal, but this can be done by any member. Member #2 also sends a Commit message to the other members, but we omit this for space.</i></sup></p><p>This is a perfectly viable way to implement room joining, but it would require us to extend the Orange Meets server logic to have some concept of MLS. Since part of our goal is to keep things as simple as possible, we would like to do all our cryptography client-side.</p><p>So instead we use what we call the <i>designated committer</i> algorithm. When a user joins a group, they send their cryptographic material to one group member, the <i>designated committer</i>, who then constructs and sends the Add message to the rest of the group. Similarly, when notified of a user’s exit, the designated committer constructs and sends a Remove message to the rest of the group. With this setup, the server’s job remains nothing more than broadcasting messages! It’s quite simple, too: the full implementation of the designated committer state machine comes out to <a href="https://github.com/cloudflare/orange/blob/66e80d6d9146e2aedd4668e581810c0ee6aeb4a0/rust-mls-worker/src/mls_ops.rs#L90-L446"><u>300 lines of Rust</u></a>, including the MLS boilerplate, and it’s about as efficient as the external-proposal approach.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3k3U7kFcYTwY81XzSrggt8/c27945dec311f251493826542704d370/image1.png" />
          </figure><p><sup><i>A user joining a group via the designated committer algorithm.</i></sup></p><p>One notable property of the designated committer algorithm is that it wouldn’t work in a text group chat, where any given user (in particular, the designated committer) may be offline for an arbitrary period of time. Our method works because video calls are an inherently synchronous medium: the designated committer is, by definition, connected.</p>
    <div>
      <h3>Verifying the Designated Committer Algorithm with TLA<sup>+</sup></h3>
      <a href="#verifying-the-designated-committer-algorithm-with-tla">
        
      </a>
    </div>
    <p>The designated committer algorithm is a pretty neat simplification, but it comes with some non-trivial edge cases that we need to make sure we handle, such as:</p><ul><li><p><i>How do we make sure there is only one designated committer at a time?</i> The designated committer is the alive user with the smallest index in the MLS group state, which all users share.</p></li><li><p><i>What happens if the designated committer exits?</i> Then the next user will take its place. Every user keeps track of pending Adds and Removes, so it can continue where the previous designated committer left off.</p></li><li><p><i>If a user has not caught up to all messages, could they think they’re the designated committer?</i> No, they have to believe first that all prior eligible designated committers are disconnected.</p></li></ul><p>To make extra sure that this algorithm was correct, we formally modeled it and put it through the <a href="https://lamport.azurewebsites.net/tla/high-level-view.html"><u>TLA</u><u><sup>+</sup></u></a> model checker. To our surprise, it caught some low-level bugs! In particular, it found that, if the designated committer dies while adding a user, the protocol does not recover. We fixed these by breaking up MLS operations and enforcing a strict ordering on messages locally (e.g., a Welcome is always sent before its corresponding Add).</p><p>You can find an explainer, lessons learned, and the full <a href="https://learntla.com/core/index.html"><u>PlusCal</u></a> program (a high-level language that compiles to TLA<sup>+</sup>) <a href="https://github.com/cloudflareresearch/orange-e2ee-model-check"><u>here</u></a>. The caveat, as with any use of a bounded model checker, is that the checking is, well, bounded. We verified that no invalid protocol states are possible in a group of up to five users. We think this is good evidence that the protocol is correct for an arbitrary number of users. 
Because there are only two distinct roles in the protocol (designated committer and other group member), any weird behavior ought to be reproducible with two or three users, max.</p>
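    <p>The selection rule from the first bullet above (the designated committer is the alive member with the smallest index in the shared group state) is simple enough to sketch directly. The <code>Member</code> type and its fields below are illustrative names, not the actual Orange Meets data structures:</p>

```rust
// Illustrative model of designated-committer selection. In the real system
// this state lives in the shared MLS group; `Member` is a hypothetical type.
struct Member {
    index: u32,
    alive: bool,
}

// The designated committer is the alive member with the smallest index,
// or None if nobody is alive.
fn designated_committer(members: &[Member]) -> Option<u32> {
    members.iter().filter(|m| m.alive).map(|m| m.index).min()
}

fn main() {
    let mut group = vec![
        Member { index: 0, alive: true },
        Member { index: 1, alive: true },
        Member { index: 2, alive: true },
    ];
    assert_eq!(designated_committer(&group), Some(0));

    // If the committer exits, the next-smallest alive index takes over.
    group[0].alive = false;
    assert_eq!(designated_committer(&group), Some(1));
    println!("ok");
}
```

<p>Because every member evaluates this rule over the same shared MLS group state, everyone agrees on who commits next without any coordination from the server.</p>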
    <div>
      <h2>Preventing Monster-in-the-Middle attacks</h2>
      <a href="#preventing-monster-in-the-middle-attacks">
        
      </a>
    </div>
    <p>One important concern to address in any end-to-end encryption setup is how to prevent the service provider from replacing users’ key packages with their own. If the Orange Meets app server did this, and colluded with a malicious SFU to decrypt and re-encrypt video frames on the fly, then the SFU could see all the video sent through the network, and nobody would know.</p><p>To resolve this, like DAVE, we include a <i>safety number</i> in the corner of the screen for all calls. This number uniquely represents the cryptographic state of the group. If you check out-of-band (e.g., in a Signal group chat) that everyone agrees on the safety number, then you can be sure nobody’s key material has been secretly replaced.</p><p>In fact, you could also read the safety number aloud in the video call itself, but doing this is not provably secure. Reading a safety number aloud is an <i>in-band verification</i> mechanism, i.e., one where a party authenticates a channel within that channel. If a malicious app server colluding with a malicious SFU were able to construct believable video and audio of the user reading the safety number aloud, it could bypass this safety mechanism. So if your threat model includes adversaries that are able to break into a Worker and Cloudflare’s SFU, and simultaneously generate real-time deep-fakes, you should use out-of-band verification 😄.</p>
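    <p>To make the safety-number idea concrete, here is a toy sketch that reduces a serialized group state to a short, human-comparable number. This is purely illustrative: it uses the standard library’s non-cryptographic <code>DefaultHasher</code> and a made-up state encoding, whereas a real implementation must derive the number from a cryptographic hash of the MLS group state.</p>

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy illustration only: a real safety number must come from a cryptographic
// hash of the group state; DefaultHasher is NOT collision-resistant.
fn safety_number(group_state: &[u8]) -> String {
    let mut hasher = DefaultHasher::new();
    group_state.hash(&mut hasher);
    // Reduce to six decimal digits so users can compare it out-of-band.
    format!("{:06}", hasher.finish() % 1_000_000)
}

fn main() {
    // Hypothetical serialized states; any change to membership or key
    // material changes the number everyone sees.
    let state_a = b"epoch=4;members=alice,bob,carol";
    let state_b = b"epoch=4;members=alice,bob,mallory";
    assert_eq!(safety_number(state_a), safety_number(state_a));
    assert_ne!(safety_number(state_a), safety_number(state_b));
    println!("{}", safety_number(state_a));
}
```

<p>The key property is that the number is a deterministic function of the group’s cryptographic state: if the app server swapped in its own key package, every honest participant’s number would change, and the out-of-band comparison would fail.</p>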
    <div>
      <h2>Future work</h2>
      <a href="#future-work">
        
      </a>
    </div>
    <p>There are some areas we could improve on:</p><ul><li><p>There is another attack vector for a malicious app server: it is possible to simply serve users malicious JavaScript. This problem, more generally called the <a href="https://web.archive.org/web/20200731144044/https://www.nccgroup.com/us/about-us/newsroom-and-events/blog/2011/august/javascript-cryptography-considered-harmful/"><u>JavaScript Cryptography Problem</u></a>, affects any in-browser application where the client wants to hide data from the server. Fortunately, we are working on a standard to address this, called <a href="https://github.com/beurdouche/explainers/blob/main/waict-explainer.md"><u>Web Application Manifest Consistency, Integrity, and Transparency</u></a>. In short, like our <a href="https://blog.cloudflare.com/key-transparency/"><u>Code Verify</u></a> solution for WhatsApp, this would allow every website to commit to the JavaScript it serves, and have a third party create an auditable log of the code. With transparency, malicious JavaScript can still be distributed, but at least now there is a log that records the code.</p></li><li><p>We can make out-of-band authentication easier by placing trust in an identity provider. Using <a href="https://www.bastionzero.com/openpubkey"><u>OpenPubkey</u></a>, it would be possible for a user to get the identity provider to sign their cryptographic material, and then present that. Then all the users would check the signature before using the material. Transparency would also help here to ensure no signatures were made in secret.</p></li></ul>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We built end-to-end encryption into the Orange Meets video chat app without a lot of engineering time, and by modifying just the client code. To do so, we built a WASM (compiled from Rust) <a href="https://github.com/cloudflare/orange/blob/e2ee/rust-mls-worker"><u>web worker</u></a> that sets up an <a href="https://www.rfc-editor.org/rfc/rfc9420.html"><u>MLS</u></a> group and handles stream encryption and decryption. We also designed a new joining protocol for groups, called the <i>designated committer algorithm</i>, and <a href="https://github.com/cloudflareresearch/orange-e2ee-model-check"><u>formally modeled it in TLA</u><u><sup>+</sup></u></a>. We’ve left comments marking all kinds of optimizations still to be done, so please send us a PR if you’re so inclined!</p><p>Try using Orange Meets with E2EE enabled at <a href="https://e2ee.orange.cloudflare.dev/"><u>e2ee.orange.cloudflare.dev</u></a>, or deploy your own instance using the <a href="https://github.com/cloudflare/orange"><u>open source repository</u></a> on GitHub.</p>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Cloudflare Realtime]]></category>
            <guid isPermaLink="false">6X6FQzpKaqVyTLVk7rw6xm</guid>
            <dc:creator>Michael Rosenberg</dc:creator>
            <dc:creator>Kevin Kipp</dc:creator>
            <dc:creator>Renan Dincer</dc:creator>
            <dc:creator>Felipe Astroza Araya</dc:creator>
            <dc:creator>Mari Galicer</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Automatic SSL/TLS: securing and simplifying origin connectivity]]></title>
            <link>https://blog.cloudflare.com/introducing-automatic-ssl-tls-securing-and-simplifying-origin-connectivity/</link>
            <pubDate>Thu, 08 Aug 2024 14:05:00 GMT</pubDate>
            <description><![CDATA[ This new Automatic SSL/TLS setting will maximize and simplify the encryption modes Cloudflare uses to communicate with origin servers by using the SSL/TLS Recommender. ]]></description>
            <content:encoded><![CDATA[ 
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4YIQCIdM9Td1RJfdCkg3o5/6fc5cd824f819658e00007c61f69ce71/1885-1-Hero.png" />
          </figure><p>During Birthday Week 2022, we <a href="https://blog.cloudflare.com/securing-origin-connectivity"><u>pledged</u></a> to provide our customers with the most secure connection possible from Cloudflare to their origin servers automatically. I’m thrilled to announce we will begin rolling this experience out to customers who have the <a href="https://blog.cloudflare.com/ssl-tls-recommender"><u>SSL/TLS Recommender</u></a> enabled on <b>August 8, 2024</b>. Following this, remaining Free and Pro customers can use this feature beginning <b>September 16, 2024</b>, with Business and Enterprise customers to follow.</p><p>Although it took longer than anticipated to roll out, our priority was to achieve an automatic configuration both transparently and without risking any site downtime. Taking this additional time allowed us to balance enhanced security with seamless site functionality, especially since origin server security configuration and capabilities are beyond Cloudflare's direct control. The new Automatic SSL/TLS setting will maximize and simplify the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption modes</u></a> Cloudflare uses to communicate with origin servers by using the <a href="https://blog.cloudflare.com/ssl-tls-recommender"><u>SSL/TLS Recommender</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53WNT2fwr0HuN2L0M5PKnJ/c005f100b455fd699d32d2f602ebf447/1885-2.png" />
          </figure><p>We first talked about this process in <a href="https://blog.cloudflare.com/introducing-universal-ssl"><u>2014</u></a>: at that time, securing connections was hard to configure, prohibitively expensive, and required specialized knowledge to set up correctly. To help alleviate these pains, Cloudflare introduced Universal SSL, which allowed web properties to obtain a <a href="https://www.cloudflare.com/application-services/products/ssl/"><u>free SSL/TLS certificate</u></a> to enhance the security of connections between browsers and Cloudflare. </p><p>This worked well and was easy because Cloudflare could manage the certificates and connection security from incoming browsers. As a result of that work, the number of encrypted HTTPS connections on the entire Internet <a href="https://blog.cloudflare.com/introducing-universal-ssl#:~:text=we%27ll%20have%20doubled%20that"><u>doubled</u></a> at that time. However, the connections made from Cloudflare to origin servers still required <i>manual</i> configuration of the encryption <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>modes</u></a> to let Cloudflare know the capabilities of the origin. </p><p>Today we’re excited to begin the sequel to Universal SSL and make security between Cloudflare and origins automatic and easy for everyone.</p>
    <div>
      <h2>History of securing origin-facing connections</h2>
      <a href="#history-of-securing-origin-facing-connections">
        
      </a>
    </div>
    <p>Ensuring that more bytes flowing across the Internet are automatically encrypted strengthens the barrier against interception, throttling, and censorship of Internet traffic by third parties. </p><p>Generally, two communicating parties (often a client and server) establish a secure connection using the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS</u></a> protocol. For a simplified breakdown: </p><ul><li><p>The client advertises the list of encryption parameters it supports (along with some metadata) to the server.</p></li><li><p>The server responds back with its own preference of the chosen encryption parameters. It also sends a digital certificate so that the client can authenticate its identity.</p></li><li><p>The client validates the server identity, confirming that the server is who it says it is.</p></li><li><p>Both sides agree on a <a href="https://www.cloudflare.com/learning/ssl/what-is-asymmetric-encryption/#:~:text=What%20is-,symmetric,-encryption%3F"><u>symmetric</u></a> secret key for the session that is used to encrypt and decrypt all transmitted content over the connection.</p></li></ul><p>Because Cloudflare acts as an intermediary between the client and our customer’s origin server, two separate TLS connections are established. One between the user’s browser and our network, and the other from our network to the origin server. This allows us to manage and optimize the security and performance of both connections independently.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6s0NxfVR5tCXuAzhI8pYdw/f1f48be437de48bf1b60495647016fbb/1885-3.png" />
          </figure><p>Unlike securing connections between clients and Cloudflare, the security capabilities of origin servers are not under our direct control. For example, we can manage the <a href="https://www.cloudflare.com/en-gb/learning/ssl/what-is-an-ssl-certificate/"><u>certificate</u></a> (the file used to verify identity and provide context on establishing encrypted connections) between clients and Cloudflare because it’s our job in that connection to provide it to clients, but when talking to origin servers, Cloudflare <i>is</i> the client.</p><p>Customers need to <a href="https://developers.cloudflare.com/ssl/origin-configuration/origin-ca/"><u>acquire and provision</u></a> an origin certificate on their host. They then have to configure Cloudflare to expect the new certificate from the origin when opening a connection. Needing to manually configure connection security across multiple different places requires effort and is prone to human error. </p><p>This issue was discussed in the original <a href="https://blog.cloudflare.com/introducing-universal-ssl"><u>Universal SSL blog</u></a>:</p><blockquote><p><i>For a site that did not have SSL before, we will default to our </i><a href="https://support.cloudflare.com/hc/en-us/articles/200170416-What-do-the-SSL-options-Off-Flexible-SSL-Full-SSL-Full-SSL-Strict-mean-"><i><u>Flexible SSL mode</u></i></a><i>, which means traffic from browsers to Cloudflare will be encrypted, but traffic from Cloudflare to a site's origin server will not. 
We strongly recommend site owners install a certificate on their web servers so we can encrypt traffic to the origin … Once you've installed a certificate on your web server, you can enable the </i><a href="https://support.cloudflare.com/hc/en-us/articles/200170416-What-do-the-SSL-options-Off-Flexible-SSL-Full-SSL-Full-SSL-Strict-mean-"><i><u>Full or Strict SSL modes</u></i></a><i> which encrypt origin traffic and provide a higher level of security.</i></p></blockquote><p>Over the years Cloudflare has introduced numerous products to help customers configure how Cloudflare should talk to their origin. These products include a <a href="https://blog.cloudflare.com/universal-ssl-encryption-all-the-way-to-the-origin-for-free/"><u>certificate authority</u></a> to help customers obtain a certificate to verify their origin server’s identity and encryption capabilities, <a href="https://developers.cloudflare.com/ssl/origin-configuration/authenticated-origin-pull/"><u>Authenticated Origin Pulls</u></a> that ensures only HTTPS (encrypted) requests from Cloudflare will receive a response from the origin server, and <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnels</u></a> that can be configured to proactively establish secure and private tunnels to the nearest Cloudflare data center. Additionally, the <a href="https://datatracker.ietf.org/doc/html/rfc8555/"><u>ACME</u></a> protocol and its corresponding <a href="https://certbot.eff.org/"><u>Certbot</u></a> tooling make it easier than ever to obtain and manage publicly-trusted certificates on customer origins. While these technologies help customers configure how Cloudflare should communicate with their origin server, they still require manual configuration changes on the origin and to Cloudflare settings. 
</p><p>Ensuring certificates are configured appropriately on origin servers and informing Cloudflare about how we should communicate with origins can be anxiety-inducing, because a misconfiguration can lead to downtime. </p><p>To simplify this process and help identify the most secure options that customers could be using without any misconfiguration risk, <b>Cloudflare introduced the </b><a href="https://blog.cloudflare.com/ssl-tls-recommender"><b><u>SSL/TLS Recommender</u></b></a><b> in 2021.</b> The Recommender works by probing customer origins with different SSL/TLS settings to provide a recommendation on whether the SSL/TLS <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>encryption mode</u></a> for the web property can be improved. The Recommender has been in production for three years and has consistently managed to provide high-quality origin-security recommendations for Cloudflare’s customers. </p><p>The SSL/TLS Recommender system serves as the brain of the automatic origin connection service that we are announcing today. </p>
    <div>
      <h2>How does SSL/TLS Recommendation work?</h2>
      <a href="#how-does-ssl-tls-recommendation-work">
        
      </a>
    </div>
    <p>The Recommender works by actively comparing content on web pages that have been downloaded using different SSL/TLS modes to see if it is safe and risk-free to update the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>mode</u></a> Cloudflare uses to connect to origin servers.</p><p>Cloudflare currently offers five SSL/TLS modes:</p><ol><li><p><b>Off</b>: No encryption is used for traffic between browsers and Cloudflare or between Cloudflare and origins. Everything is cleartext HTTP.</p></li><li><p><b>Flexible</b>: Traffic from browsers to Cloudflare can be encrypted via HTTPS, but traffic from Cloudflare to the origin server is not. This mode is common for origins that do not support TLS, though upgrading the origin configuration is recommended whenever possible. A guide for upgrading is available <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full/#required-setup"><u>here</u></a>.</p></li><li><p><b>Full</b>: Cloudflare matches the browser request protocol when connecting to the origin. If the browser uses HTTP, Cloudflare connects to the origin via HTTP; if HTTPS, Cloudflare uses HTTPS without validating the origin’s certificate. This mode is common for origins that use self-signed or otherwise invalid certificates.</p></li><li><p><b>Full (Strict)</b>: Similar to Full Mode, but with added validation of the origin server’s certificate, which can be issued by a public CA like Let’s Encrypt or by Cloudflare Origin CA.</p></li><li><p><b>Strict (SSL-only origin pull)</b>: Regardless of whether the browser-to-Cloudflare connection uses HTTP or HTTPS, Cloudflare always connects to the origin over HTTPS with certificate validation.</p></li></ol><table><tr><th><p>
</p></th><th><p><b>HTTP from visitor</b></p></th><th><p><b>HTTPS from visitor</b></p></th></tr><tr><td><p><b>Off</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTP to origin</p></td></tr><tr><td><p><b>Flexible</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTP to origin</p></td></tr><tr><td><p><b>Full</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTPS without cert validation to origin</p></td></tr><tr><td><p><b>Full (strict)</b></p></td><td><p>HTTP to origin</p></td><td><p>HTTPS with cert validation to origin</p></td></tr><tr><td><p><b>Strict (SSL-only origin pull)</b></p></td><td><p>HTTPS with cert validation to origin</p></td><td><p>HTTPS with cert validation to origin</p></td></tr></table><p>
The SSL/TLS Recommender works by crawling customer sites and collecting links on the page (like any web crawler). The Recommender downloads content over both HTTP and HTTPS, making GET requests to avoid modifying server resources. It then uses a content similarity algorithm, adapted from the research paper "<a href="https://www.cs.umd.edu/~dml/papers/https_tma20.pdf"><u>A Deeper Look at Web Content Availability and Consistency over HTTP/S"</u></a> (TMA Conference 2020), to determine if content matches. If the content does match, the Recommender makes a determination for whether the <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/"><u>SSL/TLS mode</u></a> can be increased without misconfiguration risk. </p><p>The recommendations are currently delivered to customers via email. </p><p>When the Recommender is making security recommendations, it errs on the side of maintaining current site functionality to avoid breakage and usability issues. If a website is non-functional, blocks all bots, or has SSL/TLS-specific Page Rules or Configuration Rules, the Recommender may not complete its scans and provide a recommendation. It was designed to maximize <a href="https://www.cloudflare.com/application-services/solutions/domain-protection-services/">domain security</a>, but will not help resolve website or domain functionality issues.</p><p>The crawler uses the user agent "<code>Cloudflare-SSLDetector</code>" and is included in Cloudflare’s list of known <a href="https://bots-directory.cfdata.org/bot/cloudflare-ssl-detector"><u>good bots</u></a>. 
It ignores <code>robots.txt</code> (except for rules specifically targeting its user agent) to ensure accurate recommendations.</p><p>When downloading content from your origin server over both HTTP and HTTPS and comparing the content, the Recommender understands the current SSL/TLS encryption mode that your website uses and what risk there might be to the site functionality if the recommendation is followed.</p>
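    <p>The encryption-mode table above is mechanical enough to express directly in code. The following sketch encodes it as a lookup; the enum and function names are illustrative, not Cloudflare’s internal API:</p>

```rust
// Illustrative encoding of the SSL/TLS mode table; names are not Cloudflare's API.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Mode {
    Off,
    Flexible,
    Full,
    FullStrict,
    StrictSslOnly,
}

#[derive(Clone, Copy, PartialEq, Debug)]
enum Origin {
    Http,
    HttpsNoValidation,
    HttpsValidated,
}

// visitor_https: whether the visitor connected to Cloudflare over HTTPS.
fn origin_connection(mode: Mode, visitor_https: bool) -> Origin {
    match (mode, visitor_https) {
        // Off and Flexible never encrypt the Cloudflare-to-origin leg.
        (Mode::Off, _) | (Mode::Flexible, _) => Origin::Http,
        // Full and Full (strict) match the visitor's protocol.
        (Mode::Full, false) | (Mode::FullStrict, false) => Origin::Http,
        (Mode::Full, true) => Origin::HttpsNoValidation,
        // Full (strict) validates; Strict always uses validated HTTPS.
        (Mode::FullStrict, true) | (Mode::StrictSslOnly, _) => Origin::HttpsValidated,
    }
}

fn main() {
    // Flexible encrypts visitor-to-Cloudflare, but not Cloudflare-to-origin.
    assert_eq!(origin_connection(Mode::Flexible, true), Origin::Http);
    // Strict validates the origin certificate even for plain-HTTP visitors.
    assert_eq!(origin_connection(Mode::StrictSslOnly, false), Origin::HttpsValidated);
    println!("ok");
}
```

<p>Written this way, the upgrade ordering is also easy to see: for any given visitor scheme, each successive mode only preserves or strengthens the origin-facing connection, which is what lets the Recommender safely suggest moving up the list.</p>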
    <div>
      <h2>Using SSL/TLS Recommender to automatically manage SSL/TLS settings </h2>
      <a href="#using-ssl-tls-recommender-to-automatically-manage-ssl-tls-settings">
        
      </a>
    </div>
    <p>Previously, signing up for the SSL/TLS Recommender provided a good experience for customers, but only resulted in an email recommendation in the event that a zone’s current SSL/TLS modes could be updated. To Cloudflare, this was a positive signal that customers wanted their websites to have more secure connections to their origin servers – over 2 million domains have enabled the SSL/TLS Recommender. However, we found that a significant number of users would not complete the next step of pushing the button to inform Cloudflare that we could communicate over the upgraded settings. <b>Only 30% of the recommendations that the system provided were followed. </b></p><p>With the system designed to increase security while avoiding any breaking changes, we wanted to provide an option for customers to allow the Recommender to help upgrade their site security, without requiring further manual action from the customer. <b>Therefore, we are introducing a new option for managing SSL/TLS configuration on Cloudflare: Automatic SSL/TLS. </b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/21q0D6rvhXHQxRe2ko4ITA/d5ca2f9a7139a2f55a16ca8bcf783ee0/1885-4.png" />
          </figure><p></p><p>Automatic SSL/TLS uses the SSL/TLS Recommender to determine the most secure encryption mode a website can safely be set to. If there is a <b>more secure</b> option for your website (based on your origin certificate or capabilities), Automatic SSL/TLS will find it and apply it for your domain. The other option, <b>Custom SSL/TLS,</b> works exactly like setting the encryption mode does today. If you know what setting you want, just select it using Custom SSL/TLS, and we’ll use it. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jFTSsmxG2WH0FqTklAJwb/eff9f692cdec3d199d32996bb0111441/1885-5.png" />
          </figure><p></p><p>Automatic SSL/TLS is currently meant to service an entire website, which typically works well for those with a single origin. For those concerned that they have more complex setups which use multiple origin servers with different security capabilities, don’t worry. Automatic SSL/TLS will still avoid breaking site functionality by looking for the best setting that works for all origins serving a part of the site’s traffic. </p><p>If customers want to segment the SSL/TLS mode used to communicate with the numerous origins that service their domain, they can achieve this by using <a href="https://developers.cloudflare.com/rules/configuration-rules/"><u>Configuration Rules</u></a>. These rules allow you to set more precise modes that Cloudflare should respect (based on path or subdomain or even IP address) to maximize the security of the domain based on your desired Rules criteria. If your site uses SSL/TLS-specific settings in a Configuration Rule or Page rule, those settings will <b>override the zone-wide Automatic and Custom settings.</b></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PCXOjFBtEucRUOP3BoMGQ/6ba2700c18cf4c49782bdf2d0ee33435/1885-6.png" />
          </figure><p></p><p>The goal of Automatic SSL/TLS<b> </b>is to simplify and maximize the origin-facing security for customers on Cloudflare. We want this to be the new default for all websites on Cloudflare, but we understand that not everyone wants this new default, and we will respect your decision for how Cloudflare should communicate with your origin server. If you block the Recommender from completing its crawls, the origin server is non-functional or can’t be crawled, or if you want to opt out of this default and just continue using the same encryption mode you are using today, we will make it easy for you to tell us what you prefer. </p>
    <div>
      <h2>How to onboard to Automatic SSL/TLS</h2>
      <a href="#how-to-onboard-to-automatic-ssl-tls">
        
      </a>
    </div>
    <p>To improve the security settings for everyone by default, we are making the following default changes to how Cloudflare configures the SSL/TLS level for all zones: </p><p>Starting on <b>August 8, 2024</b>, websites with the <b>SSL/TLS Recommender currently enabled</b> will have the Automatic SSL/TLS setting enabled by default. Enabling does not mean that the Recommender will begin scanning and applying new settings immediately, though. There will be a <b><u>one-month grace period</u></b> before the first scans begin and the recommended settings are applied. Enterprise (ENT) customers will get a <b><u>six-week grace period</u></b>. Origin scans will start getting scheduled by <b>September 9, 2024</b> for non-Enterprise customers and <b>September 23, 2024</b> for ENT customers with the SSL/TLS Recommender enabled. This will give customers the ability to opt out by removing Automatic SSL/TLS and selecting the Custom mode that they want to use instead.</p><p>Further, during the second week of September, <b>all new zones signing up for Cloudflare</b> will start seeing the Automatic SSL/TLS setting enabled by default.</p><p>Beginning <b>September 16, 2024, </b>remaining <b>Free and Pro</b> customers will start to see the new Automatic SSL/TLS setting. They will also have a one-month grace period to opt out before the scans start taking effect. </p><p>Customers in the cohort having the new Automatic SSL/TLS setting applied will receive an email communication regarding the date that they are slated for this migration, as well as a banner on the dashboard mentioning the transition. If they do not wish for Cloudflare to change anything in their configurations, the process for opting out of this migration is outlined below. </p><p>Following the successful migration of Free and Pro customers, we will proceed to Business and Enterprise customers with a similar cadence. 
These customers will get email notifications and information in the dashboard when they are in the migration cohort.</p><p>The Automatic SSL/TLS setting will not impact users that are already in Strict or Full (strict) mode nor will it impact websites that have opted-out. </p>
    <div>
      <h2>Opting out</h2>
      <a href="#opting-out">
        
      </a>
    </div>
    <p>There are a number of reasons why someone might want to configure a lower-than-optimal security setting for their website. Some may want a lower security setting for testing purposes or to debug some behavior. Whatever the reason, the options to opt out of the Automatic SSL/TLS setting during the migration process are available in the dashboard and API.</p><p>To opt out, select <b>Custom SSL/TLS</b> in the dashboard (instead of the enabled Automatic SSL/TLS) and we will continue to use the encryption mode that you had set prior to the migration. Automatic and Custom SSL/TLS modes can be found in the <b>Overview</b> tab of the SSL/TLS section of the dashboard. To enable your preferred mode, select <b>configure</b>.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4meNmREGaXd1FJfxUKr5NN/bdbe1e07a2121d2f9ec2a11e64c77b7f/1885-7.png" />
          </figure><p></p><p>If you want to opt out via the API, you can make the following API call on or before the grace period expiration date. </p>
            <pre><code>    curl --request PATCH \
        --url https://api.cloudflare.com/client/v4/zones/&lt;insert_zone_tag_here&gt;/settings/ssl_automatic_mode \
        --header 'Authorization: Bearer &lt;insert_api_token_here&gt;' \
        --header 'Content-Type: application/json' \
        --data '{"value":"custom"}'
</code></pre>
            <p></p><p>If an opt-out is triggered, there will be no change to the currently configured SSL/TLS setting. You can also change the security level at any time by going to the SSL/TLS section of the dashboard and choosing the Custom setting you want (similar to how this is accomplished today). </p><p>If at a later point you’d like to opt in to Automatic SSL/TLS, you can do so by changing your setting from Custom to Automatic.</p>
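<p>If you prefer to script the opt-out, the same request can be expressed with Python’s standard library. This is an illustrative sketch, not an official client: it mirrors the curl call above but stops short of sending the request (pass the built request to <code>urllib.request.urlopen</code> to actually perform the call), and the zone tag and token values below are placeholders.</p>

```python
import json
import urllib.request

def build_opt_out_request(zone_tag: str, api_token: str) -> urllib.request.Request:
    """Build (but do not send) the PATCH request that sets a zone's
    Automatic SSL/TLS selection to "custom", mirroring the curl call above."""
    url = (
        "https://api.cloudflare.com/client/v4/zones/"
        f"{zone_tag}/settings/ssl_automatic_mode"
    )
    return urllib.request.Request(
        url,
        data=json.dumps({"value": "custom"}).encode(),
        method="PATCH",
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )

# Placeholder credentials; sending is intentionally omitted here.
req = build_opt_out_request("your_zone_tag", "your_api_token")
```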
    <div>
      <h2>What if I want to be more secure now?</h2>
      <a href="#what-if-i-want-to-be-more-secure-now">
        
      </a>
    </div>
    <p>We will begin to roll out this change to customers with the SSL/TLS Recommender enabled on <b>August 8, 2024</b>. If you want to enroll in that group, we recommend enabling the Recommender as soon as possible. </p><p>If you read this and want to make sure you’re at the highest level of backend security already, we recommend <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/"><u>Full (strict)</u></a> or <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/"><u>Strict mode</u></a>. Directions on how to make sure you’re correctly configured in either of those settings are available <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/full-strict/#required-setup"><u>here</u></a> and <a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes/ssl-only-origin-pull/#required-setup"><u>here</u></a>. </p><p>If you prefer to wait for us to automatically upgrade your connection to the maximum encryption mode your origin supports, please watch your inbox for the date we will begin rolling out this change for you.</p> ]]></content:encoded>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Network Services]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">2lhAhlWMei6M2NkhzAuULC</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Suleman Ahmad</dc:creator>
            <dc:creator>J Evans</dc:creator>
            <dc:creator>Yawar Jamal</dc:creator>
        </item>
        <item>
            <title><![CDATA[Armed to Boot: an enhancement to Arm's Secure Boot chain]]></title>
            <link>https://blog.cloudflare.com/armed-to-boot/</link>
            <pubDate>Wed, 25 Jan 2023 14:00:00 GMT</pubDate>
            <description><![CDATA[ Enhancing the Arm Secure Boot chain to improve platform security on modern systems. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DXRZwWpgxoNOs7LL87gEw/26d5483a74be3dcbe2a5c56c65c5986c/image1-45.png" />
            
            </figure><p>Over the last few years, there has been a rise in the number of attacks that affect how a computer boots. Most modern computers use a specification called the Unified Extensible Firmware Interface (<a href="https://en.wikipedia.org/wiki/UEFI">UEFI</a>), which defines a software interface between an operating system (e.g. Windows) and platform firmware (e.g. for disk drives and video cards). There are security mechanisms built into UEFI that ensure platform firmware can be cryptographically validated and booted securely through an application called a bootloader. This firmware is stored in non-volatile <a href="https://en.wikipedia.org/wiki/Serial_Peripheral_Interface">SPI</a> flash memory on the motherboard, so it persists on the system even if the operating system is reinstalled and drives are replaced.</p><p>This creates a ‘trust anchor’ used to validate each stage of the boot process, but, unfortunately, this trust anchor is also a target for attack. In these UEFI attacks, malicious code is loaded onto a compromised device early in the boot process. This means that malware can change configuration data, establish persistence by ‘<a href="https://www.zdnet.com/article/chinese-apt-deploy-moonbounce-malware-in-uefi-firmware/">implanting</a>’ itself, and bypass security measures that are only loaded at the operating system stage. So, while UEFI-anchored secure boot protects the boot stages it validates, it does not protect the UEFI firmware itself.</p><p>Because of this growing trend of attacks, we began the process of <a href="/anchoring-trust-a-hardware-secure-boot-story/">cryptographically signing our UEFI firmware</a> as a mitigation step. While that existing solution is specific to our x86 AMD server fleet, we did not have a similar solution for UEFI firmware signing on Arm. 
To determine what was missing, we had to take a deep dive into the Arm secure boot process.</p><p>Read on to learn about the world of Arm Trusted Firmware Secure Boot.</p>
    <div>
      <h2>Arm Trusted Firmware Secure Boot</h2>
      <a href="#arm-trusted-firmware-secure-boot">
        
      </a>
    </div>
    <p>Arm defines a trusted boot process through an architecture called <a href="https://developer.arm.com/documentation/den0006/d">Trusted Board Boot Requirements</a> (TBBR), also known as Arm Trusted Firmware (ATF) Secure Boot. TBBR works by authenticating a series of cryptographically signed binary images, each containing a different stage or element of the system boot process to be loaded and executed. Each bootloader (BL) stage accomplishes a different part of the initialization process:</p>
    <div>
      <h3>BL1</h3>
      <a href="#bl1">
        
      </a>
    </div>
    <p>BL1 determines the boot path (is this a cold boot or a warm boot?), initializes the architecture (exception vectors, CPU initialization, and control register setup), and initializes the platform (enabling watchdog processes, the MMU, and DDR initialization).</p>
    <div>
      <h3>BL2</h3>
      <a href="#bl2">
        
      </a>
    </div>
    <p>BL2 prepares initialization of the Arm Trusted Firmware (ATF), the stack responsible for setting up the secure boot process. After ATF setup, the console is initialized, memory is mapped for the MMU, and message buffers are set for the next bootloader.</p>
    <div>
      <h3>BL3</h3>
      <a href="#bl3">
        
      </a>
    </div>
    <p>The BL3 stage has multiple parts, the first being initialization of runtime services that are used in detecting system topology. After initialization, there is a handoff from the ATF ‘secure world’ boot stage to the ‘normal world’ boot stage, which includes setup of the UEFI firmware. Context is set up to ensure that no secure state information finds its way into the normal world execution state.</p><p>Each image is authenticated by a public key, which is stored in a signed certificate and can be traced back to a root key stored on the SoC in one-time programmable (OTP) memory or ROM.</p>
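<p>The chain can be made concrete with a deliberately simplified sketch. Plain hashes stand in for the signed certificates that TBBR actually verifies, and the stage names and key bytes below are invented for illustration; only the shape of the cascade, where each stage’s key is anchored by the stage before it, reflects the process described above.</p>

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_chain(rotpk_hash: bytes, stages: list) -> bool:
    """Toy model of the TBBR chain of trust. Each stage is a
    (public_key, image) pair; stage 0's key hash is anchored in
    OTP/ROM, and each later stage's key hash is anchored in the
    tail of the previous stage's image."""
    expected_key_hash = rotpk_hash
    for public_key, image in stages:
        if sha256(public_key) != expected_key_hash:
            return False  # authentication failure: the boot halts here
        expected_key_hash = image[-32:]  # hash of the next stage's key
    return True

# Hypothetical three-stage chain: BL1 -> BL2 -> BL3.
bl3_key, bl2_key, bl1_key = b"bl3-key", b"bl2-key", b"bl1-key"
bl3 = (bl3_key, b"normal-world-payload" + bytes(32))  # last stage, tail unused
bl2 = (bl2_key, b"atf-payload" + sha256(bl3_key))
bl1 = (bl1_key, b"rom-payload" + sha256(bl2_key))

assert verify_chain(sha256(bl1_key), [bl1, bl2, bl3])
assert not verify_chain(sha256(bl1_key), [bl1, (b"rogue-key", b"x" * 32), bl3])
```

<p>A tampered or re-keyed stage breaks the cascade at exactly that link, which is why a single root key in immutable storage can anchor the whole boot path.</p>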
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UdHJVQqnXX9gZ6uaglVa4/83e7065007376d4f774fc6d9d6ee867b/image9.png" />
            
            </figure><p>TBBR was originally designed for cell phones. It established a reference architecture for building a “Chain of Trust” from the first ROM executed (BL1) to the handoff to “normal world” firmware (BL3). While this creates a validated firmware signing chain, it has caveats:</p><ol><li><p>SoC manufacturers are heavily involved in the secure boot chain, while the customer has little involvement.</p></li><li><p>A unique SoC SKU is required per customer. With one customer this could be easy, but most manufacturers have thousands of SKUs.</p></li><li><p>The SoC manufacturer is primarily responsible for end-to-end signing and maintenance of the PKI chain. This adds complexity to the process, requiring USB key fobs for signing.</p></li><li><p>It doesn’t scale outside the manufacturer.</p></li></ol><p>What this tells us is that what was built for cell phones doesn’t scale for servers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ycPPB81rgzRIuCGSu0KE7/6ed984097e839782f83ecbc28c25212a/image3-27.png" />
            
            </figure><p>If we were involved 100% in the manufacturing process, this wouldn’t be as much of an issue, but we are a customer and consumer. As a customer, we have a lot of control over our server and block design, so we looked for design partners that would take some of the concepts we implemented with AMD Platform Secure Boot and refine them to fit Arm CPUs.</p>
    <div>
      <h2>Amping it up</h2>
      <a href="#amping-it-up">
        
      </a>
    </div>
    <p>We partnered with Ampere and tested their Altra Max <a href="/arms-race-ampere-altra-takes-on-aws-graviton2/">single-socket rack server CPU</a> (code-named Mystique), which provides high performance with incredible power efficiency per core, much of what we were looking for to reduce power consumption. That is only a small subset of its specs, but notably Ampere backported various features from the Armv8.5 instruction set architecture into the Altra Max, including mitigations for speculative-execution attacks such as Meltdown and Spectre (variants 1 and 2), giving Altra the “+” designation in its ISA.</p><p>Ampere does implement a signed boot process similar to the ATF signing process mentioned above, but with some slight variations. We’ll explain it briefly to set context for the modifications that we made.</p>
    <div>
      <h2>Ampere Secure Boot</h2>
      <a href="#ampere-secure-boot">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3z02FzpTwAjg8aK9VdC2Ro/6ab22618c187ff09fcc1b7d177d165db/image4-21.png" />
            
            </figure><p>The diagram above shows the Arm processor boot sequence as implemented by Ampere. The System Control Processor (SCP) comprises the System Management Processor (SMpro) and the Power Management Processor (PMpro). The SMpro is responsible for features such as secure boot and BMC communication, while the PMpro is responsible for power features such as Dynamic Frequency Scaling and on-die thermal monitoring.</p><p>At power-on reset, the SCP runs the system management bootloader from ROM and loads the SMpro firmware. After initialization, the SMpro spawns the power management stack on the PMpro and the ATF threads. The ATF BL2 and BL31 bring up processor resources such as DRAM and PCIe. After this, control is passed to the BL33 BIOS.</p>
    <div>
      <h3>Authentication flow</h3>
      <a href="#authentication-flow">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1uctsORF4iINvxIpkpAMCM/34e931a29cc21a6b0e9d3549e81abcf8/image7-5.png" />
            
            </figure><p>At power on, the SMpro firmware reads Ampere’s public key (ROTPK) from the SMpro key certificate in SCP EEPROM, computes a hash, and compares it to Ampere’s public key hash stored in eFuse. Once authenticated, Ampere’s public key is used to decrypt key and content certificates for the SMpro, PMpro, and ATF firmware, which are launched in the order described above.</p><p>The SMpro public key is used to authenticate the SMpro and PMpro images and the ATF keys, which in turn authenticate the ATF images. This cascading set of authentications originates with the Ampere root key, whose hash is stored on-chip in an electronic fuse, or eFuse. An eFuse can be programmed only once, setting its content to read-only; it cannot be tampered with or modified.</p><p>This is the original hardware root of trust used for signing secure world system firmware. When we looked at this, after referencing the signing process we had with AMD PSB and knowing there was a large enough one-time-programmable (OTP) region within the SoC, we thought: why can’t we insert our key hash in here?</p>
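<p>As a sketch, the power-on check looks roughly like this. The key bytes and the use of SHA-256 are assumptions for illustration; what matters is the shape of the check: hash the key read from mutable storage and compare it against the immutable eFuse hash.</p>

```python
import hashlib
import hmac

# Hypothetical value: the eFuse holds only a hash, burned once and read-only.
EFUSE_ROTPK_HASH = hashlib.sha256(b"ampere-rotpk-der-bytes").digest()

def authenticate_rotpk(rotpk_from_eeprom: bytes) -> bool:
    """Hash the public key read from mutable SCP EEPROM and compare it
    against the immutable eFuse hash. compare_digest avoids a timing
    side channel in the comparison."""
    computed = hashlib.sha256(rotpk_from_eeprom).digest()
    return hmac.compare_digest(computed, EFUSE_ROTPK_HASH)

assert authenticate_rotpk(b"ampere-rotpk-der-bytes")  # genuine key
assert not authenticate_rotpk(b"tampered-key")        # swapped EEPROM key
```

<p>Because the EEPROM is writable but the eFuse is not, an attacker who replaces the key in EEPROM cannot also update the hash it is checked against.</p>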
    <div>
      <h2>Single Domain Secure Boot</h2>
      <a href="#single-domain-secure-boot">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FQq6V32FKRBNJ8z0efGuP/e718f9c9cf5e8b754b4e2ce5018f904a/image11-2.png" />
            
            </figure><p>Single Domain Secure Boot takes the same authentication flow and adds a hash of the customer public key (the Cloudflare firmware signing key in this case) to the eFuse domain. This enables verification of the UEFI firmware by a hardware root of trust. This process is performed by BL2 in the already-validated ATF firmware. Our public key (dbb) is read from UEFI secure variable storage, a hash is computed and compared to the public key hash stored in eFuse. If they match, the validated public key is used to decrypt the BL33 content certificate, validating and launching the BIOS and the remaining boot items. This is the key feature added by SDSB: it validates the entire software boot chain with a single eFuse root of trust on the processor.</p>
    <div>
      <h2>Building blocks</h2>
      <a href="#building-blocks">
        
      </a>
    </div>
    <p>With a basic understanding of how Single Domain Secure Boot works, the next logical question is “How does it get implemented?”. We ensure that all UEFI firmware is signed at build time, but this process can be better understood if broken down into steps.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7JbZYTpQmAzs85jK66urgj/dd5cf1128921938209c0ea090372eab4/Building-Blocks-1.png" />
            
            </figure><p>Ampere, our original design manufacturer (ODM), and we each play a role in executing SDSB. First, we generate certificates for a public-private key pair using our internal, secure PKI. The public key side is provided to the ODM as dbb.auth and dbu.auth in UEFI secure variable format. Ampere provides a reference Software Release Package (SRP) including the baseboard management controller, system control processor, UEFI, and complex programmable logic device (CPLD) firmware to the ODM, who customizes it for their platform. The ODM generates a board file describing the hardware configuration, and also customizes the UEFI to enroll dbb and dbu to secure variable storage on first boot.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3AtHZs6bsQ0VgTHkNDskFD/6ef31e083683fa462cf1f0f9e0860501/Building-Blocks-2.png" />
            
            </figure><p>Once this is done, we generate a UEFI.slim file using the ODM’s UEFI ROM image, Arm Trusted Firmware (ATF), and board file. (Note: this differs from AMD PSB insofar as the entire image and ATF files are signed; with AMD PSB, only the first block of boot code is signed.) The entire .slim file is signed with our private key, producing a signature hash in the file that can only be authenticated by the correct public key. Finally, the ODM packages the UEFI into the .hpm format compatible with their platform BMC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74fFXKUQ6ao22MbxnLRTOZ/8ee373903359790a543c2907280069ec/Security-Provisioning-Firmware.png" />
            
            </figure><p>In parallel, we provide the debug fuse selection and the hash of our DER-formatted public key. Ampere uses this information to create a special version of the SCP firmware known as Security Provisioning (SECPROV), delivered in .slim format. This firmware is run one time only, to program the debug fuse settings and public key hash into the SoC eFuses. Ampere delivers the SECPROV .slim file to the ODM, who packages it into a .hpm file compatible with the BMC firmware update tooling.</p>
    <div>
      <h2>Fusing the keys</h2>
      <a href="#fusing-the-keys">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PcDXfMPCbyZHbW06S9TQK/a1fbf46b9176e2291172387cda350829/eFuse-Key-Provisioning.png" />
            
            </figure><p>During system manufacturing, firmware is pre-programmed into storage ICs before placement on the motherboard. Note that the SCP EEPROM contains the SECPROV image, not standard SCP firmware. After a system is first powered on, an IPMI command is sent to the BMC which releases the Ampere processor from reset. This allows SECPROV firmware to run, burning the SoC eFuse with our public key hash and debug fuse settings.</p>
    <div>
      <h2>Final manufacturing flow</h2>
      <a href="#final-manufacturing-flow">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5APB65geJdVYfsHYJDeU5W/b9ce5c4abbf565525110c79852d53e87/Final-manufacturing-flow.png" />
            
            </figure><p>Once our public key has been provisioned, manufacturing proceeds by re-programming the SCP EEPROM with its regular firmware. Once the system powers on, ATF detects there are no keys present in secure variable storage and allows UEFI firmware to boot, regardless of signature. Since this is the first UEFI boot, it programs our public key into secure variable storage and reboots. ATF is validated by Ampere’s public key hash as usual. Since our public key is present in dbb, it is validated against our public key hash in eFuse and allows UEFI to boot.</p>
    <div>
      <h2>Validation</h2>
      <a href="#validation">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/QqpZGCJe52609SFewuSK0/bc0514db4f993630e9ec8b735c38d91e/Validation.png" />
            
            </figure><p>The first part of validation requires observing the successful programming of the eFuses. This imprints our public key hash into a dedicated, immutable memory region, so the hash cannot be overwritten. Upon automatic or manual issue of an IPMI OEM command to the BMC, the BMC observes a signal from the SECPROV firmware denoting eFuse programming completion. This can be probed with BMC commands.</p><p>When the eFuses have been blown, validation continues by observing the boot chain of the other firmware. Corruption of the SCP, ATF, or UEFI firmware breaks boot flow and boot authentication, and will cause the machine to fail to boot to the OS. Once firmware is in place, happy-path validation begins with booting the machine.</p><p>Upon first boot, firmware boots in the following order: BMC, SCP, ATF, and UEFI. The BMC, SCP, and ATF firmware can be observed via their respective serial consoles. The UEFI will automatically enroll the dbb and dbu files to the secure variable storage and trigger a reset of the system.</p><p>After observing the reset, the machine should successfully boot to the OS if the feature is executed correctly. For further validation, we can use the UEFI shell environment to extract the dbb file and compare its hash against the hash submitted to Ampere. After successfully validating the keys, we flash an unsigned UEFI image. An unsigned UEFI image causes an authentication failure at bootloader stage BL3-2, and the ATF firmware enters a boot loop as a result. Similar results occur for a UEFI image signed with incorrect keys.</p>
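<p>This negative test can be modeled with a short sketch. The key material and the keyed hash standing in for real certificate verification are invented for illustration; the point is that an image that does not verify under the dbb key fails authentication, so the machine never reaches the OS.</p>

```python
import hashlib

def authenticate_bl33(dbb: bytes, efuse_dbb_hash: bytes,
                      bl33_image: bytes, bl33_tag: bytes) -> bool:
    """Toy model of the BL2 check exercised during validation: the dbb
    key must match the eFuse hash, and the UEFI image must carry a valid
    tag under that key (a keyed hash stands in for certificate checks)."""
    if hashlib.sha256(dbb).digest() != efuse_dbb_hash:
        return False  # dbb in secure variable storage was tampered with
    return hashlib.sha256(dbb + bl33_image).digest() == bl33_tag

dbb = b"cloudflare-signing-public-key"  # hypothetical key material
efuse = hashlib.sha256(dbb).digest()
uefi = b"signed-uefi-image"
tag = hashlib.sha256(dbb + uefi).digest()

assert authenticate_bl33(dbb, efuse, uefi, tag)                   # boots to OS
assert not authenticate_bl33(dbb, efuse, b"unsigned-image", tag)  # boot loop
```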
    <div>
      <h2>Updated authentication flow</h2>
      <a href="#updated-authentication-flow">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AsINoXMVNcl9qeYT3mwgz/04850535a1613572e174fec5f40003a0/UEFI-Authentication.png" />
            
            </figure><p>On all subsequent boot cycles, the ATF will read secure variable dbb (our public key), compute a hash of the key, and compare it to the read-only Cloudflare public key hash in eFuse. If the computed and eFuse hashes match, our public key variable can be trusted and is used to authenticate the signed UEFI. After this, the system boots to the OS.</p>
    <div>
      <h2>Let’s boot!</h2>
      <a href="#lets-boot">
        
      </a>
    </div>
    <div></div>
<p></p><p>Because we set the eFuses at build time, we were unable to get a machine without the feature enabled to demonstrate its setup, but we can demonstrate what it looks like to go between an unsigned BIOS and a signed BIOS. During setup, we would have observed a custom BMC command instructing the SCP to burn the ROTPK into the SoC’s OTP fuses, followed by feedback to the BMC detailing whether burning the fuses was successful. Upon booting the UEFI image for the first time, the UEFI writes the dbb and dbu into secure storage.</p><p>As you can see, after flashing the unsigned BIOS, the machine fails to boot.</p><p>Despite the lack of visibility in the failure to boot, there are a few things going on under the hood. The SCP (System Control Processor) still boots.</p><ol><li><p>The SCP image holds a key certificate with Ampere’s generated ROTPK and the SCP key hash. The SCP calculates the ROTPK hash and compares it against the burned OTP fuses. In the failure case, where the hash does not match, you will observe a failure as you saw earlier. If successful, the SCP firmware proceeds to boot the PMpro and SMpro. Both the PMpro and SMpro firmware will be verified and proceed with the ATF authentication flow.</p></li><li><p>The conclusion of the SCP authentication is the passing of the BL1 key to the first-stage bootloader via the SCP HOB (hand-off block) to proceed with the standard three-stage bootloader ATF authentication mentioned previously.</p></li><li><p>At BL2, the dbb is read out of secure variable storage and used to authenticate the BL33 certificate, completing the boot process by booting the BL33 UEFI image.</p></li></ol><div></div>
<p></p>
    <div>
      <h2>Still more to do</h2>
      <a href="#still-more-to-do">
        
      </a>
    </div>
    <p>In recent years, management interfaces on servers, like the BMC, have been the target of cyber attacks including ransomware, implants, and disruptive operations. Access to the BMC can be local or remote. With remote vectors open, there is potential for malware to be installed on the BMC via network interfaces. With compromised software on the BMC, malware or spyware could maintain persistence on the server. An attacker might be able to update the BMC directly using flashing tools such as flashrom or socflash, without the same level of firmware resilience established at the UEFI level.</p><p>The future state involves using host-CPU-agnostic infrastructure to enable a cryptographically secure host prior to boot time. We will look to incorporate a modular approach proposed by the Open Compute Project’s Data Center Secure Control Module (DC-SCM) 2.0 <a href="https://drive.google.com/file/d/13BxuseSrKo647hjIXjp087ei8l5QQVb0/view">specification</a>. This will allow us to standardize our root of trust, sign our BMC, and assign physically unclonable function (PUF)-based identity keys to components and peripherals to limit the use of OTP fusing. OTP fusing creates a problem when trying to “e-cycle” or reuse machines, as you cannot truly remove a machine’s identity.</p>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <guid isPermaLink="false">1N9Xj1EVgCAmk3Sv1HVHQR</guid>
            <dc:creator>Derek Chamorro</dc:creator>
            <dc:creator>Ryan Chow</dc:creator>
        </item>
        <item>
            <title><![CDATA[Helping build the next generation of privacy-preserving protocols]]></title>
            <link>https://blog.cloudflare.com/next-generation-privacy-protocols/</link>
            <pubDate>Tue, 08 Dec 2020 12:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re making several announcements around improving Internet protocols with respect to something important to our customers and Internet users worldwide: privacy. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3KjEAqn2Lizr1zW42YzTU4/6492bcae03200a5c1688671ecc3b6291/Privacy-protocols-2.png" />
            
            </figure><p>Over the last ten years, Cloudflare has become an important part of Internet infrastructure, powering websites, APIs, and web services to help make them more secure and efficient. The Internet is growing in terms of its capacity and the number of people using it and evolving in terms of its design and functionality. As a player in the Internet ecosystem, Cloudflare has a responsibility to help the Internet grow in a way that respects and provides value for its users. Today, we’re making several announcements around improving Internet protocols with respect to something important to our customers and Internet users worldwide: privacy.</p><p>These initiatives are:</p><ul><li><p>Fixing one of the last information leaks in HTTPS through <a href="/encrypted-client-hello"><b>Encrypted Client Hello (ECH)</b></a><b>,</b> previously known as <a href="/encrypted-sni/">Encrypted SNI</a></p></li><li><p>Making <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS</a> even more private by supporting <a href="/oblivious-dns"><b>Oblivious DNS-over-HTTPS (ODoH)</b></a></p></li><li><p>Developing a superior protocol for password authentication, <a href="/opaque-oblivious-passwords"><b>OPAQUE</b></a>, that makes password breaches less likely to occur</p></li></ul><p>Each of these projects impacts an aspect of the Internet that influences our online lives and digital footprints. Whether we know it or not, there is a lot of private information about us and our lives floating around online. This is something we can help fix.</p><p>For over a year, we have been working through standards bodies like the IETF and partnering with the biggest names in Internet technology (including Mozilla, Google, Equinix, and more) to design, deploy, and test these new privacy-preserving protocols at Internet scale. 
Each of these three protocols touches on a critical aspect of our online lives, and we expect them to help make real improvements to privacy online as they gain adoption.</p>
    <div>
      <h3>A continuing tradition at Cloudflare</h3>
      <a href="#a-continuing-tradition-at-cloudflare">
        
      </a>
    </div>
    <p>One of Cloudflare’s core missions is to support and develop technology that helps build a better Internet. As an industry, we’ve made exceptional progress in making the Internet more secure and robust. Cloudflare is proud to have played a part in this progress through multiple initiatives over the years.</p><p>Here are a few highlights:</p><ul><li><p><a href="/introducing-universal-ssl/"><b>Universal SSL</b></a>™. We’ve been one of the driving forces for encrypting the web. We launched Universal SSL in 2014 to give website encryption to our customers for free and have been actively working with certificate authorities like Let’s Encrypt, web browsers, and website operators to help remove <a href="/tag/mixed-content-errors/">mixed content</a>. Before Universal SSL launched to give all Cloudflare customers HTTPS for free, only 30% of connections to websites were encrypted. Through the industry’s efforts, that number is now <a href="https://letsencrypt.org/stats/">80%</a>, and a much more significant proportion of overall Internet traffic. Along with doing our part to encrypt the web, we have supported the Certificate Transparency project via <a href="/introducing-certificate-transparency-and-nimbus/">Nimbus</a> and <a href="https://ct.cloudflare.com/">Merkle Town</a>, which has improved accountability for the certificate ecosystem HTTPS relies on for trust.</p></li><li><p><b>TLS 1.3 and QUIC</b>. We’ve also been a proponent of upgrading existing security protocols. Take Transport Layer Security (TLS), the underlying protocol that secures HTTPS. Cloudflare engineers helped contribute to the design of TLS 1.3, the latest version of the standard, and <a href="/introducing-tls-1-3/">in 2016</a> we launched support for an early version of the protocol. This early deployment helped lead to improvements to the final version of the protocol. 
TLS 1.3 is now the most widely used encryption protocol on the web and a vital component of the <a href="/last-call-for-quic/">emerging QUIC standard</a>, of which we were also early adopters.</p></li><li><p><b>Securing Routing, Naming, and Time</b>. We’ve made major efforts to help secure other critical components of the Internet. Our efforts to help secure Internet routing through our <a href="/cloudflares-rpki-toolkit/">RPKI toolkit</a>, <a href="https://conferences.sigcomm.org/imc/2019/presentations/p221.pdf">measurement studies</a>, and “<a href="/is-bgp-safe-yet-rpki-routing-security-initiative/">Is BGP Safe Yet</a>” tool have significantly improved the Internet’s resilience against disruptive route leaks. Our time service (<a href="/secure-time/">time.cloudflare.com</a>) has helped keep people’s clocks in sync with more secure protocols like <a href="/nts-is-now-rfc/">NTS</a> and <a href="/roughtime/">Roughtime</a>. We’ve also made DNS more secure by supporting <a href="/dns-encryption-explained/">DNS-over-HTTPS and DNS-over-TLS</a> in 1.1.1.1 at launch, along with one-click DNSSEC in our <a href="/introducing-universal-dnssec/">authoritative DNS</a> service and <a href="/one-click-dnssec-with-cloudflare-registrar/">registrar</a>.</p></li></ul><p>Continuing to improve the security of the systems of trust online is critical to the Internet’s growth. However, there is a more fundamental principle at play: respect. The infrastructure underlying the Internet should be designed to respect its users.</p>
    <div>
      <h3>Building an Internet that respects users</h3>
      <a href="#building-an-internet-that-respects-users">
        
      </a>
    </div>
    <p>When you sign in to a specific website or service with a privacy policy, you know what that site is expected to do with your data. It’s explicit. Users have no such visibility into the operators of the Internet itself. You may have an agreement with your Internet Service Provider (ISP) and the site you’re visiting, but it’s doubtful that you even know which <a href="http://www.washingtonpost.com/graphics/national/security-of-the-internet/bgp">networks your data is traversing</a>. Most people don’t have a concept of the Internet beyond what they see on their screen, so it’s hard to imagine that people would accept, or even understand, what a privacy policy from a <a href="/the-relative-cost-of-bandwidth-around-the-world/">transit wholesaler</a> or an <a href="https://us-cert.cisa.gov/ncas/alerts/TA17-075A">inspection middlebox</a> would mean.</p><p>Without encryption, Internet browsing information is implicitly shared with countless third parties online as information passes between networks. Without secure routing, users’ traffic can be hijacked and disrupted. Without privacy-preserving protocols, users’ online life is not as private as they would think or expect. The infrastructure of the Internet wasn’t built in a way that reflects their expectations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7rjHEqRERPxkcFeoNRwfAX/37548cf8be78a4849c9a188c076ca483/image3.png" />
            
            </figure><p>Normal network flow</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76hMzZSOArtOdRiTWCeXqH/b2063a3b7aef6e410f30efcbc242f4b6/image1-7.png" />
            
            </figure><p>Network flow with malicious route leak</p><p>The good news is that the Internet is continuously evolving. One of the groups that help guide that evolution is the Internet Architecture Board (IAB). The IAB provides architectural oversight to the Internet Engineering Task Force (IETF), the Internet’s main standard-setting body. The IAB recently published <a href="https://www.rfc-editor.org/rfc/rfc8890.html">RFC 8890</a>, which states that individual end-users should be prioritized when designing Internet protocols. It says that if there’s a conflict between the interests of end-users and the interest of service providers, corporations, or governments, IETF decisions should favor end users. One of the prime interests of end-users is the right to privacy, and the IAB published <a href="https://tools.ietf.org/html/rfc6973">RFC 6973</a> to indicate how Internet protocols should take privacy into account.</p><p>Today’s technical blog posts are about <b>improvements to the Internet designed to respect user privacy</b>. Privacy is a complex topic that spans multiple disciplines, so it’s essential to clarify what we mean by “improving privacy.” We are specifically talking about changing the protocols that handle privacy-sensitive information exposed “on-the-wire” and modifying them so that this data is exposed to fewer parties. This data continues to exist. It’s just no longer available or visible to third parties without building a mechanism to collect it at a higher layer of the Internet stack, the application layer. <i>These changes go beyond website encryption</i>; they go deep into the design of the systems that are foundational to making the Internet what it is.</p>
    <div>
      <h3>The toolbox: cryptography and secure proxies</h3>
      <a href="#the-toolbox-cryptography-and-secure-proxies">
        
      </a>
    </div>
    <p>Two tools for making sure data can be used without being seen are <i>cryptography</i> and <i>secure proxies</i>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71bC5CqEyrYCZ0RpSJGbHI/922ebc973778951111a1a1881b978e71/Cryptography-and-Secure-Proxies.png" />
            
            </figure><p>Cryptography allows information to be transformed into a format that a very limited number of people (those with the key) can understand. Some describe cryptography as a tool that transforms data security problems into key management problems. This is a humorous but fair description. Cryptography makes it easier to reason about privacy because only key holders can view data.</p><p>Another tool for protecting access to data is isolation/segmentation. By physically limiting which parties have access to information, you effectively build privacy walls. A popular architecture is to rely on policy-aware proxies to pass data from one place to another. Such proxies can be configured to strip sensitive data or block data transfers between parties according to what the privacy policy says.</p><p>Both these tools are useful individually, but they can be even more effective if combined. Onion routing (the cryptographic technique <a href="/cloudflare-onion-service/">underlying Tor</a>) is one example of how proxies and encryption can be used in tandem to enforce strong privacy. Broadly, if party A wants to send data to party B, they can encrypt the data with party B’s key and encrypt the metadata with a proxy’s key and send it to the proxy.</p><p>Platforms and services built on top of the Internet can build in consent systems, like privacy policies presented through user interfaces. The infrastructure of the Internet relies on layers of underlying protocols. Because these layers of the Internet are so far below where the user interacts with them, it’s almost impossible to build a concept of user consent. In order to respect users and protect them from privacy issues, the protocols that glue the Internet together should be designed with privacy enabled by default.</p>
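<p>As a toy illustration of that layered pattern, the sketch below (Python, standard library only; the one-time-pad XOR stands in for real public-key encryption, and all names are hypothetical) shows how a proxy can learn where to forward a message without being able to read its payload:</p>

```python
import secrets

def otp(key: bytes, msg: bytes) -> bytes:
    # Toy one-time pad (XOR). Stands in for real public-key encryption;
    # XOR is its own inverse, so this both encrypts and decrypts.
    assert len(key) >= len(msg)
    return bytes(k ^ m for k, m in zip(key, msg))

# In a real system these would be B's and the proxy's public keys.
key_b = secrets.token_bytes(64)
key_proxy = secrets.token_bytes(64)

payload = b"hello, B"        # content: only party B may read this
metadata = b"deliver-to:B"   # routing info: only the proxy needs this

# Party A encrypts each piece to the party that needs it.
inner = otp(key_b, payload)
outer = otp(key_proxy, metadata)

# The proxy recovers only the routing layer; the payload stays opaque to it.
assert otp(key_proxy, outer) == metadata

# Party B recovers the payload.
assert otp(key_b, inner) == payload
```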
    <div>
      <h3>Data vs. metadata</h3>
      <a href="#data-vs-metadata">
        
      </a>
    </div>
    <p>The transition from a mostly unencrypted web to an encrypted web has done a lot for end-user privacy. For example, the “<a href="https://codebutler.com/2010/10/24/firesheep/">coffeeshop stalker</a>” is no longer an issue for most sites. When accessing the majority of sites online, users are no longer broadcasting every aspect of their web browsing experience (search queries, browser versions, authentication cookies, etc.) over the Internet for any participant on the path to see. If a site is configured correctly to use HTTPS, users can be confident their data is secure from onlookers and reaches only the intended party, because their connections are both encrypted and authenticated.</p><p>However, HTTPS only protects the <i>content</i> of web requests. Even if you only browse sites over HTTPS, that doesn’t mean that your <i>browsing patterns</i> are private. This is because HTTPS fails to encrypt a critical aspect of the exchange: the metadata. When you make a phone call, the metadata is the phone number, not the call’s contents. Metadata is the data about the data.</p><p>To illustrate the difference and why it matters, here’s a diagram of what happens when you visit a website like an imageboard. Say you’re going to a specific page on that board (<a href="https://images.com/room101/">https://images.com/room101/</a>) that has embedded images hosted on a second, potentially embarrassing, site.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WDSyOoFNRtXcGj5XXQC9p/b1c9ce791e8d84798b93782c97703c37/image5-2.png" />
            
            </figure><p>Page load for an imageboard, returning an HTML page with an image from an embarrassing site</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DvbyOK8cIcvblLUmPsQCl/c994c812f99f917e5ae4c86898da827c/image4.png" />
            
            </figure><p>Subresource fetch for the image from an embarrassing site</p><p>The space inside the dotted line here represents the part of the Internet that your data needs to transit. These include your local area network or coffee shop, your ISP, an Internet transit provider, and the network portion of the cloud provider that hosts the server. Users often don’t have a relationship with these entities or a contract to prevent these parties from doing anything with the user’s data. And even if those entities don’t look at the data, a well-placed observer intercepting Internet traffic could see anything sent unencrypted. It would be best if they just didn’t see it at all. In this example, the fact that the user visited the imageboard can be seen by an observer, which is expected. However, though page content is encrypted, it’s possible to learn <i>which specific page you’ve visited</i>, since the hostname of the site serving the embedded image is also visible.</p><p>It’s a general rule that if data is available to on-path parties on the Internet, some of these on-path parties will use this data. It’s also true that these on-path parties need some metadata in order to facilitate the transport of this data. This balance is explored in <a href="https://www.rfc-editor.org/rfc/rfc8558.html">RFC 8558</a>, which explains how protocols should be designed thoughtfully with respect to the balance between too much metadata (bad for privacy) and too little metadata (bad for operations).</p><p>In an ideal world, Internet protocols would be designed with the principle of least privilege. They would provide the minimum amount of information needed for the on-path parties (the pipes) to do the job of transporting the data to the right place and keep everything else confidential by default. Current protocols, including TLS 1.3 and QUIC, are important steps towards this ideal but fall short with respect to metadata privacy.</p>
    <div>
      <h3>Knowing both who you are and what you do online can lead to profiling</h3>
      <a href="#knowing-both-who-you-are-and-what-you-do-online-can-lead-to-profiling">
        
      </a>
    </div>
    <p>Today’s announcements reflect two metadata protection levels: the first involves limiting the amount of metadata available to third-party observers (like ISPs). The second involves restricting the amount of metadata that users share with service providers themselves.</p><p>Hostnames are an example of metadata that needs to be protected from third-party observers, which DoH and ECH intend to do. However, it doesn’t make sense to hide the hostname from the site you’re visiting. It also doesn’t make sense to hide it from a directory service like DNS. A DNS server needs to know which hostname you’re resolving to resolve it for you!</p><p>A privacy issue arises when a service provider knows about both what sites you’re visiting and who you are. Individual websites do not have this dangerous combination of information (except in the case of third party cookies, which <a href="https://www.cnbc.com/2020/01/14/google-chrome-to-end-support-for-third-party-cookies-within-two-years.html">are going away soon in browsers</a>), but DNS providers do. Thankfully, it’s not actually necessary for a DNS resolver to know <i>both</i> the hostname of the service you’re going to and which IP you’re coming from. Disentangling the two, which is the goal of ODoH, is good for privacy.</p>
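<p>The ODoH split can be sketched as a toy model. In the sketch below (Python; the XOR keystream stands in for the HPKE public-key encryption that real ODoH uses, and all names and addresses are illustrative), the proxy observes who is asking but only sees ciphertext, while the resolver observes the query but only the proxy’s address:</p>

```python
import hashlib
import json
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Expand the key into n bytes (counter mode over SHA-256).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, obj) -> bytes:
    msg = json.dumps(obj).encode()
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

def decrypt(key: bytes, ct: bytes):
    return json.loads(bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct)))))

resolver_key = secrets.token_bytes(32)   # stands in for the resolver's public key
client_ip = "203.0.113.7"
query = {"hostname": "example.com"}

# The proxy forwards ciphertext: it learns the client IP but not the hostname.
proxy_sees = {"source_ip": client_ip, "payload": encrypt(resolver_key, query)}

# The resolver decrypts the query, but the connection arrives from the proxy.
resolver_sees = {"source_ip": "proxy.example",
                 "query": decrypt(resolver_key, proxy_sees["payload"])}

assert resolver_sees["query"] == {"hostname": "example.com"}
assert client_ip not in str(resolver_sees)   # the resolver never learns the client IP
```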
    <div>
      <h3>The Internet is part of 'our' Infrastructure</h3>
      <a href="#the-internet-is-part-of-our-infrastructure">
        
      </a>
    </div>
    <p>Roads should be well-paved, well lit, have accurate signage, and be optimally connected. They aren't designed to stop a car based on who's inside it. Nor should they be! Like transportation infrastructure, Internet infrastructure is responsible for getting data where it needs to go, not looking inside packets, and making judgments. But the Internet is made of computers and software, and software tends to be written to make decisions based on the data it has available to it.</p><p>Privacy-preserving protocols attempt to eliminate the temptation for infrastructure providers and others to peek inside and make decisions based on personal data. A non-privacy preserving protocol like HTTP keeps data and metadata, like passwords, IP addresses, and hostnames, as explicit parts of the data sent over the wire. The fact that they are explicit means that they are available to any observer to collect and act on. A protocol like HTTPS improves upon this by making some of the data (such as passwords and site content) invisible on the wire using encryption.</p><p>The three protocols we are exploring today extend this concept.</p><ul><li><p><b>ECH</b> takes most of the unencrypted metadata in TLS (including the hostname) and encrypts it with a key that was fetched ahead of time.</p></li><li><p><b>ODoH</b> (a new variant of DoH co-designed by Apple, Cloudflare, and Fastly engineers) uses proxies and onion-like encryption to make the source of a DNS query invisible to the DNS resolver. This protects the user’s IP address when resolving hostnames.</p></li><li><p><b>OPAQUE</b> uses a new cryptographic technique to keep passwords hidden <b><i>even from the server</i></b>. 
Utilizing a construction called an Oblivious Pseudo-Random Function (as seen in <a href="/privacy-pass-the-math/">Privacy Pass</a>), the server does not learn the password; it only learns whether or not the user knows the password.</p></li></ul><p>By making sure Internet infrastructure acts more like physical infrastructure, user privacy is more easily protected. The Internet is more private if private data can only be collected where the user has a chance to consent to its collection.</p>
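<p>To make the OPRF idea concrete, here is a minimal sketch in Python over a deliberately tiny prime-order group (p = 23; real deployments use elliptic-curve groups, and this illustrates only the blind/evaluate/unblind flow, not OPAQUE itself). The server sees just a blinded group element, yet the client recovers a stable output tied to the server’s key:</p>

```python
import hashlib
import secrets

# Toy prime-order group: p = 23 is a safe prime (p = 2q + 1 with q = 11),
# and g = 4 generates the subgroup of order q. Illustration only.
p, q, g = 23, 11, 4

def hash_to_group(password: bytes) -> int:
    # Map a password to a group element (never the identity).
    e = int.from_bytes(hashlib.sha256(password).digest(), "big") % (q - 1) + 1
    return pow(g, e, p)

def oprf(password: bytes, server_key: int) -> int:
    h = hash_to_group(password)
    r = secrets.randbelow(q - 1) + 1          # client's fresh blinding factor
    blinded = pow(h, r, p)                    # sent to the server: hides the password
    evaluated = pow(blinded, server_key, p)   # server applies its secret key
    r_inv = pow(r, -1, q)                     # client removes the blinding
    return pow(evaluated, r_inv, p)           # equals h^server_key, independent of r

server_key = secrets.randbelow(q - 1) + 1
out1 = oprf(b"hunter2", server_key)
out2 = oprf(b"hunter2", server_key)
assert out1 == out2                                           # stable across blindings
assert out1 == pow(hash_to_group(b"hunter2"), server_key, p)  # matches direct evaluation
```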
    <div>
      <h3>Doing it together</h3>
      <a href="#doing-it-together">
        
      </a>
    </div>
    <p>As much as we’re excited about working on new ways to make the Internet more private, innovation at a global scale doesn’t happen in a vacuum. Each of these projects is the output of a collaborative group of individuals working out in the open in organizations like the IETF and the IRTF. Protocols must come about through a consensus process that involves all the parties that make up the interconnected set of systems that power the Internet. From browser builders to cryptographers, from DNS operators to website administrators, this is truly a global team effort.</p><p>We also recognize that sweeping technical changes to the Internet will inevitably also impact the technical community. Adopting these new protocols may have legal and policy implications. We are actively working with governments and civil society groups to help educate them about the impact of these potential changes.</p><p>We’re looking forward to sharing our work today and hope that more interested parties join in developing these protocols. The projects we are announcing today were designed by experts from academia, industry, and hobbyists together and were built by engineers from Cloudflare Research (including the work of interns, which we will highlight) with support from across Cloudflare.</p><p>If you’re interested in this type of work, <a href="https://www.cloudflare.com/careers/jobs/">we’re hiring</a>!</p> ]]></content:encoded>
            <category><![CDATA[Privacy Week]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[DoH]]></category>
            <category><![CDATA[Authentication]]></category>
            <category><![CDATA[Passwords]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Encrypted SNI]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">6Npild5sJTVfGo3GttHrTd</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Anchoring Trust: A Hardware Secure Boot Story]]></title>
            <link>https://blog.cloudflare.com/anchoring-trust-a-hardware-secure-boot-story/</link>
            <pubDate>Tue, 17 Nov 2020 12:00:00 GMT</pubDate>
            <description><![CDATA[ As a security company, we pride ourselves on finding innovative ways to protect our platform to, in turn, protect the data of our customers. Part of this approach is implementing progressive methods in protecting our hardware at scale. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/15nAX8kXKg4gz7jDCecogD/f2340b560358c5d4c25f5ff7fbb77325/anchor2-2-1.png" />
            
            </figure><p>As a security company, we pride ourselves on finding innovative ways to protect our platform to, in turn, protect the data of our customers. Part of this approach is implementing progressive methods in protecting our hardware at scale. While we have blogged about how we address security threats from <a href="/mitigating-spectre-and-other-security-threats-the-cloudflare-workers-security-model/">application</a> to <a href="/securing-memory-at-epyc-scale/">memory</a>, the attacks on hardware, as well as firmware, have increased substantially. The data cataloged in the <a href="https://nvd.nist.gov/">National Vulnerability Database (NVD)</a> has shown the frequency of hardware and firmware-level vulnerabilities rising year after year.</p><p>Technologies like <a href="https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-secure-boot">secure boot</a>, common in desktops and laptops, have been ported over to the server industry as a method to combat firmware-level attacks and protect a device’s boot integrity. These technologies require that you create a trust ‘anchor’, an authoritative entity for which trust is assumed and not derived. A common trust anchor is the system <a href="https://en.wikipedia.org/wiki/BIOS">Basic Input/Output System (BIOS)</a> or the <a href="https://www.uefi.org/">Unified Extensible Firmware Interface (UEFI</a>) firmware.</p><p>While this ensures that the device boots only signed firmware and operating system bootloaders, does it protect the entire boot process? What protects the BIOS/UEFI firmware from attacks?</p>
    <div>
      <h2>The Boot Process</h2>
      <a href="#the-boot-process">
        
      </a>
    </div>
    <p>Before we discuss how we secure our boot process, we will first go over how we boot our machines.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ubWFPurTNwYnbXL8EnVpl/bd8c93e0e7acedf04b2787816be51e6d/image1-7.png" />
            
            </figure><p>The above image shows the following sequence of events:</p><ul><li><p>After powering on the system (through a <a href="https://www.gigabyte.com/Glossary/bmc">baseboard management controller (BMC)</a> or physically pushing a button on the system), the system unconditionally executes the UEFI firmware residing on a flash chip on the motherboard.</p></li><li><p>UEFI performs some hardware and peripheral initialization and executes the <a href="https://wiki.debian.org/PXEBootInstall">Preboot Execution Environment (PXE)</a> code, which is a small program that boots an image over the network and usually resides on a flash chip on the network card.</p></li><li><p>PXE sets up the network card, and downloads and executes a small bootloader program through an open source boot firmware called <a href="https://ipxe.org/">iPXE</a>.</p></li><li><p>iPXE loads a script that automates a sequence of commands for the bootloader to know how to boot a specific operating system (sometimes several of them). In our case, it loads our Linux kernel, <code>initrd</code> (this contains device drivers which are not directly compiled into the kernel), and a standard Linux root filesystem. After loading these components, the bootloader executes and hands off control to the kernel.</p></li><li><p>Finally, the Linux kernel loads any additional drivers it needs and starts applications and services.</p></li></ul>
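<p>For context, an iPXE boot script of the kind described above looks roughly like the following (the URLs and filenames here are hypothetical placeholders, not Cloudflare’s actual configuration):</p>

```
#!ipxe
# Bring up the NIC and acquire an address via DHCP
dhcp
# Fetch the kernel and initrd over HTTP, then hand off control
kernel http://boot.example.internal/vmlinuz console=ttyS0
initrd http://boot.example.internal/initrd.img
boot
```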
    <div>
      <h2>UEFI Secure Boot</h2>
      <a href="#uefi-secure-boot">
        
      </a>
    </div>
    <p>Our UEFI secure boot process is fairly straightforward, albeit customized for our environments. After loading the UEFI firmware from the bootloader, an initialization script defines the following variables:</p><p><b>Platform Key (PK):</b> It serves as the cryptographic root of trust for secure boot, giving capabilities to manipulate and/or validate the other components of the <a href="https://blog.hansenpartnership.com/the-meaning-of-all-the-uefi-keys/">secure boot framework</a>.</p><p><b>Trusted Database (DB):</b> Contains a signed (by platform key) list of hashes of all PCI option ROMs, as well as a public key, which is used to verify the signature of the bootloader and the kernel on boot.</p><p>These variables are respectively the master platform public key, which is used to sign all other resources, and an allow list database, containing other certificates, binary file hashes, etc. In standard secure boot scenarios, Microsoft keys are used by default. At Cloudflare we use our own, which makes us the root of trust for UEFI:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/12zsC1QVrT8uIGMG0K6ss8/92c1d1cdaf9348b03794be40943c81f8/image7.png" />
            
            </figure><p>But, by setting our trust anchor in the UEFI firmware, what attack vectors still exist?</p>
    <div>
      <h2>UEFI Attacks</h2>
      <a href="#uefi-attacks">
        
      </a>
    </div>
    <p>As stated previously, firmware and hardware attacks are on the rise. It is clear from the figure below that firmware-related vulnerabilities have increased significantly over the last 10 years, especially since 2017, when the hacker community started attacking the firmware on different platforms:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7gyyxjhnApSBYfZEdqglcS/d863b610573c1ca0b497b1b3e1ebf2cc/image14.png" />
            
            </figure><p>This upward trend, coupled with <a href="https://arstechnica.com/information-technology/2020/10/custom-made-uefi-bootkit-found-lurking-in-the-wild/">recent malware findings in UEFI</a>, shows that trusting firmware is becoming increasingly problematic.</p><p>By tainting the UEFI firmware image, you poison the entire boot trust chain. The ability to trust firmware integrity is important beyond secure boot. For example, if you can't trust the firmware not to be compromised, you can't trust things like <a href="https://docs.microsoft.com/en-us/windows/security/information-protection/tpm/trusted-platform-module-overview">trusted platform module (TPM) measurements</a> to be accurate, because the firmware is itself responsible for doing these measurements (e.g., a TPM is not an on-path security mechanism; it requires the firmware to interact and cooperate with it). Firmware may be crafted to extend measurements that are accepted by a remote attestor, but that don't represent what's being locally loaded. This could render both measured boot and remote attestation untrustworthy.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3hTmUFGH3CZwcPD6CXmASh/4a26d01153fb10cd9b11d461be513005/image11.png" />
            
            </figure><p>If we can’t trust firmware, then hardware becomes our last line of defense.</p>
    <div>
      <h2>Hardware Root of Trust</h2>
      <a href="#hardware-root-of-trust">
        
      </a>
    </div>
    <p>Early this year, we made a series of blog posts on <a href="/technical-details-of-why-cloudflare-chose-amd-epyc-for-gen-x-servers/">why we chose AMD EPYC processors</a> for our Gen X servers. With security in mind, we started <a href="/securing-memory-at-epyc-scale/">turning on</a> features that were available to us and set forth the plan of using AMD silicon as a Hardware Root of Trust (HRoT).</p><p><a href="https://www.amd.com/system/files/2017-06/Trusting-in-the-CPU.pdf">Platform Secure Boot</a> (PSB) is AMD’s implementation of hardware-rooted boot integrity. Why is it better than UEFI firmware-based root of trust? Because it is intended to assert, by a root of trust anchored in the hardware, the integrity and authenticity of the System ROM image before it can execute. It does so by performing the following actions:</p><ul><li><p>Authenticates the first block of BIOS/UEFI prior to releasing x86 CPUs from reset.</p></li><li><p>Authenticates the System Read-Only Memory (ROM) contents on each boot, not just during updates.</p></li><li><p>Moves the UEFI Secure Boot trust chain to immutable hardware.</p></li></ul><p>This is accomplished by the AMD Platform Security Processor (PSP), an ARM Cortex-A5 microcontroller that is an immutable part of the system on chip (SoC). The PSB consists of two components:</p><p><b>On-chip Boot ROM</b></p><ul><li><p>Embeds a SHA384 hash of an AMD root signing key</p></li><li><p>Verifies and then loads the off-chip PSP bootloader located in the boot flash</p></li></ul><p><b>Off-chip Bootloader</b></p><ul><li><p>Locates the PSP directory table that allows the PSP to find and load various images</p></li><li><p>Authenticates first block of BIOS/UEFI code</p></li><li><p>Releases CPUs after successful authentication</p></li></ul>
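<p>The on-chip check can be modeled in a few lines. The sketch below (Python; the key material and names are random stand-ins, and the real PSP verifies RSA signatures rather than comparing raw keys) captures the essential property: the fused SHA-384 hash pins which root public key the boot ROM will ever accept:</p>

```python
import hashlib
import secrets

# Toy model of PSB's on-chip check. The boot ROM does not store the AMD root
# public key itself, only a SHA-384 hash of it burned into one-time fuses;
# the key bytes here are a random stand-in for illustration.
amd_root_pubkey = secrets.token_bytes(512)
FUSED_ROOT_KEY_HASH = hashlib.sha384(amd_root_pubkey).digest()

def boot_rom_accepts(presented_key: bytes) -> bool:
    # Accept a root key only if it hashes to the immutable fused value.
    return hashlib.sha384(presented_key).digest() == FUSED_ROOT_KEY_HASH

assert boot_rom_accepts(amd_root_pubkey)                  # genuine key: boot proceeds
tampered = bytes([amd_root_pubkey[0] ^ 1]) + amd_root_pubkey[1:]
assert not boot_rom_accepts(tampered)                     # one flipped bit: halt
```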
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ABw5l5aHps4VcM0wHACDq/e0eaaf60f0ffee8317d88d049f3c594c/image4.gif" />
            
            </figure><ol><li><p>The PSP secures the On-chip Boot ROM code, loads the off-chip PSP firmware into PSP static random access memory (SRAM) after authenticating the firmware, and passes control to it.</p></li><li><p>The Off-chip Bootloader (BL) loads and specifies applications in a specific order (whether or not the system goes into a debug state and then a secure EFI application binary interface to the BL)</p></li><li><p>The system continues initialization through each bootloader stage.</p></li><li><p>If each stage passes, then the UEFI image is loaded and the x86 cores are released.</p></li></ol><p>Now that we know the booting steps, let’s build an image.</p>
    <div>
      <h2>Build Process</h2>
      <a href="#build-process">
        
      </a>
    </div>
    
    <div>
      <h3>Public Key Infrastructure</h3>
      <a href="#public-key-infrastructure">
        
      </a>
    </div>
    <p>Before the image gets built, a public key infrastructure (PKI) is created to generate the key pairs involved for signing and validating signatures:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7CGqAcYMPxBE9kv0ZcjGwL/0ffb73c3d6b1353b0291b1736ff0f599/image10.png" />
            
            </figure><p>Our original design manufacturer (ODM), as a trust extension, creates a key pair (public and private) that is used to sign the first segment of the BIOS (private key) and validate that segment on boot (public key).</p><p>On AMD’s side, they have a key pair that is used to sign (the AMD root signing private key) and certify the public key created by the ODM. This is validated by AMD’s root signing public key, which is stored as a hash value (<a href="https://tools.ietf.org/html/rfc5756">RSASSA-PSS</a> with a 4096-bit key; SHA-384 is used as the hash for both the message digest and mask generation) in <a href="https://en.wikipedia.org/wiki/Serial_Peripheral_Interface">SPI-ROM</a>.</p><p>Private keys (both AMD and ODM) are stored in <a href="https://en.wikipedia.org/wiki/Hardware_security_module">hardware security modules</a>.</p><p>Because of the way the PKI mechanisms are built, the system cannot be compromised if only one of the keys is leaked. This is an important piece of the trust hierarchy that is used for image signing.</p>
    <div>
      <h3>Certificate Signing Request</h3>
      <a href="#certificate-signing-request">
        
      </a>
    </div>
    <p>Once the PKI infrastructure is established, a BIOS signing key pair is created, together with a certificate signing request (CSR). Creating the CSR uses known common name (CN) fields that many are familiar with:</p><ul><li><p><code>countryName</code></p></li><li><p><code>stateOrProvinceName</code></p></li><li><p><code>localityName</code></p></li><li><p><code>organizationName</code></p></li></ul><p>In addition to the fields above, the CSR will contain a <code>serialNumber</code> field, a 32-bit integer value represented in ASCII HEX format that encodes the following values:</p><ul><li><p><code>PLATFORM_VENDOR_ID</code>: An 8-bit integer value assigned by AMD for each ODM.</p></li><li><p><code>PLATFORM_MODEL_ID</code>: A 4-bit integer value assigned to a platform by the ODM.</p></li><li><p><code>BIOS_KEY_REVISION_ID</code>: Set by the ODM, encoding a 4-bit key revision as a unary counter value.</p></li><li><p><code>DISABLE_SECURE_DEBUG</code>: Fuse bit that controls whether the secure debug unlock feature is disabled permanently.</p></li><li><p><code>DISABLE_AMD_BIOS_KEY_USE</code>: Fuse bit that controls whether a BIOS signed by an AMD key (with <code>vendor ID == 0</code>) is permitted to boot on a CPU with a non-zero vendor ID.</p></li><li><p><code>DISABLE_BIOS_KEY_ANTI_ROLLBACK</code>: Fuse bit that controls whether the BIOS key anti-rollback feature is enabled.</p></li></ul>
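<p>To illustrate how such a field can be encoded, here is a sketch of packing those values into a 32-bit integer rendered as ASCII hex (Python; the field widths come from the list above, but the bit positions chosen here are illustrative, not AMD’s actual layout):</p>

```python
def unary(n: int) -> int:
    # The key revision is a unary counter: revision 2 -> 0b0011, revision 3 -> 0b0111.
    return (1 << n) - 1

def pack_serial_number(vendor_id: int, model_id: int, key_revision_id: int,
                       disable_secure_debug: bool,
                       disable_amd_bios_key_use: bool,
                       disable_anti_rollback: bool) -> str:
    assert 0 <= vendor_id < 1 << 8        # 8-bit, assigned by AMD per ODM
    assert 0 <= model_id < 1 << 4         # 4-bit, assigned by the ODM
    assert 0 <= key_revision_id < 1 << 4  # 4-bit unary counter
    value = (vendor_id
             | model_id << 8
             | key_revision_id << 12
             | int(disable_secure_debug) << 16
             | int(disable_amd_bios_key_use) << 17
             | int(disable_anti_rollback) << 18)
    return f"{value:08X}"                 # 32-bit value as ASCII hex

print(pack_serial_number(0x42, 0x3, unary(1), True, False, True))  # -> 00051342
```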
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jqoVoEKDiRSoVt9MbdY0W/0f4d2d505e12ecef65a6b31648f223d1/image3-3.png" />
            
            </figure><p>Remember these values, as we’ll show how we use them in a bit. Any of the <code>DISABLE</code> values are optional, but recommended based on your security posture/comfort level.</p><p>AMD, upon processing the CSR, provides the public part of the BIOS signing key signed and certified by the AMD signing root key as an RSA Public Key Token file (<code>.stkn</code>) format.</p>
    <div>
      <h2>Putting It All Together</h2>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p>The following is a step-by-step illustration of how signed UEFI firmware is built:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/48teEpwD6WwiJEVvSql73e/203673a7e6265f4d799c2b31ec9f492a/image8.gif" />
            
            </figure><ol><li><p>The ODM submits their public key used for signing Cloudflare images to AMD.</p></li><li><p>AMD signs this key using their RSA private key and passes it back to the ODM.</p></li><li><p>The AMD public key and the signed ODM public key are part of the final BIOS SPI image.</p></li><li><p>The BIOS source code is compiled and various BIOS components (PEI Volume, Driver eXecution Environment (DXE) volume, NVRAM storage, etc.) are <a href="https://edk2-docs.gitbook.io/edk-ii-build-specification/2_design_discussion/23_boot_sequence">built as usual.</a></p></li><li><p>The PSP directory and BIOS directory are built next. These tables point to the locations of various firmware entities.</p></li><li><p>The ODM builds the signed BIOS Root of Trust Measurement (RTM) signature based on the blob of the BIOS PEI volume concatenated with the BIOS directory header, and generates the digital signature of this using the private portion of the ODM signing key. The SPI location for the signed BIOS RTM code is finally updated with this signature blob.</p></li><li><p>Finally, the BIOS binaries, PSP directory, BIOS directory and various firmware binaries are combined to build the SPI BIOS image.</p></li></ol>
    <div>
      <h2>Enabling Platform Secure Boot</h2>
      <a href="#enabling-platform-secure-boot">
        
      </a>
    </div>
    <p>Platform Secure Boot is enabled at boot time with a PSB-ready firmware image. PSB is configured using a region of one-time programmable (OTP) fuses, specific to the customer. OTP fuses are on-chip non-volatile memory (NVM) that permits data to be written only once. There is <b>NO</b> way to roll the fused CPU back to an unfused one.</p><p>Enabling PSB in the field goes through two steps: fusing and validating.</p><ul><li><p>Fusing: Fuse the values assigned in the <code>serialNumber</code> field that was generated in the CSR</p></li><li><p>Validating: Validate the fused values and the status code registers</p></li></ul><p>If validation is successful, the BIOS RTM signature is validated using the ODM BIOS signing key, PSB-specific registers (<code>MP0_C2P_MSG_37</code> and <code>MP0_C2P_MSG_38</code>) are updated with the PSB status and fuse values, and the x86 cores are released.</p><p>If validation fails, the registers above are updated with the PSB error status and fuse values, and the x86 cores stay in a locked state.</p>
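To illustrate the shape of that go/no-go decision, here is a hypothetical sketch. The bit layout below is invented purely for illustration; the real encoding of MP0_C2P_MSG_37/38 is AMD-specific and not reproduced here:

```python
# Hypothetical PSB status word -- these field positions are illustrative
# only, not AMD's actual register layout.
PSB_FUSED      = 1 << 0     # part has been fused
PSB_VALIDATED  = 1 << 1     # BIOS RTM signature verified against ODM key
PSB_ERROR_MASK = 0xFF << 8  # latched error code, 0 == no error

def cores_released(status: int) -> bool:
    """x86 cores are released only when the part is fused, the RTM
    signature validated, and no error code is latched."""
    fused = bool(status & PSB_FUSED)
    validated = bool(status & PSB_VALIDATED)
    error = (status & PSB_ERROR_MASK) >> 8
    return fused and validated and error == 0

print(cores_released(PSB_FUSED | PSB_VALIDATED))  # True: boot proceeds
print(cores_released(PSB_FUSED | (0x3C << 8)))    # False: cores stay locked
```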
    <div>
      <h2>Let’s Boot!</h2>
      <a href="#lets-boot">
        
      </a>
    </div>
    <p>With a signed image in hand, we are ready to enable PSB on a machine. We chose to deploy this on a few machines that had an updated, unsigned <a href="https://ami.com/en/support/bios-uefi-firmware-support/">AMI UEFI</a> firmware image, in this case version <code>2.16</code>. We use a couple of different firmware <a href="https://github.com/Zibri/afulnx/releases">update</a> <a href="https://downloadcenter.intel.com/download/29977?v=t">tools</a>, so, after a quick script, we ran an update to change the firmware version from <code>2.16</code> to <code>2.18C</code> (the signed image):</p>
            <pre><code>$ sudo ./UpdateAll.sh
Bin file name is ****.218C

BEGIN

+---------------------------------------------------------------------------+
|                 AMI Firmware Update Utility v5.11.03.1778                 |      
|                 Copyright (C)2018 American Megatrends Inc.                |                       
|                         All Rights Reserved.                              |
+---------------------------------------------------------------------------+
Reading flash ............... done
FFS checksums ......... ok
Check RomLayout ........ ok.
Erasing Boot Block .......... done
Updating Boot Block ......... done
Verifying Boot Block ........ done
Erasing Main Block .......... done
Updating Main Block ......... done
Verifying Main Block ........ done
Erasing NVRAM Block ......... done
Updating NVRAM Block ........ done
Verifying NVRAM Block ....... done
Erasing NCB Block ........... done
Updating NCB Block .......... done
Verifying NCB Block ......... done

Process completed.</code></pre>
            <p>After the update completed, we rebooted:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ZW4E36XqbrlpgvXyjR34t/21625b6e4691951b840d4d0d7bb91c5b/image2-6.png" />
            
            </figure><p>After a successful install, we validated that the image was correct via the <a href="https://man7.org/linux/man-pages/man5/sysfs.5.html">sysfs</a> information provided in the <a href="https://linux.die.net/man/8/dmidecode">dmidecode</a> output:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1gIi4xRxwq4PfpxAjc2nnA/6262739c2730a1c85865a59dd168ada7/image12.gif" />
            
            </figure>
    <div>
      <h3>Testing</h3>
      <a href="#testing">
        
      </a>
    </div>
    <p>With a signed image installed, we wanted to test that it worked, meaning: what if an unauthorized user installed their own firmware image? We did this by downgrading the image back to an unsigned image, <code>2.16</code>. In theory, the machine shouldn’t boot as the x86 cores should stay in a locked state. After downgrading, we rebooted and got the following:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HQqHnKOTSHg1YQOyZq35A/2209455a149759a90c474970bb9bf6ad/image13-1.jpg" />
            
            </figure><p>This isn’t a powered-down machine, but the result of booting with an unsigned image.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/y9RGO5UGzOEG1lxRyPJa2/818ecd62bc892f61097cf369d1b599a2/image9.jpg" />
            
            </figure><p>Flashing back to a signed image is done by running the same flashing utility through the BMC, so we weren’t bricked. In other words, the test was a success: the machine refused to boot the unsigned image.</p>
    <div>
      <h2>Naming Convention</h2>
      <a href="#naming-convention">
        
      </a>
    </div>
    <p>Our standard UEFI firmware image names are alphanumeric, making it difficult to distinguish (by name) between a signed and an unsigned image (<code>v2.16A</code> vs <code>v2.18C</code>, for example). There isn’t a remote attestation capability (yet) to probe the PSB status registers or to store these values by means of a signature (e.g. a <a href="https://linux.die.net/man/8/tpm_quote_tools">TPM quote</a>). As we transitioned to PSB, we wanted to make this easier to determine by adding a specific suffix, <code>-sig</code>, that we could query in userspace and export via <a href="https://prometheus.io/">Prometheus</a>. Changing the file name alone wouldn’t do it, so we had to make the following changes to reflect the new naming convention for signed images:</p><ul><li><p>Update the filename</p></li><li><p>Update the BIOS version shown in the setup menu</p></li><li><p>Update the POST message</p></li><li><p>Update <a href="https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_2.7.1.pdf">SMBIOS type 0</a> (BIOS version string identifier)</p></li></ul><p>Signed images now have a <code>-sig</code> suffix:</p>
            <pre><code>~$ sudo dmidecode -t0
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 3.3.0 present.
# SMBIOS implementations newer than version 3.2.0 are not
# fully supported by this version of dmidecode.

Handle 0x0000, DMI type 0, 26 bytes
BIOS Information
	Vendor: American Megatrends Inc.
	Version: V2.20-sig
	Release Date: 09/29/2020
	Address: 0xF0000
	Runtime Size: 64 kB
	ROM Size: 16 MB</code></pre>
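With the suffix in the SMBIOS type 0 version string, checking for a signed image from userspace reduces to parsing dmidecode output. A rough sketch (our actual Prometheus exporter is more involved; the helper names here are hypothetical):

```python
import re

def bios_version(dmi_output: str) -> str:
    """Extract the Version field from `dmidecode -t0` output."""
    m = re.search(r"^\s*Version:\s*(\S+)", dmi_output, re.MULTILINE)
    return m.group(1) if m else ""

def is_signed(version: str) -> bool:
    # Signed images carry the -sig suffix per our naming convention.
    return version.endswith("-sig")

sample = """BIOS Information
    Vendor: American Megatrends Inc.
    Version: V2.20-sig
    Release Date: 09/29/2020"""

print(is_signed(bios_version(sample)))  # True

# On a real host (requires root), feed it live output instead:
#   out = subprocess.run(["dmidecode", "-t0"], capture_output=True, text=True).stdout
```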
            
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Finding weaknesses in firmware is a challenge that many attackers have taken on. Attacks that physically manipulate the firmware used for performing hardware initialization during the booting process can invalidate many of the common secure boot features that are considered industry standard. By implementing a hardware root of trust used to code-sign critical boot entities, your hardware becomes a first line of defense, ensuring that the integrity of your server’s hardware and software can be established through cryptographic means.</p>
    <div>
      <h2>What’s Next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>While this post discussed our current, AMD-based hardware platform, how will this affect our future hardware generations? One of the benefits of working with diverse vendors like AMD and <a href="https://amperecomputing.com/">Ampere</a> (ARM) is that we can ensure they are baking in our desired platform security by default (which we’ll speak about in a future post), making our hardware security outlook that much brighter.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Hardware]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[Cloudflare Network]]></category>
            <guid isPermaLink="false">m0jUp84VV1cK5dqLgVh32</guid>
            <dc:creator>Derek Chamorro</dc:creator>
            <dc:creator>Ryan Chow</dc:creator>
        </item>
        <item>
            <title><![CDATA[UtahFS: Encrypted File Storage]]></title>
            <link>https://blog.cloudflare.com/utahfs/</link>
            <pubDate>Tue, 09 Jun 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Several months ago, I realized that I had a lot of sensitive files on my computer (my diary, if you must know) that I was afraid of losing, but I didn’t feel comfortable putting them in something like Google Drive or Dropbox. Introducing UtahFS. ]]></description>
            <content:encoded><![CDATA[ <p>Encryption is one of the most powerful technologies that everyone uses on a daily basis without realizing it. Transport-layer encryption, which protects data as it’s sent across the Internet to its intended destination, is now ubiquitous because it’s a fundamental tool for creating a trustworthy Internet. Disk encryption, which protects data while it’s sitting idly on your phone or laptop’s hard drive, is also becoming ubiquitous because it prevents anybody who steals your device from also being able to see what’s on your desktop or read your email.</p><p>The next improvement on this technology that’s starting to gain popularity is end-to-end encryption, which refers to a system where only the end-users are able to access their data -- not any intermediate service providers. Some of the most popular examples of this type of encryption are chat apps like <a href="https://faq.whatsapp.com/en/android/28030015/">WhatsApp</a> and <a href="https://signal.org/">Signal</a>. End-to-end encryption significantly reduces the likelihood of a user’s data being <a href="https://en.wikipedia.org/wiki/Yahoo!_data_breaches">maliciously stolen</a> from, or <a href="https://arstechnica.com/gadgets/2020/02/google-photos-bug-let-strangers-download-your-private-videos/">otherwise mishandled</a> by a service provider. This is because even if the service provider loses the data, nobody will have the keys to decrypt it!</p><p>Several months ago, I realized that I had a lot of sensitive files on my computer (my diary, if you must know) that I was afraid of losing, but I didn’t feel comfortable putting them in something like Google Drive or Dropbox. 
While Google and Dropbox are absolutely trustworthy companies, they don’t offer end-to-end encryption, and this is a case where I really wanted complete control of my data.</p><p>From looking around, it was hard for me to find something that met all of my requirements:</p><ol><li><p>Would both encrypt and authenticate the directory structure, meaning that file names are hidden and it’s not possible for others to move or rename files.</p></li><li><p>Viewing/changing part of a large file doesn’t require downloading and decrypting the entire file.</p></li><li><p>Is open-source and has a documented protocol.</p></li></ol><p>So I set out to build such a system! The end result is called UtahFS, and the <a href="https://github.com/cloudflare/utahfs">code for it is available here</a>. Keep in mind that this system is not used in production at Cloudflare: it’s a proof-of-concept that I built while working on our <a href="/cloudflares-approach-to-research/">Research Team</a>. The rest of this blog post describes why I built it as I did, but there’s documentation in the repository on actually using it if you <a href="https://github.com/cloudflare/utahfs/tree/master/docs">want to skip to that</a>.</p>
    <div>
      <h3>Storage Layer</h3>
      <a href="#storage-layer">
        
      </a>
    </div>
    <p>The first and most important part of a storage system is… the storage. For this, I used Object Storage, because it’s one of the cheapest and most reliable ways to store data on somebody else’s hard drives. <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">Object storage</a> is nothing more than a key-value database hosted by a cloud provider, often tuned for storing values around a few kilobytes in size. There are a ton of different providers with different pricing schemes like <a href="https://aws.amazon.com/free/storage/">Amazon S3</a>, <a href="https://www.backblaze.com/b2/cloud-storage.html">Backblaze B2</a>, and <a href="https://wasabi.com/">Wasabi</a>. All of them are capable of storing terabytes of data, and many also offer geographic redundancy.</p>
    <div>
      <h3>Data Layer</h3>
      <a href="#data-layer">
        
      </a>
    </div>
    <p>One of the requirements that was important to me was that it shouldn’t be necessary to download and decrypt an entire file before being able to read a part of it. One place where this matters is audio and video files, because it enables playback to start quickly. Another case is ZIP files: a lot of file browsers have the ability to explore compressed archives, like ZIP files, without decompressing them. To enable this functionality, the browser needs to be able to read a specific part of the archive file, decompress just that part, and then move somewhere else.</p><p>Internally, UtahFS never stores objects that are larger than a configured size (32 kilobytes, by default). If a file has more than that amount of data, the file is broken into multiple objects which are connected by a <a href="https://en.wikipedia.org/wiki/Skip_list">skip list</a>. A skip list is a slightly more complicated version of a <a href="https://en.wikipedia.org/wiki/Linked_list">linked list</a> that allows a reader to move to a random position quickly by storing additional pointers in each block that point further than just one hop ahead.</p><p>When blocks in a skip list are no longer needed, because a file was deleted or truncated, they’re added to a special “trash” linked list. Elements of the trash list can then be recycled when blocks are needed somewhere else, to create a new file or write more data to the end of an existing file, for example. This maximizes reuse and means new blocks only need to be created when the trash list is empty. Some readers might recognize this as the Linked Allocation strategy described in The Art of Computer Programming: Volume 1, section 2.2.3!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Bnd1jOzfzii7HcP4IWI51/6cb2736762a219e51eaf3769387f2396/IMG_0428-5.jpg" />
            
            </figure><p>Fundamentally, we use Linked Allocation because it’s the most efficient strategy for most operations. It’s also the approach to allocating memory that’s most compatible with the cryptography we talk about in the next three sections.</p>
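The block-chaining and trash-list recycling described above can be sketched in a few lines. This is an illustrative toy in Python, not UtahFS's actual Go implementation, and it omits the skip list's extra forward pointers for brevity:

```python
BLOCK_SIZE = 32 * 1024  # UtahFS's default maximum object size

class BlockStore:
    """Toy allocator: new blocks come from the trash list when possible."""
    def __init__(self):
        self.blocks = {}   # id -> {"data": bytes, "next": id or None}
        self.trash = []    # ids of freed blocks, recycled before minting new ids
        self.next_id = 0

    def alloc(self) -> int:
        if self.trash:                 # reuse a freed block first
            return self.trash.pop()
        self.next_id += 1
        return self.next_id - 1

    def write_file(self, data: bytes) -> int:
        """Split data into BLOCK_SIZE chunks chained by 'next' pointers."""
        chunks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)] or [b""]
        ids = [self.alloc() for _ in chunks]
        for i, (bid, chunk) in enumerate(zip(ids, chunks)):
            nxt = ids[i + 1] if i + 1 < len(ids) else None
            self.blocks[bid] = {"data": chunk, "next": nxt}
        return ids[0]                  # head of the chain

    def delete_file(self, head):
        while head is not None:        # freed blocks go to the trash list
            nxt = self.blocks[head]["next"]
            self.trash.append(head)
            head = nxt

store = BlockStore()
f1 = store.write_file(b"x" * (BLOCK_SIZE * 2 + 1))  # occupies blocks 0, 1, 2
store.delete_file(f1)
f2 = store.write_file(b"y" * BLOCK_SIZE)            # recycles a trashed block
print(f2)  # 2: the head was recycled from the trash list
```

New blocks are only minted once the trash list is empty, which is exactly the reuse property described above.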
    <div>
      <h3>Encryption Layer</h3>
      <a href="#encryption-layer">
        
      </a>
    </div>
    <p>Now that we’ve talked about how files are broken into blocks and connected by a skip list, we can talk about how the data is actually protected. There are two aspects to this:</p><p>The first is <b>confidentiality</b>, which hides the contents of each block from the storage provider. This is achieved simply by encrypting each block with AES-GCM, with a key derived from the user’s password.</p><p>While simple, this scheme doesn’t provide <i>forward secrecy</i> or <i>post-compromise security</i>. Forward Secrecy means that if the user’s device was compromised, an attacker wouldn’t be able to read deleted files. Post-Compromise Security means that once the user’s device is no longer compromised, an attacker wouldn’t be able to read new files. Unfortunately, providing either of these guarantees means storing cryptographic keys on the user’s device that would need to be synchronized between devices and, if lost, would render the archive unreadable.</p><p>This scheme also doesn’t protect against <i>offline password cracking</i>, because an attacker can take any of the encrypted blocks and keep guessing passwords until they find one that works. This is somewhat mitigated by using <a href="https://en.wikipedia.org/wiki/Argon2">Argon2</a>, which makes guessing passwords more expensive, and by recommending that users choose strong passwords.</p><p>I'm definitely open to improving the encryption scheme in the future, but considered the security properties listed above too difficult and fragile for the initial release.</p>
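To make the shape of the scheme concrete, here is a stdlib-only Python sketch of per-block authenticated encryption under a password-derived key. Note the stand-ins: scrypt replaces Argon2, and a SHA-256 keystream plus HMAC replaces AES-GCM, since neither Argon2 nor AES-GCM is in the Python standard library. This is illustrative only, not UtahFS's actual code, and the toy cipher must not be used for real data:

```python
import hashlib
import hmac
import os

def derive_key(password: bytes, salt: bytes) -> bytes:
    # UtahFS derives the key with Argon2; scrypt is a stdlib stand-in here,
    # with cost parameters reduced for the example.
    return hashlib.scrypt(password, salt=salt, n=2**12, r=8, p=1, dklen=32)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream from SHA-256: a stand-in for AES-GCM's
    # encryption half, NOT a real cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal_block(key: bytes, block: bytes) -> bytes:
    """Encrypt-then-MAC one storage block (AEAD-shaped, like AES-GCM)."""
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in zip(block, keystream(key, nonce, len(block))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_block(key: bytes, sealed: bytes) -> bytes:
    nonce, ct, tag = sealed[:12], sealed[12:-32], sealed[-32:]
    if not hmac.compare_digest(hmac.new(key, nonce + ct, hashlib.sha256).digest(), tag):
        raise ValueError("block tampered with")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = derive_key(b"correct horse", salt=b"per-archive-salt")
sealed = seal_block(key, b"dear diary...")
print(open_block(key, sealed))  # b'dear diary...'
```

Because each block is sealed independently, a reader can decrypt one 32&nbsp;KB block without touching the rest of the file, which is what makes random access cheap.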
    <div>
      <h3>Integrity Layer</h3>
      <a href="#integrity-layer">
        
      </a>
    </div>
    <p>The second aspect of data protection is <b>integrity</b>, which ensures the storage provider hasn’t changed or deleted anything. This is achieved by building a Merkle Tree over the user’s data. Merkle Trees are described in-depth in our <a href="/introducing-certificate-transparency-and-nimbus/">blog post about Certificate Transparency</a>. The root hash of the Merkle Tree is associated with a version number that’s incremented with each change, and both the root hash and the version number are authenticated with a key derived from the user’s password. This data is stored in two places: under a special key in the object storage database, and in a file on the user’s device.</p><p>Whenever the user wants to read a block of data from the storage provider, they first request the root stored remotely and check that it’s either the same as what they have on disk, or has a greater version number than what’s on disk. Checking the version number prevents the storage provider from reverting the archive to a previous (valid) state undetected. Any data which is read can then be verified against the most recent root hash, which prevents any other types of modifications or deletions.</p><p>Using a Merkle Tree here has the same benefit as it does for Certificate Transparency: it allows us to verify individual pieces of data without needing to download and verify <i>everything</i> at once. Another common tool used for data integrity is called a Message Authentication Code (or MAC), and while it’s a lot simpler and more efficient, it doesn’t have a way to do only partial verification.</p><p>The one thing our use of Merkle Trees doesn’t protect against is <i>forking</i>, where the storage provider shows different versions of the archive to different users. However, detecting forks would require some kind of gossip between users, which is beyond the scope of the initial implementation for now.</p>
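Here is a minimal sketch of the idea: build a Merkle root over the hashes of the user's blocks, then verify a single block against that root using only a logarithmic number of sibling hashes (illustrative Python, not UtahFS's implementation):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash over the hashes of the user's blocks."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to verify one block without the others."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-on-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)
proof = inclusion_proof(blocks, 2)
print(verify(b"block2", proof, root))    # True
print(verify(b"tampered", proof, root))  # False
```

Only the root hash (plus a version number) needs to be authenticated and stored on the client, yet any individual block the provider returns can be checked against it.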
    <div>
      <h3>Hiding Access Patterns</h3>
      <a href="#hiding-access-patterns">
        
      </a>
    </div>
    <p><a href="https://en.wikipedia.org/wiki/Oblivious_RAM">Oblivious RAM</a>, or ORAM, is a cryptographic technique for reading and writing to random-access memory in a way that hides, from the memory itself, which operation was performed (a read or a write) and which part of memory it touched! In our case, the ‘memory’ is our <a href="https://www.cloudflare.com/developer-platform/products/r2/">object storage provider</a>, which means we’re hiding from them which pieces of data we’re accessing and why. This is valuable for defending against <i>traffic analysis attacks</i>, where an adversary with detailed knowledge of a system like UtahFS can look at the requests it makes, and infer the contents of encrypted data. For example, they might see that you <i>upload</i> data at regular intervals and almost never <i>download</i>, and infer that you’re storing automated backups.</p><p>The simplest implementation of ORAM would consist of always reading the entire memory space and then rewriting the entire memory space with all new values, any time you want to read or write an individual value. An adversary looking at the pattern of memory accesses wouldn’t be able to tell which value you actually wanted, because you always touch everything. This would be incredibly inefficient, however.</p><p>The construction we actually use, which is called <a href="https://eprint.iacr.org/2013/280.pdf">Path ORAM</a>, abstracts this simple scheme a little bit to make it more efficient. First, it organizes the blocks of memory into a binary tree, and second, it keeps a client-side table that maps application-level pointers to random leaves in the binary tree. 
The trick is that a value is allowed to live in any block of memory that’s on the path between its assigned leaf and the root of the binary tree.</p><p>Now, when we want to look up the value that a pointer goes to, we look in our table for its assigned leaf, and read all the nodes on the path between the root and that leaf. The value we’re looking for should be on this path, so we already have what we need! And in the absence of any other information, all the adversary sees is that we read a random path from the tree.</p><p>To the server, it looks like a random path is being read from the tree; that path just happens to contain the data we’re looking for.</p><p>However, we still need to hide whether we’re reading or writing, and to re-randomize some memory to ensure this lookup can’t be linked with others we make in the future. So to re-randomize, we assign the pointer we just read to a new leaf and move the value from whichever block it was stored in before to a block that’s a parent of both the new and old leaves. (In the worst case, we can use the root block since the root is a parent of everything.) Once the value is moved to a suitable block and done being consumed/modified by the application, we re-encrypt all the blocks we fetched and write them back to memory. This puts the value in the path between the root and its new leaf, while only changing the blocks of memory we’ve already fetched.</p><p>This construction is great because we’ve only had to touch the memory assigned to a single random path in a binary tree, which is a logarithmic amount of work relative to the total size of our memory. But even if we read the same value again and again, we’ll touch completely random paths from the tree each time! There’s still a performance penalty caused by the additional memory lookups though, which is why ORAM support is optional.</p>
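A drastically simplified sketch of that access pattern follows. Real Path ORAM maintains a client-side stash and an eviction procedure; this toy instead moves the value to the root bucket, which is on the path to every leaf, so the invariant still holds:

```python
import random

DEPTH = 3                     # tree with 2**DEPTH leaves
NODES = 2 ** (DEPTH + 1) - 1  # heap-style array: node i's children are 2i+1, 2i+2

tree = [dict() for _ in range(NODES)]  # each node is a bucket of {key: value}
position = {}                          # client-side map: key -> assigned leaf

def path(leaf: int):
    """Node indices from the root down to the given leaf."""
    node = 2 ** DEPTH - 1 + leaf       # array index of the leaf node
    nodes = []
    while True:
        nodes.append(node)
        if node == 0:
            break
        node = (node - 1) // 2
    return nodes[::-1]

def access(key, new_value=None):
    """One ORAM access: read a whole path, remap the key to a fresh leaf,
    then write the value back into the root bucket (a parent of every leaf)."""
    leaf = position.setdefault(key, random.randrange(2 ** DEPTH))
    value = None
    for node in path(leaf):            # the server only sees this path being read
        if key in tree[node]:
            value = tree[node].pop(key)
    position[key] = random.randrange(2 ** DEPTH)  # re-randomize for next time
    if new_value is not None:
        value = new_value
    if value is not None:
        tree[0][key] = value           # root is on the path to the new leaf too
    return value

access("diary", b"entry #1")
print(access("diary"))  # b'entry #1', via a random-looking path each time
```

Reads and writes are indistinguishable from the server's perspective: both fetch one root-to-leaf path and rewrite it.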
    <div>
      <h3>Wrapping Up</h3>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>Working on this project has been really rewarding for me because while a lot of the individual layers of the system seem simple, they’re the result of a lot of refinement, and they quickly build up into something complex. It was difficult, though, in that I had to implement a lot of functionality myself instead of reusing other people’s code. This is because building end-to-end encrypted systems requires carefully integrating security into every feature, and the only good way to do that is from the start. I hope UtahFS is useful for others interested in secure storage.</p> ]]></content:encoded>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">5un9hPBwEvXsmyek9NjHff</guid>
            <dc:creator>Brendan McMillion</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Cloudflare and Wall Street Are Helping Encrypt the Internet Today]]></title>
            <link>https://blog.cloudflare.com/how-cloudflare-and-wall-street-are-helping-encrypt-the-internet-today/</link>
            <pubDate>Fri, 13 Sep 2019 23:00:00 GMT</pubDate>
            <description><![CDATA[ Today has been a big day for Cloudflare, as we became a public company on the New York Stock Exchange (NYSE: NET). To mark the occasion, we decided to bring our favorite entropy machines to the floor of the NYSE. ]]></description>
            <content:encoded><![CDATA[ <p>Today has been a big day for Cloudflare, as we became a public company on the New York Stock Exchange (NYSE: NET). To mark the occasion, we decided to bring our favorite entropy machines to the floor of the NYSE. Footage of these lava lamps is being used as an additional seed to our <a href="/randomness-101-lavarand-in-production/"><b>entropy-generation system LavaRand</b></a> — bolstering Internet encryption for over 20 million Internet properties worldwide.</p><p><i>(This is mostly for fun. But when’s the last time you saw a lava lamp on the trading floor of the New York Stock Exchange?)</i></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2g2vH1hZmjN2EcjCIV8wfi/94702da9614d3dd97ddae35c5f8eeff8/NYSE-Lava-Lamps-Cropped.jpg" />
            
            </figure><p>A little context: generating truly random numbers using computers is impossible, because code is inherently deterministic (i.e. predictable). To compensate for this, engineers draw from pools of randomness created by entropy generators, which is a fancy term for "things that are truly unpredictable".</p><p>It turns out that lava lamps are fantastic sources of entropy, as was first shown by Silicon Graphics in the 1990s. It’s a torch we’ve been proud to carry forward: today, Cloudflare uses lava lamps to generate entropy that helps make millions of Internet properties more secure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ah3LRVUaM1zY6dND64ghZ/18ce5b07b000d6d951d97cc5753abe81/LavaRand1.png" />
            
            </figure><p>Housed in our San Francisco headquarters is a wall filled with dozens of lava lamps, undulating with mesmerizing randomness. We capture these lava lamps on video via a camera mounted across the room, and feed the resulting footage into an algorithm — called <a href="/lavarand-in-production-the-nitty-gritty-technical-details/"><b>LavaRand</b></a> — that amplifies the pure randomness of these lava lamps to dizzying extremes (computers can't create seeds of pure randomness, but they can massively amplify them).</p><p>Shortly before we rang the opening bell this morning, we recorded footage of our lava lamps in operation on the trading room floor of the New York Stock Exchange, and we're ingesting the footage into our LavaRand system. The resulting entropy is mixed with the myriad additional sources of entropy that we leverage every day, creating a cryptographically-secure source of randomness — fortified by Wall Street.</p>
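The "amplification" step is, conceptually, a matter of condensing a large amount of unpredictable input into a fixed-size seed. A toy sketch of that idea (the real LavaRand pipeline feeds our production randomness system, not a userspace variable, and mixes in many more sources):

```python
import hashlib

def seed_from_frames(frames, extra_entropy=b""):
    """Condense arbitrary noisy input (e.g. lava-lamp video frames) into a
    256-bit seed, optionally mixed with other entropy sources."""
    digest = hashlib.sha256()
    for frame in frames:
        digest.update(frame)
    digest.update(extra_entropy)  # e.g. OS entropy, other generators
    return digest.digest()

# Any change in the footage yields a completely different seed.
s1 = seed_from_frames([b"frame-bytes-1", b"frame-bytes-2"])
s2 = seed_from_frames([b"frame-bytes-1", b"frame-bytes-3"])
print(s1 != s2)  # True
```

The hash can't create randomness that isn't in its input, but it spreads whatever unpredictability the frames contain evenly across every bit of the seed.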
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2i4n94z75iXaoZZFgXxg7y/f8cb4be9c50b4b4ff48b3f3aff354ff7/League-Of-Entropy.png" />
            
            </figure><p>We recently took our enthusiasm for randomness a step further by facilitating the <a href="/league-of-entropy/"><b>League of Entropy</b></a>, a consortium of global organizations and individual contributors, generating verifiable randomness via a globally distributed network. As one of the founding members of the League, LavaRand (pictured above) plays a key role in empowering developers worldwide with a pool of randomness with extreme entropy and high reliability.</p><p>And today, she’s enjoying the view from the podium!</p><hr /><p><i>One caveat: the lava lamps we run in our San Francisco headquarters are recorded in real-time, 24/7, giving us an ongoing stream of entropy. For reasons that are understandable, the NYSE doesn't allow for live video feeds from the exchange floor while it is in operation. But this morning they did let us record footage of the lava lamps operating shortly before the opening bell. The video was recorded and we're ingesting it into our LavaRand system (alongside many other entropy generators, including the lava lamps back in San Francisco).</i></p> ]]></content:encoded>
            <category><![CDATA[LavaRand]]></category>
            <category><![CDATA[Entropy]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Encryption]]></category>
            <guid isPermaLink="false">5cmkZ6Tia2iUD2yBBnQAOb</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Detailed Look at RFC 8446 (a.k.a. TLS 1.3)]]></title>
            <link>https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/</link>
            <pubDate>Fri, 10 Aug 2018 23:00:00 GMT</pubDate>
            <description><![CDATA[ TLS 1.3 (RFC 8446) was published today. This article provides a deep dive into the changes introduced in TLS 1.3 and its impact on the future of internet security. ]]></description>
            <content:encoded><![CDATA[ <p>For the last five years, the Internet Engineering Task Force (IETF), the standards body that defines internet protocols, has been working on standardizing the latest version of one of its most important security protocols: Transport Layer Security (TLS). TLS is used to secure the web (and much more!), providing encryption and ensuring the authenticity of every HTTPS website and API. The latest version of TLS, TLS 1.3 (<a href="https://www.rfc-editor.org/rfc/pdfrfc/rfc8446.txt.pdf">RFC 8446</a>) was published today. It is the first major overhaul of the protocol, bringing significant security and performance improvements. This article provides a deep dive into the changes introduced in TLS 1.3 and its impact on the future of internet security.</p>
    <div>
      <h3>An evolution</h3>
      <a href="#an-evolution">
        
      </a>
    </div>
    <p>One major way Cloudflare provides <a href="https://www.cloudflare.com/application-services/solutions/api-security/">security</a> is by supporting HTTPS for websites and web services such as APIs. With HTTPS (the “S” stands for secure) the communication between your browser and the server travels over an encrypted and authenticated channel. Serving your content over HTTPS instead of HTTP provides confidence to the visitor that the content they see is presented by the legitimate content owner and that the communication is safe from eavesdropping. This is a big deal in a world where online privacy is more important than ever.</p><p>The machinery under the hood that makes HTTPS secure is a protocol called TLS. It has its roots in a protocol called Secure Sockets Layer (SSL) developed in the mid-nineties at Netscape. By the end of the 1990s, Netscape handed SSL over to the IETF, who renamed it TLS and have been the stewards of the protocol ever since. Many people still refer to web encryption as SSL, even though the vast majority of services have switched over to supporting TLS only. The term SSL continues to have popular appeal and Cloudflare has kept the term alive through product names like <a href="/keyless-ssl-the-nitty-gritty-technical-details/">Keyless SSL</a> and <a href="/introducing-universal-ssl/">Universal SSL</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59tFn3me3Oe6OcjT24CqYF/22a662ccc88b06adc516449b8e2be657/image5.png" />
            
            </figure><p>In the IETF, protocols are called RFCs. TLS 1.0 was RFC 2246, TLS 1.1 was RFC 4346, and TLS 1.2 was RFC 5246. Today, TLS 1.3 was published as RFC 8446. RFCs are generally published in order, so keeping 46 as part of the RFC number is a nice touch.</p>
    <div>
      <h3>TLS 1.2 wears parachute pants and shoulder pads</h3>
      <a href="#tls-1-2-wears-parachute-pants-and-shoulder-pads">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5p8wOtJF3L8LEprZDoO0z7/c4742d6066b73c33e4f5e98afddb83ff/image11.jpg" />
            
            </figure><p><a href="https://memegenerator.net/Mc-Hammer-Pants">MC Hammer</a>, like SSL, was popular in the 90s</p><p>Over the last few years, TLS has seen its fair share of problems. First of all, there have been problems with the code that implements TLS, including <a href="/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed/">Heartbleed</a>, <a href="https://www.imperialviolet.org/2014/09/26/pkcs1.html">BERserk</a>, <a href="https://gotofail.com/">goto fail;</a>, and more. These issues are not fundamental to the protocol and mostly resulted from a lack of testing. Tools like <a href="https://github.com/RUB-NDS/TLS-Attacker">TLS Attacker</a> and <a href="https://security.googleblog.com/2016/12/project-wycheproof.html">Project Wycheproof</a> have helped improve the robustness of TLS implementation, but the more challenging problems faced by TLS have had to do with the protocol itself.</p><p>TLS was designed by engineers using tools from mathematicians. Many of the early design decisions from the days of SSL were made using heuristics and an incomplete understanding of how to design robust security protocols. That said, this isn’t the fault of the protocol designers (Paul Kocher, Phil Karlton, Alan Freier, Tim Dierks, Christopher Allen and others), as the entire industry was still learning how to do this properly. When TLS was designed, formal papers on the design of secure authentication protocols like Hugo Krawczyk’s landmark <a href="http://webee.technion.ac.il/~hugo/sigma-pdf.pdf">SIGMA</a> paper were still years away. TLS was 90s crypto: It meant well and seemed cool at the time, but the modern cryptographer’s design palette has moved on.</p><p>Many of the design flaws were discovered using <a href="https://en.wikipedia.org/wiki/Formal_verification">formal verification</a>. Academics attempted to prove certain security properties of TLS, but instead found counter-examples that were turned into real vulnerabilities. 
These weaknesses range from the purely theoretical (<a href="https://access.redhat.com/articles/2112261">SLOTH</a> and <a href="https://eprint.iacr.org/2018/298.pdf">CurveSwap</a>), to feasible for highly resourced attackers (<a href="https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf">WeakDH</a>, <a href="/logjam-the-latest-tls-vulnerability-explained/">LogJam</a>, <a href="https://censys.io/blog/freak">FREAK</a>, <a href="https://nakedsecurity.sophos.com/2016/08/25/anatomy-of-a-cryptographic-collision-the-sweet32-attack/">SWEET32</a>), to practical and dangerous (<a href="https://en.wikipedia.org/wiki/POODLE">POODLE</a>, <a href="https://robotattack.org/">ROBOT</a>).</p>
    <div>
      <h3>TLS 1.2 is slow</h3>
      <a href="#tls-1-2-is-slow">
        
      </a>
    </div>
    <p>Encryption has always been important online, but historically it was only used for things like logging in or sending credit card information, leaving most other data exposed. There has been a major trend in the last few years towards using HTTPS for all traffic on the Internet. This has the positive effect of protecting more of what we do online from eavesdroppers and <a href="/an-introduction-to-javascript-based-ddos/">injection attacks</a>, but has the downside that new connections get a bit slower.</p><p>For a browser and web server to agree on a key, they need to exchange cryptographic data. The exchange, called the “handshake” in TLS, has remained largely unchanged since TLS was standardized in 1999. The handshake requires two additional round-trips between the browser and the server before encrypted data can be sent (or one when resuming a previous connection). The additional cost of the TLS handshake for HTTPS results in a noticeable hit to latency compared to HTTP alone. This additional delay can negatively impact performance-focused applications.</p>
    <div>
      <h3>Defining TLS 1.3</h3>
      <a href="#defining-tls-1-3">
        
      </a>
    </div>
    <p>Unsatisfied with the outdated design of TLS 1.2 and its two-round-trip overhead, the IETF set about defining a new version of TLS. In August 2013, Eric Rescorla laid out a <a href="https://www.ietf.org/proceedings/87/slides/slides-87-tls-5.pdf">wishlist of features</a> for the new protocol.</p><p>After <a href="https://www.ietf.org/mail-archive/web/tls/current/msg20938.html">some debate</a>, it was decided that this new version of TLS was to be called TLS 1.3. The main issues that drove the design of TLS 1.3 were mostly the same as those presented five years ago:</p><ul><li><p>reducing handshake latency</p></li><li><p>encrypting more of the handshake</p></li><li><p>improving resiliency to cross-protocol attacks</p></li><li><p>removing legacy features</p></li></ul><p>The specification was shaped by volunteers through an open design process, and after four years of diligent work and vigorous debate, TLS 1.3 is now in its final form: RFC 8446. As adoption increases, the new protocol will make the Internet both faster and more secure.</p><p>In this blog post I will focus on the two main advantages TLS 1.3 has over previous versions: security and performance.</p>
    <div>
      <h3>Trimming the hedges</h3>
      <a href="#trimming-the-hedges">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/57PQK3ofbneOYgOmlwNm4Y/e5c3c319903504002330efec0fc06db2/image10.jpg" />
            
            </figure><p><a href="https://commons.wikimedia.org/wiki/File:Williton_Highbridge_Nursery_topiary_garden.jpg">Creative Commons Attribution-Share Alike 3.0</a></p><p>In the last two decades, we as a society have learned a lot about how to write secure cryptographic protocols. The parade of cleverly-named attacks from POODLE to Lucky13 to SLOTH to LogJam showed that even TLS 1.2 contains antiquated ideas from the early days of cryptographic design. One of the design goals of TLS 1.3 was to correct previous mistakes by removing potentially dangerous design elements.</p>
    <div>
      <h4>Fixing key exchange</h4>
      <a href="#fixing-key-exchange">
        
      </a>
    </div>
    <p>TLS is a so-called “hybrid” cryptosystem. This means it uses both symmetric key cryptography (encryption and decryption keys are the same) and public key cryptography (encryption and decryption keys are different). Hybrid schemes are the predominant form of encryption used on the Internet and are used in <a href="https://en.wikipedia.org/wiki/Secure_Shell">SSH</a>, <a href="https://en.wikipedia.org/wiki/IPsec">IPsec</a>, <a href="https://en.wikipedia.org/wiki/Signal_Protocol">Signal</a>, <a href="https://www.wireguard.com/">WireGuard</a> and other protocols. In hybrid cryptosystems, public key cryptography is used to establish a shared secret between both parties, and the shared secret is used to create symmetric keys that can be used to encrypt the data exchanged.</p><p>As a general rule, public key crypto is slow and expensive (microseconds to milliseconds per operation) and symmetric key crypto is fast and cheap (nanoseconds per operation). Hybrid encryption schemes let you send a lot of encrypted data with very little overhead by only doing the expensive part once. Much of the work in TLS 1.3 has been about improving the part of the handshake where public keys are used to establish symmetric keys.</p>
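<p>To make that cost gap concrete, here is a rough, illustrative measurement in Python. The modulus, base, and exponent below are arbitrary stand-ins, not real protocol parameters, and absolute timings vary by machine — the point is the ratio between the two kinds of work:</p>

```python
import hashlib
import timeit

# Rough cost comparison: one big modular exponentiation (the flavor of
# work done in public-key crypto) versus one SHA-256 hash (a stand-in
# for cheap symmetric-key work). The constants below are arbitrary
# illustrations, not real protocol parameters.
modulus = (1 << 2048) - 1        # a 2048-bit stand-in modulus
base = 0x1234567890ABCDEF
exponent = (1 << 2047) | 1       # a 2048-bit exponent

message = b"x" * 256

pk_time = timeit.timeit(lambda: pow(base, exponent, modulus), number=10) / 10
sym_time = timeit.timeit(lambda: hashlib.sha256(message).digest(),
                         number=10000) / 10000

# Public-key work is typically thousands of times slower, which is why
# hybrid schemes pay that cost only once per connection.
assert pk_time > sym_time
```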
    <div>
      <h4>RSA key exchange</h4>
      <a href="#rsa-key-exchange">
        
      </a>
    </div>
    <p>The public key portion of TLS is about establishing a shared secret. There are two main ways of doing this with public key cryptography. The simpler way is with public-key encryption: one party encrypts the shared secret with the other party’s public key and sends it along. The other party then uses its private key to decrypt the shared secret and ... voila! They both share the same secret. This technique was discovered in 1977 by Rivest, Shamir and Adleman and is called RSA key exchange. In TLS’s RSA key exchange, the shared secret is decided by the client, who then encrypts it to the server’s public key (extracted from the certificate) and sends it to the server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6vFfpyW3dU5vLUnbrzOGk8/5b9c70ff88d0da6d3210fc37f13e8184/image4.png" />
            
            </figure><p>The other form of key exchange available in TLS is based on another form of public-key cryptography, invented by Diffie and Hellman in 1976, so-called Diffie-Hellman key agreement. In Diffie-Hellman, the client and server both start by creating a public-private key pair. They then send the public portion of their key share to the other party. When each party receives the public key share of the other, they combine it with their own private key and end up with the same value: the pre-main secret. The server then uses a digital signature to ensure the exchange hasn’t been tampered with. This key exchange is called “ephemeral” if the client and server both choose a new key pair for every exchange.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6tjgSGvMVdzh3LZvt1HZT1/98031ef05fdc4353af60d27062fdb67a/image3.png" />
            
            </figure><p>Both modes result in the client and server having a shared secret, but RSA mode has a serious downside: it’s not <a href="/staying-on-top-of-tls-attacks/">forward secret</a>. That means that if someone records the encrypted conversation and then gets ahold of the RSA private key of the server, they can decrypt the conversation. This even applies if the conversation was recorded and the key is obtained some time well into the future. In a world where national governments are recording encrypted conversations and using exploits like <a href="https://en.wikipedia.org/wiki/Heartbleed">Heartbleed</a> to steal private keys, this is a realistic threat.</p><p>RSA key exchange has been problematic for some time, and not just because it’s not forward-secret. It’s also notoriously difficult to do correctly. In 1998, Daniel Bleichenbacher discovered a vulnerability in the way RSA encryption was done in SSL and created what’s called the “million-message attack,” which allows an attacker to perform an RSA private key operation with a server’s private key by sending a million or so well-crafted messages and looking for differences in the error codes returned. The attack has been refined over the years and in some cases only requires thousands of messages, making it feasible to do from a laptop. It was recently discovered that major websites (including facebook.com) were also vulnerable to a variant of Bleichenbacher’s attack called the <a href="https://robotattack.org/">ROBOT attack</a> as recently as 2017.</p><p>To reduce the risks caused by non-forward secret connections and million-message attacks, RSA encryption was removed from TLS 1.3, leaving ephemeral Diffie-Hellman as the only key exchange mechanism. Removing RSA key exchange brings other advantages, as we will discuss in the performance section below.</p>
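<p>The Diffie-Hellman exchange described above is easy to sketch with toy numbers. The tiny prime below is for illustration only — real deployments use groups of 2048 bits or more, or elliptic curves such as X25519:</p>

```python
# Toy Diffie-Hellman with a tiny prime. Real deployments use groups of
# 2048 bits or more, or elliptic curves such as X25519.
p, g = 23, 5            # public parameters: prime modulus and generator

a = 6                   # client's private key (normally chosen at random)
b = 15                  # server's private key

A = pow(g, a, p)        # client's public key share, sent to the server
B = pow(g, b, p)        # server's public key share, sent to the client

# Each side combines its own private key with the other's public share
# and arrives at the same shared value: the pre-main secret.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret
```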
    <div>
      <h4>Diffie-Hellman named groups</h4>
      <a href="#diffie-hellman-named-groups">
        
      </a>
    </div>
    <p>When it comes to cryptography, giving too many options leads to the wrong option being chosen. This principle is most evident when it comes to choosing Diffie-Hellman parameters. In previous versions of TLS, the choice of the Diffie-Hellman parameters was up to the participants. This resulted in some implementations choosing incorrectly, resulting in vulnerable implementations being deployed. TLS 1.3 takes this choice away.</p><p>Diffie-Hellman is a powerful tool, but not all Diffie-Hellman parameters are “safe” to use. The security of Diffie-Hellman depends on the difficulty of a specific mathematical problem called the <a href="https://en.wikipedia.org/wiki/Discrete_logarithm">discrete logarithm problem</a>. If you can solve the discrete logarithm problem for a set of parameters, you can extract the private key and break the security of the protocol. Generally speaking, the bigger the numbers used, the harder it is to solve the discrete logarithm problem. So if you choose small DH parameters, you’re in trouble.</p><p>The LogJam and WeakDH attacks of 2015 showed that many TLS servers could be tricked into using small numbers for Diffie-Hellman, allowing an attacker to break the security of the protocol and decrypt conversations.</p><p>Diffie-Hellman also requires the parameters to have certain other mathematical properties. In 2016, Antonio Sanso found an <a href="http://arstechnica.com/security/2016/01/high-severity-bug-in-openssl-allows-attackers-to-decrypt-https-traffic/">issue in OpenSSL</a> where parameters were chosen that lacked the right mathematical properties, resulting in another vulnerability.</p><p>TLS 1.3 takes the opinionated route, restricting the Diffie-Hellman parameters to ones that are known to be secure. However, it still leaves several options; permitting only one option makes it difficult to update TLS in case these parameters are found to be insecure some time in the future.</p>
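<p>To see why parameter size matters, here is a toy illustration: with a small prime, recovering the private key from a public share — solving the discrete logarithm problem — takes nothing more than a brute-force loop. LogJam-scale attacks do the same thing, with much more sophisticated math, against the weak parameters servers could be tricked into using:</p>

```python
# With a tiny prime, solving the discrete logarithm problem -- and thus
# recovering the private key from the public share -- is a trivial loop.
# The numbers here are illustrative; real attacks target weak (e.g.
# 512-bit "export") parameters with far more sophisticated algorithms.
p, g = 23, 5
private_key = 13
public_share = pow(g, private_key, p)

recovered = next(x for x in range(p) if pow(g, x, p) == public_share)
assert recovered == private_key
```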
    <div>
      <h3>Fixing ciphers</h3>
      <a href="#fixing-ciphers">
        
      </a>
    </div>
    <p>The other half of a hybrid crypto scheme is the actual encryption of data. This is done by combining an authentication code and a symmetric cipher for which each party knows the key. As I’ll describe, there are many ways to encrypt data, most of which are wrong.</p>
    <div>
      <h4>CBC mode ciphers</h4>
      <a href="#cbc-mode-ciphers">
        
      </a>
    </div>
    <p>In the last section we described TLS as a hybrid encryption scheme, with a public key part and a symmetric key part. The public key part is not the only one that has caused trouble over the years. The symmetric key portion has also had its fair share of issues. In any secure communication scheme, you need both encryption (to keep things private) and integrity (to make sure people don’t modify, add, or delete pieces of the conversation). Symmetric key encryption is used to provide both encryption and integrity, but in TLS 1.2 and earlier, these two pieces were combined in the wrong way, leading to security vulnerabilities.</p><p>An algorithm that performs symmetric encryption and decryption is called a symmetric cipher. Symmetric ciphers usually come in two main forms: block ciphers and stream ciphers.</p><p>A stream cipher takes a fixed-size key and uses it to create a stream of pseudo-random data of arbitrary length, called a key stream. To encrypt with a stream cipher, you take your message and combine it with the key stream by XORing each bit of the key stream with the corresponding bit of your message. To decrypt, you take the encrypted message and XOR it with the key stream. Examples of pure stream ciphers are RC4 and ChaCha20. Stream ciphers are popular because they’re simple to implement and fast in software.</p><p>A block cipher is different from a stream cipher because it only encrypts fixed-sized messages. If you want to encrypt a message that is shorter or longer than the block size, you have to do a bit of work. For shorter messages, you have to add some extra data (padding) to the end of the message. For longer messages, you can split your message up into blocks the cipher can encrypt and then use a block cipher mode to combine the pieces together. Alternatively, you can turn your block cipher into a stream cipher by encrypting a sequence of counters with a block cipher and using that as the stream. This is called “counter mode”. One popular way of encrypting arbitrary-length data with a block cipher is a mode called cipher block chaining (CBC).</p>
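<p>The XOR symmetry of stream ciphers, and the counter-mode trick, can be sketched in a few lines of Python. The hash-based key stream below is a teaching stand-in for a real cipher, not something to use in practice:</p>

```python
import hashlib

def toy_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Counter-mode idea: "encrypt" a running counter and use the output
    # as the key stream. A hash stands in for a real block cipher here.
    stream = b""
    counter = 0
    while len(stream) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        stream += block
        counter += 1
    return stream[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR each byte of the message with the key stream; because XOR is
    # its own inverse, the same function both encrypts and decrypts.
    ks = toy_keystream(key, nonce, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

key, nonce = b"k" * 32, b"n" * 12
ct = xor_cipher(key, nonce, b"attack at dawn")
assert xor_cipher(key, nonce, ct) == b"attack at dawn"
```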
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5hHGcSStKo5bHWDn64PXHq/4801697c668fc061eab0c0ab57c2fdd8/image9.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/510G6uPdPGbwTJHcsGqPSc/726945c085f0912e1307e18ef4393563/image7.png" />
            
            </figure><p>In order to prevent people from tampering with data, encryption is not enough. Data also needs to be integrity-protected. For CBC-mode ciphers, this is done using something called a message-authentication code (MAC), which is like a fancy checksum with a key. Cryptographically strong MACs have the property that finding a MAC value that matches an input is practically impossible unless you know the secret key. There are two ways to combine MACs and CBC-mode ciphers. Either you encrypt first and then MAC the ciphertext, or you MAC the plaintext first and then encrypt the whole thing. In TLS, they chose the latter, MAC-then-Encrypt, which turned out to be the wrong choice.</p><p>You can blame this choice for <a href="https://www.youtube.com/watch?v=-_8-2pDFvmg">BEAST</a>, as well as a slew of padding oracle vulnerabilities such as <a href="http://www.isg.rhul.ac.uk/tls/Lucky13.html">Lucky 13</a> and <a href="https://eprint.iacr.org/2015/1129">Lucky Microseconds</a>. Read my previous post on <a href="/padding-oracles-and-the-decline-of-cbc-mode-ciphersuites/">padding oracle attacks</a> for a comprehensive explanation of these flaws. The interaction between CBC mode and padding was also the cause of the widely publicized <a href="/sslv3-support-disabled-by-default-due-to-vulnerability/">POODLE vulnerability</a> in SSLv3 and some implementations of TLS.</p><p>RC4 is a classic stream cipher designed by Ron Rivest (the “R” of RSA) that was broadly supported since the early days of TLS. In 2013, it was found to have <a href="http://www.isg.rhul.ac.uk/tls/">measurable biases</a> that could be leveraged to allow attackers to decrypt messages.</p>
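<p>For contrast with TLS 1.2’s MAC-then-Encrypt choice, here is a sketch of the safer Encrypt-then-MAC ordering: the tag covers the ciphertext, so a receiver rejects forgeries before any decryption or padding check can leak information. This is an illustration of the construction, not TLS’s actual record format:</p>

```python
import hashlib
import hmac

MAC_LEN = 32  # SHA-256 output size

def seal(mac_key: bytes, ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: the tag is computed over the *ciphertext*, so a
    # receiver can reject tampered records before decrypting anything.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_record(mac_key: bytes, record: bytes) -> bytes:
    ciphertext, tag = record[:-MAC_LEN], record[-MAC_LEN:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("bad record MAC")
    return ciphertext  # decryption and padding checks would run only now

key = b"m" * 32
record = seal(key, b"opaque-ciphertext-bytes")
assert open_record(key, record) == b"opaque-ciphertext-bytes"
```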
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7A6uMNALwNJtA0GSKOkxOL/5c9692baac1ab202e0e4d7f79e4ae8f2/image2.png" />
            
            </figure><p>AEAD Mode</p><p>In TLS 1.3, all the troublesome ciphers and cipher modes have been removed. You can no longer use CBC-mode ciphers or insecure stream ciphers such as RC4. The only type of symmetric crypto allowed in TLS 1.3 is a new construction called <a href="/it-takes-two-to-chacha-poly/">AEAD (authenticated encryption with additional data)</a>, which combines encryption and integrity into one seamless operation.</p>
    <div>
      <h3>Fixing digital signatures</h3>
      <a href="#fixing-digital-signatures">
        
      </a>
    </div>
    <p>Another important part of TLS is authentication. In every connection, the server authenticates itself to the client using a digital certificate, which has a public key. In RSA-encryption mode, the server proves its ownership of the private key by decrypting the pre-main secret and computing a MAC over the transcript of the conversation. In Diffie-Hellman mode, the server proves ownership of the private key using a digital signature. If you’ve been following this blog post so far, it should be easy to guess that this was done incorrectly too.</p>
    <div>
      <h4>PKCS#1v1.5</h4>
      <a href="#pkcs-1v1-5">
        
      </a>
    </div>
    <p>Daniel Bleichenbacher has made a living identifying problems with RSA in TLS. In 2006, he devised a pen-and-paper attack against RSA signatures as used in TLS. It was later discovered that major TLS implementations including those of NSS and OpenSSL <a href="https://www.ietf.org/mail-archive/web/openpgp/current/msg00999.html">were vulnerable to this attack</a>. This issue again had to do with how difficult it is to implement padding correctly, in this case, the PKCS#1 v1.5 padding used in RSA signatures. In TLS 1.3, PKCS#1 v1.5 is removed in favor of the newer design <a href="https://en.wikipedia.org/wiki/Probabilistic_signature_scheme">RSA-PSS</a>.</p>
    <div>
      <h4>Signing the entire transcript</h4>
      <a href="#signing-the-entire-transcript">
        
      </a>
    </div>
    <p>We described earlier how the server uses a digital signature to prove that the key exchange hasn’t been tampered with. In TLS 1.2 and earlier, the server’s signature only covers part of the handshake. The other parts of the handshake, specifically the parts that are used to negotiate which symmetric cipher to use, are not signed by the private key. Instead, a symmetric MAC is used to ensure that the handshake was not tampered with. This oversight resulted in a number of high-profile vulnerabilities (FREAK, LogJam, etc.). In TLS 1.3 these are prevented because the server signs the entire handshake transcript.</p>
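<p>The idea of signing the whole transcript can be sketched as a running hash: every handshake message, including the cipher negotiation, is folded into one digest that the server’s signature covers. The message strings below are illustrative placeholders, not real TLS wire data:</p>

```python
import hashlib

# A running transcript hash: every handshake message, including the
# cipher negotiation, is folded into a single digest. The server's
# signature covers this digest, so tampering with any message breaks it.
# The message strings are placeholders, not real TLS handshake bytes.
transcript = hashlib.sha256()
for message in (b"ClientHello|ciphers=strong,weak",
                b"ServerHello|cipher=strong",
                b"Certificate|..."):
    transcript.update(message)

to_be_signed = transcript.digest()  # the server signs this value

# An attacker who strips "strong" from the client's cipher list produces
# a different transcript, so the server's signature no longer verifies.
tampered = hashlib.sha256(b"ClientHello|ciphers=weak").digest()
assert tampered != to_be_signed
```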
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2gJewngm3kPtCgXDvl7O4q/340d28439e4eaac4cd176359dfa19900/image1.png" />
            
            </figure><p>The FREAK, LogJam and CurveSwap attacks took advantage of two things:</p><ol><li><p>the fact that intentionally weak ciphers from the 1990s (called export ciphers) were still supported in many browsers and servers, and</p></li><li><p>the fact that the part of the handshake used to negotiate which cipher was used was not digitally signed.</p></li></ol><p>The on-path attacker can swap out the supported ciphers (or supported groups, or supported curves) from the client with an easily crackable choice that the server supports. They then break the key and forge two finished messages to make both parties think they’ve agreed on a transcript.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PHIoZN6sxu78eUdoLWcbz/4f4a1fbbff72785aa2be24c5c9872e8f/image13.png" />
            
            </figure><p>These attacks are called downgrade attacks, and they allow attackers to force two participants to use the weakest cipher supported by both parties, even if more secure ciphers are supported. In this style of attack, the perpetrator sits in the middle of the handshake and changes the list of supported ciphers advertised from the client to the server to only include weak export ciphers. The server then chooses one of the weak ciphers, and the attacker figures out the key with a brute-force attack, allowing the attacker to forge the MACs on the handshake. In TLS 1.3, this type of downgrade attack is impossible because the server now signs the entire handshake, including the cipher negotiation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/P4S0oZBnuJvkG23ljrAmN/a658f4a88dddcf2019fa22567e150a53/image14.png" />
            
            </figure>
    <div>
      <h3>Better living through simplification</h3>
      <a href="#better-living-through-simplification">
        
      </a>
    </div>
    <p>TLS 1.3 is a much more elegant and secure protocol with the removal of the insecure features listed above. This hedge-trimming allowed the protocol to be simplified in ways that make it easier to understand, and faster.</p>
    <div>
      <h4>No more take-out menu</h4>
      <a href="#no-more-take-out-menu">
        
      </a>
    </div>
    <p>In previous versions of TLS, the main negotiation mechanism was the ciphersuite. A ciphersuite encompassed almost everything that could be negotiated about a connection:</p><ul><li><p>type of certificates supported</p></li><li><p>hash function used for deriving keys (e.g., SHA1, SHA256, ...)</p></li><li><p>MAC function (e.g., HMAC with SHA1, SHA256, …)</p></li><li><p>key exchange algorithm (e.g., RSA, ECDHE, …)</p></li><li><p>cipher (e.g., AES, RC4, ...)</p></li><li><p>cipher mode, if applicable (e.g., CBC)</p></li></ul><p>Ciphersuites in previous versions of TLS had grown into monstrously large alphabet soups. Examples of commonly used cipher suites are: DHE-RC4-MD5 or ECDHE-ECDSA-AES-GCM-SHA256. Each ciphersuite was represented by a code point in a table maintained by an organization called the Internet Assigned Numbers Authority (IANA). Every time a new cipher was introduced, a new set of combinations needed to be added to the list. This resulted in a combinatorial explosion of code points representing every valid choice of these parameters. It had become a bit of a mess.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2W8o2jcAOgb3EUcGitVN8b/59d24160803d908ed4417494d57ea288/image8.png" />
            
            </figure><p>TLS 1.2</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/171emSynzRJ1MSpJv08jc1/18c165eb9c379b48418a5f25a47131e1/image16.png" />
            
            </figure><p></p><p>TLS 1.3</p><p>TLS 1.3 removes many of these legacy features, allowing for a clean split between three orthogonal negotiations:</p><ul><li><p>Cipher + HKDF Hash</p></li><li><p>Key Exchange</p></li><li><p>Signature Algorithm</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eHObkLXwOPPw9MEbaxSsc/8e20132ec65ebb83b1e43528711fe05d/image6.png" />
            
            </figure><p>This simplified cipher suite negotiation and radically reduced set of negotiation parameters opens up a new possibility: the TLS 1.3 handshake latency can drop from two round-trips to just one, providing the performance boost that will help TLS 1.3 become popular and widely adopted.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>When establishing a new connection to a server that you haven’t seen before, it takes two <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trips</a> before data can be sent on the connection. This is not particularly noticeable in locations where the server and client are geographically close to each other, but it can make a big difference on mobile networks where latency can be as high as 200ms, an amount that is noticeable for humans.</p>
    <div>
      <h3>1-RTT mode</h3>
      <a href="#1-rtt-mode">
        
      </a>
    </div>
    <p>TLS 1.3 now has a radically simpler cipher negotiation model and a reduced set of key agreement options (no RSA, no user-defined DH parameters). This means that every connection will use a DH-based key agreement and the parameters supported by the server are likely easy to guess (ECDHE with X25519 or P-256). Because of this limited set of choices, the client can simply choose to send DH key shares in the first message instead of waiting until the server has confirmed which key shares it is willing to support. That way, the server can learn the shared secret and send encrypted data one round trip earlier. Chrome’s implementation of TLS 1.3, for example, sends an X25519 keyshare in the first message to the server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3E3tuAB7cL1jf7HXHLegge/461301a79e282e3034a6acc1bb537e49/image3.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xa8AA1zO4jZ4yjPcIukcM/a29464a13527710055cd6031cae54c92/image15.png" />
            
            </figure><p>In the rare situation that the server does not support one of the key shares sent by the client, the server can send a new message, the HelloRetryRequest, to let the client know which groups it supports. Because the list has been trimmed down so much, this is not expected to be a common occurrence.</p>
    <div>
      <h3>0-RTT resumption</h3>
      <a href="#0-rtt-resumption">
        
      </a>
    </div>
    <p>A further optimization was inspired by the <a href="https://docs.google.com/document/u/1/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDwvZ5L6g/edit">QUIC protocol</a>. It lets clients send encrypted data in their first message to the server, resulting in no additional latency cost compared to unencrypted HTTP. This is a big deal, and once TLS 1.3 is widely deployed, the encrypted web is sure to feel much snappier than before.</p><p>In TLS 1.2, there are two ways to resume a connection, <a href="/tls-session-resumption-full-speed-and-secure/">session ids and session tickets</a>. In TLS 1.3 these are combined to form a new mode called PSK (pre-shared key) resumption. The idea is that after a session is established, the client and server can derive a shared secret called the “resumption main secret”. This can either be stored on the server with an id (session id style) or encrypted by a key known only to the server (session ticket style). This session ticket is sent to the client and redeemed when resuming a connection.</p><p>For resumed connections, both parties share a resumption main secret so key exchange is not necessary except for providing forward secrecy. The next time the client connects to the server, it can take the secret from the previous session and use it to encrypt application data to send to the server, along with the session ticket. Something as amazing as sending encrypted data on the first flight does come with its downsides.</p>
    <div>
      <h3>Replayability</h3>
      <a href="#replayability">
        
      </a>
    </div>
    <p>There is no interactivity in 0-RTT data. It’s sent by the client, and consumed by the server without any interactions. This is great for performance, but comes at a cost: replayability. If an attacker captures a 0-RTT packet that was sent to server, they can replay it and there’s a chance that the server will accept it as valid. This can have interesting negative consequences.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aI0oRVRPjH8lmqKPfo2Uu/66828a933209d66d8f8ac0db0d77d54d/0-rtt-attack-_2x.png" />
            
            </figure><p>An example of dangerous replayed data is anything that changes state on the server. If you increment a counter, perform a database transaction, or do anything that has a permanent effect, it’s risky to put it in 0-RTT data.</p><p>As a client, you can try to protect against this by only putting “safe” requests into the 0-RTT data. In this context, “safe” means that the request won’t change server state. In HTTP, different methods are supposed to have different semantics. HTTP GET requests are supposed to be safe, so a browser can usually protect HTTPS servers against replay attacks by only sending GET requests in 0-RTT. Since most page loads start with a GET of “/” this results in faster page load time.</p><p>Problems start to happen when data sent in 0-RTT are used for state-changing requests. To help prevent this failure case, TLS 1.3 also includes the time elapsed value in the session ticket. If this diverges too much, the client is either approaching the speed of light, or the value has been replayed. In either case, it’s prudent for the server to reject the 0-RTT data.</p><p>For more details about <a href="/introducing-0-rtt/">0-RTT, and the improvements to session resumption</a> in TLS 1.3, check out this previous blog post.</p>
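<p>A server-side freshness check along these lines can be sketched as follows — the function, field names, and tolerance are illustrative, not values or APIs from the TLS 1.3 specification:</p>

```python
import time

MAX_SKEW_SECONDS = 10.0  # illustrative tolerance, not a value from the spec

def accept_early_data(ticket_issued_at: float, client_ticket_age: float,
                      now: float) -> bool:
    # The server knows when it issued the session ticket; the client
    # reports how long it has held it. If the two disagree by too much,
    # the 0-RTT data may be a replay and should be rejected (the server
    # can still complete a normal 1-RTT handshake).
    expected_age = now - ticket_issued_at
    return abs(expected_age - client_ticket_age) <= MAX_SKEW_SECONDS

issued = time.time() - 5.0
assert accept_early_data(issued, 5.0, time.time())             # fresh
assert not accept_early_data(issued, 5.0, time.time() + 60.0)  # replayed later
```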
    <div>
      <h3>Deployability</h3>
      <a href="#deployability">
        
      </a>
    </div>
    <p>TLS 1.3 was a radical departure from TLS 1.2 and earlier, but in order to be deployed widely, it has to be backwards compatible with existing software. One of the reasons TLS 1.3 has taken so long to go from draft to final publication was the fact that some existing software (namely middleboxes) wasn’t playing nicely with the new changes. Even minor changes to the TLS 1.3 protocol that were visible on the wire (such as eliminating the redundant ChangeCipherSpec message, bumping the version from 0x0303 to 0x0304) ended up causing connection issues for some people.</p><p>Despite the fact that future flexibility was built into the TLS spec, some implementations made incorrect assumptions about how to handle future TLS versions. This phenomenon, in which deployed software constrains how a protocol can evolve, is called <i>ossification</i>, and I explore it more fully in the context of TLS in my previous post about <a href="/why-tls-1-3-isnt-in-browsers-yet/">why TLS 1.3 isn’t deployed yet</a>. To accommodate these ossified middleboxes, TLS 1.3 was modified to look a lot like TLS 1.2 session resumption (at least on the wire). This resulted in a much more functional, but less aesthetically pleasing protocol. This is the price you pay for upgrading one of the most widely deployed protocols online.</p>
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>TLS 1.3 is a modern security protocol built with modern tools like <a href="http://tls13tamarin.github.io/TLS13Tamarin/">formal</a> <a href="https://eprint.iacr.org/2016/081">analysis</a> that retains its backwards compatibility. It has been tested widely and iterated upon using real world deployment data. It’s a cleaner, faster, and more secure protocol ready to become the de facto two-party encryption protocol online. Draft 28 of TLS 1.3 is enabled by default for <a href="/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3/">all Cloudflare customers</a>, and we will be rolling out the final version soon.</p><p>Publishing TLS 1.3 is a huge accomplishment. It is one of the best recent examples of how it is possible to take 20 years of deployed legacy code and change it on the fly, resulting in a better Internet for everyone. TLS 1.3 has been debated and analyzed for the last three years and it’s now ready for prime time. Welcome, RFC 8446.</p> ]]></content:encoded>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2sBEBduE1Y7lYRV2e70E5m</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Privacy-Protecting Portable Router: Adding DNS-Over-TLS support to OpenWRT (LEDE) with Unbound]]></title>
            <link>https://blog.cloudflare.com/dns-over-tls-for-openwrt/</link>
            <pubDate>Mon, 09 Apr 2018 19:20:32 GMT</pubDate>
            <description><![CDATA[ This blog post explains how you can configure an OpenWRT router to encrypt DNS traffic to Cloudflare Resolver using DNS-over-TLS. ]]></description>
            <content:encoded><![CDATA[ <p><i>If you want to skip ahead to instructions, </i><a href="#settingupdnsovertls"><i>scroll to the next section</i></a><i>. But I, like a TLS handshake, am very verbose so please enjoy this opener.</i></p><p>Imagine this scenario - I'm at a restaurant and need to have a private phone conversation but unfortunately my phone's battery is drained. To get around this problem, I borrow my friend's phone and dial the number - to protect my privacy I walk outside. When I'm done with the call, I come back inside and return the phone.</p><p>Whilst the phone itself doesn't store the conversation I've had, it does have a log of the recently dialed number; if the friend from whom I borrowed the phone wanted to, they could easily see who I actually called - even if they don't specifically know the topic of conversation.</p><p>Sometimes, the data about who you've spoken to can tell an awful lot about the conversation - if someone was to call an emotional support hotline or a debt collector, you could probably infer a lot about the conversation from the caller ID.</p><p>When we browse the internet, we use encryption to try and protect the conversations we have. When you connect to a website over HTTPS, a green padlock lights up on your browser and lets you know that your conversation is encrypted such that it is computationally difficult for an adversary sitting between you and the website's server to see what you're talking about.</p><p>I've previously blogged about how, under certain circumstances, it is possible to <a href="/performing-preventing-ssl-stripping-a-plain-english-primer/">strip away this encryption</a> and the mitigations that websites can use to prevent this. 
Unfortunately, there is a far more fundamental problem for privacy online.</p><p>As is common IT knowledge, before your browser makes an HTTP connection to a website (say, cloudflare.com), your client needs to make a DNS query to work out the IP address where the HTTP connection should be made. The same is true for any other application layer protocol when you connect using a hostname instead of an IP address. For a primer on DNS, we have an article on <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">the basics of DNS</a> on our Learning Centre.</p>
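To make that lookup concrete, here is a minimal sketch (my own Python illustration, not part of the original post) of the DNS query packet a client builds before any HTTP connection can happen. Note how the hostname sits in the packet as readable bytes:

```python
import struct

def build_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query packet (RFC 1035). qtype 1 = A record."""
    header = struct.pack(
        ">HHHHHH",
        0x1234,   # transaction ID (arbitrary for illustration)
        0x0100,   # flags: standard query, recursion desired
        1,        # QDCOUNT: one question
        0, 0, 0,  # no answer/authority/additional records
    )
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = build_query("cloudflare.com")
# The hostname is plainly visible in the packet bytes, which is exactly
# what an on-path observer sees when DNS runs unencrypted over port 53.
assert b"\x0acloudflare\x03com\x00" in query
```

Sent over plain UDP, those bytes cross the network exactly as built, hostname and all.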
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6HlZXQ1B1RUBPioetMFMAu/2d903b592a7d53903b93a8c16db7d569/dns-lookup-diagram.png" />
            
            </figure><p>Whilst encryption technologies have been fairly long-standing for HTTP itself, only recently have such encryption techniques been standardised for DNS. Chances are, if you don't know if your DNS traffic is encrypted - it isn't.</p><p>In practice this means that when you connect to a website that uses HTTPS, even though your conversation is encrypted - someone able to intercept your connection is able to see what website you're looking for and (depending on how the site is secured) even manipulate the response to get you to communicate with a different server.</p><p>This is particularly useful for eavesdroppers; be they the network that's running the free Wi-Fi hotspot looking to sell your data to targeted advertisers or the hacker sipping on a latte whilst intercepting your network traffic (ironically dressed in a black hoodie and a balaclava).</p><p>By switching your DNS resolver to use <a href="https://1.1.1.1/">Cloudflare's DNS Resolver</a>, you get a faster browsing experience whilst ensuring that the people who run your DNS resolver aren't selling off that data to target you with ads. However, whilst Cloudflare Resolver supports both DNS-over-HTTPS and DNS-over-TLS, to make sure the connection between Cloudflare Resolver and you is encrypted, you may need to follow some additional configuration steps like enabling a <a href="https://developers.cloudflare.com/1.1.1.1/dns-over-https/cloudflared-proxy/">DNS over HTTPS client</a>.</p><p>This blog post explains how you can configure an OpenWRT router to encrypt outbound traffic to Cloudflare Resolver. This is particularly useful when you want to protect the traffic for the devices in your house which may not support encrypted DNS protocols, such as your TV or <a href="/iot-security-anti-patterns/">IoT enabled toaster</a>. Whilst local clients may still explicitly override your local DNS resolver on your router, many will default to using it.</p>
    <div>
      <h3>OpenWRT (LEDE)</h3>
      <a href="#openwrt-lede">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3LxdO125L6OSPxEmCZCgFu/6b6faeecf730ddd2d0cff6bb097d6e86/IMG-3335-1.JPG.jpeg" />
            
            </figure><p>Over the weekend, prior to writing this post, I ordered a new wireless router, the GL.iNet GL-AR750. This router has a very small form-factor, is marketed as a "Travel Router", and can act as a Wi-Fi repeater as well as a traditional Wi-Fi router. At its longest edge, the router itself is around the length of my index finger:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2sI1sP3XQaTzuWqFRFZ6Wy/7284e9bab35195f0f3abe380ba77d606/IMG-3360.JPG.jpeg" />
            
            </figure><p>I didn't just order this specific router because of its form-factor; it also comes pre-installed with OpenWRT - an embedded Linux-based operating system that's well suited for routers. In May 2016, OpenWRT was forked as LEDE (the Linux Embedded Development Environment) and was re-merged with the OpenWRT project in January 2018.</p><p>For those of you without a router with LEDE pre-installed, you can follow along with this blog post on any other router that supports being flashed with the OpenWRT firmware; more information can be found on the <a href="https://openwrt.org/supported_devices">OpenWRT Supported Devices</a> page. Though, please be aware that, depending on your device, this may carry some risk.</p>
    <div>
      <h3>Support for DNS-over-TLS (or, the lack of)</h3>
      <a href="#support-for-dns-over-tls-or-the-lack-of">
        
      </a>
    </div>
    <p>The router I'm playing with has a configuration option to set the upstream DNS resolver that it will use when a query isn't cached in its own internal resolver. This local resolver is then suggested to clients that connect to the router.</p><p>For the sake of experimentation - through the web UI, I am able to configure this router to use <code>1.1.1.1</code>, <code>1.0.0.1</code>, <code>2606:4700:4700::1111</code> and <code>2606:4700:4700::1001</code> as the upstream DNS servers (with the IPv6 addresses removed if the network doesn't support them):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2L5uK2C6pOpSC9EREcgcMh/0320649b7ee8b21641c9e1008e430b33/Screen-Shot-2018-04-09-at-13.15.07.png" />
            
            </figure><p>By connecting the router's <a href="https://www.cloudflare.com/learning/network-layer/what-is-a-wan/">WAN</a> port to my computer, I am able to use Wireshark to sniff traffic as it leaves the router, before it goes out to the actual WAN. When a DNS query isn't in my router's cache, it is forwarded to <code>1.1.1.1</code>. As my router is sending these queries unencrypted instead of using DNS-over-TLS, I am able to see these DNS queries being sent around the internet in unencrypted form:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7MwYdpfJPivptjLrWfJSqN/24674ce3f0431057f119ef1cb6220d57/dns_unencrypted.png" />
            
            </figure><p>Although Cloudflare Resolver supports DNS-over-TLS, unfortunately my router doesn't and will simply send all queries unencrypted.</p>
    <div>
      <h3>Setting Up DNS-Over-TLS</h3>
      <a href="#setting-up-dns-over-tls">
        
      </a>
    </div>
    <p>By default, LEDE comes with Dnsmasq pre-installed as its internal resolver, which doesn't support DNS-over-TLS. So that we can get our requests encrypted, we're going to replace Dnsmasq with Unbound and odhcpd. I've based the steps I'm following on the very useful <a href="https://github.com/openwrt/packages/tree/master/net/unbound/files#unbound-and-odhcpd">OpenWRT Unbound package documentation</a>.</p><p>Before we can get started, we need to <a href="https://www.cloudflare.com/learning/access-management/what-is-ssh/">SSH</a> into our router; if you're prompted for a password, this will likely be identical to the one you set up for the web portal:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4jwntgIewgDBLPjv2dlCra/a2e28851b4cac59151435fee144264bc/Screen-Shot-2018-04-09-at-13.06.26.png" />
            
            </figure><p>LEDE uses <code>opkg</code> as its package manager of choice. Firstly, let's update the package list, then install Unbound with Unbound-Control and the full version of odhcpd:</p>
            <pre><code>opkg update
opkg install unbound odhcpd unbound-control
opkg remove dnsmasq</code></pre>
            <p>Note that you can additionally install the Luci app for Unbound should you wish to control it with the standard user interface.</p>
            <pre><code>opkg install luci-app-unbound</code></pre>
            <p>As my router isn't currently running vanilla LEDE, its user interface won't be altered if I were to install this, and I haven't tested this module myself.</p><p>With Unbound in place, we can add some configuration to ensure Unbound uses <code>1.1.1.1</code>, <code>1.0.0.1</code>, <code>2606:4700:4700::1111</code> and <code>2606:4700:4700::1001</code> as the DNS resolvers with TLS encryption. I've done this by appending some configuration to <code>/etc/unbound/unbound_ext.conf</code> using Vim:</p>
            <pre><code>forward-zone:
  name: "."
  forward-addr: 1.1.1.1@853                   
  forward-addr: 1.0.0.1@853                             
  forward-addr: 2606:4700:4700::1111@853
  forward-addr: 2606:4700:4700::1001@853
  forward-ssl-upstream: yes   </code></pre>
            <p>In the Unbound configuration file at <code>/etc/config/unbound</code>, I've added some required configuration parameters as outlined in the package documentation. In my case, I backed up the configuration file and simply used the following:</p>
            <pre><code>config unbound
  option add_local_fqdn '1'
  option add_wan_fqdn '1'
  option dhcp_link 'odhcpd'
  option dhcp4_slaac6 '1'
  option domain 'lan'
  option domain_type 'static'
  option listen_port '53'
  option rebind_protection '1'
  option unbound_control '1'</code></pre>
            <p>If you do have additional parameters in the file, ensure that nothing overrides the parameters set - being especially cautious about the <code>unbound_control</code> parameter.</p><p>I've also merged the following configuration with <code>/etc/config/dhcp</code> (leaving some existing entries alone):</p>
            <pre><code>config dhcp 'lan'
        option dhcpv4 'server'
        option dhcpv6 'server'
        option interface 'lan'
        option leasetime '12h'
        option ra 'server'
        option ra_management '1'

config odhcpd 'odhcpd'
        option maindhcp '1'
        option leasefile '/var/lib/odhcpd/dhcp.leases'
        option leasetrigger '/usr/lib/unbound/odhcpd.sh'
...</code></pre>
            <p>Finally, we can enable autostart on Unbound and start it:</p>
            <pre><code>service unbound enable
service unbound start</code></pre>
            <p>Here's the proof of the pudding; when we intercept DNS queries between our router and the wider internet, we'll notice they are encrypted with TLS v1.2:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UhGWT8fE7lL2nOPxLxaEd/145dcb85616f1cc87dc9b9c3ff29d58c/dns_encrypted.png" />
            
            </figure>
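If you'd rather check from a client machine than sniff packets, here is a rough Python sketch (my own illustration, not from the original post) of the same kind of query sent over TLS to port 853. DNS over TCP, and therefore DNS-over-TLS, prefixes each message with its two-byte length; the certificate is validated against cloudflare-dns.com, one of the names the 1.1.1.1 deployment presents:

```python
import socket
import ssl
import struct

def build_query(hostname: str) -> bytes:
    """Minimal DNS A-record query (RFC 1035)."""
    header = struct.pack(">HHHHHH", 0x2222, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(l)]) + l.encode("ascii") for l in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def frame(message: bytes) -> bytes:
    """DNS over TCP/TLS prefixes each message with its 2-byte length."""
    return struct.pack(">H", len(message)) + message

def query_over_tls(hostname: str, resolver: str = "1.1.1.1", port: int = 853) -> bytes:
    context = ssl.create_default_context()
    with socket.create_connection((resolver, port), timeout=5) as sock:
        # server_hostname enables SNI and certificate validation
        with context.wrap_socket(sock, server_hostname="cloudflare-dns.com") as tls:
            tls.sendall(frame(build_query(hostname)))
            length = struct.unpack(">H", tls.recv(2))[0]
            # a robust client would loop until `length` bytes arrive
            return tls.recv(length)

# On a machine with outbound access to TCP port 853:
#   response = query_over_tls("cloudflare.com")
#   print(len(response), "bytes of DNS response, decrypted locally")
```

An observer on the path sees only a TLS session to port 853; the query name never appears on the wire in the clear.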
    <div>
      <h4>Conclusion</h4>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>In this blog post, we've discussed how encrypting your DNS traffic can help protect the privacy of your internet browsing. By replacing Dnsmasq with Unbound, we are able to let OpenWRT take advantage of DNS-over-TLS to encrypt our DNS traffic.</p> ]]></content:encoded>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">W44zpqS8lPDlJkmzRXMAY</guid>
            <dc:creator>Junade Ali</dc:creator>
        </item>
        <item>
            <title><![CDATA[Disruptive Cryptography: Post-Quantum & Machine Learning With Encrypted Data]]></title>
            <link>https://blog.cloudflare.com/disruptive-cryptography/</link>
            <pubDate>Thu, 14 Sep 2017 18:25:39 GMT</pubDate>
            <description><![CDATA[ Shay Gueron, Associate Professor of Mathematics, University of Haifa, Israel, and Raluca Ada Popa, Assistant Professor of Computer Science, UC Berkeley ]]></description>
            <content:encoded><![CDATA[ <p>Shay Gueron, Associate Professor of Mathematics, University of Haifa, Israel, and <a href="https://twitter.com/ralucaadapopa">Raluca Ada Popa</a>, Assistant Professor of Computer Science, UC Berkeley</p><p>Moderator: <a href="https://twitter.com/jgrahamc">John Graham-Cumming</a>, CTO, Cloudflare</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72G9FbjLaobqe4DURG9yhL/124bdfecac85b995ff052124cec23e8d/Screen-Shot-2017-09-14-at-10.18.40-AM.png" />
            
            </figure><p> Photo by Cloudflare Staff</p><p>Raluca is also a Co-Director of the RISELab at UC Berkeley as well as Co-Founder and CTO of a cybersecurity startup called PreVeil. She developed practical systems that protect data confidentiality by computing over encrypted data as well as designed new encryption schemes that underlie these systems.</p><p>Shay was previously a Senior Principal Engineer, serving as Intel’s Senior Cryptographer and is now senior principal at AWS, and an expert in post-quantum cryptography, security, and algorithms.</p><p>JGC: Tell us about what you actually do.</p><p>RP: Computing on encrypted data is not just theoretical; it’s also exciting because you can keep data encrypted in the cloud. It protects against hacking attacks while still enabling the functionality of the system. This is exciting because we can cover so many hacking attacks in one shot.</p><p>SG: I’ve been working on cryptography: making new algorithms and making it faster. Recently I’ve been thinking about solutions for what will happen when we have a quantum computer strong enough to threaten the known methods for cryptography.</p><p>JGC: Why are we worrying ahead of time?</p><p>SG: Protocols and implementations have been improved; performance on processors allows for most things to be encrypted. We are entering a stable situation. But right now, there is a new threat where there may be quantum computers that can solve difficult problems. 
This means that we need to start thinking about a replacement for the current cryptography.</p><p>RP: If someone is saving encrypted communications now, they could decrypt past conversations that could still be relevant in the future.</p><p>JGC: We don’t have the quantum computer yet but we already have the programs that will run on it.</p><p>SG: Cryptography is based on a belief in “reduction of a difficult problem.” All cryptography is based on a belief that something is difficult to do; based on this there are theoretical works that run “if… then”; but there is no robust proof that factorization is difficult, or that solving a particular problem is hard. We are just not smart enough yet.</p><p>JGC: Talk about this concept that there are classes of problems that are hard.</p><p>RP: There are classes of problems. There are many studies that people used to boost their confidence about specific algorithms.</p><p>JGC: Why can't we just make keys bigger to deal with the quantum threat?</p><p>SG: We have to be practical in some sense. The amount of traffic that occurs prior to encrypting data is significant. This causes computational burdens.</p><p>RP: Shor’s algorithm is particularly effective; it can break certain properties of RSA. This is not the same for symmetric cryptography, where increasing the key size is more hopeful.</p><p>JGC: So what are we going to fix today?</p><p>SG: When you establish communications, first we agree on crypto-ciphers. The symmetric key will be used for encryption based on algorithm and signatures. Signatures are more urgent. For the symmetric key encryption, we can start today, because the quantum algorithms can’t reverse the key.</p><p>JGC: Give us an idea of what kinds of things you can do without decrypting something?</p><p>RP: In theory, you can compute any function without decrypting. We can do specialized computations effectively and machine learning on encrypted data.</p><p>For instance: How can you do summation of encrypted data? 
You get encryption of the sum. It’s not difficult to do an encryption summation. There are practical examples: startups, doc sharing in email; there are many solutions for classes of computation that apply to products we are using today.</p><p>So there are services for all sorts of classes of computation out there.</p><p>SG: But some of those encryption systems also depend on difficulty of factorization.</p><p>JGC: How fast will it be before companies become “post-quantum certified”?</p><p>RP: For certain classes of computation it is happening quickly, but there are still many factors making that difficult. For specialized classes of computations, it should happen in the near future … hopefully within the next 5 years. Why? Because encrypted computation brings new functionality. E.g. sharing encrypted data across hospitals to measure effectiveness of cancer treatments and enable new studies.</p><p>Encrypted computation brings you new functionality. A lot of businesses can’t share data: for instance, medical companies - which means they cannot help their patients as effectively, so we’ll be able to do many more studies when we can enable this encrypted computation.</p><p>SG: There is a call for proposals by NIST for quantum-resistant algorithms. They estimate that this will be a 5-year process. Industry will have to start integrating; the safe way would be to do both: If you want to do a key exchange, you do the classical and the quantum-resistant one.</p><p>JGC: How long before we create a quantum computer?</p><p>SG: The question is how long it will take before they are strong enough… this will take some time. But there is a lot of motivation.</p><p>Quantum computing is not designed to break cryptography, but to do some good. Many industries and governments are trying to do this right now. It’s a race against the human mind.</p><p>JGC: One of the arguments against new cryptography is that it is slow. 
Are there costs?</p><p>RP: Certainly; what has sped up encryption is hardware implementations. There are already startups trying to build specialized hardware for advanced encryption.</p><p>RP: For the masses to enjoy acceleration, you would need quantum computers for the masses.</p><p>JGC: If there are quantum computers for the masses, what will I get?</p><p>SG: You can get better AI, faster searches.</p><p>JGC: Tell us about quantum encryption vs. quantum computing: for instance, the Chinese sending data between two satellites.</p><p>RP: You’d need a lot of quantum computers, but to break it you’d just need a few. A widespread adoption of quantum encryption is going to be much slower.</p><p>Q&amp;A: What is lattice-based cryptography?</p><p>Why do the two of your domains intersect?</p><p>RP: Lattices are much more expressive in terms of the computations they can do. Lattices are more resilient to quantum attackers and classical algorithms.</p><p>SG: We have no idea how to solve lattice problems, even if we had quantum computers. New cryptography is trying to solve these issues.</p><p>This is why the new cryptography is trying to build on these problems in the hopes we can come up with an algorithm.</p><p>A quantum computer is not going to allow you to do a million computations.</p><p>JGC: What would you like the audience to take away from this session?</p><p>RP: Mainly, encrypted computation is practical. There are actually practical solutions; it can enable new functionalities. Secondly, you can enable interesting studies (medical, financial) with encrypted computation.</p><p>SG: People shouldn’t worry about quantum-resistant encryption. 
We’re working on it.</p><p><b>Q&amp;A:</b></p><p>Q: What advice do you have for people who want to make cheap, future-proof “internet of things” devices?</p><p>SG: There is a set of algorithms that are known to be secure against quantum attacks. These are hash-based signatures. These are slow, but practical solutions. But in general, I’d like to say: Don’t lose any sleep over the threat of quantum computers; it will happen gradually. There is still time to prepare.</p><p>RP: I agree; but do start thinking about it. First get Internet of Things right; then worry about the quantum part.</p><p>Q: What are some primitives that are missing in programming languages that would allow you to build easily? How do you balance security with programming?</p><p>RP: We have some libraries; there are some.</p><p>Q: What do you think of the quantum-resistant crypto put into the Chrome browser’s TLS stack? Will secrets stand up to a quantum computer?</p><p>SG: This experiment was already performed by Google. They wanted to test what would happen to the overhead. This particular algorithm was just an exercise to see what would happen in reality, if you do both classical and quantum-safe key exchange. Conclusion: yes, we can handle it.</p><p>All our sessions will be streamed live! If you can't make it to Summit, here's the link: <a href="http://www.cloudflare.com/summit17">cloudflare.com/summit17</a></p> ]]></content:encoded>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Internet Summit]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">1BYQ8dCGIJD1a36awJyG2s</guid>
            <dc:creator>Internet Summit Team</dc:creator>
        </item>
        <item>
            <title><![CDATA[TLS nonce-nse]]></title>
            <link>https://blog.cloudflare.com/tls-nonce-nse/</link>
            <pubDate>Wed, 12 Oct 2016 15:05:00 GMT</pubDate>
            <description><![CDATA[ One of the base principles of cryptography is that you can't just encrypt multiple messages with the same key. At the very least, what will happen is that two messages that have identical plaintext will also have identical ciphertext, which is a dangerous leak.  ]]></description>
            <content:encoded><![CDATA[ <p>One of the base principles of cryptography is that you can't <i>just</i> encrypt multiple messages with the same key. At the very least, what will happen is that two messages that have identical plaintext will also have identical ciphertext, which is a dangerous leak. (This is similar to why you can't encrypt blocks with <a href="https://blog.filippo.io/the-ecb-penguin/">ECB</a>.)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7e62kSNRA1U4fnYOyOzcnf/f7170ef03a53336e04f0ce640fe779fe/19fq1n.jpg" />
            
            </figure><p>If you think about it, a pure encryption function is just like any other pure computer function: deterministic. Given the same set of inputs (key and message) it will always return the same output (the encrypted message). And we don't want an attacker to be able to tell that two encrypted messages came from the same plaintext.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ypDxOdFtrCyJKATmJ8MrZ/6993281f817e294e4c93aded559575b4/Nonces.001-1.png" />
            
            </figure><p>The solution is the use of IVs (Initialization Vectors) or nonces (numbers used once). These are byte strings that are different for each encrypted message. They are the source of the non-determinism that is needed to make duplicates indistinguishable. They are usually not secret, and are distributed prepended to the ciphertext, since they are necessary for decryption.</p><p>The distinction between IVs and nonces is controversial and not binary. Different encryption schemes require different properties to be secure: some just need them to never repeat, in which case we commonly refer to them as nonces; some also need them to be random, or even unpredictable, in which case we commonly call them IVs.</p>
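To see the problem concretely, here is a toy stream cipher in Python, a sketch built from SHA-256 in counter mode purely for illustration (never roll your own cipher like this). With the nonce held fixed, encryption is deterministic and equal plaintexts leak equal ciphertexts; a fresh nonce per message removes the leak:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 over key || nonce || counter. Illustration only."""
    blocks = []
    while sum(len(b) for b in blocks) < length:
        counter = len(blocks)
        blocks.append(
            hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        )
    return b"".join(blocks)[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"0" * 32
msg = b"attack at dawn"

# Same key, same nonce, same message: identical ciphertext, a visible leak.
assert encrypt(key, b"\x00", msg) == encrypt(key, b"\x00", msg)

# A fresh nonce per message makes the duplicates indistinguishable.
assert encrypt(key, b"\x01", msg) != encrypt(key, b"\x02", msg)

# XOR is its own inverse, so decryption reuses the same function.
assert encrypt(key, b"\x01", encrypt(key, b"\x01", msg)) == msg
```

The same sketch also shows why the nonce can travel in the clear: it feeds only the keystream derivation, and without the key it tells an attacker nothing.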
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3zLMUUtK3IC6m9cEqDEeN8/b1e3f972e89120f0251a86eb9b9043dd/Nonces.002-1.png" />
            
            </figure>
    <div>
      <h3>Nonces in TLS</h3>
      <a href="#nonces-in-tls">
        
      </a>
    </div>
    <p>TLS at its core is about encrypting a stream of packets, or more properly "records". The initial handshake takes care of authenticating the connection and generating the keys, but then it's up to the record layer to encrypt many records with that same key. Enter nonces.</p><p>Nonce management can be a hard problem, but TLS is near to the best case: keys are never reused across connections, and the records have sequence numbers that both sides keep track of. However, it took the protocol a few revisions to fully take advantage of this.</p><p>The resulting landscape is a bit confusing (including one or two attack names):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SKHBZfapSldYN2czxNNuH/7f2d30dd41d885dc13b76d9ac6668c35/Nonces-table.png" />
            
            </figure>
    <div>
      <h4>RC4 and stream ciphers</h4>
      <a href="#rc4-and-stream-ciphers">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/VnXTZTwdpRaoyqySKVcDn/ae6e025045313ccfc7da498555196386/Nonces-RC4-black.png" />
            
            </figure><p>RC4 is a stream cipher, so it doesn't have to treat records separately. The cipher generates a continuous keystream which is XOR'd with the plaintexts as if they were just portions of one big message. Hence, there are no nonces.</p><p>RC4 <a href="/tag/rc4/">is broken</a> and was removed from TLS 1.3.</p>
    <div>
      <h4>CBC in TLS 1.0</h4>
      <a href="#cbc-in-tls-1-0">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4qMGKsTc0nc47WCKPrCvjk/f5dbe96ec072da2153b4549c5094d454/Nonces-CBC-1.0-black-1.png" />
            
            </figure><p>CBC in TLS 1.0 works similarly to RC4: the cipher is instantiated once, and then the records are encrypted as part of one continuous message.</p><p>Sadly that means that the IV for the next record <a href="https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher_Block_Chaining_.28CBC.29">is the last block of ciphertext of the previous record</a>, which the attacker can observe. Being able to predict the IV breaks CBC security, and that led to the <a href="https://www.imperialviolet.org/2011/09/23/chromeandbeast.html">BEAST attack</a>. BEAST is mitigated by <a href="https://www.imperialviolet.org/2012/01/15/beastfollowup.html">splitting records in two</a>, which effectively randomizes the IV, but this is a client-side fix, out of the server control.</p>
    <div>
      <h4>CBC in TLS 1.1+</h4>
      <a href="#cbc-in-tls-1-1">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5OaNhFttOtRZjYfEgeIq6v/e7a957af57c1116317a13befa7d5f6b6/Nonces-CBC-explicit-black.png" />
            
            </figure><p>TLS 1.1 fixed BEAST by simply making IVs explicit, sending the IV with each record (with the network overhead that comes with that).</p><p>AES-CBC IVs are 16 bytes (128 bits), so using random bytes is sufficient to prevent collisions.</p><p>CBC has <a href="/yet-another-padding-oracle-in-openssl-cbc-ciphersuites/">other nasty design issues</a> and has been removed in TLS 1.3.</p>
    <div>
      <h4>TLS 1.2 GCM</h4>
      <a href="#tls-1-2-gcm">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bbl3iFzHYooGKXgbGZYCo/389f3e082c1097f97933391d8c7ed9fd/Nonces-GCM-black-2.png" />
            
            </figure><p>TLS 1.2 inherited the 1.1 explicit IVs. It also introduced <a href="/it-takes-two-to-chacha-poly/">AEADs</a> like AES-GCM. The record nonce in 1.2 AES-GCM is a concatenation of a fixed per-connection IV (4 bytes, derived at the same time as the key) and an explicit per-record nonce (8 bytes, sent on the wire).</p><p>Since <a href="https://en.wikipedia.org/wiki/Birthday_problem">8 random bytes is too short to guarantee uniqueness</a>, 1.2 GCM implementations have to use the sequence number or a counter. If you are thinking "but what sense does it make to use an explicit IV, sent on the wire, which is just the sequence number that both parties know anyway", well... yeah.</p><p>Implementations not using a counter/sequence-based AES-GCM nonce were found to be indeed vulnerable by the "<a href="https://github.com/nonce-disrespect/nonce-disrespect">Nonce-Disrespecting Adversaries</a>" paper.</p>
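The 1.2 AES-GCM nonce construction can be sketched as follows (illustrative Python; the 4-byte salt value is made up, in a real connection it comes out of the key derivation alongside the key):

```python
import struct

def tls12_gcm_nonce(fixed_iv: bytes, seq: int) -> bytes:
    """TLS 1.2 AES-GCM: 4-byte per-connection fixed IV || 8-byte explicit part.

    Sensible implementations use the record sequence number as the explicit
    part, which is also sent on the wire with every record.
    """
    assert len(fixed_iv) == 4
    explicit = struct.pack(">Q", seq)  # 8-byte explicit nonce, sent in the record
    return fixed_iv + explicit         # 12-byte AES-GCM nonce

fixed_iv = bytes.fromhex("deadbeef")  # example value only

# The fixed part is shared by every record; the explicit part must never repeat.
assert len(tls12_gcm_nonce(fixed_iv, 0)) == 12
assert tls12_gcm_nonce(fixed_iv, 0) != tls12_gcm_nonce(fixed_iv, 1)
```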
    <div>
      <h4>TLS 1.3</h4>
      <a href="#tls-1-3">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6X6R46MkjuADDzdspGpNAp/313ba89f66a1a304c169d1e9cd801eaf/Nonces-1.3-black-1.png" />
            
            </figure><p>TLS 1.3 finally took advantage of the sequential nature of TLS records and removed the free-form explicit IVs. It uses instead a combination of a fixed per-connection IV (derived at the same time as the key) and the sequence number, XORed—not concatenated.</p><p>This way the entire nonce length is random-looking, nonces can never be reused as the sequence number monotonically increases, and there is no network overhead.</p>
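The 1.3 construction, following the per-record nonce computation in RFC 8446, section 5.3, can be sketched as (illustrative Python; the IV value is a placeholder):

```python
def tls13_nonce(write_iv: bytes, seq: int) -> bytes:
    """TLS 1.3 per-record nonce: left-pad the 64-bit sequence number to the
    IV length (12 bytes for AES-GCM and ChaCha20) and XOR it with write_iv."""
    padded = seq.to_bytes(len(write_iv), "big")
    return bytes(iv ^ s for iv, s in zip(write_iv, padded))

write_iv = bytes(range(12))  # placeholder; really derived from the traffic secret

# Every record gets a unique nonce because the sequence number only increases...
assert tls13_nonce(write_iv, 0) != tls13_nonce(write_iv, 1)

# ...and both sides already track the sequence number, so nothing extra
# needs to be sent on the wire: XORing with the IV recovers it exactly.
assert int.from_bytes(tls13_nonce(write_iv, 7), "big") ^ int.from_bytes(write_iv, "big") == 7
```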
    <div>
      <h4>ChaCha20-Poly1305</h4>
      <a href="#chacha20-poly1305">
        
      </a>
    </div>
    <p>The <a href="/do-the-chacha-better-mobile-performance-with-cryptography/">ChaCha20-Poly1305 ciphersuite</a> uses the same "fixed IV XORed with the sequence number" scheme of TLS 1.3 even when used in TLS 1.2.</p><p>While 1.3 AEADs and 1.2 ChaCha20 use the same nonce scheme, when used in 1.2 ChaCha20 still puts the sequence number, type, version and length in the additional authenticated data. 1.3 makes all those either implicit or part of the encrypted payload.</p>
    <div>
      <h3>To recap</h3>
      <a href="#to-recap">
        
      </a>
    </div>
    <ul><li><p>RC4 is a stream cipher, so it has no per-record nonce.</p></li><li><p>CBC in TLS 1.0 used to work similarly to RC4. Sadly, that was vulnerable to BEAST.</p></li><li><p>TLS 1.1 fixed BEAST by simply making IVs explicit and random.</p></li><li><p>TLS 1.2 AES-GCM uses a concatenation of a fixed IV and an explicit sequential nonce.</p></li><li><p>TLS 1.3 finally uses a simple fixed IV XORed with the sequence number.</p></li><li><p>ChaCha20-Poly1305 uses the same scheme as TLS 1.3 even when used in TLS 1.2.</p></li></ul>
    <div>
      <h2>Nonce misuse resistance</h2>
      <a href="#nonce-misuse-resistance">
        
      </a>
    </div>
    <p>In the introduction we used the case of a pair of identical message and key to illustrate the most intuitive issue of missing or reused nonces. However, depending on the cipher, other things can go wrong when the same nonce is reused, or is predictable.</p><p>A repeated nonce often entirely breaks the security properties of the connection. For example, AES-GCM <a href="https://github.com/nonce-disrespect/nonce-disrespect">leaks the authentication key altogether</a>, allowing an attacker to fake packets and inject data.</p><p>As part of the trend of making cryptographic primitives less dangerous for implementers to use, research is focusing on mitigating the adverse consequences of nonce reuse. The property of these new schemes is called <a href="https://www.lvh.io/posts/nonce-misuse-resistance-101.html">Nonce Reuse Resistance</a>.</p><p>However, these schemes have yet to see wide adoption and standardization, which is why a solid protocol design like the one in TLS 1.3 is critical to prevent this class of attacks.</p><p><i>Does painting overviews of technical topics like this sound satisfying to you? </i><a href="https://www.cloudflare.com/join-our-team/"><i>We are hiring in London, Austin (TX), Champaign (IL), San Francisco and Singapore</i></a><i>!</i></p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">1xqvUPkstNMoVVXMQ3fy5C</guid>
            <dc:creator>Filippo Valsorda</dc:creator>
        </item>
        <item>
            <title><![CDATA[Padding oracles and the decline of CBC-mode cipher suites]]></title>
            <link>https://blog.cloudflare.com/padding-oracles-and-the-decline-of-cbc-mode-ciphersuites/</link>
            <pubDate>Fri, 12 Feb 2016 14:00:25 GMT</pubDate>
            <description><![CDATA[ At CloudFlare, we’re committed to making sure the encrypted web is available to everyone, even those with older browsers. At the same time, we want to make sure that as many people as possible are using the most modern and secure encryption available to them.  ]]></description>
            <content:encoded><![CDATA[ <p>At CloudFlare, we’re committed to making sure the encrypted web is available to everyone, even those with older browsers. At the same time, we want to make sure that as many people as possible are using the most modern and secure encryption available to them. Improving the cryptography used by the majority requires a coordinated effort between the organizations building web browsers and API clients and those working on web services like CloudFlare. Cryptography is a two-way street. Even if we support the most secure cryptographic algorithms for our customers, web visitors won’t get the benefit unless their web client supports the same algorithms.</p><p>In this blog post we explore the history of one widely used cryptographic mode that continues to cause problems: cipher block chaining (CBC). We’ll explain why CBC has proven difficult to use safely, and how recent trends in the adoption of secure ciphers by web clients have helped reduce the web’s reliance on this technology. From CloudFlare’s own data, we’ve seen the percentage of web clients that support safer cipher modes (such as AEAD) rise from under 50% to over 70% in six months, a good sign for the Internet.</p>
    <div>
      <h3>What’s in a block cipher?</h3>
      <a href="#whats-in-a-block-cipher">
        
      </a>
    </div>
    <p>Ciphers are usually grouped into two categories: stream ciphers and block ciphers. Stream ciphers encrypt data on a bit-by-bit basis. Plaintext and ciphertext are always the same length. Examples of pure stream ciphers are RC4 and ChaCha20. Although RC4 is <a href="/killing-rc4-the-long-goodbye/">no longer considered secure</a>, we can still rely on <a href="/do-the-chacha-better-mobile-performance-with-cryptography/">ChaCha20</a> as a secure stream cipher for use on the web, but it was only <a href="https://tools.ietf.org/html/rfc7539">recently standardized</a> by the IETF and therefore does not have broad adoption.</p><p>Unlike stream ciphers, which can encrypt data of any size, block ciphers can only encrypt data in "blocks" of a fixed size. Examples of block ciphers are DES (8-byte blocks) and AES (16-byte blocks). To encrypt data that is less than one block long using a block cipher, you have several options. You can either turn the block cipher into a stream cipher (using something called counter mode, more on this later), or you can include extra bytes as padding to align the data to the block size. If the data is longer than one block, then the data needs to be split into multiple blocks that are encrypted separately. This splitting process is where things get tricky.</p><p>The naïve approach to encrypting data larger than the block size is called Electronic Code Book (ECB) mode. In ECB mode, you split your data into chunks that match the cipher’s block size and then encrypt each block with the same key. ECB turns out to be a very bad way to encrypt most kinds of data: if the data you are encrypting has redundant portions, say an image with many pixels of the same color, you end up with the "Tux" problem (demonstrated below). If two blocks have the same value, they will be encrypted to the same value. 
This property lets an attacker know which plaintext blocks match by looking at the ciphertext blocks.</p><p>For example, here’s what a high-resolution version of Linux’s "Tux" looks like when encrypted in ECB mode:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ldIhS9beqvxDI1lihAdfd/95648d88100a22d175cd00e9406d6ce7/image_0-1.png" />
            
            </figure><p> </p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aje0C12YM6dQ4tg3CKBWA/4b37b8710f8706285fd9c1b1d95a3858/image_1-1.png" />
            
            </figure><p>Image from <a href="https://blog.filippo.io/the-ecb-penguin/">Filippo Valsorda’s blog</a></p><p>The fact that identical plaintext blocks are encrypted to identical ciphertext blocks gives an unwanted structure to encrypted data that reveals information about the plaintext.</p><p>One solution to this is to "chain" blocks together by taking the output of one encryption and mixing it into the input for the next block. There are several block cipher modes, but the one that was originally standardized in SSL (and continues to be used in TLS) is Cipher Block Chaining (CBC). In CBC, the plaintext of one block is combined with the ciphertext of the previous block using the <a href="https://en.wikipedia.org/wiki/Exclusive_or">exclusive OR operation (XOR)</a>. The first block is XOR’d with a randomly generated initialization vector (IV).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29h249NOwtpzKb63SZZ7FJ/6d55df75895d7b1bdfb0e0913b75960c/image_2-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kem6tYzLTZZU1UgDnjLXl/f816414bd7f226522627880631a88c32/image_3-1.png" />
            
            </figure><p>Decryption works by XORing the previous block of ciphertext (or the IV) into the output of the decryption.</p><p>CBC has some nice properties. The ciphertext produced by a block cipher is encrypted, so it (hopefully) looks random. In CBC, you’re mixing this random-looking encrypted data into the plaintext, making it very unlikely that there will be patterns in the output. Another advantage is that decryption can be done in parallel to speed things up. However, CBC has proven to be more trouble than expected when used in the context of HTTPS and the web.</p>
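To make the ECB-vs-CBC difference visible without pulling in a crypto library, here is a toy sketch. The "block cipher" is a keyed, truncated SHA-256 (our own stand-in; it is not a real cipher, and not even invertible), but it is enough to show why ECB leaks repeated blocks and CBC hides them:

```python
import hashlib

BLOCK = 16

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: a keyed hash, truncated to one block.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, pt: bytes) -> bytes:
    # ECB: every block is encrypted independently with the same key.
    return b"".join(toy_encrypt_block(key, pt[i:i + BLOCK])
                    for i in range(0, len(pt), BLOCK))

def cbc_encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    # CBC: each plaintext block is XORed with the previous ciphertext
    # block (or the IV) before being encrypted.
    out, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        mixed = bytes(a ^ b for a, b in zip(pt[i:i + BLOCK], prev))
        prev = toy_encrypt_block(key, mixed)
        out.append(prev)
    return b"".join(out)

key, iv = b"k" * BLOCK, b"\x01" * BLOCK
pt = b"A" * 32                            # two identical plaintext blocks
ecb_ct = ecb_encrypt(key, pt)
cbc_ct = cbc_encrypt(key, iv, pt)
assert ecb_ct[:BLOCK] == ecb_ct[BLOCK:]   # ECB leaks the repetition
assert cbc_ct[:BLOCK] != cbc_ct[BLOCK:]   # CBC hides it
```

This is exactly the "Tux" problem above: under ECB, equal input blocks produce equal output blocks, while CBC's chaining breaks the pattern.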
    <div>
      <h3>How records are encrypted in TLS</h3>
      <a href="#how-records-are-encrypted-in-tls">
        
      </a>
    </div>
    <p>TLS provides both encryption—via a cipher—and integrity—via a message authentication code (MAC). When SSL was originally designed, one open question was: should we authenticate the plaintext data, or should we encrypt and then authenticate the encrypted data? This is sometimes stated as MAC-then-encrypt versus encrypt-then-MAC. They chose MAC-then-encrypt (encrypt the authenticated data), which has since proven to be a less than ideal choice.</p><p>In cryptographic protocol design, leaving some bytes unauthenticated can lead to unexpected weaknesses (this is known as the <a href="http://www.thoughtcrime.org/blog/the-cryptographic-doom-principle/">Cryptographic Doom Principle</a>). When encrypting data using a block cipher mode like CBC, the last block needs to be padded with extra bytes to align the data to the block size. In TLS, this padding comes <i>after</i> the MAC. (There is a TLS extension, described in <a href="https://tools.ietf.org/html/rfc7366">RFC 7366</a>, that enables encrypt-then-MAC, but it’s rarely implemented.)</p><p>A TLS record has the following format. Each record is associated with an 8-byte sequence number that is incremented for each new record. The encrypted part of a record needs to add up to a multiple of the block size (16 bytes for AES), but for the purposes of this post, let’s assume that this length is 64 bytes (4 blocks).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5yDZT85RQiinE1PrBqFjPu/23b750a8cea7e9b614001b0b117c303b/image_4-1.png" />
            
            </figure><p>In TLS, valid padding looks like a number preceded by that number of copies of itself. So, if the number is 0x00, it’s repeated 0 times:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3K8Q5akKuJ9uglGdrO0of0/56a75c94e4675ac193ad21c07a5382ef/image_5.png" />
            
            </figure><p>If the number is 0x02, it’s repeated 0x02 times:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2q5xGLGDRQIkKY7SeygAXT/06e34bab25a27093fffc41801ce65298/image_6.png" />
            
            </figure><p>To decode a record, decrypt the entire message, look at the last byte, remove it, and then remove that many bytes of padding. This gives you the location of the HMAC.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/62K6h6eN4DElF3fLwHHHpo/b63e8339a1d90a57a7c016b39c5ced41/image_7.png" />
            
            </figure><p>To compute the MAC, take the sequence number, the 5-byte header, and the message, then HMAC them using the shared integrity key.</p>
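Putting the padding and MAC rules together, here is a deliberately naive sketch of record parsing (`split_record` and `mac_len` are our names, not from any TLS stack, and a real implementation must do this in constant time):

```python
def split_record(decrypted: bytes, mac_len: int = 20):
    """Recover (message, mac) from a decrypted CBC record.

    TLS padding: the last byte is a number N, preceded by N copies of
    itself, so N + 1 bytes are stripped in total. The MAC sits just
    before the padding."""
    pad = decrypted[-1]
    if len(decrypted) < pad + 1 + mac_len:
        raise ValueError("bad padding")
    if any(b != pad for b in decrypted[-(pad + 1):]):
        raise ValueError("bad padding")
    body = decrypted[:-(pad + 1)]
    return body[:-mac_len], body[-mac_len:]
```

The two distinct `ValueError` paths here are precisely the kind of observable difference that the padding oracle attack in the next section exploits.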
    <div>
      <h3>Padding oracle</h3>
      <a href="#padding-oracle">
        
      </a>
    </div>
    <p>The problem with this construction is that it is susceptible to a technique called the padding oracle attack. This attack was first reported against TLS by <a href="https://www.iacr.org/archive/eurocrypt2002/23320530/cbc02_e02d.pdf">Serge Vaudenay in 2002</a>. A padding oracle is a way for an attacker with the ability to modify ciphertext sent to a server to extract the value of the plaintext.</p><p>Attackers don’t have to be an ISP or a government to get in the middle of requests. If they are on the same local network as their victim they can use a technique called <a href="https://en.wikipedia.org/wiki/ARP_spoofing">ARP spoofing</a>. By tricking the victim’s machine to forward data to the attacker’s machine instead of the router, they can read, modify and measure the time it takes for every encrypted message sent from the browser to the server. By injecting JavaScript into an unencrypted website the client is visiting, they can get the browser to repeatedly send requests to a target HTTPS site. These requests contain valuable data such as login cookies and CSRF tokens.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5tmdPDX0abF8OR6knSLI43/578b3c6b3e6de5a30cf9c8fa28168586/image_8.png" />
            
            </figure><p>If the TLS server behaves differently when decrypting ciphertext that has correct padding vs. incorrect padding, the attacker can carefully craft ciphertexts that provide enough information to reveal the plaintext data.</p><p>This kind of sounds like magic, but it really isn’t. Given a ciphertext, there are three possible ways for the <i>decrypted data</i> to look:</p><ol><li><p>Invalid padding</p></li><li><p>Valid padding, wrong HMAC</p></li><li><p>Valid padding, correct HMAC</p></li></ol><p>Originally, the TLS server would return a different error code for cases 1 and 2. Using the structure of CBC, an attacker can construct 256 ciphertexts whose last bytes decrypt to the numbers 0x00 to 0xFF. By looking at the error code, the attacker can tell which one of those ciphertexts decrypted to the value 0x00, a valid 0-byte padding.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4sKsagADnsCv5Afapt85os/ed295c442069fddc67cc963ec6dc883a/image_9.png" />
            
            </figure><p>With that knowledge, the attacker can then construct 256 guesses where the last byte is 0x01 and the second-to-last byte covers 0-255. The error code lets the attacker know which one decrypts to 0x01, causing the last two bytes to be 0x01 0x01, another valid padding. This process can be continued to decrypt the entire message.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/NqaZzOWhxXJiUH6FxDTOR/e833ee322f16a7e741bdf87f22652dca/image_10.png" />
            
            </figure><p>The fact that the padding is unauthenticated and that an attacker can tell the difference between a correct and incorrect guess makes this attack very powerful. This error code provides a tiny bit of information about the plaintext to the attacker for each decryption. This is what’s called a side-channel, information leakage that is visible to a third party.</p><p>The original Vaudenay attack was mitigated by returning the same error code for both cases 1 and 2; however, the attack was revived again using another side channel: <a href="https://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf">timing</a>.</p><p>Consider the amount of work a server has to do in each of these cases:</p><ol><li><p>Invalid padding – Read the padding bytes</p></li><li><p>Valid padding, wrong HMAC – Read the padding bytes, compute the HMAC</p></li><li><p>Valid padding, correct HMAC – Read the padding bytes, compute the HMAC</p></li></ol><p>In Vaudenay’s original attack, the error code gave away the difference between scenarios 1 and 2. In the timing version of the padding oracle, the attacker measures how much time it takes for the server to respond. A fast response means scenario 1 and slow responses mean scenario 2 or 3. Timing attacks are subject to some jitter due to the fact that computers are complex and sometimes requests just take longer than others. However, given enough attempts you can use statistics to determine which case you are in. Once the attacker has this oracle, the full plaintext can be decrypted using the same steps as the original Vaudenay attack described above.</p><p>To fix the timing attack, TLS implementations were changed to perform the HMAC even if the padding is invalid. Now every time invalid padding is found in a decrypted ciphertext, the server would assume zero padding and perform a dummy HMAC on all data. The amount of time spent should be constant for cases 1, 2 and 3. Or so we thought.</p>
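To make the mechanics concrete, here is a self-contained toy simulation of the attack's first step. The "block cipher" is just an XOR with a secret key (our own stand-in), which is fine for illustration because the attack depends only on the CBC structure and the oracle's yes/no answer, never on the cipher's internals:

```python
import os

BLOCK = 16
KEY = os.urandom(BLOCK)  # the server's secret

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stand-in for a block cipher.
def enc_block(b): return xor(b, KEY)
def dec_block(b): return xor(b, KEY)

def cbc_encrypt(iv: bytes, pt: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        prev = enc_block(xor(pt[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

def padding_oracle(iv: bytes, ct: bytes) -> bool:
    # The server decrypts and reports only whether the padding is
    # valid -- exactly the side channel leaked by the error codes.
    pt, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        pt += xor(dec_block(ct[i:i + BLOCK]), prev)
        prev = ct[i:i + BLOCK]
    pad = pt[-1]
    return len(pt) > pad and all(b == pad for b in pt[-(pad + 1):])

pt = b"SecretCookie=42!"           # one block the attacker wants to read
iv = os.urandom(BLOCK)
ct = cbc_encrypt(iv, pt)

# Tamper with the last byte of the previous block (here the IV) until
# the oracle reports a valid 0x00 padding; that reveals pt[-1].
recovered = None
for guess in range(256):
    forged_iv = iv[:-1] + bytes([guess])
    if padding_oracle(forged_iv, ct):
        recovered = guess ^ iv[-1]
        break

assert recovered == pt[-1]         # last byte recovered without the key
```

Repeating this process byte by byte, block by block, decrypts the whole record, just as in the Vaudenay attack described above.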
    <div>
      <h3>Getting lucky</h3>
      <a href="#getting-lucky">
        
      </a>
    </div>
    <p>As it turns out, it’s very difficult to find a heuristic that takes a branching program and makes all branches take the same amount of time. In 2013, Nadhem AlFardan and Kenny Paterson <a href="http://www.isg.rhul.ac.uk/~kp/dtls.pdf">found</a> that there was still a timing oracle in TLS, based on the fact that HMAC takes a different amount of time to compute based on how much data is being MAC’d. They called their attack Lucky 13.</p><p>What makes Lucky 13 possible is the fact that an HMAC of 55 bytes takes less time than an HMAC of 56 bytes, and 55 - 13 (of extra data) + 20 (of MAC) = 62, which is… luckily close to a block size multiple (64). This can be exploited to give a small timing difference that can be used as a decryption oracle.</p><p>Say the decryption is 64-bytes long (4 AES blocks), then the data can look like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dok1EJbm9r5dBsmSkXYYn/75114e8ca806450fc9efd47582dafd8f/image_11.png" />
            
            </figure><p>By picking every combination of the last two bytes, you can identify which pair of values results in the padding 0x01 0x01, which can be identified by a slightly faster operation on the server. With this information in hand, you can jump ahead to step 2 of the padding oracle. Guessing two bytes at once takes more guesses (around 2<sup>16</sup> = 65536 rather than 256 for one guess), but it is still practical.</p><p>The fact that the sequence number and header together are 13 bytes long enables this attack to work and inspired the name "Lucky 13". The authors note how if it were only 12 bytes then they wouldn’t need to guess the last two bytes at once, making it even faster.</p><p>"In some sense, 13 is lucky, but 12 would have been luckier!" - <a href="http://www.isg.rhul.ac.uk/~kp/dtls.pdf">AlFardan &amp; Paterson</a></p><p>Lucky 13 can be mitigated by making sure HMAC takes the same amount of time no matter how much data goes into it. The code to make the HMAC constant time was so complicated and difficult to implement that it took <a href="https://www.imperialviolet.org/2013/02/04/luckythirteen.html">several attempts to fix</a>. Other implementers ran into similar problems mitigating Lucky 13. In 2015, Paterson and Albrecht found that Amazon’s TLS implementation (called <a href="https://blogs.aws.amazon.com/security/post/TxCKZM94ST1S6Y/Introducing-s2n-a-New-Open-Source-TLS-Implementation">s2n</a>) was also <a href="https://eprint.iacr.org/2015/1129">susceptible to a variant of the attack</a> despite attempts to mitigate it in the code, due to an even more obscure pattern in HMAC timing. Other libraries haven’t yet implemented a fix. For example, Golang’s crypto package is still susceptible to Lucky 13 (CloudFlare’s own Filippo Valsorda <a href="https://go-review.googlesource.com/#/c/18130/">has proposed a fix</a>, but it has yet to be reviewed by the language maintainers). 
Timing oracles are extremely hard to avoid when implementing CBC ciphers in MAC-then-encrypt mode.</p><p>As soon as the Lucky 13 paper was released, CloudFlare adopted <a href="/staying-on-top-of-tls-attacks/">OpenSSL’s server-side fix</a>. This helps prevent visitors to sites on CloudFlare from being affected by Lucky 13 when they use CBC mode, however the preferred solution would be to move to a cipher mode that is not susceptible to this kind of attack.</p>
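The 55-vs-56-byte boundary falls straight out of the hash's padding rules, which a couple of lines make explicit. This is a sketch of the compression-call count for the message alone; HMAC adds a fixed number of extra calls for the inner and outer keys, so the one-call difference is what leaks:

```python
def md_compressions(msg_len: int, block: int = 64) -> int:
    # Number of compression-function calls for a Merkle-Damgard hash
    # such as SHA-1: padding appends one 0x80 byte plus an 8-byte
    # length field, rounded up to the 64-byte block size.
    return (msg_len + 1 + 8 + block - 1) // block

# 55 bytes fit in a single compression call; one byte more needs two.
assert md_compressions(55) == 1
assert md_compressions(56) == 2
```

An attacker who can steer how many bytes end up being MAC'd across that boundary turns the extra compression call into a measurable timing signal.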
    <div>
      <h3>What to use if not CBC?</h3>
      <a href="#what-to-use-if-not-cbc">
        
      </a>
    </div>
    <p>Timing oracles are not the only vulnerabilities that CBC mode ciphers suffer from. Both <a href="/taming-beast-better-ssl-now-available-across/">BEAST</a> and <a href="/sslv3-support-disabled-by-default-due-to-vulnerability/">POODLE</a> were high-profile TLS vulnerabilities that only affected CBC mode ciphers. There are still valid uses of CBC mode (such as for encrypting static content), but in the context of TLS, MAC-then-encrypt with CBC has had too many issues to be ignored.</p><p>The attacks on RC4 and CBC have left us with very few choices for cryptographic algorithms that are safe from attack in the context of TLS. In fact, there are no ciphers supported by TLS 1.1 or earlier that are safe. The only options are CBC mode ciphers or RC4. In TLS 1.2, a new cipher construction was introduced called <a href="/tag/aead/">AEAD (Authenticated Encryption with Associated Data)</a>. AEAD takes a stream cipher and mixes in the authentication step along the way rather than computing the MAC at the end. CloudFlare implements two such cipher modes, <a href="/go-crypto-bridging-the-performance-gap/">AES-GCM</a> and <a href="/do-the-chacha-better-mobile-performance-with-cryptography/">ChaCha20-Poly1305</a>. ChaCha20 is a stream cipher, and Poly1305 a MAC scheme. AES-GCM instead uses counter mode to turn the block cipher AES into a stream cipher and adds authentication using a construction called <a href="https://en.wikipedia.org/wiki/Galois/Counter_Mode">GMAC</a>.</p><p>Since communication requires two parties, both the web client and web server need to support the same ciphers and cipher modes. Luckily, adoption of AEAD cipher modes in clients is growing.</p>
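As a sketch of the counter-mode idea mentioned above, here the block cipher is replaced with a keyed, truncated SHA-256 acting as a PRF (our stand-in, for illustration only): each keystream block is derived from the nonce and a counter, and encryption is just an XOR, which is what turns a block cipher into a stream cipher:

```python
import hashlib

BLOCK = 16

def keystream_block(key: bytes, nonce: bytes, counter: int) -> bytes:
    # Stand-in for AES(key, nonce || counter) in real CTR mode.
    return hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()[:BLOCK]

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # CTR mode: XOR the data with a keystream generated block by block.
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        ks = keystream_block(key, nonce, i // BLOCK)
        out += bytes(a ^ b for a, b in zip(data[i:i + BLOCK], ks))
    return bytes(out)

key, nonce = b"k" * 16, b"n" * 8
msg = b"any length works, no padding"
ct = ctr_xor(key, nonce, msg)
# Encryption and decryption are the same operation, and no padding is
# needed -- which is why CTR-based modes dodge padding oracles entirely.
assert ctr_xor(key, nonce, ct) == msg
assert len(ct) == len(msg)
```

With no padding there is no padding oracle, and AEAD modes built on this idea authenticate everything before any of it is interpreted.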
    <div>
      <h3>Client support for AEAD cipher modes</h3>
      <a href="#client-support-for-aead-cipher-modes">
        
      </a>
    </div>
    <p>Most modern browsers and operating systems have adopted at least one AEAD cipher suite in their TLS software. The most popular is AES-GCM, however some browsers (Google Chrome in particular) support both AES-GCM and ChaCha20-Poly1305. Until late 2015, the major exception to this rule was Apple, whose iOS and Mac OS X operating systems only supported AES in CBC mode and never supported ChaCha20-Poly1305, making both Safari and iOS Apps susceptible to Lucky 13.</p><p>The following graph shows the ciphersuites negotiated between web clients and sites using CloudFlare over the last six months of 2015. Both iOS 9 and Mac OS 10.11 were released in September 2015, helping push the percentage of AEAD ciphers used in connections to CloudFlare over 60%. The increased adoption of modern versions of Microsoft’s Edge browser has also contributed to the growth.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3FXkZFiNOgWlOOGYdtvIMf/11d0b0bf1e67b0d3586a58264e21d610/image_12.png" />
            
            </figure>
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>Even the most well-intentioned cryptographic constructions can turn out to cause more problems than they’re worth. This is what happened with CBC-mode block ciphers over the years as more problems with MAC-then-Encrypt were discovered. The cryptographic community is constantly working on new algorithms, but only those with the most vetting end up making it to mainstream protocols like TLS. In choosing cryptographic algorithms for an ecosystem like the web, it’s best to follow the advice of standards bodies like the IETF and implement what works and is secure.</p><p>As a company that provides HTTPS for a large portion of the Internet, it’s our responsibility to keep our customers secure by implementing the latest standards, like AEAD cipher suites. We have a team of talented cryptographers who stay on the pulse of things and are able to handle everything from 0-day vulnerabilities to adherence to the latest standards to keep customers safe. However, keeping servers up to date is only half of the equation. Web clients also need to keep up to date, and web users need to keep using the latest browsers. The good news is that our data shows that web clients are rapidly moving toward secure AEAD-based ciphers, but we still have a long way to go to keep everyone secure.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2rvDj9OXl5YGgktQQpMnOY</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[How to Talk to Your Parents About Encryption]]></title>
            <link>https://blog.cloudflare.com/how-to-talk-to-your-parents-about-encryption/</link>
            <pubDate>Fri, 25 Dec 2015 08:49:00 GMT</pubDate>
            <description><![CDATA[ It’s December 25th, which means most of you are probably at home visiting with family. I asked a few of the security engineers here at CloudFlare how they explain their jobs when they’re home for the holidays, and here's what they had to say. ]]></description>
            <content:encoded><![CDATA[ <p>It’s December 25th, which means most of you are probably at home visiting with family. I asked a few of the security engineers here at CloudFlare how they explain their jobs when they’re home for the holidays, and most of them responded with something along the lines of, "Oh, I stopped trying to do that a long time ago." Apparently, working in the cryptography field doesn’t exactly make it easy to talk about work with your parents.</p><p>After chatting with our crypto experts some more, we figured out a decent way to explain the general idea of encryption and why it’s a critical part of the Internet. While this post may not explain exactly what security engineers do on a day-to-day basis, hopefully it will help you at least tell your parents why you have a job in the first place.</p>
    <div>
      <h3>Banks and Their Big Fancy Buildings</h3>
      <a href="#banks-and-their-big-fancy-buildings">
        
      </a>
    </div>
    <p>To explain encryption to your parents, I’d start by asking them why they trust their bank. Let’s say they have some cash to deposit. They drive to their bank’s local branch, walk through a big fancy lobby, wait in line for a teller, and hand them their money. It may seem like a silly question, but how do they know they’re actually giving that cash to their bank and not some stranger off the street (or a very sophisticated con artist)?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jZ9dWuGiJas51LR0A5eBM/2da20f4b889db2bdc69a24dd05d1b4cb/bank-vault.jpg" />
            
            </figure><p><a href="http://creativecommons.org/licenses/by-sa/3.0/">CC By-SA 3.0</a> by <a href="https://en.wikipedia.org/wiki/User:Rdavout">Rdavout</a></p><p>Put another way, how would they feel if they walked into their bank and it looked like a run-down bail bonds office? Even if they saw their bank’s logo hanging on the wall, they’d still probably be a little hesitant about handing over their money.</p><p>Traditional banks are in big fancy buildings for a reason. Big fancy buildings convey an innate sense of trust. You might think that it’s to invoke a reaction like, "Oh my gosh, this bank has so much money they wouldn’t ever need to steal mine," but that’s not quite it. Big fancy buildings are trustworthy because they took a lot of time, money, and effort to construct, which means it would also take a lot of time, money, and effort for a con artist to build their own big fancy building and masquerade as your bank.</p>
    <div>
      <h3>Encryption Is the Internet’s Big Fancy Building</h3>
      <a href="#encryption-is-the-internets-big-fancy-building">
        
      </a>
    </div>
    <p>I use an online bank with no physical branches, so instead of walking into a lobby, I visit a website. The problem with (and beauty of) the Internet is that anybody can publish a website that looks just like my bank’s site for only a few bucks a month. How do I know that the website I’m visiting isn’t the digital equivalent of a run-down bail bonds office? Where’s my big fancy building?</p><p>Instead of gaining trust with a big fancy building, my online bank adds a TLS certificate to their website. <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a> use encryption to ensure that I’m actually visiting the <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a> I see in my browser and not a phishing website trying to steal my credentials.</p><p>That TLS certificate has the same key property as my parents’ physical bank building: it would take a lot of time and money for somebody to forge a certificate and impersonate my online bank. Even if a bad guy put up a website that looked exactly like my bank’s, they wouldn’t have a TLS certificate that matched, and my browser would warn me that something very bad is happening as soon as I visit the forged website.</p>
    <div>
      <h3>The Internet Is More Complicated</h3>
      <a href="#the-internet-is-more-complicated">
        
      </a>
    </div>
    <p>Of course, online financial transactions are a little different than at your local branch. When I make a deposit with my online bank, I don’t just hand some cash to a teller, I ask my neighbor to drop it off at the post office for me, where it gets picked up by a mailman, who’s kind of busy that day, so he asks his wife to swing by the bank, but all the bank tellers are out to lunch, so she leaves it with the doorman, who says he’ll pass it along when they get back. Oh, and my cash isn’t even in an envelope; it’s just a naked $100 bill.</p><p>That’s what happens every time you do <i>anything</i> on the Internet. That should scare your parents.</p>
    <div>
      <h3>Encryption for Privacy</h3>
      <a href="#encryption-for-privacy">
        
      </a>
    </div>
    <p>Encryption serves a dual purpose on the web. It not only ensures I’m talking to my actual online bank, it also protects my information from all those third-party intermediaries. Visiting my online bank’s website with an encrypted connection is like sticking my $100 bill in a lock box before dropping it off at my neighbor’s place. Neither he, his mailman, his wife, nor the doorman can steal it because they don’t even know what’s inside. My bank’s TLS certificate is the key to that lock box, which makes sure that only the bank tellers are allowed to access my funds.</p>
    <div>
      <h3>Best Practices for Internet Security</h3>
      <a href="#best-practices-for-internet-security">
        
      </a>
    </div>
    <p>At this point, your parents think that working in the crypto field is the coolest, and, let’s face it, mission accomplished. Now that they’re hooked, here are some tips you can feed them about how to stay safe online:</p><ul><li><p>Look for a lock icon in your browser and/or a URL that starts with https to ensure an encrypted connection before entering sensitive information like credit cards or passwords.</p></li><li><p>Double-check the domain name of the website before entering sensitive information to make sure you’re not on a phishing website like paypa1.com or g00gle.com.</p></li><li><p>Don’t text or email your credit cards, bank account numbers, or passwords, no matter how much you trust the person on the other end.</p><ul><li><p>To share this kind of information, use an encrypted file-sharing service like Dropbox or Box and password-protect any documents you share.</p></li></ul></li><li><p>Use different, randomly generated passwords for each of your online accounts.</p><ul><li><p>If your Facebook password is stolen, at least they won’t be able to get into your Twitter or email account.</p></li><li><p>This makes a password manager like LastPass or 1Password a must for keeping track of all your credentials.</p></li></ul></li><li><p>Use two-factor authentication anywhere that supports it, even if it’s slightly annoying.</p><ul><li><p>Two-factor authentication doesn’t let you log in without access to your mobile phone, which makes it much harder for an attacker to hijack your account.</p></li></ul></li><li><p>Don’t ignore your browser or operating system when it asks if you want to upgrade.</p><ul><li><p>It gets cheaper and cheaper to break older encryption protocols, and having an outdated system puts you at risk of attacks against those older protocols.</p></li></ul></li></ul><p>The underlying theme behind all these best practices is the same: at some point along the way, there’s no encryption protecting your sensitive information, which means there’s a risk of a bad guy intercepting it.</p> ]]></content:encoded>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">27u49PUmlQpxqcHfU9nxL4</guid>
            <dc:creator>Ryan Hodson</dc:creator>
        </item>
        <item>
            <title><![CDATA[How to build your own public key infrastructure]]></title>
            <link>https://blog.cloudflare.com/how-to-build-your-own-public-key-infrastructure/</link>
            <pubDate>Wed, 24 Jun 2015 13:57:30 GMT</pubDate>
            <description><![CDATA[ A major part of securing a network as geographically diverse as CloudFlare’s is protecting data as it travels between datacenters. Customer data and logs are important to protect but so is all the control data that our applications use to communicate with each other.  ]]></description>
            <content:encoded><![CDATA[ <p>A major part of securing a network as geographically diverse as CloudFlare’s is protecting data as it travels between datacenters. Customer data and logs are important to protect but so is all the control data that our applications use to communicate with each other. For example, our application servers need to securely communicate with our new datacenter in <a href="/osaka-data-center/">Osaka, Japan</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4LLkau2uBvAD4G7wNt1wWt/faf1e0ac7ce890c8800ee0bd06bbf156/23390123_b6caaefc16_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by-sa/2.0/">CC BY-SA 2.0</a> <a href="https://www.flickr.com/photos/kk/23390123/in/photolist-34T4Z-eCaWn2-98sYER-7Hx7ig-ejcw3L-n7qDvJ-dYj3cw-ayZi8V-8qi3MQ-8vR8W5-73xSvY-ehPXNM-oaLWEs-qKGKFC-29GeC-7oEdaD-dAFKPa-az45tU-hjsKL-bikriH-dCz8na-6UKyJq-cJLzGh-drbr3g-5ru4oK-9gGswk-dU3PK3-edjQhq-d7exeW-cdBJCm-9XE9R5-62J5N-brKi7f-4uwyQL-6u3Npt-ccX3-qixk9G-2vj3k2-4DjUnW-cXXqXy-92Cgam-cv6bco-dA29Tj-87az7W-98P4W5-9AipNF-buwS46-9nKh7L-69WpZo-evqszn">image</a> by <a href="https://www.flickr.com/photos/kk/">kris krüg</a></p><p>Great security architecture requires a defense system with multiple layers of protection. As CloudFlare’s services have grown, the need to secure application-to-application communication has grown with it. As a result, we needed a simple and maintainable way to ensure that all communication between CloudFlare’s internal services stays protected, so we built one based on known and reliable protocols.</p><p>Our system of trust is based on a Public Key Infrastructure (PKI) using internally-hosted Certificate Authorities (CAs). In this post we will describe how we built our PKI, how we use it internally, and how to run your own with our open source software. This is a long post with lots of information, so grab a coffee!</p>
    <div>
      <h2>Protection at the application layer</h2>
      <a href="#protection-at-the-application-layer">
        
      </a>
    </div>
    <p>Most reasonably complex modern web services are not made up of one monolithic application. In order to handle complex operations, an application is often split up into multiple “services” that handle different portions of the business logic or data storage. Each of these services may live on different machines or even in different datacenters.</p><p>The software stacks of large service providers are made up of many components. For CloudFlare, this includes a web application to handle user actions, a main database to maintain DNS records and user rules, a data pipeline to distribute these rules to the edge network, services for caching, a log transport pipeline, data analysis services and much, much more.</p><p>Some service-to-service communication can happen within a machine, some within a datacenter and some across a broader network like the Internet. Managing which communications should use which type of network in our evolving services is not a simple task. A single accidental configuration change could result in messages that are supposed to never leave a machine going through an untrusted connection on the Internet. The system should be designed so that these messages are secure even if they go over the wrong network.</p>
    <div>
      <h2>Enter TLS</h2>
      <a href="#enter-tls">
        
      </a>
    </div>
    <p>One approach to mitigate the risks posed by attackers is to encrypt and authenticate data in transit. Our approach is to require that all new services use an encrypted protocol, Transport Layer Security (TLS), to keep inter-service communication protected. It was a natural choice: TLS is the “S” in HTTPS and is the foundation of the encrypted web. Furthermore, modern web services and <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> have embraced TLS as the de facto standard for application layer encryption. It works seamlessly with RESTful services, is supported in <a href="/kyoto-tycoon-secure-replication/">Kyoto Tycoon</a>, <a href="http://www.postgresql.org/docs/8.3/static/libpq-ssl.html">PostgreSQL</a>, and the Go standard library.</p><p>As we have described in <a href="/an-introduction-to-javascript-based-ddos/">previous blog posts</a>, unauthenticated encryption can be foiled by <a href="/introducing-strict-ssl-protecting-against-a-man-in-the-middle-attack-on-origin-traffic/">on-path attackers</a>. Encryption without authentication does not protect data in transit. For connections to be safe, each party needs to prove their identity to the other. Public key cryptography provides many mechanisms for trust, including PGP’s “web of trust” and HTTPS’s public key infrastructure (PKI) model. We chose the PKI model because of its ease of use and deployment. TLS with PKI provides trusted communication.</p>
    <div>
      <h2>Be picky with your PKI</h2>
      <a href="#be-picky-with-your-pki">
        
      </a>
    </div>
    <p>Trust is the bedrock of secure communication. For two parties to securely exchange information, they need to know that the other party is legitimate. PKI provides just that: a mechanism for trusting identities online.</p><p>The tools that enable this are digital certificates and public key cryptography. A certificate lets a website or service prove its identity. Practically speaking, a certificate is a file with some identity information about the owner, a public key, and a signature from a certificate authority (CA). The certificate’s public key has an associated private key, which is kept securely under the certificate owner’s control. The private key can be used to create digital signatures that can be verified by the associated public key.</p>
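    <p>This sign-then-verify relationship is easy to demonstrate in miniature. The sketch below is an illustration using Go’s standard library, not any particular CA’s code: it signs a message digest with a freshly generated private key and verifies the signature with the matching public key.</p>
            <pre><code>package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// signAndVerify signs a message digest with a private key and verifies
// the signature with the corresponding public key.
func signAndVerify() bool {
	// The owner keeps this private key secret; a certificate would
	// carry the matching public key.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	digest := sha256.Sum256([]byte("data to authenticate"))
	sig, err := ecdsa.SignASN1(rand.Reader, priv, digest[:])
	if err != nil {
		panic(err)
	}

	// Anyone holding the public key can check the signature.
	return ecdsa.VerifyASN1(&amp;priv.PublicKey, digest[:], sig)
}

func main() {
	fmt.Println("signature valid:", signAndVerify())
}</code></pre>
    <p>A certificate wraps exactly this mechanism in a standard format: the public key travels inside the certificate, and the CA’s own signature over the certificate vouches for it.</p>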
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zk0sFLWqQIg6Fh9Qc5kPU/8105c11afdaffc285ec06c30e9418eee/anatomy-of-a-certificate.png" />
            
            </figure><p>A certificate typically contains:</p><ul><li><p>Information about the organization that the certificate is issued to</p></li><li><p>A public key</p></li><li><p>Information about the organization that issued the certificate</p></li><li><p>The rights granted by the issuer</p></li><li><p>The validity period for the certificate</p></li><li><p>Which hostnames the certificate is valid for</p></li><li><p>The allowed uses (client authentication, server authentication)</p></li><li><p>A digital signature by the issuer certificate’s private key</p></li></ul><p>A certificate is a powerful tool for proving your identity online. The owner of a certificate can digitally sign data, and a verifier can use the public key from the certificate to verify it. The fact that the certificate is itself digitally signed by a third party CA means that if the verifier trusts the third party, they have assurances that the certificate is legitimate. The CA can grant a certificate certain rights, such as the period of time during which the certificate’s identity should be trusted.</p><p>Sometimes certificates are signed by what’s called an intermediate CA, which is itself signed by a different CA. In this case, a certificate verifier can follow the chain until they find a certificate that they trust — the root.</p><p>This chain of trust model can be very useful for the CA. It allows the root certificate’s private key to be kept offline and only used for signing intermediate certificates. Intermediate CA certificates can be shorter lived and be used to sign endpoint certificates on demand. Shorter-lived online intermediates are easier to manage and revoke if compromised.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1mgZiSD0E5rW6KjaGEFPa0/0428f6fc0637a81525dcbed41586192b/certificate-chain-of-trust.png" />
            
            </figure><p>This is the same system used for HTTPS on the web. For example, cloudflare.com has a certificate signed by Comodo’s Intermediate certificate, which is in turn signed by the Comodo offline root. Browsers trust the Comodo root, and therefore also trust the intermediate and web site certificate.</p><p>This model works for the web because browsers only trust a small set of certificate authorities, who each have stringent requirements to only issue certificates after validating the ownership of a website. For internal services that are not accessed via browsers, there is <i>no need</i> to go through a third party certificate authority. Trusted certificates do not have to be from GlobalSign, Comodo, VeriSign or any other CA — they can be from a CA you operate yourself.</p>
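    <p>The chain of trust can be reproduced in a few lines with Go’s standard <code>crypto/x509</code> package. The following is a minimal in-memory sketch with made-up names ("My Awesome CA", "db1.mysite.com"), not production CA code: it creates a self-signed root, issues a leaf certificate signed by the root’s private key, and then verifies the leaf against a pool that trusts the root.</p>
            <pre><code>package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// chainVerifies builds a root CA and a leaf certificate signed by it,
// then verifies the leaf the way a client would: against a pool of
// trusted roots. Errors are elided for brevity.
func chainVerifies() bool {
	now := time.Now()

	// Self-signed root: the template acts as its own issuer.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &amp;x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "My Awesome CA"},
		NotBefore:             now,
		NotAfter:              now.Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &amp;caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate for a service, signed by the root's private key.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &amp;x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "db1.mysite.com"},
		DNSNames:     []string{"db1.mysite.com"},
		NotBefore:    now,
		NotAfter:     now.Add(365 * 24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &amp;leafKey.PublicKey, caKey)
	leafCert, _ := x509.ParseCertificate(leafDER)

	// A verifier that trusts the root accepts the leaf.
	roots := x509.NewCertPool()
	roots.AddCert(caCert)
	_, err := leafCert.Verify(x509.VerifyOptions{Roots: roots, DNSName: "db1.mysite.com"})
	return err == nil
}

func main() {
	fmt.Println("chain verifies:", chainVerifies())
}</code></pre>
    <p>An intermediate CA would simply be one more certificate in the middle, supplied to the verifier via <code>VerifyOptions.Intermediates</code>.</p>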
    <div>
      <h2>Building your own CA</h2>
      <a href="#building-your-own-ca">
        
      </a>
    </div>
    <p>The most painful part of getting a certificate for a website is going through the process of obtaining it. For websites, we eliminated this pain by launching <a href="/introducing-universal-ssl/">Universal SSL</a>. The most painful part of running a CA is the administration. When we decided to build our internal CA, we sought to make both obtaining certificates and operating the CA painless and even fun.</p><p>The software we are using is <a href="https://github.com/cloudflare/cfssl">CFSSL</a>, CloudFlare’s open source PKI toolkit. This tool was open sourced last year and has all the capabilities needed to run a certificate authority. Although CFSSL was built for an internal CA, it’s robust enough to be used by a publicly trusted CA; in fact, the <a href="https://letsencrypt.org">Let’s Encrypt</a> project is using CFSSL as a core part of their CA infrastructure.</p>
    <div>
      <h3>Key protection</h3>
      <a href="#key-protection">
        
      </a>
    </div>
    <p>To run a CA, you need the CA certificate and corresponding private key. This private key is extremely sensitive. Any person who knows the value of the key can act as the CA and issue certificates. Browser-trusted certificate authorities are required to keep their private keys inside of specialized hardware known as Hardware Security Modules (HSMs). The requirements for protecting private keys for corporate infrastructures are not necessarily as stringent, so we provided several mechanisms to protect keys.</p><p>CFSSL supports three different modes of protection for private keys:</p><ol><li><p><i>Hardware Security Module (HSM)</i>: CFSSL allows the CA server to use an HSM to compute digital signatures. Most HSMs use an interface called PKCS#11 to interact with them, and CFSSL natively supports this interface. Using an HSM ensures that private keys do not live in memory and provides tamper protection against physical adversaries.</p></li><li><p><i>Red October</i>: Private keys can be encrypted using <a href="/red-october-cloudflares-open-source-implementation-of-the-two-man-rule/">Red October</a> (another open source CloudFlare project). A key protected with Red October can only be decrypted with the permission of multiple key owners. In order to use CFSSL with a Red October key, the key owners need to authorize the use of the private key. This ensures that the CA key is never unencrypted on disk, in source control, or in configuration management. Note: Red October support in CFSSL is experimental and subject to change.</p></li><li><p><i>Plaintext</i>: CFSSL accepts plain unencrypted private keys. This works well when the private key is generated on the machine running CFSSL or by another program. If the machine that is running the CA is highly secure, this mode is a compromise between security, cost, and usability. This is also useful in development mode, allowing users to test changes to their infrastructure designs.</p></li></ol><p>Next I’ll show you how to quickly configure an internal CA using plaintext private keys. Note: The following expects <a href="https://github.com/cloudflare/cfssl#readme">CFSSL to be installed</a>.</p>
    <div>
      <h3>Generating a CA key and certificate</h3>
      <a href="#generating-a-ca-key-and-certificate">
        
      </a>
    </div>
    <p>To start, you need some information about what metadata you want to include in your certificate. Start by creating a file called <code>csr_ca.json</code> containing this basic information (feel free to fill in your own organization's details):</p>
            <pre><code>{
  "CN": "My Awesome CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "San Francisco",
      "O": "My Awesome Company",
      "OU": "CA Services",
      "ST": "California"
    }
  ]
}</code></pre>
            <p>With this you can create your CA with a single call: <code>$ cfssl gencert -initca csr_ca.json | cfssljson -bare ca</code></p><p>This generates three files:</p><ul><li><p><code>ca-key.pem</code>: your private key</p></li><li><p><code>ca.pem</code>: your certificate</p></li><li><p><code>ca.csr</code>: your certificate signing request (needed to get your CA cross-signed by another CA)</p></li></ul><p>The key and certificate are the bare minimum you need to start running a CA.</p>
    <div>
      <h3>Policy</h3>
      <a href="#policy">
        
      </a>
    </div>
    <p>Once the CA certificate and key are created, the CA software needs to know what kind of certificates it will issue. This is determined in the CFSSL configuration file’s signing policy section.</p><p>Here’s an example of a simple policy for a CA that can issue certificates that are valid for a year and can be used for server authentication.</p><p><code>config_ca.json</code></p>
            <pre><code>{
  "signing": {
    "default": {
      "auth_key": "key1",
      "expiry": "8760h",
      "usages": [
         "signing",
         "key encipherment",
         "server auth"
       ]
     }
  },
  "auth_keys": {
    "key1": {
      "key": &lt;16 byte hex API key here&gt;,
      "type": "standard"
    }
  }
}</code></pre>
            <p>We also added an authentication key to this signing policy. This authentication key should be randomly generated and kept private. The API key is a basic authentication mechanism that prevents unauthorized parties from requesting certificates. There are several other features you can use for the CA (subject name allowlisting, etc.); see the <a href="https://github.com/cloudflare/cfssl/tree/master/doc">CFSSL documentation</a> for more information.</p><p>To run the service, call <code>$ cfssl serve -ca-key ca-key.pem -ca ca.pem -config config_ca.json</code></p><p>This opens up a CA service listening on port 8888.</p>
    <div>
      <h3>Issuing certificates</h3>
      <a href="#issuing-certificates">
        
      </a>
    </div>
    <p>Certificate authorities do not just create certificates out of a private key and thin air; they need a public key and metadata to populate the certificate’s data fields. This information is typically communicated to a CA via a certificate signing request (CSR).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1XFNOUGDbNZ98c8QiaJ8hA/12c40f70676beb344b0adbae792cbf16/illustrations-ssl-blog-june-2015-03.png" />
            
            </figure><p>A CSR is very similar in structure to a certificate. The CSR contains:</p><ul><li><p>Information about the organization that is requesting the certificate</p></li><li><p>A public key</p></li><li><p>A digital signature by the requestor’s private key</p></li></ul><p>Given a CSR, a certificate authority can create a certificate. First, it verifies that the requestor has control over the associated private key. It does this by checking the CSR’s signature. Then the CA will check to see if the requesting party should be given a certificate and which domains/IPs it should be valid for. This can be done with a database lookup or through a registration authority. If everything checks out, the CA uses its private key to create and sign the certificate to send back to the requestor.</p>
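    <p>As a small illustration of that first check, the Go sketch below (standard library only, with hypothetical names; a real CA such as CFSSL does considerably more) builds a CSR with a fresh key and then verifies the CSR’s self-signature, which is what proves the requestor holds the matching private key.</p>
            <pre><code>package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
)

// csrSignatureValid builds a CSR with a fresh key, then does what a CA
// does first: parse the CSR and check its self-signature to confirm
// that the requestor controls the matching private key.
func csrSignatureValid() bool {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &amp;x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "db1.mysite.com",
			Organization: []string{"My Awesome Company"},
		},
		DNSNames: []string{"db1.mysite.com"},
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, tmpl, key)
	if err != nil {
		panic(err)
	}
	csr, err := x509.ParseCertificateRequest(der)
	if err != nil {
		panic(err)
	}
	return csr.CheckSignature() == nil
}

func main() {
	fmt.Println("CSR signature valid:", csrSignatureValid())
}</code></pre>
    <p>Only after this signature check does a CA move on to policy questions: should this requestor get a certificate at all, and for which hostnames?</p>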
    <div>
      <h3>Requesting Certificates</h3>
      <a href="#requesting-certificates">
        
      </a>
    </div>
    <p>Let’s say you have CFSSL set up as a CA as described above and it’s running on a server called “ca1.mysite.com” with an authentication API key. How do you get this CA to issue a certificate? CFSSL provides two commands to help with that: <code>gencert</code> and <code>sign</code>. They are available as JSON API endpoints or command line options.</p><p>The <code>gencert</code> command will automatically handle the whole certificate generation process. It will create your private key, generate a CSR, send the CSR to the CA to be signed and return your signed certificate.</p><p>Two configuration files are needed for this: one to tell the local CFSSL client where the CA is and how to authenticate the request, and a CSR configuration used to populate the CSR.</p><p>Here’s an example for creating a certificate for a generic database server, db1.mysite.com.</p><p><code>csr_client.json</code></p>
            <pre><code>{
  "hosts": [
    "db1.mysite.com"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "San Francisco",
      "O": "My Awesome Company",
      "OU": "Data Services",
      "ST": "California"
    }
  ]
}</code></pre>
            <p><code>config_client.json</code></p>
            <pre><code>{
  "signing": {
    "default": {
      "auth_key": "key1",
      "remote": "caserver"
    }
  },
  "auth_keys": {
    "key1": {
      "key": "&lt;16 byte hex API key here&gt;",
      "type": "standard"
    }
  },
  "remotes": {
    "caserver": "ca1.mysite.com:8888"
  }
}</code></pre>
            <p>With these two configuration files set, you can create your certificate: <code>$ cfssl gencert -config config_client.json csr_client.json | cfssljson -bare db</code></p><p>This results in three files:</p><ul><li><p><code>db-key.pem</code>: your private key</p></li><li><p><code>db.pem</code>: your certificate</p></li><li><p><code>db.csr</code>: your CSR</p></li></ul><p>The CSR can be resubmitted to the CA to be signed again at any point with the <code>sign</code> command:</p><p><code>$ cfssl sign -config config_client.json db.csr | cfssljson -bare db-new</code></p><p>resulting in:</p><ul><li><p><code>db-new.pem</code>: your re-signed certificate</p></li></ul><p>These two commands let you easily set up a private PKI. As a startup or a growing business moving to a service-oriented or even a microservice architecture, having a PKI can be very convenient. Next we’ll describe how CloudFlare set up its own internal PKI to help make its services encrypted by default.</p>
    <div>
      <h2>Using a PKI for services</h2>
      <a href="#using-a-pki-for-services">
        
      </a>
    </div>
    <p>So now you have a complex set of services that can all speak TLS, and a central certificate authority server that can issue certificates. What’s next? Getting certificates and keys for the applications. There are several ways to do this, including centralized distribution and on-demand local generation.</p>
    <div>
      <h3>Centralized distribution vs on demand</h3>
      <a href="#centralized-distribution-vs-on-demand">
        
      </a>
    </div>
    <p>One way to create certificates and keys for your applications is to create them all on a central provisioning server and then send them out to each of the servers. In this model, a central server creates certificates and keys and sends them over a secure tunnel to the application servers.</p><p>This model creates a chicken-and-egg problem. How do you transport private keys if you need those private keys to encrypt your transport?</p><p>A distributed key management model fits better with the way modern services are typically deployed and run. The trick is creating the private keys directly on the application server. At install or run time, a service creates its own private key and sends a request to a certificate authority to issue a certificate. This can be repeated on demand if the current certificate is close to expiring.</p><p>For example, many companies are starting to use Docker, or other lightweight container technologies, to encapsulate and run individual services. Under load, services can be scaled up by automatically running new containers. In a centralized distribution model, certificates and keys for each container need to be created before the containers are deployed.</p><p>In the centralized distribution model, the provisioning service needs to create and manage a key and certificate for each service. Keeping this sort of central database in a complex and dynamic topology seems like the wrong approach. It would be preferable if the CA itself were stateless and simply generated a set of logs.</p><p>The idea that keys need to be transported to their destination instead of generated locally is also troubling. Transporting private keys introduces an unnecessary risk to the system. When a new service comes into being, it should generate its key locally and request a certificate for use.</p>
    <div>
      <h3>Trust across services</h3>
      <a href="#trust-across-services">
        
      </a>
    </div>
    <p>Internal services need to trust each other. Browsers validate website certificates by checking the signature on the certificate and checking the hostname against the list of Subject Alternative Names (SANs) in the certificate. This type of explicit check is useful, but it has a record of <a href="https://cwe.mitre.org/data/definitions/297.html">not working as expected</a>. Another way for services to trust each other is an implicit check based on per-service CAs.</p><p>The idea is simple: use a different CA for each set of services. Issue all database certificates from a database CA and all API servers from an API server CA.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/25QdARnmhs56cQyEzRWRZs/c0abd815cfbbc0267b7f19aeaccdaa7c/provisioning-app-certificates.png" />
            
            </figure><p>When setting these services up to talk to each other with mutual TLS authentication, configure the trust stores as follows:</p><ul><li><p>API server only trusts DB CA</p></li><li><p>DB only trusts API CA</p></li></ul><p>This approach is less fine-grained than an explicit check against a SAN, but it is more robust and easier to manage on the CA policy side. With an implicit trust system in place, you can guarantee that services of type A can only communicate with services of type B.</p><p>The following diagram describes how two applications can trust each other with mutually authenticated TLS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6zHWhhn4tWw94Y9yGmBFjy/a76107c0a0268e886c15c7f54261c669/service-to-service-communication.png" />
            
            </figure><p>In this diagram, the API server trusts the DB CA (in red). It will therefore only accept certificates that are signed by the DB CA (i.e. with a red ribbon). Conversely, the database server only accepts certificates with a signature from the API CA (orange ribbon). To establish a trusted connection, each party sends a key share to the other, signed with their certificate’s private key. The key shares are combined to create a session key, with which both parties can encrypt their data. The chain of verification from key share to certificate to CA assures that the other party is authentic.</p><p>Establishing a mutually authenticated trusted tunnel between services prevents attackers from accessing or modifying data in transit and causing havoc on your services. With a strong PKI in place, every service can communicate securely and confidentially over any network, even the Internet.</p>
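    <p>The implicit check can be made concrete with a short sketch in Go’s standard library (toy in-memory CAs with invented names; an illustration of the trust-store idea, not CFSSL’s implementation). A certificate issued by the DB CA verifies against a pool containing the DB CA, and fails against a pool containing only the API CA:</p>
            <pre><code>package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newCA creates a self-signed CA certificate and its private key.
// Errors are elided for brevity.
func newCA(name string) (*x509.Certificate, *ecdsa.PrivateKey) {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &amp;x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: name},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &amp;key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)
	return cert, key
}

// issue signs a leaf certificate for host with the given CA.
func issue(ca *x509.Certificate, caKey *ecdsa.PrivateKey, host string) *x509.Certificate {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &amp;x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: host},
		DNSNames:     []string{host},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, ca, &amp;key.PublicKey, caKey)
	cert, _ := x509.ParseCertificate(der)
	return cert
}

// trustedBy reports whether leaf verifies against a pool holding only ca.
func trustedBy(leaf, ca *x509.Certificate) bool {
	pool := x509.NewCertPool()
	pool.AddCert(ca)
	_, err := leaf.Verify(x509.VerifyOptions{Roots: pool})
	return err == nil
}

func main() {
	dbCA, dbKey := newCA("DB CA")
	apiCA, _ := newCA("API CA")
	dbCert := issue(dbCA, dbKey, "db1.mysite.com")

	// The API server's trust store holds only the DB CA, so DB
	// certificates are accepted and everything else is rejected.
	fmt.Println("trusted by DB CA pool: ", trustedBy(dbCert, dbCA))
	fmt.Println("trusted by API CA pool:", trustedBy(dbCert, apiCA))
}</code></pre>
    <p>In a real deployment the same pools would be handed to the TLS stack (for example, Go’s <code>tls.Config</code> trust-store fields) rather than checked by hand.</p>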
    <div>
      <h2>Using a PKI for remote services</h2>
      <a href="#using-a-pki-for-remote-services">
        
      </a>
    </div>
    <p>Internal PKIs are very flexible and can be used to issue certificates to third parties who are integrating with your network. For example, CloudFlare has a service called <a href="https://www.cloudflare.com/railgun">Railgun</a> that can be used to optimize connections between CloudFlare and an origin server. Communication between Railgun and CloudFlare is done over an encrypted and authenticated channel using certificates from a CloudFlare certificate authority.</p><p>This ensures that data is secure in transit. When a new Railgun instance is set up on the origin server, it creates a private key and sends a CSR to CloudFlare, which then issues the appropriate certificate. The Railgun server keeps the private key in memory and deletes it when it shuts down, preventing other services from getting access to it.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6MzHk6TL9t4sBWgQz4kEQ6/009bdae54937d9e446064477f494cb87/illustrations-ssl-blog-june-2015-02-1.png" />
            
            </figure><p>This model works great for not only Railgun, but several other initiatives at CloudFlare such as the <a href="/universal-ssl-encryption-all-the-way-to-the-origin-for-free/">Origin CA</a> and <a href="/keyless-ssl-the-nitty-gritty-technical-details/">Keyless SSL</a>.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Securing data at the application layer is an important step in protecting a distributed systems architecture, but it is only truly effective with a strong PKI in place.</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Certificate Authority]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[CFSSL]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">71loFFBcz7GMVazjFt5flh</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Universal SSL: Encryption all the way to the origin, for free]]></title>
            <link>https://blog.cloudflare.com/universal-ssl-encryption-all-the-way-to-the-origin-for-free/</link>
            <pubDate>Tue, 24 Feb 2015 20:15:12 GMT</pubDate>
            <description><![CDATA[ Last September, CloudFlare unveiled Universal SSL, enabling HTTPS support for all sites by default. All sites using CloudFlare now support strong cryptography from the browser to CloudFlare’s servers. ]]></description>
            <content:encoded><![CDATA[ <p>Last September, CloudFlare unveiled <a href="/introducing-universal-ssl/">Universal SSL</a>, enabling HTTPS support for all sites by default. All sites using CloudFlare now support strong cryptography from the browser to CloudFlare’s servers. One of the most popular requests for Universal SSL was to make it easier to encrypt the other half of the connection: from CloudFlare to the origin server.</p><p>Until today, encryption from CloudFlare to the origin required the purchase of a trusted certificate from a third party. The certificate purchasing process can be tedious and sometimes costly. To remedy this, CloudFlare has created a new Origin CA service in which we provide free limited-function certificates to customer origin servers.</p><p>Today we are excited to announce the public beta of this service, providing full encryption of all data from the browser to the origin, for free.</p>
    <div>
      <h3>Encrypted all the way</h3>
      <a href="#encrypted-all-the-way">
        
      </a>
    </div>
    <p>CloudFlare offers three modes for HTTPS: Flexible, Full and Strict. In Flexible mode, traffic from browsers to CloudFlare is encrypted, but traffic from CloudFlare to a site's origin server is not. In Full and Strict modes, traffic between CloudFlare and the origin server is encrypted. Strict mode adds validation of the origin server’s certificate. We strongly encourage customers to select Strict mode for their websites to ensure their visitors get the strongest data security possible.</p><p>As we <a href="/origin-server-connection-security-with-universal-ssl/">previously discussed</a>, sites on <a href="https://www.cloudflare.com/plans/free/">CloudFlare’s Free plan</a> default to Flexible SSL mode. To take advantage of our Strict SSL mode, it’s necessary to install a certificate on the origin server, which until now meant buying one from a third party. Now customers can get that certificate directly from CloudFlare, for free.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1uVSxqXR6vUpUQ6ugD42yo/79d259ee62d60e2105efd2e36ec13cbf/illustration-strict-ssl--2-.png" />
            
            </figure><p>This certificate is only used to protect the traffic between the origin server and CloudFlare; it is never presented to browsers. For now you should only use it behind orange-clouded sites on CloudFlare.</p><p>If you are a CloudFlare customer and want to sign up for the beta, just send an email to <a>origin-ca-beta@cloudflare.com</a> with the following:</p><ul><li><p>A certificate signing request (CSR)</p></li><li><p>The domain name of the orange-clouded zone you want to install the certificate on</p></li></ul><p>The first <i>ten</i> brave beta customers will get a shiny new certificate to install on their web server. Note: do <i>not</i> send your private key to CloudFlare, only the CSR is needed.</p><p><i>Update: The beta is full! Thanks to those who are participating.</i></p>
    <div>
      <h3>CloudFlare’s Origin Certificate Authority</h3>
      <a href="#cloudflares-origin-certificate-authority">
        
      </a>
    </div>
    <p>In order to grant certificates to customer origins, CloudFlare had to create its own Certificate Authority. This consists of a set of processes and systems to validate certificate requests and create new certificates. For the Origin CA, CloudFlare created a private key and certificate for the specific purpose of signing certificates for origin servers.</p>
    <div>
      <h4>Software</h4>
      <a href="#software">
        
      </a>
    </div>
    <p>The certificate authority software we use is <a href="/introducing-cfssl/">CFSSL</a>, our open source PKI toolkit written in Go. It allows us to validate CSRs and use them to create new certificates for sites. These certificates are signed with our certificate authority private key, and validated when CloudFlare connects to the origin in Strict SSL mode.</p><p>In collaboration with other members of the industry (such as Richard Barnes from the <a href="https://letsencrypt.org/">Let's Encrypt</a> project), we have updated CFSSL with several new features that help make it a viable certificate authority tool. These include <a href="http://en.wikipedia.org/wiki/PKCS_11">PKCS#11</a> support, which makes it possible for CFSSL to use a Hardware Security Module (HSM) to store private keys, and <a href="http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol">OCSP</a> support, which lets CFSSL answer questions about the revocation status of a certificate.</p>
    <div>
      <h4>Validation</h4>
      <a href="#validation">
        
      </a>
    </div>
    <p>CAs are supposed to only give certificates to sites that own the domain(s) listed in the certificate. Domain validation is usually done in one of three ways:</p><ul><li><p>Putting a challenge in the DNS zone</p></li><li><p>Putting a challenge into a meta-tag of an HTML page hosted on the domain</p></li><li><p>Sending an email challenge to the domain registrant listed in the WHOIS database</p></li></ul><p>Since CloudFlare is both a <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">content delivery network</a> and a DNS provider, both DNS and HTML validation can be done by CloudFlare on behalf of the site. If your site is on CloudFlare and orange-clouded, we will give you a certificate for your site.</p>
    <div>
      <h4>Public trust</h4>
      <a href="#public-trust">
        
      </a>
    </div>
    <p>The CloudFlare Origin CA is currently not trusted by browsers, so these certificates should not be used on sites that are not behind CloudFlare. To issue certificates that are trusted by browsers, we would have to convince a publicly trusted certificate authority to cross-sign our CA certificate. This is not necessary in this case, since it is CloudFlare that determines which certificates to trust when connecting to the origin, and the Origin CA is on our list.</p><hr />
    <div>
      <h3>Bonus: How to create Certificate Signing Requests</h3>
      <a href="#bonus-how-to-create-certificate-signing-requests">
        
      </a>
    </div>
    <p>The certificate signing request (CSR) is the standard mechanism for obtaining a certificate from a certificate authority. It contains a public key and some metadata, such as which domain it is for, and is digitally signed by the corresponding private key. The signature proves to CloudFlare that you hold the private key.</p>
    <div>
      <h4>Creating a CSR and private key with CFSSL</h4>
      <a href="#creating-a-csr-and-private-key-with-cfssl">
        
      </a>
    </div>
    <p>CFSSL is not only a tool for running a CA; it can also be used to create CSRs. Following these instructions will get you a private key and a CSR to submit to a certificate authority.</p><h6>1) Install Go:</h6><p><a href="https://golang.org/doc/install">https://golang.org/doc/install</a></p><h6>2) Install CFSSL</h6>
            <pre><code>$ go get github.com/cloudflare/cfssl/cmd/...</code></pre>
            <h6>3) Create a CSR template</h6><p>Use the following template for <code>csr.json</code> and replace “mysite.com” with your site’s <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a> and names with your company's information.</p><p>csr.json:</p>
            <pre><code>{
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "mysite.com"
  ],
  "CN": "mysite.com",
  "names": [
    {
      "C": "US",
      "L": "San Francisco",
      "ST": "California",
      "O": "My Company, Inc.",
      "OU": "My Company's IT Department"
    }
  ]
}</code></pre>
            <h6>4) Generate the CSR and private key</h6>
            <pre><code>$ cfssl genkey csr.json | cfssljson -bare site</code></pre>
            <p>This creates two files:</p><ul><li><p><code>site.csr</code>: your CSR</p></li><li><p><code>site-key.pem</code>: your private key</p></li></ul>
    <div>
      <h5>Other resources</h5>
      <a href="#other-resources">
        
      </a>
    </div>
    <p>If CFSSL is not working for you, here are some more resources for creating CSRs:</p><ul><li><p><a href="https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/1/19/csr-generation-using-openssl-apache-wmod_ssl-nginx-os-x">Comodo</a></p></li><li><p><a href="https://support.globalsign.com/customer/portal/articles/1229769-certificate-signing-request-csr---overview">GlobalSign</a></p></li><li><p><a href="https://www.digicert.com/csr-creation.htm">Digicert</a></p></li></ul><p>In the future we plan on releasing tools to make certificate generation even easier and more automatic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/gkjU9sNg6L0UT6j0x6keC/bd0b762f12482a99f8a66eba329b7331/cloudflare_ssl-week-2.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[CFSSL]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Certificate Authority]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">47jKviMlM0kk9qsg8TQ3xz</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[TLS Session Resumption: Full-speed and Secure]]></title>
            <link>https://blog.cloudflare.com/tls-session-resumption-full-speed-and-secure/</link>
            <pubDate>Tue, 24 Feb 2015 14:20:26 GMT</pubDate>
            <description><![CDATA[ At CloudFlare, making web sites faster and safer at scale is always a driving force for innovation. We introduced “Universal SSL” to dramatically increase the size of the encrypted web. ]]></description>
            <content:encoded><![CDATA[ <p>At CloudFlare, making web sites faster and safer at scale is always a driving force for innovation. We introduced “<a href="/universal-ssl-how-it-scales/">Universal SSL</a>” to dramatically increase the size of the encrypted web. In order for that to happen we knew we needed to efficiently handle large volumes of HTTPS traffic, and give end users the fastest possible performance.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2wjvWjC2tNe2fmxUhuWLUN/ff2b81ce5075aa45d5f78a52a1343f42/7439386300_837724fe8e_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/ecossystems/7439386300/in/photolist-ckoNn3-ckoMrj-ckoDLC-2avSAw-ckoMTE-cknaTs-cknaL7-nGDFzz-nvRvYP-8TqyZC-sszBV-4NV8t8-6KsRdt-9A92Ci-4yuwtQ-9X7Bkd-7kERzD-fUG4DG-7UGXkZ-8F4mG3-ar89v8-8TqJW5-8TnCai-cJorC1-cKPf55-nr6F4s-wrLjf-aNqxbR-eVfMiZ-dJRj8E-jogBj4-feJ1Y-49rZz6-pv8QoU-cJorDA-a6tf2m-4VoZZs-9qWH8F-bvb1X6-aoa1zS-jogRMo-7UGWb2-joqdky-9Q51D7-gdczv-4Lu4qb-6R4VUd-8Tqojq-6QZSQB-6QZSEx">image</a> by <a href="https://www.flickr.com/photos/ecossystems/">ecos systems</a></p><p>In this article, I’ll explain how we added speed to Universal SSL with session resumptions across multiple hosts, and explain the design decisions we made in this process. Currently, we use two standardized session resumption mechanisms that require two different data sharing designs: Session IDs <a href="https://tools.ietf.org/html/rfc5246">RFC 5246</a>, and Session Tickets <a href="https://tools.ietf.org/html/rfc5077">RFC 5077</a>.</p>
    <div>
      <h2>Session ID Resumption</h2>
      <a href="#session-id-resumption">
        
      </a>
    </div>
    <p>Resuming an encrypted session through a session ID means that the server keeps track of recent negotiated sessions using unique session IDs. This is done so that when a client reconnects to a server with a session ID, the server can quickly look up the session keys and resume the encrypted communication.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XN2D3lCPIVBbE7SazQkS5/3cc478f12f8bf390c2d7365ea2e2288f/session_resumption_with_session_id.jpg" />
            
            </figure><p>At each of CloudFlare’s PoPs (Point of Presence) there are multiple hosts handling HTTPS traffic. When the client attempts to resume a TLS connection with a web site, there is no guarantee that they will connect to the same physical machine that they connected to previously. Without session sharing, the success rate of session ID resumption could be as low as 1/n (when there are n hosts). That means the more hosts we have, the less likely a session can be resumed. This goes directly against our goal of scaling SSL performance!</p><p>CloudFlare’s solution to this problem is to share the sessions within the PoP, making the successful resumption rate approach 100%.</p>
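That 1/n estimate is easy to sanity-check with a quick simulation (a toy sketch, not CloudFlare code): if each reconnecting client lands on a uniformly random host, and only the host that negotiated the session remembers it, the resumption rate collapses as hosts are added.

```python
import random

def resumption_rate(n_hosts, trials=100_000, seed=1):
    """Fraction of reconnects that happen to land on the host that
    originally negotiated the session (no session sharing)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        first = rng.randrange(n_hosts)   # host that stored the session
        again = rng.randrange(n_hosts)   # host the client reconnects to
        hits += (first == again)
    return hits / trials

for n in (1, 2, 8, 32):
    print(n, round(resumption_rate(n), 3))
```

With 32 hosts in a PoP, only about 3% of reconnects would resume without sharing, which is why the shared cache matters.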
    <div>
      <h2>How sessions are shared</h2>
      <a href="#how-sessions-are-shared">
        
      </a>
    </div>
    <p>We employ a memcached cluster to cache all the recent negotiated sessions from all the hosts within the same PoP. To enhance the secrecy and security of session keys, all cached sessions are encrypted. When a new session with a session ID is negotiated, a host will encrypt the new session and insert it to memcached, indexed by the session ID. When a host needs to look up a session for session resumption, it will query memcached using the session ID as the key and decrypt the cached session to resume it. All those operations happen as non-blocking asynchronous calls thanks to the power of <a href="http://openresty.org/">OpenResty</a>, and many handy OpenResty modules such as <a href="https://github.com/openresty/lua-resty-memcached">the fully asynchronous memcached client</a>. We also needed tweaks in OpenSSL to support asynchronous session caching.</p><p>I’d like to send a few shout-outs to my amazing colleagues Piotr Sikora and Yichun Zhang for making this project possible.</p>
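The caching scheme above can be sketched as follows. This is a toy model, not CloudFlare's implementation: a Python dict stands in for the PoP-wide memcached cluster, and the `_seal`/`_open` pair stands in for real encryption of cached sessions (here only an HMAC tag for integrity; a production cache would also encrypt the session state, e.g. with AES-GCM). All names are hypothetical.

```python
import hmac, hashlib, secrets

CACHE_KEY = secrets.token_bytes(32)  # shared by hosts in the PoP (illustrative)

def _seal(plaintext: bytes) -> bytes:
    # Stand-in for real authenticated encryption of the session state.
    tag = hmac.new(CACHE_KEY, plaintext, hashlib.sha256).digest()
    return plaintext + tag

def _open(blob: bytes) -> bytes:
    plaintext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(CACHE_KEY, plaintext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("corrupt cached session")
    return plaintext

memcached = {}  # dict standing in for the memcached cluster

def store_session(session_id: bytes, session_state: bytes):
    """Called by the host that negotiated the session."""
    memcached[session_id] = _seal(session_state)

def lookup_session(session_id: bytes):
    """Called by whichever host the client reconnects to."""
    blob = memcached.get(session_id)
    return _open(blob) if blob is not None else None
```

Any host in the PoP can serve the resumption, because the lookup is keyed only by session ID, not by which host negotiated the session.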
    <div>
      <h2>Performance Improvement</h2>
      <a href="#performance-improvement">
        
      </a>
    </div>
    <p>Using OpenSSL’s <a href="https://www.openssl.org/docs/apps/s_client.html">s_client</a> utility, we can quickly test how a session ID speeds up the TLS connection from the client side. We tested the TLS performance of <a href="https://www.cloudflare.com">www.cloudflare.com</a> from our office; the results are shown below:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3IccBdOGPS42EqiK8JiH7z/58333544b71bece5376737d367eb03d2/figure_1-1.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/60Ar1nt9o6ku1Fr9Q7PwzD/de3e82ec883ab6aade8255b765f1a3f4/figure_2.png" />
            
            </figure><p>The overall cost of a session resumption is less than 50% of a full TLS handshake, mainly because session resumption only costs one round-trip while a full TLS handshake requires two. Moreover, a session resumption does not require any large finite field arithmetic (new sessions do), so the CPU cost for the client is almost negligible compared to that of a full TLS handshake. For mobile users, the performance improvement by session resumption means a much more responsive and battery-life-friendly surfing experience.</p>
    <div>
      <h2>Session Ticket Resumption</h2>
      <a href="#session-ticket-resumption">
        
      </a>
    </div>
    <p>Session resumption with session IDs has a major limitation: servers are responsible for remembering negotiated TLS sessions for a given period of time. It poses scalability issues for servers with a large load of concurrent connections per second and for servers that want to cache sessions for a long time. Session ticket resumption is designed to address this issue.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4aO6djKlA6MK8GFN6Isspa/ada797115af7ab8f068152b8b61752bb/session_resumption_with_session_ticket.jpg" />
            
            </figure><p>The idea is simple: outsource session storage to clients. A session ticket is a blob of <a href="https://tools.ietf.org/html/rfc5077#section-4">a session key and associated information</a> encrypted with a key known only to the server. The ticket is sent by the server at the end of the TLS handshake. Clients supporting session tickets will cache the ticket along with the current session key information. Later, the client includes the session ticket in the handshake message to indicate it wishes to resume the earlier session. The server on the other end can then decrypt the ticket, recover the session key, and resume the session.</p><p>Now consider what happens when every host in the same PoP uses the same encryption key. The good news is that every host is now able to decrypt the session ticket and resume the session for the client. The not-so-good news is that this key becomes a critical single point of failure for TLS security: if an adversary gets hold of it, the session key information is exposed for every session ticket! Even after the lifetime of a session ticket, such a loss would invalidate the promised “perfect forward secrecy” (as evangelized <a href="/staying-on-top-of-tls-attacks/">here on our blog</a>). Therefore, it is important to:</p><blockquote><p>“generate session ticket keys randomly, distribute them to the servers without ever touching persistent storage and rotate them frequently.” (<a href="https://www.imperialviolet.org/2013/06/27/botchingpfs.html">Adam Langley</a>)</p></blockquote>
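To make the statelessness concrete, here is a toy sketch of issuing and redeeming a ticket. It is illustrative only: the blob is authenticated with an HMAC but, unlike real RFC 5077 tickets, is not also encrypted; `TICKET_KEY`, `issue_ticket`, and `redeem_ticket` are hypothetical names, not a real API.

```python
import hmac, hashlib, json

TICKET_KEY = b"k" * 32  # known only to the servers (illustrative constant)

def issue_ticket(session_state: dict) -> bytes:
    """Pack the session state into a ticket the client will store.
    The server itself keeps nothing, which is the whole point."""
    body = json.dumps(session_state, sort_keys=True).encode()
    tag = hmac.new(TICKET_KEY, body, hashlib.sha256).digest()
    return tag + body

def redeem_ticket(ticket: bytes):
    """Any server holding TICKET_KEY can recover the session state."""
    tag, body = ticket[:32], ticket[32:]
    expected = hmac.new(TICKET_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted: fall back to a full handshake
    return json.loads(body)
```

The server stores no per-session state between `issue_ticket` and `redeem_ticket`; everything needed to resume travels with the client.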
    <div>
      <h2>How session encryption keys are encrypted, shared and rotated</h2>
      <a href="#how-session-encryption-keys-are-encrypted-shared-and-rotated">
        
      </a>
    </div>
    <p>To meet all these security goals, we first start an in-memory key generator daemon that generates a fresh, timestamped key every hour. Keys are encrypted so that only our nginx servers can decrypt them. Then, with CloudFlare’s existing <a href="/kyoto-tycoon-secure-replication/">secure data propagation infrastructure</a>, ticket keys replicate from one primary instance to all of our PoPs around the world. Each host periodically queries the local copy of the database through a memcached interface for fresh encryption keys for the current hour. To summarize: the key generation daemon generates keys randomly and rotates them hourly, and keys are distributed to all hosts across the globe securely without ever being written to disk.</p><p>Some technical details are still worth mentioning. First, we need to tackle distributed clock synchronization. For example, one host might think it is 12:01pm UTC while other hosts still think it is 11:59am UTC. The faster-clock host might start encrypting session tickets with the 12:00pm key while other hosts cannot decrypt those tickets because they don’t know the new key yet. Or the fast-clock host might find the key is not yet available due to propagation delay. Instead of trying to synchronize clocks precisely, we solve the problem by removing the synchronization requirement: the key daemon generates keys one hour ahead, and each host opportunistically saves the key for the next hour (if there is one) as a decryption-only key. Now, even with one or more faster-clock hosts, session resumption by ticket still works without interruption, because every host can decrypt session tickets encrypted by any other.</p><p>We also set the session ticket lifetime hint to 18 hours, the same value as the SSL session timeout, and each server keeps ticket keys for the past 18 hours for ticket decryption.</p>
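The skew-tolerance logic can be modeled in a few lines (a sketch under stated assumptions, not CloudFlare's daemon): keys are indexed by hour number, a host encrypts with its own current hour's key, and for decryption it accepts any key it holds that is at most the ticket lifetime old, including the next-hour key that was pushed out ahead of time.

```python
import secrets

HOUR = 3600  # seconds

class TicketKeyRing:
    """Toy model of the hourly ticket-key schedule. Class and method
    names are hypothetical, chosen for this illustration."""

    def __init__(self, lifetime_hours=18):
        self.lifetime = lifetime_hours
        self.keys = {}  # hour number -> key bytes

    def add_key(self, hour: int):
        # In the real system keys arrive via secure replication;
        # here we just generate them locally.
        self.keys.setdefault(hour, secrets.token_bytes(32))

    def encryption_key(self, now: int):
        """Key used to encrypt new tickets at this host's local time."""
        return self.keys[now // HOUR]

    def decryption_key(self, key_hour: int, now: int):
        """Accept past keys within the ticket lifetime, plus the
        pre-distributed next-hour key (key_hour may exceed now's hour
        by one if another host's clock runs fast)."""
        if key_hour in self.keys and now // HOUR - key_hour <= self.lifetime:
            return self.keys[key_hour]
        return None  # unknown or expired key: full handshake instead
```

A fast-clock host already in hour 13 encrypts with the hour-13 key; a slow-clock host still in hour 12 can decrypt that ticket because hour 13's key was distributed an hour in advance.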
    <div>
      <h2>Conclusions</h2>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>To summarize, we support TLS session resumption globally using both session IDs and session tickets. For any web site on CloudFlare’s network, HTTPS performance has been made faster for every user and every device.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WloH8zZ0HRguIGNqe9Hu4/599874036b97e67327bbfa3b42afdcaf/cloudflare_ssl-week-2.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">58HqMls1vQte6DzOjtKmOg</guid>
            <dc:creator>Zi Lin</dc:creator>
        </item>
        <item>
            <title><![CDATA[End of the road for RC4]]></title>
            <link>https://blog.cloudflare.com/end-of-the-road-for-rc4/</link>
            <pubDate>Mon, 23 Feb 2015 18:51:04 GMT</pubDate>
            <description><![CDATA[ Today, we completely disabled the RC4 encryption algorithm for all SSL/TLS connections to CloudFlare sites. It's no longer possible to connect to any site that uses CloudFlare using RC4. ]]></description>
            <content:encoded><![CDATA[ <p>Today, we completely disabled the RC4 encryption algorithm for all SSL/TLS connections to CloudFlare sites. It's no longer possible to connect to any site that uses CloudFlare using RC4.</p><p>Over a year ago, we <a href="/killing-rc4/">disabled RC4 for connections for TLS 1.1 and above</a> because there were more secure algorithms available. In May 2014, we <a href="/killing-rc4-the-long-goodbye/">deprecated RC4</a> by moving it to the lowest priority in our list of cipher suites. That forced any browser that had a good alternative to RC4 to use it. Those two changes meant that almost everyone who was using RC4 to connect to CloudFlare sites switched to a more secure protocol.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/9UpKzGh7EBXIKuhQGtKDb/52af7b5b35a730a947da13b44764ebe5/593px-Nokia_6120_Classic_alga_01.jpg" />
            
            </figure><p>Back in May, we noted that <a href="/the-web-is-world-wide-or-who-still-needs-rc4/">some people still needed RC4</a>, particularly people using old mobile phones and some Windows XP users. At the time, 4% of requests using RC4 came from a single phone type: the <a href="https://en.wikipedia.org/wiki/Nokia_6120_classic">Nokia 6120</a>.</p><p>We also noted then that roughly 0.000002% of requests to CloudFlare were using the RC4 protocol. In the last 9 months, that number has halved, and so, although some people are still using RC4, we have decided to turn off the protocol. It's simply no longer secure.</p><p>The remaining users are almost all on old phones and Windows XP (those two groups make up 80% of the RC4-based requests). But we are still seeing some connections from SSL-intercepting proxy software that's using RC4. To repeat what we said in May:</p><p><i>Digging into the User-Agent data for the US, we see the following web browser being used to access CloudFlare-powered sites using RC4:</i></p>
            <pre><code>Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.137 Safari/537.36</code></pre>
            <p><i>That's the most recent version of Google Chrome running on Windows 7 (you can see the presence in Windows 7 in the chart above). That should not be using RC4. In fact, most of the connections from Windows machines that we see using RC4 should not be (since we prioritize 3DES over RC4 for older machines).</i></p><p><i>It was initially unclear why this was happening until we looked at where the connections were coming from. They were concentrated in the US and Brazil and most seemed to be coming from IP addresses used by schools, hospitals, and other large institutions.</i></p><p><i>Although the desktop machines in these locations have recent Windows and up to date browsers (which will not use RC4) the networks they are on are using SSL-based VPNs or firewalls that are performing on-path attacker monitoring of SSL connections.</i></p><p><i>This enables them to filter out undesirable sites, even those that are accessed using HTTPS, but it appears that the VPN/firewall software is using older cipher suites. That software likely needs updating to stop it from using RC4 for secure connections.</i></p><p>Since May, that situation has remained largely unchanged: there are some institutions doing SSL-interception (probably for IDS or policy enforcement reasons) that use RC4 for outbound connections, and many apparent individuals running software that does the same.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/xWjjFUCTekWvJbZpTot18/62dc3ba44ff81c03e22b74248ae13dfa/Screen-Shot-2015-02-23-at-10-50-04-1.png" />
            
            </figure><p>We've been continually tracking what's happening in the academic community around RC4 attacks and the slow death of RC4 as people switch from old devices to newer ones.</p><p>With both a decline in RC4 connections to CloudFlare and whispers of another, easier attack on RC4 in the academic community, we've decided the time is right to disable RC4 completely.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ToJQPuD7viATrLnNh4076/0797e22b1191fb6b8ad1db46ca1c0180/cloudflare_ssl-week-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[RC4]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Crypto Week]]></category>
            <guid isPermaLink="false">7LKlU2Og7VE3RifdpqNM0Z</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Origin Server Connection Security with Universal SSL]]></title>
            <link>https://blog.cloudflare.com/origin-server-connection-security-with-universal-ssl/</link>
            <pubDate>Mon, 29 Sep 2014 23:14:00 GMT</pubDate>
            <description><![CDATA[ Earlier today, CloudFlare enabled Universal SSL: HTTPS support for all sites by default. Universal SSL provides state-of-the-art encryption between browsers and CloudFlare’s edge servers keeping web traffic private and secure from tampering. ]]></description>
            <content:encoded><![CDATA[ <p>Earlier today, CloudFlare enabled Universal SSL: HTTPS support for all sites by default. Universal SSL provides state-of-the-art encryption between browsers and CloudFlare’s edge servers keeping web traffic private and secure from tampering.</p><p>CloudFlare’s Flexible SSL mode is the default for CloudFlare sites on the Free plan. Flexible SSL mode means that traffic from browsers to CloudFlare will be encrypted, but traffic from CloudFlare to a site's origin server will not be. To take advantage of our <a href="https://www.cloudflare.com/ssl">Full and Strict SSL</a> mode—which encrypts the connection between CloudFlare and the origin server—it’s necessary to install a certificate on the origin server.</p><p>We made Universal SSL free so that everyone can use modern, strong encryption tools to protect their web traffic. More encrypted traffic helps build a safer, better Internet. In keeping with CloudFlare’s goal to help build a better Internet, we have some tips on how to upgrade your site from Flexible SSL to Full or Strict SSL.</p>
    <div>
      <h3>Option 1: Full SSL: create a self-signed certificate</h3>
      <a href="#option-1-full-ssl-create-a-self-signed-certificate">
        
      </a>
    </div>
    <p>Dealing with Certificate Authorities (CAs) can be frustrating, and the process of obtaining a certificate can be time consuming. In the meantime, you can get started by installing a self-signed certificate on your origin server. This allows CloudFlare to encrypt the communication with the origin, protecting it against passive surveillance, but not against <a href="/introducing-strict-ssl-protecting-against-a-man-in-the-middle-attack-on-origin-traffic/">active attackers</a>.</p><p>Our <a href="https://github.com/cloudflare/cfssl/wiki/Creating-a-new-CSR">handy CSR guide</a> for <a href="https://github.com/cloudflare/cfssl">CFSSL</a> describes how to generate a self-signed certificate. Using <a href="http://msol.io/blog/tech/2013/10/06/create-a-self-signed-ecc-certificate/">OpenSSL to create it</a> is another option.</p><p>Once you have created a self-signed certificate and private key, you can install them on your origin server. Digicert has a guide for <a href="https://www.digicert.com/ssl-certificate-installation.htm">installing a certificate</a> that covers the most popular server software.</p><p>Keep in mind that a self-signed certificate is not signed by a trusted CA. This means that you can change your SSL setting from Flexible SSL to Full, but not Full (strict). Full SSL won’t be able to provide authentication, but it will make sure the connection to the origin is encrypted and protected from passive snoopers.</p>
    <div>
      <h3>Option 2: Strict SSL: get a certificate from a trusted CA</h3>
      <a href="#option-2-strict-ssl-get-a-certificate-from-trusted-ca">
        
      </a>
    </div>
    <p>Most CAs offer low-cost or even free certificates. A popular CA that offers <a href="https://www.cloudflare.com/application-services/products/ssl/">free SSL certificates</a> is <a href="https://www.startssl.com/">StartSSL</a>. Buying and installing a trusted certificate on your origin server is currently the simplest way to enable Strict SSL on your site.</p><p>To enable TLS on your server, you need both a certificate and a corresponding private key. The first step in obtaining a certificate from a CA is creating a Certificate Signing Request (CSR). A CSR contains your public key and a proof that you have the associated private key. The CA will verify it and give you back a certificate that you install on your web server. We <a href="https://github.com/cloudflare/cfssl/wiki/Creating-a-new-CSR">put together a guide</a> to creating a private key and CSR with CloudFlare’s CFSSL tool that you can use, or alternatively, there’s always <a href="http://www.rackspace.com/knowledge_center/article/generate-a-csr-with-openssl">OpenSSL</a>.</p><p>Once you have a certificate installed on your origin server, you can change your SSL setting from Flexible to Full (strict) and have the added benefit of an authenticated and encrypted connection to your origin server.</p>
    <div>
      <h3>Option 3: (sneak preview) The CloudFlare Origin CA/Certificate Pinning</h3>
      <a href="#option-3-sneak-preview-the-cloudflare-origin-ca-certificate-pinning">
        
      </a>
    </div>
    <p>Soon you will be able to send your CSR to CloudFlare to get a certificate instantaneously, speeding up the certificate acquisition process. This process will be like that of a regular CA, but much faster. These certificates aren't yet trusted by browsers, but <i>will</i> be trusted by CloudFlare, allowing the back end connection to be both encrypted and authenticated. This also protects your site if one of the publicly trusted certificate authorities is compromised <a href="http://threatpost.com/final-report-diginotar-hack-shows-total-compromise-ca-servers-103112/77170">by attackers</a> and used to issue <a href="http://www.zdnet.com/indian-government-agency-issues-fake-google-certificates-7000031396/">illegitimate certificates</a>.</p><p>We’re also investigating the possibility of adding a feature called <a href="https://www.google.com/webhp?sourceid=chrome-instant&amp;ion=1&amp;espv=2&amp;ie=UTF-8#safe=active&amp;q=certificate+pinning">Certificate Pinning</a>. Certificate Pinning would allow you to tell CloudFlare exactly which certificate to trust for your origin. This would allow customers to use hosting services that don’t allow custom certificates to have the benefit of a fully encrypted tunnel, or to simply use a self-signed certificate and get the benefit of both authentication and encryption.</p> ]]></content:encoded>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Universal SSL]]></category>
            <category><![CDATA[Encryption]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[CFSSL]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">3gC9ZZvYX0en9ddjBpvtRS</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
    </channel>
</rss>