
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 12:29:48 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Protect your key server with Keyless SSL and Cloudflare Tunnel integration]]></title>
            <link>https://blog.cloudflare.com/protect-your-key-server-with-keyless-ssl-and-cloudflare-tunnel-integration/</link>
            <pubDate>Thu, 16 Mar 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Now, customers will be able to use our Cloudflare Tunnel product to send traffic to the key server through a secure channel, without publicly exposing it to the rest of the Internet. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we’re excited to announce a big security enhancement to our Keyless SSL offering. Keyless SSL allows customers to store their private keys on their own hardware, while continuing to use Cloudflare’s proxy services. In the past, the configuration required customers to expose the location of their key server through a DNS record, something that is publicly queryable. Now, customers will be able to use our Cloudflare Tunnel product to send traffic to the key server through a secure channel, without publicly exposing it to the rest of the Internet.</p>
    <div>
      <h3>A primer on Keyless SSL</h3>
      <a href="#a-primer-on-keyless-ssl">
        
      </a>
    </div>
    <p>Security has always been a critical aspect of online communication, especially when it comes to protecting sensitive information. Today, Cloudflare manages private keys for millions of domains, which allows the data communicated by a client to stay secure and encrypted. While Cloudflare adopts the strictest controls to secure these keys, certain industries such as financial or medical services may have compliance requirements that prohibit the sharing of private keys. In the past, Cloudflare required customers to upload their private key in order for us to provide our L7 services. That was the case until we built Keyless SSL in 2014, a feature that allows customers to keep their private keys stored on their own infrastructure while continuing to make use of Cloudflare’s services.</p><p>While Keyless SSL is compatible with any hardware that supports the PKCS#11 standard, Keyless SSL users frequently opt to secure their private keys within HSMs (Hardware Security Modules): specialized machines designed to be tamper-proof, resistant to unauthorized access or manipulation, and optimized to efficiently execute cryptographic operations such as signing and decryption. To make it easy for customers to set this up, during Security Week in 2021, we <a href="/keyless-ssl-supports-fips-140-2-l3-hsm/">launched</a> integrations between Keyless SSL and HSM offerings from all major cloud providers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wuHcCOkcDcvmXTGLrFQho/2b984dacb313cd7e85da2fb2e57e3321/image1-36.png" />
            
            </figure>
    <div>
      <h3>Strengthening the security of key servers even further</h3>
      <a href="#strengthening-the-security-of-key-servers-even-further">
        
      </a>
    </div>
    <p>In order for Cloudflare to communicate with a customer’s key server, we have to know the IP address associated with it. To configure Keyless SSL, we ask customers to create a DNS record that indicates the IP address of their key server. As a security measure, we ask customers to keep this record under a long, random hostname such as “11aa40b4a5db06d4889e48e2f738950ddfa50b7349d09b5f.example.com”. While this adds a layer of obfuscation to the location of the key server, it does expose the IP address of the key server to the public Internet, allowing anyone to send requests to that server. We lock down the connection between Cloudflare and the key server with mutual TLS, so that the key server only accepts requests that present a valid Cloudflare client certificate. While this allows the key server to drop any requests with an invalid or missing client certificate, the key server is still publicly exposed, making it susceptible to attacks.</p>
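<p>To illustrate, a label with enough entropy for such a hostname can be produced from any cryptographically strong random source. A minimal sketch using OpenSSL (the label length and example.com zone are arbitrary placeholders):</p>

```shell
# 24 random bytes -> a 48-character hex label for the Keyless DNS record.
# example.com stands in for the customer's zone.
LABEL=$(openssl rand -hex 24)
echo "${LABEL}.example.com"
```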
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6jRWZW8bTeCVotnsB8nbwI/1b83910f4917f4555abd27992a3378db/image6-8.png" />
            
            </figure><p>Instead, Cloudflare should be the only party that knows about this key server’s location, as it should be the only party making requests to it.</p>
    <div>
      <h3>Enter: Cloudflare Tunnel</h3>
      <a href="#enter-cloudflare-tunnel">
        
      </a>
    </div>
    <p>Instead of re-inventing the wheel, we decided to make use of an existing Cloudflare product that our customers use to protect the connections between Cloudflare and their origin servers: Cloudflare Tunnel!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/efEdmsMhMKZINZeiqaEbj/19f607592f8a07ae67ab9ac9aad574b6/image4-11.png" />
            
            </figure><p>Cloudflare Tunnel gives customers the tools to connect incoming traffic to their private networks without exposing those networks to the Internet through a public hostname. It works by having customers install a Cloudflare daemon, called “cloudflared” which Cloudflare’s client will then connect to.</p><p>Now, customers will be able to use the same functionality but for connections made to their key server.</p>
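<p>For a sense of what this looks like on the key server side, a minimal cloudflared configuration that carries private-network traffic might resemble the sketch below. This is illustrative only: the tunnel UUID and credentials path are placeholders, and option names can vary between cloudflared releases.</p>

```yaml
# /etc/cloudflared/config.yml -- illustrative sketch, not a verbatim config
tunnel: your-tunnel-uuid
credentials-file: /etc/cloudflared/your-tunnel-uuid.json
# Allow traffic routed to the private network (e.g. the key server's
# internal IP) to flow through this tunnel.
warp-routing:
  enabled: true
```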
    <div>
      <h3>Getting started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7cwgNSFiwyruWuHOlNDBKO/ff04a1fc33f4de532cea36570e7fa712/image2-20.png" />
            
            </figure><p>To set this up, customers will need to configure a virtual network on Cloudflare; this is where customers will tell us the IP address or hostname of their key server. Then, when uploading a Keyless certificate, instead of telling us the public hostname associated with the key server, customers will be able to tell us the virtual network that resolves to it. When making requests to the key server, Cloudflare’s gokeyless client will automatically connect to the “cloudflared” server and will continue to use mutual TLS as an additional security layer on top of that connection. For more instructions on how to set this up, check out our <a href="https://developers.cloudflare.com/ssl/keyless-ssl/configuration/">Developer Docs</a>.</p><p>If you’re an Enterprise customer and are interested in using Keyless SSL in conjunction with Cloudflare Tunnel, reach out to your account team today to get set up.</p>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <guid isPermaLink="false">1FZfLi9GFmCGG0PEQwLhHw</guid>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Keyless SSL now supports FIPS 140-2 L3 hardware security module (HSM) offerings from all major cloud providers]]></title>
            <link>https://blog.cloudflare.com/keyless-ssl-supports-fips-140-2-l3-hsm/</link>
            <pubDate>Sat, 27 Mar 2021 13:01:00 GMT</pubDate>
            <description><![CDATA[ Private encryption keys stored in hardware security module offerings from all major cloud providers can now be used to secure HTTPS connections at Cloudflare’s global edge.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p><b>Private encryption keys stored in hardware security module offerings from all major cloud providers can now be used to secure HTTPS connections at Cloudflare’s global edge</b>.</p><p>Cloudflare generates, protects, and manages more SSL/TLS private keys than perhaps any organization in the world. Private keys must be carefully protected, as an attacker in possession of one can impersonate legitimate sites and decrypt HTTPS requests. To mitigate this risk, Cloudflare has strict key handling procedures and <a href="/going-keyless-everywhere/">layers of isolation</a> at the edge that are designed to safeguard keys at all costs. But for a small minority of customers with <a href="https://www.cloudflare.com/learning/security/what-is-information-security/">information security policies</a> dictating where they can (or cannot) custody their keys, these protections do not meet their requirements.</p><p>It was for these customers that we <a href="/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/">first released Keyless SSL in 2014</a>, a protocol we use extensively inside our network: all of the TLS handshakes established at the Cloudflare edge take place in a process that has no access to our customers’ private keys. The data required to establish the session is instead sent to a separate system, where the necessary cryptographic signing operation is performed. For keys uploaded to or generated by Cloudflare, we manage this other system, but some customers wish to manage it themselves.</p>
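<p>The division of labor can be illustrated with plain OpenSSL: the party holding the private key signs the handshake data, and the front end needs only the public half to verify the result. This is a local sketch of the principle, not the Keyless protocol itself; the file names and message are arbitrary.</p>

```shell
# "Key server" side: generate a keypair and sign a stand-in for the
# handshake transcript using the private key.
openssl genrsa -out key.pem 2048
openssl rsa -in key.pem -pubout -out pub.pem
echo "handshake-transcript" > data.txt
openssl dgst -sha256 -sign key.pem -out sig.bin data.txt

# "Front end" side: verify with only the public key -- the private key
# never has to leave the key server.
openssl dgst -sha256 -verify pub.pem -signature sig.bin data.txt
```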
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TvqVFrjxysKMtotYnQkQo/7f4c325d764571621141733445a8e99f/image2-47.png" />
            
            </figure><p>Historically the keys were placed on the server running the <a href="https://github.com/cloudflare/gokeyless">open source gokeyless daemon</a> we provide to process the handshake, or secured in an on-prem hardware security module (HSM) that gokeyless interfaces with using a standard protocol known as PKCS#11. However, as financial services, healthcare, cryptocurrency, and other highly regulated or security-focused companies have moved to the cloud, they cannot simply ship these expensive boxes and ask Amazon, Google, IBM, or Microsoft to rack and stack them.</p><p>For these customers, especially those whose information security policies mandate <a href="https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf">FIPS 140-2 Level 3</a> validated HSMs, we are announcing that Keyless SSL now supports the following cloud-hosted HSMs: <a href="https://aws.amazon.com/cloudhsm/">Amazon Cloud HSM</a>; <a href="https://cloud.google.com/kms/docs/hsm">Google Cloud HSM</a>; <a href="https://cloud.ibm.com/catalog/infrastructure/hardware-security-module">IBM Cloud HSM</a>; <a href="https://azure.microsoft.com/en-us/services/azure-dedicated-hsm/">Microsoft Azure Dedicated HSM</a> and <a href="https://azure.microsoft.com/en-us/updates/akv-managed-hsm-public-preview/">Managed HSM</a>. We also support any other HSM that implements the PKCS#11 standard, including series such as the <a href="https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/ncipher-thales-nshield-connect">nCipher nShield Connect</a> and Thales Luna.</p>
    <div>
      <h3>HSM overview</h3>
      <a href="#hsm-overview">
        
      </a>
    </div>
    <p>HSMs are purpose-built machines that are tamper-resistant, hardened against weaknesses such as side-channel attacks, and optimized to perform cryptographic operations such as signing and decryption. They can be deployed as stand-alone boxes, as expansion cards inserted into servers, or, most recently, as cloud services.</p><p>Rather than generate private keys on a server and upload them to the HSM, vendors and security experts typically recommend that keys are generated on (and never leave) the device itself. HSMs have better randomness guarantees than general-purpose servers, and take special precautions to protect keys in memory before synchronizing them to disk in an encrypted state. When an operation requires the private key, services make authenticated API calls into the device using libraries provided by the HSM vendor.</p>
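<p>With PKCS#11-based HSMs, the device and key are commonly addressed with a PKCS#11 URI (RFC 7512). The fragment below is a hypothetical example of how gokeyless might be pointed at such a key; the token and object names, module path, and PIN are all placeholders for values supplied by your HSM vendor.</p>

```yaml
# Hypothetical gokeyless private_key_stores entry using a PKCS#11 URI.
# token/object identify the key inside the HSM; module-path points to the
# vendor's PKCS#11 library; pin-value authenticates the session.
private_key_stores:
  - uri: "pkcs11:token=MyHSMToken;object=my-tls-key?module-path=/usr/lib/libvendor-pkcs11.so&pin-value=1234"
```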
    <div>
      <h3>HSMs and FIPS 140-2 level 3</h3>
      <a href="#hsms-and-fips-140-2-level-3">
        
      </a>
    </div>
    <p>HSMs are typically validated against the Federal Information Processing Standard (FIPS) publication 140-2: <a href="https://csrc.nist.gov/publications/detail/fips/140/2/final">Security Requirements for Cryptographic Modules</a>. There are four levels — 1 through 4 — that specify increasingly stringent requirements around approved algorithms, security functions, <a href="https://www.cloudflare.com/learning/access-management/role-based-access-control-rbac/">role-based access control</a>, and tamper-evident/tamper-resistant protections.</p><p>The National Institute of Standards and Technology (NIST) publishes these guidelines, administers the <a href="https://csrc.nist.gov/projects/cryptographic-module-validation-program/validated-modules">Cryptographic Module Validation Program</a>, and publishes a <a href="https://csrc.nist.gov/projects/cryptographic-module-validation-program/validated-modules/search">searchable database of validated modules</a>, which includes the offerings listed below. We have provided instructions on how to use them with Cloudflare.</p>
    <div>
      <h3>Getting started with cloud offerings</h3>
      <a href="#getting-started-with-cloud-offerings">
        
      </a>
    </div>
    <p>All existing Keyless SSL customers can immediately make use of this technology, and you can read instructions for doing so at <a href="https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules">https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules</a>. Source code is available on GitHub: <a href="https://github.com/cloudflare/gokeyless">https://github.com/cloudflare/gokeyless</a>.</p>
    <div>
      <h3>End-to-end example: Microsoft Azure Managed HSM</h3>
      <a href="#end-to-end-example-microsoft-azure-managed-hsm">
        
      </a>
    </div>
    <p>Microsoft’s Azure Key Vault team <a href="https://azure.microsoft.com/en-us/updates/akv-managed-hsm-public-preview/">released Managed HSM</a>. The offering is FIPS 140-2 Level 3 validated and is integrated with Azure services such as Azure Storage, Azure SQL, and Azure Information Protection. Managed HSM is available in the following regions: East US 2, South Central US, North Europe, and West Europe.</p><p>The instructions below are taken from the <a href="https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/quick-create-cli">Quickstart: Provision and activate a managed HSM using Azure CLI</a> guide, followed by instructions for using the Managed HSM with Cloudflare. The commands were run on an Ubuntu VM created in the same region (South Central US) as the HSM; this is also where we will deploy the Cloudflare Keyless SSL daemon.</p>
    <div>
      <h3>Provision and activate the HSM</h3>
      <a href="#provision-and-activate-the-hsm">
        
      </a>
    </div>
    <p>First we log in via the CLI and create a resource group for the Managed HSM in one of the supported regions. Note that you may get warnings from various commands based on the preview status of the offering.</p>
            <pre><code>$ LOCATION=southcentralus; GROUPNAME="HSMgroup"; HSMNAME="KeylessHSM"
$ az login
$ az group create --name $GROUPNAME --location $LOCATION</code></pre>
            <p>Next, we provision the HSM resource and activate it by downloading the <a href="https://docs.microsoft.com/en-us/azure/key-vault/managed-hsm/security-domain">security domain</a>. The example below grants administrative access to the signed-in user, along with another administrator whose OID can be retrieved by running the same az ad signed-in-user show command from a CLI where that user is logged in.</p>
            <pre><code>$ MYOID=$(az ad signed-in-user show --query objectId -o tsv)
$ OTHERADMIN_OID=...

$ az keyvault create --hsm-name $HSMNAME --resource-group $GROUPNAME --location $LOCATION --administrators $MYOID $OTHERADMIN_OID

Argument '--hsm-name' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Argument '--administrators' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
{- Finished ..
  "id": "/subscriptions/.../resourceGroups/HSMgroup/providers/Microsoft.KeyVault/managedHSMs/Keyles
sHSM",
  "location": "southcentralus",
  "name": "KeylessHSM",
  "properties": {
    "createMode": null,
    "enablePurgeProtection": false,
    "enableSoftDelete": true,
    "hsmUri": "https://keylesshsm.managedhsm.azure.net/",
    "initialAdminObjectIds": [
      "$MYOID",
      "$OTHERADMIN_OID"
    ],
    "provisioningState": "Succeeded",
    "softDeleteRetentionInDays": 90,
    "statusMessage": "The Managed HSM is provisioned and ready to use.",
    "tenantId": "..."
  },
  "resourceGroup": "$GROUPNAME",
  "sku": {
    "family": "B",
    "name": "Standard_B1"
  },
  "tags": {},
  "type": "Microsoft.KeyVault/managedHSMs"
}</code></pre>
            <p>Record the <b>hsmUri</b> property that is returned from the command above. You will need this shortly when configuring Keyless SSL on your VM.</p>
            <pre><code>$ HSMURI="https://keylesshsm.managedhsm.azure.net/"</code></pre>
            <p>Now that the HSM is provisioned, you must provide it with at least 3 RSA public keys. The HSM will encrypt the security domain with these keys and send it back to you, after which the HSM is ready to use.</p>
            <pre><code>$ openssl req -newkey rsa:2048 -nodes -keyout cert_0.key -x509 -days 365 -out cert_0.cer
$ openssl req -newkey rsa:2048 -nodes -keyout cert_1.key -x509 -days 365 -out cert_1.cer
$ openssl req -newkey rsa:2048 -nodes -keyout cert_2.key -x509 -days 365 -out cert_2.cer

$ az keyvault security-domain download --hsm-name $HSMNAME --sd-wrapping-keys ./cert_0.cer ./cert_1.cer ./cert_2.cer --sd-quorum 2 --security-domain-file $HSMNAME-SD.json</code></pre>
            <p>If you get a “Failed to connect to MSI” error, and you are using a cloud shell from the Azure Portal, run az login again, as this is a known issue.</p><p>Once you have your HSM provisioned, <a href="https://docs.microsoft.com/en-us/cli/azure/keyvault/key?view=azure-cli-latest#az_keyvault_key_import">add your private key</a> to the key vault.</p>
            <pre><code># key name and PEM path below are placeholders
$ az keyvault key import --hsm-name $HSMNAME --name keyless-key --pem-file ./privkey.pem</code></pre>
            <p>This will return a URI that you will later add to the Keyless YAML file to indicate where your private key is stored.</p><p>Now that you have your HSM provisioned and activated, you need to create a VM where you will deploy the Keyless daemon. For this example, we will create an Ubuntu Xenial VM in Azure. In the portal, go to the <b>Virtual machines</b> page and <b>Add</b> a VM. There, you can use the resource group that you created earlier for the HSM. For best results, choose the same region as the one for the HSM. Note the public IP of the VM; you will use it to connect to the server remotely.</p><p>Next, configure your VM as a key server. First, you need to add the <a href="https://pkg.cloudflare.com/">Cloudflare Package Repository</a>. Then, you will update your OS’ package listings and install the gokeyless server.</p>
            <pre><code>$ echo 'deb http://pkg.cloudflare.com/ xenial main' | sudo tee /etc/apt/sources.list.d/cloudflare-main.list
$ curl -C - https://pkg.cloudflare.com/pubkey.gpg | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install gokeyless</code></pre>
            <p>Then, update the gokeyless YAML file. There, you will add the hostname of your key server (this hostname should have a DNS record that points to your VM), the zone ID, and the Origin CA API key; the zone ID and Origin CA API key can be found in the Cloudflare dashboard. In addition, indicate the URI that points to your private key under private_key_stores.</p>
            <pre><code>$ vim /etc/keyless/gokeyless.yaml</code></pre>
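<p>As a reference, a minimal gokeyless.yaml might look like the sketch below. Every value shown is a placeholder; the key URI is the one returned by the az keyvault key import command earlier.</p>

```yaml
# /etc/keyless/gokeyless.yaml -- illustrative sketch; all values are placeholders
hostname: keyserver.example.com        # DNS record pointing at this VM
zone_id: your-cloudflare-zone-id       # from the Cloudflare dashboard
origin_ca_api_key: your-origin-ca-key  # from the Cloudflare dashboard
private_key_stores:
  - uri: https://keylesshsm.managedhsm.azure.net/keys/your-key-name
```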
            <p>Lastly, start the keyless server.</p>
            <pre><code>$ service gokeyless start</code></pre>
            <p>Go back to the Azure portal and open the required TCP ports for the Azure VM. Go to your VM → Networking → Add inbound port rule. Make sure you allow traffic on any source port and indicate port 2407 as the destination port.</p><p>Save the change, then go to the Cloudflare dashboard to upload your <a href="https://www.cloudflare.com/application-services/products/ssl/">SSL certificate</a>. You should see “Upload Keyless SSL Certificate” in the Edge Certificates section of the SSL/TLS tab. There, fill in the key server label, the hostname from the YAML file, and the key server port (2407 is the default), then paste the SSL certificate.</p><p>Next you’re ready to test! Run <code>curl -v https://zone.com</code> and check that the TLS handshake is successfully completed.</p>
    <div>
      <h3>Microsoft Azure Dedicated HSM</h3>
      <a href="#microsoft-azure-dedicated-hsm">
        
      </a>
    </div>
    <p>In addition to the Managed HSM offering that is now in public preview, Azure customers can configure Cloudflare’s edge to utilize keys stored in Microsoft’s <a href="https://azure.microsoft.com/en-us/services/azure-dedicated-hsm/">Dedicated HSM offering</a>, based on the SafeNet Luna Network HSM 7 Model A790 series.</p><p>Azure Dedicated HSM is validated against both FIPS 140-2 Level 3 and eIDAS Common Criteria EAL4+. After following the instructions to <a href="https://docs.microsoft.com/en-us/azure/dedicated-hsm/tutorial-deploy-hsm-powershell">deploy the HSM</a>, customers should follow the Azure specific Keyless SSL instructions <a href="https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/azure-dedicated-hsm">here</a>.</p>
    <div>
      <h3>Amazon Web Services (AWS) Cloud HSM</h3>
      <a href="#amazon-web-services-aws-cloud-hsm">
        
      </a>
    </div>
    <p><a href="https://aws.amazon.com/cloudhsm/">AWS CloudHSM</a> also provides FIPS 140-2 Level 3 validated HSMs to store your private keys.</p><p>The official <a href="https://registry.terraform.io/providers/hashicorp/aws/latest">AWS Terraform Provider</a> now includes support for the <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudhsm_v2_cluster">aws_cloudhsm_v2_cluster</a>, which is the version that Cloudflare supports. After <a href="https://docs.aws.amazon.com/cloudhsm/latest/userguide/getting-started.html">provisioning the AWS CloudHSM cluster</a>, customers should follow the AWS specific Keyless SSL instructions <a href="https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/aws-cloud-hsm">here</a>.</p>
    <div>
      <h3>Google Cloud HSM</h3>
      <a href="#google-cloud-hsm">
        
      </a>
    </div>
    <p><a href="https://cloud.google.com/kms/docs/hsm">Google Cloud HSM</a> uses GCP’s Cloud KMS as its front end, and allows hosting of keys in FIPS 140-2 Level 3 validated HSMs. Additionally, Google offers the ability to host your own HSM in Google provided space; it is recommended that you contact your GCP account representative for additional information about this option.</p><p>Once the key ring and key have been <a href="https://cloud.google.com/kms/docs/hsm">created</a> within the HSM, customers should follow the Google Cloud specific Keyless SSL instructions <a href="https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/google-cloud-hsm">here</a>. Additional details on using asymmetric keys with GCP KMS can be found <a href="https://cloud.google.com/kms/docs/encrypt-decrypt-rsa">here</a>.</p>
    <div>
      <h3>IBM Cloud</h3>
      <a href="#ibm-cloud">
        
      </a>
    </div>
    <p><a href="https://cloud.ibm.com/docs/hardware-security-modules/about.html#about-ibm-cloud-hsm">IBM Cloud HSM</a> 7.0 provides FIPS 140-2 Level 3 validated HSM capabilities. The offering is based on the SafeNet Luna A750 series.</p><p>After <a href="https://cloud.ibm.com/docs/hardware-security-modules?topic=hardware-security-modules-provisioning-ibm-cloud-hsm#provisioning-ibm-cloud-hs">provisioning the HSM</a>, customers should refer to the IBM specific Keyless SSL instructions <a href="https://developers.cloudflare.com/ssl/keyless-ssl/hardware-security-modules/ibm-cloud-hsm">here</a>.</p>
    <div>
      <h3>Getting help and providing feedback</h3>
      <a href="#getting-help-and-providing-feedback">
        
      </a>
    </div>
    <p>HSMs offer strong key protection capabilities, but can be complicated to set up and deploy. If you need assistance deploying the HSM on your cloud provider, we suggest that you start with their support channels.</p><p>However, if you need assistance configuring the HSM to work with Cloudflare’s edge, or would like to provide feedback on the process, you should reach out directly to your Solutions Engineering team who can put you in touch with Cloudflare’s Keyless SSL specialists.</p>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">7KkFM8MKL5KP7l2zYnquaY</guid>
            <dc:creator>Patrick R. Donahue</dc:creator>
            <dc:creator>Dina Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[Going Keyless Everywhere]]></title>
            <link>https://blog.cloudflare.com/going-keyless-everywhere/</link>
            <pubDate>Fri, 01 Nov 2019 13:01:00 GMT</pubDate>
            <description><![CDATA[ Time flies. The Heartbleed vulnerability was discovered just over five and a half years ago. Heartbleed became a household name not only because it was one of the first bugs with its own web page and logo, but because of what it revealed about the fragility of the Internet as a whole. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Time flies. The <a href="/tag/heartbleed/">Heartbleed</a> vulnerability was discovered just over five and a half years ago. Heartbleed became a household name not only because it was one of the first bugs with its own <a href="http://heartbleed.com/">web page</a> and <a href="http://heartbleed.com/heartbleed.png">logo</a>, but because of what it revealed about the fragility of the Internet as a whole. With Heartbleed, one tiny bug in a cryptography library exposed the personal data of the users of almost every website online.</p><p>Heartbleed is an example of an underappreciated class of bugs: remote memory disclosure vulnerabilities. High profile examples other than <a href="/tag/heartbleed/">Heartbleed</a> include <a href="/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/">Cloudbleed</a> and most recently <a href="https://arxiv.org/abs/1807.10535">NetSpectre</a>. These vulnerabilities allow attackers to extract secrets from servers by simply sending them specially-crafted packets. Cloudflare recently completed a multi-year project to make our platform more resilient against this category of bug.</p><p>For the last five years, the industry has been dealing with the consequences of the design that led to Heartbleed being so impactful. In this blog post we’ll dig into memory safety, and how we re-designed Cloudflare’s main product to protect private keys from the next Heartbleed.</p>
    <div>
      <h2>Memory Disclosure</h2>
      <a href="#memory-disclosure">
        
      </a>
    </div>
    <p>Perfect security is not possible for businesses with an online component. History has shown us that no matter how robust their security program, an unexpected exploit can leave a company exposed. One of the more famous recent incidents of this sort is Heartbleed, a vulnerability in a commonly used cryptography library called OpenSSL that exposed the inner details of millions of web servers to anyone with a connection to the Internet. Heartbleed made international news, caused millions of dollars of damage, and <a href="https://blog.malwarebytes.com/exploits-and-vulnerabilities/2019/09/everything-you-need-to-know-about-the-heartbleed-vulnerability/">still hasn’t been fully resolved</a>.</p><p>Typical web services only return data via well-defined public-facing interfaces called APIs. Clients don’t typically get to see what’s going on under the hood inside the server; that would be a huge privacy and security risk. Heartbleed broke that paradigm: it enabled anyone on the Internet to peek at the operating memory used by web servers, revealing privileged data usually not exposed via the API. Heartbleed could be used to extract data previously sent to the server, including passwords and credit card numbers. It could also reveal the inner workings and cryptographic secrets used inside the server, including TLS <a href="/the-results-of-the-cloudflare-challenge/">certificate private keys</a>.</p><p>Heartbleed let attackers peek behind the curtain, but not too far. Sensitive data could be extracted, but not everything on the server was at risk. For example, Heartbleed did not enable attackers to steal the content of databases held on the server. You may ask: why was some data at risk but not others? The reason has to do with how modern operating systems are built.</p>
    <div>
      <h2>A simplified view of process isolation</h2>
      <a href="#a-simplified-view-of-process-isolation">
        
      </a>
    </div>
    <p>Most modern operating systems are split into multiple layers. These layers are analogous to security clearance levels. So-called user-space applications (like your browser) typically live in a low-security layer called user space. They only have access to computing resources (memory, CPU, networking) if the lower, more credentialed layers let them.</p><p>User-space applications need resources to function. For example, they need memory to store their code and working memory to do computations. However, it would be risky to give an application direct access to the physical RAM of the computer they’re running on. Instead, the raw computing elements are restricted to a lower layer called the operating system kernel. The kernel runs specially designed software that safely manages these resources and mediates access to them for user-space applications.</p><p>When a new user-space application process is launched, the kernel gives it a virtual memory space. This virtual memory space acts like real memory to the application but is actually a safely guarded translation layer the kernel uses to protect the real memory. Each application’s virtual memory space is like a parallel universe dedicated to that application. This makes it impossible for one process to view or modify another’s memory; the other applications are simply not addressable.</p>
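<p>On Linux you can observe a process’s private virtual address space directly in /proc/self/maps; with address-space layout randomization, even two runs of the same program are usually laid out at different virtual addresses. A quick, Linux-specific shell sketch:</p>

```shell
# Print the first memory mapping of two separate shell processes.
# Each process has its own virtual address space; with ASLR enabled,
# the base addresses typically differ from run to run.
sh -c 'head -n 1 /proc/self/maps'
sh -c 'head -n 1 /proc/self/maps'
```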
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/46WR5JrLwEtc94VDZ7YZJK/8dd78b2efd297c87c430362e7883b4d3/image9-3.png" />
            
            </figure>
    <div>
      <h2>Heartbleed, Cloudbleed and the process boundary</h2>
      <a href="#heartbleed-cloudbleed-and-the-process-boundary">
        
      </a>
    </div>
    <p>Heartbleed was a vulnerability in the OpenSSL library, which was part of many web server applications. These web servers run in user space, like most common applications. The vulnerability caused the web server to return up to 64 kilobytes of its memory in response to a specially crafted inbound request.</p><p>Cloudbleed was also a memory disclosure bug, albeit one specific to Cloudflare, that got its name because it was so similar to Heartbleed. With Cloudbleed, the vulnerability was not in OpenSSL, but in a secondary web server application used for HTML parsing. When this code parsed a certain sequence of HTML, it ended up inserting some of its process memory into the web page it was serving.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7qlhsxgqsCJwmzNREBXRYx/05a1c85bf7a8109890bf8f621f61bc55/image2.png" />
            
            </figure><p>It’s important to note that both of these bugs occurred in applications running in user space, not kernel space. This means that the memory exposed by the bug was necessarily part of the virtual memory of the application. Even if the bug were to expose megabytes of data, it would only expose data specific to that application, not other applications on the system.</p><p>In order for a web server to serve traffic over the encrypted HTTPS protocol, it needs access to the certificate’s private key, which is typically kept in the application’s memory. These keys were exposed to the Internet by Heartbleed. The Cloudbleed vulnerability affected a different process, the HTML parser, which doesn’t do HTTPS and therefore doesn’t keep the private key in memory. This meant that HTTPS keys were safe, even if other data in the HTML parser’s memory space wasn’t.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6B3QATkOKQfQndbFuDifgd/6b77a1e6fc06fdfa386158113aac5369/image4.png" />
            
            </figure><p>The fact that the HTML parser and the web server were different applications saved us from having to revoke and re-issue our customers’ <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. However, if another memory disclosure vulnerability is discovered in the web server, these keys are again at risk.</p>
    <div>
      <h2>Moving keys out of Internet-facing processes</h2>
      <a href="#moving-keys-out-of-internet-facing-processes">
        
      </a>
    </div>
    <p>Not all web servers keep private keys in memory. In some deployments, private keys are held in a separate machine called a Hardware Security Module (HSM). HSMs are built to withstand physical intrusion and tampering, and are often certified to meet stringent compliance requirements. They can be bulky and expensive. Web servers designed to take advantage of keys in an HSM connect to them over a physical cable and communicate using a standardized interface called PKCS#11. This allows the web server to serve encrypted content while being physically separated from the private key.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1GBuW8GyHUgvg2VI7YKZpC/b5ca62effb7f36c3b7f3f8d80cc04f9e/image8-1.png" />
            
            </figure><p>At Cloudflare, we built our own way to separate a web server from a private key: <a href="/keyless-ssl-the-nitty-gritty-technical-details/">Keyless SSL</a>. Rather than keeping the keys in a separate physical machine connected to the server with a cable, the keys are kept in a key server operated by the customer in their own infrastructure (this can also be backed by an HSM).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/73zinlbc5lRdAJuhY7q2Jl/2ac31d4a9220ed6ca468b553f105a1d2/image10-4.png" />
            
            </figure><p>More recently, we launched <a href="/introducing-cloudflare-geo-key-manager/">Geo Key Manager</a>, a service that allows users to store private keys in only select Cloudflare locations. Connections to locations that do not have access to the private key use Keyless SSL with a key server hosted in a datacenter that does have access.</p><p>In both Keyless SSL and Geo Key Manager, private keys are not only not part of the web server’s memory space, they’re often not even in the same country! This extreme degree of separation is not necessary to protect against the next Heartbleed. All that is needed is for the web server and the key server to not be part of the same application. So that’s what we did. We call this Keyless Everywhere.</p>
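<p>In Go, this kind of separation maps naturally onto the standard library’s crypto.Signer interface: the web server holds only the public key plus a handle that forwards signing digests to the key server. The sketch below is illustrative and simulates the remote hop with a local struct; the real gokeyless protocol, transport, and error handling are omitted:</p>

```go
package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"io"
)

// keyServer stands in for the machine that actually holds the private key.
// In a real deployment this would be a separate network service.
type keyServer struct{ priv *ecdsa.PrivateKey }

func (ks *keyServer) sign(digest []byte) ([]byte, error) {
	return ecdsa.SignASN1(rand.Reader, ks.priv, digest)
}

// remoteSigner implements crypto.Signer without ever holding the private
// key material itself.
type remoteSigner struct {
	pub    crypto.PublicKey
	server *keyServer
}

func (s *remoteSigner) Public() crypto.PublicKey { return s.pub }

func (s *remoteSigner) Sign(_ io.Reader, digest []byte, _ crypto.SignerOpts) ([]byte, error) {
	// Only the digest crosses the process (or network) boundary.
	return s.server.sign(digest)
}

// SignAndVerify exercises the round trip: sign a handshake digest through
// the remote signer, then verify it with the public key.
func SignAndVerify() bool {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return false
	}
	signer := &remoteSigner{pub: &priv.PublicKey, server: &keyServer{priv: priv}}
	digest := sha256.Sum256([]byte("ClientHello...ServerHello..."))
	sig, err := signer.Sign(nil, digest[:], crypto.SHA256)
	return err == nil && ecdsa.VerifyASN1(&priv.PublicKey, digest[:], sig)
}

func main() {
	fmt.Println("signature verified:", SignAndVerify())
}
```

<p>Because tls.Certificate accepts any crypto.Signer as its private key, a TLS server built this way never has the raw key material in its own memory space.</p>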
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/jFMg99U9Aiq8yNh43fx1l/d61671a3bb94f6fc415ad5aef8b4f808/image7-2.png" />
            
            </figure>
    <div>
      <h2>Keyless SSL is coming from inside the house</h2>
      <a href="#keyless-ssl-is-coming-from-inside-the-house">
        
      </a>
    </div>
    <p>Repurposing Keyless SSL for Cloudflare-held private keys was easy to conceptualize, but the path from idea to production wasn’t so straightforward. The core functionality of Keyless SSL comes from the open source <a href="https://github.com/cloudflare/gokeyless">gokeyless</a>, which customers run on their infrastructure, but internally we use it as a library and have replaced the main package with an implementation suited to our requirements (we’ve creatively dubbed it gokeyless-internal).</p><p>As with all major architecture changes, it’s prudent to start by testing the model on something new and low-risk. In our case, the test bed was our experimental <a href="/introducing-tls-1-3/">TLS 1.3</a> implementation. In order to quickly iterate through draft versions of the TLS specification and push releases without affecting the majority of Cloudflare customers, we <a href="/introducing-tls-1-3/">re-wrote our custom nginx web server in Go</a> and deployed it in parallel to our existing infrastructure. This server was designed from the start to never hold private keys and to rely solely on gokeyless-internal. At that time there was only a small amount of TLS 1.3 traffic, all of it coming from beta versions of browsers, which allowed us to work through the initial kinks of gokeyless-internal without exposing the majority of visitors to security risks or outages.</p><p>The first step towards making TLS 1.3 fully keyless was identifying and implementing the new functionality we needed to add to gokeyless-internal. Keyless SSL was designed to run on customer infrastructure, with the expectation of supporting only a handful of private keys. But our edge must simultaneously support millions of private keys, so we implemented the same <a href="/universal-ssl-how-it-scales/">lazy loading</a> logic we use in our web server, nginx. 
Furthermore, a typical customer deployment would put key servers behind a network load balancer, so they could be taken out of service for upgrades or other maintenance. Contrast this with our edge, where it’s important to maximize our resources by serving traffic during software upgrades. This problem is solved by the excellent <a href="/graceful-upgrades-in-go/">tableflip package</a> we use elsewhere at Cloudflare.</p><p>The next project to go Keyless was <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a>, which launched with default support for gokeyless-internal. With these small victories in hand, we had the confidence necessary to attempt the big challenge, which was porting our existing nginx infrastructure to a fully keyless model. After implementing the new functionality, and being satisfied with our integration tests, all that was left was to turn it on in production and call it a day, right? Anyone with experience running large distributed systems knows how far "working in dev" is from "done," and this story is no different. Thankfully we were anticipating problems, and built a fallback into nginx to complete the handshake itself if any problems were encountered on the gokeyless-internal path. This allowed us to expose gokeyless-internal to production traffic without risking downtime in the event that our reimplementation of the nginx logic was not 100% bug-free.</p>
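<p>The lazy-loading logic can be sketched as an on-demand cache in front of the (slow) bulk key store: a key is fetched the first time a handshake needs it and reused afterwards. All names here are illustrative, not the actual gokeyless-internal code:</p>

```go
package main

import (
	"fmt"
	"sync"
)

// loads counts trips to the backing store, to show each key is fetched once.
var loads int

// fetchFromStore stands in for the expensive lookup against bulk storage.
func fetchFromStore(label string) string {
	loads++
	return "key-material-for-" + label
}

// lazyKeyStore loads each private key only when a handshake first needs it,
// instead of loading millions of keys at startup.
type lazyKeyStore struct {
	mu   sync.Mutex
	keys map[string]string
}

func newLazyKeyStore() *lazyKeyStore {
	return &lazyKeyStore{keys: make(map[string]string)}
}

func (s *lazyKeyStore) Get(label string) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	if k, ok := s.keys[label]; ok {
		return k // cache hit: no trip to storage
	}
	k := fetchFromStore(label)
	s.keys[label] = k
	return k
}

func main() {
	store := newLazyKeyStore()
	store.Get("example.com")
	store.Get("example.com") // second lookup is served from the cache
	fmt.Println("store loads:", loads) // prints: store loads: 1
}
```

<p>A production version would also need eviction and negative caching, but the shape is the same: pay the storage cost per key on first use, not for the whole key corpus at startup.</p>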
    <div>
      <h2>When rolling back the code doesn’t roll back the problem</h2>
      <a href="#when-rolling-back-the-code-doesnt-roll-back-the-problem">
        
      </a>
    </div>
    <p>Our deployment plan was to enable Keyless Everywhere, find the most common causes of fallbacks, and then fix them. We could then repeat this process until all sources of fallbacks had been eliminated, after which we could remove access to private keys (and therefore the fallback) from nginx. One of the early causes of fallbacks was gokeyless-internal returning ErrKeyNotFound, indicating that it couldn’t find the requested private key in storage. This should not have been possible, since nginx only makes a request to gokeyless-internal after first finding the certificate and key pair in storage, and we always write the private key and certificate together. It turned out that in addition to returning the error for the intended case of the key truly not found, we were also returning it when transient errors like timeouts were encountered. To resolve this, we updated those transient error conditions to return ErrInternal, and deployed to our <a href="https://en.wikipedia.org/wiki/Sentinel_species">canary datacenters</a>. Strangely, we found that a handful of instances in a single datacenter started encountering high rates of fallbacks, and the logs from nginx indicated it was due to a timeout between nginx and gokeyless-internal. The timeouts didn’t occur right away, but once a system started logging some timeouts it never stopped. Even after we rolled back the release, the fallbacks continued with the old version of the software! Furthermore, while nginx was complaining about timeouts, gokeyless-internal seemed perfectly healthy and was reporting reasonable performance metrics (sub-millisecond median request latency).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xlp86WlHoUaRjiXcdifcP/c06dad2ff1f58ff555470d8809d6bba8/image1-1.png" />
            
            </figure><p>To debug the issue, we added detailed logging to both nginx and gokeyless, and followed the chain of events backwards once timeouts were encountered.</p>
            <pre><code>➜ ~ grep 'timed out' nginx.log | grep Keyless | head -5
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015157 Keyless SSL request/response timed out while reading Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015231 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015271 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015280 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1
2018-07-25T05:30:50.000 29m41 2018/07/25 05:30:50 [error] 4525#0: *1015289 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver: 127.0.0.1</code></pre>
            <p>You can see that the first request to log a timeout had id 1015157. It’s also interesting that the first log line reads "timed out while reading," while all the others read "timed out while waiting," and this latter message is the one that repeats forever. Here is the matching request in the gokeyless log:</p>
            <pre><code>➜ ~ grep 'id=1015157 ' gokeyless.log | head -1
2018-07-25T05:30:39.000 29m41 2018/07/25 05:30:39 [DEBUG] connection 127.0.0.1:30520: worker=ecdsa-29 opcode=OpECDSASignSHA256 id=1015157 sni=announce.php?info_hash=%a8%9e%9dc%cc%3b1%c8%23%e4%93%21r%0f%92mc%0c%15%89&amp;peer_id=-ut353s-%ce%ad%5e%b1%99%06%24e%d5d%9a%08&amp;port=42596&amp;uploaded=65536&amp;downloaded=0&amp;left=0&amp;corrupt=0&amp;key=04a184b7&amp;event=started&amp;numwant=200&amp;compact=1&amp;no_peer_id=1 ip=104.20.33.147</code></pre>
            <p>Aha! That SNI value is clearly invalid (SNIs are like Host headers, i.e. they are domains, not URL paths), and it’s also quite long. Our storage system indexes certificates based on two indices: which SNI they correspond to, and which IP addresses they correspond to (for older clients that don’t support SNI). Our storage interface uses the memcached protocol, and the client library that gokeyless-internal uses rejects requests for keys longer than 250 characters (memcached’s maximum key length), whereas the nginx logic is to simply ignore the invalid SNI and treat the request as if it only had an IP. The change in our new release had shifted this condition from <code>ErrKeyNotFound</code> to <code>ErrInternal</code>, which triggered cascading problems in nginx: the “timeouts” it encountered were actually the result of nginx throwing away all in-flight requests multiplexed on a connection whenever any single one of them returned <code>ErrInternal</code>. These requests were retried, but once this condition triggered, nginx became overloaded by the retried requests plus the continuous stream of new requests arriving with bad SNI, and was unable to recover. This explains why rolling back gokeyless-internal didn’t fix the problem.</p><p>This discovery finally brought our attention to nginx, which thus far had escaped blame since it had been working reliably with customer key servers for years. 
However, communicating over localhost with a multitenant key server is fundamentally different from reaching out over the public Internet to a customer’s key server, and we had to make the following changes:</p><ul><li><p>Instead of the long connection timeout and relatively short response timeout appropriate for customer key servers, a localhost key server calls for extremely short connection timeouts and longer request timeouts.</p></li><li><p>Similarly, it’s reasonable to retry (with backoff) if we time out waiting on a customer key server’s response, since we can’t trust the network. But over localhost, a timeout would only occur if gokeyless-internal were overloaded and the request still queued for processing. In this case a retry would only increase the total work requested of gokeyless-internal, making the situation worse.</p></li><li><p>Most significantly, nginx must not throw away all requests multiplexed on a connection if any single one of them encounters an error, since a single connection no longer represents a single customer.</p></li></ul>
    <div>
      <h2>Implementations matter</h2>
      <a href="#implementations-matter">
        
      </a>
    </div>
    <p>CPU at the edge is one of our most precious assets, and it’s closely guarded by our performance team (aka CPU police). Soon after turning on Keyless Everywhere in one of our canary datacenters, they noticed gokeyless using ~50% of a core per instance. We were shifting the sign operations from nginx to gokeyless, so of course it would be using more CPU now. But nginx should have seen a commensurate reduction in CPU usage, right?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UKCYIeE5MqU3j7jG8GFiy/38fbb7e9842218b75153a512d908ae31/image5.png" />
            
            </figure><p>Wrong. Elliptic curve operations are very fast in Go, but <a href="https://github.com/golang/go/issues/21525">Go’s RSA operations are much slower than their BoringSSL counterparts</a>.</p><p>Although Go 1.11 includes optimizations for RSA math operations, we needed more speed. Well-tuned assembly code is required to match the performance of BoringSSL, so Armando Faz from our Crypto team helped claw back some of the lost CPU by reimplementing parts of the <a href="https://golang.org/pkg/math/big/">math/big</a> package with platform-dependent assembly in an internal fork of Go. Go’s recent <a href="https://github.com/golang/go/wiki/AssemblyPolicy">assembly policy</a> prefers portable Go code over assembly, so these optimizations were not upstreamed. There is still room for more optimization, which is why we’re still evaluating a move to cgo + BoringSSL for sign operations, despite <a href="https://dave.cheney.net/2016/01/18/cgo-is-not-go">cgo’s many downsides</a>.</p>
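<p>The cost gap is easy to observe with Go’s standard crypto packages. This hypothetical micro-benchmark times ECDSA P-256 signing against RSA-2048 signing; on most hardware the RSA operation is many times slower per signature, which is why the assembly work above mattered:</p>

```go
package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
	"time"
)

// timeSigns measures n signing operations and returns the total duration.
func timeSigns(n int, sign func() error) (time.Duration, error) {
	start := time.Now()
	for i := 0; i < n; i++ {
		if err := sign(); err != nil {
			return 0, err
		}
	}
	return time.Since(start), nil
}

func main() {
	digest := sha256.Sum256([]byte("tls handshake transcript"))

	ecKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	rsaKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	const n = 20
	ecTime, _ := timeSigns(n, func() error {
		_, err := ecdsa.SignASN1(rand.Reader, ecKey, digest[:])
		return err
	})
	rsaTime, _ := timeSigns(n, func() error {
		_, err := rsa.SignPKCS1v15(rand.Reader, rsaKey, crypto.SHA256, digest[:])
		return err
	})

	fmt.Printf("P-256 ECDSA: %v per sign\n", ecTime/n)
	fmt.Printf("RSA-2048:    %v per sign\n", rsaTime/n)
}
```

<p>Absolute numbers vary by Go version and CPU, so treat the output as a rough comparison rather than a benchmark result.</p>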
    <div>
      <h2>Changing our tooling</h2>
      <a href="#changing-our-tooling">
        
      </a>
    </div>
    <p>Process isolation is a powerful tool for protecting secrets in memory. Our move to Keyless Everywhere demonstrates that this is not a simple tool to leverage. Re-architecting an existing system such as nginx to use process isolation to protect secrets was time-consuming and difficult. Another approach to memory safety is to use a memory-safe language such as Rust.</p><p>Rust was originally developed by Mozilla but is starting <a href="https://www.infoq.com/articles/programming-language-trends-2019/">to be used much more widely</a>. The main advantage that Rust has over C/C++ is that it has memory safety features without a garbage collector.</p><p>Re-writing an existing application in a new language such as Rust is a daunting task. That said, many new Cloudflare features, from the powerful <a href="/announcing-firewall-rules/">Firewall Rules</a> feature to our <a href="/announcing-warp-plus/">1.1.1.1 with WARP</a> app, have been written in Rust to take advantage of its powerful memory-safety properties. We’re really happy with Rust so far and plan on using it even more in the future.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>The harrowing aftermath of Heartbleed taught the industry a lesson that should have been obvious in retrospect: keeping important secrets in applications that can be accessed remotely via the Internet is a risky security practice. In the following years, with a lot of work, we leveraged process separation and Keyless SSL to ensure that the next Heartbleed wouldn’t put customer keys at risk.</p><p>However, this is not the end of the road. Memory disclosure vulnerabilities such as <a href="https://arxiv.org/abs/1807.10535">NetSpectre</a> have recently been discovered that can bypass application process boundaries, so we continue to actively explore new ways to keep keys secure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/41IkUo52ZjxvkXCjUsoKGE/280a2f7580e8d374abffe61b61615bff/image3.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">4l12BK6lPNLLMUIUI3kNN</guid>
            <dc:creator>Nick Sullivan</dc:creator>
            <dc:creator>Chris Broglie</dc:creator>
        </item>
        <item>
            <title><![CDATA[Delegated Credentials for TLS]]></title>
            <link>https://blog.cloudflare.com/keyless-delegation/</link>
            <pubDate>Fri, 01 Nov 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing support for a new cryptographic protocol making it possible to deploy encrypted services while still maintaining performance and control of private keys: Delegated Credentials for TLS.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today we’re happy to announce support for a new cryptographic protocol that helps make it possible to deploy encrypted services in a global network while still maintaining fast performance and tight control of private keys: Delegated Credentials for TLS. We have been working with partners from Facebook, Mozilla, and the broader IETF community to define this emerging standard. We’re excited to share the gory details today in this blog post.</p><p>Also, be sure to check out the blog posts on the topic by our friends at <a href="https://engineering.fb.com/security/delegated-credentials-improving-the-security-of-tls-certificates/">Facebook</a> and <a href="https://blog.mozilla.org/security/2019/11/01/validating-delegated-credentials-for-tls-in-firefox/">Mozilla</a>!</p>
    <div>
      <h2>Deploying TLS globally</h2>
      <a href="#deploying-tls-globally">
        
      </a>
    </div>
    <p>Many of the technical problems we face at Cloudflare are widely shared problems across the Internet industry. As gratifying as it can be to solve a problem for ourselves and our customers, it can be even more gratifying to solve a problem for the entire Internet. For the past three years, we have been working with peers in the industry to solve a specific shared problem in the TLS infrastructure space: How do you terminate TLS connections while storing keys remotely and maintaining performance and availability? Today we’re announcing that Cloudflare now supports Delegated Credentials, the result of this work.</p><p>Cloudflare’s TLS/SSL features are among the top reasons customers use our service. Configuring TLS is hard to do without internal expertise. By automating TLS, web site and web service operators gain the latest TLS features and the most secure configurations by default. It also reduces the risk of outages or bad press due to misconfigured or insecure encryption settings. Customers also gain early access to unique features like <a href="/introducing-tls-1-3/">TLS 1.3</a>, <a href="/towards-post-quantum-cryptography-in-tls/">post-quantum cryptography</a>, and <a href="/high-reliability-ocsp-stapling/">OCSP stapling</a> as they become available.</p><p>Unfortunately, for web services to authorize a service to terminate TLS for them, they have to trust the service with their private keys, which demands a high level of trust. For services with a global footprint, there is an additional level of nuance. 
They may operate multiple data centers located in places with varying levels of physical security, and each of these needs to be trusted to terminate TLS.</p><p>To tackle these problems of trust, Cloudflare has invested in two technologies: <a href="/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/">Keyless SSL</a>, which allows customers to use Cloudflare without sharing their private key with Cloudflare; and <a href="/introducing-cloudflare-geo-key-manager/">Geo Key Manager</a>, which allows customers to choose the geographical locations in which Cloudflare should keep their keys. Both of these technologies can be deployed without any changes to browsers or other clients. They also come with some downsides in the form of availability and performance degradation.</p><p>Keyless SSL introduces extra latency at the start of a connection. In order for a server without access to a private key to establish a connection with a client, that server needs to reach out to a key server, or a remote point of presence, and ask it to perform a private key operation. This not only adds latency to the connection, causing content to load more slowly, but it also introduces some troublesome operational constraints for the customer. Specifically, the server with access to the key needs to be highly available or the connection can fail. Sites often use Cloudflare to improve their site’s availability, so having to run a high-availability key server is an unwelcome requirement.</p>
    <div>
      <h2>Turning a pull into a push</h2>
      <a href="#turning-a-pull-into-a-push">
        
      </a>
    </div>
    <p>The reason services like Keyless SSL that rely on remote keys are so brittle is their architecture: they are pull-based rather than push-based. Every time a client attempts a handshake with a server that doesn’t have the key, the server needs to pull the authorization from the key server. An alternative way to build this sort of system is to periodically push a short-lived authorization key to the server and use that for handshakes. Switching from a pull-based model to a push-based model eliminates the additional latency, but it comes with additional requirements, including the need to change the client.</p><p>Enter the new TLS feature of <a href="https://tools.ietf.org/html/draft-ietf-tls-subcerts-04">Delegated Credentials</a> (DCs). A delegated credential is a short-lived key that the certificate’s owner has delegated for use in TLS. It works like a power of attorney: your server authorizes our server to terminate TLS for a limited time. When a browser that supports this protocol connects to our edge servers, we can show it this “power of attorney” instead of needing to reach back to a customer’s server to have it authorize the TLS connection. This reduces latency and improves performance and reliability.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wNKw1iaNaUHBESq06OXIK/fbbba3ca4614c398480a03e7ce00fc1b/pull-diagram-1.jpg" />
            
            </figure><p>The pull model</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qfW3dQvRSnHowxNEW2lSf/d457a3d0b7c52f9c3523c7f2d73cd94a/push-diagram.jpg" />
            
            </figure><p>The push model</p><p>A fresh delegated credential can be created and pushed out to TLS servers long before the previous credential expires. Momentary blips in availability will not lead to broken handshakes for clients that support delegated credentials. Furthermore, a Delegated Credentials-enabled TLS connection is just as fast as a standard TLS connection: there’s no need to connect to the key server for every handshake. This removes the main drawback of Keyless SSL for DC-enabled clients.</p><p>Delegated credentials are intended to be an Internet Standard RFC that anyone can implement and use, not a replacement for Keyless SSL. Since browsers will need to be updated to support the standard, proprietary mechanisms like Keyless SSL and Geo Key Manager will continue to be useful. Delegated credentials aren’t just useful in our context, which is why we’ve developed the standard openly and with contributions from across industry and academia. Facebook has integrated them into their own TLS implementation, and you can read more about how they view the security benefits <a href="https://engineering.fb.com/security/delegated-credentials/">here</a>. When it comes to improving the security of the Internet, we’re all on the same team.</p><p><i>"We believe delegated credentials provide an effective way to boost security by reducing certificate lifetimes without sacrificing reliability. This will soon become an Internet standard and we hope others in the industry adopt delegated credentials to help make the Internet ecosystem more secure."</i></p><p></p><p>— <b>Subodh Iyengar</b>, software engineer at Facebook</p>
    <div>
      <h2>Extensibility beyond the PKI</h2>
      <a href="#extensibility-beyond-the-pki">
        
      </a>
    </div>
    <p>At Cloudflare, we’re interested in pushing the state of the art forward by experimenting with new algorithms. In TLS, there are three main areas of experimentation: ciphers, key exchange algorithms, and authentication algorithms. Ciphers and key exchange algorithms are only dependent on two parties: the client and the server. This freedom allows us to deploy exciting new choices like <a href="/do-the-chacha-better-mobile-performance-with-cryptography/">ChaCha20-Poly1305</a> or <a href="/towards-post-quantum-cryptography-in-tls/">post-quantum key agreement</a> in lockstep with browsers. On the other hand, the authentication algorithms used in TLS are dependent on certificates, which introduces certificate authorities and the entire public key infrastructure into the mix.</p><p>Unfortunately, the public key infrastructure is very conservative in its choice of algorithms, making it harder to adopt newer cryptography for authentication algorithms in TLS. For instance, <a href="https://en.wikipedia.org/wiki/EdDSA">EdDSA</a>, a highly-regarded signature scheme, is not supported by certificate authorities, and <a href="https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/">root programs limit the certificates that will be signed.</a> With the emergence of quantum computing, experimenting with new algorithms is essential to determine which solutions are deployable and functional on the Internet.</p><p>Since delegated credentials introduce the ability to use new authentication key types without requiring changes to certificates themselves, this opens up a new area of experimentation. Delegated credentials can be used to provide a level of flexibility in the transition to post-quantum cryptography, by enabling new algorithms and modes of operation to coexist with the existing PKI infrastructure. It also enables tiny victories, like the ability to use smaller, faster Ed25519 signatures in TLS.</p>
    <div>
      <h2>Inside DCs</h2>
      <a href="#inside-dcs">
        
      </a>
    </div>
    <p>A delegated credential contains a public key and an expiry time. This bundle is signed with the certificate’s private key, and the signature also covers the certificate itself, binding the delegated credential to the certificate for which it is acting as “power of attorney”. A supporting client indicates its support for delegated credentials by including an extension in its Client Hello.</p><p>A server that supports delegated credentials composes the TLS Certificate and Certificate Verify messages as usual, but instead of signing with the certificate’s private key, it includes the certificate along with the DC and signs with the DC’s private key. Therefore, the certificate’s private key only needs to be used to sign the DC.</p><p>Certificates used for signing delegated credentials require a special X.509 certificate extension (currently only available at <a href="https://docs.digicert.com/manage-certificates/certificate-profile-options/">DigiCert</a>). This requirement exists to avoid breaking assumptions people may have about the security impact of temporary access to their keys, particularly in cases involving HSMs and the still unfixed <a href="/rfc-8446-aka-tls-1-3/">Bleichenbacher oracles</a> in older TLS versions. Temporary access to a key can be used to sign lots of delegated credentials that start far in the future, so support was made opt-in. Early versions of QUIC had <a href="https://www.nds.ruhr-uni-bochum.de/media/nds/veroeffentlichungen/2015/08/21/Tls13QuicAttacks.pdf">similar issues</a>, and ended up adopting TLS to fix them. Protocol evolution on the Internet requires working well with already existing protocols and their flaws.</p>
    <div>
      <h2>Delegated Credentials at Cloudflare and Beyond</h2>
      <a href="#delegated-credentials-at-cloudflare-and-beyond">
        
      </a>
    </div>
    <p>Currently we use delegated credentials as a performance optimization for Geo Key Manager and Keyless SSL. Customers can update their certificates to include the special extension for delegated credentials, and we will automatically create delegated credentials and distribute them to the edge through Keyless SSL or Geo Key Manager. For more information, see the <a href="https://developers.cloudflare.com/ssl/keyless-ssl/dc/">documentation</a>. It also enables us to be more conservative about where we keep keys for customers, improving our security posture.</p><p>Delegated credentials would be useless if they weren’t also supported by browsers and other HTTP clients. Christopher Patton, a former intern at Cloudflare, implemented support in Firefox and its underlying NSS security library. <a href="https://blog.mozilla.org/security/2019/11/01/validating-delegated-credentials-for-tls-in-firefox/">This feature is now in the Nightly versions of Firefox</a>. You can turn it on by activating the configuration option security.tls.enable_delegated_credentials at about:config. Studies are ongoing on how effective this will be in a wider deployment. There is also support for delegated credentials in BoringSSL.</p><p><i>"At Mozilla we welcome ideas that help to make the Web PKI more robust. The Delegated Credentials feature can help to provide secure and performant TLS connections for our users, and we're happy to work with Cloudflare to help validate this feature."</i></p><p></p><p>— <b>Thyla van der Merwe</b>, Cryptography Engineering Manager at Mozilla</p><p>One open issue is the question of client clock accuracy. Until we have a wide-scale study we won’t know how many connections using delegated credentials will break because of the 24-hour validity limit that is imposed. 
Some clients, in particular mobile clients, may have inaccurately set clocks, the root cause of one third of all <a href="https://www.cloudflare.com/learning/ssl/common-errors/">certificate errors</a> in Chrome. Part of the way that we’re aiming to solve this problem is through standardizing and improving <a href="/roughtime/">Roughtime</a>, so web browsers and other services that need to validate certificates can do so independent of the client clock.</p><p>Cloudflare’s global scale means that we see connections from every corner of the world, and from many different kinds of connection and device. That reach enables us to find rare problems with the deployability of protocols. For example, our <a href="/why-tls-1-3-isnt-in-browsers-yet/">early deployment</a> helped inform the development of the TLS 1.3 standard. As we enable developing protocols like delegated credentials, we learn about obstacles that inform and affect their future development.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>As new protocols emerge, we'll continue to play a role in their development and bring their benefits to our customers. Today’s announcement of a technology that overcomes some limitations of Keyless SSL is just one example of how Cloudflare takes part in improving the Internet not just for our customers, but for everyone. During the standardization process of turning the draft into an RFC, we’ll continue to maintain our implementation and come up with new ways to apply delegated credentials.</p> ]]></content:encoded>
            <category><![CDATA[Crypto Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">21MHSnISq1AaWWdB5lruxJ</guid>
            <dc:creator>Nick Sullivan</dc:creator>
            <dc:creator>Watson Ladd</dc:creator>
        </item>
        <item>
            <title><![CDATA[Geo Key Manager: How It Works]]></title>
            <link>https://blog.cloudflare.com/geo-key-manager-how-it-works/</link>
            <pubDate>Tue, 26 Sep 2017 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we announced Geo Key Manager, a feature that gives customers control over where their private keys are stored with Cloudflare. This builds on a previous Cloudflare innovation called Keyless SSL and a novel cryptographic access control mechanism. ]]></description>
            <content:encoded><![CDATA[ <p>Today we announced <a href="/introducing-cloudflare-geo-key-manager">Geo Key Manager</a>, a feature that gives customers unprecedented control over where their private keys are stored when uploaded to Cloudflare. This feature builds on a previous Cloudflare innovation called Keyless SSL and a novel cryptographic access control mechanism based on both identity-based encryption and broadcast encryption. In this post we’ll explain the technical details of this feature, the first of its kind in the industry, and how Cloudflare leveraged its existing network and technologies to build it.</p>
    <div>
      <h3>Keys in different area codes</h3>
      <a href="#keys-in-different-area-codes">
        
      </a>
    </div>
    <p>Cloudflare launched <a href="/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/">Keyless SSL</a> three years ago to wide acclaim. With Keyless SSL, customers are able to take advantage of the full benefits of Cloudflare’s network while keeping their HTTPS private keys inside their own infrastructure. Keyless SSL has been popular with customers in industries with regulations around the control of access to private keys, such as the financial industry. Keyless SSL adoption has been slower outside these regulated industries, partly because it requires customers to run custom software (the key server) inside their infrastructure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/121VptrQkTUYUPss5HLIWs/1f01586195f9611eb1490f98166bd8ef/image5.png" />
            
            </figure><p></p><p>Standard Configuration</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/W3u9jRPpuvciPzndNJYTE/f99f3e97afb98b68c17447bde600b805/image7.png" />
            
            </figure><p></p><p>Keyless SSL</p><p>One of the motivating use cases for Keyless SSL was the expectation that customers may not trust a third party like Cloudflare with their private keys. We found that this concern is actually very uncommon; most customers do trust Cloudflare with their private keys. But we have found that sometimes customers would like a way to reduce the risk associated with having their keys in some physical locations around the world.</p><p>This is where Geo Key Manager is useful: it lets customers limit the exposure of their private keys to certain locations. It’s similar to Keyless SSL, but instead of having to run a key server inside your infrastructure, Cloudflare hosts key servers in the locations of your choosing. This reduces the complexity of deploying Keyless SSL and gives the control that people care about. Geo Key Manager “just works” with no software required.</p>
    <div>
      <h3>A Keyless SSL Refresher</h3>
      <a href="#a-keyless-ssl-refresher">
        
      </a>
    </div>
    <p>Keyless SSL was developed at Cloudflare to make HTTPS more secure. Content served over HTTPS is both encrypted and authenticated so that eavesdroppers or attackers can’t read or modify it. HTTPS makes use of a protocol called Transport Layer Security (TLS) to keep this data safe.</p><p>TLS has two phases: a handshake phase and a data exchange phase. In the handshake phase, cryptographic keys are exchanged and a shared secret is established. As part of this exchange, the server proves its identity to the client using a certificate and a private key. In the data exchange phase, shared keys from the handshake are used to encrypt and authenticate the data.</p><p>A TLS handshake can be naturally broken down into two components:</p><ol><li><p>The private key operation</p></li><li><p>Everything else</p></li></ol><p>The private key operation is critical to the TLS handshake: it allows the server to prove that it owns a given certificate. Without this private key operation, the client has no way of knowing whether or not the server is authentic. In Keyless SSL, the private key operation is separated from the rest of the handshake. In a Keyless SSL handshake, instead of performing the private key operation locally, the server makes a remote procedure call to another server (called the key server) that controls the private key.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2qVkp63Sxh2oKSbGYq5S4t/57ce236a21aa8623fc7a7a7c3e9a1030/image8.gif" />
            
            </figure><p>Keyless SSL lets you logically separate concerns so that a compromise of the web server does not result in a compromise of the private key. This additional security comes at a cost. The remote procedure call from the server to the key server can add latency to the handshake, slowing down connection establishment. The additional latency cost corresponds to the round-trip time from the server to the key server, which can be as much as a second if the key server is on the other side of the world.</p><p>Luckily, this latency cost only applies to the first time you connect to a server. Once the handshake is complete, the key server is not involved. Furthermore, if you reconnect to a site you don’t have to pay the latency cost either because resuming a connection with <a href="/tls-session-resumption-full-speed-and-secure/">TLS Session Resumption</a> doesn’t require the private key.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eRBZiNmYoLOrG7EKJiBtC/589586580fa3559e67aa222047ca7908/geo-key-frame_4x.gif" />
            
            </figure><p>Latency is only added for the initial connection.</p><p>For a deep-dive on this topic, read <a href="/keyless-ssl-the-nitty-gritty-technical-details/">this post I wrote</a> in 2014.</p><p>With Keyless SSL as a basic building block, we set out to design Geo Key Manager.</p>
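<p>The division of labor in Keyless SSL can be sketched in a few lines of Python. This is not the real Keyless protocol (whose wire format lives in the open-source gokeyless project); the hash stands in for an RSA or ECDSA private key operation, and a function call stands in for the network round trip:</p>

```python
import hashlib

def key_server_private_key_operation(private_key: bytes, digest: bytes) -> bytes:
    # Runs on the key server: the only component that ever touches the
    # private key. Stand-in for an RSA decryption or ECDSA signature.
    return hashlib.sha256(private_key + digest).digest()

def edge_handshake(private_key_rpc, handshake_transcript: bytes) -> dict:
    # Runs on the edge server: everything else in the handshake. The
    # private key operation is delegated through a remote procedure call.
    digest = hashlib.sha256(handshake_transcript).digest()
    proof = private_key_rpc(digest)
    return {"digest": digest, "proof": proof}

# Wiring the two together; in production the lambda would be a network call.
private_key = b"held-only-by-the-key-server"
result = edge_handshake(
    lambda d: key_server_private_key_operation(private_key, d),
    b"client and server hello messages...")
```

<p>The design point is that a compromise of the edge server yields handshake transcripts and signatures, but never the key itself.</p>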
    <div>
      <h3>Geo Key Manager: designing the architecture</h3>
      <a href="#geo-key-manager-designing-the-architecture">
        
      </a>
    </div>
    <p>Cloudflare has a truly international customer base and we’ve learnt that customers around the world have different regulatory and statutory requirements, and different risk profiles, concerning the placement of their private keys. There’s no one-size-fits-all solution across the planet. With that philosophy in mind, we set out to design a very flexible system for deciding where keys can be kept.</p><p>The first problem to solve was access control. How could we limit the number of locations that a private key is sent to? Cloudflare has a database that takes user settings and distributes them to all edge locations <a href="/kyoto-tycoon-secure-replication/">securely, and very fast</a>. However, this system is optimized to synchronize an entire database worldwide; modifying it to selectively distribute keys to different locations was too big of an architectural change for Cloudflare. Instead, we decided to explore the idea of a cryptographic access control (CAC) system.</p><p>In a CAC system, data is encrypted and distributed everywhere. A piece of data can only be accessed if you have the decryption key. By only sending decryption keys to certain locations, we can effectively limit who has access to data. For example, we could encrypt customer private keys once—right after they’re uploaded—and send the encrypted keys to every location using the existing database replication system.</p><p>We’ve experimented with CAC systems before, most notably with the <a href="https://github.com/cloudflare/redoctober">Red October</a> project. With Red October, you can encrypt a piece of data so that multiple people are required to decrypt it. This is how <a href="/pal-a-container-identity-bootstrapping-tool/">PAL</a>, our Docker secrets solution, works. However, the Red October system is ill-suited to Geo Key Manager for a number of reasons:</p><ol><li><p>The more locations you encrypt to, the larger the encrypted key gets</p></li><li><p>There is no way to encrypt a key to “everywhere except a given location” without having to re-encrypt when new locations are added (which we do <a href="/portland/">frequently</a>)</p></li><li><p>There has to be a secure registry of each datacenter’s public key</p></li></ol><p>For Geo Key Manager we wanted something that provides users with granular control and can scale with Cloudflare’s growing network. We came up with the following requirements:</p><ul><li><p>Users can select from a set of pre-defined regions they would like their keys to be in (E.U., U.S., Highest Security, etc.)</p></li><li><p>Users can add specific datacenter locations outside of the chosen regions (e.g. all U.S. locations plus Toronto)</p></li><li><p>Users can choose selected datacenter locations inside their chosen regions to not send keys to (e.g. all E.U. locations except London)</p></li><li><p>It should be fast to decrypt a key and easy to store it, no matter how complicated the configuration</p></li></ul><p>Building a system to satisfy these requirements gives customers the freedom to decide where their keys are kept and the scalability necessary to be useful with a growing network.</p>
    <div>
      <h3>Identity-based encryption</h3>
      <a href="#identity-based-encryption">
        
      </a>
    </div>
    <p>The cryptographic tool that allows us to satisfy these requirements is called Identity-based encryption.</p><p>Unlike traditional public key cryptography, where each party has a public key and a private key, in identity-based encryption, your identity <i>is</i> your public key. This is a very powerful concept, because it allows you to encrypt data to a person without having to obtain their public key ahead of time. Instead of a large random-looking number, a public key can literally be any string, such as “<a>bob@example.com</a>” or “beebob”. Identity-based encryption is useful for email, where you can imagine encrypting a message using a person’s email address as the public key. Compare this to the complexity of using PGP, where you need to find someone’s public key and validate it before you send a message. With identity-based encryption, you can even encrypt data to someone before they have the private key associated with their identity (which is managed by a central key generation authority).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Q2nDn5ijrqWLGMEpqCvtL/8492334a36047cfa46df3a9c041d9e6a/image10.png" />
            
            </figure><p></p><p>Public Key Cryptography (PGP, etc.)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1tKl78Up3D6ZQcQObkEscl/9052e1d04856d4a690cb368a984c5469/image1-2.png" />
            
            </figure><p></p><p>Identity-based Encryption</p><p>ID-based encryption was proposed by <a href="https://discovery.csc.ncsu.edu/Courses/csc774-S08/reading-assignments/shamir84.pdf">Shamir in the 80s</a>, but it wasn’t fully practical until <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.66.1131">Boneh and Franklin’s proposal</a> in 2001. Since then, a variety of interesting schemes have been discovered and put to use. The underlying cryptographic primitive that makes efficient ID-based encryption possible is called <a href="https://www.math.uwaterloo.ca/~ajmeneze/publications/pairings.pdf">Elliptic Curve Pairings</a>, a topic that deserves its own blog post.</p><p>Our scheme is based on the combination of two primitives:</p><p><b>Identity-Based Broadcast Encryption</b> (IBBE)</p><p><b>Identity-Based Revocation</b> (IBR)</p><p><i>Identity-Based Broadcast Encryption</i> (IBBE) is like an allowlist. It lets you take a piece of data and encrypt it to a set of recipients. The specific construction we use is from a 2007 paper by <a href="https://link.springer.com/content/pdf/10.1007/978-3-540-76900-2_12.pdf">Delerablee</a>. Critically, the size of the ciphertext does not depend on the number of recipients. This means we can efficiently encrypt data without it getting larger no matter how many recipients there are (or PoPs in our case).</p><p><i>Identity-based Revocation</i> (IBR) is like a blocklist. It lets you encrypt a piece of data so that all recipients can decrypt it except for a pre-defined set of recipients who are excluded. The implementation we used was from section 4 of a paper by <a href="https://pdfs.semanticscholar.org/5da9/eaa24ba749f1ae193800b6961a37b88da1de.pdf">Attrapadung et al. from 2011</a>. Again, the ciphertext size does not depend on the number of excluded identities.</p><p>These two primitives can be combined to create a very flexible cryptographic access control scheme. 
To do this, create two sets of identities: an identity for each region, and an identity for each datacenter location. Once these sets have been decided, each server is provisioned with the identity-based encryption private key for its region and its location.</p><p>With this in place, you can configure access to the key in terms of the following sets:</p><ul><li><p>Set of regions to encrypt to</p></li><li><p>Set of locations inside the region to exclude</p></li><li><p>Set of locations outside the region to include</p></li></ul><p>Here’s how you can encrypt a customer key so that a given access control policy (regions, blocklist, allowlist) can be enforced:</p><ol><li><p>Create a key encryption key KEK</p></li><li><p>Split it in two halves KEK1, KEK2</p></li><li><p>Encrypt KEK1 with IBBE to include the set of regions (allowlist regions)</p></li><li><p>Encrypt KEK2 with IBR to exclude locations in the regions defined in step 3 (blocklist colo)</p></li><li><p>Encrypt KEK with IBBE to include locations not in the regions defined in step 3 (allowlist colo)</p></li><li><p>Send all three encrypted keys (KEK1)region, (KEK2)exclude, (KEK)include to all locations</p></li></ol><p>This can be visualized as follows, with:</p><ul><li><p>Regions: U.S. and E.U.</p></li><li><p>Blocklist colos: Dallas and London</p></li><li><p>Allowlist colos: Sydney and Tokyo</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dEYAkvLTz4wg4vnpSmJ1y/80f8c01cc9ad92f920c1886d14541898/image11.png" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5A6r4FXSSdko6wNNKA2sFi/d898890a6196d82796618e7245850a45/image4.jpg" />
            
            </figure><p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ZEZXyLDAkN0Gpxr0ils0f/fa28cb9857466912df7c5001c1ee62b2/image2-2.png" />
            
            </figure><p>The locations inside the regions allowlist can decrypt half of the key (KEK1), and need to also be outside the blocklist to decrypt the other half of the key (KEK2). In the example, London and Dallas only have access to KEK1 because they can’t decrypt KEK2. These locations can’t reconstruct KEK and therefore can’t decrypt the private key. Every other location in the E.U. and U.S. can decrypt KEK1 and KEK2 so they can construct KEK to decrypt the private key. Tokyo and Sydney can decrypt the KEK from the allowlist and use it to decrypt the private key.</p><p>This will make the private TLS key available in all of the EU and US except for Dallas and London and it will additionally be available in Tokyo and Sydney. The result is the following map:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/S9frEbHm1U5VAzfqpeTWW/dcb0b031d8d552c0b1322f243b02d360/image3.png" />
            
            </figure><p>This design lets customers choose with per-datacenter granularity where their private keys can be accessed. If someone connects to a datacenter that can’t decrypt the private key, that SSL connection is handled using Keyless SSL, where the key server is located in a location with access to the key. The Geo Key code automatically finds the nearest data center that can access the relevant TLS private key and uses Keyless SSL to handle the TLS handshake with the client.</p>
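<p>The access control scheme can be modeled in a short Python sketch. The XOR secret split is one concrete reading of “split it in two halves” (an assumption, not necessarily the production construction), and the IBBE/IBR decryptions are abstracted into boolean checks; colo and region names are illustrative:</p>

```python
import secrets

def split_kek(kek: bytes):
    # XOR secret sharing: either share alone reveals nothing about KEK,
    # so a location needs both halves to reconstruct it.
    kek1 = secrets.token_bytes(len(kek))
    kek2 = bytes(a ^ b for a, b in zip(kek, kek1))
    return kek1, kek2

def recover_kek(kek1: bytes, kek2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(kek1, kek2))

def colo_can_decrypt(colo: str, region: str, policy: dict) -> bool:
    # KEK1 decrypts in allowlisted regions (IBBE); KEK2 decrypts anywhere
    # except blocklisted colos (IBR); the whole KEK also decrypts directly
    # in allowlisted colos (IBBE).
    has_kek1 = region in policy["regions"]
    has_kek2 = colo not in policy["blocklist"]
    has_full_kek = colo in policy["allowlist"]
    return (has_kek1 and has_kek2) or has_full_kek

policy = {"regions": {"EU", "US"},
          "blocklist": {"Dallas", "London"},
          "allowlist": {"Sydney", "Tokyo"}}
```

<p>With the example policy, Frankfurt (E.U.) reconstructs the KEK from both halves, Dallas and London hold only one decryptable half, and Sydney decrypts the KEK directly from the allowlist ciphertext.</p>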
    <div>
      <h3>Creating a key server network</h3>
      <a href="#creating-a-key-server-network">
        
      </a>
    </div>
    <p>Any time you use Keyless SSL for a new connection, there’s going to be a latency cost for connecting to the key server. With Geo Key Manager, we wanted to reduce this latency cost as much as possible. In practical terms, this means we need to know which key server will respond fastest.</p><p>To solve this, we created an overlay network between all of our datacenters to measure latency. Every datacenter has an outbound connection to every other datacenter. Every few seconds, we send a “ping” (a <a href="https://github.com/cloudflare/gokeyless/blob/d129f600f7c60cc36d9da9ef99eefe430b05a3c4/protocol/protocol.go#L95">message in the Keyless SSL protocol</a>, not an <a href="https://en.wikipedia.org/wiki/Ping_(networking_utility)">ICMP message</a>) and we measure how long it takes the server to send a corresponding “pong”.</p><p>When a client connects to a site behind Cloudflare in a datacenter that can’t decrypt the private key for that site, we use metadata to find out which datacenters have access to the key. We then choose the datacenter that has the lowest latency according to our measurements and use that datacenter’s key server for Keyless SSL. If the location with the lowest latency is overloaded, we may choose another location with higher latency but more capacity.</p><p>The data from these measurements was used to construct the following map, highlighting the additional latency added for visitors around the world for the US-only configuration.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1BbX0LQqopTXlY86GvIpa8/922b82e0e9e38dc3865f3148d9156ce1/image6.png" />
            
            </figure><p>Latency added when keys are in U.S. only. Green: no latency cost, Yellow: &lt;50ms, Red: &gt;100ms</p>
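<p>The routing decision reduces to something like the following sketch. The names and the simple “skip overloaded” rule are illustrative, not Cloudflare’s actual implementation:</p>

```python
def pick_key_server(has_key, latency_ms, overloaded):
    # has_key: colos that can decrypt the customer's private key.
    # latency_ms: measured ping/pong round-trip times from this colo.
    # Prefer the lowest-latency candidate; if it is overloaded, fall back
    # to the next fastest with spare capacity.
    candidates = sorted((latency_ms[c], c) for c in has_key if c in latency_ms)
    for _latency, colo in candidates:
        if colo not in overloaded:
            return colo
    return None
```

<p>For example, a European colo serving a U.S.-only key might measure Ashburn at 80&nbsp;ms and Chicago at 95&nbsp;ms, and pick Ashburn unless it is overloaded.</p>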
    <div>
      <h3>In conclusion</h3>
      <a href="#in-conclusion">
        
      </a>
    </div>
    <p>We’re constantly innovating to provide our customers with powerful features that are simple to use. With Geo Key Manager, we are leveraging Keyless SSL and 21st century cryptography to improve private key security in an increasingly complicated geo-political climate.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">6n44oy7eH4vHKsbzTZKngz</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[One More Thing: Keyless SSL and CloudFlare's Growing Network]]></title>
            <link>https://blog.cloudflare.com/one-more-thing-keyless-ssl-and-cloudflares-growing-network/</link>
            <pubDate>Sun, 28 Sep 2014 18:16:18 GMT</pubDate>
            <description><![CDATA[ I wanted to write one more thing about Keyless SSL, our announcement from last week, before attention shifts to what we'll be announcing on Monday. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>I wanted to write one more thing about <a href="/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/">Keyless SSL</a>, our announcement from last week, before attention shifts to <a href="/celebrating-cloudflares-4th-birthday/">what we'll be announcing on Monday</a>. Keyless allows us to provide CloudFlare's service without having private SSL keys stored locally on our edge servers. The news last week focused on how this could allow very large customers, like <a href="http://www.wired.com/2014/09/new-internet-security-tool-guards-goldman-sachs-eavesdroppers/">major financial institutions</a>, to use CloudFlare without trusting us with their private keys.</p><p>But there's another use that will benefit the entire CloudFlare userbase, not just our largest enterprise customers, and it's this: Keyless SSL is a key part of our strategy to continue to expand CloudFlare's global network.</p>
    <div>
      <h3>CloudFlare's Global Network Today</h3>
      <a href="#cloudflares-global-network-today">
        
      </a>
    </div>
    <p>CloudFlare's network today consists of <a href="https://www.cloudflare.com/network-map">28 edge data centers that span much of the globe</a>. We have technical and security requirements for these facilities in order to ensure that the equipment they house remains secure. Generally, we're in <a href="http://en.wikipedia.org/wiki/Data_center#Data_center_tiers">Tier III or IV data center facilities</a> with the highest level of security. In our San Jose facility, for instance, you have to pass through 5 biometric scans, in addition to multiple 24x7 manned guard check points, before you can get to the electronically locked cabinets housing our servers.</p><p>There are only about 30 locations around the world where a large number of networks come together in a building that meets these security requirements. In other words, we have largely run out of places that it makes sense for us to add a new location where we are confident enough in the facility's security to store sensitive information like customers' private keys.</p>
    <div>
      <h3>Bigger Network, New Challenges</h3>
      <a href="#bigger-network-new-challenges">
        
      </a>
    </div>
    <p>With most of CloudFlare's rival services, even those that have a seemingly larger network footprint, the minute you ask them to enable SSL, the size of the network shrinks to something that resembles our network today. That's because they too don't feel comfortable storing customers' private keys in many of their edge nodes. And that's why most legacy CDN providers charge such an enormous premium the minute you ask them to support SSL.</p><p>But it makes sense to continue to grow our network. As we do, not only can we provide faster performance, but we can further isolate and mitigate large scale attacks. The way we think about it at CloudFlare is that, ultimately, we want to have equipment running in every cell phone tower base station. In order to do that, we need to ensure that we can do so securely. There are many requirements to pull that off, but one of them is ensuring that our customers' most sensitive data is never stored anywhere without the highest security standards. That's where Keyless SSL comes in.</p>
    <div>
      <h3>Securely Extending CloudFlare's Edge</h3>
      <a href="#securely-extending-cloudflares-edge">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7dTsXvQUG7PdF44EsvDaTA/6ad54911d931621d215683854d35db6e/cloudflare-illustration-map-upcoming.png" />
            
            </figure><p>The map above shows all the locations where CloudFlare is actively working to turn up data centers over the next 12 months. As we expand into some of the more distant corners of the Internet, Keyless SSL allows us to offer our full range of services without needing to store customers' SSL keys in facilities that don't meet the highest security standards.</p><p>Beyond technical concerns, different regions of the world have different geo-political concerns. For instance, European customers may not trust their keys being stored in the United States, American customers may not trust their keys being stored in China, and Chinese customers may not trust their keys being stored in Europe. Keyless will allow us to honor those geopolitical concerns on a customer by customer basis, either ourselves or in partnership with trusted third parties who can serve as key storage agents.</p><p>There are, of course, a number of other technical challenges to ensuring that a server in a potentially hostile environment can be secured and trusted. The good news is many of you reading this are holding in your hand a modern example of a computing platform that has been locked down tightly to only run authorized software: your smart phone. 
We have been putting the pieces together to offer a global secure network including <a href="https://twitter.com/grittygrease">hiring cryptographers out of Apple</a>, <a href="http://cryptoseal.com/">acquiring companies like CryptoSeal</a>, and talking about <a href="http://www.rsaconference.com/writable/presentations/file_upload/stu-m06b-running-secure-server-software-on-insecure-hardware-without-a-parachute.pdf">best practices for keeping secrets safe in unsafe environments (PDF link)</a> — it all has to do with continuing to securely expand CloudFlare's global network.</p><p>So, on the <a href="/celebrating-cloudflares-4th-birthday/">eve of a big announcement that may or may not have something to do with massively expanding the encrypted web</a>, know that we're also leveraging technologies like Keyless SSL in order to securely expand the size of our network to better serve all our customers, not just the big enterprises that increasingly are trusting us to protect and accelerate their networks.</p> ]]></content:encoded>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[SSL]]></category>
            <guid isPermaLink="false">1xbavODyyzs0kSwH1MJJE3</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
        <item>
            <title><![CDATA[Keyless SSL: The Nitty Gritty Technical Details]]></title>
            <link>https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-technical-details/</link>
            <pubDate>Fri, 19 Sep 2014 08:53:46 GMT</pubDate>
            <description><![CDATA[ We announced Keyless SSL yesterday to an overwhelmingly positive response. We read through the comments on this blog, Reddit, Hacker News, and people seem interested in knowing more and getting deeper into the technical details. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6CV42lgrprhs4KV01YK9gK/66f3c3048fc3dee8cad62ee2cf5a8b6b/illustration-keyless-ssl.png" />
            
            </figure><p>We announced <a href="/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/">Keyless SSL</a> yesterday to an overwhelmingly positive response. We read through the comments on this blog, <a href="http://www.reddit.com/r/programming/comments/2grd1d/cloudflare_annouces_keyless_ssl/">Reddit</a>, <a href="https://news.ycombinator.com/item?id=8334933">Hacker News</a>, and people seem interested in knowing more and getting deeper into the technical details. In this blog post we go into extraordinary detail to answer questions about how Keyless SSL was designed, how it works, and why it’s secure. Before we do so, we need some background about how encryption works on the Internet. If you’re already familiar, feel free to <a href="#makingitkeyless">skip ahead</a>.</p>
    <div>
      <h3>TLS</h3>
      <a href="#tls">
        
      </a>
    </div>
    <p>Transport Layer Security (TLS) is the workhorse of <a href="https://www.cloudflare.com/learning/security/glossary/website-security-checklist/">web security</a>. It lets websites prove their identity to web browsers, and protects all information exchanged from prying eyes using encryption. The TLS protocol has been around for years, but it’s still mysterious to even hardcore tech enthusiasts. Understanding the fundamentals of TLS is the key to understanding Keyless SSL.</p>
    <div>
      <h3>Dual goals</h3>
      <a href="#dual-goals">
        
      </a>
    </div>
    <p>TLS has two main goals: confidentiality and authentication. Both are critically important to securely communicating on the Internet.</p><p>Communication is considered confidential when two parties are confident that nobody else can understand their conversation. Confidentiality can be achieved using symmetric encryption: use a key known only to the two parties involved to encrypt messages before sending them. In TLS, this symmetric encryption is typically done using a strong block cipher like <a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">AES</a>. Older browsers and platforms might use a cipher like <a href="http://en.wikipedia.org/wiki/Triple_DES">Triple DES</a> or the stream cipher <a href="http://en.wikipedia.org/wiki/RC4">RC4</a>, <a href="/killing-rc4-the-long-goodbye/">which is now considered insecure</a>.</p><p>The other crucial goal of TLS is authentication. Authentication is a way to ensure the person on the other end is who they say they are. This is accomplished with public keys. Websites use certificates and public key cryptography to prove their identity to web browsers. And browsers need two things to trust a certificate: proof that the other party is the owner of the certificate, and proof that the certificate is trusted.</p><p>A website certificate contains a public key, and if the website can prove that it controls the associated private key, that’s proof that they are the owner of the certificate. A browser considers a certificate trusted if the certificate was granted by a trusted certificate authority, and contains the site’s domain name. More technical details of how trust works with web certificates is described in <a href="/introducing-cfssl/">a previous blog post</a> about our open source SSL toolkit, CFSSL.</p><p>In the context of the web, confidentiality and authentication are achieved through the process of establishing a shared key and proving ownership of a certificate. 
TLS does this through a series of messages called a “handshake”.</p>
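<p>To make the shared-key idea concrete, here is a toy Python sketch. This is not real TLS (TLS uses vetted ciphers like AES); the hash-derived keystream below only illustrates that two parties holding the same secret key can encrypt and decrypt each other's messages:</p>

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key || nonce || counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream is its own inverse, so the same call
    # both encrypts and decrypts.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

shared_key = b"known-only-to-client-and-server"
nonce = b"unique-per-message"
message = b"GET /account HTTP/1.1"

ciphertext = xor_cipher(shared_key, nonce, message)
recovered = xor_cipher(shared_key, nonce, ciphertext)
assert recovered == message and ciphertext != message
```

<p>An eavesdropper without <code>shared_key</code> sees only the ciphertext; the entire job of the TLS handshake is getting both sides to agree on that key without revealing it.</p>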
    <div>
      <h3>What’s in a handshake?</h3>
      <a href="#whats-in-a-handshake">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2zkUI687fju7i7jdQhW7hb/96b07ae5d835fa8dff7a71b4061f1bd3/handshake1.jpg" />
            
            </figure><p>The TLS protocol evolved from the Secure Sockets Layer (SSL) protocol which was developed by Netscape in the mid-1990s. In 1999, the Internet Engineering Task Force (IETF) standardized a new protocol called TLS, which is an updated version of SSL. In fact, TLS is so similar to SSL that TLS 1.0 uses the SSL protocol version number 3.1. This may seem confusing at first, but makes sense since TLS is just a minor update to SSL 3.0. Subsequent versions of TLS have followed this pattern. Since TLS is an evolution of the SSL protocol, people still use the terms TLS and SSL somewhat interchangeably.</p><p>There are two main types of handshakes in TLS: one based on <a href="http://en.wikipedia.org/wiki/RSA_(cryptosystem)">RSA</a>, and one based on <a href="http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman</a>. RSA and Diffie-Hellman were the two algorithms which ushered in the era of modern cryptography, and brought cryptography to the masses. These two handshakes differ only in how the two goals of key establishment and authentication are achieved:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3B1xXZSACJ04l2CYJhHbzN/36b086a5ab2cbaf3b07fd494b17b5749/keyless-blog-table.png" />
            
            </figure><p>The RSA and DH handshakes both have their advantages and disadvantages. The RSA handshake only uses one public key algorithm operation, RSA. A DH handshake with an RSA certificate requires the same RSA operation, but with an additional DH operation. Given that the certificate is RSA, the RSA handshake is faster to compute. Public key algorithms like RSA and DH use a lot of CPU and are the slowest part of the TLS handshake. A laptop can only perform a couple hundred RSA private key operations a second, versus around ten million AES operations per second.</p><p>The DH handshake requires two algorithms to run, but the advantage it brings is that it allows key establishment to happen independently of the server’s private key. This gives the connection <a href="/staying-on-top-of-tls-attacks/">forward secrecy</a>, a useful property that protects conversations from being decrypted after the fact if the private key is somehow exposed. The DH version of the handshake also opens up the possibility of using non-RSA certificates that can improve performance, including <a href="/ecdsa-the-digital-signature-algorithm-of-a-better-internet/">ECDSA keys</a>. Elliptic curves provide the same security with less computational overhead. A DH handshake with an elliptic curve DSA certificate and elliptic curve Diffie-Hellman key agreement can be faster than a one-operation RSA handshake.</p><p>CloudFlare supports both handshakes, but, as we will describe later, the type of handshake used is chosen by the server. CloudFlare will choose a DH handshake whenever we can.</p>
    <div>
      <h3>TLS Glossary</h3>
      <a href="#tls-glossary">
        
      </a>
    </div>
    <p>Before we walk through the steps of the handshake, here are a couple of definitions.</p><p><b>1. Session key:</b> This is the end result of a handshake. It’s a key for a symmetric cipher, and allows the client and server to encrypt messages to each other.</p><p><b>2. Client random:</b> This is a sequence of 32 bytes created by the client. It’s unique for each connection, and is supposed to contain a four-byte timestamp followed by 28 random bytes. Recently, Google Chrome switched to using 32 random bytes in order to prevent client fingerprinting. These random values are often called a <a href="http://en.wikipedia.org/wiki/Cryptographic_nonce">nonce</a>.</p><p><b>3. Server random:</b> The same as the client random, except generated by the server.</p><p><b>4. Pre-main secret:</b> This is a 48-byte blob of data. It can be combined with both the client random and the server random to create the session key using a “pseudorandom function” (PRF).</p><p><b>5. Cipher suite:</b> This is a unique identifier for the combination of algorithms that make up a TLS connection. 
It defines one algorithm for each of the following:</p><ul><li><p>key establishment (typically a Diffie-Hellman variant or RSA)</p></li><li><p>authentication (the certificate type)</p></li><li><p>confidentiality (a symmetric cipher)</p></li><li><p>integrity (a hash function)</p></li></ul><p>For example “AES128-SHA” defines a session that uses:</p><ul><li><p>RSA for key establishment (implied)</p></li><li><p>RSA for authentication (implied)</p></li><li><p>128-bit <a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Advanced Encryption Standard</a> in <a href="https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation">Cipher Block Chaining (CBC) mode</a> for confidentiality</p></li><li><p>160-bit <a href="http://en.wikipedia.org/wiki/SHA-1">Secure Hashing Algorithm (SHA)</a> for integrity</p></li></ul><p>A more daunting, but valid cipher suite is “ECDHE-ECDSA-AES256-GCM-SHA384” which defines a session that uses:</p><ul><li><p>Elliptic Curve Diffie-Hellman Ephemeral (<a href="http://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman">ECDHE</a>) key exchange for key establishment</p></li><li><p>Elliptic Curve Digital Signature Algorithms (<a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">ECDSA</a>) for authentication</p></li><li><p>256-bit <a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">Advanced Encryption Standard</a> in <a href="http://en.wikipedia.org/wiki/Galois/Counter_Mode">Galois/Counter mode (GCM)</a> for confidentiality</p></li><li><p>384-bit <a href="http://en.wikipedia.org/wiki/SHA-2">Secure Hashing Algorithm</a> for integrity</p></li></ul><p>With these definitions in hand, let’s walk through an RSA handshake.</p>
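<p>To sketch how the pre-main secret and the two randoms combine into a session key, here is a simplified HMAC-based expansion in Python. It mimics the shape of the TLS PRF but is not the exact RFC 5246 construction:</p>

```python
import hmac
import hashlib
import os

def derive_session_key(pre_main_secret: bytes, client_random: bytes,
                       server_random: bytes, length: int = 32) -> bytes:
    """Toy PRF: stretch the secret over both randoms with HMAC-SHA256."""
    seed = client_random + server_random
    out, block = b"", seed
    while len(out) < length:
        block = hmac.new(pre_main_secret, block, hashlib.sha256).digest()
        out += hmac.new(pre_main_secret, block + seed, hashlib.sha256).digest()
    return out[:length]

pre_main = os.urandom(48)       # the 48-byte pre-main secret
client_random = os.urandom(32)
server_random = os.urandom(32)

# Both sides run the same computation on the same inputs, so both
# arrive at the same session key without ever sending it on the wire.
client_key = derive_session_key(pre_main, client_random, server_random)
server_key = derive_session_key(pre_main, client_random, server_random)
assert client_key == server_key and len(client_key) == 32
```

<p>Because the randoms are fresh per connection, even a reused pre-main secret would yield a different session key each time.</p>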
    <div>
      <h3>RSA handshake</h3>
      <a href="#rsa-handshake">
        
      </a>
    </div>
    <p>Note that none of the messages in the handshake are encrypted with a session key; they are all sent in the clear.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4L8nV2DJ221DB4RXyTpP8r/8390e681a1444bc7a8a10edfc81c34d3/ssl_handshake_rsa.jpeg.jpeg" />
            
            </figure><p><b>Message 1: “Client Hello”</b></p><p>The client hello contains the protocol version that the client wants to use, and some other information to get the handshake started including the client random and a list of cipher suites. Modern browsers also include the hostname they are looking for, called the <a href="http://en.wikipedia.org/wiki/Server_Name_Indication">Server Name Indication (SNI)</a>. SNI lets the web server host multiple domains on the same IP address.</p><p><b>Message 2: “Server Hello”</b></p><p>After receiving the client hello, the server picks the parameters for the handshake going forward. The choice of cipher suite determines what type of handshake is performed. The server “hello” message contains the server random, the server’s chosen cipher suite, and the server’s certificate. The certificate contains the server’s public key and domain name.</p><p>Note: CloudFlare’s cipher suite preferences are posted publicly on our <a href="https://github.com/cloudflare/sslconfig">Github page</a>.</p><p><b>Message 3: “Client Key Exchange”</b></p><p>After validating that the certificate is trusted and belongs to the site they are trying to reach, the client creates a random pre-main secret. This secret is encrypted with the public key from the certificate, and sent to the server.</p><p>Upon receiving this message, the server uses its private key to decrypt this pre-main secret. Now that both sides have the pre-main secret, and both client and server randoms, they can both derive the same session key. Then they exchange a short message to indicate that the next message they send will be encrypted.</p><p>The handshake is officially complete when the client and server exchange “Finished” messages. The actual text is literally: “client finished” or “server finished” encrypted with the session key. 
Any subsequent communication between the two parties is encrypted with the session key.</p><p>This handshake is elegant because it combines key exchange and authentication in one step. The logic is that if the server can correctly derive the session key, then it must have access to the private key, and, therefore, be the owner of the certificate.</p><p>The downside of this handshake is that the messages secured by it are only as safe as the private key. Suppose a third party has recorded the handshake and the subsequent communication. If that party gets access to the private key in the future, they will be able to decrypt the pre-main secret and derive the session key. With that, they can decrypt the entire conversation. This is true even if the certificate is expired or revoked. This leads us to another form of handshake that can provide confidentiality even if the private key is compromised.</p>
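<p>The pre-main secret exchange at the heart of this handshake can be sketched with textbook RSA. This uses toy-sized numbers and no padding, purely for illustration; real certificates carry 2048-bit keys and use PKCS#1 padding:</p>

```python
# Tiny textbook RSA key pair (illustration only).
p, q = 61, 53
n = p * q                            # public modulus, in the certificate
e = 17                               # public exponent, in the certificate
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, held by the server

pre_main_secret = 65   # stand-in for the 48-byte pre-main secret

# Client: encrypt the pre-main secret with the server's public key
# and send it in the "Client Key Exchange" message.
encrypted = pow(pre_main_secret, e, n)

# Server: only the holder of the private exponent can recover it.
recovered = pow(encrypted, d, n)
assert recovered == pre_main_secret
```

<p>Note that the private key operation, the single <code>pow(encrypted, d, n)</code>, is the only step that needs the private key, which is exactly what makes the handshake splittable later on.</p>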
    <div>
      <h3>Ephemeral Diffie-Hellman handshake</h3>
      <a href="#ephemeral-diffie-hellman-handshake">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4piTy8g3Q4Gd4x846uqptM/37365ab04d5081f8790e2f4e5859deb1/diffiehellman.gif" />
            
            </figure><p>The ephemeral Diffie-Hellman handshake is an alternative form of the TLS handshake. It uses two different mechanisms: one for establishing a shared pre-main secret, and one for authenticating the server. The key feature that this relies on is the Diffie-Hellman key agreement algorithm.</p><p>In Diffie-Hellman, two parties with different secrets exchange messages to obtain a shared secret. This handshake relies on the simple fact that exponents are commutative. Specifically, taking a number to the power of a, and the result to the power of b, is the same as taking the same number to the power of b, and the result to the power of a.</p><p>The algorithm works like this:</p><ul><li><p>person a has secret a, sends g<sup>a</sup> to person b</p></li><li><p>person b has secret b, sends g<sup>b</sup> to person a</p></li><li><p>person a computes (g<sup>b</sup>)<sup>a</sup></p></li><li><p>person b computes (g<sup>a</sup>)<sup>b</sup></p></li><li><p>Both person a and b end up with g<sup>ab</sup>, which is their shared secret</p></li></ul><p>This doesn't work well with regular numbers because g<sup>ab</sup> can get really large, and there are efficient ways to take the nth root of a number. However, we can change the problem space and make it work. This is done by restricting the computation to numbers of a fixed size by always dividing the result of a computation by a big prime number and taking the remainder. This is called modular arithmetic. Taking an nth root in modular arithmetic is called the <a href="http://en.wikipedia.org/wiki/Discrete_logarithm">discrete logarithm problem</a> and is considered a hard problem.</p><p>Another variant of the Diffie-Hellman key agreement uses Elliptic Curves, ECDHE. For more information on Elliptic Curves, check out <a href="/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/">this primer</a> we published last year. 
A shared secret can be derived using either of these fixed-size Diffie-Hellman key agreement algorithms.</p><p>Now let’s go through a Diffie-Hellman handshake:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1IaI0Rxkc1Kbzu1Yib8sms/c6d045b219e45e1164098a8ab55e06d4/ssl_handshake_diffie_hellman.jpeg.jpeg" />
            
            </figure><p><b>Message 1: “Client Hello”</b></p><p>Just like in the RSA case, the client hello contains the protocol version, the client random, a list of cipher suites, and, optionally, the SNI extension. If the client speaks ECDHE, they include the list of curves they support. If this is omitted, or there is a mismatch, it can be <a href="http://terinstock.com/blog/2014/07/02/tls-with-erlang.html">tricky to debug</a>.</p><p><b>Message 2: “Server Hello”</b></p><p>After receiving the client hello, the server picks the parameters for the handshake going forward, including the curve for ECDHE. The server “hello” message contains the server random, the server’s chosen cipher suite, and the server’s certificate.</p><p>The RSA and Diffie-Hellman handshakes start to differ at this point with a new message type.</p><p><b>Message 3: “Server Key Exchange”</b></p><p>In order to start the Diffie-Hellman key exchange, the server needs to pick some starting parameters and send them to the client---this corresponds to the g<sup>a</sup> we described above. The server also needs a way to prove that it has control of the private key, so the server computes a digital signature of all the messages up to this point. Both the Diffie-Hellman parameters and the signature are sent in this message.</p><p><b>Message 4: “Client Key Exchange”</b></p><p>After validating that the certificate is trusted, and belongs to the site they are trying to reach, the client validates the digital signature sent from the server. They also send the client half of the Diffie-Hellman handshake (corresponding to g<sup>b</sup> above).</p><p>At this point, both sides can compute the pre-main secret from the Diffie-Hellman parameters (corresponding to g<sup>ab</sup> above). With the pre-main secret and both client and server randoms, they can derive the same session key. 
They then exchange a short message to indicate that the next message they send will be encrypted.</p><p>Just like in the RSA handshake, this handshake is officially complete when the client and server exchange “Finished” messages. Any subsequent communication between the two parties is encrypted with the session key.</p>
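<p>The exponent arithmetic behind this handshake fits in a few lines of Python. The prime here is toy-sized for illustration; real deployments use 2048-bit primes or elliptic curve groups:</p>

```python
import secrets

p = 2**127 - 1   # a Mersenne prime; real groups are far larger
g = 3            # public generator, agreed in the handshake parameters

a = secrets.randbelow(p - 2) + 1   # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # server's ephemeral secret

A = pow(g, a, p)   # sent by the client ("Client Key Exchange")
B = pow(g, b, p)   # sent by the server ("Server Key Exchange", signed)

# Each side combines its own secret with the other's public value.
client_secret = pow(B, a, p)   # (g^b)^a mod p
server_secret = pow(A, b, p)   # (g^a)^b mod p
assert client_secret == server_secret   # shared g^ab, the pre-main input
```

<p>Because a and b are thrown away after the handshake, recovering the server's long-term private key later reveals nothing about g<sup>ab</sup>, which is precisely the forward secrecy property described above.</p>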
    <div>
      <h3>Making it keyless</h3>
      <a href="#making-it-keyless">
        
      </a>
    </div>
    <p>Yesterday we announced <a href="/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/">Keyless SSL</a>, CloudFlare’s solution that allows sites to use CloudFlare without requiring them to give up custody of their private keys.</p><p>One takeaway from the handshake diagrams above is that the private key is only used once in each handshake. This allows us to split the TLS handshake geographically, with most of the handshake happening at CloudFlare’s edge while moving the private key operations to a remote key server. This key server can be put on the customer’s infrastructure, giving them exclusive access to the private key.</p><p>Once the secure tunnel is established, the RSA handshake looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/01k8tXP0Bha2Eg673443AR/d967683aacb8017b98ad877d11deebbd/cloudflare_keyless_ssl_handshake_rsa.jpeg.jpeg" />
            
            </figure><p>The DH handshake looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2FX1r495YFfPckQJy7Ilqf/399b32288dec5a99906639b9db85d440/cloudflare_keyless_ssl_handshake_diffie_hellman.jpeg.jpeg" />
            
            </figure><p>Extending the TLS handshake in this way required changes to the NGINX server and OpenSSL to make the private key operation both remote and non-blocking (so NGINX can continue with other requests while waiting for the key server). Both the NGINX/OpenSSL changes and the protocol between CloudFlare’s servers and the key server were audited by iSEC Partners and Matasano Security. They found the security of Keyless SSL equivalent to on-premises SSL. Keyless SSL has also been studied by academic researchers from both provable security and performance angles.</p><p>The key server can run on Linux (packaged for Red Hat/CentOS, Debian and Ubuntu, and others), other UNIX operating systems (including FreeBSD), and Microsoft Windows Server. Customers also get access to a <a href="https://github.com/cloudflare/keyless">reference implementation</a> written in C, so they can build their own compatible key server.</p><p>The key server will soon be integrated with hardware security module (HSM) vendors and key management solutions (such as Venafi) to provide customers with additional ways to control how the keys are managed in their infrastructure.</p><p>Keyless SSL supports multiple key servers for the same certificate. Key servers are stateless, allowing customers to use off-the-shelf hardware and scale the deployment of key servers linearly with traffic. By running multiple key servers and load balancing via DNS, the customer’s site can be kept highly available.</p>
    <div>
      <h3>Protecting the oracle</h3>
      <a href="#protecting-the-oracle">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4KAt3u8z3Y77B0DZmIseyl/010971ff03016aeee0a02654cae7f71e/oracle2.JPG.jpeg" />
            
            </figure><p>For Keyless SSL to be secure, the connection from CloudFlare’s edge to the key server also needs to be secure. The key server can act as a cryptographic oracle by performing private key operations for anyone who can contact it. Ensuring that only CloudFlare can ask the key server to perform operations is crucial to the security of Keyless SSL.</p><p>We secure the connection from CloudFlare to the key server with mutually authenticated TLS. Previously, we described TLS handshakes that were only authenticated in one direction: the client validated the server. In mutually authenticated TLS, both client and server have certificates and authenticate each other. The key server authenticates CloudFlare and CloudFlare authenticates the key server.</p><p>In Keyless SSL, the key server only allows connections from clients with a certificate signed by a CloudFlare internal certificate authority. We use certificates granted by our own certificate authority for both sides of this connection. We have strict controls over how these certificates are granted and use the <a href="http://en.wikipedia.org/wiki/X.509#Extensions_informing_a_specific_usage_of_a_certificate">X.509 Extended Key Usage</a> option to ensure that certificates are only used as intended. This prevents any party that doesn’t have a CloudFlare granted certificate from communicating with the key server. Customers also have the option to add firewall rules to limit incoming connections to those from <a href="https://www.cloudflare.com/ips">CloudFlare’s IP space</a>.</p><p>Additionally, we restrict the cipher suite for this connection to one of the following:</p><ul><li><p>ECDHE-ECDSA-AES256-GCM-SHA384</p></li><li><p>ECDHE-RSA-AES256-GCM-SHA384</p></li></ul><p>These are two of the strongest ciphers available in OpenSSL and guarantee the connection between CloudFlare and the keyserver has perfect forward secrecy.</p>
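<p>The cipher restriction and client-certificate requirement can be sketched with Python's standard <code>ssl</code> module. This is illustrative only: the real key server is a C program, and the certificate file paths in the comments are hypothetical placeholders:</p>

```python
import ssl

# Server-side context for the key-server end of the connection.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Demand a client certificate; only clients holding a certificate
# signed by the trusted internal CA may connect.
ctx.verify_mode = ssl.CERT_REQUIRED
# ctx.load_verify_locations("internal-ca.pem")            # hypothetical path
# ctx.load_cert_chain("keyserver.pem", "keyserver-key.pem")  # hypothetical

# Restrict TLS 1.2 negotiation to the two approved forward-secret suites.
ctx.set_ciphers("ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384")

assert ctx.verify_mode == ssl.CERT_REQUIRED
```

<p>With <code>CERT_REQUIRED</code> set, the handshake itself fails for any peer that cannot present a certificate chaining to the loaded CA, so unauthorized parties never get far enough to ask the oracle anything.</p>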
    <div>
      <h3>Other security considerations</h3>
      <a href="#other-security-considerations">
        
      </a>
    </div>
    <p>The key server itself can be modified to work with a hardware security module (HSM), providing additional hardware security for customers who want to protect the key server from undiscovered software vulnerabilities similar to <a href="/searching-for-the-prime-suspect-how-heartbleed-leaked-private-keys/">Heartbleed</a>.</p><p>The key server is not subject to padding oracle attacks like that of <a href="http://en.wikipedia.org/wiki/Adaptive_chosen-ciphertext_attack">Bleichenbacher</a> because it uses constant-size responses. Side-channel attacks such as timing attacks are ineffective as long as the underlying cryptographic library used on the key server is immune. We use OpenSSL in our reference implementation, which has been hardened against such attacks.</p>
    <div>
      <h3>Performance enhancements</h3>
      <a href="#performance-enhancements">
        
      </a>
    </div>
    <p>CloudFlare is designed to make sites faster: it should take less time to connect to a site on CloudFlare than the same site off CloudFlare. This is also the case with Keyless SSL. Connecting to a site with Keyless SSL should be faster than connecting to the same site with CloudFlare disabled. People have asked how that can be, given that Keyless SSL requires an additional connection to the key server. The answer lies in geography.</p><p>CloudFlare’s data centers are geographically distributed in 20 countries around the world and are located within less than 20ms of 95% of the Internet’s active population. This allows visitors to communicate with a CloudFlare server that is closest to them on the network. Messages sent between visitor and CloudFlare don’t have to travel far, so the connection latency is smaller. This proximity effect is one of the ways that CloudFlare accelerates websites.</p><p>In the Keyless SSL diagrams above, all the messages except one are traveling over the short link between CloudFlare and the visitor. The only long round trip is the one to the key server.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ZAiXaRx3SJzO0bXRfyw5B/2ad17e4dea139e9bf1b2c84d88ad3ac8/illustration-ssl-with-cf-and-without.png" />
            
            </figure><p>Consider the scenario where a visitor in San Francisco wants to visit a site hosted in London over TLS. Without CloudFlare, the TLS handshake requires two round-trips from San Francisco to London. With Keyless SSL and a keyserver hosted in London, the visitor will end up in CloudFlare’s nearby <a href="/and-then-there-were-threecloudflares-new-data/">San Jose</a> data center. In this scenario, only one of the messages has to travel to London and back. Messages have to travel a shorter distance, resulting in a faster handshake.</p><p>The reason we only require one round-trip to the key server is persistent connections. Once CloudFlare has connected to a key server, it keeps the connection ready for any new visitors to the site. The first connection to a Keyless SSL powered site is fast, but the major performance improvement comes when a visitor returns to the site.</p>
    <div>
      <h3>Abbreviated handshake</h3>
      <a href="#abbreviated-handshake">
        
      </a>
    </div>
    <p>TLS provides an excellent performance feature called “session resumption”. If a client has previously established a session with the server, and is trying to connect again, they can use an abbreviated handshake. There are two mechanisms to do so: session IDs and session tickets.</p><p>Session IDs require the server to keep the session state (i.e. the session key) ready in case a previous session needs to be resumed. In the case of session tickets, the server sends a session ticket (consisting of the session key encrypted with a ticket key) to the client during the initial handshake. When resuming a session, the client sends the encrypted key back to the server who decrypts it and resumes the session. There is no need to use the private key for session resumption.</p><p>Firefox and Chrome are the major browsers that support session tickets. All other modern browsers support resumption via session IDs. One of the challenges faced when using these techniques at scale is load balancing. In order for a server to resume a connection, it needs to have the previously established session key. If the visitor tries to resume a connection with a new server, that server needs to get the original session key somehow.</p><p>The main problem with session resumption is that it was not meant to scale to load-balanced servers. If a client starts a session on one server, it cannot resume that session on another server. This is not a failing of the protocol, just a missing feature in open source web servers.</p><p>With Keyless SSL, we are introducing advanced session resumption capabilities to solve this problem. This includes worldwide session resumption via session tickets and session resumption within a data center via session IDs. Session resumption allows repeat visitors to have lightning fast connection times because there is no need to go back to the key server to resume a connection.</p>
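<p>A toy sketch of the session ticket idea: the server seals the session key under a ticket key it shares with its peers, hands the client the opaque result, and any server holding the ticket key can later unseal it. Real tickets (RFC 5077) use AES encryption plus an HMAC; the hash-based keystream here is only for illustration:</p>

```python
import hashlib
import os

def seal(ticket_key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data against a keystream derived from the ticket key (toy only)."""
    ks = b""
    i = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(ticket_key + nonce + bytes([i])).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, ks))

ticket_key = os.urandom(32)    # shared by the servers, rotated regularly
session_key = os.urandom(32)   # negotiated during the full handshake

# Server hands the client an opaque ticket: a fresh nonce plus the
# session key sealed under the ticket key.
nonce = os.urandom(16)
ticket = nonce + seal(ticket_key, nonce, session_key)

# Later, any server holding the ticket key recovers the session key
# and resumes the session, with no private key operation at all.
nonce2, body = ticket[:16], ticket[16:]
assert seal(ticket_key, nonce2, body) == session_key
```

<p>The client stores only the opaque ticket; the server side stays stateless, which is what lets resumption work across an entire fleet.</p>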
    <div>
      <h3>Session ticket resumption</h3>
      <a href="#session-ticket-resumption">
        
      </a>
    </div>
    <p>With session tickets, we can resume a session from any machine on our network. This required significant engineering work that we are opening up to the community.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LELH7yWGHIz7bHaWlm3Lu/6d2e15b0816af0a0dd589fdbbcd73de5/session_resumption_with_session_ticket.jpeg.jpeg" />
            
            </figure><p>Twitter <a href="https://blog.twitter.com/2013/forward-secrecy-at-twitter">recently announced</a> that they are using session ticket keys rotated every 12 hours. We are upping the ante by rotating session ticket keys every hour. We built a centralized session ticket key generator that issues new keys every hour for distribution across our global network. Each key persists for a user-configurable amount of time (defaulting to 96 hours), after which it is permanently deleted. To distribute the keys, we added a TLS layer to the key-value store <a href="/kyoto_tycoon_with_postgresql/">Kyoto Tycoon</a> so that replication is fully encrypted with mutually authenticated TLS and pinned to CloudFlare’s CA. With Kyoto Tycoon, ticket keys are replicated globally within seconds to every one of our edge machines. In keeping with our <a href="/keeping-our-open-source-promise/">open source philosophy</a>, we plan on open sourcing our changes to Kyoto Tycoon. With the ticket keys available on every server, we can resume any connection on any machine in our entire network.</p><p>The rotation of ticket keys helps us maintain perfect forward secrecy for our customers while reducing latency for returning visitors using Firefox and Chrome.</p>
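<p>The rotation policy can be sketched as a key ring that gains a fresh key every hour and permanently drops anything past its lifetime. This is a simplified model; the actual generator and its Kyoto Tycoon distribution layer are more involved, and the class name here is purely illustrative:</p>

```python
import os
import time

class TicketKeyRing:
    """Hourly-rotated ticket keys, expired after a configurable lifetime."""

    def __init__(self, lifetime_hours: int = 96):
        self.lifetime = lifetime_hours * 3600
        self.keys = []   # list of (issued_at, key), newest first

    def rotate(self, now: float) -> bytes:
        key = os.urandom(32)
        self.keys.insert(0, (now, key))
        # Permanently delete keys older than the lifetime.
        self.keys = [(t, k) for t, k in self.keys if now - t < self.lifetime]
        return key

    def current(self) -> bytes:
        return self.keys[0][1]

ring = TicketKeyRing(lifetime_hours=96)
start = time.time()
for hour in range(200):              # simulate 200 hourly rotations
    ring.rotate(start + hour * 3600)
assert len(ring.keys) == 96          # only the last 96 hours survive
```

<p>New tickets are always issued under <code>current()</code>, while the older keys on the ring remain available to decrypt tickets issued within the retention window.</p>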
    <div>
      <h3>Session ID resumption</h3>
      <a href="#session-id-resumption">
        
      </a>
    </div>
    <p>We can also resume sessions across multiple machines with a session ID. Unlike with session tickets, we can only resume sessions within a datacenter. This turns out to be good enough for 99.99% of users because CloudFlare’s Anycast network directs requests to the nearest data center. Browsers don’t typically move between cities very often, so most resumption requests end up in the same place. And, in the worst case, such as if you access a site from your phone in one city and then fly to another, your client will simply set up a new session.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56ZE8zrk32JgNCIrjO0cX6/6f8543c5c041caf19c97e2c5b18db7ee/session_resumption_with_session_id.jpeg.jpeg" />
            
            </figure><p>We can resume sessions using an ID by caching session keys within a datacenter. For each new connection, we cache an encrypted version of the session key in a centralized location, indexed by session ID. If a new request comes in with a session ID that has been seen before, we look it up in the central store, which can be accessed from all the servers in the data center. These session keys do not leave the datacenter and persist for a user-configurable amount of time, again defaulting to 96 hours. Session ID caching lets us use an abbreviated handshake for almost all resumed connection attempts in browsers other than Chrome or Firefox.</p><p>Other technically sophisticated organizations also use session resumption. Google, for instance, uses a similar technique to resume sessions across its infrastructure. <b>Note:</b> <i>All CloudFlare customers with SSL enabled now get the benefits of this advanced session resumption, even if they are not using Keyless SSL.</i></p>
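<p>The lookup flow can be modeled as a datacenter-local store with a time-to-live. Again, this is a hedged sketch, not our production code, and the class and method names are illustrative:</p>

```python
import os
import time

class SessionCache:
    """Session keys indexed by session ID, shared within one datacenter."""

    def __init__(self, ttl_hours: int = 96):
        self.ttl = ttl_hours * 3600
        self.store = {}   # session_id -> (stored_at, session_key)

    def put(self, session_id: bytes, session_key: bytes, now: float):
        self.store[session_id] = (now, session_key)

    def get(self, session_id: bytes, now: float):
        entry = self.store.get(session_id)
        if entry is None or now - entry[0] > self.ttl:
            return None   # unknown or expired: fall back to a full handshake
        return entry[1]

cache = SessionCache(ttl_hours=96)
sid, key = os.urandom(32), os.urandom(32)
t0 = time.time()

cache.put(sid, key, t0)
assert cache.get(sid, t0 + 3600) == key          # resumed an hour later
assert cache.get(sid, t0 + 97 * 3600) is None    # expired past 96 hours
assert cache.get(os.urandom(32), t0) is None     # unknown ID
```

<p>Because every server in the datacenter reads from the same store, a returning visitor can resume on any machine behind the same Anycast location.</p>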
    <div>
      <h3>Open Source</h3>
      <a href="#open-source">
        
      </a>
    </div>
    <p>CloudFlare developed a lot of code when building Keyless SSL and have contributed major portions of it back to the community:</p><p><b>Strict SSL:</b> this code allows upstream connections from NGINX to validate TLS connections, needed for validating the identity of the key server. <a href="http://mailman.nginx.org/pipermail/nginx-announce/2014.txt">This change</a> was merged into NGINX.</p><p><b>Session tickets:</b> we added support for session ticket in NGINX. <a href="http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004370.html">This change</a> was merged into NGINX.</p><p><b>CFSSL:</b> we recently open sourced the tool we use for our internal certificate authority. It is available on <a href="/introducing-cfssl/">GitHub</a>.</p><p><b>Kyoto Tycoon:</b> we are soon open sourcing our changes to Kyoto Tycoon, a high performance key value store we use extensively, to allow mutually authenticated replication.</p><p><b>Key Server:</b> The reference implementation of the Keyless SSL key server is now available on <a href="https://github.com/cloudflare/keyless">Github</a>.</p>
    <div>
      <h3>Why it matters</h3>
      <a href="#why-it-matters">
        
      </a>
    </div>
    <p>Keyless SSL is a big advancement allowing website owners to use a service like CloudFlare to make their website faster and more secure, while retaining control of their private keys. As we said in the <a href="/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/">previous post announcing it</a>, Sebastien was able to build the initial Keyless SSL prototype overnight. Making sure it was secure, fast, and could scale is what took us two years of engineering. Now, with persistent connections and advanced session resumption techniques, using Keyless SSL is not only safe, it’s blazing fast!</p> ]]></content:encoded>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Cryptography]]></category>
            <guid isPermaLink="false">2jSUItVOXSaexPMNtaDeDk</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Keyless SSL™: All the Benefits of CloudFlare Without Having to Turn Over Your Private SSL Keys]]></title>
            <link>https://blog.cloudflare.com/announcing-keyless-ssl-all-the-benefits-of-cloudflare-without-having-to-turn-over-your-private-ssl-keys/</link>
            <pubDate>Thu, 18 Sep 2014 13:00:56 GMT</pubDate>
            <description><![CDATA[ CloudFlare is an engineering-driven company. This is a story we're proud of because it embodies the essence of who we are: when faced with a problem, we found a novel solution.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>CloudFlare is an engineering-driven company. This is a story we're proud of because it embodies the essence of who we are: when faced with a problem, we found a novel solution. Technical details to follow but, until then, welcome to the no hardware world. (Update: The <a href="/keyless-ssl-the-nitty-gritty-technical-details/">post with the technical details</a> is now online.)</p>
    <div>
      <h3>Fall in San Francisco</h3>
      <a href="#fall-in-san-francisco">
        
      </a>
    </div>
    <p>The story begins on a Saturday morning, in the Fall of 2012, almost exactly two years ago. I got a call on my cell phone that woke me. It was a man who introduced himself as the Chief Information Security Officer (<a href="https://www.cloudflare.com/ciso/">CISO</a>) at one of the world's largest banks.</p><p>"I got your number from a reporter," he said. "We have an incident. Could you and some of your team be in New York Monday morning? We'd value your advice." We were a small startup. Of course we were going to drop everything and fly across the country to see if we could help.</p><p>I called John Roberts and Sri Rao, two members of CloudFlare's team. John had an air of calm about him and owned more khaki pants than any of the rest of us. Sri was a senior member of our technical operations team and could, already at that point, justifiably claim he'd essentially "seen it all" in the two years he'd spent keeping CloudFlare's network online.</p><p>Sunday night we packed into a plane to New York. En route I made Sri promise he wouldn't wear cargo shorts to the meeting with the bank executives the next day. And he didn't. Instead, we all showed up in ill-fitting suits like the out-of-place engineers that we were.</p>
    <div>
      <h3>Rock and the Hard Place</h3>
      <a href="#rock-and-the-hard-place">
        
      </a>
    </div>
    <p>At the meeting the bankers explained the rock and the hard place they were between. On one side they were under attack. As the New York Times and other publications have subsequently reported, in the Fall of 2012 allegedly <a href="http://www.nytimes.com/2013/01/09/technology/online-banking-attacks-were-work-of-iran-us-officials-say.html">Iranian hackers systematically launched DDoS attacks</a> that crippled major US <a href="https://www.cloudflare.com/financial-services/">financial institutions</a>.</p><p>The bankers related that the attacks, which were between 60 and 80 Gbps (far shy of the 500 Gbps+ attacks we regularly see today), were sufficient to cripple their on-premise network hardware solutions. The multiple banks that we visited that day told us the same story. Whether it was their <a href="https://www.cloudflare.com/learning/performance/what-is-load-balancing/">load balancer</a>, their <a href="https://www.cloudflare.com/learning/security/what-is-a-firewall/">firewall</a>, their router, or their switch, under attack, something had become saturated and was unable to keep up with the traffic. It didn't matter how clever the software on the device was; in every case they were dead at <a href="https://www.cloudflare.com/learning/ddos/layer-3-ddos-attacks/">Layer 3</a>.</p><p>If that was the rock, what was the hard place? The bankers all acknowledged what they needed was a cloud-based solution that could scale to meet the challenges they faced. Unfortunately, since they needed to support encrypted connections, that meant the cloud-based solution needed to terminate SSL connections. And there was the rub.</p>
    <div>
      <h3>The Key is the (SSL) Key</h3>
      <a href="#the-key-is-the-ssl-key">
        
      </a>
    </div>
    <p>An SSL key is the data that allows an organization to establish a secure connection with the customers that connect to it. It is also the data that lets an organization establish its identity. If you have an organization's private SSL key, you can authenticate as if you were that organization. You can spoof identity and intercept traffic.</p><p>If, say, a media organization loses an SSL key, it's a very bad day. If a financial institution loses one, it's a nightmare. In addition to the public embarrassment and loss of trust, in the United States, the bankers we met with that Fall day in 2012 told us, if an SSL key is lost it's a critical security event that must be reported to the Federal Reserve.</p><p>Other vendors have tried to deal with this through what several of the bankers we met with termed "security theater": they show you pictures of big, locked racks of servers with electronic combination locks.</p><p>We came away from that day of meetings in New York with one conclusion: the only way organizations with the highest standards of <a href="https://www.cloudflare.com/learning/ssl/what-is-ssl/">SSL security</a> could ever adopt the benefits of the cloud was if we never took possession of their SSL keys.</p><p>Sri, John, and I returned to San Francisco somewhat disheartened. I relayed what we'd learned to our engineering team. Everyone was bummed for a bit. Then, Sebastien Pahl, one of our engineers who had previously helped build DotCloud and Docker, said, "Do we really need to have physical access to the private key?"</p><p>That spawned a late evening in the office in front of several white boards. We'd speculated previously that there was a way to split session signing, the only part of the <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">SSL handshake</a> that requires the private key, from the rest of the process. Sebastien pulled up the documentation on his phone and was convinced that there was a way to do it. Over the course of the night, he convinced the rest of us.</p>
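    <p><i>The split described above can be sketched in a few lines of Python. The example below is purely illustrative — toy textbook-RSA parameters and a local <code>key_server_sign</code> function standing in for a networked key server; it is not CloudFlare's protocol — but it shows the core idea: the edge completes the handshake's one private-key operation by asking the key server, and never holds the private exponent itself.</i></p>

```python
# Toy sketch of the Keyless SSL split: the edge terminates the connection,
# but the single private-key operation is delegated to a remote key server.
# Textbook RSA with tiny primes, for illustration only (no padding, no network).

# --- Key server side: the only party that knows the private exponent D ---
P, Q = 61, 53
N = P * Q            # 3233, the public modulus
E = 17               # public exponent, known to everyone
D = 2753             # private exponent; never leaves the key server

def key_server_sign(digest: int) -> int:
    """Sign a handshake digest on behalf of the edge (raw RSA)."""
    return pow(digest, D, N)

# --- Edge side: knows only the public key (N, E) ---
def edge_handshake(digest: int) -> bool:
    signature = key_server_sign(digest)   # one round trip to the key server
    # Verification needs only the public key, so the edge (and the client)
    # can check the signature without ever seeing D.
    return pow(signature, E, N) == digest

print(edge_handshake(65))  # True: handshake completed, key never left home
```

    <p><i>In a real deployment the digest would be the TLS handshake's signing input and the call would cross the network over a mutually authenticated channel — which is why the round trip, and the persistent connections that amortize it, matter so much for performance.</i></p>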
    <div>
      <h3>Creative Engineers FTW</h3>
      <a href="#creative-engineers-ftw">
        
      </a>
    </div>
    <p>Sebastien is the kind of engineer that, when he's transfixed with a problem, can't sleep. It's a trait we hire for at CloudFlare. He showed up the next morning looking both exhausted and excited. "I've proven it's possible," he said. "It's crude. It won't scale. It probably has security vulnerabilities galore, but I've proved we can terminate SSL connections even if we don't have physical access to the private SSL key."</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67ye2noSXbS8Cjl30wK2MG/93f62328ce478c9c352994b027d2cba3/keyless-comic-v1.gif" />
            
            </figure><p>Tomorrow, we'll publish a full post on the nitty-gritty technical details of how what has come to be called Keyless SSL™ works. (Update: The <a href="/keyless-ssl-the-nitty-gritty-technical-details/">post with the technical details</a> is now online.) For now, I'll just tell you about what Sebastien had built. It was a dramatic demo. A simple agent ran on a Raspberry Pi. A web server, running on a remote server on CloudFlare's network, received HTTPS connections. When the Raspberry Pi was plugged in, the connections went through from a browser as they would normally. The lock appeared and the connection was secured, end-to-end. The minute the Raspberry Pi's power was disconnected, HTTPS access terminated.</p><p>Sebastien had proven that the solution to what the banks needed was possible: you could have SSL keys remote from the actual server terminating the connection. If that worked, there was no need to ever have limited on-premise network hardware again. Provide the functionality in the infinitely scalable environment of the cloud, but keep the keys on-premise so there's no risk they are ever misappropriated.</p><p>A prototype made in an evening is one thing; having something production-ready is another. Sebastien turned the project over to John Graham-Cumming, Piotr Sikora, and Nick Sullivan, three of the lead engineers on our team. They worked with the banks that had originally contacted us to build a system that worked in high-availability environments.</p><p>To make it work, we needed to hold connections open between CloudFlare's network and agents running on our customers' infrastructure. Moreover, we needed to share data about cryptographic sessions set up for a visitor between all the machines that could serve that visitor. Making it work was one thing; making it fast was another. And, today, Keyless SSL clients are experiencing 3x+ faster SSL termination globally using the service than they were when they were relying only on on-premise solutions.</p>
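    <p><i>The session-sharing piece can be pictured with a toy cache: a resumed session skips the private-key step entirely, so any edge machine that shares session state avoids a round trip to the key server. A minimal Python sketch — the class name and TTL are invented for illustration, not CloudFlare's implementation:</i></p>

```python
import time

class SharedSessionCache:
    """Toy shared cache of session state, keyed by session ID.

    Any edge machine that can reach this cache can resume a visitor's
    session without repeating the private-key step on the key server.
    """

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._store = {}  # session_id -> (master_secret, expiry)

    def put(self, session_id: bytes, master_secret: bytes) -> None:
        self._store[session_id] = (master_secret, time.monotonic() + self.ttl)

    def resume(self, session_id: bytes):
        """Return the stored secret if the session is still fresh, else None."""
        entry = self._store.get(session_id)
        if entry is None:
            return None
        secret, expiry = entry
        if time.monotonic() > expiry:
            del self._store[session_id]  # expired: force a full handshake
            return None
        return secret

cache = SharedSessionCache()
cache.put(b"session-abc", b"master-secret-material")
# A different edge machine sharing this cache resumes with no key-server trip:
assert cache.resume(b"session-abc") == b"master-secret-material"
assert cache.resume(b"unknown-id") is None
```

    <p><i>In production such a cache would itself have to be distributed, replicated, and protected; the sketch only shows why sharing session state makes resumption cheap.</i></p>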
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2u2xEAMVLaATb46eazdLOz/66d8589ef795eb71b2bbf4a0c0462b55/illustration-keyless-ssl-explained-01.png" />
            
            </figure><p>Tomorrow Nick Sullivan will spend time going through the details of how Keyless SSL works. For now know this: private clouds are an oxymoron. Keeping your network behind on-premise hardware you control is a recipe for disaster. Over time, the network edge needs the infinite scalability and elasticity that only a service like CloudFlare can provide. And, now, with Keyless SSL, anyone can get that flexibility without having to turn over their most guarded secrets: their private SSL keys.</p><p><i>Here's what people are saying about Keyless SSL:</i></p>
    <div>
      <h3>Security</h3>
      <a href="#security">
        
      </a>
    </div>
    <p>World-renowned security experts Jon Callas and Phil Zimmermann support CloudFlare's latest announcement, sharing: “One of the core principles of computer security is to limit access to cryptographic keys to as few parties as possible, ideally only the endpoints. Applications such as PGP, Silent Circle, and now Keyless SSL implement this principle and are correspondingly more secure.”</p><p>A spokesperson from NCC Group’s Cryptography Services practice commented: “We’ve seen how private keys can be stolen, and investing in techniques to limit their exposure makes the Internet a safer place. Our review of Keyless SSL indicates the keys themselves do not leave your infrastructure, and a secure channel with CloudFlare both protects the communication and reduces the attack surface for your key.”</p><p>“Because this system keeps your long-lived SSL private keys on-premise, it provides the same protection to those keys as conventional on-premise SSL solutions. This provides the security and performance benefits of managing SSL traffic in the cloud,” explained Jian Jiang, independent academic researcher at UC Berkeley.</p>
    <div>
      <h3>Enterprise</h3>
      <a href="#enterprise">
        
      </a>
    </div>
    <p>Davi Ottenheimer, Senior Director of Trust at EMC Corporation, believes Keyless SSL is a fundamental innovation in security. “Everyone should be increasingly aware and concerned about the risks of handing their private keys over to service providers. The trade-offs between control and services are being solved by innovation in key management. Keyless solutions, where customers retain control, clearly improve security while maintaining the best service offerings. As we move to a more interconnected world with more localized access to global providers, our trust has to be based on security controls that remain relevant within the latest advances of <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">content delivery networks</a>. Keeping control of your own private key, yet enabling a service provider to serve your customers with the same level of trust, is a real breakthrough in content delivery security.”</p><p>“Recent incidents like the APT exploit of <a href="https://www.venafi.com/blog/post/attack-on-trust-threat-bulletin-apt-operators-exploit-heartbleed">Heartbleed</a> to breach Community Health Systems and the <a href="https://www.venafi.com/blog/post/the-mask-attacks-on-trust-and-game-over/">Mask</a> operation show that attacks on keys and certificates that establish trust are on the rise. If security teams don’t protect their keys and certificates they undermine their critical threat protection and existing security controls,” said Kevin Bocek, vice president of security strategy &amp; threat intelligence at Venafi. “With our partner CloudFlare, Venafi supports the development of Keyless SSL technology to help further protect our Venafi Trust Protection Platform customers and secure their use of cloud services.”</p>
    <div>
      <h3>Financial</h3>
      <a href="#financial">
        
      </a>
    </div>
    <p>“At Coinbase, we take security very seriously. To be successful in the Bitcoin ecosystem we prioritize security highly,” said Ryan McGeehan, director of security at Coinbase. “Technology that improves the security of our critical infrastructure, like our SSL keys, is always welcomed.”</p><p>“As a private-cloud file-sync and share startup working with many financial organizations worldwide, we are always looking for the best security technologies that help keep important data safe, secure, and behind the firewall while maintaining the scale benefits of the cloud,” said Yuri Sagalov, co-founder and CEO of AeroFS. “Keyless SSL lets companies get the best of both worlds: Companies get to keep their private keys behind the corporate firewall where they belong, while still providing edge-level encryption for their customers accessing their services.”</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Keyless SSL]]></category>
            <category><![CDATA[SSL]]></category>
            <guid isPermaLink="false">4WNg6FkyjYxtmowLlg7evy</guid>
            <dc:creator>Matthew Prince</dc:creator>
        </item>
    </channel>
</rss>