
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 20:04:54 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How we brought HTTPS Everywhere to the cloud (part 1)]]></title>
            <link>https://blog.cloudflare.com/how-we-brought-https-everywhere-to-the-cloud-part-1/</link>
            <pubDate>Sat, 24 Sep 2016 15:46:26 GMT</pubDate>
            <description><![CDATA[ CloudFlare's mission is to make HTTPS accessible for all our customers. It provides security for their websites, improved ranking on search engines, better performance with HTTP/2, and access to browser features such as geolocation that are being deprecated for plaintext HTTP. ]]></description>
            <content:encoded><![CDATA[ <p>CloudFlare's mission is to make HTTPS accessible for all our customers. It provides security for their websites, <a href="https://webmasters.googleblog.com/2014/08/https-as-ranking-signal.html">improved ranking on search engines</a>, <a href="/introducing-http2/">better performance with HTTP/2</a>, and access to browser features such as geolocation that are being deprecated for plaintext HTTP. With <a href="https://www.cloudflare.com/ssl/">Universal SSL</a> or similar features, a simple button click can now enable encryption for a website.</p><p>Unfortunately, as described in a <a href="/fixing-the-mixed-content-problem-with-automatic-https-rewrites/">previous blog post</a>, this is only half of the problem. To make sure that a page is secure and can't be controlled or eavesdropped on by third parties, browsers must ensure that not only the page itself but also all its dependencies are loaded via secure channels. Page elements that don't fulfill this requirement are called mixed content, and they can result in the entire page being reported as insecure or even blocked entirely, thus breaking the page for the end user.</p>
    <div>
      <h2>What can we do about it?</h2>
      <a href="#what-can-we-do-about-it">
        
      </a>
    </div>
    <p>When we conceived the Automatic HTTPS Rewrites project, we aimed to automatically reduce the amount of mixed content on customers' web pages without breaking their websites and without any delay noticeable to end users receiving a page that is rewritten on the fly.</p><p>A naive way to do this would be to just rewrite <code>http://</code> links to <code>https://</code>, or let browsers do that with the <a href="https://www.w3.org/TR/upgrade-insecure-requests/"><code>Upgrade-Insecure-Requests</code></a> directive.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6YhZ5thm7SrTbeJ65wiYoT/7428a3a42d5a26fe18a57c2558012046/tumblr_inline_nyupi1faxM1qkjeen_500-1.gif" />
            
            </figure><p>Unfortunately, such an approach is very fragile and unsafe unless you're sure that</p><ol><li><p>Every single HTTP sub-resource is also available via HTTPS.</p></li><li><p>It's available at the exact same domain and path after the protocol upgrade (more often than you might think, that's <i>not</i> the case).</p></li></ol><p>If either condition is unmet, you end up rewriting resources to non-existent URLs and breaking important page dependencies.</p><p>Thus we decided to take a look at the existing solutions.</p>
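<p>To make this fragility concrete, here is a minimal Python sketch (for illustration only, not our actual implementation) of the naive approach of blindly upgrading every <code>http://</code> reference:</p>

```python
import re

def naive_upgrade(html: str) -> str:
    """Blindly rewrite every http:// reference to https://.

    This embodies the fragile assumption criticized above: that every
    resource is reachable over HTTPS at the exact same host and path.
    """
    return re.sub(r'\bhttp://', 'https://', html)

page = '<img src="http://img.example.org/logo.png">'
print(naive_upgrade(page))
# <img src="https://img.example.org/logo.png">
```

<p>If <code>img.example.org</code> doesn't actually serve that image over HTTPS at the same host and path, the rewritten page now references a resource that doesn't exist.</p>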
    <div>
      <h2>How are these problems solved already?</h2>
      <a href="#how-are-these-problems-solved-already">
        
      </a>
    </div>
    <p>Many security-aware people use the <a href="https://www.eff.org/https-everywhere">HTTPS Everywhere</a> browser extension to avoid these kinds of issues. HTTPS Everywhere ships with a well-maintained database from the <a href="https://www.eff.org/">Electronic Frontier Foundation</a> containing all sorts of mappings for popular websites, which safely rewrite HTTP versions of resources to HTTPS only when it can be done without breaking the page.</p><p>However, most users are either not aware of it or not able to use it at all, for example, on mobile browsers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4nhcIelWSXtuEWz07ilh04/054f2fc8ad7d51fea105f1778e6ccbe7/4542048705_25a394a2f3_b.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/generated/4542048705/in/photolist-7VnbQz-9sd2LW-4EEoZv-d2T6A7-5hKfKu-8UcLHh-pBjRDg-5gCYKG-8vS5Gw-8vP6yc-bj9pgX-qaSiZi-951EJW-75Xuvx-5pft8J-eyebR1-8dyjPV-r9csMz-991WwM-a3aW4T-3JAiSH-6fGqt7-cs2ud1-nEDWYQ-bLR6yz-4JKM5j-6KMths-4eWtLa-5iij6Z-bQSzaP-dKY18j-8SU3Vr-8nGQmE-bwPWoF-323VBR-FuKadJ-p8VD7-x9knmA-hJG7Bc-3KHP4m-8YmLDZ-6CmJme-ngT44v-7ThvBy-4m9A3n-7AGkE-ogJ97T-yCChfV-ok7E25-8Nkr9w">image</a> by <a href="https://www.flickr.com/photos/generated/">Jared Tarbell</a></p><p>So we decided to flip the model around. Instead of re-writing URLs in the browser, we would re-write them inside the CloudFlare reverse proxy. By taking advantage of the existing database on the server-side, website owners could turn it on and all their users would instantly benefit from HTTPS rewriting. The fact that it’s automatic is especially useful for websites with user-generated content where it's not trivial to find and fix all the cases of inserted insecure third-party content.</p><p>At our scale, we obviously couldn't use the existing JavaScript rewriter. The performance challenges for a browser extension which can find, match and cache rules lazily as a user opens websites, are very different from those of a CDN server that handles millions of requests per second. We usually don't get a chance to rewrite them before they hit the cache either, as many pages are dynamically generated on the origin server and go straight through us to the client.</p><p>That means, to take advantage of the database, we needed to learn how the existing implementation works and create our own in the form of a native library that could work without delays under our load. Let's do the same here.</p>
    <div>
      <h2>How does HTTPS Everywhere know what to rewrite?</h2>
      <a href="#how-does-https-everywhere-know-what-to-rewrite">
        
      </a>
    </div>
    <p>HTTPS Everywhere rulesets can be found in the <a href="https://github.com/EFForg/https-everywhere/tree/master/src/chrome/content/rules"><code>src/chrome/content/rules</code></a> folder of the <a href="https://github.com/EFForg/https-everywhere">official repository</a>. They are organized as XML files, each covering its own set of hosts (with a few exceptions). This allows users with basic technical skills to write and contribute missing rules to the database on their own.</p><p>Each ruleset is an XML file with the following structure:</p>
            <pre><code>&lt;ruleset name="example.org"&gt;
  &lt;!-- Target domains --&gt;
  &lt;target host="*.example.org" /&gt;
 
  &lt;!-- Exclusions --&gt;
  &lt;exclusion pattern="^http://example\.org/i-am-http-only" /&gt;
 
  &lt;!-- Rewrite rules --&gt;
  &lt;rule from="^http://(www\.)?example\.org/" to="https://$1example.org/" /&gt;
&lt;/ruleset&gt;</code></pre>
            <p>At the time of writing, the HTTPS Everywhere database consists of ~22K such rulesets covering ~113K domain wildcards with ~32K rewrite rules and exclusions.</p><p>For performance reasons, we can't keep all those ruleset XMLs in memory, walk their nodes, check each wildcard, perform replacements based on a specific string format and so on. All that work would introduce significant delays in page processing and increase memory consumption on our servers. That's why we had to perform some compile-time tricks for each type of node to ensure that rewriting is smooth and fast for any user from the very first request.</p><p>Let's walk through those nodes and see what can be done in each specific case.</p>
    <div>
      <h3>Target domains</h3>
      <a href="#target-domains">
        
      </a>
    </div>
    <p>First of all, we get the target elements, which describe the domain wildcards that the current ruleset potentially covers.</p>
            <pre><code>&lt;target host="*.example.org" /&gt;</code></pre>
            <p>If a wildcard is used, it can be <a href="https://www.eff.org/https-everywhere/rulesets#wildcard-targets">either left-side or right-side</a>.</p><p>A left-side wildcard like <code>*.example.org</code> covers any hostname that has example.org as a suffix, no matter how many subdomain levels it has.</p><p>A right-side wildcard like <code>example.*</code> covers only one level instead, so that hostnames with the same beginning but an unexpected extra domain level are not accidentally caught. For example, the Google ruleset, among others, uses the <code>google.*</code> wildcard, which should match <code>google.com</code>, <code>google.ru</code>, <code>google.es</code> etc., but not <code>google.mywebsite.com</code>.</p><p>Note that a single host can be covered by several different rulesets, as wildcards can overlap, so the rewriter must be given the entire database in order to find a correct replacement. Still, matching the hostname instantly reduces all ~22K rulesets to only 3-5, which we can deal with more easily.</p><p>Matching wildcards at runtime one-by-one is, of course, possible, but very inefficient with ~113K domain wildcards (and, as noted above, one domain can match several rulesets, so we can't even bail out early). We need to find a better way.</p>
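<p>The two wildcard semantics can be sketched as follows (a simplified Python illustration of the matching rules just described, not our Ragel-based matcher):</p>

```python
def matches_target(host: str, target: str) -> bool:
    """Check a hostname against a target wildcard.

    Simplifying assumptions for this sketch: a left-side wildcard
    ("*.example.org") matches any number of subdomain levels, while a
    right-side wildcard ("example.*") allows exactly one varying label.
    """
    if target.startswith('*.'):
        # Left-side: any host with the suffix ".example.org".
        return host.endswith(target[1:])
    if target.endswith('.*'):
        # Right-side: only one label may vary after the fixed prefix.
        prefix = target[:-1]  # e.g. "google."
        return host.startswith(prefix) and '.' not in host[len(prefix):]
    return host == target

assert matches_target('a.b.example.org', '*.example.org')
assert matches_target('google.es', 'google.*')
assert not matches_target('google.mywebsite.com', 'google.*')
```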
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/018PMe56SuXGqrCUnghAoJ/a864c686c92348249d570b30b35419f9/3901819627_c3908690a0_b.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/vige/3901819627/in/photolist-6WMR3c-qqw1zU-7Gsyt1-4mQ6Sr-7GxrjW-bZTMQs-6HAcEr-58J6-pJb9qT-55o5bP-4c2bxs-4MEcWm-6yf4xg-dkdJkY-crpQwG-br8o3Y-4tXRcD-a3DzL7-nAYdFT-729Vjb-d5qcf-a59ugi-AKFWW-d2g9e5-3LQJEe-fqMVts-762EoB-4Lreh9-57pKGy-wnqcdN-99jyGb-6oAMor-8U28ub-9bYp3-92DYLM-6x8aZg-4MEcLQ-7n2QqA-8pydBi-ocFj72-fAyhG7-7B9Qwt-xxknG-d3Tk63-axF8dU-o4ALKi-grY52F-9bXtY-8KRwXd-a2syrf">image</a> by <a href="https://www.flickr.com/photos/vige/">vige</a></p><p>We use <a href="http://www.colm.net/open-source/ragel/">Ragel</a> to build fast lexers in other pieces of our code. Ragel is a state machine compiler which takes grammars and actions described with its own syntax and generates source code in a given programming language as an output. We decided to use it here too and wrote a script that generates a Ragel grammar from our set of wildcards. In turn, Ragel converts it into C code of a state machine capable of going through characters of URLs, matching hosts and invoking custom handler on each found ruleset.</p><p>This leads us to another interesting problem. At the moment of writing among 113K domain wildcards we have 4.7K that have a left wildcard and less than 200 which have a right wildcard. 
Left wildcards are expensive in state machines (including regular expressions) as they cause <a href="https://en.wikipedia.org/wiki/Combinatorial_explosion">DFA space explosion</a> during compilation so Ragel got stuck for more than 10 minutes without giving any result - trying to analyze all the <code>*.</code> prefixes and merge all the possible states where they can go, resulting in a complex tree.</p><p>Instead, if we choose to look from the end of the host, we can significantly simplify the state tree (as only 200 wildcards need to be checked separately now instead of 4.7K), thus reducing compile time to less than 20 seconds.</p><p>Let's take an oversimplified example to understand the difference. Say, we have following target wildcards (3 left-wildcards against 1 right-wildcard and 1 simple host):</p>
            <pre><code>&lt;target host="*.google.com" /&gt;
&lt;target host="*.google.co.uk" /&gt;
&lt;target host="*.google.es" /&gt;
&lt;target host="google.*" /&gt;
&lt;target host="google.com" /&gt;</code></pre>
            <p>If we build a Ragel state machine directly from those:</p>
            <pre><code>%%{
    machine hosts;
 
    host_part = (alnum | [_\-])+;
 
    main := (
        any+ '.google.com' |
        any+ '.google.co.uk' |
        any+ '.google.es' |
        'google.' host_part |
        'google.com.ua'
    );
}%%</code></pre>
            <p>We will get the following state graph:</p>
            <figure>
            <a href="http://staging.blog.mrk.cfdata.org/content/images/2016/09/1.png">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1sykJ6ByovKV68tzpCF5zn/7fe8636ec9e22c152e6e8e83e007509a/1.png" />
            </a>
            </figure><p>You can see that the graph is already pretty complex: each starting character, even <code>g</code>, which is an explicit starting character of the <code>'google.'</code> and <code>'google.com.ua'</code> strings, still needs to simultaneously feed the <code>any+</code> matches. Even when you have already parsed the <code>google.</code> part of the host name, it can still correctly match any of the given wildcards, whether as <code>google.google.com</code>, <code>google.google.co.uk</code>, <code>google.google.es</code>, <code>google.tech</code> or <code>google.com.ua</code>. This already blows up the complexity of the state machine, and we only took an oversimplified example with three left wildcards here.</p><p>However, if we simply reverse each rule in order to feed the string starting from the end:</p>
            <pre><code>%%{
    machine hosts;
 
    host_part = (alnum | [_\-])+;
 
    main := (
        'moc.elgoog.' |
        'ku.oc.elgoog.' |
        'se.elgoog.' |
        host_part '.elgoog' |
        'au.moc.elgoog'
    );
}%%</code></pre>
            <p>we get a much simpler graph and, consequently, significantly reduced build and matching times:</p>
            <figure>
            <a href="http://staging.blog.mrk.cfdata.org/content/images/2016/09/2.png">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6NfYuIYypMQPvlSOkk40so/e44c0007f0c2d621e5d755cb2e3ad099/2.png" />
            </a>
            </figure><p>So now, all we need to do is go through the host part of the URL, stop on the <code>/</code> right after it and start the machine backwards from that point. There is no need to waste time on in-memory string reversal, as Ragel provides the <code>getkey</code> instruction for custom data access expressions, which we can use to access characters in reverse order after we match the ending slash.</p><p>Here is an animation of the full process:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5dcX7Zc8OH98Y9SDAIAu5E/a2cb285821563467d2ae59f365402099/third.gif" />
            
            </figure><p>After we've matched the host name and found potentially applicable rulesets, we need to ensure that we're not rewriting URLs which are not available via HTTPS.</p>
    <div>
      <h3>Exclusions</h3>
      <a href="#exclusions">
        
      </a>
    </div>
    <p>Exclusion elements serve exactly this purpose.</p>
            <pre><code>&lt;exclusion pattern="^http://(www\.)?google\.com/analytics/" /&gt;
&lt;exclusion pattern="^http://(www\.)?google\.com/imgres/" /&gt;</code></pre>
            <p>The rewriter needs to test against all the exclusion patterns before applying any actual rules. Otherwise, paths that have issues or can't be served over HTTPS would be incorrectly rewritten, potentially breaking the website.</p><p>We care neither about the matched groups nor even about which particular regular expression matched, so as an extra optimization, instead of going through them one-by-one, we merge all the exclusion patterns in the ruleset into one regular expression that can be internally optimized by a regexp engine.</p><p>For example, for the exclusions above we can create the following regular expression, whose common parts can be merged internally by a regexp engine:</p>
            <pre><code>(^http://(www\.)?google\.com/analytics/)|(^http://(www\.)?google\.com/imgres/)</code></pre>
            <p>After that, in our action we just need to call <code>pcre_exec</code> without a match data destination – we don't care about the matched groups, only about the completion status. If a URL matches the merged regular expression, we bail out of this action, as the following rewrites shouldn't be applied; Ragel will then automatically call the next matched action (another ruleset) on its own until one succeeds.</p><p>Finally, once we have both matched the host name and ensured that our URL is not covered by any exclusion patterns, we can move on to the actual rewrite rules.</p>
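<p>As an illustration, the merging trick can be sketched like this (using Python's <code>re</code> module rather than PCRE; the groups are made non-capturing since we never inspect them):</p>

```python
import re

# Exclusion patterns from the ruleset excerpt above.
exclusions = [
    r'^http://(?:www\.)?google\.com/analytics/',
    r'^http://(?:www\.)?google\.com/imgres/',
]

# Merge all exclusions into a single alternation: we only need to know
# whether *any* pattern matches, never which one or what it captured.
merged = re.compile('|'.join('(?:%s)' % p for p in exclusions))

def is_excluded(url: str) -> bool:
    return merged.search(url) is not None

assert is_excluded('http://www.google.com/analytics/')
assert not is_excluded('http://www.google.com/maps/')
```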
    <div>
      <h3>Rewrite rules</h3>
      <a href="#rewrite-rules">
        
      </a>
    </div>
    <p>These rules are presented as JavaScript regular expressions and replacement patterns. The rewriter matches the URL against each of those regular expressions as soon as a host matches and a URL is not an exclusion.</p>
            <pre><code>&lt;rule from="^http://(\w{2})\.wikipedia\.org/wiki/" to="https://secure.wikimedia.org/wikipedia/$1/wiki/" /&gt;</code></pre>
            <p>As soon as a match is found, the replacement is performed and the search can be stopped. Note that while exclusions cover dangerous replacements, it's entirely possible and valid for a URL to match none of the actual rules - in that case it should just be left intact.</p><p>After the previous steps we are usually left with only a couple of rules, so unlike the case with exclusions, we don't apply any clever merging techniques to them. It turned out to be easier to go through them one-by-one than to create a regexp engine specifically optimized for the case of multi-regexp replacements.</p><p>However, we don't want to waste time on regexp analysis and compilation on our edge servers. That would require extra time during initialization and memory for carrying unnecessary text sources of regular expressions around. PCRE allows regular expressions to be precompiled into its own format using <code>pcre_compile</code>. We then gather all these compiled regular expressions into one binary file and link it using <code>ld --format=binary</code> - a neat option that tells the linker to attach any given binary file as a named data resource available to the application.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4oBelLGyvi8KGq7mQmTvFK/e5356dd36c37b29aa1fefccca167e43a/15748968831_9d97f7167f_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/sidelong/15748968831/in/photolist-pZFzKn-nCgsND-5kmFQB-bm5Ny4-3qR9NP-NfYDG-e7AwCH-eqqc2o-e3DgoN-6ZcGVn-pkmTXn-3oT9Nj-8y4HB7-H93FUT-6pSxvu-aukZ2w-2yo3n-2fTgn7-dXH6No-nBzysU-nsnMR1-dHoz6o-zXDcxE-9G5ydk-HJPTCt-qoQnCi-zmKYcs-4vwvyV-ygPe2Q-rUH8dy-dSbR9U-sc8NEN-htr2XH-uDEHXF-ehnr4K-xDLoGG-gMbuTr-bygmuu-r26oQx-bDJmuS-7WHeZ7-o5V5nL-bn3PNf-9Fr7nQ-dbbuB6-4sGsph-77HwTg-gbA7WS-27jJRy-7xGShs">image</a> by <a href="https://www.flickr.com/photos/sidelong/">DaveBleasdale</a></p><p>The second part of the rule is the replacement pattern which uses the simplest feature of JavaScript regex replacement - number-based groups and has the form of <code>https://www.google.com.$1/</code> which means that the resulting string should be concatenation of <code>"https://www.google.com."</code> with the matched group at position <code>1</code>, and a <code>"/"</code>.</p><p>Once again, we don't want to waste time performing repetitive analysis looking for dollar signs and converting string indexes to numeric representation at runtime. Instead, it's more efficient to split this pattern at compile-time into <code>{ "https://www.google.com.", "/" }</code> static substrings plus an array of indexes which need to be inserted in between - in our case just <code>{ 1 }</code>. Then, at runtime, we simply build a string going through both arrays one-by-one and concatenating strings with found matches.</p><p>Finally, after such string is built, it's inserted in place of the previous attribute value and sent to the client.</p>
    <div>
      <h3>Wait, but what about testing?</h3>
      <a href="#wait-but-what-about-testing">
        
      </a>
    </div>
    <p>Glad you asked.</p><p>The HTTPS Everywhere extension uses an automated checker that validates rewritten URLs on any change to a ruleset. To make that possible, rulesets are required to contain special test elements that cover all the rewrite rules.</p>
            <pre><code>&lt;test url="http://maps.google.com/" /&gt;</code></pre>
            <p>What we need to do on our side is collect those test URLs, combine them with our own auto-generated tests from wildcards, and invoke both the HTTPS Everywhere built-in JavaScript rewriter and our own side-by-side to ensure that we get the same results — URLs that should be left intact are left intact by our implementation, and URLs that are rewritten are rewritten identically.</p>
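<p>Conceptually, the side-by-side check boils down to a loop like this (a Python sketch; <code>reference_rewrite</code> and <code>our_rewrite</code> are hypothetical stand-ins for the JavaScript rewriter and our native library):</p>

```python
def check(test_urls, reference_rewrite, our_rewrite):
    """Run every test URL through both rewriters and collect mismatches.

    Each rewriter returns the rewritten URL, or the input unchanged if
    no rule applies - so URLs left intact by one must stay intact in
    the other, and rewritten URLs must come out identical.
    """
    failures = []
    for url in test_urls:
        expected = reference_rewrite(url)
        actual = our_rewrite(url)
        if expected != actual:
            failures.append((url, expected, actual))
    return failures
```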
    <div>
      <h2>Can we fix even more mixed content?</h2>
      <a href="#can-we-fix-even-more-mixed-content">
        
      </a>
    </div>
    <p>After all this was done and tested, we decided to look around for other potential sources of guaranteed rewrites to extend our database.</p><p>One such source is the <a href="https://hstspreload.appspot.com/">HSTS preload list</a> maintained by Google and used by all the major browsers. It allows website owners who want to ensure that their website is never loaded via <code>http://</code> to submit their hosts (optionally together with subdomains), thereby opting in to auto-rewriting of any <code>http://</code> references to <code>https://</code> by a modern browser before the request even hits the origin.</p><p>This means the origin guarantees that the HTTPS version will always be available and will serve exactly the same content as HTTP - otherwise any resources referenced from it will simply break, as the browser won't attempt to fall back to HTTP once the domain is in the list. A perfect match for another ruleset!</p><p>As we already have a working solution and this list involves no complexities around regular expressions, we can download the JSON version of it <a href="https://chromium.googlesource.com/chromium/src/net/+/master/http/transport_security_state_static.json">directly from the Chromium source</a> and, as part of the build process, convert it to the same XML ruleset format with wildcards and exclusions that our system already understands and handles.</p><p>This way, both databases are merged and work together, rewriting even more URLs on customer websites without any major changes to the code.</p>
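<p>That conversion step can be sketched as follows (in Python; the entry fields <code>name</code>, <code>mode</code> and <code>include_subdomains</code> follow the shape of Chromium's <code>transport_security_state_static.json</code>):</p>

```python
import json
import xml.etree.ElementTree as ET

# A tiny stand-in for the downloaded preload list.
hsts_json = '''{"entries": [
    {"name": "example.com", "mode": "force-https", "include_subdomains": true}
]}'''

rulesets = []
for entry in json.loads(hsts_json)['entries']:
    if entry.get('mode') != 'force-https':
        continue  # pinning-only entries don't guarantee HTTPS availability
    ruleset = ET.Element('ruleset', name=entry['name'])
    ET.SubElement(ruleset, 'target', host=entry['name'])
    if entry.get('include_subdomains'):
        ET.SubElement(ruleset, 'target', host='*.' + entry['name'])
    # The host has already been matched, so a plain scheme upgrade is safe.
    ET.SubElement(ruleset, 'rule', {'from': '^http://', 'to': 'https://'})
    rulesets.append(ET.tostring(ruleset, encoding='unicode'))

print(rulesets[0])
```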
    <div>
      <h2>That was quite a trip</h2>
      <a href="#that-was-quite-a-trip">
        
      </a>
    </div>
    <p>It was... but it's not really the end of the story. You see, in order to provide safe and fast rewrites for everyone, and after analyzing the alternatives, we decided to write a new streaming HTML5 parser that became the core of this feature. We intend to use it for even more tasks in the future to ensure that we can improve the security and performance of our customers' websites in even more ways.</p><p>However, it deserves a separate blog post, so stay tuned.</p><p>And remember - if you're into web performance, security, or just excited about the possibility of working on features that must not break millions of pages every second - we're <a href="https://www.cloudflare.com/join-our-team/">hiring</a>!</p><p>P.S. We are incredibly grateful to the folks at the EFF who created the HTTPS Everywhere extension and worked with us on this project.</p>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Mixed Content Errors]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Crypto Week]]></category>
            <guid isPermaLink="false">3Zps5SYwGYawkTGZfKjlfn</guid>
            <dc:creator>Ingvar Stepanyan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Fixing the mixed content problem with Automatic HTTPS Rewrites]]></title>
            <link>https://blog.cloudflare.com/fixing-the-mixed-content-problem-with-automatic-https-rewrites/</link>
            <pubDate>Thu, 22 Sep 2016 14:34:37 GMT</pubDate>
            <description><![CDATA[ It used to be difficult, expensive, and slow to set up an HTTPS capable web site. Then along came CloudFlare’s Universal SSL that made switching from http:// to https:// as easy as clicking a button.  ]]></description>
            <content:encoded><![CDATA[ <p>CloudFlare aims to put an end to the unencrypted Internet. But the web has a chicken-and-egg problem moving to HTTPS.</p><p>Long ago it was difficult, expensive, and slow to set up an HTTPS-capable web site. Then along came services like CloudFlare’s <a href="/introducing-universal-ssl/">Universal SSL</a> that made switching from http:// to https:// as easy as clicking a button. With one click a site was served over HTTPS with a freshly minted, <a href="https://www.cloudflare.com/application-services/products/ssl/">free SSL certificate</a>.</p><p>Boom.</p><p>Suddenly, the website is available over HTTPS, and, even better, the website gets faster because it can take advantage of the latest web protocol, <a href="https://www.cloudflare.com/http2/">HTTP/2</a>.</p><p>Unfortunately, the story doesn’t end there. Many otherwise secure sites suffer from the problem of mixed content. And mixed content means the green padlock icon will not be displayed for an https:// site because, in fact, it’s not truly secure.</p><p>Here’s the problem: if an https:// website includes any content from a site (even its own) served over http://, the green padlock can’t be displayed. That’s because resources like images, JavaScript, audio, video etc. included over http:// open up a security hole into the secure web site. A backdoor to trouble.</p><p>Web browsers have known this was a problem for a long, long time. Way back in 1997, Internet Explorer 3.0.2 warned users of sites with mixed content with the following dialog box.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fFSyQa1HkTERZMaHgzcd7/17cf2d14f450feaa32164ac0ac3c6a7b/IC310968.gif" />
            
            </figure><p>Today, Google Chrome shows a circled i on any https:// page that has insecure content.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UbRZJMuwMEpCkstNVdxAh/5938aa65b3bea23c830841646df74ad9/Screen-Shot-2016-09-22-at-11.22.08.png" />
            
            </figure><p>And Firefox shows a padlock with a warning symbol. To get a green padlock from either of these browsers requires every single subresource (resource loaded by a page) to be served over HTTPS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/567KKnhwZN2f9MjwnOXzaE/9c7eb09e29684c2a6f0b9684eff925f2/Screen-Shot-2016-09-22-at-11.23.45.png" />
            
            </figure><p>If you had clicked Yes back in 1997, Internet Explorer would have ignored the dangers of mixed content and gone ahead and loaded subresources using plaintext HTTP. Clicking No prevented them from being loaded (often resulting in a broken but secure web page).</p>
    <div>
      <h3>Transitioning to fully secure content</h3>
      <a href="#transitioning-to-fully-secure-content">
        
      </a>
    </div>
    <p>It's tempting, but naive, to think that the solution to mixed content is easy: “Simply load everything using https:// and just fix your website”. Unfortunately, the smörgåsbord of content loaded into modern websites from first-party and third-party web sites makes it very hard to ‘simply’ make that change.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5z1ldg0RSCpvmwSsUEr3At/fb802e6f873ddeec6da07219cb2081cd/14127169401_54ca5e9c1f_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/jeepersmedia/14127169401/in/photolist-nwnrhe-nwFTKm-nwpaXy-nd8jZZ-nwnqMB-6bsjcC-nwFSD3-aQ5Zk4-c4uV99-6itRSF-6iy17Y-8iDUTL-jb2wag-MvpRU-B61f-B61b-7GcrpR-jCxYo4-bTkuiD-2rLaU-a7VsfV-eyNEd-dUEjYu-4iVNkY-a3Gcnb-nJhD2H-nHasXC-5L7FZy-2i8iQ9-qN5RF-6HHCqb-6HDxai-6HDxnK-6HHCvU-5T9v3L-6ytdUs-6HHCyG-2WZgpg-5XZ5BD-b4SFeF-hNBK9K-8JDyeY-pbysVv-5dkLUi-6iy19o-6HDxiV-4o63bZ-kiN76q-ik1o7a-qkcqbe">image</a> by <a href="https://www.flickr.com/photos/jeepersmedia/">Mike Mozart</a></p><p>Wired <a href="https://www.wired.com/2016/09/now-encrypting-wired-com/">documented</a> their transition to https:// in a series of blog posts that shows just how hard it can be to switch everything to HTTPS. They started in <a href="https://www.wired.com/2016/04/wired-launching-https-security-upgrade/">April</a> and spent 5 months on the process (after having already prepped for months just to get to https:// on their main web site). In May they wrote about a <a href="https://www.wired.com/2016/05/wired-first-big-https-rollout-snag/">snag</a>:</p><p><i>"[…] one of the biggest challenges of moving to HTTPS is preparing all of our content to be delivered over secure connections. If a page is loaded over HTTPS, all other assets (like images and Javascript files) must also be loaded over HTTPS. We are seeing a high volume of reports of these “mixed content” issues, or events in which an insecure, HTTP asset is loaded in the context of a secure, HTTPS page. 
To do our rollout right, we need to ensure that we have fewer mixed content issues—that we are delivering as much of WIRED.com’s content as securely possible.”</i></p><p>In 2014, the New York Times identified mixed content as a <a href="http://open.blogs.nytimes.com/2014/11/13/embracing-https/">major hurdle</a> to going secure:</p><p><i>"To successfully move to HTTPS, all requests to page assets need to be made over a secure channel. It’s a daunting challenge, and there are a lot of moving parts. We have to consider resources that are currently being loaded from insecure domains — everything from JavaScript to advertisement assets.”</i></p><p>And the W3C <a href="https://www.w3.org/TR/upgrade-insecure-requests/">recognized</a> this as a huge problem saying: <i>“Most notably, mixed content checking has the potential to cause real headache for administrators tasked with moving substantial amounts of legacy content onto HTTPS. In particular, going through old content and rewriting resource URLs manually is a huge undertaking.”</i> And cited the BBC’s <a href="http://www.bbc.co.uk/blogs/internet/entries/f7126d19-2afa-3231-9c4e-0f7198c468ab">huge archive</a> as a difficult example.</p><p>But it’s not just media sites that have a problem with mixed content. Many CMS users find it difficult or impossible to update all the links that their CMS generates as they may be buried in configuration files, source code or databases. In addition, sites that need to deal with user-generated content also face a problem with http:// URIs.</p>
    <div>
      <h3>The Dangers of Mixed Content</h3>
      <a href="#the-dangers-of-mixed-content">
        
      </a>
    </div>
    <p>Mixed content comes in two flavors: active and passive. Modern web browsers handle the dangers of these two types differently: active mixed content (the most dangerous) is blocked outright, while passive mixed content is allowed through but triggers a warning.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1CleAMTHGwVEkjUZX07Dko/dc8d461778fb08337f10c53e00a7ee4b/6714200883_2ba8167533_b.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/99mph/6714200883/in/photolist-bej31c-szQAb-i2swr-fnz3A-avkaMz-SB8LC-4kKK2r-6UFtFq-5EhUFH-i2sBe-dHEMmA-A4cie-aC6Wtc-4ZRUkX-srjkGs-5raEpY-eosHU-Ee7Ye-6vWtxh-6Pd2Nq-5Yz5u2-nU1Pfn-nBPiU4-N6B99-7WL2J9-FEkU5X-dU1PPb-JbHZad-aBEqKL-6w5v1r-65Nths-6DbhPs-nsfgLN-67jbBc-nAxzxi-7krEou-4GxJDe-nUsvgg-9kk75E-8AAsi-jJpNkj-4a5Znf-NQtE-d5xmAL-qiBCh-8cM-qXdkTc-9aLMpU-dWVoe-4A1jyr">image</a> by <a href="https://www.flickr.com/photos/99mph/">Ben Tilley</a></p><p>Active content is anything that can modify the DOM (the web page itself). Resources included via the <code>&lt;script&gt;</code>, <code>&lt;link&gt;</code>, <code>&lt;iframe&gt;</code> or <code>&lt;object&gt;</code> tags, CSS values that use <code>url</code>, and anything requested using <code>XMLHttpRequest</code> are capable of modifying a page, reading cookies and accessing user credentials.</p><p>Passive content is anything else: images, audio and video that are written into the page but that cannot themselves access the page.</p><p>Active content is a real threat because if an attacker manages to intercept the request for an http:// URI, they can replace the content with their own. This is not a theoretical concern. In 2015, GitHub was attacked by a system dubbed the <a href="https://citizenlab.org/2015/04/chinas-great-cannon/">Great Cannon</a> that intercepted requests for common JavaScript files over HTTP and replaced them with a JavaScript attack script. The Great Cannon weaponized innocent users’ machines by intercepting TCP connections and exploiting the inherent vulnerability in active content loaded from http:// URIs.</p><p>Passive content is a different kind of threat: because requests for passive content are sent in the clear, an eavesdropper can monitor the requests and extract information.
For example, a well-positioned eavesdropper could monitor cookies, web pages visited and potentially authentication information.</p><p>The <a href="http://codebutler.com/firesheep/">Firesheep</a> Firefox add-on can be used to monitor a local network (for example, in a <a href="https://www.cloudflare.com/learning/access-management/coffee-shop-networking/">coffee shop</a>) for requests sent over HTTP and automatically steal cookies, allowing a user to hijack someone’s identity with a single click.</p><p>Today, modern browsers block active content that's loaded insecurely, but allow passive content through. Nevertheless, transitioning to all https:// is the only way to address all the security concerns of mixed content.</p>
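<p>The active/passive split can be made concrete with a small sketch. Below is a minimal, hypothetical scanner using Python's standard <code>html.parser</code> that sorts a page's insecure subresources into the two buckets. It is a simplification: real browsers also classify CSS <code>url()</code> values, <code>XMLHttpRequest</code> targets and more.</p>

```python
from html.parser import HTMLParser

# Tags whose insecure loads browsers treat as active (blocked) versus
# passive (allowed with a warning). Simplified mapping for illustration.
ACTIVE = {"script": "src", "link": "href", "iframe": "src", "object": "data"}
PASSIVE = {"img": "src", "audio": "src", "video": "src"}

class MixedContentScanner(HTMLParser):
    """Collect http:// subresource URLs found on an (assumed HTTPS) page."""

    def __init__(self):
        super().__init__()
        self.active, self.passive = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for table, bucket in ((ACTIVE, self.active), (PASSIVE, self.passive)):
            url = attrs.get(table.get(tag, ""), "") or ""
            if url.startswith("http://"):
                bucket.append(url)

scanner = MixedContentScanner()
scanner.feed('<script src="http://example.com/app.js"></script>'
             '<img src="http://example.com/logo.png">')
print(scanner.active)   # insecure active content: a browser would block this
print(scanner.passive)  # insecure passive content: warning only
```

<p>Running this against a page fragment shows the script landing in the blocked bucket and the image in the warn-only bucket, mirroring the browser behavior described above.</p>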
    <div>
      <h3>Fixing Mixed Content Automatically</h3>
      <a href="#fixing-mixed-content-automatically">
        
      </a>
    </div>
    <p>We’ve wanted to help fix mixed content properly for a long time, as our goal is a completely encrypted web. And, like other CloudFlare services, we wanted to make this a ‘one click’ experience.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/28Wmhnv7H20f0tjIPihGlt/b4ce58dc1aa7db8ab248029a3cc1a2fd/1078317132_0e96301aef_b--1-.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/pinkmoose/1078317132/in/photolist-2DhE7G-pvcxy5-k8yyU-dGahX9-7cRyRV-byeo7f-cNka9m-5VqLgk-byenTE-dG4STe-dGafYU-dG4RnT-tAGvqC-tAGA6u-uxhxfW-4Wrx7L-dGahZQ-dGafW9-dG4QiZ-dGagHY-4K5jAk-dwrgVQ-dGafTE-xyEYA-dG4SYP-9LEfKd-7cRy3F-dGainJ-dGahAN-4LPBp9-bnWEe3-bNknmF-pTFtg2-dG4Ra8-6hZDxR-AobuPS-bM92up-dGafDC-yuJw-q8ZsgV-4GGuAM-4Twy1U-dGaf9s-dG4QfX-52rQZg-dpK5hj-dG4RSz-dpJU8v-4EJ5JP-5pvYKT">image</a> by <a href="https://www.flickr.com/photos/pinkmoose/">Anthony Easton</a></p><p>We considered a number of approaches:</p><p><i>Automatically insert the upgrade-insecure-requests CSP directive</i></p><p>The <a href="https://www.w3.org/TR/upgrade-insecure-requests/">upgrade-insecure-requests</a> directive can be added in a <a href="https://www.w3.org/TR/CSP/">Content Security Policy</a> header like this:</p>
            <pre><code>Content-Security-Policy: upgrade-insecure-requests</code></pre>
            <p>which instructs the browser to automatically upgrade any HTTP request to HTTPS. This can be useful if the website owner knows that every subresource is available over HTTPS. The website will not have to actually change http:// URIs embedded in the website to https://; the browser will take care of that automatically.</p><p>Unfortunately, there is a large downside to upgrade-insecure-requests. Since the browser blindly upgrades every URI to https:// regardless of whether the resulting URI will actually work, pages can be <a href="https://www.w3.org/TR/upgrade-insecure-requests/#example-failed">broken</a>.</p><p><i>Modify all links to use https://</i></p><p>Since not all browsers used by visitors to CloudFlare web sites support upgrade-insecure-requests, we considered upgrading all http:// URIs to https:// as pages pass through our service. Since we are able to parse and modify web pages in real time, we could have created an ‘upgrade-insecure-requests’ service that did not rely on browser support.</p><p>Unfortunately, that still suffers from the same problem of broken links when an http:// URI is transformed to https:// but the resource can’t actually be loaded using HTTPS.</p><p><i>Modify links that point to other CloudFlare sites</i></p><p>Since CloudFlare gives all our 4 million customers free <a href="/introducing-universal-ssl/">Universal SSL</a> and we cover a large percentage of web traffic, we considered just upgrading http:// to https:// for URIs that we know (because they use our service) will work.</p><p>This solves part of the problem but isn’t a good solution for the general problem of upgrading from HTTP to HTTPS.</p><p><i>Create a system that rewrites known HTTPS-capable URIs</i></p><p>Finally, we settled upon doing something smart: upgrade a URI from http:// to https:// if we know that the resource can be served using HTTPS.
To figure out which links are upgradable, we turned to the EFF’s excellent <a href="https://www.eff.org/https-Everywhere">HTTPS Everywhere</a> extension and Google Chrome’s <a href="https://github.com/chromium/hstspreload.appspot.com">HSTS preload</a> list to augment our knowledge of CloudFlare sites that have SSL enabled.</p><p>We are very grateful that the EFF graciously agreed to help us with this project.</p><p>The HTTPS Everywhere ruleset goes far beyond just switching http:// to https://: it contains rules (and exclusions) that allow it (and us) to target very specific URIs. For example, here’s an actual HTTPS Everywhere rule for gstatic.com:</p>
            <pre><code>&lt;rule from="^http://(csi|encrypted-tbn\d|fonts|g0|[\w-]+\.metric|ssl|t\d)\.gstatic\.com/" to="https://$1.gstatic.com/"/&gt;</code></pre>
            <p>It uses a regular expression to identify specific subdomains of gstatic.com that can safely be upgraded to use HTTPS. The complete set of rules can be browsed <a href="https://www.eff.org/https-everywhere/atlas">here</a>.</p><p>Because we need to upgrade a huge number of URIs embedded in web pages (we estimate around 5 million per second) we benchmarked existing HTML parsers (including our own) and decided to write a new one for this type of rewriting task. We’ll write more about its design, testing and performance in a future post.</p>
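<p>A rule like the one above can be applied mechanically. Here is a hypothetical Python sketch of how a from/to rule pair upgrades matching URIs, using the gstatic.com pattern from the rule shown above (note that the ruleset’s <code>$1</code> backreference becomes <code>\1</code> in Python’s <code>re</code> syntax; <code>apply_rule</code> is an illustrative helper, not CloudFlare’s implementation):</p>

```python
import re

# from/to pair taken from the gstatic.com rule shown above
RULE_FROM = r"^http://(csi|encrypted-tbn\d|fonts|g0|[\w-]+\.metric|ssl|t\d)\.gstatic\.com/"
RULE_TO = r"https://\1.gstatic.com/"   # $1 in the ruleset -> \1 in Python

def apply_rule(uri: str) -> str:
    # Upgrade the URI only if the rule's pattern matches; otherwise
    # leave it untouched so nothing breaks.
    upgraded, n = re.subn(RULE_FROM, RULE_TO, uri)
    return upgraded if n else uri

print(apply_rule("http://fonts.gstatic.com/s/roboto.woff2"))
# → https://fonts.gstatic.com/s/roboto.woff2
print(apply_rule("http://example.com/logo.png"))  # no rule matches: unchanged
```

<p>Only URIs the rule explicitly covers are rewritten; everything else passes through unchanged, which is exactly what avoids the broken-link problem of blind upgrading.</p>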
    <div>
      <h3>Automatic HTTPS Rewrites</h3>
      <a href="#automatic-https-rewrites">
        
      </a>
    </div>
    <p>Automatic HTTPS Rewrites are now available in the customer dashboard for all CloudFlare customers. Today, this feature is disabled by default and can be enabled in ‘one click’:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7DZLdLkvnV1oot1cCmSTR/8eddcb3deaaeb16f0bead6cea6817a52/Screen-Shot-2016-09-22-at-11.06.05.png" />
            
            </figure><p>We will be monitoring the performance and effectiveness of this feature and enable it by default for Free and Pro customers later in the year. We also plan to use the Content Security Policy reporting features to give customers an automatic view of which URIs remain to be upgraded so that their transition to all https:// is made as simple as possible: sometimes just finding which URIs need to be changed can be very hard, as Wired <a href="https://www.wired.com/2016/05/wired-first-big-https-rollout-snag/">found out</a>.</p><p>We’d love to hear how this feature works out for you.</p> ]]></content:encoded>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Automatic HTTPS]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Mixed Content Errors]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Crypto Week]]></category>
            <guid isPermaLink="false">7pVhoHfZHUSP7R5waHgPNy</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Encryption Week]]></title>
            <link>https://blog.cloudflare.com/encryption-week/</link>
            <pubDate>Tue, 20 Sep 2016 13:04:33 GMT</pubDate>
            <description><![CDATA[ Since CloudFlare’s inception, we have worked tirelessly to make encryption as simple and as accessible as possible. Over the last two years, we’ve made CloudFlare the easiest way to enable encryption for web properties and internet services.  ]]></description>
            <content:encoded><![CDATA[ <p><i>Note: Since this was published we have renamed Encryption Week to Crypto Week. You can find this post, and news from Encryption and SSL weeks </i><a href="/tag/crypto-week"><i>here</i></a><i>.</i></p><p>Since CloudFlare’s inception, we have worked tirelessly to make encryption as simple and as accessible as possible. Over the last two years, we’ve made CloudFlare the easiest way to enable encryption for web properties and internet services. From the launch of <a href="/introducing-universal-ssl/">Universal SSL</a>, which gives HTTPS to millions of sites for free, to the <a href="/cloudflare-ca-encryption-origin/">Origin CA</a>, which helps customers encrypt their origin servers, to the <a href="/sha-1-deprecation-no-browser-left-behind/">“No Browser Left Behind” initiative</a>, which ensures that the encrypted Internet is available to everyone, CloudFlare has pushed to make Internet encryption better and more widespread.</p><p>This week we are introducing three features that will dramatically increase both the quality and the quantity of encryption on the Internet. We are happy to introduce <a href="/introducing-tls-1-3/"><b>TLS 1.3</b></a>, <a href="/fixing-the-mixed-content-problem-with-automatic-https-rewrites/"><b>Automatic HTTPS Rewrites</b></a>, and <a href="/opportunistic-encryption-bringing-http-2-to-the-unencrypted-web/"><b>Opportunistic Encryption</b></a> throughout this week. We consider strong encryption to be a right and fundamental to the growth of the Internet, so we’re making all three of these features available to all customers for free.</p><p>Every day this week there will be new technical content on this blog about these features. We're calling it Encryption Week.</p>
    <div>
      <h3>TLS 1.3: Faster and more secure</h3>
      <a href="#tls-1-3-faster-and-more-secure">
        
      </a>
    </div>
    <p>HTTPS is the standard for web encryption. Services that support HTTPS use a protocol called TLS to encrypt and authenticate connections. This week, CloudFlare will be the first service on the Internet to offer the latest version of the protocol, TLS 1.3. CloudFlare has been heavily involved in the development of the protocol, which is more secure and delivers tangible performance benefits over previous versions. Establishing an HTTPS connection with TLS 1.3 requires fewer messages than previous versions of TLS, making page load times noticeably faster, especially on mobile networks.</p><p>If it takes 50 milliseconds for a message to travel from the browser to CloudFlare, the speed boost from TLS 1.3 is enough to take sites that seem <a href="https://hpbn.co/primer-on-latency-and-bandwidth/#speed-of-light-and-propagation-latency">“sluggish”</a> (over 300ms) and turn them into sites that load comfortably fast (under 300ms).</p>
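<p>The arithmetic behind those numbers can be sketched: at 50 ms each way, one round trip costs 100 ms, and a full TLS 1.2 handshake needs two round trips before the first request can be sent where TLS 1.3 needs only one. This is a simplified model that ignores TCP setup and server processing time:</p>

```python
ONE_WAY_MS = 50            # browser-to-CloudFlare latency from the example
RTT_MS = 2 * ONE_WAY_MS    # one round trip = 100 ms

# Round trips needed to complete a full TLS handshake before the first
# HTTP request can be sent (simplified: TCP setup excluded).
TLS12_RTTS = 2
TLS13_RTTS = 1

saving = (TLS12_RTTS - TLS13_RTTS) * RTT_MS
print(f"TLS 1.2 handshake: {TLS12_RTTS * RTT_MS} ms")  # 200 ms
print(f"TLS 1.3 handshake: {TLS13_RTTS * RTT_MS} ms")  # 100 ms
print(f"Saving per connection: {saving} ms")           # 100 ms
```

<p>Saving a full 100 ms round trip per connection is what can move a page from the over-300ms “sluggish” range into the comfortably-fast range.</p>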
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/MgDOAFwcU9C9wPEhJDxVB/25045f31233f88bac137bdfe6830c4e8/13vs12.jpg" />
            
            </figure><p>Comparison of TLS 1.2 and TLS 1.3 handshakes</p><p>TLS 1.3 is enabled in the developer releases of both Firefox and Chrome (<a href="https://nightly.mozilla.org/">Firefox Nightly</a> and <a href="https://www.google.com/chrome/browser/canary.html">Chrome Canary</a>), and all major browsers have committed to implementing the new protocol.</p>
    <div>
      <h3>Why websites don’t use encryption</h3>
      <a href="#why-websites-dont-use-encryption">
        
      </a>
    </div>
    <p>CloudFlare offers HTTPS to all customer sites through Universal SSL, but many sites don’t take advantage of it and continue to serve their sites over unencrypted HTTP. One of the main reasons sites don’t take advantage of HTTPS is so-called “mixed content.” This week we are launching Automatic HTTPS Rewrites, a feature that helps site owners address mixed content so they can safely upgrade to HTTPS.</p><p>A user requesting a web page over HTTPS can assume that the connection is authenticated, encrypted and that the response has not been tampered with. However, if any of the sub-resources (scripts, images) are requested over an insecure protocol such as HTTP, then those parts of the site can be accessed and modified by attackers. HTTPS sites with insecure sub-resources contain mixed content. Modern browsers will attempt to protect users from mixed content by providing warnings for passive content (such as images), and blocking insecure active content (scripts, stylesheets) from loading. Because of this, sites with mixed content often don’t work correctly.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zJXu6vRAgT9tUVuabF11K/229536d6ecf781e0098de895ac3357ae/mixedvsfixed.jpg" />
            
            </figure><p>Sites served with HTTP currently look “neutral” to visitors compared to HTTPS sites with mixed content, so site operators often prefer to serve their sites with insecure HTTP rather than partially-secured HTTPS. This is changing. Both Chrome and Firefox announced that they will begin to show increasingly negative indicators to visitors of HTTP sites. <a href="https://security.googleblog.com/2016/09/moving-towards-more-secure-web.html">Chrome is planning</a> to eventually put the words “Not Secure” in the address bar for HTTP sites, making it much less appealing for site operators to choose HTTP over HTTPS. Because of this, fixing mixed content and moving your site to HTTPS is more important than ever.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5On9CC982TmTvvjCUQomUp/bdfe5b43f29344c80fc3a2ead9058973/image01.png" />
            
            </figure>
    <div>
      <h4>Automatic HTTPS Rewrites: Fixing mixed content automatically</h4>
      <a href="#automatic-https-rewrites-fixing-mixed-content-automatically">
        
      </a>
    </div>
    <p>Some mixed content is not mixed content at all. Often, sub-resources are available over HTTPS, but the page’s source has been hardcoded to download them over HTTP. Manually changing “http” to “https” in the page’s source is often enough to fix mixed content in these cases. However, many sites can’t do that because their sites are created dynamically by content management systems or they include third party resources that they have no control over.</p><p>For WordPress customers, we tackled this issue by modifying the CloudFlare WordPress plugin to <a href="/flexible-ssl-wordpress-fixing-mixed-content-errors/">automatically rewrite insecure URLs</a>. Building on the success of this approach, we decided to build URL rewriting functionality into CloudFlare itself. The result is a feature called Automatic HTTPS Rewrites, which we are making available to all customers for free this week.</p><p>CloudFlare customers with Automatic HTTPS Rewrites enabled on their site will have “http” replaced with “https” for all sub-resources that are also available over HTTPS. We use the EFF’s <a href="https://www.eff.org/https-Everywhere">HTTPS Everywhere list</a>, Chrome’s <a href="https://hstspreload.appspot.com/">HSTS preload list</a>, and soon our own internal list of HTTPS-enabled domains to determine whether sites can be upgraded. This feature facilitates the seamless upgrade of customer sites to HTTPS, allowing them to take full advantage of Universal SSL. As a bonus, Automatic HTTPS Rewrites also rewrites links from “http://” to “https://” whenever possible, ensuring that web visitors <a href="https://www.cloudflare.com/learning/security/how-to-improve-wordpress-security/">stay safe</a> even when they leave your site.</p>
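<p>The decision logic boils down to: upgrade a subresource URL only when its host is known to serve HTTPS. A minimal, hypothetical Python sketch of that check (<code>KNOWN_HTTPS_HOSTS</code> is a tiny stand-in for the EFF, HSTS preload and internal data sources named above; <code>upgrade_if_known</code> is an illustrative helper):</p>

```python
from urllib.parse import urlsplit, urlunsplit

# Tiny stand-in for the real data sources (EFF HTTPS Everywhere rules,
# Chrome's HSTS preload list, a list of HTTPS-enabled zones).
KNOWN_HTTPS_HOSTS = {"fonts.gstatic.com", "code.jquery.com"}

def upgrade_if_known(url: str) -> str:
    """Rewrite http:// to https:// only for hosts known to serve HTTPS."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in KNOWN_HTTPS_HOSTS:
        return urlunsplit(("https",) + tuple(parts[1:]))
    return url  # unknown host: leave alone rather than risk breakage

print(upgrade_if_known("http://code.jquery.com/jquery.min.js"))
# → https://code.jquery.com/jquery.min.js
print(upgrade_if_known("http://legacy.example.com/ad.js"))  # unchanged
```

<p>The conservative default (leave unknown hosts alone) is what distinguishes this approach from blind upgrading: a URL is only touched when the rewrite is known to be safe.</p>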
    <div>
      <h3>Opportunistic Encryption</h3>
      <a href="#opportunistic-encryption">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4nHRS3s4GG09UqQluqTvzr/8b4a98369ef4df344cff2184dc425bfe/2861549541_0336349bfe_z.jpg" />
            
            </figure><p>CC 2.0 Generic <a href="https://www.flickr.com/photos/qwghlm/2861549541">Chris Applegate</a></p><p>Many sites can fix mixed content by changing “http” to “https,” but some sites can’t. This is often because they have sub-resources hosted on domains that don’t support HTTPS. One of the major causes of mixed content is advertising. Many ad exchanges still serve advertisements that contain references to insecure HTTP assets, making it hard for publishers and others that rely on advertising revenue to upgrade to HTTPS. These sites not only miss out on the many security and privacy benefits of serving their site over an encrypted connection, they also miss out on the performance benefits of <a href="https://www.cloudflare.com/http2/">HTTP/2</a>, which is only available over encrypted connections.</p><p>Enter <a href="https://tools.ietf.org/html/draft-ietf-httpbis-http2-encryption-06">Opportunistic Encryption</a>, a web feature that allows HTTP websites to be accessed over an encrypted HTTP/2 connection. With Opportunistic Encryption, CloudFlare adds a header to tell supporting browsers that the site is available over an encrypted connection. Opportunistic Encryption will be available to all customers later this week, for free.</p><p>For HTTP sites, Opportunistic Encryption can provide some (but not all) of the benefits of HTTPS. Connections secured with Opportunistic Encryption don’t get some HTTPS-only features such as the <a href="https://developers.google.com/web/updates/2016/04/geolocation-on-secure-contexts-only?hl=en">location API</a> and the green lock icon. However, the connection is encrypted (and soon authenticated — we present a valid certificate and Firefox Nightly validates it), protecting data from passive snooping.</p><p>The big advantage provided by Opportunistic Encryption is <a href="https://www.cloudflare.com/http2/">HTTP/2</a>, the new web protocol that can dramatically improve load times. 
HTTP/2 is unavailable over unencrypted connections. Visitors using browsers that support Opportunistic Encryption (currently only Firefox 38 and later) will be able to browse HTTP sites using HTTP/2 for the first time.</p>
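<p>The header mentioned above is the <code>Alt-Svc</code> (alternative services) header used by the linked draft: it advertises that the same origin is also reachable over an encrypted HTTP/2 connection. An illustrative (not verbatim) response for an http:// page might look like this, where the <code>ma</code> (max-age, in seconds) value is an assumed example:</p>

```http
HTTP/1.1 200 OK
Alt-Svc: h2=":443"; ma=86400
```

<p>A browser that supports Opportunistic Encryption can then retry subsequent requests to the same origin over encrypted HTTP/2 on port 443 for the advertised lifetime, while the address bar continues to show http://.</p>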
    <div>
      <h3>The end of the unencrypted Internet</h3>
      <a href="#the-end-of-the-unencrypted-internet">
        
      </a>
    </div>
    <p>At CloudFlare we are dedicated to improving security and performance for our customers, and building a safer web with strong encryption built in by default. The three features we are introducing during Encryption Week will work in tandem to improve security and performance for all CloudFlare customers.</p><ul><li><p>All encrypted sites will gain faster connection times from TLS 1.3</p></li><li><p>Sites that can be upgraded to HTTPS by Automatic HTTPS Rewrites will gain HTTPS</p></li><li><p>Sites that can’t be upgraded to HTTPS will gain encryption and HTTP/2 from Opportunistic Encryption</p></li></ul><p>One of our goals at CloudFlare is to put an end to the unencrypted Internet. These three features enable a faster Internet while moving us closer to that goal.</p> ]]></content:encoded>
            <category><![CDATA[Mixed Content Errors]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Automatic HTTPS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Crypto Week]]></category>
            <guid isPermaLink="false">4zqkhCaqfru1yKMAlS0vUH</guid>
            <dc:creator>Nick Sullivan</dc:creator>
        </item>
        <item>
            <title><![CDATA[Flexible SSL & Wordpress: Fixing “Mixed Content” Errors]]></title>
            <link>https://blog.cloudflare.com/flexible-ssl-wordpress-fixing-mixed-content-errors/</link>
            <pubDate>Wed, 21 Jan 2015 21:47:09 GMT</pubDate>
            <description><![CDATA[ As many are aware, CloudFlare launched Universal SSL several months ago. We saw lots of customers sign up and start using these new, free SSL certificates. For many customers that didn’t already have an SSL certificate, they were able to use “Flexible SSL”. ]]></description>
            <content:encoded><![CDATA[ <p>As many are aware, CloudFlare launched <a href="/introducing-universal-ssl/">Universal SSL</a> several months ago. We saw lots of customers sign up and start using these new, <a href="https://www.cloudflare.com/application-services/products/ssl/">free SSL certificates</a>. Many customers that didn’t already have an SSL certificate were able to use “Flexible SSL”.</p><p>Flexible SSL creates a secure (HTTPS) connection between the website visitor and CloudFlare and then an insecure (HTTP) connection between CloudFlare and the origin server. For any site using absolute links to assets (i.e. JavaScript, CSS, and image files), this can lead to a “Mixed Content” error.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fogaeQLTAc6tj2BAMzJ0g/ea7d5d7f7ff94469267238d4875aae4f/Screen-Shot-2015-01-21-at-10-50-54-AM-5.png" />
            
            </figure>
    <div>
      <h4>Mixed Content = Mixed Protocol</h4>
      <a href="#mixed-content-mixed-protocol">
        
      </a>
    </div>
    <p>What is “Mixed Content”? This can be understood as mixed protocol. When the webpage is loaded over SSL (HTTPS protocol), most browsers expect all of the assets to be loaded over the same protocol. Some browsers will display an error about loading “insecure content” while others will just block the insecure content outright.</p><p>This error only applies to pages loaded over SSL, since the browser is working to make sure that secure pages only load equally secure assets.</p>
    <div>
      <h4>Wordpress Plugin Updates</h4>
      <a href="#wordpress-plugin-updates">
        
      </a>
    </div>
    <p>The latest version of the CloudFlare plugin for WordPress resolves many of these errors by altering the protocol within the links to assets. Where your WordPress site currently links to your stylesheet like this:</p>
            <pre><code>http://www.example.com/wp/wp-content/themes/twentyfifteen/style.css?ver=4.1</code></pre>
            <p>We’ll remove the “http:” part to make this into a relative protocol:</p>
            <pre><code>//www.example.com/wp/wp-content/themes/twentyfifteen/style.css?ver=4.1</code></pre>
            <p>This rewriting is not applied to canonical URLs, since Google and other search engines expect those to be absolute URLs. Google also <a href="https://support.google.com/webmasters/answer/6073543?utm_source=wmx_blog&amp;utm_medium=referral&amp;utm_campaign=tls_en_post">recommends securing your site with SSL</a>, and enabling Flexible SSL is an easy way to achieve this SEO boost!</p>
    <div>
      <h4>Relative Protocol</h4>
      <a href="#relative-protocol">
        
      </a>
    </div>
    <p>A relative protocol <code>//www.example.com</code> tells your browser to load the asset over the same protocol as the main page. If your site loads over HTTPS, then the browser will try to load the asset over HTTPS as well. (In other words, NOT mixed. Everything is over HTTPS!)</p><p>This approach has no negative side effects for customers who don’t enable SSL (or who receive traffic over HTTP as well as HTTPS), since if the page is loaded over HTTP, the assets will be loaded over the same protocol.</p>
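<p>The rewrite described above amounts to a string substitution with an exception for canonical links. A rough, hypothetical sketch of that behavior (the real plugin hooks into WordPress itself, and this line-based canonical check is a deliberate simplification):</p>

```python
import re

def make_protocol_relative(html: str) -> str:
    """Strip 'http:' from asset URLs so they become protocol-relative,
    but leave canonical links absolute for search engines."""
    out = []
    for line in html.splitlines(keepends=True):
        if 'rel="canonical"' in line:
            out.append(line)  # canonical URLs must stay absolute
        else:
            out.append(re.sub(r'(src|href)="http://', r'\1="//', line))
    return "".join(out)

page = ('<link rel="canonical" href="http://www.example.com/post/">\n'
        '<link rel="stylesheet" href="http://www.example.com/style.css">\n'
        '<img src="http://www.example.com/a.png">\n')
print(make_protocol_relative(page))
```

<p>The stylesheet and image links come out protocol-relative while the canonical link keeps its absolute http:// URL, matching the behavior described above.</p>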
    <div>
      <h4>Best Practices using Wordpress and Flexible SSL</h4>
      <a href="#best-practices-using-wordpress-and-flexible-ssl">
        
      </a>
    </div>
    <p>Protocol rewriting is on by default in our current version of the WordPress plugin, so you should be able to enable Flexible SSL on your account and have traffic go to <a href="https://www.yourdomain.com">https://www.yourdomain.com</a>.</p><p>CloudFlare also recommends adding a page rule for <a href="http://www.yourdomain.com">http://www.yourdomain.com</a>* that redirects all traffic to HTTPS (assuming that WordPress is set up for the WWW subdomain and isn’t within a directory on your domain).</p><p>With these two changes we can secure all traffic to your WordPress site between the customer and CloudFlare’s server.</p>
    <div>
      <h4>Limitations of Protocol Rewriting</h4>
      <a href="#limitations-of-protocol-rewriting">
        
      </a>
    </div>
    <p>While this improvement should allow many WordPress users to enable Flexible SSL without any other changes to their website, there are a few items to consider:</p><p>If, after upgrading to the latest version of the WordPress plugin, you still get “Mixed Content” errors, it’s likely that a plugin you are using adds assets to the site through JavaScript. The ways and methods to do this vary greatly, so it’s not currently possible to catch all of these instances. Plugin developers are encouraged to link to assets relatively or use WordPress’s built-in methods for adding assets to the page.</p><p>It’s also important to note the limitations of Flexible SSL. For websites that collect credit cards and other sensitive information, it’s important to secure this information from the customer all the way to your server, so we recommend using Full SSL and having an SSL certificate on your server, because this also secures the traffic between CloudFlare and your origin.</p>
            <category><![CDATA[WordPress]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[Mixed Content Errors]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">7h0xjOF9xXTYY9GdmqgUzy</guid>
            <dc:creator>David Fritsch</dc:creator>
        </item>
    </channel>
</rss>