
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 14:40:31 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Protecting Holocaust educational websites]]></title>
            <link>https://blog.cloudflare.com/protecting-holocaust-educational-websites/</link>
            <pubDate>Thu, 27 Jan 2022 17:21:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s Project Galileo provides free protection to at-risk groups across the world including Holocaust educational and remembrance websites ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4MJVdXVmNgsOwOpJk3jlxx/1079b17226f9fc139370bab555e200d8/image1-23.png" />
            
            </figure><p>Today is <a href="https://en.wikipedia.org/wiki/International_Holocaust_Remembrance_Day">International Holocaust Remembrance Day</a>. On this day, we commemorate the victims who were murdered by the Nazis and their accomplices.</p><p>During the <a href="https://en.wikipedia.org/wiki/The_Holocaust">Holocaust</a>, and in the events that led to it, the Nazis exterminated two thirds of the European Jewish population. Six million Jews, along with countless other members of minority and disability groups, were murdered because the Nazis believed they were inferior.</p><p>Cloudflare’s <a href="https://www.cloudflare.com/galileo/">Project Galileo</a> provides free protection to at-risk groups across the world, including Holocaust educational and remembrance websites. During the past year alone, Cloudflare mitigated over a quarter of a million cyber threats launched against Holocaust-related websites.</p>
    <div>
      <h3>Antisemitism and the Final Solution</h3>
      <a href="#antisemitism-and-the-final-solution">
        
      </a>
    </div>
    <p>In the Second World War and the years leading up to it, <a href="https://en.wikipedia.org/wiki/Antisemitism">antisemitism</a> served as the foundation of <a href="https://en.wikipedia.org/wiki/Nuremberg_Laws">racist laws</a> and fueled violent <a href="https://en.wikipedia.org/wiki/Pogrom">pogroms</a> against Jews. The tipping point was a night of violence known as the <a href="https://en.wikipedia.org/wiki/Kristallnacht">Kristallnacht ("Night of Broken Glass")</a>. Jews and other minority groups were outlawed, dehumanized, persecuted and killed. Jewish businesses were boycotted, Jewish books burned and synagogues destroyed. Jews, Roma and other “enemies of the Reich” were forced into closed <a href="https://en.wikipedia.org/wiki/Nazi_ghettos">ghettos</a> and <a href="https://en.wikipedia.org/wiki/Nazi_concentration_camps">concentration camps</a>. Finally, as part of the <a href="https://en.wikipedia.org/wiki/Final_Solution">Final Solution to the Jewish Question</a>, Germany outlined a policy to deliberately and systematically exterminate the Jewish people in what came to be known as the <a href="https://en.wikipedia.org/wiki/The_Holocaust">Holocaust</a>.</p><p>As part of the Final Solution, the Nazis deployed <a href="https://en.wikipedia.org/wiki/Einsatzgruppen">mobile killing units</a>. Jews were taken to forests near their villages, forced to dig mass graves and undress, and then shot, falling into the graves they had dug. Even this first step was deemed “inefficient”, so more “efficient” methods of killing were engineered using deadly gas. Initially, the Nazis experimented with <a href="https://en.wikipedia.org/wiki/Gas_van">gas vans</a> for mass extermination. Later, they built and operated <a href="https://en.wikipedia.org/wiki/Gas_chamber">gas chambers</a>, which could kill more people, faster. After the gassings, prisoners were forced to load the bodies into <a href="https://en.wikipedia.org/wiki/Auschwitz_concentration_camp#Crematorium_I,_first_gassings">ovens in crematoriums</a> to be burned. Eventually, <a href="https://en.wikipedia.org/wiki/Extermination_camp">six main extermination camps</a> were established. In one of the largest, <a href="https://en.wikipedia.org/wiki/Auschwitz_concentration_camp#Auschwitz_II-Birkenau">Auschwitz-Birkenau</a>, more than one million Jews were murdered, some 865,000 of them gassed and burned on arrival.</p>
    <div>
      <h3>Fighting racism with education</h3>
      <a href="#fighting-racism-with-education">
        
      </a>
    </div>
    <p>Seventy-seven years later, sadly, racism and antisemitism are once again on the rise and <a href="https://www.independent.co.uk/news/charles-michel-jewish-auschwitz-moshe-kantor-brussels-b2001332.html">have gained traction across Europe during the pandemic</a> and <a href="https://www.forbes.com/sites/nickmorrison/2022/01/26/pledge-to-tackle-campus-antisemitism-in-uk/">across UK university campuses</a>. Earlier this week, <a href="https://www.timesofisrael.com/un-chief-decries-antisemitism-urges-to-stand-firm-against-hatred-and-bigotry/">United Nations Secretary-General António Guterres decried the resurgence of antisemitism</a> and said that “<i>...the rise in antisemitism — the oldest form of hate and prejudice — has seen new reports of physical attacks, verbal abuse, the desecration of Jewish cemeteries, synagogues vandalized, and last week the</i> <a href="https://www.nytimes.com/live/2022/01/15/us/synagogue-hostage-texas-colleyville"><i>hostage-taking of the rabbi and members of Beth Israel Congregation in Colleyville, Texas</i></a><i>.</i>”</p><p>It is through education that we will defeat bigotry and racism, and we will do our part at Cloudflare — through education and by supporting Holocaust educational organizations.</p><blockquote><p><b><i>“Our response to ignorance must be education”</i></b>- United Nations Secretary-General António Guterres</p></blockquote>
    <div>
      <h3>Supporting Holocaust educational organizations with Project Galileo</h3>
      <a href="#supporting-holocaust-educational-organizations-with-project-galileo">
        
      </a>
    </div>
    <p>As part of <a href="https://www.cloudflare.com/galileo/">Project Galileo</a>, we currently provide free security and performance products to more than 1,500 organizations in 111 countries. These organizations are targeted by cyber attacks due to their critical work. These groups include human rights defenders, independent media and journalists, and organizations that work in strengthening democracy. Among them are organizations dedicated to educating about the horrors of the Holocaust, and preserving and telling the stories of the victims and survivors of the Holocaust to younger and future generations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qbNw0Cujl08aAX1OSy0FG/3fc4754b8ea52f53eea48dd4d4bf665a/image2-26.png" />
            
            </figure>
    <div>
      <h3>Cyber attacks on Holocaust-related websites</h3>
      <a href="#cyber-attacks-on-holocaust-related-websites">
        
      </a>
    </div>
    <p>Over the past year, cyber attacks on Holocaust-related websites have gradually increased. These were mostly application-layer attacks that were automatically detected and mitigated by Cloudflare’s <a href="https://www.cloudflare.com/waf/">Web Application Firewall</a> and <a href="https://www.cloudflare.com/ddos/">DDoS Protection</a> systems.</p><p>In May 2021, cyber attacks on Holocaust-related websites peaked as they increased by 263% compared to their monthly average.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2NDY117GktS74UDP5Z8I3Z/97132f6f5ac5eacc628b43a133fbbb56/image3-33.png" />
            
            </figure>
    <div>
      <h3>Applying to Project Galileo</h3>
      <a href="#applying-to-project-galileo">
        
      </a>
    </div>
    <p>Cloudflare’s mission is to help build a better Internet. Part of this mission includes protecting free expression online for vulnerable groups.</p><p>The Internet can be a powerful tool in this regard. However, these organizations often face attacks from powerful and entrenched opponents, yet operate on limited budgets and lack the resources to secure themselves against malicious traffic intended to silence them. If they are silenced, the Internet stops fulfilling its promise.</p><p>To combat these threats, Cloudflare’s Project Galileo provides robust security and performance products for at-risk public interest websites at no cost. Application to Project Galileo is open to any vulnerable public interest website. You can <a href="https://www.cloudflare.com/galileo/#:~:text=Visit%20our%20partners%20for%20sponsorship%20opportunities">apply via our partners</a> or <a href="https://www.cloudflare.com/galileo/#apply">apply directly to Project Galileo</a> if you don’t have any affiliation with our trusted partners.</p>
    <div>
      <h3>A note from Cloudflare’s Jewish employees</h3>
      <a href="#a-note-from-cloudflares-jewish-employees">
        
      </a>
    </div>
    <p>Many of us, like myself, are descendants of Holocaust survivors. My grandparents fled from Nazi-occupied Poland to survive. Sadly, my grandparents, like other elderly survivors, are no longer with us. Many of us have faced antisemitism in various forms. Together, we are part of <a href="/how-employee-resource-groups-ergs-can-change-an-organization/">Cloudflare’s Employee Resource Group</a> for Cloudflare’s Jewish community: Judeoflare. We have a responsibility to make sure the world remembers and never forgets the atrocities of the Holocaust and what racism and antisemitism can lead to.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/37VGvzx3DLPt832SKMzSkX/314fd9c807d68734b9097daa30d81fff/image4-18.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Holocaust]]></category>
            <category><![CDATA[Project Galileo]]></category>
            <category><![CDATA[Judeoflare]]></category>
            <category><![CDATA[Israel]]></category>
            <category><![CDATA[Employee Resource Groups]]></category>
            <category><![CDATA[History]]></category>
            <guid isPermaLink="false">3rK9g1VjgtwL1vx65pRPOk</guid>
            <dc:creator>Omer Yoachimik</dc:creator>
        </item>
        <item>
            <title><![CDATA[The History of the URL]]></title>
            <link>https://blog.cloudflare.com/the-history-of-the-url/</link>
            <pubDate>Thu, 05 Mar 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ On January 11, 1982, 22 computer scientists met to discuss an issue with ‘computer mail’ (now known as email). ]]></description>
            <content:encoded><![CDATA[ <p>On the <a href="https://www.rfc-editor.org/rfc/rfc805.txt">11th of January 1982</a>, twenty-two computer scientists met to discuss an issue with ‘computer mail’ (now known as email). Attendees included <a href="https://en.wikipedia.org/wiki/Bill_Joy">the guy who would create Sun Microsystems</a>, <a href="https://en.wikipedia.org/wiki/Dave_Lebling">the guy who made Zork</a>, <a href="https://en.wikipedia.org/wiki/David_L._Mills">the NTP guy</a>, and <a href="https://en.wikipedia.org/wiki/Bob_Fabry">the guy who convinced the government to pay for Unix</a>. The problem was simple: there were 455 hosts on the ARPANET and the situation was getting out of control.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6WQsRS2vJe3lMfiLeHhd3s/e8fe9e6e6fb360ebb970bc00c7da412f/arpanet-1969.gif" />
          </figure><p>This issue was occurring now because the ARPANET was on the verge of <a href="https://www.rfc-editor.org/rfc/rfc801.txt">switching</a> from its original <a href="https://en.wikipedia.org/wiki/Network_Control_Program">NCP protocol</a> to the <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/">TCP/IP protocol</a> which powers what we now call the Internet. With that switch suddenly there would be a multitude of interconnected networks (an ‘Inter... net’) requiring a more ‘hierarchical’ domain system where ARPANET could resolve its own domains while the other networks resolved theirs.</p><p>Other networks at the time had great names like “COMSAT”, “CHAOSNET”, “UCLNET” and “INTELPOSTNET” and were maintained by groups of universities and companies all around the US who wanted to be able to communicate, and could afford to lease 56k lines from the phone company and buy the requisite <a href="https://en.wikipedia.org/wiki/PDP-11">PDP-11s</a> to handle routing.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/AUhl6rf41MVuuXzajTXpv/338bedb4843127707fad342f65ad53ec/HOU-1.jpg" />
          </figure><p>In the original ARPANET design, a central Network Information Center (NIC) was responsible for maintaining a file listing every host on the network. The file was known as the <a href="https://tools.ietf.org/html/rfc952"><code>HOSTS.TXT</code></a> file, similar to the <code>/etc/hosts</code> file on a Linux or OS X system today. Every network change would <a href="https://www.rfc-editor.org/rfc/rfc952.txt">require</a> the NIC to FTP (a protocol invented in <a href="https://tools.ietf.org/html/rfc114">1971</a>) to every host on the network, a significant load on their infrastructure.</p><p>Having a single file list every host on the Internet would, of course, not scale indefinitely. The priority was email, however, as it was the predominant addressing challenge of the day. Their ultimate conclusion was to create a hierarchical system in which you could query an external system for just the domain or set of domains you needed. In their words: “The conclusion in this area was that the current ‘user@host’ mailbox identifier should be extended to ‘user@host.domain’ where ‘domain’ could be a hierarchy of domains.” And the domain was born.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/635FebLCDpXlTg9jhxUg1J/799b4740183b0f4113e462cb42a48e73/arpanet.gif" />
          </figure><p>It’s important to dispel any illusion that these decisions were made with prescience for the future the <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name/">domain name</a> would have. In fact, their elected solution was primarily decided because it was the “one causing least difficulty for existing systems.” For example, <a href="https://www.rfc-editor.org/rfc/rfc799.txt">one proposal</a> was for email addresses to be of the form <code>&lt;user&gt;.&lt;host&gt;@&lt;domain&gt;</code>. If email usernames of the day hadn’t already had ‘.’ characters you might be emailing me at ‘zack.cloudflare@com’ today.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3dGooF27BGHVbjLixUPFTh/d35225f7e08e878ff1dc0c79ed74aee0/arpanet-1987.gif" />
          </figure>
    <div>
      <h3>UUCP and the Bang Path</h3>
      <a href="#uucp-and-the-bang-path">
        
      </a>
    </div>
    <blockquote><p>It has been said that the principal function of an operating system is to define a number of different names for the same object, so that it can busy itself keeping track of the relationship between all of the different names. Network protocols seem to have somewhat the same characteristic.</p><p>— David D. Clark, <a href="https://www.rfc-editor.org/rfc/rfc814.txt"><code>1982</code></a></p></blockquote><p>Another <a href="https://www.rfc-editor.org/ien/ien116.txt">failed proposal</a> involved separating domain components with the exclamation mark (<code>!</code>). For example, to connect to the <code>ISIA</code> host on <code>ARPANET</code>, you would connect to <code>!ARPA!ISIA</code>. You could then query for hosts using wildcards, so <code>!ARPA!*</code> would return to you every <code>ARPANET</code> host.</p><p>This method of addressing wasn’t a crazy divergence from the standard; it was an attempt to maintain it. The system of exclamation-separated hosts dates to a data transfer tool called <a href="https://en.wikipedia.org/wiki/UUCP">UUCP</a>, <a href="http://www.cs.dartmouth.edu/~doug/reader.pdf">created</a> in 1976. If you’re reading this on an OS X or Linux computer, <code>uucp</code> is likely still installed and available at the terminal.</p><p>ARPANET was introduced in 1969, and quickly became a powerful communication tool... among the handful of universities and government institutions which had access to it. The Internet as we know it wouldn’t become publicly available outside of research institutions until <a href="http://www.cybertelecom.org/notes/nsfnet.htm">1991</a>, twenty-one years later. But that didn’t mean computer users weren’t communicating.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7mK9fOr1LrYPMsYZdsAAaM/e564a827a038629670fd2bca4b3af650/coupler.jpg" />
          </figure><p>In the era before the Internet, the general method of communication between computers was with a direct point-to-point dial up connection. For example, if you wanted to send me a file, you would have your modem call my modem, and we would transfer the file. To craft this into a network of sorts, UUCP was born.</p><p>In this system, each computer has a file which lists the hosts it’s aware of, their phone numbers, and a username and password on each host. You then craft a ‘path’, from your current machine to your destination, through hosts which each know how to connect to the next:</p>
            <pre><code>sw-hosts!digital-lobby!zack</code></pre>
            <p>This address would form not just a method of sending me files or connecting with my computer directly, but also would be my email address. In this era before ‘mail servers’, if my computer was off, you weren’t sending me an email.</p><p>While use of ARPANET was restricted to top-tier universities, UUCP created a bootleg Internet for the rest of us. It formed the basis for both <a href="https://en.wikipedia.org/wiki/Usenet">Usenet</a> and the <a href="https://en.wikipedia.org/wiki/Bulletin_board_system">BBS</a> system.</p>
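The path-crafting step can be sketched as a graph search over each host's list of known neighbours. This is a toy, not UUCP itself: the `my-machine` host and the neighbour table are invented for illustration; only the `sw-hosts!digital-lobby!zack` path comes from the example above.

```python
from collections import deque

# Hypothetical neighbour lists: each host only knows the peers it can dial directly.
NEIGHBOURS = {
    "my-machine": ["sw-hosts"],
    "sw-hosts": ["my-machine", "digital-lobby"],
    "digital-lobby": ["sw-hosts"],
}

def bang_path(src, dst_host, user):
    """Breadth-first search for a chain of hosts, rendered as a UUCP-style bang path."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst_host:
            # The originating host is implicit, so drop it from the rendered path.
            return "!".join(path[1:] + [user])
        for nxt in NEIGHBOURS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    raise ValueError("no route to host")

print(bang_path("my-machine", "digital-lobby", "zack"))  # sw-hosts!digital-lobby!zack
```

In the real system there was no global view of the network; the human (or later, routing tools) composed the path hop by hop from whatever host lists were available.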
    <div>
      <h3>DNS</h3>
      <a href="#dns">
        
      </a>
    </div>
    <p>Ultimately, the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/">DNS system</a> we still use today would be <a href="https://www.rfc-editor.org/rfc/rfc882.txt">proposed</a> in 1983. If you run a DNS query today, for example using the <code>dig</code> tool, you’ll likely see a response which looks like this:</p>
            <pre><code>;; ANSWER SECTION:
google.com.   299 IN  A 172.217.4.206</code></pre>
            <p>This is informing us that google.com is reachable at <code>172.217.4.206</code>. As you might know, the <code>A</code> is informing us that this is an ‘address’ record, mapping a domain to an IPv4 address. The <code>299</code> is the ‘time to live’, letting us know how many more seconds this value will be valid for, before it should be queried again. But what does the <code>IN</code> mean?</p><p><code>IN</code> stands for ‘Internet’. Like so much of this, the field dates back to an era when there were several competing computer networks which needed to interoperate. Other potential values were <code>CH</code> for the <a href="https://en.wikipedia.org/wiki/Chaosnet">CHAOSNET</a> or <code>HS</code> for Hesiod which was the name service of the <a href="https://en.wikipedia.org/wiki/Project_Athena">Athena system</a>. CHAOSNET is long dead, but a much evolved version of Athena is still used by students at MIT to this day. You can find the list of <a href="http://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml">DNS classes</a> on the IANA website, but it’s no surprise only one potential value is in common use today.</p>
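To see exactly where that class field lives on the wire, here is a minimal sketch of a DNS question encoder following RFC 1035. The function name and the ID/flag values (0x1234, recursion-desired) are arbitrary choices for illustration; QTYPE 1 is an A record and QCLASS 1 is IN.

```python
import struct

def build_dns_query(name, qtype=1, qclass=1):
    """Encode a DNS question; qtype=1 is an A record, qclass=1 is IN ('Internet')."""
    # Header (RFC 1035 section 4.1.1): ID, flags (RD bit set), QDCOUNT=1, rest zero.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; a zero byte terminates the name.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.rstrip(".").split("."))
    # The question ends with the 16-bit QTYPE and QCLASS fields.
    return header + qname + b"\x00" + struct.pack(">HH", qtype, qclass)

query = build_dns_query("google.com")
```

The last two bytes of the question are the class: `00 01` for IN, `00 03` for CH, `00 04` for HS.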
    <div>
      <h3>TLDs</h3>
      <a href="#tlds">
        
      </a>
    </div>
    <blockquote><p>It is extremely unlikely that any other TLDs will be created.</p><p>— Jon Postel, <a href="https://tools.ietf.org/html/rfc1591"><code><u>1994</u></code></a></p></blockquote><p>Once it was decided that domain names should be arranged hierarchically, it became necessary to decide what sits at the root of that hierarchy. That root is traditionally signified with a single ‘.’. In fact, ending all of your domain names with a ‘.’ is semantically correct, and will absolutely work in your web browser: <a href="http://google.com./"><code>google.com.</code></a></p><p>The first <a href="https://www.cloudflare.com/learning/dns/top-level-domain/">TLD</a> was <code>.arpa</code>. It allowed users to address their old traditional ARPANET hostnames during the transition. For example, if my machine was previously registered as <code>hfnet</code>, my new address would be <code>hfnet.arpa</code>. That was only temporary; during the transition, server administrators had a very important choice to make: which of the five TLDs would they choose? “.com”, “.gov”, “.org”, “.edu” or “.mil”.</p><p>When we say DNS is hierarchical, what we mean is that there is a set of root DNS servers which are responsible, for example, for directing queries for <code>.com</code> to the <code>.com</code> nameservers, which will in turn answer how to get to <code>google.com</code>. The root DNS zone of the Internet is composed of thirteen DNS server clusters. There are only <a href="https://www.internic.net/zones/named.cache">13 server clusters</a> because that’s all we can fit in a single UDP packet. Historically, DNS has operated through UDP packets, meaning the response to a request can never be more than 512 bytes.</p>
            <pre><code>;       This file holds the information on root name servers needed to
;       initialize cache of Internet domain name servers
;       (e.g. reference this file in the "cache  .  "
;       configuration file of BIND domain name servers).
;
;       This file is made available by InterNIC 
;       under anonymous FTP as
;           file                /domain/named.cache
;           on server           FTP.INTERNIC.NET
;       -OR-                    RS.INTERNIC.NET
;
;       last update:    March 23, 2016
;       related version of root zone:   2016032301
;
; formerly NS.INTERNIC.NET
;
.                        3600000      NS    A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
A.ROOT-SERVERS.NET.      3600000      AAAA  2001:503:ba3e::2:30
;
; FORMERLY NS1.ISI.EDU
;
.                        3600000      NS    B.ROOT-SERVERS.NET.
B.ROOT-SERVERS.NET.      3600000      A     192.228.79.201
B.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:84::b
;
; FORMERLY C.PSI.NET
;
.                        3600000      NS    C.ROOT-SERVERS.NET.
C.ROOT-SERVERS.NET.      3600000      A     192.33.4.12
C.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2::c
;
; FORMERLY TERP.UMD.EDU
;
.                        3600000      NS    D.ROOT-SERVERS.NET.
D.ROOT-SERVERS.NET.      3600000      A     199.7.91.13
D.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2d::d
;
; FORMERLY NS.NASA.GOV
;
.                        3600000      NS    E.ROOT-SERVERS.NET.
E.ROOT-SERVERS.NET.      3600000      A     192.203.230.10
;
; FORMERLY NS.ISC.ORG
;
.                        3600000      NS    F.ROOT-SERVERS.NET.
F.ROOT-SERVERS.NET.      3600000      A     192.5.5.241
F.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:2f::f
;
; FORMERLY NS.NIC.DDN.MIL
;
.                        3600000      NS    G.ROOT-SERVERS.NET.
G.ROOT-SERVERS.NET.      3600000      A     192.112.36.4
;
; FORMERLY AOS.ARL.ARMY.MIL
;
.                        3600000      NS    H.ROOT-SERVERS.NET.
H.ROOT-SERVERS.NET.      3600000      A     198.97.190.53
H.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:1::53
;
; FORMERLY NIC.NORDU.NET
;
.                        3600000      NS    I.ROOT-SERVERS.NET.
I.ROOT-SERVERS.NET.      3600000      A     192.36.148.17
I.ROOT-SERVERS.NET.      3600000      AAAA  2001:7fe::53
;
; OPERATED BY VERISIGN, INC.
;
.                        3600000      NS    J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET.      3600000      A     192.58.128.30
J.ROOT-SERVERS.NET.      3600000      AAAA  2001:503:c27::2:30
;
; OPERATED BY RIPE NCC
;
.                        3600000      NS    K.ROOT-SERVERS.NET.
K.ROOT-SERVERS.NET.      3600000      A     193.0.14.129
K.ROOT-SERVERS.NET.      3600000      AAAA  2001:7fd::1
;
; OPERATED BY ICANN
;
.                        3600000      NS    L.ROOT-SERVERS.NET.
L.ROOT-SERVERS.NET.      3600000      A     199.7.83.42
L.ROOT-SERVERS.NET.      3600000      AAAA  2001:500:9f::42
;
; OPERATED BY WIDE
;
.                        3600000      NS    M.ROOT-SERVERS.NET.
M.ROOT-SERVERS.NET.      3600000      A     202.12.27.33
M.ROOT-SERVERS.NET.      3600000      AAAA  2001:dc3::35
; End of file</code></pre>
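The delegation chain described above can be sketched as a walk down the hierarchy, one label at a time. This is a toy with no network involved: the resolver visits "." then "com." then "google.com.", and the answer table holds only the address from the earlier <code>dig</code> example.

```python
# Toy model of the DNS hierarchy: a resolver starts at the root ('.') and asks
# each level's nameservers for the next, until a zone answers authoritatively.
# Real resolvers bootstrap from the root hints file shown above.
ANSWERS = {"google.com.": "172.217.4.206"}  # address from the dig example earlier

def resolve(name):
    """Return (address, zones visited from most general to most specific)."""
    labels = name.rstrip(".").split(".")
    # "google.com." yields hops: "." -> "com." -> "google.com."
    hops = ["."] + [".".join(labels[i:]) + "." for i in range(len(labels) - 1, -1, -1)]
    return ANSWERS[hops[-1]], hops

address, hops = resolve("google.com.")
```

Each hop in <code>hops</code> corresponds to one referral in a real iterative resolution; caching at each level is what keeps the root servers from seeing every query.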
            <p>Root DNS servers operate in safes, inside locked cages. A clock sits on the safe to ensure the camera feed hasn’t been looped. Particularly given how slow <a href="https://www.cloudflare.com/learning/dns/dnssec/how-dnssec-works/">DNSSEC</a> implementation has been, an attack on one of those servers could allow an attacker to redirect all of the Internet traffic for a portion of Internet users. This, of course, makes for the most fantastic heist movie to have never been made.</p><p>Unsurprisingly, the nameservers for TLDs don’t actually change all that often. <a href="http://dns.measurement-factory.com/writings/wessels-pam2003-paper.pdf">98%</a> of the requests root DNS servers receive are in error, most often because of broken and toy clients which don’t properly cache their results. This became such a problem that several root DNS operators had to <a href="https://www.as112.net/">spin up</a> special servers just to return ‘go away’ to all the people asking for reverse DNS lookups on their local IP addresses.</p><p>The TLD nameservers are administered by different companies and governments all around the world (<a href="https://www.verisign.com/">Verisign</a> manages <code>.com</code>). When you <a href="https://www.cloudflare.com/products/registrar/">purchase a .com domain</a>, about $0.18 goes to the ICANN, and $7.85 <a href="http://webmasters.stackexchange.com/questions/61467/if-icann-only-charges-18%C2%A2-per-domain-name-why-am-i-paying-10">goes to</a> Verisign.</p>
    <div>
      <h3>Punycode</h3>
      <a href="#punycode">
        
      </a>
    </div>
    <p>It is rare in this world that the silly name we developers think up for a new project makes it into the final, public product. We might name the company database Delaware (because that’s where all the companies are registered), but you can be sure by the time it hits production it will be CompanyMetadataDatastore. But rarely, when all the stars align and the boss is on vacation, one slips through the cracks.</p><p>Punycode is the system we use to encode Unicode into domain names. The problem it is solving is simple: how do you write 比薩.com when the entire Internet system was built around using the <a href="https://en.wikipedia.org/wiki/ASCII">ASCII</a> alphabet, whose most foreign character is the tilde?</p><p>It’s not a simple matter of switching domains to use <a href="https://en.wikipedia.org/wiki/Unicode">Unicode</a>. The <a href="https://tools.ietf.org/html/rfc1035">original documents</a> which govern domains specify they are to be encoded in ASCII. Every piece of Internet hardware from the last forty years, including the <a href="http://www.cisco.com/c/en/us/support/routers/crs-1-multishelf-system/model.html">Cisco</a> and <a href="http://www.juniper.net/techpubs/en_US/release-independent/junos/information-products/pathway-pages/t-series/t1600/">Juniper</a> routers used to deliver this page to you, makes that assumption.</p><p>The web itself was <a href="http://1997.webhistory.org/www.lists/www-talk.1994q3/1085.html">never ASCII-only</a>. It was originally conceived to speak <a href="https://en.wikipedia.org/wiki/ISO/IEC_8859-1">ISO 8859-1</a>, which includes all of the ASCII characters but adds an additional set of special characters like ¼ and letters with special marks like ä. It does not, however, contain any non-Latin characters.</p><p>This restriction on HTML was ultimately removed in <a href="https://tools.ietf.org/html/rfc2070">2007</a>, and that same year Unicode <a href="https://googleblog.blogspot.com/2008/05/moving-to-unicode-51.html">became</a> the most popular character set on the web. But domains were still confined to ASCII.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1blJl7MdG3rkY8t0d6Cpmc/2aedd6ceaad087b8e3178c54fe80e76f/ie-hebrew.gif" />
          </figure><p>As you might guess, Punycode was not the first proposal to solve this problem. You most likely have heard of UTF-8, which is a popular way of encoding Unicode into bytes (the 8 is for the eight bits in a byte). In the year <a href="https://tools.ietf.org/html/draft-jseng-utf5-01">2000</a>, several members of the Internet Engineering Task Force came up with UTF-5. The idea was to encode Unicode into five-bit chunks. You could then map each five bits into a character allowed (A-V &amp; 0-9) in domain names. So if I had a website for Japanese language learning, my site 日本語.com would become the cryptic M5E5M72COA9E.com.</p><p>This encoding method has several disadvantages. For one, A-V and 0-9 are used in the output encoding, meaning if you wanted to actually include one of those characters in your domain, it had to be encoded like everything else. This made for some very long domains, which is a serious problem when each segment of a domain is restricted to 63 characters. A domain in the Myanmar language would be restricted to no more than 15 characters. The proposal does, however, make the very interesting suggestion of using UTF-5 to allow Unicode to be transmitted by Morse code and telegram.</p><p>There was also the question of how to let clients know that a domain was encoded, so they could display it in the appropriate Unicode characters rather than showing M5E5M72COA9E.com in my address bar. There were <a href="https://tools.ietf.org/html/draft-ietf-idn-compare-01">several suggestions</a>, one of which was to use an unused bit in the DNS response. It was the “last unused bit in the header”, however, and the DNS folks were “very hesitant to give it up”.</p><p>Another suggestion was to start every domain using this encoding method with <code>ra--</code>. At <a href="https://tools.ietf.org/html/draft-ietf-idn-race-00">the time</a> (mid-April 2000), there were no domains which happened to start with those particular characters.
If I know anything about the Internet, someone registered an <code>ra--</code> domain out of spite immediately after the proposal was published.</p><p>The <a href="https://tools.ietf.org/html/rfc3492">ultimate conclusion</a>, reached in 2003, was to adopt a format called Punycode which included a form of delta compression which could dramatically shorten encoded domain names. Delta compression is a particularly good idea because the odds are that all of the characters in your domain are in the same general area within Unicode. For example, two characters in Farsi are going to be much closer together than a Farsi character and another in Hindi. To give an example of how this works, if we take the nonsense phrase:</p><p>يذؽ</p><p>In an uncompressed format, that would be stored as the three characters <code>[1620, 1584, 1597]</code> (based on their Unicode code points). To compress this we first sort it numerically (keeping track of where the original characters were): <code>[1584, 1597, 1620]</code>. Then we can store the lowest value (<code>1584</code>), and the delta between that value and the next character (<code>13</code>), and again for the following character (<code>23</code>), which is significantly less to transmit and store.</p><p>Punycode then (very) efficiently encodes those integers into characters allowed in domain names, and inserts an <code>xn--</code> at the beginning to let consumers know this is an encoded domain. You’ll notice that all the Unicode characters end up together at the end of the domain. They don’t just encode their value, they also encode where they should be inserted into the ASCII portion of the domain. To provide an example, the website 熱狗sales.com becomes <code>xn--sales-r65lm0e.com</code>. Anytime you type a Unicode-based domain name into your browser’s address bar, it is encoded in this way.</p><p>This transformation could be transparent, but that introduces a major security problem. 
All sorts of Unicode characters print identically to existing ASCII characters. For example, you likely can’t see the difference between Cyrillic small letter a (“а”) and Latin small letter a (“a”). If I register Cyrillic аmazon.com (xn--mazon-3ve.com), and manage to trick you into visiting it, it’s going to be hard to know you’re on the wrong site. For that reason, when you visit <a href="http://??.ws">??.ws</a>, your browser somewhat lamely shows you <code>xn--vi8hiv.ws</code> in the address bar.</p>
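<p>To make the two encodings above concrete, here is a small Python sketch. The UTF-5 encoder is reconstructed from the draft’s description (the alphabet mapping and function names are my own) and is checked against the article’s own 日本語 → M5E5M72COA9E example; the delta step is the sort-and-subtract trick described above, and Python’s built-in <code>idna</code> codec produces the real <code>xn--</code> form.</p>

```python
# Sketch of UTF-5: each code point is split into 4-bit quartets, and the
# first quartet of every character gets a fifth bit set, so every value
# maps into the A-V & 0-9 alphabet allowed in domain names.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUV"

def utf5_encode(text):
    out = []
    for ch in text:
        cp = ord(ch)
        quartets = []
        while cp:
            quartets.append(cp & 0xF)
            cp >>= 4
        quartets.reverse()
        quartets[0] += 16  # the extra bit marks the start of a character
        out.extend(ALPHABET[q] for q in quartets)
    return "".join(out)

print(utf5_encode("日本語"))  # M5E5M72COA9E

# The delta step Punycode relies on: sort the code points, then store the
# smallest value followed by the gaps between neighbors.
def deltas(code_points):
    ordered = sorted(code_points)
    return [ordered[0]] + [b - a for a, b in zip(ordered, ordered[1:])]

print(deltas([1620, 1584, 1597]))  # [1584, 13, 23]

# Python ships the finished standard: the idna codec emits the xn-- form.
print("日本語".encode("idna"))  # b'xn--wgv71a119e'
```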
    <div>
      <h3>Protocol</h3>
      <a href="#protocol">
        
      </a>
    </div>
    <p>The first portion of the URL is the protocol which should be used to access it. The most common protocol is <code>http</code>, which is the simple document transfer protocol Tim Berners-Lee invented specifically to power the web. It was not the only option. <a href="http://1997.webhistory.org/www.lists/www-talk.1993q2/0339.html">Some people</a> believed we should just use Gopher. Rather than being general-purpose, Gopher is specifically designed to send structured data similar to how a file tree is structured.</p><p>For example, if you request the <code>/Cars</code> endpoint, it might return:</p>
            <pre><code>1Chevy Camaro             /Archives/cars/cc     gopher.cars.com     70
iThe Camaro is a classic  fake                  (NULL)              0
iAmerican Muscle car      fake                  (NULL)              0
1Ferrari 451              /Factbook/ferrari/451  gopher.ferrari.net 70</code></pre>
            <p>which identifies two cars, along with some metadata about them and where you can connect to for more information. The understanding was your client would parse this information into a usable form which linked the entries with the destination pages.</p>
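<p>The menu format is simple enough that a client can parse it in a few lines. In the protocol as actually specified the fields are tab-separated, with the one-character type glued onto the front of the display string; the function below is a hypothetical sketch of what such a client would do.</p>

```python
# A Gopher menu line: <type><display>TAB<selector>TAB<host>TAB<port>
# Type '1' is a submenu, '0' a text file, 'i' an informational line.
def parse_menu_line(line):
    type_and_display, selector, host, port = line.split("\t")
    return {
        "type": type_and_display[0],
        "display": type_and_display[1:],
        "selector": selector,
        "host": host,
        "port": int(port),
    }

item = parse_menu_line("1Chevy Camaro\t/Archives/cars/cc\tgopher.cars.com\t70")
print(item["display"], "->", item["host"], item["port"])  # Chevy Camaro -> gopher.cars.com 70
```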
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UWn6JpG7A4jcCvywWjfia/b8a926b595c7a75b73b9f3067dff707e/gopher.gif" />
          </figure><p>The first popular protocol was FTP, which was created in 1971, as a way of listing and downloading files on remote computers. Gopher was a logical extension of this, in that it provided a similar listing, but included facilities for also reading the metadata about entries. This meant it could be used for more liberal purposes like a news feed or a simple database. It did not have, however, the freedom and simplicity which characterize HTTP and HTML.</p><p>HTTP is a very simple protocol, particularly when compared to alternatives like FTP or even the <a href="https://blog.cloudflare.com/http3-the-past-present-and-future/">HTTP/3</a> protocol which is rising in popularity today. First off, HTTP is entirely text based, rather than being composed of bespoke binary incantations (which would have made it significantly more efficient). Tim Berners-Lee correctly intuited that using a text-based format would make it easier for generations of programmers to develop and debug HTTP-based applications.</p><p>HTTP also makes almost no assumptions about what you’re transmitting. Despite the fact that it was invented explicitly to accompany the HTML language, it allows you to specify that your content is of any type (using the MIME <code>Content-Type</code>, which was a new invention at the time). The protocol itself is rather simple:</p><p>A request:</p>
            <pre><code>GET /index.html HTTP/1.1
Host: www.example.com</code></pre>
            <p>Might respond:</p>
            <pre><code>HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Encoding: UTF-8
Content-Length: 138
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close

&lt;html&gt;
    &lt;head&gt;
        &lt;title&gt;An Example Page&lt;/title&gt;
    &lt;/head&gt;
    &lt;body&gt;
        Hello World, this is a very simple HTML document.
    &lt;/body&gt;
&lt;/html&gt;</code></pre>
            <p>To put this in context, you can think of the networking system the Internet uses as starting with IP, the Internet Protocol. IP is responsible for getting a small packet of data (around 1500 bytes) from one computer to another. On top of that we have TCP, which is responsible for taking larger blocks of data like entire documents and files and sending them via many IP packets reliably. On top of that, we then implement a protocol like HTTP or FTP, which specifies what format should be used to make the data we send via TCP (or UDP, etc.) understandable and meaningful.</p><p>In other words, TCP/IP sends a whole bunch of bytes to another computer; the protocol says what those bytes should be and what they mean.</p><p>You can make your own protocol if you like, assembling the bytes in your TCP messages however you like. The only requirement is that whoever you are talking to speaks the same language. For this reason, it’s common to standardize these protocols.</p><p>There are, of course, many less important protocols to play with. For example there is a <a href="https://www.rfc-editor.org/rfc/rfc865.txt">Quote of The Day</a> protocol (port 17), and a <a href="https://www.rfc-editor.org/rfc/rfc864.txt">Random Characters</a> protocol (port 19). They may seem silly today, but they also showcase just how important a general-purpose document transmission format like HTTP was.</p>
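<p>Because HTTP is nothing but text over TCP, a client needs only string handling to speak it. A minimal sketch (the function names are mine, and a real client would add a socket and far more error handling):</p>

```python
# Build the exact bytes a client writes to a TCP socket for a GET request.
def build_get(host, path="/index.html"):
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "\r\n").format(path, host).encode("ascii")

# Split a raw response into its status code, headers, and body.
def parse_response(raw):
    head, _, body = raw.partition("\r\n\r\n")
    status_line, *header_lines = head.split("\r\n")
    status = int(status_line.split(" ")[1])
    headers = dict(line.split(": ", 1) for line in header_lines)
    return status, headers, body

raw = ("HTTP/1.1 200 OK\r\n"
       "Content-Type: text/html; charset=UTF-8\r\n"
       "Content-Length: 5\r\n"
       "\r\n"
       "hello")
status, headers, body = parse_response(raw)
print(status, headers["Content-Type"], body)  # 200 text/html; charset=UTF-8 hello
```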
    <div>
      <h3>Port</h3>
      <a href="#port">
        
      </a>
    </div>
    <p>The timeline of Gopher and HTTP can be evidenced by their default port numbers. Gopher is 70, HTTP 80. The HTTP port was assigned (likely by <a href="https://en.wikipedia.org/wiki/Jon_Postel">Jon Postel</a> at the IANA) at the request of Tim Berners-Lee sometime between <a href="https://tools.ietf.org/html/rfc1060">1990</a> and <a href="https://tools.ietf.org/html/rfc1340">1992</a>.</p><p>This concept of registering ‘port numbers’ predates even the Internet. In the original NCP protocol which powered the ARPANET, remote addresses were identified by 40 bits. The first 32 identified the remote host, similar to how an IP address works today. The last eight were known as the <a href="https://tools.ietf.org/html/rfc433">AEN</a> (it stood for “Another Eight-bit Number”), and were used by the remote machine in the way we use a port number, to separate messages destined for different processes. In other words, the address specifies which machine the message should go to, and the AEN (or port number) tells that remote machine which application should get the message.</p><p>It was quickly <a href="https://tools.ietf.org/html/rfc322">requested</a> that users register these ‘socket numbers’ to limit potential collisions. When port numbers were expanded to 16 bits by TCP/IP, that registration process was continued.</p><p>While protocols have a default port, it makes sense to allow ports to also be specified manually to allow for local development and the hosting of multiple services on the same machine. That same logic was the <a href="http://1997.webhistory.org/www.lists/www-talk.1992/0335.html">basis</a> for prefixing websites with <code>www.</code>. At the time, it was unlikely anyone was getting access to the root of their domain, just for hosting an ‘experimental’ website. But if you give users the hostname of your specific machine (<code>dx3.cern.ch</code>), you’re in trouble when you need to replace that machine. 
By using a common subdomain (<code>www.cern.ch</code>) you can change what it points to as needed.</p>
    <div>
      <h3>The Bit In-between</h3>
      <a href="#the-bit-in-between">
        
      </a>
    </div>
    <p>As you probably know, the URL syntax places a double slash (<code>//</code>) between the protocol and the rest of the URL:</p>
            <pre><code>http://cloudflare.com</code></pre>
            <p>That double slash was inherited from the <a href="https://en.wikipedia.org/wiki/Apollo/Domain">Apollo</a> computer system which was one of the first networked workstations. The Apollo team had a similar problem to Tim Berners-Lee: they needed a way to separate a path from the machine that path is on. Their solution was to create a special path format:</p>
            <pre><code>//computername/file/path/as/usual</code></pre>
            <p>And TBL copied that scheme. Incidentally, he now <a href="https://www.w3.org/People/Berners-Lee/FAQ.html#etc">regrets</a> that decision, wishing the domain (in this case <code>example.com</code>) was the first portion of the path:</p>
            <pre><code>http:com/example/foo/bar/baz</code></pre>
            <blockquote><p>URLs were never intended to be what they’ve become: an arcane way for a user to identify a site on the Web. Unfortunately, we’ve never been able to standardize URNs, which would give us a more useful naming system. Arguing that the current URL system is sufficient is like praising the DOS command line, and stating that most people should simply learn to use command line syntax. The reason we have windowing systems is to make computers easier to use, and more widely used. The same thinking should lead us to a superior way of locating specific sites on the Web.</p><p>— Dale Dougherty <a href="https://lists.w3.org/Archives/Public/www-talk/1996JanFeb/0075.html"><code><u>1996</u></code></a></p></blockquote><p>There are several different ways to understand the ‘Internet’. One is as a system of computers connected using a computer network. That version of the Internet came into being in 1969 with the creation of the ARPANET. Mail, files and chat all moved over that network before the creation of HTTP, HTML, or the ‘web browser’.</p><p>In 1992 Tim Berners-Lee created three things, giving birth to what <i>we</i> consider the Internet: the HTTP protocol, HTML, and the URL. His goal was to bring ‘Hypertext’ to life. Hypertext at its simplest is the ability to create documents which link to one another. At the time it was viewed more as a science fiction panacea, to be complemented by <a href="https://en.wikipedia.org/wiki/Hypermedia">Hypermedia</a>, and any other word you could add ‘Hyper’ in front of.</p><p>The key requirement of Hypertext was the ability to link from one document to another. In TBL’s time though, these documents were hosted in a multitude of formats and accessed through protocols like <a href="https://en.wikipedia.org/wiki/Gopher_(protocol)">Gopher</a> and FTP. 
He needed a consistent way to refer to a file which encoded its protocol, its host on the Internet, and where it existed on that host.</p><p>At <a href="https://www.w3.org/Conferences/IETF92/WWX_BOF_mins.html">the original</a> World-Wide Web presentation in March of 1992 TBL described it as a ‘Universal Document Identifier’ (UDI). Many <a href="https://www.w3.org/Protocols/old/osi-ds-29-00.txt">different formats</a> were considered for this identifier:</p>
            <pre><code>protocol: aftp host: xxx.yyy.edu path: /pub/doc/README
 
PR=aftp; H=xx.yy.edu; PA=/pub/doc/README;
 
PR:aftp/xx.yy.edu/pub/doc/README
 
/aftp/xx.yy.edu/pub/doc/README</code></pre>
            <p>This document also explains why spaces must be encoded in URLs (%20):</p><blockquote><p>The use of white space characters has been avoided in UDIs: spaces are not legal characters. This was done because of the frequent introduction of extraneous white space when lines are wrapped by systems such as mail, or sheer necessity of narrow column width, and because of the inter-conversion of various forms of white space which occurs during character code conversion and the transfer of text between applications.</p></blockquote><p>What’s most important to understand is that the URL was fundamentally just an abbreviated way of referring to the combination of scheme, domain, port, credentials and path which previously had to be understood contextually for each different communication system.</p><p>It was first officially defined in an <a href="https://www.ietf.org/rfc/rfc1738.txt">RFC</a> published in 1994.</p>
            <pre><code>scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]</code></pre>
            <p>This system made it possible to refer to different systems from within Hypertext, but now that virtually all content is hosted over HTTP, it may not be as necessary anymore. As early as <a href="https://lists.w3.org/Archives/Public/www-talk/1996JanFeb/0075.html">1996</a> browsers were already inserting the <code>http://</code> and <code>www.</code> for users automatically (rendering any advertisement which still contains them truly ridiculous).</p>
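<p>Python’s standard library will happily take that 1994 grammar apart; here is how the pieces of a (made-up) URL map onto the components named in the RFC syntax above:</p>

```python
from urllib.parse import urlsplit

# scheme://user:password@host:port/path?query#fragment
parts = urlsplit("http://user:password@example.com:8080/path/to/doc?q=1#section")

print(parts.scheme)    # http
print(parts.username)  # user
print(parts.hostname)  # example.com
print(parts.port)      # 8080
print(parts.path)      # /path/to/doc
print(parts.query)     # q=1
print(parts.fragment)  # section
```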
    <div>
      <h3>Path</h3>
      <a href="#path">
        
      </a>
    </div>
    <blockquote><p>I do not think the question is whether people can learn the meaning of the URL, I just find it morally abhorrent to force grandma or grandpa to understand what, in the end, are UNIX file system conventions.</p><p>— Israel del Rio <a href="https://lists.w3.org/Archives/Public/www-talk/1996JanFeb/0041.html"><code><u>1996</u></code></a></p></blockquote><p>The slash separated path component of a URL should be familiar to any user of any computer built in the last fifty years. The hierarchical filesystem itself was introduced by the <a href="http://www.multicians.org/">MULTICS</a> system. Its creator, in turn, attributes it to <a href="http://www.csl.sri.com/users/neumann/">a two hour conversation with Albert Einstein</a> he had in 1952.</p><p>MULTICS used the greater than symbol (<code>&gt;</code>) to separate file path components. For example:</p>
            <pre><code>&gt;usr&gt;bin&gt;local&gt;awk</code></pre>
            <p>That was perfectly logical, but unfortunately the Unix folks <a href="https://www.bell-labs.com/usr/dmr/www/cacm.html">decided</a> to use <code>&gt;</code> to represent redirection, delegating path separation to the forward slash (<code>/</code>).</p>
    <div>
      <h3>Snapchat the Supreme Court</h3>
      <a href="#snapchat-the-supreme-court">
        
      </a>
    </div>
    <blockquote><p>Wrong. We are I now see clearly *disagreeing*. You and I.</p><p>...</p><p>As a person I reserve the right to use different criteria for different purposes. I want to be able to give names to generic works, AND to particular translations AND to particular versions. I want a richer world than you propose. I don’t want to be constrained by your two-level system of “documents” and “variants”.</p><p>— Tim Berners-Lee <a href="http://1997.webhistory.org/www.lists/www-talk.1993q3/1003.html"><code><u>1993</u></code></a></p></blockquote><p><a href="http://journals.cambridge.org/action/displayAbstract?fromPage=online&amp;aid=9282809&amp;fileId=S1472669614000255">One half</a> of the URLs referenced by US Supreme Court opinions point to pages which no longer exist. If you were reading an academic paper in 2011, written in 2001, you had better than even odds that any given URL wouldn’t <a href="http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-14-S14-S5">be valid</a>.</p><p>There was <a href="http://1997.webhistory.org/www.lists/www-talk.1993q2/0234.html">a fervent belief</a> in 1993 that the URL would die, in favor of the ‘URN’. The Uniform Resource Name is a permanent reference to a given piece of content which, unlike a URL, will never change or break. Tim Berners-Lee first described the “urgent need” for them as early as <a href="http://1997.webhistory.org/www.lists/www-talk.1991/0018.html">1991</a>.</p><p>The simplest way to craft a URN might be to simply use a cryptographic hash of the contents of the page, for example: <code>urn:791f0de3cfffc6ec7a0aacda2b147839</code>. This method didn’t meet the criteria of the web community, though, as it wasn’t really possible to figure out who to ask to turn that hash into a piece of real content. It also didn’t account for the format changes which often happen to files (compressed vs uncompressed for example) which nevertheless represent the same content.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/U6NLQ1PisI9chJnfB9WRX/2a1e5eea553fce184f64f5e3021531e7/12859_2013_Article_6083_Fig2_HTML.jpg" />
          </figure><p>In 1996 Keith Shafer and several others proposed a solution to the problem of broken URLs. <a href="http://purl.oclc.org/OCLC/PURL/INET96">The link</a> to this solution is now broken. Roy Fielding posted an implementation suggestion in July of 1995. <a href="http://ftp.ics.uci.edu/pub/ietf/uri/draft-ietf-uri-roy-urn-urc-00.txt">That link</a> is now broken.</p><p>I was able to find these pages through Google, which has functionally made page titles the URN of today. The URN format was ultimately finalized in 1997, and has essentially never been used since. The implementation is itself interesting. Each URN is composed of two components, an <code>authority</code> who can resolve a given type of URN, and the specific ID of this document in whichever format the <code>authority</code> understands. For example, <code>urn:isbn:0131103628</code> will identify a book, forming a permanent link which can (hopefully) be turned into a set of URLs by your local <code>isbn</code> resolver.</p><p>Given the power of search engines, it’s possible the best URN format today would be a simple way for files to point to their former URLs. We could allow the search engines to index this information, and link us as appropriate:</p>
            <pre><code>&lt;!-- On http://zack.is/history --&gt;
&lt;link rel="past-url" href="http://zackbloom.com/history.html"&gt;
&lt;link rel="past-url" href="http://zack.is/history.html"&gt;</code></pre>
            
    <div>
      <h3>Query Params</h3>
      <a href="#query-params">
        
      </a>
    </div>
    <blockquote><p>The "application/x-www-form-urlencoded" format is in many ways an aberrant monstrosity, the result of many years of implementation accidents and compromises leading to a set of requirements necessary for interoperability, but in no way representing good design practices.</p><p>— <a href="https://url.spec.whatwg.org/#application/x-www-form-urlencoded">WhatWG URL Spec</a></p></blockquote><p>If you’ve used the web for any period of time, you are familiar with query parameters. They follow the path portion of the URL, and encode options like <code>?name=zack&amp;state=mi</code>. It may seem odd to you that queries use the ampersand character (<code>&amp;</code>) which is the same character used in HTML to encode special characters. In fact, if you’ve used HTML for any period of time, you likely have had to encode ampersands in URLs, turning <code>http://host/?x=1&amp;y=2</code> into <code>http://host/?x=1&amp;amp;y=2</code> or <code>http://host?x=1&amp;#38;y=2</code> (that particular confusion has <a href="http://1997.webhistory.org/www.lists/www-talk.1992/0447.html">always existed</a>).</p><p>You may have also noticed that cookies follow a similar, but different format: <code>x=1;y=2</code> which doesn’t actually conflict with HTML character encoding at all. This idea was not lost on the W3C, who encouraged implementers to support <code>;</code> as well as <code>&amp;</code> in query parameters as early as <a href="https://tools.ietf.org/html/rfc1866#section-8.2.1">1995</a>.</p><p>Originally, this section of the URL was strictly used for searching ‘indexes’. The Web was originally created as (and its funding was premised on it being) a method of collaboration for high energy physicists. This is not to say Tim Berners-Lee didn’t know he was really creating a general-purpose communication tool. 
He <a href="http://1997.webhistory.org/www.lists/www-talk.1993q1/0286.html">didn’t add support</a> for tables for years, which is probably something physicists would have needed.</p><p>In any case, these ‘physicists’ needed a way of encoding and linking to information, and a way of searching that information. To provide that, Tim Berners-Lee created the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/isindex"><code>&lt;ISINDEX&gt;</code></a> tag. If <code>&lt;ISINDEX&gt;</code> appeared on a page, it would inform the browser that this is a page which can be searched. The browser should show a search field, and allow the user to send a query to the server.</p><p>That <a href="https://www.w3.org/History/19921103-hypertext/hypertext/WWW/Addressing/Search.html">query</a> was formatted as keywords separated by plus characters (<code>+</code>):</p>
            <pre><code>http://cernvm/FIND/?sgml+cms</code></pre>
            <p>In fantastic Internet fashion, this tag was quickly abused to do all manner of things including providing an input to calculate square roots. It was <a href="https://lists.w3.org/Archives/Public/www-talk/1992NovDec/0042.html">quickly proposed</a> that perhaps this was too specific, and we really needed a general purpose <code>&lt;input&gt;</code> tag.</p><p>That particular proposal actually uses plus signs to separate the components of what otherwise looks like a modern GET query:</p>
            <pre><code>http://somehost.somewhere/some/path?x=xxxx+y=yyyy+z=zzzz</code></pre>
            <p>This was far from universally acclaimed. <a href="https://lists.w3.org/Archives/Public/www-talk/1992NovDec/0032.html">Some believed</a> we needed a way of saying that the content on the other side of links should be searchable:</p>
            <pre><code>&lt;a HREF="wais://quake.think.com/INFO" INDEX=1&gt;search&lt;/a&gt;</code></pre>
            <p>Tim Berners-Lee <a href="https://lists.w3.org/Archives/Public/www-talk/1992NovDec/0044.html">thought</a> we should have a way of defining strongly-typed queries:</p>
            <pre><code>&lt;ISINDEX TYPE="iana:/www/classes/query/personalinfo"&gt;</code></pre>
            <p>I can be somewhat confident in saying, in retrospect, that I am glad the more generic solution won out.</p><p>The real work on <code>&lt;INPUT&gt;</code> <a href="http://1997.webhistory.org/www.lists/www-talk.1993q1/0079.html">began</a> in January of 1993 based on an older SGML type. It was (perhaps unfortunately) <a href="http://1997.webhistory.org/www.lists/www-talk.1993q1/0085.html">decided</a> that <code>&lt;SELECT&gt;</code> inputs needed a separate, richer structure:</p>
            <pre><code>&lt;select name=FIELDNAME type=CHOICETYPE [value=VALUE] [help=HELPUDI]&gt; 
    &lt;choice&gt;item 1
    &lt;choice&gt;item 2
    &lt;choice&gt;item 3
&lt;/select&gt;</code></pre>
            <p>If you’re curious, reusing <code>&lt;li&gt;</code>, rather than introducing the <code>&lt;option&gt;</code> element, was <a href="http://1997.webhistory.org/www.lists/www-talk.1993q2/0188.html">absolutely</a> considered. There were, of course, alternative form proposals. <a href="http://1997.webhistory.org/www.lists/www-talk.1993q2/0168.html">One</a> included some variable substitution evocative of what Angular might do today:</p>
            <pre><code>&lt;ENTRYBLANK TYPE=int LENGTH=length DEFAULT=default VAR=lval&gt;Prompt&lt;/ENTRYBLANK&gt;
&lt;QUESTION TYPE=float DEFAULT=default VAR=lval&gt;Prompt&lt;/QUESTION&gt;
&lt;CHOICE DEFAULT=default VAR=lval&gt;
    &lt;ALTERNATIVE VAL=value1&gt;Prompt1 ...
    &lt;ALTERNATIVE VAL=valuen&gt;Promptn
&lt;/CHOICE&gt;</code></pre>
            <p>In this example the inputs are checked against the type specified in <code>type</code>, and the <code>VAR</code> values are available on the page for use in <a href="http://1997.webhistory.org/www.lists/www-talk.1993q2/0150.html">string substitution</a> in URLs, à la:</p>
            <pre><code>http://cloudflare.com/apps/$appId</code></pre>
            <p>Additional <a href="http://1997.webhistory.org/www.lists/www-talk.1993q2/0188.html">proposals</a> actually used <code>@</code>, rather than <code>=</code>, to separate query components:</p>
            <pre><code>name@value+name@(value&amp;value)</code></pre>
            <p>It was Marc Andreessen who <a href="http://1997.webhistory.org/www.lists/www-talk.1993q3/0812.html">suggested</a> our current method based on what he had already implemented in Mosaic:</p>
            <pre><code>name=value&amp;name=value&amp;name=value</code></pre>
            <p>Just <a href="http://1997.webhistory.org/www.lists/www-talk.1993q4/0437.html">two months later</a> Mosaic would add support for <code>method=POST</code> forms, and ‘modern’ HTML forms were born.</p><p>Of course, it was also Marc Andreessen’s company <a href="https://web.archive.org/web/19990421025406/http://home.mcom.com/newsref/std/cookie_spec.html">Netscape</a> which would create the cookie format (using a different separator). Their proposal was itself painfully shortsighted; it led to the attempt to introduce a <a href="https://www.ietf.org/rfc/rfc2965.txt"><code>Set-Cookie2</code></a> header, and introduced fundamental structural issues we still deal with at Cloudflare to this day.</p>
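<p>Andreessen’s <code>name=value&amp;name=value</code> format is what standard libraries still speak today. A quick sketch with Python’s <code>urllib.parse</code>:</p>

```python
from urllib.parse import parse_qs, urlencode

# Encoding a mapping produces the Mosaic-era format unchanged.
print(urlencode({"name": "zack", "state": "mi"}))  # name=zack&state=mi

# Parsing maps each name to a list, since a name may legally repeat.
print(parse_qs("name=zack&state=mi"))  # {'name': ['zack'], 'state': ['mi']}

# An ampersand inside a value must itself be percent-encoded.
print(urlencode({"q": "fish & chips"}))  # q=fish+%26+chips
```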
    <div>
      <h3>Fragments</h3>
      <a href="#fragments">
        
      </a>
    </div>
    <p>The portion of the URL following the ‘#’ is known as the fragment. Fragments have been a part of URLs since their <a href="https://www.w3.org/History/19921103-hypertext/hypertext/WWW/Addressing/Addressing.html">initial specification</a>, used to link to a specific location on the page being loaded. For example, if I have an anchor on my site:</p>
            <pre><code>&lt;a name="bio"&gt;&lt;/a&gt;</code></pre>
            <p>I can link to it:</p>
            <pre><code>http://zack.is/#bio</code></pre>
            <p>This concept was gradually extended to any element (rather than just anchors), and moved to the <code>id</code> attribute rather than <code>name</code>:</p>
            <pre><code>&lt;h1 id="bio"&gt;Bio&lt;/h1&gt;</code></pre>
            <p>Tim Berners-Lee decided to use this character based on its connection to addresses in the United States (despite the fact that he’s British by birth). In <a href="https://www.w3.org/People/Berners-Lee/FAQ.html#etc">his words</a>:</p><blockquote><p>In a snail mail address in the US at least, it is common to use the number sign for an apartment number or suite number within a building. So 12 Acacia Av #12 means “The building at 12 Acacia Av, and then within that the unit known numbered 12”. It seemed to be a natural character for the task. Now, http://www.example.com/foo#bar means “Within resource http://www.example.com/foo, the particular view of it known as bar”.</p></blockquote><p>It turns out that the <a href="https://en.wikipedia.org/wiki/NLS_(computer_system)">original Hypertext system</a>, created by Douglas Engelbart, also used the ‘#’ character for the same purpose. This may be coincidental or it could be a case of accidental “idea borrowing”.</p><p>Fragments are explicitly not included in HTTP requests, meaning they only live inside the browser. This concept proved very valuable when it came time to implement client-side navigation (before <a href="https://developer.mozilla.org/en-US/docs/Web/API/History_API">pushState</a> was introduced). Fragments were also very valuable when it came time to think about how we can store state in URLs without actually sending it to the server. What could that mean? Let’s explore:</p>
    <div>
      <h3>Molehills and Mountains</h3>
      <a href="#molehills-and-mountains">
        
      </a>
    </div>
    <blockquote><p>There is a whole standard, as yukky as SGML, on Electronic data Intercahnge [sic], meaning forms and form submission. I know no more except it looks like fortran backwards with no spaces.</p><p>— Tim Berners-Lee <a href="http://1997.webhistory.org/www.lists/www-talk.1993q1/0091.html"><code>1993</code></a></p></blockquote><p>There is a popular perception that the internet standards bodies didn’t do much from the finalization of HTTP 1.1 and HTML 4.01 in 2002 to when HTML 5 really got on track. This period is also known (only by me) as the Dark Age of XHTML. The truth is though, the standardization folks were <i>fantastically busy</i>. They were just doing things which ultimately didn’t prove all that valuable.</p><p>One such effort was the Semantic Web. The dream was to create a Resource Description Framework (editorial note: run away from any team which seeks to create a framework), which would allow metadata about content to be universally expressed. For example, rather than creating a nice web page about my Corvette Stingray, I could make an RDF document describing its size, color, and the number of speeding tickets I had gotten while driving it.</p><p>This is, of course, in no way a bad idea. But the format was XML based, and there was a big chicken-and-egg problem between having the entire world documented, and having the browsers do anything useful with that documentation.</p><p>It did however provide a powerful environment for philosophical argument. One of the best such arguments lasted at least ten years, and was known by the masterful codename ‘<a href="https://www.w3.org/2001/tag/issues.html#httpRange-14">httpRange-14</a>’.</p><p>httpRange-14 sought to answer the fundamental question of what a URL is. Does a URL always refer to a document, or can it refer to anything? Can I have a URL which points to my car?</p><p>They didn’t attempt to answer that question in any satisfying manner. 
Instead they focused on how and when we can use 303 redirects to point users from links which aren’t documents to ones which are, and when we can use URL fragments (the bit after the ‘#’) to <a href="http://blog.iandavis.com/2010/11/a-guide-to-publishing-linked-data-without-redirects/">point users to linked data</a>.</p><p>To the pragmatic mind of today, this might seem like a silly question. To many of us, you can use a URL for whatever you manage to use it for, and people will use your thing or they won’t. But the Semantic Web cares for nothing more than semantics, so it was on.</p><p>This particular topic was discussed on <a href="http://www.w3.org/2002/07/01-tag-summary#arch-doc">July 1st 2002</a>, <a href="http://www.w3.org/2002/07/15-tag-summary#L3330">July 15th 2002</a>, <a href="http://www.w3.org/2002/07/22-tag-summary#L3974">July 22nd 2002</a>, <a href="http://www.w3.org/2002/07/29-tag-summary#httpRange-14">July 29th 2002</a>, <a href="http://lists.w3.org/Archives/Public/www-tag/2002Sep/0127">September 16th 2002</a>, and at least 20 other occasions through 2005. It was resolved by the great ‘<a href="https://lists.w3.org/Archives/Public/www-tag/2005Jun/0039.html">httpRange-14 resolution</a>’ of 2005, then reopened by complaints in <a href="https://lists.w3.org/Archives/Public/www-tag/2007Jul/0034.html">2007</a> and <a href="https://lists.w3.org/Archives/Public/public-awwsw/2011Jan/0021.html">2011</a> and <a href="https://www.w3.org/2001/tag/doc/uddp/change-proposal-call.html">a call for new solutions</a> in 2012. The question was heavily discussed by the <a href="https://groups.google.com/forum/#!searchin/pedantic-web/httprange-14/pedantic-web/iLY6VFvN-H0/SXQwc-lOpM8J">pedantic web</a> group, which is very aptly named. The one thing which didn’t happen is all that much semantic data getting put on the web behind any sort of URL.</p>
    <div>
      <h3>Auth</h3>
      <a href="#auth">
        
      </a>
    </div>
    <p>As you may know, you can include a username and password in URLs:</p>
            <pre><code>http://zack:shhhhhh@zack.is</code></pre>
            <p>The browser then encodes this authentication data into <a href="https://en.wikipedia.org/wiki/Base64">Base64</a>, and sends it as a header:</p>
            <pre><code>Authorization: Basic emFjazpzaGhoaGho</code></pre>
            <p>The only reason for the Base64 encoding is to allow characters which might not be valid in a header; it provides no obscurity for the username and password values.</p><p>Particularly over the pre-SSL internet, this was very problematic. Anyone who could snoop on your connection could easily see your password. <a href="http://1997.webhistory.org/www.lists/www-talk.1993q3/0297.html">Many alternatives</a> were proposed, including <a href="https://en.wikipedia.org/wiki/Kerberos_(protocol)">Kerberos</a>, a security protocol widely used both then and now.</p><p>As with so many of these examples though, the simple <a href="http://1997.webhistory.org/www.lists/www-talk.1993q3/0882.html">basic auth proposal</a> was easiest for browser manufacturers (Mosaic) to implement. This made it the first, and ultimately the only, solution until developers were given the tools to build their own authentication systems.</p>
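<p>As an aside, the encoding step the browser performs is trivial to reproduce. A minimal Python sketch (the helper name here is mine, not part of any spec):</p>

```python
import base64

def basic_auth_value(username: str, password: str) -> str:
    """Build the credential value used by HTTP Basic auth.

    Base64 here is purely a transport encoding, so arbitrary
    characters survive the trip inside a header; it is trivially
    reversible and provides no secrecy whatsoever.
    """
    credentials = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

print(basic_auth_value("zack", "shhhhhh"))  # Basic emFjazpzaGhoaGho
```

<p>Decoding is just as easy, which is exactly why snooping on a pre-SSL connection exposed the password.</p>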
    <div>
      <h3>The Web Application</h3>
      <a href="#the-web-application">
        
      </a>
    </div>
    <p>In the world of web applications, it can be a little odd to think of the basis for the web being the hyperlink. It is a method of linking one document to another, which was gradually augmented with styling, code execution, sessions, authentication, and ultimately became the social shared computing experience so many 70s researchers were trying (and failing) to create. Ultimately, the conclusion is just as true for any project or startup today as it was then: all that matters is adoption. If you can get people to use it, however slipshod it might be, they will help you craft it into what they need. The corollary is, of course, that if no one is using it, it doesn’t matter how technically sound it might be. There are countless tools, representing millions of hours of work, which precisely no one uses today.</p><p>This was adapted from a post which originally appeared on the Eager blog. In 2016 Eager became <a href="https://www.cloudflare.com/apps">Cloudflare Apps</a>.</p> ]]></content:encoded>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[History]]></category>
            <guid isPermaLink="false">244DV2Aflr3SROi8emvf7e</guid>
            <dc:creator>Zack Bloom</dc:creator>
        </item>
        <item>
            <title><![CDATA[50 Years of The Internet. Work in Progress to a Better Internet]]></title>
            <link>https://blog.cloudflare.com/50-years-of-the-internet-work-in-progress-to-a-better-internet/</link>
            <pubDate>Tue, 29 Oct 2019 07:16:00 GMT</pubDate>
            <description><![CDATA[ Over fifty years ago, the first network packet took flight from the UCLA campus to the Stanford Research Institute. This kicked off the world of packet networking, ARPANET, and the modern Internet. ]]></description>
            <content:encoded><![CDATA[ <p>It was fifty years ago when the very first network packet took flight from the Los Angeles campus at UCLA to the Stanford Research Institute (SRI) building in Palo Alto. Those two California sites had kicked off the world of packet networking, of the Arpanet, and of the modern Internet as we use and know it today. Yet by the time the third packet had been transmitted that evening, the receiving computer at SRI had crashed. The “L” and “O” from the word “LOGIN” had been transmitted successfully in their packets; but that “G”, wrapped in its own packet, caused the death of that nascent packet network setup. Even today software crashes; that’s a solid fact. But this crash was exactly that: historic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TTNW1HbwIXlZyszMky7js/c82b21d6cba3324249cb711fae72a979/G69Dec.png" />
            
            </figure><p>Courtesy of <a href="http://mercury.lcs.mit.edu/~jnc/tech/arpageo.html"><b>MIT Advanced Network Architecture Group</b></a> </p><p>So much has happened since that day (October 29th, to be exact) in 1969; in fact, it’s an understatement to say “so much has happened”! No single blog article could ever capture the full history of packets from then to now. Here at Cloudflare we say we are helping build a “<b>better Internet</b>”, so it would make perfect sense for us to honor the history of the Arpanet and its successor, the Internet, by focusing on some of the other folks that have helped build a <b>better Internet</b>.</p>
    <div>
      <h3>Leonard Kleinrock, Steve Crocker, and crew - those first packets</h3>
      <a href="#leonard-kleinrock-steve-crocker-and-crew-those-first-packets">
        
      </a>
    </div>
    <p>Nothing takes away from what happened that October day. The move from a circuit-based networking mindset to a packet-based network is momentous. The phrase <a href="https://www.wired.com/1996/10/atm-3/">net-heads vs bell-heads</a> was born that day - and it’s still alive today! The foundation for the Internet as a <a href="https://www.ietf.org/blog/permissionless-innovation/">permissionless innovation</a> was laid the moment that first packet traversed that network fifty years ago.</p><p>Professor Leonard (Len) Kleinrock continued to work on the very basics of packet networking. The network used on that day expanded from two nodes to four nodes (in 1969, one IMP was delivered each month from BBN to various university sites) and created a network that spanned the USA from coast to coast and then beyond.</p><p>In the 1973 map there’s a series of boxes marked TIP. These are a version of the IMP that was used to connect computer terminals along with computers (hosts) to the ARPANET. Every IMP and TIP was managed by Bolt, Beranek and Newman (BBN), based in Cambridge, Mass. This is vastly different from today’s Internet where every network is operated autonomously.</p><p>By 1977 the ARPANET had grown further with links from the United States mainland to Hawaii plus links to Norway and the United Kingdom.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2n5icuY7rTANONAut47vTw/88656911232d57e82de30a2a6a92ad5d/Arpanet_logical_map-_march_1977.png" />
            
            </figure><p>ARPANET logical map 1977 via <a href="https://en.wikipedia.org/wiki/ARPANET">Wikipedia</a></p><p>Focusing back on that day in 1969, Steve Crocker (who was a graduate student at UCLA at that time) headed up the development of the NCP software. The Network Control Program (later remembered as Network Control Protocol) provided the host to host transmission control software stack. Early versions of telnet and FTP ran atop NCP.</p><p>During this journey Len Kleinrock, Steve Crocker, and the other early packet pioneers have always been solid members of the Internet community and continue to contribute daily to a better Internet.</p><p>Steve Crocker and Bill Duvall have written a <a href="/fifty-years-ago/">guest blog</a> about that day fifty years ago. Please read it after you've finished reading this blog.</p><p>BTW: Today, on this 50th anniversary, UCLA is celebrating history via this <a href="http://newsroom.ucla.edu/releases/internet-50-symposium-ucla">symposium</a> (see also <a href="https://samueli.ucla.edu/internet50/">https://samueli.ucla.edu/internet50/</a>).</p><p>Their collective accomplishments are extensive and still relevant today.</p>
    <div>
      <h3>Vint Cerf and Bob Kahn - the creation of TCP/IP</h3>
      <a href="#vint-cerf-and-bob-kahn-the-creation-of-tcp-ip">
        
      </a>
    </div>
    <p>In 1973 Vint Cerf was asked to work on a protocol to replace the original NCP protocol. The new protocol is now known as TCP/IP. Of course, everyone had to move from NCP to TCP and that was outlined in <a href="https://tools.ietf.org/html/rfc801">RFC801</a>. At the time (1982 and 1983) there were around 200 to 250 hosts on the ARPANET, yet that transition was still a major undertaking.</p><p>Finally, on January 1st, 1983, fourteen years after that first packet flowed, the NCP protocol was retired and TCP/IP was enabled. The ARPANET got what would become the Internet’s first large scale addressing scheme (IPv4). This was better in so many ways; but in reality, this transition was just one more stepping stone towards our modern and better Internet.</p>
    <div>
      <h3>Jon Postel - The RFCs, The numbers, The legacy</h3>
      <a href="#jon-postel-the-rfcs-the-numbers-the-legacy">
        
      </a>
    </div>
    <p>Some people write code, some people write documents, some people organize documents, some people organize numbers. Jon Postel did all of these things. Jon was the first person to be in charge of allocating numbers (you know - IP addresses) back in the early 80’s. In a way it was a thankless job that no one else wanted to do. Jon was also the keeper of the early documents (Requests For Comments, or RFCs) that describe how the packet network should operate. Everything was available so that anyone could write code and join the network. Everyone was also able to write a fresh document (or update an existing document) so that the ecosystem of the Arpanet could grow. Some of those documents are still in existence and referenced today. <a href="https://tools.ietf.org/html/rfc791">RFC791</a> defines the IP protocol and is dated 1981 - it’s still an active document in use today! Those early days and Jon’s massive contributions have been <a href="https://www.internethalloffame.org/inductees/jon-postel">well documented and acknowledged</a>. A better Internet is impossible without these conceptual building blocks.</p><p>Jon passed away in 1998; however, his legacy and his thoughts are still in active use today. Speaking of TCP, he once said: “<i>Be conservative in what you send, be liberal in what you accept</i>”. This is called the <a href="https://en.wikipedia.org/wiki/Robustness_principle">robustness principle</a> and it’s still key to writing good network protocol software.</p>
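<p>The robustness principle is easiest to see in code. A toy Python sketch (the function names are mine) using HTTP-style header lines: accept sloppy input liberally, but emit only the canonical form:</p>

```python
def parse_header_line(line: str) -> tuple:
    """Liberal in what you accept: tolerate CRLF or LF endings,
    odd capitalization, and stray whitespace around the colon."""
    name, _, value = line.rstrip("\r\n").partition(":")
    return name.strip().lower(), value.strip()

def emit_header_line(name: str, value: str) -> str:
    """Conservative in what you send: canonical capitalization,
    a single separator, and a CRLF terminator."""
    canonical = "-".join(part.capitalize() for part in name.split("-"))
    return f"{canonical}: {value}\r\n"

# Messy input is understood; output is always tidy.
assert parse_header_line("content-TYPE :text/html\n") == ("content-type", "text/html")
assert emit_header_line("content-type", "text/html") == "Content-Type: text/html\r\n"
```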
    <div>
      <h3>Bill Joy &amp; crew - Berkeley BSD Unix 4.2 and its TCP/IP software</h3>
      <a href="#bill-joy-crew-berkeley-bsd-unix-4-2-and-its-tcp-ip-software">
        
      </a>
    </div>
    <p>What’s the use of a protocol if you don’t have software to speak it? In the early 80’s there were many efforts to build both affordable and fast hardware, along with the software to speak to that hardware. At the University of California, Berkeley (UCB) there was a group of software developers tasked in 1980 by the Defense Advanced Research Projects Agency (DARPA) to implement the brand-new TCP/IP protocol stack on the VAX under Unix. They not only solved that task; they went a long way beyond that goal.</p><p>The folks at UCB (Bill Joy, Marshall Kirk McKusick, Keith Bostic, Michael Karels, and others) created an operating system called 4.2BSD (Berkeley Software Distribution) that came with TCP/IP ingrained in its core. It was based on AT&amp;T’s Unix v6 and Unix/32V; however, it had deviated significantly in many ways. The networking code, or sockets as its interface is called, became the underlying building blocks of each and every piece of networking software in the modern world of the Internet. We at Cloudflare have written numerous times about <a href="/revenge-listening-sockets/">networking kernel code</a> and it all boils down to the code that was written back at UCB. Bill Joy went on to be a founder of Sun Microsystems (which commercialized 4.2BSD and much more). Others from UCB went on to help build other companies that still are relevant to the Internet today.</p><p>Fun fact: Berkeley’s Unix (or FreeBSD, OpenBSD, NetBSD as its variants are known) is now the basis of the software on every iPhone, iPad, and Mac laptop in existence. Android and Chromebooks come from a different lineage, but still hold those BSD methodologies as the fundamental basis of all their networking software.</p>
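<p>Those Berkeley sockets survive essentially unchanged today. Python’s <code>socket</code> module, for instance, is a thin wrapper over the very calls 4.2BSD introduced (socket, bind, listen, accept, connect); a minimal loopback sketch:</p>

```python
import socket

# socket/bind/listen/accept on one side, socket/connect on the other:
# this call sequence is the BSD sockets API, unchanged since 4.2BSD.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, _ = server.accept()

client.sendall(b"LO")           # a nod to those first Arpanet characters
data = b""
while len(data) < 2:            # recv may return fewer bytes than asked for
    data += conn.recv(2 - len(data))

for s in (conn, client, server):
    s.close()
```

<p>Swap the loopback address for a remote host and this is, at bottom, what every browser, mail client, and server on the Internet still does.</p>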
    <div>
      <h3>Al Gore - The Information Superhighway - or retold as “funding the Internet”</h3>
      <a href="#al-gore-the-information-superhighway-or-retold-as-funding-the-internet">
        
      </a>
    </div>
    <p>Do you believe that Al Gore invented the Internet? It actually doesn’t matter which side of this statement you want to argue; the simple fact is that the US Government funded the National Science Foundation (NSF) with the task of building an “information superhighway”. Al Gore himself said: “<i>how do we create a nationwide network of information superhighways? Obviously, the private sector is going to do it, but the Federal government can catalyze and accelerate the process</i>.” He said that on September 19, 1994, and this blog post’s author knows that fact because I was there in the room when he said it!</p><p>The United States Federal Government helped fund the growth of the Arpanet into the early version of the Internet. Without the government's efforts, we might not be where we are today. Luckily, just a handful of years later, the NSF decided that in fact the commercial world could and should be the main building blocks for the Internet, and instantly the Internet as we know it today was born. Packets that fly across commercial backbones are paid for via commercial contracts. The parts that are still funded by the government (any government) are normally only the parts used by universities, or military users.</p><p>But this author is still going to thank Al Gore for helping create a better Internet back in the early 90’s.</p>
    <div>
      <h3>Sir Tim Berners-Lee - The World Wide Web</h3>
      <a href="#sir-tim-berners-lee-the-world-wide-web">
        
      </a>
    </div>
    <p>What can I say? In 1989 Tim Berners-Lee (who was later knighted and is now Sir Tim) invented the World Wide Web and we would not have billions of people using the Internet today without him. Period!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7lWqjPOt0fjRD8A5Mnp7s9/97fb2df2b62ecd0b161506613d662095/pasted-image-0--1--1.png" />
            
            </figure><p>via <a href="https://www.reddit.com/r/pcmasterrace/comments/b086lz/tim_bernerslee_and_vint_cerf_wearing_funny_shirts/">Reddit</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6kgtCNxHJv59vs7lcxDNes/598897c195737d200d111105ad70a681/pasted-image-0--2--1.png" />
            
            </figure><p>via <a href="https://www.reddit.com/r/pcmasterrace/comments/b086lz/tim_bernerslee_and_vint_cerf_wearing_funny_shirts/">Reddit</a></p><p>Yeah, let's clear up that subtle point. Sir Tim invented the World Wide Web (WWW) and Vint Cerf invented the Internet. When folks talk about using one or the other, it’s worth reminding them that there is a difference. But I digress!</p><p>Sir Tim’s creation is what provides everyday folks with a window into information on the Internet. Before the WWW we had textual interfaces to information; but only if you knew where to look and what to type. We really need to remember, every time we click on a link or press submit to buy something, that the only reason it is usable in such a mass and uniform form is Sir Tim’s creation.</p>
    <div>
      <h3>Sally Floyd - The subtle art of dropping packets</h3>
      <a href="#sally-floyd-the-subtle-art-of-dropping-packets">
        
      </a>
    </div>
    <p>Random Early Detection (RED) is an algorithm that saved the Internet back in the early 90’s. Built on earlier work by Van Jacobson, it defined a method to drop packets when a router was overloaded, or, more importantly, about to be overloaded. Packet networks, before Van Jacobson’s and Sally Floyd’s work, would congest heavily and slow down. It seemed natural to never throw away data; but between the two inventors of RED, that all changed. Her follow-up work is described in an August 1993 <a href="https://www.icir.org/floyd/papers/early.twocolumn.pdf">paper</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1wLPHC4YYSovaP5xyQYTok/fc7da8142c0b5249844ed839cf412b99/download.png" />
            
            </figure><p>Networks have become much more complex since August 1993, yet the RED code still exists and is used in nearly every Unix or Linux kernel today. See the <a href="http://man7.org/linux/man-pages/man8/tc-red.8.html">tc-red(8)</a> command and/or the Linux kernel <a href="https://github.com/torvalds/linux/blob/master/net/sched/sch_red.c">code</a> itself.</p><p>It was with great sorrow that we learned Sally Floyd <a href="https://www.nytimes.com/2019/09/04/science/sally-floyd-dead.html">passed away</a> in late August. But, rest assured, her algorithm will quite possibly help keep a better Internet flowing smoothly forever.</p>
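<p>The core of RED fits in a few lines: keep an exponentially weighted average of the queue length, and drop arriving packets with a probability that rises linearly between two thresholds. A simplified Python sketch (parameter values are illustrative; the real algorithm also spaces drops out by counting packets since the last one):</p>

```python
import random

class Red:
    """Toy Random Early Detection (RED) drop decision."""

    def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, w_q=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w_q = max_p, w_q
        self.avg = 0.0  # exponentially weighted average queue length

    def should_drop(self, queue_len: int) -> bool:
        # Update the moving average on every arriving packet.
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
        if self.avg < self.min_th:
            return False        # short queue: never drop
        if self.avg >= self.max_th:
            return True         # long queue: always drop
        # In between, drop probability rises linearly toward max_p.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

red = Red()
assert red.should_drop(1) is False   # a lightly loaded router admits the packet
```

<p>The counterintuitive part, and Floyd and Jacobson’s insight, is that dropping a few packets <i>early</i>, before the queue is full, signals senders to slow down and keeps the whole network flowing.</p>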
    <div>
      <h3>Jay Adelson and Al Avery - The datacenters that interconnect networks</h3>
      <a href="#jay-adelson-and-al-avery-the-datacenter-that-interconnect-networks">
        
      </a>
    </div>
    <p>Remember that comment by Al Gore above saying that the private sector would build the Internet? Back in the late 90’s that’s exactly what happened. Telecom companies were selling capacity to fledgling ISPs. Nationwide IP backbones were being built by the likes of PSI, Netcom, UUnet, Digex, CAIS, ANS, etc. The telcos themselves, like MCI and Sprint (but interestingly not AT&amp;T at the time), were getting into providing Internet access in a big way.</p><p>In the US everything was moving very fast. By the mid-90’s there was no way anymore for your shiny new ISP to get a connection from a regional research network. Everything had gone commercial and the NSF-funded parts of the Internet were not available for commercial packets.</p><p>The NSF, in its goal to allow commercial networks to build the Internet, had also specified that those networks should interconnect at four locations around the country: New Jersey, Chicago, the San Francisco Bay Area, and the Washington, DC area.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/J2XRuQgRjwo6FJLsLFkFC/3f4fd8bb86f17c0931dd213c37d79f75/NewNSFNETArchitecture.jpg" />
            
            </figure><p>Network Access Point via <a href="https://en.wikipedia.org/wiki/Network_access_point">Wikipedia</a></p><p>The NAPs, as they were called, were to provide interconnection between networks and to give the research networks a way to interconnect with commercial networks and with each other. The NAPs suddenly exploded in usage, near-instantly needing to be bigger. The buildings they were housed in ran out of space or power or both! Yet those networks needed homes, interconnections needed a better structure, and the old buildings that were housing the Internet’s routers just didn’t cut it anymore.</p><p>Jay and Al had a vision. New massive datacenters that could securely house the growing need for the power-hungry Internet. But that’s only a small portion of the vision. They realized that if many networks all lived under the same roof then interconnecting them could indeed build a better Internet. They installed Internet Exchanges and a standardized way of cross-connecting from one network to another. They were carrier neutral, so that everyone was treated equally. It was what became known as the “<i>network effect</i>” and it was a success. The more networks you had under one roof, the more that other networks would want to be housed within those same roofs. The company they created was (and still is) called Equinix. It wasn’t the first company to realize this; but it sure has become one of the biggest and most successful in this arena.</p><p>Today, a vast amount of the Internet uses Equinix datacenters and its IXs, along with similar offerings from similar companies. Jay and Al’s vision absolutely paved the way to a better internet.</p>
    <div>
      <h3>Everyone who’s a member of The Internet Society 1992-Today</h3>
      <a href="#everyone-whos-a-member-of-the-internet-society-1992-today">
        
      </a>
    </div>
    <p>It turns out that people realized that the modern Internet is not all-commercial all-the-time. Other influences need a say as well. Civil society, governments, and academics, along with those commercial entities, should all have a say in how the Internet evolves. This brings into the conversation a myriad of people that have either been members of The Internet Society (ISOC) and/or have worked directly for ISOC over its 27+ years. This is the organization that manages and helps fund the IETF (where protocols are discussed and standardized). ISOC plays a decisive role at The Internet Governance Forum (IGF), and fosters a clear understanding of how the Internet should be used and protected among both the general public and regulators worldwide. ISOC’s involvement with Internet Exchange development (vital as the Internet grows and connects users and content) has been a game changer for many, many countries, especially in Africa.</p><p>ISOC has an interesting funding mechanism centered around the dotORG domain. You may not have realized that you were helping the Internet grow when you <a href="https://www.cloudflare.com/products/registrar/">registered</a> and paid for your <a href="https://www.cloudflare.com/application-services/products/registrar/buy-org-domains/">.org domain</a>; however, you were!</p><p>Over the life of ISOC, the Internet has moved from being the domain of engineers and scientists into something used by nearly everyone, independent of technical skill or, in fact, a full understanding of its inner workings. ISOC’s mission is "<i>to promote the open development, evolution and use of the Internet for the benefit of all people throughout the world</i>". It has been a solid part of that growth.</p><p>Giving voice to everyone on how the Internet could grow and how it should (or should not) be regulated is front-and-center for every person involved with ISOC globally. Defining both an inclusive Internet and a better Internet is the everyday job of those people.</p>
    <div>
      <h3>Kanchana Kanchanasut - Thailand and .TH</h3>
      <a href="#kanchana-kanchanasut-thailand-and-th">
        
      </a>
    </div>
    <p>In 1988, amongst other things, Professor Kanchana Kanchanasut registered and operated the country Top Level Domain .TH (which is the two-letter <a href="https://www.iso.org/iso-3166-country-codes.html">ISO 3166</a> code for Thailand). This made Thailand one of the first countries to have its own TLD; something all countries take for granted today.</p><p>Also in 1988, five Thai universities got dial-up connections to the Internet because of her work. However, the real breakthrough came when Prof. Kanchanasut’s efforts led to the first leased line interconnecting Thailand to the nascent Internet of the early 90’s. That was 1991 and since then Thailand’s connectivity has exploded. It’s an amazingly well connected country. Today it boasts a plethora of mobile operators, and international undersea and cross-border cables, along with Prof. Kanchanasut’s present-day work spearheading an independent and growing Internet Exchange within Thailand.</p><p>In 2013, the "<i>Mother of the Internet in Thailand</i>", as she is affectionately called, was <a href="https://www.eurekalert.org/pub_releases/2013-07/aiot-apk070813.php">inducted</a> into the Internet Hall of Fame by the Internet Society. If you’re in Thailand, or South East Asia, then she’s the reason why you have a better Internet.</p>
    <div>
      <h3>The list continues</h3>
      <a href="#the-list-continues">
        
      </a>
    </div>
    <p>In the fifty years since that first packet there have been heroes, both silent and profoundly vocal, who have moved the Internet forward. There’s no way all could be named or called out here; however, you will find many listed if you go looking. Wander through the thousands of RFCs, or check out the <a href="https://www.internethalloffame.org/">Internet Hall of Fame</a>. The Internet today is a better Internet because anyone can be a contributor.</p>
    <div>
      <h3>Cloudflare and the better Internet</h3>
      <a href="#cloudflare-and-the-better-internet">
        
      </a>
    </div>
    <p>Cloudflare, or in fact any part of the Internet, would not be where it is today without the groundbreaking work of these people plus many others unnamed here. This fifty year effort has moved the needle in such a way that without all of them the runaway success of the Internet could not have been possible!</p><p>Cloudflare is just over <a href="/birthday-week-2019/">nine years old</a> (that’s only 18% of this fifty year period). Gazillions and gazillions of packets have flowed since Cloudflare started providing its services, and we sincerely believe we have done our part with those services to build a better Internet. Oh, and we haven’t finished our work, far from it! We still have a long way to go in helping build a better Internet. <a href="/founders-letter/">And we’re just getting started</a>!</p><blockquote><p>A letter from Matthew Prince (<a href="https://twitter.com/eastdakota?ref_src=twsrc%5Etfw">@eastdakota</a>) and Michelle Zatlyn (<a href="https://twitter.com/zatlyn?ref_src=twsrc%5Etfw">@zatlyn</a>) <a href="https://twitter.com/hashtag/BetterInternet?src=hash&amp;ref_src=twsrc%5Etfw">#BetterInternet</a> <a href="https://twitter.com/search?q=%24NET&amp;src=ctag&amp;ref_src=twsrc%5Etfw">$NET</a> <a href="https://t.co/BHLI8MuuTS">https://t.co/BHLI8MuuTS</a> <a href="https://t.co/Jirb0bPUzJ">pic.twitter.com/Jirb0bPUzJ</a></p><p>— Cloudflare (@Cloudflare) <a href="https://twitter.com/Cloudflare/status/1172495649042046976?ref_src=twsrc%5Etfw">September 13, 2019</a></p></blockquote><p>If you’re interested in helping build a better Internet and want to join Cloudflare in our offices in San Francisco, Singapore, London, Austin, Sydney, Champaign, Munich, San Jose, New York or our new Lisbon, Portugal offices, then buzz over to our <a href="https://www.cloudflare.com/careers/">jobs</a> page and come join us! #betterInternet</p> ]]></content:encoded>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[History]]></category>
            <guid isPermaLink="false">5JGPsv6uz47ZFRnD7EAdGF</guid>
            <dc:creator>Martin J Levy</dc:creator>
        </item>
        <item>
            <title><![CDATA[Fifty Years Ago]]></title>
            <link>https://blog.cloudflare.com/fifty-years-ago/</link>
            <pubDate>Tue, 29 Oct 2019 07:15:00 GMT</pubDate>
            <description><![CDATA[ On 29 October 2019, Professor Leonard (“Len”) Kleinrock is chairing a celebration at the University of California, Los Angeles (UCLA).  The date is the fiftieth anniversary of the first full system test and remote host-to-host login over the Arpanet. ]]></description>
            <content:encoded><![CDATA[ <p><i>This is a guest post by Steve Crocker of Shinkuro, Inc. and Bill Duvall of Consulair. Fifty years ago they were both present when the first packets flowed on the Arpanet.</i></p><p>On 29 October 2019, Professor Leonard (“Len”) Kleinrock is chairing a celebration at the University of California, Los Angeles (UCLA).  The date is the fiftieth anniversary of the first full system test and remote host-to-host login over the Arpanet.  Following a brief crash caused by a configuration problem, a user at UCLA was able to log in to the SRI SDS 940 time-sharing system.  But let us paint the rest of the picture.</p><p>The Arpanet was a bold project to connect sites within the ARPA-funded computer science research community and to use packet-switching as the technology for doing so.  Although there were parallel packet-switching research efforts around the globe, none were at the scale of the Arpanet project. Cooperation among researchers in different laboratories, applying multiple machines to a single problem and sharing of resources were all part of the vision.  And over the fifty years since then, the vision has been fulfilled, albeit with some undesired outcomes mixed in with the enormous benefits.  However, in this blog, we focus on just those early days.</p><p>In September 1969, Bolt, Beranek and Newman (BBN) in Cambridge, MA delivered the first Arpanet IMP (packet switch) to Len Kleinrock’s laboratory at UCLA. The Arpanet incorporated his theoretical work on packet switching and UCLA was chosen as the network measurement site for validation of his theories.  The second IMP was installed a month later at Doug Engelbart’s laboratory at the Stanford Research Institute – now called SRI International – in Menlo Park, California.  Engelbart had invented the mouse and his lab had developed a graphical interface for structured and hyperlinked text.  
Engelbart’s vision saw computer users sharing information over a wide-scale network, so the Arpanet was a natural candidate for his work. Today, we have seen that vision travel from SRI to Xerox to Apple to Microsoft, and it is now a part of everyone’s environment.</p><p>“IMP” stood for Interface Message Processor; we would now simply say “router.” Each IMP was connected to up to four host computers.  At UCLA the first host was a Scientific Data Systems (SDS) Sigma 7.  At SRI, the host was an SDS 940.  Jon Postel, Vint Cerf and Steve Crocker were among the graduate students at UCLA involved in the design of the protocols between the hosts on the Arpanet, as were Bill Duvall, Jeff Rulifson, and others at SRI (see RFC 1 and RFC 2.)</p><p>SRI and UCLA quickly connected their hosts to the IMPs.  Duvall at SRI modified the SDS 940 time-sharing system to allow host to host terminal connections over the net. Charley Kline wrote the complementary client program at UCLA.  These efforts required building custom hardware for connecting the IMPs to the hosts, and programming for both the IMPs and the respective hosts.  At the time, systems programming was done either in assembly language or special purpose hybrid languages blending simple higher-level language features with assembler.  Notable examples were ESPOL for the Burroughs 5500 and PL/I for Multics.  Much of Engelbart’s NLS system was written in such a language, but the time-sharing system was written in assembler for efficiency and size considerations.</p><p>Along with the delivery of the IMPs, a deadline of October 31 was set for connecting the first hosts.  Testing was scheduled to begin on October 29 in order to allow a few days for necessary debugging and handling of unanticipated problems.   In addition to the high-speed line that connected the SRI and UCLA IMPs, there was a parallel open, dedicated voice line. 
On the evening of October 29 Duvall at SRI donned his headset as did Charley Kline at UCLA, and both host-IMP pairs were started. Charley typed an L, the first letter of a LOGIN command.  Duvall, tracking the activity at SRI, saw that the L was received, and that it launched a user login process within the 940. The 940 system was full duplex, so it echoed an “L” across the net to UCLA.  At UCLA, the L appeared on the terminal.  Success! Charley next typed O and received back O.  Charley typed G, and there was silence.  At SRI, Duvall quickly determined that an echo buffer had been sized too small<sup>[1]</sup>, re-sized it, and restarted the system. Charley typed “LO” again, and received back the normal “LOGIN”.  He typed a confirming RETURN, and the first host-to-host login on the Arpanet was completed.</p><p>Len Kleinrock noted that the first characters sent over the net were “LO.”  Sensing the importance of the event, he expanded “LO" to “Lo and Behold”, and used that in the title of the movie called “Lo and Behold: Reveries of the Connected World.”  See <a href="https://www.imdb.com/title/tt5275828">imdb.com/title/tt5275828</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2V3iJD7gRxgXpHP3W2czFM/169d93bd6410d03b1a59314b80f25f30/image3-3.jpg" />
            
            </figure><p><i>Engelbart's five finger keyboard and mouse with three buttons. The mouse evolved and became ubiquitous. The five finger keyboard faded.</i></p><p>IMPs continued to be installed on the Arpanet at the rate of roughly one per month over the next two years.  Soon we had a spectacularly large network with more than twenty hosts, and the connections between the IMPs were permanent telephone lines operating at the lightning speed of 50,000 bits per second<sup>[2]</sup>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2WajwPnqRvhupq0qahNekn/5f6f465555c937384992e4e7730f301a/image2-2.jpg" />
            
            </figure><p><i>Len Kleinrock and IMP #1 at UCLA</i></p><p>Today, all computers come with hardware and software to communicate with other computers.  Not so back then.  Each computer was the center of its own world, and expected to be connected only to subordinate “peripheral” devices – printers, tape drives, etc.  Many even used different character sets.  There was no standard method for connecting two computers together, not even ones from the same manufacturer. Part of what made the Arpanet project bold was the diversity of the hardware and software at the research centers.  Almost all of the hosts at these sites were time-shared computers.  Typically, several people shared the same computer, and the computer processed each user’s computation a little bit at a time.  These computers were large and expensive.  Personal computers were fifteen years in the future, and smart phones were science fiction.  Even Dick Tracy’s fantasy two-way wrist radio envisioned only voice interaction, not instant access to databases and sharing of pictures and videos.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3S3Do4NnQ3hA6jBBGAAd8G/c8e8e7c775e4854e2fb265b20caca000/image1.jpg" />
            
            </figure><p><i>Dick Tracy and his two-way radio.</i></p><p>Each site had to create a hardware connection from the host(s) to the IMP. Further, each site had to add drivers or more to the operating system in its host(s) so that programs on the host could communicate with the IMP.  The protocols for host to host communication were in their infancy and unproven.</p><p>During those first two years when IMPs were being installed monthly, we met with students and researchers at the other sites to develop the first suite of protocols.  The bottom layer was forgettably named the Host-Host protocol<sup>[3]</sup>.  Telnet, for emulating terminal dial-up, and the File Transfer Protocol (FTP) were on the next layer above the Host-Host protocol.  Email started as a special case of FTP and later evolved into its own protocol.  Other networks sprang up and the Arpanet became the seedling for the Internet, with TCP providing a reliable, two-way host to host connection, and IP below it stitching together the multiple networks of the Internet.  But the Telnet and FTP protocols continued for many years and are only recently being phased out in favor of more robust and more secure alternatives.</p><p>The hardware interfaces, the protocols and the software that implemented the protocols were the tangible engineering products of that early work.  Equally important was the social fabric and culture that we created.  We knew the system would evolve, so we envisioned an open and evolving architecture.  Many more protocols would be created, and the process is now embodied in the Internet Engineering Task Force (IETF).  There was also a strong spirit of cooperation and openness.  The Request for Comments (RFC) series of notes was open for anyone to write and everyone to read.  
Anyone was welcome to participate in the design of the protocol, and hence we now have important protocols that have originated from all corners of the world.</p><p>In October 1971, two years after the first IMP was installed, we held a meeting at MIT to test the software on all of the hosts.  Researchers at each host attempted to login, via Telnet, to each of the other hosts.  In the spirit of Samuel Johnson’s famous quote<sup>[4]</sup>, the deadline and visibility within the research community stimulated frenetic activity all across the network to get everything working.  Almost all of the hosts were able to login to all of the other hosts.  The Arpanet was finally up and running.  And the bakeoff at MIT that October set the tone for the future: test your software by connecting to others.  No need for formal standards certification or special compliance organizations; the pressure to demonstrate your stuff actually works with others gets the job done.</p><hr /><p><sup>[1]</sup> The SDS 940 had a maximum memory size of 65K 24-bit words. The time-sharing system along with all of its associated drivers and active data had to share this limited memory, so space was precious and all data structures and buffers were kept to the minimum possible size. The original host-to-host protocol called for terminal emulation and single character messages, and buffers were sized accordingly. What had not been anticipated was that in a full duplex system such as the 940, multiple characters might be echoed for a single received character. Such was the case when the G of LOG was echoed back as “GIN” due to the command completion feature of the SDS 940 operating system.</p><p><sup>[2]</sup> “50,000” is not a misprint. The telephone lines in those days were analog, not digital. To achieve a data rate of 50,000 bits per second, AT&amp;T used twelve voice grade lines bonded together and a Western Electric series 303A modem that spread the data across the twelve lines. 
Several years later, an ordinary “voice grade” line was implemented with digital technology and could transmit data at 56,000 bits per second, but in the early days of the Arpanet 50Kbs was considered very fast. These lines were also quite expensive.</p><p><sup>[3]</sup> In the papers that described the Host-Host protocol, the term Network Control Program (NCP) designated the software addition to the operating system that implemented the Host-Host protocol. Over time, the term Host-Host protocol fell into disuse in favor of Network Control Protocol, and the initials “NCP” were repurposed.</p><p><sup>[4]</sup> Samuel Johnson - ‘Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.’</p> ]]></content:encoded>
            <category><![CDATA[History]]></category>
            <guid isPermaLink="false">16GiXeoxlG8n2bRTNrtzRy</guid>
            <dc:creator>Guest Author</dc:creator>
        </item>
        <item>
            <title><![CDATA[The History of Stock Quotes]]></title>
            <link>https://blog.cloudflare.com/history-of-stock-quotes/</link>
            <pubDate>Mon, 25 Dec 2017 19:37:39 GMT</pubDate>
            <description><![CDATA[ In honor of all the fervor around Bitcoin, we thought it would be fun to revisit the role finance has had in the history of technology even before the Internet came around. This was adapted from a post which originally appeared on the Eager blog. ]]></description>
            <content:encoded><![CDATA[ <p>In honor of all the fervor around Bitcoin, we thought it would be fun to revisit the role finance has had in the history of technology even before the Internet came around. This was adapted from a post which originally appeared on the Eager blog.</p><p>On the 10th of April 1814, almost one hundred thousand troops fought the Battle of Toulouse in Southern France. The war had ended on April 6th. Messengers delivering news of Napoleon I’s abdication and the end of the war wouldn’t reach Toulouse until April 12th.</p><p>The issue was not the lack of a rapid communication system in France; it just hadn’t expanded far enough yet. France had an <a href="https://web.archive.org/web/20160310150944/http://www.ieeeghn.org/wiki/images/1/17/Dilhac.pdf">elaborate semaphore system</a>. Arranged all around the French countryside were buildings with <a href="http://bnrg.cs.berkeley.edu/~randy/Courses/CS39C.S97/optical/optical.html">mechanical flags</a> which could be rotated to transmit specific characters to the next station in line. When the following station showed the same flag positions as this one, you knew the letter was acknowledged, and you could show the next character. This system allowed roughly one character to be transmitted per minute, with the start of a message moving down the line at almost <a href="http://www.lowtechmagazine.com/2007/12/email-in-the-18.html">900 miles per hour</a>. It wouldn’t expand to Toulouse until 1834, however, twenty years after the Napoleonic battle.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7enhF9lKpc3llsNoyJBiHZ/4c0346c356f8fccc4f217218177746b4/semaphore.png" />
            
            </figure><p>Cappy Telegraph System</p>
    <div>
      <h3>Stocks and Trades</h3>
      <a href="#stocks-and-trades">
        
      </a>
    </div>
    <p>It should be no secret that money motivates. Stock trading presents one of the most obvious uses of fast long-distance communication. If you can find out about a ship sinking or a higher than expected earnings call before other traders, you can buy or sell the right stocks and make a fortune.</p><p>In France, however, it was strictly forbidden to use the semaphore system for anything other than government business. Being such a public method of communication, it wasn’t really possible for an enterprising investor to ‘slip in’ a message without discovery. The ‘Blanc brothers’ figured out one method, however. They discovered they could bribe the operator to include one extra bit of information, the “Error - cancel last transmitted symbol” control character, with a message. If an operative spotted that symbol, they knew it <a href="https://books.google.com/books?id=APJ7QeR_XPkC&amp;pg=PA21&amp;lpg=PA21&amp;dq=semaphore+arbitrage+france&amp;source=bl&amp;ots=fORT7ZdmtX&amp;sig=2_1f_7Ry6mTKAAI32s0D0mdsLAY&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiLgYXvp5bOAhVJXB4KHexjAt4Q6AEIOjAE#v=onepage&amp;q=semaphore%20arbitrage%20france&amp;f=false">was time to buy</a>.</p><p>Semaphore had several advantages over an electric telegraph. For one, there were no lines to cut, making it easier to defend during war. Ultimately though, its slow speed, need for stations every ten miles or so, and complete worthlessness at night and in bad weather made its time on this earth limited.</p>
    <div>
      <h3>Thirty-Six Days Out of London</h3>
      <a href="#thirty-six-days-out-of-london">
        
      </a>
    </div>
    <p>Ships crossing the Atlantic were never particularly fast. We Americans didn’t learn of the end of our own revolution at the Treaty of Paris until October 22nd, almost two months after it had been signed. The news came from a ship “<a href="https://web.archive.org/web/20000307031413/http://webandwire.com/storey1.htm">thirty-six days out of London</a>”.</p><p>Anyone who could move faster could make money. At the end of the American Civil War, Jim Fisk chartered high-speed ships to race to London and short Confederate bonds before the news could reach the British market. He made a fortune.</p><p>It wasn’t long before high-speed clipper ships were making the trip with mail and news in twelve or thirteen days regularly. Even then though, there was fierce competition among newspapers to get the information first. New York newspapers like the Herald and the Tribune banded together to form the New York Associated Press (now known just as the Associated Press) to pay for a boat to meet these ships 50 miles off shore. The latest headlines were sent back to shore via pigeon or the growing telegraph system.</p>
    <div>
      <h3>The Gold Indicator</h3>
      <a href="#the-gold-indicator">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/iUERWOc10KmQKhrbtuLvd/130f507d54d8869949ef68ab8786f62a/gold-indicator.png" />
            
            </figure><p>Most of the technology used by the Morse code telegraph system was built to satisfy the demands of the finance industry.</p><p>The first financial indicator was a pointer which sat above the Gold Exchange in New York. In our era of complex technology, the pointer system has the refreshing quality of being very simple. An operator in the exchange had a button which turned a motor. The longer he held the button down, the further the motor spun, changing the indication. This system had no explicit source of ‘feedback’, beyond the operator watching the indicator and letting go of his button when it looked right.</p><p>Soon, other exchanges were clamoring for a similar indicator. Their motors were wired to those of the Gold Exchange. This did not form a particularly reliable system. Numerous boys had to run from site to site, resetting the indicators when they came out of sync with that at the Gold Exchange.</p>
    <div>
      <h3>The Ticker Tape</h3>
      <a href="#the-ticker-tape">
        
      </a>
    </div>
    <blockquote><p>I am crushed for want of means. My stockings all want to see my mother, and my hat is hoary from age.</p><p>— Samuel Morse, in his diary</p></blockquote><p>This same technology formed the basis for the original ticker tape machines. A printing telegraph from this era communicated using a system of pulses over a wire. Each pulse would move the print head one ‘step’ on a ratcheting wheel. Each step would align a different character with the paper to be printed on. A longer pulse over the wire would energize an electromagnet enough to stamp the paper into the print head. Missing a single pulse, though, would send the printer out of alignment, creating a 19th century version of <a href="https://en.wikipedia.org/wiki/Mojibake">Mojibake</a>.</p><p>It was Thomas Edison who invented the ‘automatic rewinder’, which allowed the machines to be synchronized remotely. The first system used a screw drive. If you moved the print head through three full revolutions without printing anything, you would reach the end of the screw and it would stop actually rotating at a known character, aligning the printers. Printing an actual character would reset the screw. A later system of Edison’s used the polarity of the wire to reset the system. If you flipped the polarity on the wire, switching negative and positive, the head would continue to turn in response to pulses, but it would stop at a predefined character, allowing you to ‘reset’ any of the printers which may have come out of alignment. This was actually a big enough problem that there is an entire US Patent Classification devoted to ‘Union Devices’ (<a href="http://www.patentec.com/data/class/defs/178/41.html">178/41</a>).</p><blockquote><p>It will therefore be understood from the above explanation that the impression of any given character upon the type-wheel may be produced upon the paper by an operator stations at a distant point, ... 
simply by transmitting the proper number of electrical impulses of short duration by means of a properly-constructed circuit-breaker, which will cause the type-wheel to revolve without sensibly affecting the impression device. When the desired character is brought opposite the impression-lever the duration of the final current is prolonged, and the electro-magnet becomes fully magnetized, and therefore an impression of the letter or character upon the paper is produced.</p><p>— Thomas A. Edison, Patent for the Printing Telegraph <a href="http://pdfpiw.uspto.gov/.piw?docid=00103924&amp;PageNum=3&amp;IDKey=E8127CDB1E2C&amp;HomeUrl=http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1%2526Sect2=HITOFF%2526d=PALL%2526p=1%2526u=%25252Fnetahtml%25252FPTO%25252Fsrchnum.htm%2526r=1%2526f=G%2526l=50%2526s1=0103924.PN.%2526OS=PN/0103924%2526RS=PN/0103924"><code>1870</code></a></p></blockquote><p>Ticker tape machines used their own vocabulary:</p>
            <pre><code>IBM 4S 651/4</code></pre>
            <p>This meant 400 shares of IBM had just been sold for $65.25 per share (stocks were priced using fractions, not decimal numbers).</p><p>Ticker tape machines delivered a continuous stream of quotes while the market was open. The great accumulation of used ticker tape led to the famous ‘Ticker Tape parades’, where thousands of pounds of the tape would be thrown from windows on Wall Street. Today we still have ticker tape parades, but not the tape itself; the paper is bought <a href="http://www.minyanville.com/mvpremium/2011/05/12/does-anyone-use-ticker-tape/">specifically to be thrown out the window</a>.</p>
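<p>That vocabulary is regular enough to decode mechanically. A minimal sketch, assuming the volume field counts hundreds of shares and a fractional price carries a single-digit numerator (the real tape used more notations than this):</p>

```python
import re
from fractions import Fraction

def parse_ticker(quote):
    """Decode an old-style ticker quote like 'IBM 4S 651/4'.

    Assumptions, for illustration only: '4S' means 4 x 100 shares,
    and a fractional price has a single-digit numerator, so '651/4'
    splits into 65 and 1/4.
    """
    symbol, volume, price = quote.split()
    shares = int(volume.rstrip("S")) * 100  # '4S' -> 400 shares
    match = re.fullmatch(r"(\d+)(\d)/(\d+)", price)
    if match:  # whole dollars plus a fraction, e.g. 65 + 1/4
        dollars = int(match.group(1)) + Fraction(int(match.group(2)), int(match.group(3)))
    else:  # a whole-dollar price with no fractional part
        dollars = Fraction(price)
    return symbol, shares, dollars

# parse_ticker("IBM 4S 651/4") returns ("IBM", 400, Fraction(261, 4)), i.e. $65.25
```

<p>Keeping the price as a <code>Fraction</code> mirrors how exchanges actually quoted: in eighths and other fractions, not decimals.</p>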
    <div>
      <h3>Trans-Lux</h3>
      <a href="#trans-lux">
        
      </a>
    </div>
    <p>What’s the best way to share the stock ticker tape with a room full of traders? The early solution was a chalkboard where relevant stock trades could be written and updated throughout the day. Men were also employed to read the ticker and remember the numbers, ready to recall the most recent prices when asked.</p><p>A better <a href="https://www.google.com/patents/US2307433?dq=inassignee:%22Trans+Lux+Corp%22&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwid2bnezqDOAhWRZj4KHXo2DTUQ6AEILDAC">solution</a> came from the Trans-Lux company in 1939, however. They devised a printer which would print on translucent paper. The numbers could then be projected onto a screen from the rear, creating the first large stock ticker everyone could read.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5sYFQKDu7nFuqz9RR9aKV3/ca338289f80810ce7642f917dd921fac/translux.jpg" />
            
            </figure><p>Trans-lux Projection Stock Ticker</p><p>This was improved through the creation of the Trans-Lux Jet. The <a href="https://www.google.com/patents/US3589672?dq=inassignee:%22Trans+Lux+Corp%22&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiBu5qP0KDOAhVEFh4KHYS_DI04ChDoAQhaMAk">Jet</a> was a continuous tape composed of flippable cards. One side of each card was a bright color while the other was black. As each card passed by a row of electrically-controlled pneumatic jets, some were flipped, writing out a message which would pass along the display just as modern stock tickers do. The system would be controlled using a shift register which would read in the stock quotes and translate them into pneumatic pulses.</p>
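<p>The Jet’s card belt behaves like a scrolling column buffer: each clock of the shift register flips one fresh column of cards at the entry edge while the oldest column scrolls off the far end, exactly the motion of a modern scrolling ticker. A toy model (the dimensions and framing here are invented, not Trans-Lux’s real geometry):</p>

```python
from collections import deque

WIDTH, ROWS = 8, 5  # toy display: 8 card columns, 5 card rows

# Each column is a list of card states: 1 = bright side showing, 0 = black.
belt = deque([[0] * ROWS for _ in range(WIDTH)], maxlen=WIDTH)

def tick(jet_column):
    """Advance the belt one card-width: the row of pneumatic jets flips
    a fresh column at the entry edge, and the oldest column drops off
    the far end (the deque's maxlen discards it automatically)."""
    belt.appendleft(list(jet_column))

tick([1, 0, 1, 0, 1])  # the newest column now sits at the entry edge
```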
    <div>
      <h3>The Quotron</h3>
      <a href="#the-quotron">
        
      </a>
    </div>
    <p>The key issue with a stock ticker is you have to be around when a trade of a stock you care about is printed. If you miss it, you have to search back through hundreds of feet of tape to get the most recent price. If you can’t find the price, the next best option is a call to the trading floor in New York. What traders needed was a way of looking up the most recent quote for any given stock.</p><p>In 1960 Jack Scantlin released the Quotron, the first computerized solution. Each brokerage office would become host to a Quotron ‘main unit’, which was a reasonably sized ‘computer’ equipped with a magnetic tape write head and a separate magnetic tape read head. The tape would continually feed while the market was open, the write head keeping track of the current stock trades coming in over the stock ticker lines. When it was time to read a stock value, the tape would be unspooled between the two heads, falling into a bucket. This would allow the read head to find the latest value of the stock even as the write head continued to store trades.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7kbIQiuKW5wKUtQ0Y2VxGp/477c9cfa29d73883ec65c948604cd10a/quotron.png" />
            
            </figure><p>Quotron Keypad</p><p>Each desk would be equipped with a keypad / printer combination unit which allowed a trader to enter a stock symbol and have the latest quote print onto a slip of paper. A printer was used because electronic displays were too expensive. In the words of engineer Howard Beckwith:</p><blockquote><p>We considered video displays, but the electronics to form characters was too expensive then. I also considered the “Charactron tube” developed by Convair in San Diego that I had used at another company . . . but this also was too expensive, so we looked at the possibility of developing our own printer. As I remember it, I had run across the paper we used in the printer through a project at Electronic Control Systems where I worked prior to joining Scantlin. The paper came in widths of about six inches, and had to be sliced . . . I know Jack Scantlin and I spent hours in the classified and other directories and on the phone finding plastic for the tape tank, motors to drive the tape, pushbuttons, someone to make the desk unit case, and some company that would slice the tape. After we proved the paper could be exposed with a Xenon flash tube, we set out to devise a way to project the image of characters chosen by binary numbers stored in the shift register. The next Monday morning Jack came in with the idea of the print wheel, which worked beautifully.</p></blockquote><p>The main ‘computer’ in each office was primitive by our standards. For one, it didn’t include a microprocessor. It was a hardwired combination of a shift register and some comparison and control logic. The desk units were connected with a 52-wire cable, giving each button on each unit its own wire. 
This was necessary because the units contained no logic themselves; their printing and button handling was all handled in the main computer.</p><blockquote><p>When a broker in the office selected an exchange, entered a stock symbol, and requested a last price on his desk unit, the symbol would be stored in relays in the main unit, and the playback sprocket would begin driving the tape backwards over a read head at about ten feet per second, dumping the tape into the bin between the two heads (market data would continue to be recorded during the read operation). The tape data from the tracks for the selected exchange would be read into a shift register, and when the desired stock symbol was recognized, the register contents would be “frozen,” and the symbol and price would be shifted out and printed on the desk unit.</p></blockquote><p>Only a single broker could use the system at a time:</p><blockquote><p>If no desk units were in use, the main unit supplied power to all desk units in the office, and the exchange buttons on each unit were lit. When any broker pressed a lit button, the main unit disconnected the other desk units, and waited for the request from the selected desk unit. The desk unit buttons were directly connected to the main unit via the cable, and the main unit contained the logic to decode the request. It would then search the tape, as described above, and when it had an answer ready, would start the desk unit paper drive motor, count clock pulses from the desk unit (starting, for each character, when it detected an extra-large, beginning-of-wheel gap between pulses), and transmit a signal to operate the desk unit flash tube at the right time to print each character.</p></blockquote>
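<p>The lookup described in the quote amounts to a reverse scan for the most recent matching record. A software sketch of what the main unit did in hardwired logic (the record layout here is invented):</p>

```python
def latest_quote(tape, symbol):
    """Scan the trade tape backwards from the write head, as the
    Quotron did mechanically: the first match found while reading
    backwards is the most recent trade of that stock."""
    for recorded_symbol, price in reversed(tape):
        if recorded_symbol == symbol:
            return price
    return None  # no trade of that stock on today's tape

# The tape is a chronological record of trades, oldest first.
tape = [("T", 23.5), ("IBM", 65.0), ("T", 23.625), ("IBM", 65.25)]
# latest_quote(tape, "IBM") returns 65.25, the last IBM trade recorded
```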
    <div>
      <h3>Ultronics</h3>
      <a href="#ultronics">
        
      </a>
    </div>
    <p>The Quotron system provided a vast improvement over a chalkboard, but it was far from perfect. For one, it was limited to the information available over the ticker tape lines, which didn’t include information like the stock’s volume, earnings, and dividends. A challenger named Ultronics created a system which used a similar hardwired digital computer, but with a <a href="https://en.wikipedia.org/wiki/Drum_memory">drum memory</a> rather than a magnetic tape.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6QJHSW8wnWgn5Ly7NwaT5w/103ad0e265b13959dd470647900f1f68/drum-memory.jpg" />
            
            </figure><p>Drum Memory</p><p>The logic was advanced enough to allow the system to calculate volume, high and low for each stock as the data was stored. Rather than store one of these expensive memory units in every brokerage, Ultronics had centralized locations around the US which were connected to brokerages and each other using 1000 bps Dataphone lines.</p><p>This system notably used a form of packet addressing, possibly for the first time ever. When each stock quote was returned it included the address of the terminal which had made the request. That terminal was able to identify the responses meant for it based on that address, allowing all the terminals to be connected to the same line.</p>
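<p>That addressing scheme is easy to model: every terminal sees all the traffic on the shared line, and keeps only the responses tagged with its own address. A sketch with invented framing, standing in for whatever format Ultronics actually used:</p>

```python
def responses_for(line_traffic, my_address):
    """Filter a shared line's traffic down to one terminal's replies.
    `line_traffic` is a list of (terminal_address, quote) pairs; each
    terminal discards the pairs addressed to anyone else."""
    return [quote for address, quote in line_traffic if address == my_address]

# Three responses multiplexed onto one line, for terminals 7 and 3.
line_traffic = [(7, "GM 81 1/2"), (3, "IBM 65 1/4"), (7, "T 23 5/8")]
# responses_for(line_traffic, 7) returns ["GM 81 1/2", "T 23 5/8"]
```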
    <div>
      <h3>Quotron II</h3>
      <a href="#quotron-ii">
        
      </a>
    </div>
    <blockquote><p>At one time during system checkout we had a very elusive problem which we couldn’t pin down. In looking over the programs, we realized that the symptoms we were seeing could occur if an unconditional jump instruction failed to jump. We therefore asked CDC whether they had any indication that that instruction occasionally misbehaved. The reply was, “Oh, no. That’s one of the more reliable instructions.” This was our first indication that commands could be ordered by reliability.</p><p>— Montgomery Phister, Jr. <code>1989</code></p></blockquote><p>Facing competition from the Ultronics quote computers, it was time for Jack Scantlin’s team to create something even more powerful. What they created was the Quotron II. The Quotron II was powered by magnetic core memory, an early form of random-access memory which allowed them to read and store any stock’s value in any order. Unfortunately there wasn’t actually enough memory. They had 24K of memory to store 3000 securities.</p><blockquote><p>One stock sold for over $1000; some securities traded in 32nds of a dollar; the prices to be stored included the previous day’s close, and the day’s open, high, low, and last, together with the total number of shares traded-the volume. Clearly we’d need 15 bits for each price (ten for the $1000, five for the 32nds), or 75 bits for the five prices alone. Then we’d need another 20 for a four-letter stock symbol, and at least another 12 for the volume. That added up to 107 bits (nine words per stock, or 27,000 words for 3000 stocks) in a format that didn’t fit conveniently into 12-bit words.</p></blockquote><p>Their solution was to store most of the stocks in a compressed format. Each stock’s previous closing price was stored in 11 bits, and the other four values were stored as six-bit increments from that number. 
Stocks priced over $256, stocks quoted in fractions smaller than eighths, and increments too large to fit were stored in a separate overflow memory area.</p><p>The Quotron II system was connected to several remote sites using eight Dataphone lines which provided a total bandwidth of 16 kbps.</p><p>The fundamental system worked by having one 160A computer read stock prices from a punch tape (using about 5000 feet of tape a day) into the common memory. A second 160A responded to quote requests over the Dataphone lines. The remote offices were connected to brokerage offices using teletype lines which could transmit up to 100 words per minute, where a device would forward the messages to the requesting terminal.</p><p>It’s somewhat comforting to learn that hack solutions are nothing new:</p><blockquote><p>Once the system was in operation, we had our share of troubles. One mysterious system failure had the effect of cutting off service to all customers in the St. Louis area. Investigation revealed that something was occasionally turning off some 160A memory bits which enabled service to that region. The problem was “solved” for a time by installing a patch which periodically reinstated those bits, just in case.</p></blockquote><p>The system was also notable for introducing the +/- tick to represent whether a stock had gone up or down since the last trade. It also added some helpful calculated quantities such as the average price change of all NYSE stocks.</p><p>The story of Quotron II showcases the value of preparing for things to go wrong even if you’re not exactly sure how they will, and graceful degradation:</p><blockquote><p>Jack Scantlin was worried about this situation, and had installed a feature in the Quotron program which discarded these common-memory programs, thus making more room for exceptions, when the market went haywire. 
On the day President Kennedy was assassinated, Jack remembers sitting in his office in Los Angeles watching features disappear until brokers could get nothing but last prices.</p><p>Those of us who worked on Quotron II didn’t use today’s labels. Multiprogramming, multiprocessor, packet, timesharing-we didn’t think in those terms, and most of us had never even heard them. But we did believe we were breaking new ground; and, as I mentioned earlier, it was that conviction more than any other factor that made the work fascinating, and the time fly.</p></blockquote><p>It’s valuable to remember that as easy as this system might be to create with modern technology, it was a tremendous challenge at the time. “Most of us lived Quotron 12 to 14 hour days, six and a half days a week; but the weeks flew by, and before we turned around twice, five years were gone...”</p>
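<p>The compressed record described above, an 11-bit previous close plus four six-bit increments, can be sketched as straightforward bit packing. The units (eighths of a dollar) and the signed-delta encoding are assumptions for illustration, not the documented Quotron II format:</p>

```python
def pack_record(prev_close, day_open, high, low, last):
    """Pack five prices into 11 + 4*6 = 35 bits: the previous close
    stored verbatim, the other four as signed 6-bit deltas from it.
    Prices here are assumed to be in eighths of a dollar."""
    word = prev_close & 0x7FF  # 11 bits for the previous close
    for price in (day_open, high, low, last):
        delta = price - prev_close
        if not -32 <= delta <= 31:  # won't fit in 6 signed bits
            raise OverflowError("store in the overflow memory area")
        word = (word << 6) | (delta & 0x3F)
    return word

def unpack_record(word):
    """Reverse of pack_record: recover all five prices."""
    deltas = []
    for _ in range(4):
        six_bits = word & 0x3F
        deltas.append(six_bits - 64 if six_bits >= 32 else six_bits)
        word >>= 6
    prev_close = word & 0x7FF
    return (prev_close, *(prev_close + d for d in reversed(deltas)))

# Round trip: unpack_record(pack_record(520, 522, 530, 515, 525))
# returns (520, 522, 530, 515, 525)
```

<p>Pleasingly, 11 bits of eighths tops out at $255.875, which lines up with the $256 overflow rule mentioned above.</p>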
    <div>
      <h3>NASDAQ</h3>
      <a href="#nasdaq">
        
      </a>
    </div>
    <blockquote><p>Anyone who has ever been involved with the demonstration of an on-line process knows what happens next. With everyone crowded around to watch, the previously infallible gear or program begins to fall apart with a spectacular display of recalcitrance. Well so it went. We set the stage, everyone held their breath, and then the first query we keyed in proceeded to pull down the whole software structure.</p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GTHX0O9rWmFVxDDWLhAgu/58ca54b644b6448ae12033c02b7de858/nasdaq.jpg" />
            
            </figure><p>NASDAQ Terminal</p><p>Feeling pressure from the SEC to link all the nation’s securities markets, the National Association of Securities Dealers decided to build an ‘<a href="https://books.google.com/books?id=anGeHoYoTDQC&amp;pg=PA34&amp;lpg=PA34&amp;dq=nasdaq+bunker+ramo+univac+1108&amp;source=bl&amp;ots=Wq7M7cDNQY&amp;sig=ZwPPmeW1lbeqCztKWeFuq10lAr4&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwjbjtiZyYLOAhUIwj4KHfMaDygQ6AEIKzAC#v=onepage&amp;q&amp;f=false">automated quotation service</a>’ for their stocks. Unlike a stock ticker, which provides the price of the last trade of a stock, the purpose of the NASDAQ system was to allow traders to advertise the prices they would accept to other traders. This was extremely valuable, as before the creation of this system, it was left to each trader to strike a deal with their fellow stock brokers, a very different system than the roughly ‘single-price-for-all’ system we have today.</p><p>The NASDAQ system was powered by two Univac 1108 computers for <a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015077928664;view=1up;seq=241">redundancy</a>. The central system in Connecticut was connected to regional centers in Atlanta, Chicago, New York and San Francisco where requests were aggregated and disseminated. As of December 1975 there were 20,000 miles of dedicated telephone lines connecting the regional centers to 642 brokerage offices.</p><p>Each NASDAQ terminal was composed of a CRT screen and dedicated keyboard. A request for a stock would return the currently available bid and ask price of each ‘market maker’ around the country. The market makers were the centers where stock purchases were aggregated and a price set. The trader could quickly see where the best price was available, and call the market maker to execute his trade. Similarly, the market makers could use the terminal units to update their quotations and transmit the latest values. 
This type of detailed ‘per-market-maker’ information is actually still a part of the NASDAQ system, but it’s only accessible to paying members.</p><p>One thing this system didn’t do was support trading via computer, without calling the market maker on the phone (the AQ in NASDAQ actually stands for ‘Automated Quotations’; no stock purchasing capability was originally intended). This became a problem on Black Monday in 1987, when the stock market lost almost a quarter of its value in a single day. During the collapse, many market makers couldn’t keep up with the selling demand, leaving many small investors facing big losses with no way to sell.</p><p>In response, NASDAQ created the <a href="https://en.wikipedia.org/wiki/Small-order_execution_system">Small Order Execution System</a>, which allowed small orders of a thousand shares or less to be traded automatically. The theory was that these small trades didn’t require the man-to-man blustering and bargaining which was necessary for large-scale trading. Eventually this was phased out, in favor of the almost entirely computerized trading system we have today.</p>
    <div>
      <h3>Now</h3>
      <a href="#now">
        
      </a>
    </div>
    <p>Today over three trillion dollars’ worth of stocks are traded every month on computerized stock exchanges. The stocks being traded represent over forty trillion dollars’ worth of corporate value. With the right credentials and backing it’s <a href="https://en.wikipedia.org/wiki/List_of_trading_losses">possible</a> to gain or lose billions of dollars in minutes.</p><p>These markets make it possible for both the average citizen and billion-dollar funds to invest in thousands of companies. In turn, this allows those companies to raise the money they need to (hopefully) grow.</p><p>Our next post in this series is on the history of digital communication before the Internet came along. Subscribe to be notified of its release.</p> ]]></content:encoded>
            <category><![CDATA[History]]></category>
            <category><![CDATA[Fun]]></category>
            <guid isPermaLink="false">39I8UFvE8f4ajx90SFaOzL</guid>
            <dc:creator>Zack Bloom</dc:creator>
        </item>
        <item>
            <title><![CDATA[The History of Email]]></title>
            <link>https://blog.cloudflare.com/the-history-of-email/</link>
            <pubDate>Sat, 23 Sep 2017 16:00:00 GMT</pubDate>
            <description><![CDATA[ This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new Cloudflare Apps. ]]></description>
            <content:encoded><![CDATA[ <p>This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new <a href="https://www.cloudflare.com/apps">Cloudflare Apps</a>.</p><blockquote><p>QWERTYUIOP</p><p>— Text of the first email ever sent, 1971</p></blockquote><p>The ARPANET (a precursor to the Internet) was created “to help maintain U.S. technological superiority and guard against unforeseen technological advances by potential adversaries,” in other words, to avert the next Sputnik. Its purpose was to allow scientists to share the products of their work and to make it more likely that the work of any one team could potentially be somewhat usable by others. One thing which was not considered particularly valuable was allowing these scientists to communicate using this network. People were already perfectly capable of communicating by phone, letter, and in-person meeting. The purpose of a computer was to do <a href="http://www.kurzweilai.net/memorandum-for-members-and-affiliates-of-the-intergalactic-computer-network">massive computation</a>, to <a href="http://groups.csail.mit.edu/medg/people/psz/Licklider.html">augment our memories</a> and <a href="http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/">empower our minds</a>.</p><p>Surely we didn’t need a computer, this behemoth of technology and innovation, just to talk to each other.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RJOCLGtgVm3eGa2QACMXd/b35b27f9fe2850bb744f2d5736bb4c10/posts-history-of-email-images-first-email-computers.jpg" />
            
            </figure><p>The computers which sent (and received) the first email.</p><p></p><p>The history of computing moves from massive data processing mainframes, to time sharing where many people share one computer, to the diverse collection of personal computing devices we have today. Messaging was first born in the time sharing era, when users wanted the ability to message other users of the same time shared computer.</p><p>Unix machines have a command called <code>write</code> which can be used to send messages to other currently logged-in users. For example, if I want to ask Mark out to lunch:</p>
            <pre><code>$ write mark
write: mark is logged in more than once; writing to ttys002

Hi, wanna grab lunch?</code></pre>
            <p>He will see:</p>
            <pre><code>Message from zack@Awesome-Mainframe.local on ttys003 at 10:36 ...
Hi, wanna grab lunch?</code></pre>
            <p>This is absolutely hilarious if your coworker happens to be using a full-screen tool like vim, which will not take kindly to random output on the screen.</p>
    <div>
      <h3>Persistent Messages</h3>
      <a href="#persistant-messages">
        
      </a>
    </div>
    <blockquote><p>When the mail was being developed, nobody thought at the beginning it was going to be the smash hit that it was. People liked it, they thought it was nice, but nobody imagined it was going to be the explosion of excitement and interest that it became. So it was a surprise to everybody, that it was a big hit.</p><p>— Frank Heart, director of the ARPANET infrastructure team</p></blockquote><p>An early alternative to Unix called <a href="https://en.wikipedia.org/wiki/TOPS-20">Tenex</a> took this capability one step further. Tenex included the ability to send a message to another user by writing onto the end of a file which only they could read. This is conceptually very simple; you could implement it yourself by creating a file in everyone’s home directory which only they can read:</p>
            <pre><code>touch ~/messages
chmod 622 ~/messages</code></pre>
            <p>Anyone who wants to send a message just has to append to the file:</p>
            <pre><code>echo "??\n" &gt;&gt; /Users/zack/messages</code></pre>
            <p>This is, of course, not a great system because anyone could delete your messages! I trust the Tenex implementation (called <code>SNDMSG</code>) was a bit more secure.</p>
    <div>
      <h3>ARPANET</h3>
      <a href="#arpanet">
        
      </a>
    </div>
    <p>In 1971, the Tenex team had just gotten access to the ARPANET, the network of computers which was a main precursor to the Internet. The team quickly created a program called CPYNET which could be used to send files to remote computers, similar to FTP today.</p><p>One of these engineers, Ray Tomlinson, had the idea to combine the message files with <a href="https://tools.ietf.org/html/rfc310">CPYNET</a>. He added a command which allowed you to append to a file. He also wired things up such that you could add an <code>@</code> symbol and a remote machine name to your messages and the machine would automatically connect to that host and append to the right file. In other words, running:</p>
            <pre><code>SNDMSG zack@cloudflare</code></pre>
            <p>Would append to the <code>/Users/zack/messages</code> file on the host <code>cloudflare</code>. And email was born!</p>
    <div>
      <h3>FTP</h3>
      <a href="#ftp">
        
      </a>
    </div>
    <p>Unfortunately, the CPYNET format did not have much of a life outside of Tenex. It was necessary to create a standard method of communication which every system could understand. Fortunately, this was also the goal of another similar protocol, FTP. FTP (the File Transfer Protocol) sought to create a single way by which different machines could transfer files over the ARPANET.</p><p>FTP <a href="https://tools.ietf.org/html/rfc114">originally</a> didn’t include support for email. Around the time it was <a href="http://www.rfc-editor.org/rfc/rfc385.txt">updated</a> to use TCP (rather than the NCP protocol which ARPANET historically used), the <code>MAIL</code> command was added.</p>
            <pre><code>$ ftp
&lt; open bbn

&gt; 220 HELLO, this is the BBN mail service

&lt; MAIL zack

&gt; 354 Type mail, ended by &lt;CRLF&gt;.&lt;CRLF&gt;

&lt; Sup?
&lt; .

&gt; 250 Mail stored</code></pre>
            <p>These commands were ultimately <a href="https://tools.ietf.org/html/rfc772">borrowed from</a> FTP and formed the basis for the SMTP (Simple Mail Transfer Protocol) protocol in <a href="https://tools.ietf.org/html/rfc821">1982</a>.</p>
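            <p>For comparison, a roughly equivalent exchange in the SMTP of 1982 looks like this. This is a sketch rather than a capture (the host and user names are invented), but the commands and reply codes are the standard ones:</p>

```
< HELO client.example.com
> 250 bbn
< MAIL FROM:<zack@cloudflare>
> 250 OK
< RCPT TO:<mark@bbn>
> 250 OK
< DATA
> 354 Start mail input; end with <CRLF>.<CRLF>
< Sup?
< .
> 250 OK
```

            <p>Note how SMTP splits FTP’s single <code>MAIL</code> command into separate envelope commands for the sender and the recipient.</p>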
    <div>
      <h3>Mailboxes</h3>
      <a href="#mailboxes">
        
      </a>
    </div>
    <p>The format for defining how a message should be transmitted (and often how it would be stored on disk) was first <a href="https://tools.ietf.org/html/rfc733">standardized</a> in 1977:</p>
            <pre><code>Date     :  27 Aug 1976 0932-PDT
From     :  Ken Davis &lt;KDavis at Other-Host&gt;
Subject  :  Re: The Syntax in the RFC
To       :  George Jones &lt;Group at Host&gt;,
              Al Neuman at Mad-Host

There’s no way this is ever going anywhere...</code></pre>
            <p>Note that at this time the ‘at’ word could be used rather than the ‘@’ symbol. Also note that this use of headers before the message predates HTTP by almost fifteen years. This format remains nearly identical today.</p><p>The Fifth Edition of Unix used a very similar <a href="https://en.wikipedia.org/wiki/Mbox">format</a> for storing a user’s email messages on disk. Each user would have a file which contained their messages:</p>
            <pre><code>From MAILER-DAEMON Fri Jul  8 12:08:34 1974
From: Author &lt;author@example.com&gt;
To: Recipient &lt;recipient@example.com&gt;
Subject: Save $100 on floppy disks

They’re never gonna go out of style!

From MAILER-DAEMON Fri Jul  8 12:08:34 1974
From: Author &lt;author@example.com&gt;
To: Recipient &lt;recipient@example.com&gt;
Subject: Seriously, buy AAPL

You’ve never heard of it, you’ve never heard of me, but when you see
that stock symbol appear.  Buy it.

- The Future</code></pre>
            <p>Each message began with the word ‘From’, meaning if a message happened to contain ‘From’ at the beginning of a line, it needed to be escaped lest the system think that was the start of a new message:</p>
            <pre><code>From MAILER-DAEMON Fri Jul  8 12:08:34 2011
From: Author &lt;author@example.com&gt;
To: Recipient &lt;recipient@example.com&gt;
Subject: Sample message 1

This is the body.
&gt;From (should be escaped).
There are 3 lines.</code></pre>
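            <p>This storage format, escaping and all, is still alive in Python’s standard library. As a small sketch (the file path and message content here are invented), writing a message whose body starts a line with ‘From ’ through the <code>mailbox</code> module shows the escaping being applied automatically:</p>

```python
import mailbox
import os
import tempfile

from email.message import Message

# A throwaway mbox file; the path is arbitrary for this sketch.
path = os.path.join(tempfile.mkdtemp(), "messages")
box = mailbox.mbox(path, create=True)

msg = Message()
msg["From"] = "author@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Sample message 1"
msg.set_payload("This is the body.\nFrom (should be escaped).\nThere are 3 lines.")
box.add(msg)
box.flush()

# The raw file shows the body line quoted with '>' so it cannot be
# mistaken for the start of a new message.
raw = open(path).read()
print(">From (should be escaped)." in raw)

# Reading the file back through the same module parses the headers as usual.
print([m["Subject"] for m in mailbox.mbox(path)])
```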
            <p>It was technically possible to interact with your email by simply editing your mailbox file, but it was much more common to use an email client. As you might expect, there was a <a href="http://www.rfc-editor.org/rfc/rfc808.txt">diversity</a> of clients available, but a few are of historical note.</p><p>RD was an editor which was created by <a href="http://www.livinginternet.com/i/ii_roberts.htm">Lawrence Roberts</a>, who was actually the program manager for the ARPANET itself at the time. It was a set of macros on top of the Tenex text editor (TECO), which itself would later become Emacs.</p><p>RD was the first client to give us the ability to sort messages, save messages, and delete them. There was one key thing missing, though: any integration between receiving a message and sending one. RD was strictly for consuming emails you had received; to reply to a message it was necessary to compose an entirely new message in SNDMSG or another tool.</p><p>That innovation came from MSG, which itself was an improvement on a client with the hilarious name BANANARD. MSG added the ability to reply to a message. In the words of Dave Crocker:</p><blockquote><p>My subjective sense was that propagation of MSG resulted in an exponential explosion of email use, over roughly a 6-month period. The simplistic explanation is that people could now close the Shannon-Weaver communication loop with a single, simple command, rather than having to formulate each new message. In other words, email moved from the sending of independent messages into having a conversation.</p></blockquote><p>Email wasn’t just allowing people to talk more easily; it was changing how they talked. In <a href="http://www.livinginternet.com/References/Ian%20Hardy%20Email%20Thesis.txt">the words</a> of J.C.R. Licklider and Albert Vezza in 1978:</p><blockquote><p>One of the advantages of the message systems over letter mail was that, in an ARPANET message, one could write tersely and type imperfectly, even to an older person in a superior position and even to a person one did not know very well, and the recipient took no offense... Among the advantages of the network message services over the telephone were the fact that one could proceed immediately to the point without having to engage in small talk first, that the message services produced a preservable record, and that the sender and receiver did not have to be available at the same time.</p></blockquote><p>The most popular client from this era was called <a href="https://en.wikipedia.org/wiki/MH_Message_Handling_System">MH</a> and was composed of several command line utilities for doing various actions with and to your email.</p>
            <pre><code>$ mh

% show

(Message inbox:1)
Return-Path: joed
Received: by mysun.xyz.edu (5.54/ACS)
        id AA08581; Mon, 09 Jan 1995 16:56:39 EST
Message-Id: &lt;9501092156.AA08581@mysun.xyz.edu&gt;
To: angelac
Subject: Here’s the first message you asked for
Date: Mon, 09 Jan 1995 16:56:37 -0600
From: "Joe Doe" &lt;joed&gt;

Hi, Angela!  You asked me to send you a message.  Here it is.
I hope this is okay and that you can figure out how to use
that mail system.

Joe</code></pre>
            <p>You could reply to the message easily:</p>
            <pre><code>% repl

To: "Joe Doe" &lt;joed&gt;
cc: angelac
Subject: Re: Here’s the first message you asked for
In-reply-to: Your message of "Mon, 09 Jan 1995 16:56:37 -0600."
        &lt;9501092156.AA08581@mysun.xyz.edu&gt;
-------

% edit vi</code></pre>
            <p>You could then edit your reply in vim, which is actually pretty cool.</p><p>Interestingly enough, in June of 1996 the guide “<a href="http://rand-mh.sourceforge.net/book/">MH &amp; xmh: Email for Users &amp; Programmers</a>” was actually the first book in history to be published on the Internet.</p>
    <div>
      <h3>Pine, Elm &amp; Mutt</h3>
      <a href="#pine-elm-mutt">
        
      </a>
    </div>
    <blockquote><p>All mail clients suck. This one just sucks less.</p><p>— Mutt Slogan</p></blockquote><p>It took several years until terminals became powerful enough, and perhaps email pervasive enough, that a more graphical program was required. In 1986 Elm was introduced, which allowed you to interact with your email more interactively.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1TC1QcSpjdKD1OkLtaAPId/f0faa8605a8a6528ff4487fff3097fde/posts-history-of-email-images-elm.png" />
            
            </figure><p>Elm Mail Client</p><p>This was followed by more graphical <a href="https://en.wikipedia.org/wiki/Text-based_user_interface">TUI</a> clients like <a href="https://web.archive.org/web/19970126101209/http://www.cs.hmc.edu/~me/mutt/index.html">Mutt</a> and <a href="https://groups.google.com/forum/#!msg/comp.mail.misc/kqKQojTDVBM/kvYgyYbwfKoJ">Pine</a>.</p><p>In the words of the University of Washington’s <a href="http://www.washington.edu/pine/overview/project-history.html">Pine team</a>:</p><blockquote><p>Our goal was to provide a mailer that naive users could use without fear of making mistakes. We wanted to cater to users who were less interested in learning the mechanics of using electronic mail than in doing their jobs; users who perhaps had some computer anxiety. We felt the way to do this was to have a system that didn’t do surprising things and provided immediate feedback on each operation; a mailer that had a limited set of carefully-selected functions.</p></blockquote><p>These clients were gradually becoming easier for non-technical people to use, and it was becoming clear how big a deal this really was:</p><blockquote><p>We in the ARPA community (and no doubt many others outside it) have come to realize that we have in our hands something very big, and possibly very important. It is now plain to all of us that message service over computer networks has enormous potential for changing the way communication is done in all sectors of our society: military, civilian government, and private.</p></blockquote>
    <div>
      <h3>Webmail</h3>
      <a href="#webmail">
        
      </a>
    </div>
    <blockquote><p>Its like when I did the referer field. I got nothing but grief for my choice of spelling. I am now attempting to get the spelling corrected in the OED since my spelling is used several billion times a minute more than theirs.</p><p>— Phillip Hallam-Baker on his spelling of ’Referer’ <a href="https://groups.google.com/forum/#!original/alt.folklore.computers/7X75In21_54/JgV9Rw04f-EJ"><code>2000</code></a></p></blockquote><p>The first webmail client was created by Phillip Hallam-Baker at CERN in <a href="https://groups.google.com/forum/#!topic/comp.archives/vpWqUAmg8xU">1994</a>. Its creation was early enough in the history of the web that it led to the identification of the need for the <code>Content-Length</code> header in POST requests.</p><p>Hotmail was released in 1996. The name was chosen because it included the letters HTML to emphasize it being ‘on the web’ (it was originally stylized as ‘HoTMaiL’). When it was launched, users were limited to 2MB of storage (at the time a 1.6GB hard drive was $399).</p><p>Hotmail was originally implemented using FreeBSD, but in a decision I’m sure every engineer regretted, it was moved to Windows 2000 after the service was bought by Microsoft. In 1999, hackers revealed a security flaw in Hotmail that permitted anybody to log in to any Hotmail account using the password ‘<a href="http://archive.wired.com/science/discoveries/news/1999/08/21503">eh</a>’. It took until 2001 for ‘hackers’ to realize you could access other people’s messages by swapping usernames in the URL and guessing at a valid message number.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LQJvosil4wqu6Ku3J6kqf/15c82c1105b2576396d88815402c86ce/posts-history-of-email-images-gmail.jpg" />
            
            </figure><p>Gmail was famously created in 2004 as a ‘20% project’ of Paul Buchheit. Originally it wasn’t particularly believed in as a product within Google. They had to launch using a few hundred Pentium III computers no one else wanted, and it took three years before they had the resources to accept users without an invitation. It was notable both for being much closer to a desktop application (using AJAX) and for the unprecedented offer of 1GB of mail storage.</p>
    <div>
      <h3>The Future</h3>
      <a href="#the-future">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vOMbpChEaJ0JNHRgRr7hn/474dc2b8b40a5b77989c1ab30a7b7e70/posts-history-of-email-images-mail.gif" />
            
            </figure><p>US Postal Mail Volume, <a href="http://www.slideshare.net/jesserobbins/devops-change/14-US_Postal_Service_Mail_Volume">KPCB</a></p><p>At this point email is a ubiquitous enough communication standard that it’s very possible postal mail as an everyday idea will die before I do. One thing which has not survived well is any attempt to replace email with a more complex messaging tool like <a href="https://en.wikipedia.org/wiki/Apache_Wave">Google Wave</a>. With the rise of more targeted communication tools like Slack, Facebook, and Snapchat though, you never know.</p><p>There is, of course, a cost to that. The ancestors of the Internet were kind enough to give us a communication standard which is free, transparent, and standardized. It would be a shame to see the tech communication landscape move further and further into the world of locked gardens and proprietary schemas.</p><p>We’ll leave you with two quotes:</p><blockquote><p>Mostly because it seemed like a neat idea. There was no directive to ‘go forth and invent e-mail’.</p><p>— Ray Tomlinson, answering a question about why he invented e-mail</p></blockquote><hr /><blockquote><p>Permit me to carry the doom-crying one step further. I am curious whether the increasingly easy access to computers by adolescents will have any effect, however small, on their social development. Keep in mind that the social skills necessary for interpersonal relationships are not taught; they are learned by experience. Adolescence is probably the most important time period for learning these skills. There are two directions for a cause-effect relationship. Either people lacking social skills (shy people, etc.) turn to other pasttimes, or people who do not devote enough time to human interactions have difficulty learning social skills. I do not [consider] whether either or both of these alternatives actually occur. 
I believe I am justified in asking whether computers will compete with human interactions as a way of spending time? Will they compete more effectively than other pasttimes? If so, and if we permit computers to become as ubiquitous as televisions, will computers have some effect (either positive or negative) on personal development of future generations?</p><p>— Gary Feldman, <a href="http://www.livinginternet.com/References/Ian%20Hardy%20Email%20Thesis.txt"><code>1981</code></a></p></blockquote><ul><li><p>Use Cloudflare Apps to build tools which can be installed by millions of sites.</p><p><a href="https://www.cloudflare.com/apps/developer/docs/getting-started">Build an app →</a></p><p>If you're in San Francisco, London or Austin: <a href="https://boards.greenhouse.io/cloudflare/jobs/850951">work with us</a>.</p></li><li><p>Our next post is on the history of the URL!</p></li></ul> ]]></content:encoded>
            <category><![CDATA[History]]></category>
            <category><![CDATA[Cloudflare Apps]]></category>
            <guid isPermaLink="false">6sYYukjP53HPjvEZS6EiH7</guid>
            <dc:creator>Zack Bloom</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Languages Which Almost Became CSS]]></title>
            <link>https://blog.cloudflare.com/the-languages-which-almost-became-css/</link>
            <pubDate>Wed, 02 Aug 2017 06:00:00 GMT</pubDate>
            <description><![CDATA[ When Tim Berners-Lee announced HTML in 1991 there was no method of styling pages. How a given HTML tag was rendered was determined by the browser, often with significant input from the user’s preferences ]]></description>
            <content:encoded><![CDATA[ <p>This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new <a href="https://www.cloudflare.com/apps">Cloudflare Apps</a>.</p><blockquote><p><i>In fact, it has been a constant source of delight for me over the past year to get to continually tell hordes (literally) of people who want to – strap yourselves in, here it comes – control what their documents look like in ways that would be trivial in TeX, Microsoft Word, and every other common text processing environment: “</i><i><b>Sorry, you’re screwed.</b></i><i>”</i></p><p><i>— Marc Andreessen </i><a href="http://1997.webhistory.org/www.lists/www-talk.1994q1/0648.html"><i><code><u>1994</u></code></i></a></p></blockquote><p>When Tim Berners-Lee announced HTML in 1991 there was no method of styling pages. How a given HTML tag was rendered was determined by the browser, often with significant input from the user’s preferences. To many, it seemed like a good idea to create a standard way for pages to ‘suggest’ how they might prefer to be rendered stylistically.</p><p>But CSS wouldn’t be introduced for five years, and wouldn’t be fully implemented for ten. This was a period of intense work and innovation which resulted in more than a few competing styling methods that just as easily could have become the standard.</p><p>While these languages are obviously not in common use today, we find it fascinating to think about the world that might have been. Even more surprisingly, it happens that many of these other options include features which developers would love to see appear in CSS even today.</p>
    <div>
      <h3>The First Proposal</h3>
      <a href="#the-first-proposal">
        
      </a>
    </div>
    <p>In early 1993 the Mosaic browser had not yet reached 1.0. Those browsers that did exist dealt solely with HTML. There was no method of specifying the style of HTML whatsoever, meaning whatever the browser decided an <code>&lt;h1&gt;</code> should look like, that’s what you got.</p><p>In June of that year, Robert Raisch made <a href="http://1997.webhistory.org/www.lists/www-talk.1993q2/0445.html">a proposal</a> to the www-talk mailing list to create a “an easily parsable format to deliver stylistic information along with Web documents” which would be called RRP.</p>
            <pre><code>@BODY fo(fa=he,si=18)</code></pre>
            <p>If you have no idea what this code is doing you are forgiven. This particular rule is setting the font family (<code>fa</code>) to helvetica (<code>he</code>), and the font size (<code>si</code>) to 18 points. It made sense to make the content of this new format as terse as possible, as it was born in the era before gzipping, when connection speeds hovered around 14.4k.</p><p>One interesting omission from this proposal was any mention of units; all numbers were interpreted based on their context (font sizes, for example, were always in points). This could be attributed to RRP being designed more as a “set of HINTS or SUGGESTIONS to the renderer” than as a specification. This was considered necessary because the same stylesheet needed to function for both the common line-mode browsers (like <a href="https://en.wikipedia.org/wiki/Lynx_(web_browser)">Lynx</a>) and the graphical browsers which were becoming increasingly popular.</p>
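            <p>Translated into the CSS that eventually won out, the same rule reads as follows. This is a loose translation: RRP left its units implicit, so the 18-point size is carried over from its points convention:</p>

```css
body {
  font-family: helvetica;
  font-size: 18pt;
}
```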
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4H46tOPb64GwmtjWtuZa4P/14991a98c9774d9e483485581921c144/lynx.png" />
          </figure><p>Interestingly, RRP did include a method of specifying a columnar layout, a feature which wouldn’t make it to CSS until 2011. For example, three columns, each of width ‘80 units’ would look like this:</p>
            <pre><code>@P co(nu=3,wi=80)</code></pre>
            <p>It’s a little hard to parse, but not much worse than <code>white-space: nowrap</code> perhaps.</p><p>It’s worth noting that RRP did not support any of the ‘cascading’ we associate with stylesheets today. A given document could only have one active stylesheet at a time, which is a logical way to think about styling a document, even if it’s foreign to us today.</p><p>Marc Andreessen (the creator of Mosaic, which would become the most popular browser) was <a href="http://www.webhistory.org/www.lists/www-talk.1993q4/0266.html">aware</a> of the RRP proposal, but it was never implemented by Mosaic. Instead, Mosaic quickly moved (somewhat tragically) down the path of using HTML tags to define style, introducing tags like <code>&lt;FONT&gt;</code> and <code>&lt;CENTER&gt;</code>.</p>
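            <p>RRP’s columnar layout idea did eventually arrive in CSS as multi-column layout. A rough modern equivalent of the three-column rule above, treating RRP’s unspecified ‘80 units’ as points, would be:</p>

```css
p {
  column-count: 3;
  column-width: 80pt;
}
```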
    <div>
      <h3>Viola and the Proto-Browser Wars</h3>
      <a href="#viola-and-the-proto-browser-wars">
        
      </a>
    </div>
    <blockquote><p><i>Then why don't you just implement one of the many style sheet
proposals that are on the table. This would pretty much solve the
problem if done correctly.</i></p><p><i>So then I get to tell people, "Well, you get to learn this language
to write your document, and then you get to learn that language for
actually making your document look like you want it to." Oh, they'll
love that.</i></p><p><i>— Marc Andreessen </i><a href="http://1997.webhistory.org/www.lists/www-talk.1994q1/0683.html"><i><code><u>1994</u></code></i></a></p></blockquote><p>Contrary to popular perception, Mosaic was not the first graphical browser. It was predated by <a href="https://en.wikipedia.org/wiki/ViolaWWW">ViolaWWW</a>, a graphical browser originally written by Pei-Yuan Wei in just four days.</p><p>Pei-Yuan created a <a href="http://1997.webhistory.org/www.lists/www-talk.1993q4/0264.html">stylesheet language</a> which supports a form of the nested structure we are used to in CSS today:</p>
            <pre><code>(BODY fontSize=normal
      BGColor=white
      FGColor=black
  (H1   fontSize=largest
        BGColor=red
        FGColor=white)
)</code></pre>
            <p>In this case we are applying color selections to the body, and specifically styling <code>H1</code>s which appear within it. Rather than using repeated selectors to handle nesting, PWP used a parenthesis-based system evocative of the indentation used by languages like Stylus and SASS, which some developers still prefer to CSS today. In at least this one way, PWP’s syntax was arguably better than the CSS language which would eventually become the lingua franca of the web.</p><p>PWP is also notable for introducing the method of referring to external stylesheets that we still use today:</p>
            <pre><code>&lt;LINK REL="STYLE" HREF="URL_to_a_stylesheet"&gt;</code></pre>
            <p>ViolaWWW was unfortunately written to work chiefly with the <a href="https://en.wikipedia.org/wiki/X_Window_System">X Window System</a>, which was only popular on Unix machines. When Mosaic was ported to Windows, it quickly left Viola in the dust.</p>
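            <p>For comparison, PWP’s nested <code>BODY</code>/<code>H1</code> example maps fairly directly onto the native nesting that modern CSS only recently gained. A sketch, with <code>xx-large</code> as an assumed stand-in for PWP’s ‘largest’:</p>
            <pre><code>/* A modern CSS nesting sketch of PWP's (BODY ... (H1 ...)) example. */
body {
  background-color: white;
  color: black;

  h1 {
    font-size: xx-large; /* assumed stand-in for 'largest' */
    background-color: red;
    color: white;
  }
}</code></pre>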
    <div>
      <h3>Stylesheets Before the Web</h3>
      <a href="#stylesheets-before-the-web">
        
      </a>
    </div>
    <blockquote><p><i>HTML is the kind of thing that can
only be loved by a computer scientist. Yes, it expresses the underlying
structure of a document, but documents are more than just structured text
databases; they have visual impact. HTML totally eliminates any visual
creativity that a document’s designer might have.</i></p><p><i>— Roy Smith </i><a href="http://1997.webhistory.org/www.lists/www-talk.1993q3/0238.html"><i><code><u>1993</u></code></i></a></p></blockquote><p>The need for a language to express the style of documents long predates the Internet.</p><p>As you may know, HTML as we know it was originally based on a pre-Internet language called SGML. In 1987 the US Department of Defense decided to study whether SGML could be used to make it easier to store and transmit the huge volume of documentation they deal with. Like any good government project, they wasted no time coming up with a name. The team was originally called the Computer-Aided Logistics Support team, then the Computer-aided Acquisition and Logistics Support team, then finally the Continuous Acquisition and Life-cycle Support initiative. In any case, the initials were CALS.</p><p>The CALS team created a language for styling SGML documents called FOSI, an initialism which undoubtedly stands for some combination of four words. They published <a href="http://people.opera.com/howcome/2006/phd/archive/www.dt.navy.mil/tot-shi-sys/tec-inf-sys/cal-std/doc/28001C.pdf">a specification</a> for the language which is as comprehensive as it is incomprehensible. It also includes one of the best <a href="http://people.opera.com/howcome/2006/phd/i/fosi.png">nonsensical infographics</a> to ever exist on the web.</p><p>One inviolable rule of the Internet is that more will always get done if you can prove someone wrong in the process. In 1993, just four days after Pei-Yuan’s proposal, Steven Heaney <a href="http://1997.webhistory.org/www.lists/www-talk.1993q4/0295.html">proposed</a> that rather than “re-inventing the wheel,” it would be best to use a variant of FOSI to style the web.</p><p>A FOSI document is itself written in SGML, which is actually a somewhat logical move given web developers’ existing familiarity with the SGML variant HTML. An example document looks like this:</p>
            <pre><code>&lt;outspec&gt;
  &lt;docdesc&gt;
    &lt;charlist&gt;
      &lt;font size="12pt" bckcol="white" fontcol="black"&gt;
    &lt;/charlist&gt;
  &lt;/docdesc&gt;
  &lt;e-i-c gi="h1"&gt;&lt;font size="24pt" bckcol="red", fontcol="white"&gt;&lt;/e-i-c&gt;
  &lt;e-i-c gi="h2"&gt;&lt;font size="20pt" bckcol="red", fgcol="white"&gt;&lt;/e-i-c&gt;
  &lt;e-i-c gi="a"&gt;&lt;font fgcol="red"&gt;&lt;/e-i-c&gt;
  &lt;e-i-c gi="cmd kbd screen listing example"&gt;&lt;font style="monoser"&gt;&lt;/e-i-c&gt;
&lt;/outspec&gt;</code></pre>
            <p>If you’re a bit confused about what a <code>docdesc</code> or a <code>charlist</code> is, so were the members of <code>www-talk</code>. The only contextual information given was that <code>e-i-c</code> means ‘element in context’. FOSI is notable, however, for introducing the <code>em</code> unit, which has since become the preferred method for people who know more about CSS than you to style things.</p><p>The language conflict playing out here was actually as old as programming itself: the battle of functional ‘lisp-style’ syntax versus the syntax of more declarative languages. Pei-Yuan himself <a href="http://1997.webhistory.org/www.lists/www-talk.1993q4/0297.html">described</a> his syntax as “LISP’ish,” but it was only a matter of time until a true LISP variant entered the stage.</p>
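            <p>Squint past the SGML and the FOSI example above is recognizably a stylesheet. Rendered as modern CSS it might look something like this sketch (<code>monospace</code> is an assumed stand-in for FOSI’s ‘monoser’, and the selectors keep the original SGML element names):</p>
            <pre><code>/* A modern-CSS rendering of the FOSI outspec example (a sketch). */
body { font-size: 12pt; background-color: white; color: black; }
h1   { font-size: 24pt; background-color: red;   color: white; }
h2   { font-size: 20pt; background-color: red;   color: white; }
a    { color: red; }
cmd, kbd, screen, listing, example { font-family: monospace; }</code></pre>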
    <div>
      <h3>The Turing-Complete Stylesheet</h3>
      <a href="#the-turing-complete-stylesheet">
        
      </a>
    </div>
    <p>For all its complexity, FOSI was actually perceived to be an <a href="http://xml.coverpages.org/kennDSSSLInt.html">interim solution</a> to the problem of formatting documents. The long-term plan was to create a language based on the functional programming language Scheme which could enable the most powerful document transformations you could imagine. This language was called DSSSL.</p><p>In the words of contributor Jon Bosak:</p><blockquote><p><i>It’s a mistake to put DSSSL into the same bag as scripting languages. Yes, DSSSL is Turing-complete; yes, it’s a programming language. But a script language (at least the way I use the term) is procedural; DSSSL very definitely is not. DSSSL is entirely functional and entirely side-effect-free. </i></p><p><i>Nothing ever happens in a DSSSL stylesheet. The stylesheet is one giant function whose value is an abstract, device-independent, nonprocedural description of the
formatted document that gets fed as a specification (a declaration, if you will) of display areas to downstream rendering processes.</i></p></blockquote><p>At its simplest, DSSSL is actually a pretty reasonable styling language:</p>
            <pre><code>(element H1
  (make paragraph
    font-size: 14pt
    font-weight: 'bold))</code></pre>
            <p>As it was a programming language, you could even define functions:</p>
            <pre><code>(define (create-heading heading-font-size)
  (make paragraph
    font-size: heading-font-size
    font-weight: 'bold))

(element h1 (create-heading 24pt))
(element h2 (create-heading 18pt))</code></pre>
            <p>And use mathematical constructs in your styling, for example to ‘stripe’ the rows of a table:</p>
            <pre><code>(element TR
  (if (= (modulo (child-number) 2)
        0)
    ...   ;even-row
    ...)) ;odd-row</code></pre>
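            <p>Modern CSS eventually grew a declarative selector for exactly this, so the arithmetic disappears entirely. A sketch, with assumed placeholder colors:</p>
            <pre><code>/* The DSSSL (modulo (child-number) 2) striping, as modern CSS.
   The background colors are placeholders. */
tr:nth-child(even) { background-color: #f0f0f0; }
tr:nth-child(odd)  { background-color: #ffffff; }</code></pre>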
            <p>As a final way of kindling your jealousy, DSSSL could treat inherited values as
variables, and do math on them:</p>
            <pre><code>(element H1
  (make paragraph
    font-size: (+ 4pt (inherited-font-size))))</code></pre>
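            <p>Something similar is expressible in CSS today: inside a <code>font-size</code> declaration, <code>1em</code> resolves to the inherited font size, so you can do math on it with <code>calc()</code>. A sketch:</p>
            <pre><code>/* A rough CSS analogue of DSSSL's (+ 4pt (inherited-font-size)):
   1em here means whatever font size the h1 inherits. */
h1 {
  font-size: calc(1em + 4pt);
}</code></pre>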
            <p>DSSSL did, unfortunately, have the fatal flaw which would plague all Scheme-like languages: too many parentheses. Additionally, it was arguably <i>too complete</i> a spec when it was finally published, which made it intimidating to browser developers. The DSSSL spec included over 210 separate styleable properties.</p><p>The team did go on to create <a href="https://en.wikipedia.org/wiki/XSL">XSL</a>, a language for document transformation which is no less confusing, but which would be decidedly more popular.</p>
    <div>
      <h3>Why Did The Stylesheet Cross The Wire</h3>
      <a href="#why-did-the-stylesheet-cross-the-wire">
        
      </a>
    </div>
    <p>CSS does not include parent selectors (a method of styling a parent based on what children it contains). This fact has been <a href="http://stackoverflow.com/questions/1014861/is-there-a-css-parent-selector">long</a> <a href="http://stackoverflow.com/questions/45004/complex-css-selector-for-parent-of-active-child?lq=1">bemoaned</a> <a href="http://stackoverflow.com/questions/2000582/css-selector-for-foo-that-contains-bar?lq=1">by</a> <a href="http://stackoverflow.com/questions/4220327/css-selector-element-with-a-given-child?lq=1">Stack</a> <a href="http://stackoverflow.com/questions/21252551/apply-style-to-parent-if-it-has-child-with-css?lq=1">Overflow</a> posters, but it turns out there is a very good reason for its absence. Particularly in the early days of the Internet, it was considered critically important that a page be renderable before the document was fully loaded. In other words, the browser needed to be able to render the top of the page before the HTML which forms the bottom of the page had finished downloading.</p><p>A parent selector would mean that styles would have to be updated as the HTML document loads. Languages like DSSSL were completely out, as they could perform operations on the document itself, which would not be fully available when rendering began.</p><p>The first contributor to bring up this issue and <a href="http://people.opera.com/howcome/2006/phd/archive/odur.let.rug.nl/~bert/stylesheets.html">propose</a> a workable language was Bert Bos in March of 1995. His proposal also contains an early use of the ‘smiley’ emoticon :-).</p><p>The language itself was somewhat ‘object-oriented’ in syntax:</p>
            <pre><code>*LI.prebreak: 0.5
*LI.postbreak: 0.5
*OL.LI.label: 1
*OL*OL.LI.label: A</code></pre>
            <p>It used <code>.</code> to signify direct children, and <code>*</code> to signify ancestors.</p><p>His language also has the cool property of defining how features like links work in the stylesheet itself:</p>
            <pre><code>*A.anchor: !HREF</code></pre>
            <p>In this case we specify that the destination of the link element is the value of its <code>HREF</code> attribute. The idea that the behavior of elements like links should be controllable was popular in several proposals. In the pre-JavaScript era there was no other way of controlling such things, so it seemed logical to include that control in these new stylesheet languages.</p><p>One <a href="http://people.opera.com/howcome/2006/phd/archive/tigger.cc.uic.edu/~cmsmcq/style-primitives.html">proposal</a>, introduced in 1994 by a gentleman with the name ‘C.M. Sperberg-McQueen’, expresses the same behavior functionally:</p>
            <pre><code>(style a
  (block #f)     ; format as inline phrase
  (color blue)   ; in blue if you’ve got it
  (click (follow (attval 'href))))  ; and on click, follow url</code></pre>
            <p>His language also introduced the <code>content</code> keyword as a way of controlling the content of an HTML element from the stylesheet, a concept which was later introduced into CSS 2.1.</p>
    <div>
      <h3>What Might Have Been</h3>
      <a href="#what-might-have-been">
        
      </a>
    </div>
    <p>Before I talk about the language which actually became CSS, it’s worth mentioning one other language proposal, if only because it is in some ways the stuff of an early web developer’s dreams.</p><p>PSL96 was, in the naming convention of the time, the 1996 edition of the “Presentation Specification Language.” At its core, PSL looks like CSS:</p>
            <pre><code>H1 {
  fontSize: 20;
}</code></pre>
            <p>It quickly gets more interesting. You could position elements based not just on the sizes specified for them (<code>Width</code>), but on the sizes the browser actually rendered them at (<code>Actual Width</code>):</p>
            <pre><code>LI {
  VertPos: Top = LeftSib . Actual Bottom;
}</code></pre>
            <p>You’ll also notice you can use the element’s left sibling as a constraint.</p><p>You can also add logical expressions to your styles, for example to style only anchor elements which have <code>href</code>s:</p>
            <pre><code>A {
  if (getAttribute(self, "href") != "") then
    fgColor = "blue";
    underlineNumber = 1;
  endif
}</code></pre>
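            <p>CSS never got PSL’s <code>if</code>/<code>then</code>, but attribute selectors eventually made this particular check declarative. A sketch:</p>
            <pre><code>/* A modern CSS analogue of PSL's href test: style only anchors
   whose href attribute is present and non-empty. */
a[href]:not([href=""]) {
  color: blue;
  text-decoration: underline;
}</code></pre>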
            <p>That styling could be extended to do all manner of things we resort to classes to accomplish today:</p>
            <pre><code>LI {
  if (ChildNum(Self) == round(NumChildren(Parent) / 2 + 1)) then
    VertPos: Top = Parent.Top;
    HorizPos: Left = LeftSib.Left + Self.Width;
  else
    VertPos: Top = LeftSib.Actual Bottom;
    HorizPos: Left = LeftSib.Left;
  endif
}</code></pre>
            <p>Support for functionality like this could have perhaps truly enabled the dream of separating content from style. Unfortunately this language was plagued by being a bit too extensible, meaning it would have been very possible for its implementation to vary considerably from browser to browser. Additionally, it was published in a series of papers in the academic world, rather than on the www-talk mailing list where most of the functional work was being done. It was never integrated into a mainstream browser.</p>
    <div>
      <h3>The Ghost of CSS Past</h3>
      <a href="#the-ghost-of-css-past">
        
      </a>
    </div>
    <p>The language which, at least in name, would directly lead to CSS was called CHSS (Cascading HTML Style Sheets), <a href="http://people.opera.com/howcome/2006/phd/archive/www.w3.org/People/howcome/p/cascade.html">proposed</a> in 1994 by Håkon W Lie.</p><p>Like most good ideas, the original proposal was pretty nutty.</p>
            <pre><code>h1.font.size = 24pt 100%
h2.font.size = 20pt 40%</code></pre>
            <p>Note the percentages at the end of rules. This percentage referred to how much ‘ownership’ the current stylesheet was taking over this value. If a previous stylesheet had defined the <code>h2</code> font size as <code>30pt</code>, with <code>60%</code> ownership, and this stylesheet styled <code>h2</code>s as <code>20pt 40%</code>, the two values would be combined in proportion to their ownership percentages (60% × 30pt + 40% × 20pt = 26pt).</p><p>It is pretty clear how this proposal was made in the era of document-based HTML pages, as there is no way compromise-based design would work in our app-oriented world. Nevertheless, it did include the fundamental idea that stylesheets should cascade. In other words, it should be possible for multiple stylesheets to be applied to the same page.</p><p>This idea, in its original formulation, was generally considered important because it gave the end user control over what they saw. The original page would have one stylesheet, and the web user would have his or her own stylesheet, and the two would be combined to render the page. Supporting multiple stylesheets was viewed as a method of maintaining the personal freedom of the web, not as a way of supporting developers (who were still coding individual HTML pages by hand).</p><p>The user would even be able to control how much weight they gave to the suggestions of the page’s author, as expressed in an ASCII diagram in the proposal:</p>
            <pre><code>       User                   Author
Font   o-----x--------------o 64%
Color  o-x------------------o 90%
Margin o-------------x------o 37%
Volume o---------x----------o 50%</code></pre>
            <p>Like many of these proposals, it included features which would not make it into CSS for decades, if ever. For example, it was possible to write logical expressions based on the user’s environment:</p>
            <pre><code>AGE &gt; 3d ? background.color = pale_yellow : background.color = white
DISPLAY_HEIGHT &gt; 30cm ? http://NYT.com/style : http://LeMonde.fr/style</code></pre>
            <p>In a somewhat optimistic sci-fi vision of the future, it was believed your browser would know how relevant a given piece of content was to you, allowing it to show it to you at a larger size:</p>
            <pre><code>RELEVANCE &gt; 80 ? h1.font.size *= 1.5</code></pre>
            
    <div>
      <h3>You Know What Happened Next</h3>
      <a href="#you-know-what-happened-next">
        
      </a>
    </div>
    <blockquote><p><i>Microsoft is absolutely committed to open standards, especially on the Internet.</i></p><p><i>— John Ludeman </i><a href="http://1997.webhistory.org/www.lists/www-talk.1994q4/0003.html"><i><code><u>1994</u></code></i></a></p></blockquote><p>Håkon Lie went on to simplify his proposal and, working with Bert Bos, published the first version of the CSS spec in December of 1996. Ultimately he would go on to write his doctoral thesis on the creation of CSS, <a href="http://people.opera.com/howcome/2006/phd/">a document</a> which was heroically helpful to me in writing this.</p><p>Compared to many of the other proposals, one notable quality of CSS is its simplicity. It can be easily parsed, easily written, and easily read. As with many other examples over the history of the Internet, it was the technology which was easiest for a beginner to pick up which won, rather than the one which was most powerful in the hands of an expert.</p><p>It is itself a reminder of how incidental much of this innovation can be. For example, support for contextual selectors (<code>body ol li</code>) was only added because Netscape already had a method for removing borders from images that were hyperlinks, and it seemed necessary to implement everything the popular browser could do. The functionality itself added a significant delay to the implementation of CSS, as at the time most browsers didn’t keep a ‘stack’ of tags as they parsed HTML. This meant the parsers had to be redesigned to support CSS fully.</p><p>Challenges like this (and the widespread use of non-standard HTML tags to define style) meant CSS was not usable until 1997, and was not fully supported by any single browser until March of 2000. As any developer can tell you, browser support wasn’t anywhere close to standards-compliant until just a few years ago, more than fifteen years after CSS’ release.</p>
    <div>
      <h3>The Final Boss</h3>
      <a href="#the-final-boss">
        
      </a>
    </div>
    <blockquote><p><i>If Netscape 4 ignored CSS rules applied to the </i><i><code>&lt;body&gt;</code></i><i> element and added random amounts of whitespace to every structural element on your page, and if IE4 got </i><i><code>&lt;body&gt;</code></i><i> right but bungled padding, what kind of CSS was safe to write? Some developers chose not to write CSS at all. Others wrote one style sheet to compensate for IE4’s flaws and a different style sheet to compensate for the blunders of Netscape 4.</i></p><p><i>— Jeffrey Zeldman</i></p></blockquote><p>Internet Explorer 3 famously launched with (somewhat terrible) CSS support. To compete, it was decided that Netscape 4 should also have support for the language. Rather than doubling down on this third language (after HTML and JavaScript), though, Netscape decided to implement CSS by converting it into JavaScript and executing that. Even better, it decided this ‘JavaScript Style Sheet’ intermediary language should be <a href="https://web.archive.org/web/19970709133056/http://home.netscape.com/comprod/products/communicator/guide.html">accessible to web developers</a>.</p><p>The syntax is straight JavaScript, with the addition of some styling-specific APIs:</p>
            <pre><code>tags.H1.color = "blue";
tags.p.fontSize = "14pt";
with (tags.H3) {
  color = "green";
}

classes.punk.all.color = "#00FF00"
ids.z098y.letterSpacing = "0.3em"</code></pre>
            <p>You could even define functions which would be evaluated <i>every time the tag was encountered</i>:</p>
            <pre><code>function evaluate_style() {
  if (color == "red") {
    fontStyle = "italic";
  } else {
    fontWeight = "bold";
  }
}

tags.UL.apply = evaluate_style();</code></pre>
            <p>The idea that we should simplify the dividing line between styles and scripts is certainly reasonable, and is now even experiencing a resurgence of sorts in the <a href="https://facebook.github.io/react/tips/inline-styles.html">React community</a>.</p><p>JavaScript was itself a very new language at the time, but Microsoft had already reverse-engineered it and added support to Internet Explorer 3 (as “JScript”). The bigger issue was that the community had already rallied around CSS, and Netscape was, at this point, viewed as <a href="https://lists.w3.org/Archives/Public/www-style/1996Jun/0068.html">bullies</a> by much of the standards community. When Netscape did <a href="https://www.w3.org/Submission/1996/1/WD-jsss-960822">submit</a> JSSS to the standards committee, it fell on deaf ears. Three years later, Netscape 6 dropped support for JSSS and it died a (mostly) quiet death.</p>
    <div>
      <h3>What Might Have Been</h3>
      <a href="#what-might-have-been">
        
      </a>
    </div>
    <p>Thanks to some <a href="https://www.w3.org/Style/CSS/Test/CSS1/current/">public shaming</a> by the W3C, Internet Explorer 5.5 launched with nearly complete CSS1 support in the year 2000. Of course, as we now know, browser CSS implementations were heroically buggy and difficult to work with for at least another decade.</p><p>Today the situation has fortunately improved dramatically, allowing developers to finally realize the dream of writing code once and trusting it will function (almost) the same from browser to browser.</p><p>The conclusion to draw from all of this is just how arbitrary and contextual many of the decisions which govern our current tools really were. If CSS was designed the way it is just to satisfy the constraints of 1996, then maybe that gives us permission, 20 years later, to do things a little differently.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Apps]]></category>
            <category><![CDATA[History]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Programming]]></category>
            <guid isPermaLink="false">5yG9u87wVPpb6PEdtTLyXy</guid>
            <dc:creator>Zack Bloom</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Web's Silver Jubilee]]></title>
            <link>https://blog.cloudflare.com/the-webs-silver-jubilee/</link>
            <pubDate>Tue, 11 Mar 2014 17:00:00 GMT</pubDate>
            <description><![CDATA[ No matter what your age, it's hard to believe that the World-Wide Web is 25 today. For the young the web has always been part of their lives, for the older it seems like it was invented only yesterday. ]]></description>
            <content:encoded><![CDATA[ <p>No matter what your age, it's hard to believe that the World-Wide Web is 25 today. For the young, the web has always been part of their lives; for the older, it seems like it was invented only yesterday. But, in truth, the World-Wide Web sprang into life in the form of a document circulated at CERN entitled <a href="http://www.w3.org/History/1989/proposal.html">Information Management: A Proposal</a> in March 1989.</p><p>The document contains a simple diagram proposing that "browsers" on heterogeneous machine types would be able to access Hypertext Servers to view information.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58uUWe37WW6ZNme2exf8Qq/0b174eb470b5c75bcc9280212eeb03b1/Image2.gif" />
            
            </figure><p>In one of the great understatements of computing history, Tim Berners-Lee, the author of the proposal, wrote a parenthetical comment that allowing heterogeneous machine types to access these proposed Hypertext Servers would be "a boon for the world in general".</p><p>The most visible part of the explosion of the World-Wide Web is that we are all using it every day. But we don't often stop to think about the technical changes that underlie the web browsers that we use. Part of CloudFlare's work is to stay on top of the web as it changes so that everyone with a web server has the latest technology.</p><p>Here's a brief timeline of significant changes in web technology.</p><p>March 12, 1989: Tim Berners-Lee outlines the web in "Information Management: A Proposal".</p><p>Christmas Day, 1990: Berners-Lee releases the first version of his <a href="https://en.wikipedia.org/wiki/WorldWideWeb">WorldWideWeb</a> web browser.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5mV942cgsoa2R2LVn8yPSo/6c1a4fe2034c3ceec22b0ad7c46b7ac2/screensnap2_24c.gif" />
            
            </figure><p>1991: The first HTTP standard, <a href="http://www.w3.org/Protocols/HTTP/AsImplemented.html">HTTP/0.9</a>, is written up. It reflects the state of HTTP as implemented by early web browsers.</p><p>March 9, 1992: The <a href="https://en.wikipedia.org/wiki/ViolaWWW">ViolaWWW</a> browser is released. It remains popular until Mosaic is released.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1WEa2MqP4DgnpzCeOYP6Xg/5435fac05bdf35692c4b3218d8bcfc90/ViolaWWW.png" />
            
            </figure><p>July 1992: Port 80 is assigned for the HTTP protocol in <a href="https://www.ietf.org/rfc/rfc1340.txt">RFC 1340</a>.</p><p>November 3, 1992: Internal CERN document entitled <a href="http://www.w3.org/History/19921103-hypertext/hypertext/WWW/MarkUp/Tags.html">HTML Tags</a> is first HTML specification of any kind.</p><p>January 23, 1993: The <a href="https://en.wikipedia.org/wiki/Mosaic_(web_browser)">Mosaic</a> web browser is released and becomes very popular.</p><p>February 25, 1993: <a href="http://1997.webhistory.org/www.lists/www-talk.1993q1/0182.html">Marc Andreessen proposes</a> that the web should have an <code>&lt;img&gt;</code> tag so that images could be displayed in a browser along with text.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/JqVP9q7Z4X2XzqSXLNsEC/1230d6bf8c9c0c74272ef44231a9cc02/Mosaic_Netscape_0.9_on_Windows_XP.png" />
            
            </figure><p>1994: The first mobile web browser, PocketWeb for the Apple Newton, is released.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7npSwGp9mp94ThnLthCrMU/5f26a2ee4a942a0cf9698386fe35d40f/url.gif" />
            
            </figure><p>February 1995: Netscape <a href="https://web.archive.org/web/19970614020952/http://home.netscape.com/newsref/std/SSL.html">introduced SSL</a> for secure HTTP connections.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68m0R3utBQaJnOboHEHicW/5c72e5c81d51cdac0fdc9436f65beeec/tumblr_l2l2c7PQqa1qzqua1o1_500.jpg" />
            
            </figure><p>September 1995: Netscape introduces <a href="https://en.wikipedia.org/wiki/JavaScript">JavaScript</a>.</p><p>November 24, 1995: HTML 2.0 is described in <a href="https://tools.ietf.org/html/rfc1866">RFC 1866</a>.</p><p>1996: Macromedia introduces <a href="http://en.wikipedia.org/wiki/Adobe_Flash">Flash</a>. It becomes Adobe Flash in 2005.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qA3PosbakbVIBbXYqTDDt/050a5c772572ec3f1276486a13eca51b/befor-flash1.png" />
            
            </figure><p>May 1996: The specification for <a href="http://www.ietf.org/rfc/rfc1945.txt">HTTP/1.0</a> is released as RFC 1945.</p><p>November 1996: SSL 3.0 is released (it is described in <a href="https://tools.ietf.org/html/rfc6101">RFC 6101</a>). The W3C issues a <a href="http://www.w3.org/TR/WD-xml-961114.html">Working Draft</a> describing XML.</p><p>December 17, 1996: CSS Level 1 is published as a <a href="http://www.w3.org/TR/CSS1/">recommendation</a> by the W3C.</p><p>January 1997: The specification for <a href="http://www.ietf.org/rfc/rfc2068.txt">HTTP/1.1</a> is released as RFC 2068. The W3C releases a draft specification for <a href="http://www.w3.org/TR/REC-html32">HTML 3.2</a>.</p><p>December 18, 1997: The specification for <a href="http://www.w3.org/TR/REC-html40-971218/">HTML 4.0</a> is released.</p><p>1998: Microsoft introduces <a href="http://www.alexhopmann.com/xmlhttp.htm">XMLHTTP</a> as part of work being done on <a href="https://en.wikipedia.org/wiki/Outlook_Web_App">Outlook Web Access</a>. XMLHTTP later became XMLHTTPRequest and helped kick off Web 2.0.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4np2lJkLEfZMTdXrIb407C/5af83ebaa2c88dc5e0e0856bf62367c3/OWA_2000_interface.gif" />
            
            </figure><p>May 12, 1998: W3C publishes a specification for <a href="http://www.w3.org/TR/2008/REC-CSS2-20080411/">CSS Level 2</a>.</p><p>December 1998: IPv6 is written up in <a href="https://tools.ietf.org/html/rfc2460">RFC 2460</a>.</p><p>January 1999: TLS 1.0 is described in <a href="https://tools.ietf.org/html/rfc2246">RFC 2246</a>. It is destined to replace SSL.</p><p>December 1999: The specification of <a href="http://www.w3.org/TR/html401/">HTML 4.01</a> is released.</p><p>December 2000: Mozilla introduces support for XMLHTTPRequest in Gecko 0.6.</p><p>February 2004: Apple adds support for XMLHTTPRequest to Safari 1.2.</p><p>February 18, 2005: The term AJAX is used to <a href="http://www.adaptivepath.com/ideas/ajax-new-approach-web-applications/">describe</a> dynamic web sites using XMLHTTPRequest and JavaScript.</p><p>April 2006: TLS 1.1 is described in <a href="https://tools.ietf.org/html/rfc4346">RFC 4346</a>. The W3C releases a <a href="http://www.w3.org/TR/2006/WD-XMLHttpRequest-20060405/">Working Draft</a> describing XMLHTTPRequest.</p><p>October 2006: Microsoft Internet Explorer 7 supports XMLHTTPRequest.</p><p>January 2008: First draft version of <a href="http://www.w3.org/TR/html5/">HTML 5</a> is released.</p><p>August 2008: TLS 1.2 is described in <a href="https://tools.ietf.org/html/rfc5246">RFC 5246</a>.</p><p>April 12, 2011: <a href="http://www.w3.org/TR/2011/PR-CSS2-20110412/">CSS Level 2.1</a> becomes a W3C Proposed Recommendation.</p><p>November 2011: <a href="http://dev.chromium.org/spdy/spdy-whitepaper">SPDY</a> is introduced by Google.</p><p>December 2011: <a href="http://tools.ietf.org/html/rfc6455">RFC 6455</a> describes <a href="http://en.wikipedia.org/wiki/WebSocket">WebSockets</a>.</p><p>February 2012: SPDY Version 3 is <a href="http://tools.ietf.org/html/draft-mbelshe-httpbis-spdy-00">described</a> in an Internet Draft.</p><p>November 2013: SPDY Version 3.1 is <a 
href="http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1">specified</a>. Work on SPDY continues as part of <a href="http://en.wikipedia.org/wiki/HTTP_2.0">HTTP/2.0</a>.</p><p>And so the web continues to evolve. New specifications and new protocols are tested and defined. HTML5 is an ongoing effort, as is <a href="https://en.wikipedia.org/wiki/Cascading_Style_Sheets#CSS_3">CSS Level 3</a>. Google recently began experimenting with a new web protocol called <a href="/staying-up-to-date-with-the-latest-protocols-spdy-3-1">QUIC</a>.</p><p>CloudFlare helps customers stay on top of the ever-changing web with features like <a href="/introducing-cloudflares-automatic-ipv6-gatewa">automatic IPv6</a>, support for <a href="/staying-up-to-date-with-the-latest-protocols-spdy-3-1">SPDY/3.1</a>, and complete support for <a href="/introducing-strict-ssl-protecting-against-a-man-in-the-middle-attack-on-origin-traffic">TLS</a>.</p><p>25 years on, the web is still growing, evolving and changing. Here's to 25 more years!</p><p>PS If reading that list of changes wasn't enough nostalgia for you... <a href="http://info.cern.ch/hypertext/WWW/TheProject.html">visit the web page that got it all started</a>.</p><p>When Tim Berners-Lee wrote the original proposal, he sent it to his boss, who wrote on the top of it <a href="http://info.cern.ch/Proposal.html">Vague but exciting...</a>. It did turn out to be the latter!</p> ]]></content:encoded>
            <category><![CDATA[History]]></category>
            <category><![CDATA[spdy]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[SSL]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">7mndyDQaIXvpdixV119Gfe</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
    </channel>
</rss>