
The weird and wonderful world of DNS LOC records

2014-04-01


A cornerstone of CloudFlare's infrastructure is our ability to serve DNS requests quickly and handle DNS attacks. To do both those things we wrote our own authoritative DNS server, called RRDNS, in Go. Because of it, we've been able to fight off DNS attacks and be consistently one of the fastest DNS providers on the web.

Implementing an authoritative DNS server is a large task. That's in part because DNS is a very old standard (RFC 1035 dates to 1987), in part because as DNS has developed it has grown into a more and more complex system, and in part because what's written in the RFCs and what happens in the real-world aren't always the same thing.

One little-used type of DNS record is LOC (location), which allows you to specify a physical location. CloudFlare handles millions of DNS records; of those just 743 are LOCs. Nevertheless, it's possible to set up a LOC record in the CloudFlare DNS editor.

[Image: Trinity]

My site geekatlas.com has a LOC record as an Easter Egg. Here's how it's configured in the CloudFlare DNS settings:

[Image: LOC record in the CloudFlare DNS settings]

When you operate at CloudFlare scale, the little-used nooks and crannies turn out to be important. And even though there are only 743 LOC records in our entire database, at least one customer contacted support to find out why their LOC record wasn't being served.

And that sent me into the RRDNS source code to find out why.

The answer was simple. Although RRDNS had code for receiving requests for LOC records and for creating response packets containing LOC data, there was a missing link. The CloudFlare DNS server stores the LOC record as a string (such as the 33 40 31 N 106 28 29 W 10m above) and no one had written the code to parse that string and turn it into the internal format. Oops.

The textual LOC format and the binary, on-the-wire format are described in RFC 1876, one of the many RFCs that have updated the original 1987 standard; it dates from 1996.

The textual format is fairly simple. Here's what the RFC says:

The LOC record is expressed in a master file in the following format:

<owner> <TTL> <class> LOC ( d1 [m1 [s1]] {"N"|"S"} d2 [m2 [s2]]
                            {"E"|"W"} alt["m"] [siz["m"] [hp["m"]
                            [vp["m"]]]] )

where:

   d1:     [0 .. 90]            (degrees latitude)
   d2:     [0 .. 180]           (degrees longitude)
   m1, m2: [0 .. 59]            (minutes latitude/longitude)
   s1, s2: [0 .. 59.999]        (seconds latitude/longitude)
   alt:    [-100000.00 .. 42849672.95] BY .01 (altitude in meters)
   siz, hp, vp: [0 .. 90000000.00] (size/precision in meters)

If omitted, minutes and seconds default to zero, size defaults to 1m,
horizontal precision defaults to 10000m, and vertical precision
defaults to 10m.  These defaults are chosen to represent typical
ZIP/postal code area sizes, since it is often easy to find
approximate geographical location by ZIP/postal code.

So, latitude, longitude and altitude are required, plus three optional values giving the size of the location and precision information. Pretty simple.
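That grammar is compact enough to parse by hand. As a minimal sketch of the idea in Go (this is not the RRDNS code; the LOC struct, parseAngle and parseLOC names are mine, and parsing of the optional size/precision fields is omitted), the required fields can be pulled out of a whitespace-split string like this:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// LOC holds the parsed fields of a textual LOC record.
type LOC struct {
	LatDeg, LatMin        int
	LatSec                float64
	NorthSouth            string // "N" or "S"
	LonDeg, LonMin        int
	LonSec                float64
	EastWest              string // "E" or "W"
	AltMeters             float64
	SizeM, HPrecM, VPrecM float64
}

// parseAngle consumes "d [m [s]] {N|S}" (or {E|W}) from the token list
// and returns the parsed values plus the remaining tokens.
func parseAngle(tok []string, hemis string) (d, m int, s float64, hemi string, rest []string, err error) {
	if len(tok) == 0 {
		return 0, 0, 0, "", nil, fmt.Errorf("missing degrees")
	}
	if d, err = strconv.Atoi(tok[0]); err != nil {
		return
	}
	tok = tok[1:]
	// Minutes and seconds are optional: stop at the hemisphere letter.
	if len(tok) > 0 && !strings.Contains(hemis, tok[0]) {
		if m, err = strconv.Atoi(tok[0]); err != nil {
			return
		}
		tok = tok[1:]
	}
	if len(tok) > 0 && !strings.Contains(hemis, tok[0]) {
		if s, err = strconv.ParseFloat(tok[0], 64); err != nil {
			return
		}
		tok = tok[1:]
	}
	if len(tok) == 0 || !strings.Contains(hemis, tok[0]) {
		return 0, 0, 0, "", nil, fmt.Errorf("missing hemisphere (%s)", hemis)
	}
	return d, m, s, tok[0], tok[1:], nil
}

// parseLOC parses the required latitude, longitude and altitude; the
// optional size/precision fields are left at the RFC 1876 defaults.
func parseLOC(text string) (*LOC, error) {
	tok := strings.Fields(text)
	loc := &LOC{SizeM: 1, HPrecM: 10000, VPrecM: 10}

	var err error
	if loc.LatDeg, loc.LatMin, loc.LatSec, loc.NorthSouth, tok, err = parseAngle(tok, "NS"); err != nil {
		return nil, err
	}
	if loc.LonDeg, loc.LonMin, loc.LonSec, loc.EastWest, tok, err = parseAngle(tok, "EW"); err != nil {
		return nil, err
	}
	if len(tok) == 0 {
		return nil, fmt.Errorf("missing altitude")
	}
	// The altitude may carry a trailing "m".
	if loc.AltMeters, err = strconv.ParseFloat(strings.TrimSuffix(tok[0], "m"), 64); err != nil {
		return nil, err
	}
	return loc, nil
}

func main() {
	loc, err := parseLOC("33 40 31 N 106 28 29 W 10m")
	fmt.Println(loc, err)
}

Running it against the 33 40 31 N 106 28 29 W 10m record above yields the degree/minute/second fields plus the RFC defaults for size and precision.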

Then there's the on-the-wire format. Unlike a TXT record, the LOC record data is parsed and turned into a fixed-size binary format. Back to RFC 1876:

  MSB                                           LSB
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  0|        VERSION        |         SIZE          |
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  2|       HORIZ PRE       |       VERT PRE        |
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  4|                   LATITUDE                    |
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  6|                   LATITUDE                    |
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
  8|                   LONGITUDE                   |
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
 10|                   LONGITUDE                   |
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
 12|                   ALTITUDE                    |
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
 14|                   ALTITUDE                    |
   +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

So, 32 bits each for latitude, longitude and altitude, and then three 8-bit values for the size and precision. The latitude and longitude values have a pretty simple encoding that treats the 32 bits as an unsigned integer:

The latitude of the center of the sphere described by the SIZE field, expressed as a 32-bit integer, most significant octet first (network standard byte order), in thousandths of a second of arc.  2^31 represents the equator; numbers above that are north latitude.

And the altitude can be below sea level but is still unsigned:

The altitude of the center of the sphere described by the SIZE field, expressed as a 32-bit integer, most significant octet first (network standard byte order), in centimeters, from a base of 100,000m below the [WGS 84] reference spheroid used by GPS.
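Turning the parsed values into those 32-bit wire integers is then a little arithmetic. Continuing the sketch from above (still illustrative, not the RRDNS implementation; the locWire struct and helper names are mine):

// The fixed 16-byte RDATA, mirroring the diagram above.
type locWire struct {
	Version, Size, HorizPre, VertPre uint8
	Latitude, Longitude, Altitude    uint32
}

const (
	equator   = 1 << 31      // 2^31 thousandths of a second of arc
	altBaseCM = 100000 * 100 // altitude is measured from 100,000m below the WGS 84 spheroid
)

// toWireAngle packs degrees/minutes/seconds into thousandths of a second
// of arc, offset so that 2^31 is the equator (or the prime meridian).
// positive is true for "N" latitude or "E" longitude.
func toWireAngle(deg, min int, sec float64, positive bool) uint32 {
	thousandths := int64(deg)*3600000 + int64(min)*60000 + int64(sec*1000)
	if !positive {
		thousandths = -thousandths
	}
	return uint32(int64(equator) + thousandths)
}

// toWireAltitude converts meters into centimeters above the -100,000m base.
func toWireAltitude(meters float64) uint32 {
	return uint32(int64(meters*100) + altBaseCM)
}

For the record above, 33 40 31 N works out to 2^31 + 121,231,000 thousandths of a second of arc, and 10m of altitude becomes 10,001,000cm.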

But the 8-bit values use a very special encoding that packs a wide range of approximate values into 8 bits and is also human-readable when dumped out in hex!

The diameter of a sphere enclosing the described entity, in centimeters, expressed as a pair of four-bit unsigned integers, each ranging from zero to nine, with the most significant four bits representing the base and the second number representing the power of ten by which to multiply the base.  This allows sizes from 0e0 (<1cm) to 9e9 (90,000km) to be expressed.  This representation was chosen such that the hexadecimal representation can be read by eye; 0x15 = 1e5.

For example, the value 0x12 means 1 * 10^2 or 100cm. 0x99 means 9 * 10^9cm or 90,000km. The smallest non-zero value that can be represented is 1cm (it's 0x10). So, in just 8 bits there's a range of values from 1cm to larger than the radius of Jupiter.
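Here's what that looks like as a couple of Go helpers (again, just a sketch; decodeSize and encodeSize are made-up names, not part of RRDNS):

// decodeSize expands an RFC 1876 size/precision byte: the high nibble is
// a single digit, the low nibble is the power of ten. Result is in centimeters.
func decodeSize(b byte) uint64 {
	v := uint64(b >> 4)
	for i := b & 0x0f; i > 0; i-- {
		v *= 10
	}
	return v
}

// encodeSize does the reverse, rounding down to one significant digit.
func encodeSize(cm uint64) byte {
	var exp byte
	for cm >= 10 {
		cm /= 10
		exp++
	}
	return byte(cm)<<4 | exp
}

decodeSize(0x12) returns 100 (1m) and decodeSize(0x99) returns 9,000,000,000cm (90,000km), matching the examples above; encodeSize necessarily rounds down to a single significant digit, which is inherent to the format.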

To fix this I wrote a parser for the LOC text record type (and associated tests). It can be found here.

We've now rolled out the fix and all the existing LOC records are being served by RRDNS. For example, my geekatlas.com LOC record can be queried like this:

$ dig geekatlas.com LOC
; <<>> DiG 9.8.3-P1 <<>> geekatlas.com LOC
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2997
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:    
;geekatlas.com.         IN  LOC

;; ANSWER SECTION:
geekatlas.com.      299 IN  LOC 33 40 31.000 N 106 28 29.000 W 10.00m 1m 10000m 10m

;; Query time: 104 msec
;; SERVER: 192.168.14.1#53(192.168.14.1)
;; WHEN: Tue Apr  1 14:13:48 2014
;; MSG SIZE  rcvd: 59
