DNS with Cloudflare

I’ve been looking for a new DNS provider for a little while, and I’ve settled on CloudFlare. I like the way they give back to the community with Universal SSL, I like that they’re consistently rated as fast, and I like that they’re planning to implement DNSSEC (somebody needs to move in this area). I also like the price (free), even though I’d happily pay for the service if I needed to. I looked at Dyn (cheap, good performance, good features) and NSone (free and good performance) but was turned off in both cases by Terms of Service that allow them to use my name in promotional materials without my consent.

I’m still undecided on whether to use CloudFlare as a CDN but I’ve already seen benefit in using them for DNS. WebPageTest shows DNS resolution times out of Sydney to be a few hundred milliseconds faster than my old provider, AWS, and resolution times from Austin and New York are in the low tens of millis, which is great.

CloudFlare have impressed technically and socially, and that’s a great outcome.

Developing on an iPad

Surely I won’t need my laptop

I pack lightly when I travel, so I left my laptop at home on a recent overseas trip to visit family. I didn’t anticipate writing any code or doing anything technical on holidays because I was tired from work, so leaving my laptop at home seemed like a good choice. And then I relaxed, found a bit of mental energy, and wanted to do exactly what I’d expected not to do. So there I was with my iPad and Bluetooth keyboard, eager to do something technical and figuring it’d be interesting to have a go with what was at hand.

And then I attempt a significant upgrade using the iPad

I decided to try a significant upgrade of Nikola, which I use to publish this site. I’d fallen behind several versions, and there were a number of non-trivial breaking changes to work through. Most of the task was Unix command line, text-file editing, a bit of Python, reviewing web pages and reading doco. I’d normally have done it offline on my MacBook, but my tools in this case were:

  • A 3rd-gen iPad
  • A Logitech Ultrathin keyboard cover
  • Prompt by Panic (a terminal emulator with SSH)
  • A browser (or two)
  • My EC2 instance running CentOS 6
  • vi for editing

It’s quite effective, once you get used to it

Here’s what I found:

  • The Logitech keyboard lacks an ESC key and has the home button in its place. vi makes extensive use of ESC, and I have many years of muscle memory with it at the top left of the keyboard, so I found myself being kicked out of Prompt with great regularity. Fortunately Prompt allows me to create an ESC soft key, and I’ve positioned it just above the physical home key, which has helped considerably.
  • Prompt doesn’t seem to have a copy-and-paste function, which is very frustrating; I really hope Prompt 2 has it. I tried to set up iSSH but couldn’t get access to my private SSH key, so I found myself using :r !grep -A 5 search_term filename in vi to selectively suck the contents of files into my editor session.
  • I miss being able to view two apps side-by-side, but a half-swipe in the app switcher is a pretty good workaround. The high-res retina display really shines in this use-case…
  • but the app switcher is slow on my iPad 3 (it seemed to slow down with the iOS 6-to-7 upgrade, I think)
  • Having my dotfiles accessible on github made it quite easy to recreate the important bits of my editing environment on the EC2 instance.
  • tmux rocks: it gives me split-screen editing and trivial session restoration, which is important because…
  • Prompt loses connectivity to the server after a couple of minutes in the background, but setting Prompt’s initial session command to tmux attach || tmux, along with the ssh agent, makes restoring my current state a five-second process.
  • I miss being able to “print” PDFs of some pages, which is helpful for capturing receipts. There’s no obvious solution but needing to live without something helps question its value, and I’m now less attached to capturing receipts in this manner.

And I’d do it again

I was surprised by how effectively I was able to work. The experience would be better with a terminal client that supported copy-and-paste, a client that maintained connectivity for longer, and a keyboard with a physical ESC key, but otherwise it was quite acceptable. It’s not my work platform of choice, but if I didn’t expect to do a significant amount of work, and had a stable network connection, I’d do it again.

Privacy with HTTPS

Privacy is a right

Everyone has a right to read and communicate in private, without fear of eavesdropping by a person or government, but that right isn’t honoured. This right is not related to the subject material, and the desire for privacy should never be used to imply guilt. To make it possible to read this site in private, it can now be accessed over HTTPS. It doesn’t matter to me that this is a personal, low-traffic site, nor is it important that the content is unlikely to offend; everyone has a right to read and communicate in private.

I found enabling HTTPS to be an inexpensive and reasonably simple operation that hasn’t noticeably affected the performance of the site.

Enabling HTTPS isn’t costly

I’m using a 12-month free SSL certificate from StartSSL. The reputation of certificate providers is very important, and while I found it hard to find recommendations on reputable providers, these guys seem to be OK. When renewal time comes, a 12-month single-domain certificate from them is $49, and I’m told that there are free options for non-commercial use.

Enabling HTTPS isn’t that complicated

It took a few hours to set up, but most of that was because I like this sort of change to be repeatable, so I did my usual dance with version control for all the keys, certificates and artifacts, and then introduced HTTPS to the site via my Ansible playbooks. When I started, I wasn’t particularly familiar with certificate formats or signing requests, but I followed a StartSSL- and nginx-oriented walkthrough which helped immensely. I used Qualys’ SSL Server Test to validate my setup.
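For the repeatable part, the key and certificate-signing-request generation boils down to a couple of openssl commands. A sketch, assuming the openssl CLI; the filenames and subject are illustrative, and StartSSL’s web interface takes the CSR and returns the signed certificate:

```shell
# Generate a 2048-bit RSA private key (keep this out of public repos!)
openssl genrsa -out wordspeak.org.key 2048

# Create a certificate signing request for the site's hostname
openssl req -new -key wordspeak.org.key \
    -subj "/CN=www.wordspeak.org" -out wordspeak.org.csr

# Sanity-check the request before submitting it to the CA
openssl req -in wordspeak.org.csr -noout -verify
```

Keeping these steps in a script (or playbook) means renewal time is a re-run rather than a re-learn.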

HTTPS isn’t slow

Once I applied a few well-understood optimisations, it’s almost the same speed as my HTTP setup. I began with the “more complete nginx config” mentioned in the StartSSL and nginx-oriented walkthrough and read through some writing by Ilya Grigorik, particularly Optimizing NGINX TLS Time To First Byte (TTTFB). Some of the optimisations required a newer version of nginx, and I was thrilled to find that nginx maintains a CentOS repo, which made the upgrade process trivial (no need to build from source, or admit defeat and stick with the standard CentOS 6 version). The greatest performance improvement came from enabling OCSP stapling.
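For reference, stapling amounts to a few directives in the nginx server block. This is a sketch rather than my exact config — the certificate-chain path and resolver address are illustrative:

```nginx
ssl_stapling on;
ssl_stapling_verify on;
# Chain of intermediate certificates, so nginx can validate the OCSP response
ssl_trusted_certificate /etc/nginx/ssl/startssl-chain.pem;
# DNS resolver nginx uses to reach the CA's OCSP responder
resolver 8.8.8.8;
```

With stapling on, the server fetches and caches the OCSP response itself, so clients don’t pay for a separate round trip to the CA during the handshake.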

So how much slower is HTTPS?

My server is hosted by Rackspace in their Sydney datacentre. I’ve used WebPageTest to record the time to document complete, with the test client in Sydney and in San Jose:

  • HTTP with a Sydney client: 0.39s
  • HTTPS with a Sydney client: 0.42s
  • HTTP with a San Jose client: 1.78s
  • HTTPS with a San Jose client: 1.96s

So the difference is really just the extra round trips of the TLS handshake, within a margin of error.

What now?

There are still a few more changes to make: enabling HTTP Strict Transport Security once I’m comfortable with my setup, and reviewing and understanding the optimisations described at Is TLS Fast Yet?. Oh yeah, that and enjoying that people can read in private.
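When I do turn on HSTS, it should be a one-line addition to the nginx server block. A sketch, with an illustrative max-age of one year (a short max-age is safer while testing, since the policy is hard to back out of once browsers have cached it):

```nginx
# Instructs browsers to use HTTPS for this host for the next year
add_header Strict-Transport-Security "max-age=31536000";
```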

Words and Pictures - Jan 2014

Building on November’s feat of watching a whole film on DVD, my wife and I escaped to IMAX to see something current! Astonishing! Here’s the full list of highlights from the month.

Reading:

  • The Builder’s High - Rands in Repose. There’s such joy and satisfaction in creating something.
  • The NSA and the Corrosion of Silicon Valley - Michael Dearing. The NSA’s activity is hurting US business. I’m actively avoiding using US-based companies where possible because of the US Government’s overreach and the NSA’s programs.
  • Command and Control - Eric Schlosser. An engaging look at the controls and technology behind the USA’s Cold-War nuclear weapons systems, and the accidents that have occurred with those weapons systems. Having read this, I’m utterly amazed that there has not been a major incident with the US nuclear arsenal.
  • 2014 Gates Annual Letter: Myths About Foreign Aid - Gates Foundation - Bill Gates. A long but informative piece, and well worth reading. I’m so glad we have thoughtful, empirically driven philanthropists like the Gates family.

Watching:

  • Breach - The story behind Robert Hanssen, an FBI agent convicted of spying for the Soviet Union. Ryan Phillippe and Chris Cooper are incredibly good.
  • Gravity - Seeing this at IMAX was a totally captivating experience. Brilliant.

Finding out whether a machine answers for a DNS name (including EC2)

The deployment script for this site is designed to be run from several different machines (mainly due to sketchy connectivity during my commute to work). The script copies files to a staging server and to the production server, via local rsync or rsync over ssh. I don’t store all private keys on all deployment machines, so I need logic to use a local rsync when the deployment machine actually hosts the site being deployed. This logic needs to handle two situations: where the DNS name resolves directly to the machine (i.e. a.b.c resolves to 1.2.3.4, and 1.2.3.4 is an IP address on an interface of the machine that hosts a.b.c), and where the host is behind a load balancer, only has an IP in the private address space, and split-horizon DNS is in place.

I had the first situation before I used Amazon Route 53 for DNS, where a lookup of www.wordspeak.org from the EC2 instance returned the private IP address (172.31.x.y):

wordspeak.org        A       54.252.214.49
www.wordspeak.org    CNAME   ec2-54-252-214-49.ap-southeast-2.compute.amazonaws.com

I have the second situation now, where I use Amazon Route 53 ALIAS records to minimise the number of DNS changes necessary when I rebuild my EC2 instance. In this case, a lookup of www.wordspeak.org from the EC2 instance returns the public IP address (54.252.214.49):

wordspeak.org.       A       54.252.214.49
www.wordspeak.org    A       ALIAS wordspeak.org. (ze9bxr3mkt7lx)

While Amazon has a way for an EC2 instance to find its public address, my logic needs to be portable so it works on all the other (non-EC2) hosts that run the deployment script.

So here it is in Python:

import socket


def does_this_machine_answer_for_this_hostname(dns_name):
    """Looks at DNS and local interfaces to see if this host answers for the
    DNS name in question.

    Caveats:
    - Won't work reliably if the DNS entry resolves to more than one address
    - Assumes the interface configured with the IP associated with the host's
      hostname is actually the interface that accepts public traffic
      associated with the DNS name in question
    """
    try:
        my_main_ip = socket.gethostbyname(socket.getfqdn())
    except socket.gaierror:
        # Can't resolve our hostname to a public IP, so we're probably going
        # to be referring to ourselves by localhost; allocate an IP address
        # accordingly.
        my_main_ip = "127.0.0.1"

    # Do a round-trip (name -> address -> name -> address) so that we match
    # when the host is behind a load balancer and doesn't have a public IP
    # address on an interface (assumes split-horizon DNS is configured to
    # resolve names to internal addresses), e.g. AWS
    return my_main_ip == socket.gethostbyname(
        socket.gethostbyaddr(socket.gethostbyname(dns_name))[0])

And in bash/zsh:

function does_this_machine_answer_for_this_hostname () {
    # e.g. if does_this_machine_answer_for_this_hostname staging.wordspeak.org; then ...
    local my_main_ip resolved_ip
    my_main_ip=$(dig +short "$(hostname --fqdn)")
    # Same round-trip as the Python version: name -> address -> name -> address
    resolved_ip=$(dig +short "$(dig +short -x "$(dig +short "$1")")")
    [ "${my_main_ip}" = "${resolved_ip}" ]
}

I hope you find it useful.