Making the Touch Bar useful

I’ve had a Touch Bar-equipped MacBook Pro for about six months. Until recently, I only used the Touch ID sensor with any regularity; I couldn’t see a use for the other buttons, and the switch from buttons to sliders was a usability regression. I recently read Making the Touch Bar finally useful and discovered how BetterTouchTool can be used to customise the Touch Bar. Wow.

Here’s my new Touch Bar:

My customised Touch Bar

From left to right:

  • The escape key. As a user of an Apple iPad keyboard that lacks an escape key, I’ve mostly retrained muscle memory to use ctrl+[ for escape. I still use the key occasionally (and I’d still rather have a physical key).
  • Battery info. I had battery info in the menu bar but the Touch Bar is more noticeable, and the extra space allows for more info. This helps me be more aware of battery-hungry apps, and now my battery lasts longer.
  • ADSL connection info. My ADSL modem exposes this info via SNMP. It was previously in the menu bar as a custom BitBar plugin; like the battery info, it’s more visible and useful in the Touch Bar.
  • www.wordspeak.org ping time. This gives me an indication of my upstream connection quality; it’s not there as monitoring. When touched, it opens an Alacritty terminal to my hosting machine, with the appropriate colour scheme.
  • Gateway ping time. Like the other ping-time widget, this gives me an indication of link congestion, which often happens when devices are doing cloud backups or uploads. When touched, it also opens a custom Alacritty terminal. A sketch of this kind of widget script appears after this list.
  • Coffee time! Puts the laptop to sleep.
  • Volume and brightness. Buttons, not sliders. The Touch Bar is a small target, and I found it hard to set the brightness or volume correctly with the sliders that Apple uses in the default configuration.
  • Weather. Live, local weather from the Bureau of Meteorology, implemented as a shell script.
  • Clock. It’s in the menu bar too, but I notice it more in this location.
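
For the curious, here’s a rough sketch of the sort of shell script that can drive the ping-time widgets. BetterTouchTool’s shell script widget displays whatever the script prints; the gateway address and label format below are placeholders rather than my exact setup.

#!/bin/sh
# Sketch of a BetterTouchTool shell-script widget: BTT runs this on an interval
# and shows whatever it prints as the widget text.
GATEWAY=192.168.1.1   # placeholder address - substitute your own gateway

# Pull the round-trip time out of a single ping reply (e.g. "time=1.234 ms")
RTT=$(ping -c 1 "$GATEWAY" | awk -F'time=' '/time=/ {split($2, a, " "); print a[1]}')

if [ -n "$RTT" ]; then
    printf 'GW %sms\n' "$RTT"
else
    printf 'GW down\n'
fi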

The Touch Bar is a well-implemented piece of technology with a poor default configuration. Paying a few dollars for a BTT licence to make it useful is a good move.

Smaller images with single-step compression

I’m reviewing image sizes to improve download times in my photo galleries, and I’ve obtained the smallest file sizes by performing a single compression step rather than letting each tool in my image pipeline perform its own compression.

Each tool in my workflow has defaults that work well if there is no subsequent or preceding compression, but they produce sub-optimal results when used in an image pipeline where each tool performs compression. My workflow is:

  1. Load and edit photos in Apple’s Photos.app.
  2. Export from Photos.app. I choose a “JPEG Quality” level at export time.
  3. Stamp copyright and licensing info using ExifTool (a sketch of this step appears after this list).
  4. Resize images as part of image gallery creation using Pillow, which generally involves a compression step.
  5. Perform final optimisation using imageOptim. I’ve used this tool in the past to reduce JPEG sizes with great success, and it’s consistently given me the best image compression. Adding this step was the trigger for this investigation.
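
The ExifTool step is a one-liner, and since ExifTool only rewrites metadata it doesn’t recompress the image data. A sketch of it, with a placeholder copyright string, licence URL and path rather than my actual metadata:

$ exiftool -overwrite_original \
    -Copyright="Copyright Example Author" \
    -XMP-cc:License="https://creativecommons.org/licenses/by-nc/4.0/" \
    gallery_originals/*.jpg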

I ran experiments on my Arches gallery, whose photos have a total uncompressed size of 103.6MB. I tried four permutations of compression with the results below, noting that the Pillow step also includes resizing:

  1. Compression by Photos.app, Pillow and imageOptim: Final size 4.6MB
    • Photos.app (medium quality) 103.6MB → 19.1MB
    • Pillow (75% quality) 19.1MB → 5.2MB
    • imageOptim (74% quality) 5.2MB → 4.6MB
  2. Compression by Pillow and imageOptim: Final size 4.6MB
    • Photos.app (maximum quality) 103.6MB → 103.6MB
    • Pillow (75% quality) 103.6MB → 5.2MB
    • imageOptim (74% quality) 5.2MB → 4.6MB
  3. Compression by JPEGmini and imageOptim: Final size 4.3MB
    • Photos.app (maximum quality) 103.6MB → 103.6MB
    • Pillow (100% quality) 103.6MB → 27.8MB
    • JPEGmini (no user-selectable settings) 27.8MB → 7.1MB
    • imageOptim (74% quality) 7.1MB → 4.3MB
  4. Compression by imageOptim only: Final size 4.1MB (best result)
    • Photos.app (maximum quality) 103.6MB → 103.6MB
    • Pillow (100% quality) 103.6MB → 27.8MB
    • imageOptim (74% quality) 27.8MB → 4.1MB

The best result comes from performing a single compression step with the best compression tool.

I expected that the individual compression tools would have complementary compression schemes and that chaining them together would give the best compression. I now suspect that imageOptim uses all the types of compression scheme available in the other tools, and gets the best result because it can take an image with low entropy (i.e. an uncompressed image) and introduce all the entropy (compression) in a single step.

Disincentives and Photo Hosting

I’ve been searching for a simple, scriptable photo hosting tool so that I can move off Flickr, and I’ve found one. Photo hosting tools don’t seem to have evolved much in recent times, which is unsurprising given social media does the job of photo sharing for most people. There are a few tools, for sure, but there’s something fun about hosting my own photos so I can optimise, meddle and learn and still have the joy of sharing (and indeed sharing without ads).

So it’s time for me to say goodbye to Flickr. It’s been a good technical platform and seems to have a real community, but I didn’t like Flickr’s injection of ads into my galleries, and the platform feels irreversibly stagnant - I was excited for a while when Yahoo gave it some focus a few years back, but there’s nothing to indicate anyone’s committed to developing it.

So I’ve moved my photos across to a self-hosted instance of Sigal. Performing the move was a delightful task as I relived some of the memories that I’ve previously published, and then discovered albums that I could have published, but never did. I realised my reservations about Flickr were a subconscious disincentive for publishing.

So, with this renewed vigour, I’ve published photos from my 2016 trip to Cyprus along with the rest of my photos. I hope you enjoy them.

Yak shaving with Vagrant, Travis-CI and AWS

TL;DR: Don’t use the vagrant package from your distribution if you intend to build plugins.

I’ve just finished setting up a CI pipeline for a personal project. The project has an Ansible playbook that I want to exercise every time there’s a commit or a PR. While completing the task, I shaved a yak and narrowly avoided shaving a whole herd. I planned to use Vagrant in my Travis-CI pipeline to start an instance in AWS, run the playbook, look at the result and terminate the instance. Vagrant, Travis-CI and AWS are pretty common tools, so I was surprised at the wrangling involved before I ended up with a solution. I thought I’d document my findings to minimise the chance that others will have the same experience.
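
For context, the Vagrantfile I was aiming for is shaped roughly like the sketch below, combining the vagrant-aws provider with the Ansible provisioner. The AMI, region, key pair and playbook name are placeholders rather than my actual configuration.

# Rough sketch only - AMI, region, key pair and playbook name are placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"   # vagrant-aws doesn't use the box contents

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    aws.ami               = "ami-00000000"
    aws.region            = "ap-southeast-2"
    aws.instance_type     = "t2.micro"
    aws.keypair_name      = "ci-keypair"
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/ci-keypair.pem"
  end

  # Exercise the project's Ansible playbook on the AWS instance
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end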

Finding #1: Travis’ default build agent has an old Vagrant

The default build agent is based on Ubuntu 12.04 LTS Server, which ships with Vagrant 1.0 in its apt repo. The vagrant-aws plugin requires Vagrant 1.2. Fortunately, Travis has an Ubuntu 14.04 LTS Server beta which has a newer Vagrant in its apt repo.

Finding #2: The vagrant-aws plugin won’t build with a newer Vagrant because of missing libraries and tools

$ vagrant plugin install vagrant-aws
Installing the 'vagrant-aws' plugin. This can take a few minutes...
/usr/lib/ruby/1.9.1/rubygems/installer.rb:562:in `rescue in block in build_extensions': ERROR: Failed to build gem native extension. (Gem::Installer::ExtensionBuildError)

        /usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
    from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
    from extconf.rb:4:in `<main>'

Searching for the mkmf LoadError in the context of Debian and Ubuntu gives recommendations to install a few development packages, including ruby-dev (some say a specific version of ruby-dev). It seems that vagrant-aws is a Ruby gem with native extensions, and gems are built on-box (at least it seems so - I’m a Ruby rookie), so development headers and a build toolchain are needed. These aren’t installed on the build agent.

$ sudo apt-get install build-essential libxslt-dev libxml2-dev zlib1g-dev ruby-dev

Finding #3: The vagrant-aws plugin needs Ruby version 2.0+

$ vagrant plugin install vagrant-aws
Installing the 'vagrant-aws' plugin. This can take a few minutes...
/usr/lib/ruby/1.9.1/rubygems/installer.rb:388:in `ensure_required_ruby_version_met': json requires Ruby version ~> 2.0. (Gem::InstallError)
    from /usr/lib/ruby/1.9.1/rubygems/installer.rb:156:in `install'
    from /usr/lib/ruby/1.9.1/rubygems/dependency_installer.rb:297:in `block in install'
    from /usr/lib/ruby/1.9.1/rubygems/dependency_installer.rb:270:in `each'
    from /usr/lib/ruby/1.9.1/rubygems/dependency_installer.rb:270:in `each_with_index'

vagrant plugin install wants to use Ruby 1.9.1 to perform the gem build, but that version is too old. Fortunately the build agent has Ruby 2.3.

Finding #4: Ubuntu’s Vagrant can’t load plugins built with Ruby 2.3

$ gem install --verbose vagrant-aws
...
Successfully installed vagrant-aws-0.7.2
41 gems installed
$ vagrant plugin install /home/travis/.rvm/gems/ruby-2.3.1/cache/vagrant-aws-0.7.2.gem
/usr/lib/ruby/1.9.1/rubygems/format.rb:32:in `from_file_by_path': Cannot load gem at [/home/travis/.rvm/gems/ruby-2.3.1/cache/vagrant-aws-0.7.2.gem] in /home/travis/build/edwinsteele/biblebox-pi (Gem::Exception)
    from /usr/share/vagrant/plugins/commands/plugin/action/install_gem.rb:36:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/warden.rb:34:in `call'
    from /usr/share/vagrant/plugins/commands/plugin/action/bundler_check.rb:20:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/warden.rb:34:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/builder.rb:116:in `call'
...

Now I’m running out of ideas, and I’m considering less conventional means, like building on the Travis macOS environment where I can use Homebrew, or moving to another CI hosting provider entirely. Fortunately, I stumbled on the answer…

The Answer

From the Vagrant docs:

Beware of system package managers! Some operating system distributions include a vagrant package in their upstream package repos. Please do not install Vagrant in this manner. Typically these packages are missing dependencies or include very outdated versions of Vagrant. If you install via your system’s package manager, it is very likely that you will experience issues. Please use the official installers on the downloads page.

Yeah, I experienced issues. Once I followed the advice, everything went smoothly. Perhaps I should have looked at the official docs sooner!

$ wget -O /tmp/vagrant.deb https://releases.hashicorp.com/vagrant/1.8.7/vagrant_1.8.7_x86_64.deb
$ sudo dpkg -i /tmp/vagrant.deb
$ vagrant plugin install vagrant-aws
$

And now I have a CI pipeline running on AWS after each commit. Nice.
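
For anyone assembling the same pieces, the relevant parts of a .travis.yml along these lines look roughly like the sketch below. It’s simplified rather than a copy of my actual file; the AWS credentials belong in encrypted Travis environment variables, which I’ve omitted.

# Rough sketch - credential handling and any post-provision checks are omitted.
dist: trusty          # the Ubuntu 14.04 beta environment from finding #1
sudo: required
language: generic

before_install:
  # Official HashiCorp package, not the distribution's vagrant (see above)
  - wget -O /tmp/vagrant.deb https://releases.hashicorp.com/vagrant/1.8.7/vagrant_1.8.7_x86_64.deb
  - sudo dpkg -i /tmp/vagrant.deb
  - vagrant plugin install vagrant-aws

script:
  - vagrant up --provider=aws
  - vagrant destroy --force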

Site Security Improvements

I did some security work on this site recently. I was able to get some nice wins without a great investment of time, thanks in part to the excellent resources that are available. Here are the areas of work, the resources I used, and the outcomes:

Content Security Policy (CSP)

A CSP constrains the actions that a web page can take, or the actions that can be performed upon it; it lets one apply the principle of least privilege to a page and a site. A CSP allows one to specify constraints like “only load CSS from these sources”, “don’t allow this site to be embedded in frames” and “don’t allow inline JavaScript”. I developed a CSP after reading the HTML5rocks CSP tutorial and Scott Helme’s CSP intro. I validated my policy using Google’s CSP evaluator and Mozilla’s Observatory tool. In order to apply best practices, which include disabling inline JavaScript and CSS, I needed to make some simple changes to the site. I’ve been conscious to minimise JavaScript and CSS as I’ve developed this site, and it was great to see how that choice made applying best practices a simple task.
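
To illustrate the shape of such a policy, here’s a simplified nginx directive. It’s an example rather than my exact policy, and the sources would need adjusting for any real site.

# Simplified example policy - not this site's actual CSP
add_header Content-Security-Policy "default-src 'none'; img-src 'self'; style-src 'self'; script-src 'self'; frame-ancestors 'none'" always;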

Miscellaneous security headers

I implemented X-XSS-Protection, X-Content-Type-Options and X-Frame-Options. While the effect of these headers overlaps a little with CSP, providing them is still a good idea because of inconsistent CSP implementations and because they have benefits unrelated to CSP. I learnt about them from Scott Helme’s Response Headers page and Mozilla’s web security guidelines. I validated my setup with the SecurityHeaders validation tool and Mozilla’s Observatory tool.
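
In nginx, these are just a few more add_header lines; the values below are the commonly recommended ones rather than anything specific to this site.

# Commonly recommended values for the miscellaneous security headers
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";
add_header X-XSS-Protection "1; mode=block";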

SSL

I already had a reasonable SSL setup, but while looking at the Mozilla web security guidelines I realised I hadn’t considered how my list of cipher choices would need regular updating (I’d last reviewed them two years ago!). Mozilla are good enough to provide nginx config snippets to help with good cipher selection, and their snippet includes an HTTP Strict Transport Security (HSTS) directive. I’d considered HSTS before, but found the SSL certificate renewal process complex enough that I worried I might accidentally take my site offline around renewal time. Having recently switched my site certificates over to the (awesome) Let’s Encrypt renewal process, I felt comfortable activating HSTS at the same time. I validated my setup with the Qualys SSL Report.
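
The HSTS directive itself is a single nginx line. The six-month max-age below is an illustrative value, and it’s worth choosing deliberately, since browsers will refuse plain HTTP for the whole period.

# Illustrative six-month max-age - pick your own value deliberately
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains";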

Outcome

It took about 4 hours to make the changes, and after they were applied, this site [1] moved from an A to an A+ on the Qualys SSL Report. The Mozilla Observatory tool gives the site an A+, and the SecurityHeaders.io validator gives it an A. My nginx config is available on GitHub.


  1. Actually, I use Cloudflare as a CDN, so I ran the tests against my origin server.