VirtualBox + Secure Boot + Ubuntu = fail

Here are the steps I took to get VirtualBox working properly in Ubuntu with UEFI Secure Boot fully enabled*. The problem is the requirement that all kernel modules be signed by a key trusted by the UEFI system; otherwise loading will fail. Ubuntu does not sign the third party vbox* kernel modules, but rather gives the user the option to disable Secure Boot upon installation of the virtualbox package. I could do that, but then I would see an annoying “Booting in insecure mode” message every time the machine starts, and my dual boot Windows 10 installation would also stop working.

*Ubuntu 16.04 on a Dell Latitude E7440 with BIOS A18, and with a dual boot Windows 10 installation.

Credit goes to the primary source of information I used to resolve this problem, which applies specifically to Fedora/Redhat:
http://gorka.eguileor.com/vbox-vmware-in-secureboot-linux-2016-update/

And a relevant Ask Ubuntu question:
http://askubuntu.com/questions/760671/could-not-load-vboxdrv-after-upgrade-to-ubuntu-16-04-and-i-want-to-keep-secur

Steps to make it work, specifically for Ubuntu/Debian

  1. Install the virtualbox package. If the installation detects that Secure Boot is enabled, you will be presented with the issue at hand and given the option to disable Secure Boot. Choose “No”.
  2. Create a personal public/private RSA key pair which will be used to sign kernel modules. I chose to use the root account and the directory /root/module-signing/ to store all things related to signing kernel modules.
    $ sudo -i
    # mkdir /root/module-signing
    # cd /root/module-signing
    # openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -nodes -days 36500 -subj "/CN=YOUR_NAME/"
    [...]
    # chmod 600 MOK.priv
  3. Use the MOK (“Machine Owner Key”) utility to import the public key so that it can be trusted by the system. This is a two-step process: the key is first imported, and then it must be enrolled the next time the machine boots. A simple password is good enough, as it is only for temporary use.
    # mokutil --import /root/module-signing/MOK.der
    input password:
    input password again:
  4. Reboot the machine. When the bootloader starts, the MOK manager EFI utility should start automatically. It will ask for parts of the password supplied in step 3. Choose “Enroll MOK”, and you should see the key imported in step 3. Complete the enrollment steps, then continue with the boot. The Linux kernel logs the keys that are loaded, and you should be able to see your own key with the command: dmesg | grep 'EFI: Loaded cert'
  5. Using a signing utility shipped with the kernel build files, sign all the VirtualBox modules using the private MOK key generated in step 2. I put this in a small script /root/module-signing/sign-vbox-modules, so it can easily be re-run when new kernels are installed as part of regular updates:
    #!/bin/bash
    
    for modfile in $(dirname $(modinfo -n vboxdrv))/*.ko; do
      echo "Signing $modfile"
      /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 \
                                    /root/module-signing/MOK.priv \
                                    /root/module-signing/MOK.der "$modfile"
    done
    
    # chmod 700 /root/module-signing/sign-vbox-modules
  6. Run the script from step 5 as root. You will need to run the signing script every time a new kernel update is installed, since this will cause a rebuild of the third party VirtualBox modules. Use the script only after the new kernel has been booted, since it relies on modinfo -n and uname -r to tell which kernel version to sign for.
  7. Load the vboxdrv module and fire up VirtualBox:
    # modprobe vboxdrv
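Note that the script in step 5 only works once the new kernel is already running, because it derives all paths from modinfo -n and uname -r. A variant that takes the paths as parameters is not tied to the running kernel; this is just a sketch, with the example invocation mirroring the paths used above:

```shell
#!/bin/bash
# sign_modules: sign every kernel module (*.ko) in a directory, using the
# kernel's sign-file helper and the MOK key pair generated in step 2.
# All paths are parameters, so it is not tied to the running kernel.
sign_modules() {
  local moddir="$1" signfile="$2" key="$3" cert="$4" modfile
  for modfile in "$moddir"/*.ko; do
    echo "Signing $modfile"
    "$signfile" sha256 "$key" "$cert" "$modfile"
  done
}

# For the running kernel this reduces to the step 5 script:
#   sign_modules "$(dirname "$(modinfo -n vboxdrv)")" \
#       "/usr/src/linux-headers-$(uname -r)/scripts/sign-file" \
#       /root/module-signing/MOK.priv /root/module-signing/MOK.der
```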

The procedure can also be used to sign other third party kernel modules, like the nvidia graphics drivers, if required. (I have not tested that myself.)

Gracefully killing a Java process managed by systemd

The basic way for systemd to stop a running service is by sending the SIGTERM signal (a regular kill) to the process control group of the service, followed after a configurable timeout by a SIGKILL to make things really go away if the service is not responding. Of course, all the details are configurable, and often you’d have some sort of control script doing the actual stopping, but I am only talking about the simple case here.

I have a Java-based service and a unit configuration file to integrate it with systemd. The JVM process is set up to shut the service down gracefully upon reception of the SIGTERM signal (using shutdown hooks). However, the JVM will still exit with code 143 (128 + 15) in the end, because it was terminated by SIGTERM, and systemd thinks something went wrong when stopping the service.

The solution is simple: add SuccessExitStatus=143 to the unit configuration file for the service, and systemd will consider this exit code a successful termination. If the graceful shutdown can take some time, add TimeoutStopSec=NN as well to give the service process more time before systemd brings out the big gun.
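Put together, a minimal unit file sketch; the service name and paths are invented for illustration, and the two stop-related directives are the point:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=My Java-based service

[Service]
ExecStart=/usr/bin/java -jar /opt/myapp/myapp.jar
# The JVM exits with 143 (128 + SIGTERM) after running its shutdown hooks;
# treat that as a clean stop.
SuccessExitStatus=143
# Give the shutdown hooks up to 30 seconds before systemd sends SIGKILL.
TimeoutStopSec=30

[Install]
WantedBy=multi-user.target
```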

Relevant manual pages: systemd.kill and systemd.service.
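The 128 + 15 arithmetic behind exit code 143 is easy to reproduce in any POSIX shell:

```shell
# A parent shell reports 128 + N when a child is killed by signal N;
# SIGTERM is signal 15, hence exit status 143.
sh -c 'kill -TERM $$'
echo "exit status: $?"   # prints: exit status: 143
```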

Securing web site with https and Let’s Encrypt

I have added support for secure communication using https/TLS to my personal web site, which runs on Linux home servers using Apache. After Let’s Encrypt appeared and started offering free security certificates, there was really no reason not to give it a spin. And there are many good reasons to support secure http communication, even for sites not dealing with sensitive or secret information:

  • Data integrity. Using https is an effective way to prevent tampering with the data sent from the server to the client. It is not unheard of for ISPs or other third parties to modify http traffic, for instance to inject ads into pages, and https is an effective countermeasure.
  • Initiatives like Mozilla’s “deprecation of non-secure HTTP”, which aims to move the web to secure communication by default.
  • Giving visitors the choice to keep their web browsing private across the internet.
  • Allowing secure site administration.

Skip directly to a section in this post:

  1. Acquiring a Let’s Encrypt certificate
  2. Configuring Apache on Ubuntu 14.04 to use Let’s Encrypt certificate
  3. WordPress and https
  4. Validating https protocol support
  5. Automatic certificate renewal
  6. Donations to Let’s Encrypt and the way forward

Acquiring a Let’s Encrypt certificate

Using this recipe as a guide, I decided to use the certbot tool, which automates the process of verifying ownership of the domain and downloading a new certificate from Let’s Encrypt. It also automates the certificate renewal process, which is necessary with Let’s Encrypt due to its rather short expiry periods.

As preparation, basic SSL support in Apache should be enabled. This is accomplished on a standard Debian/Ubuntu Apache by doing as super user:

cd /etc/apache2/mods-enabled/
ln -s ../mods-available/ssl.* ../mods-available/socache_shmcb.load .

/usr/sbin/apachectl graceful

This links up the required modules and default configuration, then reloads Apache config. Since certbot requires that your Apache server is reachable from the internet on port 443, make sure any required port forwardings and/or firewall rules are adjusted in your router.

Note that there is a default virtualhost configuration for SSL in the file /etc/apache2/sites-available/default-ssl.conf. However, there is no need to enable this file, as all the required SSL options are added to the name based virtualhost for the site. If you decide to link default-ssl.conf into /etc/apache2/sites-enabled/, you should edit it to include /etc/letsencrypt/options-ssl-apache.conf, to disable vulnerable SSLv2/v3 protocols.

On to the certbot process, download and run as super user:

curl -sS https://dl.eff.org/certbot-auto -o /usr/local/bin/certbot-auto
chmod +x /usr/local/bin/certbot-auto

/usr/local/bin/certbot-auto --apache certonly

I opted for the conservative approach of not allowing certbot to do any permanent Apache configuration changes and only acquire a certificate, by using the certonly option. This will do the verification process (using your running Apache server) and save certificates and account details under /etc/letsencrypt/. Note that certbot may ask to install additional packages on the system the first time you run it.

Certbot successfully detected the name based virtual host definitions in the Apache configuration (including aliases) and allowed me to choose which domain names to acquire a certificate for. After selecting the domains, the validation process starts and, if successful, a certificate for the domains is downloaded and made available on the system. This was an impressively simple process.

Configuring Apache on Ubuntu 14.04 to use Let’s Encrypt certificate

Assuming basic SSL support is enabled in Apache, the server runs with default options which come from /etc/apache2/mods-available/ssl.conf and [optionally] /etc/apache2/sites-available/default-ssl.conf. To SSL-enable an existing name based virtual host defined for port 80, I simply copied the definition into a separate virtual host for port 443. Then I added the following configuration to the new virtual host definition:

<VirtualHost *:443>
  ServerAdmin webmaster@stegard.net
  ServerName  stegard.net
  ServerAlias www.stegard.net

  # Letsencrypt SSL options
  Include /etc/letsencrypt/options-ssl-apache.conf
  # virtual host cert
  SSLCertificateFile      /etc/letsencrypt/live/www.stegard.net/cert.pem
  SSLCertificateChainFile /etc/letsencrypt/live/www.stegard.net/chain.pem
  SSLCertificateKeyFile   /etc/letsencrypt/live/www.stegard.net/privkey.pem

[...]

Note the inclusion of the Let’s Encrypt recommended SSL options (have a look at the contents first), followed by the certificate files. This applies to Apache versions < 2.4.8. If you are running Apache from -backports on Trusty, or on a newer Ubuntu, then you’ll have version >= 2.4.10, which needs a slightly different certificate setup:

# Letsencrypt SSL options
Include /etc/letsencrypt/options-ssl-apache.conf
# virtual host cert
SSLCertificateFile      /etc/letsencrypt/live/www.stegard.net/fullchain.pem
SSLCertificateKeyFile   /etc/letsencrypt/live/www.stegard.net/privkey.pem

(The SSLCertificateChainFile directive is deprecated starting with Apache 2.4.8.)

This was the only setup required to get functional SSL for my site.

WordPress and https

My site runs WordPress where the primary site URL uses plain http. This is how I wanted SSL to work:

  1. Both https and http should work across the site.
  2. If site is accessed on https, then all [internal] links and included resources should use https as well. (Enter on https, stay on https.)
  3. Site administration is only available on https.

Number 1 was easy – this works out of the box if Apache is configured correctly. But I got mixed content warnings for various resources included in pages when accessing over https. When evaluating how to fix a certain annoyance in WordPress, I tend to look first for the simplest solutions which do not require extra plugins. So I fixed images and other things included in content to use relative URLs. But there were still theme specific resources included with plain http URLs, likely because the site URL uses plain http.

Not wanting to spend much time diagnosing such specific cases, I ended up installing the SSL Insecure Content Fixer plugin. I run this plugin in “simple” mode, which I assume does not have to post-process all content being served (something I really wanted to avoid). That was enough to get consistent https links when accessing content over https, so number 2 is considered solved.

Since WordPress has a built in config option to force admin over SSL, adding the following to wp-config.php resolved number 3 on my list of requirements:

/** Admin over SSL */
define('FORCE_SSL_ADMIN', true);

Validating https protocol support

Qualys SSL Labs has what looks like a rather thorough online validator for SSL protocol support. I ran my site through it and got an A rating, with no glaring deficiencies. To improve the rating further, I could have used a stronger RSA key than the default 2048 bits, but that is certainly good enough for now.

Automatic certificate renewal

Obviously I like to keep site maintenance cost as low as possible, and automating the renewal process of certificates is a must. Certbot makes this very easy as well. I simply created a script in /etc/cron.daily/certbot-renew with the following content:

#!/bin/sh
exec /usr/local/bin/certbot-auto renew --no-self-upgrade --quiet

Make sure the script is executable (chmod +x /etc/cron.daily/certbot-renew), and automatic renewal should be all set.

Donations to Let’s Encrypt and the way forward

My first experience with Let’s Encrypt has been great. I’ve spent much longer writing this blog post than the time it took to get the site up and running properly with https support. Good things deserve appreciation, especially when they are given away for free. I made a small donation to Let’s Encrypt for their excellent service.

I will switch my site to use SSL by default, given some time. The cost of providing secure http by default has been reduced to something so negligible that I have few reasons not to.

New version of ds

I have made some incremental improvements to my trusty synchronization tool. It now supports ssh master connection sharing, which should give a nice speed-up in many cases. In addition, there are cleanups in the auto completion code and more flexible command line parsing. There is a release on GitHub.
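The ssh master connection sharing mentioned above is the stock OpenSSH ControlMaster feature, where repeated connections to the same host reuse one TCP/ssh session. As a sketch of the underlying mechanism (regardless of how ds itself enables it, which I won’t detail here), the client-side equivalent in ~/.ssh/config looks roughly like this, with illustrative values:

```
Host *
    # Reuse a single ssh session for all connections to the same host
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    # Keep the master connection alive a while after the last session closes
    ControlPersist 10m
```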

I do certain things the old fashioned way – synchronizing application config and data across different hosts is one of those areas. I don’t use Dropbox or other internet services for file sync purposes, except for the builtin Google stuff on my Android phone (which is mostly app config backup). I like to keep it peer-to-peer, simple and manual. And I like to keep my data private. Ds is just the right tool to make syncing that much easier, which I guess is the reason I wrote it.

Cloud services are not something I need to manage my data. I have my own “cloud services”: a synchronization scheme which works well enough, automated backups with regular encrypted offsite storage (a future blog post), personal web and file services, and a high speed fibre optic internet connection – all at home. I will admit to a certain level of NIH syndrome here, but my own solutions work well enough that I won’t bother looking into something else yet.

Distrowatching with Bittorrent

Updated ranking as of 25th of April:

  1. ubuntu-14.04.2-desktop-amd64.iso, ratio: 139
  2. ubuntu-14.04.2-desktop-i386.iso, ratio: 138
  3. ubuntu-14.10-desktop-i386.iso, ratio: 93.9
  4. ubuntu-14.10-desktop-amd64.iso, ratio: 87.0
  5. linuxmint-17.1-cinnamon-64bit.iso, ratio: 81.5
  6. debian-7.8.0-amd64-DVD-1.iso, ratio: 31.5
  7. Fedora-Live-Workstation-x86_64-21, ratio: 25.6
  8. debian-7.8.0-amd64-DVD-2.iso, ratio: 16.0
  9. debian-7.8.0-amd64-DVD-3.iso, ratio: 15.7
  10. Fedora-Live-Workstation-i686-21, ratio: 13.3
  11. debian-update-7.8.0-amd64-DVD-1.iso, ratio: 10.5
  12. debian-update-7.8.0-amd64-DVD-2.iso, ratio: 8.90

Total running time: 21 days, total uploaded: 1.04 terabytes.
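For scale, those totals work out to a fairly modest average upload rate (assuming decimal units, i.e. 1 TB = 10^6 MB):

```shell
# Average upload rate: 1.04 TB uploaded over 21 days, expressed in MB/s.
awk 'BEGIN { printf "%.2f MB/s\n", 1.04e6 / (21 * 24 * 3600) }'
# -> 0.57 MB/s
```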

Originally posted:

I have plenty of spare bandwidth at home, so I’ve been seeding a small selection of popular Linux ISOs via Bittorrent continuously for about 12 days now. The upload cap was set to 1024 KB/s, divided equally amongst all torrents (only about an eighth of my total uplink capacity). Here are the results of the popularity contest as of now:

  1. ubuntu-14.04.2-desktop-amd64.iso, ratio: 93.3
  2. ubuntu-14.04.2-desktop-i386.iso, ratio: 83.5
  3. ubuntu-14.10-desktop-i386.iso, ratio: 57.1
  4. ubuntu-14.10-desktop-amd64.iso, ratio: 53.0
  5. linuxmint-17.1-cinnamon-64bit.iso, ratio: 46.1
  6. debian-7.8.0-amd64-DVD-1.iso, ratio: 18.3
  7. Fedora-Live-Workstation-x86_64-21, ratio: 15.6
  8. debian-7.8.0-amd64-DVD-3.iso, ratio: 10.1
  9. debian-7.8.0-amd64-DVD-2.iso, ratio: 9.48
  10. Fedora-Live-Workstation-i686-21, ratio: 7.82
  11. debian-update-7.8.0-amd64-DVD-2.iso, ratio: 6.13
  12. debian-update-7.8.0-amd64-DVD-1.iso, ratio: 6.06

A total of 636.0 GB has been uploaded. These are Transmission stats obtained at the time of writing. Though not statistically significant by any means, it is still interesting to note that Ubuntu seems more popular than Linux Mint on Bittorrent (contrary to what distrowatch.org has to say about it). Also, the LTS version of Ubuntu is more popular than the current 14.10 stable release. (I should add that the ratios of the Debian DVD ISOs cannot be directly compared, since those images are significantly larger. And the Linux Mint MATE edition is not present at all.)

The list happens to be in accordance with my recommendation to anyone wanting to try Linux, specifically Ubuntu, for the first time: go for the LTS version. (Recent Linux Mint releases are also based on Ubuntu LTS.) Many years of experience have taught me that the interim releases have more bugs and annoyances, and less polish. Sure, you learn a lot by fixing problems, but it’s perhaps not the best first-time experience.