PXE Boot RetroPie

SD cards do fail … and often at inconvenient times. Rather than face the loss of saved games, I recently configured my RetroPie to boot off the network using PXE, TFTP, and an NFS root volume.

To start, read the existing documentation on this topic at https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/net_tutorial.md. Based on that, I did the “client” work of configuring the Pi for USB boot. As I already had DHCP (MikroTik), as well as TFTP and NFS (FreeNAS), I only needed the file system from my Pi, so I rsync’d that over ssh to the location I wanted to use on the FreeNAS host. Since I am moving the existing root filesystem from the SD card to NFS (not building a fresh one), I did not remove the ssh host config. After copying the files, I edited /etc/fstab on the network copy, removing the /dev/mmcblk0p1 and /dev/mmcblk0p2 lines (only proc should be left) as covered in the tutorial. Finally, before shutting down the Pi and removing the SD card, grab the serial number from /proc/cpuinfo because we will use it when configuring the TFTP server.
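
For reference, the copy and the serial number grab look roughly like this when run on the Pi (a sketch; the destination user, host, and dataset path are examples matching this setup, so adjust to yours):

# copy the running root filesystem over ssh to the NFS dataset
sudo rsync -axAX --numeric-ids --progress / user@freenas:/mnt/storage/user/retropie/root/

# grab the serial number; its last 8 hex digits name the TFTP directory
grep Serial /proc/cpuinfo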

Next, I configured NFS within Services on FreeNAS:

  • Number of servers=4
  • Serve UDP NFS clients=checked
  • Bind IP Address=checked the one listed
  • Allow non-root mount=unchecked
  • Enable NFSv4=unchecked
  • NFSv3 ownership model for NFSv4=unchecked
  • Require Kerberos for NFSv4=unchecked
  • mountd bind port=(blank), rpc.statd bind port=(blank), rpc.lockd bind port=(blank)
  • Support >16 groups=unchecked
  • Log mountd requests=unchecked
  • Log rpc.statd and rpc.lockd=unchecked

Then in Services, click Start Now to start NFS, and select Start on boot.

For the NFS share, within Sharing on FreeNAS, I pointed to the path I wanted to use (/mnt/storage/user/retropie/root) and configured Mapall User=root and Mapall Group=wheel, taking the defaults for the rest (Delete=unchecked, Authorized IPs/networks=(blank), All directories=unchecked, Read only=unchecked, Quiet=unchecked, Maproot User=(blank), Maproot Group=(blank)).
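
From another machine on the network, you can confirm the export is visible before moving on (I am assuming the FreeNAS host is the 192.168.88.217 referenced later in this post; substitute yours):

showmount -e 192.168.88.217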

Now, I’d like to say that all I had to do was copy some files over to the TFTP server and it worked; however, that is not what happened. After a bit of tcpdump and other troubleshooting, I found that the Pi insisted the TFTP server address be the same as the DHCP server address. This was true for both the MikroTik router and an Ubiquiti EdgeRouter 6P. So, to get this working, I put the Pi’s TFTP files onto the router and used TFTP on the router (instead of the FreeNAS TFTP I already had set up).
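
If you want to watch the conversation yourself, a capture along these lines from a host that can see the traffic (the interface name is an example) shows the DHCP and TFTP exchanges:

sudo tcpdump -ni eth0 port 67 or port 68 or port 69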

I will cover the MikroTik config first, as that is what I originally got this working on, and the EdgeRouter next. Within MikroTik, I set up the special DHCP option for the Pi. In DHCP Server, I added a new Option (name=piboot, code=43, value=’Raspberry Pi Boot   ‘). Note there are three extra spaces at the end of the value, based on reading https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/net.md. Then, I edited the DHCP Network to add my piboot DHCP Option. Note, on my MikroTik I have the Boot File Name set to pxelinux.0 and Next Server as 192.168.88.217; those can stay in place (the Pi does not use them).
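
For those who prefer the terminal, the rough RouterOS CLI equivalent is something like this (a sketch; [find] selects every DHCP network, so narrow it to the one your Pi is on):

/ip dhcp-server option add name=piboot code=43 value="'Raspberry Pi Boot   '"
/ip dhcp-server network set [find] dhcp-option=piboot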

Then, from the Pi’s /boot directory I added these files to the MikroTik under TFTP (all are allow=checked and read only=checked):

Request Filename                 Real Filename
--------------------------------------------------------------
bootcode.bin                     bootcode.bin
bf7072fe/bcm2710-rpi-3-b.dtb     bcm2710-rpi-3-b.dtb
bf7072fe/cmdline.txt             cmdline.txt
bf7072fe/config.txt              config.txt
bf7072fe/fixup.dat               fixup.dat
bf7072fe/kernel.img              kernel.img
bf7072fe/kernel7.img             kernel7.img
bf7072fe/start.elf               start.elf

A couple notes on the above:

  • bf7072fe is the serial number of my Pi (yours will be different; check /proc/cpuinfo)
  • To figure out which files were needed, I used Tools -> Packet Sniffer on the MikroTik with a filter on ports 67, 68, 69 (click Apply, then Start, then Packets)
  • It took several iterations of: start the packet sniffer, reboot the Pi, watch the Pi fail to boot, stop the sniffer, view Packets, tweak, try again
  • There were a couple of files the Pi requested (per the packet sniffing) that did not exist in the /boot directory to begin with; they are not on the TFTP server and it boots just fine without them

To configure the EdgeRouter, I started with DHCP option 43. There are a couple of ways you can do this, from the CLI to the GUI, and I did both. I will cover the GUI here since it may be easier for some folks. In EdgeOS, go to the Config Tree and drill down to service, dhcp-server. We will add a global-parameter (click Add):

option option-43 code 43 = string;

Next, drill down into shared-network-name, the LAN the Pi is on, subnet, the subnet address, and add a subnet-parameter:

option option-43 "Raspberry Pi Boot   ";

Click Preview then Save. If you want another reference, there is a support article on Defining Custom DHCP Options.
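
Since I mentioned the CLI route, here is a rough sketch of it (the shared-network name and subnet must match yours, per the rendered config below; &quot; is how EdgeOS escapes embedded quotes):

configure
set service dhcp-server global-parameters "option option-43 code 43 = string;"
set service dhcp-server shared-network-name LAN_1 subnet 192.168.1.0/24 subnet-parameters "option option-43 &quot;Raspberry Pi Boot   &quot;;"
commit
save
exit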

You can confirm your changes (and how the escaped quotes render, if you wish) by looking at /opt/vyatta/etc/dhcpd.conf (on the CLI: cat /opt/vyatta/etc/dhcpd.conf):

# ... only showing the relevant lines here ...

# The following 1 lines were added as global-parameters in the CLI and
# have not been validated
option option-43 code 43 = string;

shared-network LAN_1 {
  not authoritative;
  subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-name-servers 192.168.88.81, 192.168.88.90;
# The following 1 lines were added as subnet-parameters in the CLI and have not been validated
    option option-43 "Raspberry Pi Boot   ";
    option routers 192.168.1.1;
    option bootfile-name "pxelinux.0";
    filename "pxelinux.0";
    next-server 192.168.88.217;
    default-lease-time 86400;
    max-lease-time 86400;
    range 192.168.1.100 192.168.1.254;
  }
}

You can see I still have the “normal” bootfile-name and bootfile-server configured on this subnet, because the Pi ignores them.

Next, I had to install TFTP on EdgeOS (because it is not one of the “built-in” services). There is a support article that covers Adding Debian Packages to EdgeOS:

configure
set system package repository wheezy components 'main contrib non-free'
set system package repository wheezy distribution wheezy
set system package repository wheezy url http://http.us.debian.org/debian
commit
save
exit
sudo apt-get update
sudo apt-get install tftpd-hpa

To then configure TFTP on EdgeOS, I setup the directory structure:

# run the cp commands from a copy of the Pi's /boot directory (bf7072fe is my Pi's serial)
sudo mkdir -p /config/user-data/tftp/bf7072fe
sudo cp bootcode.bin /config/user-data/tftp/bootcode.bin
sudo cp bcm2710-rpi-3-b.dtb /config/user-data/tftp/bf7072fe/bcm2710-rpi-3-b.dtb
sudo cp cmdline.txt /config/user-data/tftp/bf7072fe/cmdline.txt
sudo cp config.txt /config/user-data/tftp/bf7072fe/config.txt
sudo cp fixup.dat /config/user-data/tftp/bf7072fe/fixup.dat
sudo cp kernel.img /config/user-data/tftp/bf7072fe/kernel.img
sudo cp kernel7.img /config/user-data/tftp/bf7072fe/kernel7.img
sudo cp start.elf /config/user-data/tftp/bf7072fe/start.elf
sudo chown -R root:vyattacfg /config/user-data/tftp

… and update the configuration file /etc/default/tftpd-hpa:

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/config/user-data/tftp"
TFTP_ADDRESS="192.168.1.1:69"
TFTP_OPTIONS="--secure"

You will see I modified the default address from 0.0.0.0 to a specific internal-only address on the LAN where the Pi lives, because I did not want the TFTP server reachable from the Internet. Finally, a quick restart was all that was needed to get TFTP running (sudo /etc/init.d/tftpd-hpa stop ; sudo /etc/init.d/tftpd-hpa start).
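
To sanity-check the TFTP server, you can fetch a file from another machine on that LAN (assuming the tftp-hpa client is installed there):

tftp 192.168.1.1 -c get bootcode.bin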

Once you have all the files the Pi needs on the TFTP server, and the NFS volume working, it should network boot as expected.

Running BIND in a FreeNAS Jail

These notes are with respect to FreeNAS 11 and BIND 9.

The first thing you will need is storage for jails. If this is already set up, you can reuse it. To create it, go to Storage and select the location you want to use, then click Create Dataset. You can name it as you wish … though jails is a handy one. All the other options are defaults (sync=inherit/standard, compression=inherit/lz4, share type=unix, enable atime=inherit/on, ZFS deduplication=inherit/off, quotas=0/unlimited, reserved space=0/none, read-only=inherit/off, record size=inherit).

Next, we have to add the jail. Within Jails, click Add Jail. Again, any name will do … I chose bind. All the other options are defaults, though check that the IP address and netmask are what you wish to use (type=standard, IPv4 DHCP=unchecked, IP aliases=(blank), IP bridge config=(blank), sysctls=allow.raw_sockets=true, autostart=checked, vimage=checked, NAT=unchecked). If not started, start the jail.

To install BIND, we want to get to a shell on the FreeNAS system (ssh works too).

# determine jail ID for the newly created jail
$ jls

# shell into the jail (substitute the jail ID from the jls output)
$ jexec <jid> /bin/sh

# install dependencies
$ pkg install texinfo libedit

# install bind (for this, just took defaults when prompted)
# if you do not have /usr/ports, run portsnap fetch && portsnap extract
$ cd /usr/ports/dns/bind911
$ make install clean

To configure BIND, we edit /usr/local/etc/namedb/named.conf.

It is a good idea to create an acl for the internal network:

acl goodclients {
        192.168.0.0/16;
        localhost;
        localnets;
};

Within the “options { … }” block, there are a couple of things to configure:

        recursion yes;
        allow-query { goodclients; };
        allow-recursion { goodclients; };
        allow-transfer { none; };

        listen-on { 192.168.88.81; };
        listen-on-v6 { none; };

Note: The listen configurations control which address(es) BIND listens on (these should match the jail’s IP config).
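
Putting the pieces together, the relevant portion of named.conf ends up looking something like this (the directory line reflects the FreeBSD ports layout on my install; verify against the named.conf the port gave you):

acl goodclients {
        192.168.0.0/16;
        localhost;
        localnets;
};

options {
        directory       "/usr/local/etc/namedb/working";
        recursion yes;
        allow-query { goodclients; };
        allow-recursion { goodclients; };
        allow-transfer { none; };

        listen-on { 192.168.88.81; };
        listen-on-v6 { none; };
};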

Now, we can validate the config and start the service:

$ named-checkconf
# until named_enable is set in rc.conf (next step), use the "one" variants
$ service named onestart
$ service named onestatus

To have this run on startup in our jail, edit /etc/rc.conf (in the jail):

named_enable="YES"
named_program="/usr/local/sbin/named"

By default, resolvconf will try to update /etc/resolv.conf inside the jail, which may not be what you want. You can configure a static config instead. Edit /etc/resolv.conf:

# resolvconf is disabled, see /etc/resolvconf.conf
nameserver 208.67.222.222
nameserver 208.67.220.220
nameserver 8.8.8.8
nameserver 8.8.4.4

Edit /etc/resolvconf.conf:

# disable resolvconf from running any subscribers:
resolvconf="NO"

That should do it. You can restart your jail and ensure /etc/resolv.conf matches expectations.

The address in the listen-on config within named.conf is what you will want to point the clients on your network at. For example, this internal network has two BIND DNS servers (running on different FreeNAS machines), and so the DHCP configuration for this network sends down two DNS servers: 192.168.88.81 and 192.168.88.90. Specifying two makes the clients resilient to the failure of any single BIND instance.
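
A quick way to verify resolution against each server from any client:

dig @192.168.88.81 www.example.com
dig @192.168.88.90 www.example.com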

After setting this up, I ran DNS Benchmark. This tool is great for testing DNS configuration (from DHCP settings to DNS server performance, of your local servers as well as public ones). Its results confirmed the local network’s DNS configuration was optimal.

OpenStack Heat and Load Balancer as a Service

Continuing the OpenStack journey with Icehouse, I got Load Balancer as a Service (LBaaS) running (using haproxy) and also explored Heat, the orchestration component.

All the details are over on GitHub, including a Heat template which boots two Nova compute web servers, provisions a load balancer in front of them, and obtains and associates a floating IP with the load balancer VIP.

https://github.com/BrianBrophy/OpenStack-Installation/blob/master/README.md#working-with-orchestration-heat

Bootstrap OpenStack Icehouse On VirtualBox

I have been kicking the tires on OpenStack again. While the DevStack setup is probably most helpful for unit testing, I find it limiting when it comes to playing with OpenStack.

So, I worked through setting up a classic three-node architecture (control, network, and compute nodes) on a VirtualBox host, using Ubuntu 12.04 as my OpenStack host OS, with VirtualBox running on Windows. Additionally, I fully scripted the configuration and installation of the OpenStack components on each node, so that once you have the base Ubuntu 12.04 server installed on your VMs, you can be up and running on OpenStack in minutes.

If you would like to see more, check out https://github.com/BrianBrophy/OpenStack-Installation.

Linux Backups To Amazon S3

Over the past couple of years, several cloud storage services have emerged and, as one may suspect, Amazon’s S3 is a popular choice due to its feature set and price. While I have had many backup schemes over the years, from tape drives to removable hard drives, they have all been only as “disaster-survivable” as the last time I not only performed the backup but also physically secured the backup media at an alternate location. With the accessibility of cloud storage, I wanted to push my backups out to the cloud so that they would immediately be at an alternate site on redundant and highly available storage.

Given that we’re talking about Linux, tried-and-true rsync was certainly capable of meeting my requirements … provided I could present a file system (or ssh destination) for rsync to replicate to. This is where s3fs (http://code.google.com/p/s3fs/) comes in, because it is a “Linux FUSE-based file system backed by Amazon S3”. I have used s3fs for a couple of years, and while it has gone through cycles of development and support, it recently “thawed” and underwent some nice improvements, to the extent that it now supports all the usual suspects: user/group ownership, permissions, last-modified timestamps, and representing data (notably folders/directories) in a manner compatible with other S3 clients.

Before we begin, if you do not yet have an Amazon Web Services account and an S3 bucket, head on over to https://aws.amazon.com and sign up. You may be wondering about costs … as of this writing (July 2013), I am backing up about 30 GB of data to S3 (standard tier, multiple regions, not reduced redundancy) and it runs less than $5.00/month for storage and moderate data transfer in/out, plus a couple of cents (yes, literally $.03 or so) each time I run rsync to compare several thousand files. AWS also has a detailed cost estimation tool at http://calculator.s3.amazonaws.com/calc5.html. Finally, if you are worried about an “insanely high” bill hitting your credit card, you can set up a billing alert to notify you when a certain threshold is crossed.

The first step is obtaining and installing s3fs. You can review the details on the product web site; however, it is traditional Linux: download the tarball, extract it, then run configure, make, and make install. I’ve tested it on various flavors of Ubuntu, most recently Ubuntu 12.04 LTS with s3fs 1.71.
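
For reference, the build boils down to something like this (the version number is the one I used; the dependency package names are what I would expect on Ubuntu 12.04):

# build dependencies
sudo apt-get install build-essential libfuse-dev libcurl4-openssl-dev libxml2-dev

# download the tarball from the project site, then:
tar xzf s3fs-1.71.tar.gz
cd s3fs-1.71
./configure
make
sudo make install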

The second step is configuring s3fs. Because my backup job will run as root, I have opted for the system-wide configuration file /etc/passwd-s3fs (ownership root:root, permission 600). If you choose to run as non-root, you can instead use the user-specific configuration file ~/.passwd-s3fs (secure its ownership and permissions accordingly). The format of this configuration file is really straightforward and the s3fs site has detailed examples (understandably, I am not going to provide mine as an example here).
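
For illustration only, the file is a single accessKeyId:secretAccessKey line (these are the well-known placeholder credentials from the AWS documentation, not real ones; a bucket-specific entry prefixes the bucket name and a colon):

# /etc/passwd-s3fs -- placeholder credentials
AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY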

As far as mounting the S3 file system goes, you can check whether it is presented and mount it if need be:

# Configuration
s3fsCmd='/usr/local/bin/s3fs'
s3Bucket='my_backup'
# no trailing slash on local mount path
localMount="/mnt/s3-${s3Bucket}"

#######################################################################
# Checks the S3 file system, mounting if need be
#######################################################################
function checkMount() {
  df -kh | grep "${localMount}" >/dev/null 2>/dev/null
  if [ ! $? -eq 0 ] ; then
    echo "Mounting ${localMount}"
    ${s3fsCmd} "${s3Bucket}" "${localMount}"
    df -kh | grep "${localMount}" >/dev/null 2>/dev/null
    if [ ! $? -eq 0 ] ; then
      echo "ERROR: Unable to mount S3 Bucket '${s3Bucket}' to file system '${localMount}' using '${s3fsCmd}'"
      exit 1
    fi
  fi
}


Once presented, rsync can be leveraged as one might anticipate (with some optimizations for S3, like using --inplace):

checkMount
rsync -ahvO --progress --inplace --delete "/data" "${localMount}/"


You can also consider retrying the rsync up to a maximum number of attempts: if a non-zero return code is encountered, unmount the volume (umount -fl "${localMount}"), sleep for a minute or two (to allow time for pending writes/syncs to occur), and run checkMount prior to attempting rsync again. A sketch of that retry loop follows.
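
This reuses checkMount from above; the attempt count and sleep duration are arbitrary:

maxAttempts=3
attempt=1
while [ ${attempt} -le ${maxAttempts} ]; do
  checkMount
  # stop retrying as soon as rsync succeeds
  rsync -ahvO --progress --inplace --delete "/data" "${localMount}/" && break
  echo "rsync attempt ${attempt} failed; unmounting and retrying"
  umount -fl "${localMount}"
  sleep 120
  attempt=$((attempt + 1))
done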

Including Perl Dependencies Within Your Application

Perl can be great at many things, but when it comes to doing something “real” (parsing XML, compression, logging with log4perl, handling Unix signals) you ultimately want to leverage CPAN modules … and generally, these are not already installed out of the box in Linux distributions like Red Hat.

You may consider including, in some installation How-To notes, instructions for downloading modules from CPAN and installing them … but then you would have to worry about version conflicts, on top of the hassle of possibly supporting people who simply did not know how to do that. You are probably already delivering your application in a distribution package like RPM to simplify installation … so why complicate it again?

There is one way to navigate this, which boils down to shipping your dependencies inside your package, within an application-specific library path. This is actually quite straightforward:

  1. Download the CPAN modules you need and install them into something like /my/app/lib/
  2. Expand the include path within your code with:
    push(@INC, '/my/app/lib');

One caveat worth knowing: @INC is consulted at runtime by require, but use statements are processed at compile time, so if you load modules via use, do the push inside a BEGIN block (or simply use lib '/my/app/lib';).

Now, what happens when you want to include binary library objects like *.so files? These, having been compiled for a particular system, are architecture-specific and usually Perl-version-specific. We can extend the basic solution to support this as well by including architecture- and version-specific library paths. For example, we might have:

  • /my/app/lib/perl58-linux-i386
  • /my/app/lib/perl58-linux-x86_64
  • /my/app/lib/perl510-linux-i386
  • /my/app/lib/perl510-linux-x86_64

Downloading and installing the modules from CPAN is still the required first step; it just needs to be performed for each combination of Perl version and architecture we intend to support. So, you may have to do this on several OS and architecture combinations (and save off the resulting artifacts within source control so you can incorporate them into your build). Then, the code needs to be enhanced as well:

# Determine runtime version
# $] is a raw version string where examples are:
#     5.008008 for 5.8.8
#     5.010001 for 5.10.1
# So, major version is before the "."
#   then minor is the next 3
#   then patch the last 3
my $perlVersion = "$]";
if ($perlVersion =~ /^([\d]+)\.([\d]{3})([\d]{3})$/) {
  # quick hack of adding zero to the string to "convert" something
  #   like 008 to 8 and 010 to 10
  my $majorVersion = $1 + 0;
  my $minorVersion = $2 + 0;
  my $patchVersion = $3 + 0;
  # for our version and architecture specific lib path, we will
  #   use major and minor (example: 58 or 510)
  $perlVersion = "$majorVersion" . "$minorVersion"
} else {
  # Unexpected version format
  die "FATAL ERROR: Unknown Perl version found ('$perlVersion')\n";
}

# Determine if 32-bit or 64-bit
my $is64 = 'false';
foreach my $incPath (@INC) {
  if ($incPath =~ /64/) {
    $is64 = 'true';
    last;
  }
}

# Runtime-specific path
my $runtimeLibPath = "/my/app/lib/perl" . $perlVersion . "-linux-";
if ($is64 eq 'true') {
  $runtimeLibPath .= "x86_64";
} else {
  $runtimeLibPath .= "i386";
}

# Add it to our include path
push(@INC, $runtimeLibPath);

Now that you see the pattern, you can expand this to other platforms beyond Linux if you wish.

Ubuntu Kernel Clean-Up

Ubuntu is great at “set it and forget it” maintenance. Patches come down, and generally life is good.

However, the frequency of kernel updates can fill up a modest /boot partition.

Recently, faced with an inability to install an updated kernel due to a full /boot partition, I had to do some housekeeping. I put together two quick scripts to help with this.
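
First, a quick look at how much room is left (my /boot is a small dedicated partition):

df -h /boot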

showInactiveKernels
Of the kernels installed on the system, which are not the one currently running?

#!/usr/bin/env bash

echo "Current Kernel: $(uname -r)"
old=$(dpkg -l "linux-image*" | grep '^[i]' | awk '{print $2}' | egrep -v "linux-image-$(uname -r)|linux-image-generic")
if [ "" == "$old" ] ; then
  echo "No inactive kernels found"
  exit
fi

for o in $old; do
  echo "  inactive kernel: $o"
done


removeInactiveKernels
Elaborating on the last script, this one prompts you to remove each inactive kernel as you go:

#!/usr/bin/env bash
# note: run as root, since apt-get remove requires it

old=$(dpkg -l "linux-image*" | grep '^[i]' | awk '{print $2}' | egrep -v "linux-image-$(uname -r)|linux-image-generic")
if [ "" == "$old" ] ; then
  echo "No inactive kernels found"
  exit
fi

for o in $old; do
  response='no'
  echo ""
  echo "Current Kernel: $(uname -r)"
  read -p "Do you wish to remove $o? ([no]|yes) " response
  if [ "$response" == "yes" ] ; then
    echo "  ... removing $o"
    apt-get -y remove "$o"
  fi
done


Using these provided a quick way to identify which kernels I could consider for removal, allowing me to clean up /boot in no time.