Linux Backups To Amazon S3

Over the past couple of years, several cloud storage services have emerged, and as one may suspect, Amazon’s S3 is a popular choice due to its feature set and price. While I have had many backup schemes over the years, from tape drives to removable hard drives, they have all been only as “disaster-survivable” as the last time I not only performed the backup but also physically secured the backup media at an alternate location. With the accessibility of cloud storage, I wanted to push my backups out to the cloud so that they would immediately be at an alternate site on redundant and highly available storage.
 

Given that we’re talking about Linux, tried-and-true rsync was certainly capable of meeting my requirements … provided I could present a file system or ssh destination to Linux for rsync to replicate to. This is where s3fs (http://code.google.com/p/s3fs/) comes in, because it is a “Linux FUSE-based file system backed by Amazon S3”. I have used s3fs for a couple of years, and while it has gone through cycles of development and support, it recently “thawed” and underwent some nice improvements, to the extent that it now supports all the usual suspects: user/group ownership, permissions, last-modified timestamps, and representing data (notably folders/directories) in a manner that is compatible with other S3 clients.
 

Before we begin, if you do not yet have an Amazon Web Services account and an S3 bucket, head on over to https://aws.amazon.com and sign up. You may be wondering about costs … as of this writing (July 2013), I am backing up about 30 GB of data to S3 (standard tier, multiple regions, not reduced redundancy) and it runs less than $5.00/month for storage and moderate data transfer in/out, plus a few cents (yes, literally $0.03 or so) each time I run rsync to compare several thousand files. AWS also has a detailed cost estimation tool at http://calculator.s3.amazonaws.com/calc5.html. Finally, if you are paranoid about an “insanely high” bill hitting your credit card, you can set up a billing alert to notify you when a certain threshold is crossed.
 

The first step is obtaining and installing s3fs. You can review the details on the project web site; however, it is traditional Linux: download the tarball, extract it, then run configure, make, and make install. I’ve tested it on various flavors of Ubuntu, most recently Ubuntu 12.04 LTS with s3fs 1.71.
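As a rough sketch of that build (the dependency package names below are my assumptions for Ubuntu 12.04, and the tarball name assumes version 1.71; adjust both to match what you actually download from the project site):

# Build prerequisites (Ubuntu package names; may vary by release)
sudo apt-get install -y build-essential pkg-config libfuse-dev \
  libcurl4-openssl-dev libxml2-dev libssl-dev

# Extract, configure, build, and install the downloaded tarball
tar xzf s3fs-1.71.tar.gz
cd s3fs-1.71
./configure
make
sudo make install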
 

The second step is configuring s3fs. Because my backup job will run as root, I have opted for the system-wide configuration file /etc/passwd-s3fs (ownership root:root, permission 600). If you choose to run as non-root, you can instead reference the user-specific configuration file ~/.passwd-s3fs (secure its ownership and permissions accordingly). The contents of this configuration file are really straightforward, and the s3fs site has detailed examples (understandably, I am not going to provide mine as an example here).
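To illustrate the shape of that file only (the key values below are placeholders, not real credentials), the s3fs documentation describes both a global accessKeyId:secretAccessKey line and a per-bucket bucketName:accessKeyId:secretAccessKey line, secured as described above:

# Create the system-wide credentials file (placeholder values shown)
sudo sh -c 'echo "my_backup:AKIAEXAMPLEKEYID:exampleSecretAccessKey" > /etc/passwd-s3fs'
sudo chown root:root /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs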
 

As far as mounting the S3 file system goes, you can check whether it is presented and mount it if need be:

# Configuration
s3fsCmd='/usr/local/bin/s3fs'
s3Bucket='my_backup'
# no trailing slash on local mount path
localMount="/mnt/s3-${s3Bucket}"

#######################################################################
# Checks the S3 file system, mounting if need be
#######################################################################
function checkMount() {
  df -kh | grep "${localMount}" >/dev/null 2>/dev/null
  if [ ! $? -eq 0 ] ; then
    echo "Mounting ${localMount}"
    ${s3fsCmd} "${s3Bucket}" "${localMount}"
    df -kh | grep "${localMount}" >/dev/null 2>/dev/null
    if [ ! $? -eq 0 ] ; then
      echo "ERROR: Unable to mount S3 Bucket '${s3Bucket}' to file system '${localMount}' using '${s3fsCmd}'"
      exit 1
    fi
  fi
}

 

Once presented, rsync can be leveraged as one may anticipate (with some optimizations for S3 like using --inplace):

checkMount
rsync -ahvO --progress --inplace --delete "/data" "${localMount}/"

 

You can also consider running rsync up to a maximum number of attempts: if a non-zero return code is encountered, unmount the volume (umount -fl "${localMount}"), sleep for perhaps a minute or two (to allow time for pending writes/syncs to complete), and run checkMount prior to attempting rsync again.
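Along those lines, here is a minimal sketch of that retry idea (the attempt count and sleep interval are arbitrary choices):

# Retry rsync a few times, remounting the S3 file system between attempts
maxAttempts=3
attempt=1
while [ ${attempt} -le ${maxAttempts} ] ; do
  checkMount
  rsync -ahvO --progress --inplace --delete "/data" "${localMount}/" && break
  echo "rsync attempt ${attempt} failed; unmounting and retrying"
  umount -fl "${localMount}"
  sleep 120   # allow pending writes/syncs to settle
  attempt=$((attempt + 1))
done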

Including Perl Dependencies Within Your Application

Perl can be great at many things, but when it comes to doing something “real” (parsing XML, compression, logging with log4perl, handling Unix signals) you ultimately want to leverage CPAN modules … and generally, these are not “already installed, out of the box” within Linux distributions like Red Hat.

You may consider including some installation how-to notes on downloading the modules from CPAN and installing them … but then you would have to worry about version conflicts, on top of the hassle of possibly supporting people who simply did not know how to do that. You are probably already delivering your application in a distribution package like RPM to simplify installation … so why would you want to complicate it again?

There is one way to navigate this, which boils down to including your dependencies in your package, within some application-specific library path for example. This is actually quite straightforward:

  1. Download the CPAN modules you need and install them into something like /my/app/lib/
  2. Expand the include path within your code with:
    push(@INC, '/my/app/lib');

Now, what happens when you want to include binary library objects like *.so files? These, having been compiled for a system, are architecture-specific and usually Perl-version-specific. We can extend our basic solution to support this as well by including architecture- and version-specific library paths. For example, we may have:

  • /my/app/lib/perl58-linux-i386
  • /my/app/lib/perl58-linux-x86_64
  • /my/app/lib/perl510-linux-i386
  • /my/app/lib/perl510-linux-x86_64

The required step of downloading and installing the modules from CPAN is still the starting point; it now needs to be performed for each combination of Perl version and architecture we intend to support. So, you may have to do this on several OS and architecture combinations if you plan on supporting multiples (and save off the resulting artifacts within source control so you can incorporate them into your build). Then, the code needs to be enhanced as well:

# Determine runtime version
# $] is a raw version string where examples are:
#     5.008008 for 5.8.8
#     5.010001 for 5.10.1
# So, major version is before the "."
#   then minor is the next 3
#   then patch the last 3
my $perlVersion = "$]";
if ($perlVersion =~ /^([\d]+)\.([\d]{3})([\d]{3})$/) {
  # quick hack of adding zero to the string to "convert" something
  #   like 008 to 8 and 010 to 10
  my $majorVersion = $1 + 0;
  my $minorVersion = $2 + 0;
  my $patchVersion = $3 + 0;
  # for our version and architecture specific lib path, we will
  #   use major and minor (example: 58 or 510)
  $perlVersion = "$majorVersion" . "$minorVersion";
} else {
  # Unexpected version format
  die "FATAL ERROR: Unknown Perl version found ('$perlVersion')\n";
}

# Determine if 32-bit or 64-bit
my $is64 = 'false';
foreach my $incPath (@INC) {
  if ($incPath =~ /64/) {
    $is64 = 'true';
    last;
  }
}

# Runtime-specific path
my $runtimeLibPath = "/my/app/lib/perl" . $perlVersion . "-linux-";
if ($is64 eq 'true') {
  $runtimeLibPath .= "x86_64";
} else {
  $runtimeLibPath .= "i386";
}

# Add it to our include path
push(@INC, $runtimeLibPath);

Now that you see the pattern, you can expand it to platforms beyond Linux as well; a small sketch of that idea follows.
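For instance, Perl exposes the name of the operating system it was built for in the built-in variable $^O (values like "linux", "darwin", or "MSWin32"), so one hedged way to generalize the runtime-specific path (keeping the hypothetical /my/app/lib layout from above) is:

# A sketch only: fold the OS name from $^O into the path instead of
# hard-coding "linux"; the 32/64-bit heuristic stays the same as above
my $osName = $^O;    # e.g. "linux", "darwin", "MSWin32"
my $arch   = ($is64 eq 'true') ? 'x86_64' : 'i386';
my $runtimeLibPath = "/my/app/lib/perl" . $perlVersion . "-" . $osName . "-" . $arch;
push(@INC, $runtimeLibPath);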

Ubuntu Kernel Clean-Up

Ubuntu is great at “set it and forget it” maintenance. Patches come down, and generally life is good.

However, the frequency of kernel updates can fill up a modest /boot partition.
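A quick way to check how much headroom /boot has left before the next kernel update arrives:

# Show usage of the /boot partition
df -h /boot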

Recently faced with an inability to install an updated kernel due to a limited /boot partition, I had to do some housekeeping. I put together two quick scripts to help with this.
 

showInactiveKernels
Of the kernels installed on the system, which are not the one currently running?

#!/usr/bin/env bash

echo "Current Kernel: $(uname -r)"
old=$(dpkg -l "linux-image*" | grep '^[i]' | awk '{print $2}' | egrep -v "linux-image-$(uname -r)|linux-image-generic")
if [ "" == "$old" ] ; then
  echo "No inactive kernels found"
  exit
fi

for o in $old; do
  echo "  inactive kernel: $o"
done

 

removeInactiveKernels
Elaborating on the last script, this one prompts to remove each inactive kernel as you go:

#!/usr/bin/env bash

old=$(dpkg -l "linux-image*" | grep '^[i]' | awk '{print $2}' | egrep -v "linux-image-$(uname -r)|linux-image-generic")
if [ "" == "$old" ] ; then
  echo "No inactive kernels found"
  exit
fi

for o in $old; do
  response='no'
  echo ""
  echo "Current Kernel: $(uname -r)"
  read -p "Do you wish to remove $o? ([no]|yes) " response
  if [ "$response" == "yes" ] ; then
    echo "  ... removing $o"
    apt-get -y remove "$o"
  fi
done

 

Using these provided a quick way to identify which kernels I could consider removing, allowing me to clean up /boot in no time.
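As a side note, newer Ubuntu releases can often do similar housekeeping on their own; assuming the old kernels were installed automatically as dependencies, something along these lines may be all you need:

# Let apt remove packages (including old kernels) that are no longer needed
sudo apt-get autoremove --purge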

Raspberry Pi Game Console

I recently received a Raspberry Pi (model B) and decided to kick the tires on it.  Having grown up with Nintendo, I wanted to configure it as a nostalgic game console.
 

Before I started configuring the OS, I wanted to get a couple of game controllers (so that I would not be using a keyboard/mouse to play the games).  I chose USB controllers (SNES Retro USB by Tomee) and, due to the limited power of the Pi itself, also picked up a powered USB hub.
 

With all the parts in hand (controllers plugged into the powered hub), I started with some basic Raspberry Pi setup:

  • Adjust Memory Split (give GPU 128 MB): sudo raspi-config
  • Update: sudo apt-get update
  • Install Git: sudo apt-get install -y git
  • Install Dialog: sudo apt-get install -y dialog

 

I then proceeded with installing RetroPie:

  • Change to home directory: cd
  • Download RetroPie: git clone git://github.com/petrockblog/RetroPie-Setup.git
  • Change to setup directory: cd RetroPie-Setup
  • Run setup (as root): sudo ./retropie_setup.sh
  • … waited several hours for RetroPie setup to finish …

 

With RetroPie setup complete, I launched Emulation Station (emulationstation on command line).  The first time it runs it will probably detect that you do not have an input configuration file (for controllers) and walk you through the setup.  Basically, follow the on-screen instructions.  If it does not quite go as planned, you can always exit Emulation Station, move ~/.emulationstation/es_input.cfg out of the way, and re-launch it to do it again.  If you are using different controllers, your configuration file may vary; however, here is mine for the controllers I am using:

JOYNAME 2Axes 11Keys Game Pad
BUTTON 1 5
BUTTON 2 6
BUTTON 4 9
BUTTON 5 10
BUTTON 8 8
BUTTON 9 7
AXISPOS 0 4
AXISPOS 1 2
AXISNEG 0 3

 

Go ahead and exit Emulation Station.  Now that the controller is set up within Emulation Station, we have to configure the controller using retroarch-joyconfig.  We want to back up the existing file, and then run the configuration tool, redirecting its output to append to the configuration file:

  • Backup Existing: cp ~/RetroPie/configs/all/retroarch.cfg ~/RetroPie/configs/all/retroarch.cfg.orig
  • Run Config, Redirecting Output To Append: ./retroarch-joyconfig >> ~/RetroPie/configs/all/retroarch.cfg

This again will differ depending on what controller you are using, but here is what mine looks like:

##################################################
# Tomee SNES USB Controller
##################################################
# HotKey exit if SELECT + START
input_enable_hotkey_btn = "8"
input_exit_emulator_btn = "9"
# Player 1
input_player1_joypad_index = "0"
input_player1_a_btn = "1"
input_player1_b_btn = "2"
input_player1_x_btn = "0"
input_player1_y_btn = "3"
input_player1_l_btn = "4"
input_player1_r_btn = "5"
input_player1_start_btn = "9"
input_player1_select_btn = "8"
input_player1_left_axis = "-0"
input_player1_up_axis = "-1"
input_player1_right_axis = "+0"
input_player1_down_axis = "+1"
# Player 2
input_player2_joypad_index = "1"
input_player2_a_btn = "1"
input_player2_b_btn = "2"
input_player2_x_btn = "0"
input_player2_y_btn = "3"
input_player2_l_btn = "4"
input_player2_r_btn = "5"
input_player2_start_btn = "9"
input_player2_select_btn = "8"
input_player2_left_axis = "-0"
input_player2_up_axis = "-1"
input_player2_right_axis = "+0"
input_player2_down_axis = "+1"
##################################################

You can see I added-in the HotKey configuration so that if I press both SELECT and START at the same time, it will exit the emulator, returning me to Emulation Station (this is a handy feature for getting back to your menu of games).
 

I found that my default sound configuration on the Pi was not producing any output.  I narrowed this down to the channel being muted and at 0% volume.  So, I fixed that (I’m using the audio output on the model B, and RCA video out … if you’re using HDMI, your configuration may vary):

  • Check Current Sound Mixer Settings: amixer
  • Unmute: amixer set PCM unmute
  • Turn Up Volume: amixer set PCM 100%
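Assuming your distribution restores ALSA state at boot (as Raspbian typically does), you can persist those mixer settings so they survive a reboot:

# Save the current ALSA mixer state (unmute + volume) for restoration at boot
sudo alsactl store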

 

Another glitch I ran into was getting the Nintendo emulator to run.  When I attempted to run it, it displayed an error on the screen and returned to Emulation Station.  Looking through the logs, I found it was complaining about a missing library.  It appeared that the file name (or the reference to it) had changed at some point, causing the mismatch.  I resolved it by soft-linking the file name it was looking for to the installed file name on disk:

  • Change directory to the Nintendo emulator: cd ~/RetroPie/emulatorcores/fceu-next/fceumm-code/
  • Create the soft-link: ln -s fceumm_libretro.so libretro.so

 

At this point, the Nintendo, Super Nintendo, and Turbo Grafx 16 emulators were all working.
 

For reference, you will want to copy your ROMs to ~/RetroPie/roms/ (see the copy example after this list), where the subdirectories are:

  • atari2600 – Atari 2600 (*.a26, *.bin, *.gz, *.rom, *.zip)
  • doom – Doom (*.wad)
  • duke32 – Duke Nukem 3d (*.grp)
  • gamegear – Sega Game Gear (*.gg)
  • gb – Game Boy (*.gb)
  • gba – Game Boy Advance (*.gba)
  • gbc – Game Boy Color (*.gbc)
  • mame – MAME (*.zip)
  • mastersystem – Sega Master System II (*.sms)
  • megadrive – Sega Mega Drive / Genesis (*.md, *.smd)
  • neogeo – NeoGeo (*.zip)
  • nes – Nintendo (*.nes)
  • pcengine – Turbo Grafx 16 (*.pce)
  • psx – Sony Playstation 1 (*.7z, *.img, *.pbp)
  • scummvm – ScummVM (*.exe)
  • snes – Super Nintendo (*.fig, *.smc, *.swc)
  • zxspectrum – ZX Spectrum (*.z80)
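
As an illustration of getting files there (the hostname, user, and file name below are just placeholders for your own), copying a ROM from another machine over SSH looks like:

# Copy a NES ROM from another machine to the Pi over SSH
scp super_game.nes pi@raspberrypi:RetroPie/roms/nes/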