Using Homebridge with cmdswitch2 to control your Linux machine over HomeKit

A few of my tech projects experience occasional hiccups and need to be soft-reset from my Linux host (e.g. a Wi-Fi SSID routed through a VPN, or a Windows gaming VM with hardware passthrough). This was annoying, as it meant having a machine nearby to SSH in and execute a few simple commands -- often just a systemctl restart foo. Fortunately, homebridge-cmdswitch2 can easily expose arbitrary commands as lights, so I can bounce the services from my phone.

First, since Homebridge should be running as its own system user, we need to give it permission to restart services (as root). We don't want to grant access to all of /bin/systemctl, so we'll place a wrapper script at /usr/local/bin/serviceswitch to encapsulate the desired behavior. Grant the homebridge user permission to run it with sudo:

cat << EOF > /etc/sudoers.d/homebridge-cmdswitch
homebridge ALL = (root) NOPASSWD: /usr/local/bin/serviceswitch
EOF
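
It's worth validating the sudoers fragment with visudo, which catches syntax errors that could otherwise break sudo entirely:

visudo -c -f /etc/sudoers.d/homebridge-cmdswitch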

Next, let's create that /usr/local/bin/serviceswitch script with status, start and stop commands for each service - a wrapper like this also has the benefit that complex checks consisting of several commands can be performed. Keep in mind these are now being run as root by Homebridge!

#!/bin/sh

if [ "$(id -u)" -ne 0 ];then
  echo "You must run this script as root."
  exit 1
fi

usage() {
  error="$1"
  if [ ! -z "$error" ];then
    echo "Error: $error"
  fi
  echo "Usage: $0 [action] [service]"
}

action="$1"
service="$2"
if [ -z "$action" ] || [ -z "$service" ];then
  usage
  exit 1
fi

case $action in
  start|stop|status) ;;
  *) usage "invalid action, must be one of [start, stop, status]"; exit 1;;
esac

case $service in
  vm-guests)
    [ "$action" == "start" ] && (systemctl start libvirt-guests)
    [ "$action" == "stop" ] && (systemctl stop libvirt-guests)
    [ "$action" == "status" ] && { systemctl -q is-active libvirt-guests; exit $?; }
    ;;
  fileserver)
    [ "$action" == "start" ] && (systemctl start smb;systemctl start nmb;systemctl start netatalk)
    [ "$action" == "stop" ] && (systemctl stop smb;systemctl stop nmb;systemctl stop netatalk)
    [ "$action" == "status" ] && { (systemctl -q is-active smb && systemctl -q is-active nmb && systemctl -q is-active netatalk); exit $?; }
    ;;
  web)
    [ "$action" == "start" ] && (systemctl start httpd)
    [ "$action" == "stop" ] && (systemctl stop httpd)
    [ "$action" == "status" ] && { systemctl -q is-active httpd; exit $?; }
    ;;
  *) usage "invalid service"; exit 1;;
esac
exit 0
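
After saving the script, make it executable and test the sudo rule end-to-end as the homebridge user - the exit code is what the state_cmd below will rely on:

chmod 755 /usr/local/bin/serviceswitch
sudo -u homebridge sudo /usr/local/bin/serviceswitch status web
echo $?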

Finally, here is the relevant platform section from the homebridge config:

{
  "platforms": [{
    "platform": "cmdSwitch2",
    "name": "Command Switch",
    "switches": [{
      "name": "vm-guests",
      "on_cmd": "sudo /usr/local/bin/serviceswitch start vm-guests",
      "off_cmd": "sudo /usr/local/bin/serviceswitch stop vm-guests",
      "state_cmd": "sudo /usr/local/bin/serviceswitch status vm-guests",
      "polling": false,
      "interval": 5,
      "timeout": 10000
    },
    {
      "name": "fileserver",
      "on_cmd": "sudo /usr/local/bin/serviceswitch start fileserver",
      "off_cmd": "sudo /usr/local/bin/serviceswitch stop fileserver",
      "state_cmd": "sudo /usr/local/bin/serviceswitch status fileserver",
      "polling": false,
      "interval": 5,
      "timeout": 10000
    },
    {
      "name": "web",
      "on_cmd": "sudo /usr/local/bin/serviceswitch start web",
      "off_cmd": "sudo /usr/local/bin/serviceswitch stop web",
      "state_cmd": "sudo /usr/local/bin/serviceswitch status web",
      "polling": false,
      "interval": 5,
      "timeout": 10000
    }]
  }]
}

Using Monit to restart a running service, without automatically starting it

I recently ran into an issue where a bug in one of my Docker containers would intermittently chew through CPU until restarted. I wanted Monit to automatically restart the service when it was eating CPU (ordinarily trivial to do), but because of a mapped volume, I only wanted it to stop & start the service if it was already running. Otherwise, Monit would start the container at boot before the mounted drive was present, resulting in a bunch of headaches.

"if already running" turned out to be a little more complicated than I expected. Monit doesn't have a good answer for this built-in, so the key is to override the start action by executing a no-op when the service isn't running:

check process home-assistant MATCHING "^python.+homeassistant"
   start program = "/usr/bin/docker start home-assistant"
   stop  program = "/usr/bin/docker stop home-assistant"
   if cpu usage > 13% for 3 cycles then restart
   if does not exist then exec /bin/true
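
After adding this to your monitrc (or a file under an included directory such as /etc/monit.d), you can check the syntax and apply it without restarting the daemon:

monit -t
monit reload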

Monit considers 100% CPU usage to be full utilization across all cores, which is why you see 13% (you can verify a service's current CPU usage by checking the output of monit status). In my case, 13% of total capacity works out to about 65% of a single core, which, sustained over 3 cycles (3 minutes here), I deemed enough to recognize when the bug had occurred.

Here you'll also see I'm using the MATCHING syntax, because the entrypoint for the Docker container may change in the future (I don't maintain this image myself).

The only downside to this method is that Monit will repeatedly log about the service not running until it is started. In my case, since I start my Docker services at boot, this wasn't an issue.

Automatically splitting, cropping and rotating multiple photos from a combined scan

I recently offered to digitize all of the family & childhood 4x6 inch photo prints, which ended up being a harder task than I expected due to some newbie mistakes.

Originally, I had thought it would be a piece of cake: simply scan multiple photos at a time with a flatbed scanner, which I could trigger from my computer and save images directly to it via the macOS Image Capture app. I'd then write a quick script to detect the whitespace between photos and crop each one out.

To maximize the number of print photos per scan, I arranged my scans like this:
Example of bad photo positioning on flatbed scanner

This turned out to be a terrible idea.

Scanners are not perfect, and both scanners I used in the process of digitization captured black bars near the edge of the scan, particularly on the side where the lid closes:

Example 1 of edge artifacts in scan

Example 2 of edge artifacts in scan

Example 3 of edge artifacts in scan

This is a classic example of something really easy for the human eye to detect, but something that is difficult to get computers to detect. Auto-split features in Photoshop didn't work, nor did open-source tooling like ImageMagick + Fred's multicrop.

Doing it properly

So, did you volunteer to digitize a bunch of photos as well? Don't be like me - follow these steps and you'll have no problems at all:

  1. Use Image Capture (or equivalent for your OS) to begin scanning photos
  2. Set format to PNG and DPI to 300 (you can do 600 if you'd like but it will be considerably slower and isn't useful unless you intend to make larger prints than the originals)
  3. Position photos non-overlapping in the center of the flatbed so that whitespace exists on all sides, like this:
    Example of good photo positioning on flatbed scanner
  4. After you've completed all your scans, install ImageMagick. It's typically available via Homebrew or your OS' package manager.
  5. Download Fred's multicrop and run it:

    cd /path/to/scans
    mkdir split
    for photo in *.png;do
      /path/to/multicrop "$photo" "split/${photo// /_}"
    done

    I noticed that the multicrop script has issues if the output filename contains spaces, so this invocation automatically replaces them with underscores.

I doubt this will be relevant for much longer since I'm likely among the last generation that will need to do this, but hopefully this helps!

But wait, how did I fix it?

After learning the above, you might ask: surely I didn't have to re-scan all the photos?

I was not thrilled about the prospect of cropping manually, but I was also not about to rescan some 2000 photos. With a bit of help from ImageMagick, I was able to get most of the pictures auto-cropped and rotated thanks to Fred's great script.

The photos that were joined by a black bar still needed to be split manually, but most of the scanned photos could still benefit from being auto-cropped and rotated.

I wrote a quick script to address the issue:

  1. Chop 6px off the left of the combined scans, which was roughly the width of the black artifacting
  2. Take each combined scan and add a 50px margin to the left and top to ensure each individual photo has whitespace on all sides
  3. Run Fred's multicrop script as usual

Here's the script:

#!/bin/bash

getFilename() {
    filename=$(basename "$1")
    filename="${filename%.*}"
    echo "$filename"
}

getExtension() {
    filename=$(basename "$1")
    extension=$([[ "$filename" = *.* ]] && echo ".${filename##*.}" || echo '')
    echo "$extension"
}

pad() {
    in="$(getFilename "$1")"
    ext="$(getExtension "$1")"

    # crop 6px from the left to remove the black edge artifacts
    convert "${in}${ext}" -gravity West -chop 6x0 "tmp/${in}-cropped${ext}"

    # extend the canvas by 50px; anchoring the image at southeast puts the
    # added whitespace on the top and left sides
    convert "tmp/${in}-cropped${ext}" -gravity southeast -background white -extent $(identify -format '%[fx:w+50]x%[fx:h+50]' "tmp/${in}-cropped${ext}") "tmp/${in}-extended${ext}"
}

split() {
    in="$(getFilename "$1")"
    ext="$(getExtension "$1")"
    ~/bin/multicrop "tmp/${in}-extended${ext}" "output/${in// /_}-split${ext}"
}

# 'done' collects the original scans once they have been processed
mkdir -p tmp output done
for combined in *.png;do
    pad "$combined"
    split "$combined"
    mv "$combined" done
done
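
Assuming you save the script above as process-scans.sh (a name I've picked for illustration) in the same directory as your combined scans, run:

cd /path/to/scans
chmod +x process-scans.sh
./process-scans.sh

The split, auto-cropped photos land in output/ and the original scans are moved to done/ as they are processed.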

Home server with Docker containers via linuxserver.io

After a few years of meticulously maintaining a large shell script that set up my Fedora home server, I finally got around to containerizing a good portion of it thanks to the fine team at linuxserver.io.

As the software set I tried to maintain grew, there were a few challenges with dependencies, and I ended up having to install or compile a few software titles myself - something I generally try to avoid at all costs, since it means I'm on the hook for regularly checking for security updates, addressing compatibility issues with OS upgrades, and so on.

After getting my docker-compose file right, it's been wonderful - a simple docker-compose pull updates everything, and a small systemd service brings the containers up at boot. Mapped volumes mean all of the data stays on my host, and I can use host networking mode for images that I want auto-discovery to work with (e.g. Plex or SMB).

Plus, since I've implemented docker-compose as a systemd service, I am able to depend on zfs-keyvault to ensure that any dependent filesystems are mounted and available. Hurray!
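
For illustration, the unit looks something like this - the unit name, file paths and service names here are placeholders, and zfs-keyvault.service refers to the service described in the next section:

[Unit]
Description=Home server containers via docker-compose
Requires=docker.service zfs-keyvault.service
After=docker.service zfs-keyvault.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/compose
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down

[Install]
WantedBy=multi-user.target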

You can check out a sample config for my setup in this GitHub gist.

Automatically and securely mounting encrypted ZFS filesystems at boot with Azure Key Vault

The need for automation

As noted in my prior blog posts, I use ZFS on Linux for my home fileserver and have been very impressed - it's been extremely stable and versatile, and the command line utilities have a simple syntax that works exactly as you'd expect.

A few months back, native encryption was introduced into the master branch for testing (you can read more here), and I have been using it to encrypt all my data. I chose not to encrypt my root drive since it doesn't host any user data, and I do not want my boot to be blocked on password input - for example, what if there's a power failure while I'm travelling for work?
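
For reference, creating a dataset with native encryption is a one-liner (the pool/dataset names are placeholders); ZFS prompts for the passphrase, and each encrypted filesystem needs its key loaded again after every reboot:

zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/data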

However, that still leaves two nagging problems:
1. It became tedious to manually SSH into my machine every time it restarted to type in numerous encrypted filesystem passphrases
2. A bunch of my systemd services depend on user data; an issue in systemd (#8587) prevents using the auto-generated mount dependencies to wait for the filesystems to be mounted, so I have to start them manually.

Introducing zfs-keyvault

I decided to kill two birds with one stone and am happy to introduce zfs-keyvault, available on GitHub. It provides both a systemd service that other services can depend on, as well as automation for securely mounting encrypted ZFS filesystems.
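
For example, any unit that needs user data can declare a dependency on it (assuming the service is installed as zfs-keyvault.service):

[Unit]
Requires=zfs-keyvault.service
After=zfs-keyvault.service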

On the client (the machine with the ZFS filesystems), a zkv utility is installed that manages an encrypted repository containing one or more ZFS filesystems' encryption keys. This repository is stored locally, and its encryption key is placed in an Azure Key Vault.

On your preferred webhost or public cloud, a small Flask webserver called zkvgateway gates access to this repository key in Key Vault and can release it under certain conditions.

On boot, the systemd service runs zkv, which reaches out to the gateway, which in turn sends you an SMS with a PIN for approval. Requiring a PIN stops people from blindly hitting your endpoint to approve requests, and also prevents replay attacks. The gateway is also rate-limited to 1 request/s to stop brute-force attacks.

Once the PIN is confirmed over SMS, the repository key is released from Azure Key Vault, and the zkv utility can then decrypt the locally stored ZFS filesystem encryption keys and begin mounting the filesystems. The filesystem encryption keys never leave your machine!
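
Conceptually, the final mounting step for each filesystem is equivalent to the stock ZFS commands below (an illustration of the mechanism, not zkv's actual code - the key file path and dataset name are placeholders):

zfs load-key -L file:///path/to/decrypted.key tank/data
zfs mount tank/data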

I've uploaded the server-side components as a Docker image named stewartadam/zkvgateway so it can be pulled and run easily. Enjoy!