Using Monit to restart a running service, without automatically starting it

I recently ran into an issue where a bug in one of my Docker containers would intermittently chew through CPU until restarted. I wanted Monit to automatically restart the service when it was eating CPU (which ordinarily is trivial to do), but due to the mapped volume, I only wanted it to stop & start the service if it was already running. Otherwise, Monit would proceed to start the container on boot prior to the mounted drive being present, resulting in a bunch of headaches.

"if already running" turned out to be a little more complicated than I expected. Monit doesn't have a good answer for this built-in, so the key is to override the start action by executing a no-op when the service isn't running:

check process home-assistant MATCHING "^python.+homeassistant"
   start program = "/usr/bin/docker start home-assistant"
   stop  program = "/usr/bin/docker stop home-assistant"
   if cpu usage > 13% for 3 cycles then restart
   if does not exist then exec /bin/true

Monit considers 100% CPU usage to be full utilization across all cores, which is why you see 13% (you can also verify a service's current CPU usage by checking the output of monit status). In my case, 13% is about 65% CPU on a single core, which (sustained over 3 cycles, i.e. 3 minutes) I deemed enough to recognize when the bug had occurred.
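
For example (newer Monit releases accept a service name argument; on older ones, filter the full output instead):

monit status home-assistant
# or:
monit status | grep -A 15 "Process 'home-assistant'"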

Here you see I'm also using the MATCHING syntax because the entrypoint for Docker containers may change in the future (I don't maintain this one myself).

The only downside to this method is that Monit will repeatedly log that the service is not running until it is started. In my case, because I start Docker services on boot, it wasn't an issue.

Automatically splitting, cropping and rotating multiple photos from a combined scan

I recently offered to digitize all the 4x6 inch family & childhood photo prints, which ended up being a harder task than I expected due to some newbie mistakes.

Originally, I had thought it would be a piece of cake to simply scan multiple photos at a time with a flatbed scanner, which I could trigger from my computer and save images directly to it via the macOS Image Capture app. I'd then write a quick script to detect whitespace between photos and crop them out.

To maximize the number of print photos per scan, I arranged my scans like this:
Example of bad photo positioning on flatbed scanner

This turned out to be a terrible idea.

Scanners are not perfect, and both scanners I used in the process of digitization captured black bars near the edge of the scan, particularly on the side where the lid closes:

Example 1 of edge artifacts in scan

Example 2 of edge artifacts in scan

Example 3 of edge artifacts in scan

This is a classic example of something that is really easy for the human eye to detect, but difficult for computers. Auto-split features in Photoshop didn't work, nor did open-source tooling like ImageMagick + Fred's multicrop.

Doing it properly

So, did you volunteer to digitize a bunch of photos as well? Don't be like me - follow these steps and you'll have no problems at all:

  1. Use Image Capture (or equivalent for your OS) to begin scanning photos
  2. Set format to PNG and DPI to 300 (you can do 600 if you'd like but it will be considerably slower and isn't useful unless you intend to make larger prints than the originals)
  3. Position photos non-overlapping in the center of the flatbed so that whitespace exists on all sides, like this:
    Example of good photo positioning on flatbed scanner
  4. After you've completed all your scans, install ImageMagick. It's typically available via Homebrew or your OS' package manager.
  5. Download Fred's multicrop and run it:

    cd /path/to/scans
    mkdir split
    for photo in *.png;do
        /path/to/multicrop "$photo" "split/${photo// /_}"
    done

    I noticed that the multicrop script has issues if the output filename contains spaces, so this invocation automatically replaces them with underscores.

I doubt this will be relevant for much longer since I'm likely part of the last generation that will need to do this, but hopefully this helps!

But wait, how did I fix it?

After learning the above, you might ask: surely I didn't have to re-scan all the photos?

I was not thrilled about the prospect of cropping manually, but I was also not about to rescan some 2000 photos. With a bit of help from ImageMagick, I was able to get most of the pictures auto-cropped and rotated thanks to Fred's great script.

The photos that were joined by a black bar still needed to be split manually, but most of the scanned photos could still benefit from being auto-cropped and rotated.

I wrote a quick script to address the issue:

  1. Chop 6px off the left of the combined scans, which was roughly the width of the black artifacts
  2. Take each combined scan and add a 50px margin to the left and top, ensuring each individual photo would have whitespace on all sides
  3. Run Fred's multicrop script as usual

Here's the script:

# strip the directory and extension from a path, leaving the bare filename
getFilename() {
    filename=$(basename "$1")
    filename="${filename%.*}"
    echo "$filename"
}

# return the extension (with leading dot), or an empty string if there is none
getExtension() {
    filename=$(basename "$1")
    extension=$([[ "$filename" = *.* ]] && echo ".${filename##*.}" || echo '')
    echo "$extension"
}

pad() {
    in="$(getFilename "$1")"
    ext="$(getExtension "$1")"

    # crop 6px from the left to remove the black edge artifacts
    convert "${in}${ext}" -gravity West -chop 6x0 "tmp/${in}-cropped${ext}"

    # add 50px whitespace on the top and left: anchoring at SouthEast makes
    # the enlarged canvas grow up and to the left
    convert "tmp/${in}-cropped${ext}" -gravity SouthEast -background white -extent "$(identify -format '%[fx:W+50]x%[fx:H+50]' "${in}${ext}")" "tmp/${in}-extended${ext}"
}

split() {
    in="$(getFilename "$1")"
    ext="$(getExtension "$1")"
    ~/bin/multicrop "tmp/${in}-extended${ext}" "output/${in// /_}-split${ext}"
}

# done/ holds the combined scans that have already been processed
mkdir -p tmp output done
for combined in *.png;do
    pad "$combined"
    split "$combined"
    mv "$combined" done/
done
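
Assuming the script above is saved as process_scans.sh (a hypothetical name) alongside the combined scans, a run looks like:

cd /path/to/scans
bash process_scans.sh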

Home server with Docker containers via linuxserver.io

After a few years of meticulously maintaining a large shell script that set up my Fedora home server, I finally got around to containerizing a good portion of it thanks to the fine team at linuxserver.io.

As the software set I tried to maintain grew, I ran into dependency challenges and ended up having to install or compile a few software titles myself, which I generally try to avoid at all costs (since that means I'm on the hook for regularly checking for security updates, addressing compatibility issues with OS upgrades, etc).

After getting my docker-compose file right, it's been wonderful - a simple docker-compose pull updates everything, and a small systemd service brings the containers up at boot. Mapped volumes keep all of the data on the host itself, and I can also use host networking mode for images that I want auto-discovery for (e.g. Plex or SMB).

Plus, since I've implemented docker-compose as a systemd service, I am able to depend on zfs-keyvault to ensure that any dependent filesystems are mounted and available. Hurray!

You can check out a sample config for my setup in this GitHub gist.
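
For reference, here's a minimal sketch of such a unit (the paths and the zfs-keyvault unit name are assumptions; see the gist for my actual setup):

# /etc/systemd/system/docker-compose.service (hypothetical)
[Unit]
Description=docker-compose services
Requires=docker.service zfs-keyvault.service
After=docker.service zfs-keyvault.service

[Service]
Type=oneshot
RemainAfterExit=yes
# directory containing docker-compose.yml (assumed location)
WorkingDirectory=/srv/docker
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down

[Install]
WantedBy=multi-user.target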

Automatically and securely mounting encrypted ZFS filesystems at boot with Azure Key Vault

The need for automation

As noted in my prior blog posts, I use ZFS on Linux for my home fileserver and have been very impressed - it's been extremely stable and versatile, and the command-line utilities have a simple syntax that works exactly as you'd expect.

A few months back, native encryption was introduced into the master branch for testing (you can read more here), and I have been using it to encrypt all my data. I chose not to encrypt my root drive since it doesn't host any user data, and I do not want my boot to be blocked on password input - for example, what if there's a power failure while I'm travelling for work?

However, that still leaves two nagging problems:
1. It became tedious to manually SSH into my machine every time it restarted to type in numerous encrypted filesystem passphrases
2. A bunch of my systemd services depend on user data; an issue in systemd (#8587) prevents using auto-generated mount dependencies to wait for the filesystems to be mounted, so I have to start them manually.

Introducing zfs-keyvault

I decided to kill two birds with one stone and am happy to introduce zfs-keyvault, available on GitHub. It provides both a systemd service that can be depended upon by other services, as well as automation for securely mounting encrypted ZFS filesystems.

On the client (with the ZFS filesystems), a zkv utility is installed that can be used to manage an encrypted repository containing one or more ZFS filesystems' encryption keys. This repository is stored locally, and its encryption key is placed in an Azure Key Vault.

On your preferred webhost or public cloud, a small Flask webserver called zkvgateway gates access to this repository key in Key Vault and can release it under certain conditions.

On boot, the systemd service runs zkv, which reaches out to the gateway, which in turn SMSes you a PIN for approval. The PIN stops people from blindly hitting your endpoint to approve requests, and also prevents replay attacks. The gateway is also rate-limited to 1 request/s to stop brute-force attacks.

Once the PIN is confirmed over SMS, the repository key is released from Azure Key Vault, and the zkv utility can then decrypt the locally stored ZFS filesystem encryption keys and begin mounting the filesystems. The filesystem encryption keys never leave your machine!
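
Once booted, a quick way to spot-check the result ('tank' being a placeholder pool name here):

# keystatus should read 'available' and mounted 'yes' for encrypted datasets
zfs get -r keystatus,mounted tank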

I've uploaded the server-side components as a Docker image named stewartadam/zkvgateway so it can be pulled and run easily. Enjoy!

Policy-based routing on Linux to forward packets from a subnet or process through a VPN

In my last post, I covered how to route packets from a specific VLAN through a VPN on the USG. Here, I will show how to use policy-based routing on Linux to route packets from specific processes or subnets through a VPN connection on a Linux host in your LAN instead. You could then point to this host as the next-hop for a VLAN on your USG to achieve the same effect as in my last post.

Note that this post assumes modern tooling including firewalld and NetworkManager, and that subnet 192.168.10.0/24 is your LAN. This post will send packets coming from 192.168.20.0/24 through the VPN, but you can customize that as you see fit (e.g. send only specific hosts from your normal LAN subnet instead).

VPN network interface setup

First, let's create a VPN firewalld zone so we can easily apply firewall rules just to the VPN connection:

firewall-cmd --permanent --new-zone=VPN
firewall-cmd --reload

Next, create the VPN interface with NetworkManager:

VPN_USER=openvpn_username
VPN_PASSWORD=openvpn_password

# Setup VPN connection with NetworkManager
dnf install -y NetworkManager-openvpn
nmcli c add type vpn ifname vpn con-name vpn vpn-type openvpn
nmcli c mod vpn connection.zone "VPN"
nmcli c mod vpn connection.autoconnect "yes"
nmcli c mod vpn ipv4.method "auto"
nmcli c mod vpn ipv6.method "auto"

# Ensure it is never set as default route, nor listen to its DNS settings
# (doing so would push the VPN DNS for all lookups)
nmcli c mod vpn ipv4.never-default "yes"
nmcli c mod vpn ipv4.ignore-auto-dns on
nmcli c mod vpn ipv6.never-default "yes"
nmcli c mod vpn ipv6.ignore-auto-dns on

# Set configuration options
nmcli c mod vpn vpn.data "comp-lzo = adaptive, ca = /etc/openvpn/keys/vpn-ca.crt, password-flags = 0, connection-type = password, remote = remote.vpnhost.tld, username = $VPN_USER, reneg-seconds = 0"

# Configure VPN secrets for passwordless start
cat << EOF >> /etc/NetworkManager/system-connections/vpn

[vpn-secrets]
password=$VPN_PASSWORD
EOF
systemctl restart NetworkManager

Configure routing table and policy-based routing

Normally, a host has a single routing table and therefore only one default gateway. Static routes can be configured for specific next-hops, but that is still routing based on a packet's destination address; here, we want to route based on the source address of a packet. For this, we need multiple routing tables (one for normal traffic, another for VPN traffic) and Policy Based Routing (PBR) rules to select the right one.

First, let's create a second routing table for VPN connections:

cat << EOF >> /etc/iproute2/rt_tables
100 vpn
EOF
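
For illustration, here's what the equivalent transient commands look like once the table exists (10.8.0.1 is a placeholder VPN gateway IP; the persistent configuration follows below):

ip rule add from 192.168.20.0/24 table vpn
ip route add default via 10.8.0.1 table vpn
ip rule show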

Next, set up an IP rule to select between routing tables for incoming packets based on their source address:

# Replace this with your LAN interface
IFACE=eno1

# Route incoming packets on VPN subnet towards VPN interface
cat << EOF >> /etc/sysconfig/network-scripts/rule-$IFACE
from 192.168.20.0/24 table vpn
EOF

Now that we can properly select which routing table to use, we need to configure the routes in the vpn routing table:

cat << EOF > /etc/sysconfig/network-scripts/route-$IFACE
# Always allow LAN connectivity
192.168.10.0/24 dev $IFACE scope link metric 98 table vpn
192.168.20.0/24 dev $IFACE scope link metric 99 table vpn

# Blackhole by default to avoid privacy leaks if VPN disconnects
blackhole 0.0.0.0/0 metric 100 table vpn
EOF

You'll note that nowhere do we actually define the default gateway - because we can't yet. VPN connections often allocate IPs dynamically, so we'll need to configure the default route in the vpn table to match that particular IP each time the VPN connection starts (we'll do so with a metric lower than the blackhole route's metric of 100, thereby avoiding the blackhole rule).

So, we will configure NetworkManager to trigger a script upon bringing up the VPN interface:

cat << EOF > /etc/NetworkManager/dispatcher.d/90-vpn
VPN_UUID="\$(nmcli con show vpn | grep uuid | tr -s ' ' | cut -d' ' -f2)"
INTERFACE="\$1"
ACTION="\$2"

if [ "\$CONNECTION_UUID" == "\$VPN_UUID" ];then
  /usr/local/bin/configure_vpn_routes "\$INTERFACE" "\$ACTION"
fi
EOF
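
NetworkManager only executes dispatcher scripts that are owned by root and marked executable, so set the permissions accordingly:

chmod 755 /etc/NetworkManager/dispatcher.d/90-vpn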

In that script, we will read the IP address of the VPN interface and install it as the default route. When the VPN is deactivated, we'll do the opposite and clean up the route we added:

cat << EOF > /usr/local/bin/configure_vpn_routes
#!/bin/bash
# Configures a secondary routing table for use with VPN interface

interface=\$1
action=\$2

tables=/etc/iproute2/rt_tables
vpn_table=vpn
zone="\$(nmcli -t --fields connection.zone c show vpn | cut -d':' -f2)"

clear_vpn_routes() {
  table=\$1
  /sbin/ip route show via 192.168/16 table \$table | while read route;do
    /sbin/ip route delete \$route table \$table
  done
}

clear_vpn_rules() {
  keep=\$(ip rule show from 192.168/16)
  /sbin/ip rule show from 192.168/16 | while read line;do
    rule="\$(echo \$line | cut -d':' -f2-)"
    (echo "\$keep" | grep -q "\$rule") && continue
    /sbin/ip rule delete \$rule
  done
}

if [ "\$action" = "vpn-up" ];then
  ip="\$(/sbin/ip route get 8.8.8.8 oif \$interface | head -n 1 | cut -d' ' -f5)"

  # Modify default route
  clear_vpn_routes \$vpn_table
  /sbin/ip route add default via \$ip dev \$interface table \$vpn_table

elif [ "\$action" = "vpn-down" ];then
  # Remove VPN routes
  clear_vpn_routes \$vpn_table
fi
EOF
chmod 755 /usr/local/bin/configure_vpn_routes

Bring up the VPN interface:

nmcli c up vpn
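
If everything worked, the vpn routing table should now contain a default route via the VPN interface's IP:

/sbin/ip route show table vpn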

That's all, enjoy!

Sending all packets from a user through the VPN

I find this technique particularly versatile as one can also easily force all traffic from a particular user through the VPN tunnel:

# Replace this with your LAN interface
IFACE=eno1

# Username (or UID) of the user whose traffic to send over the VPN
USERNAME=foo

# Send any marked packets using VPN routing table
cat << EOF >> /etc/sysconfig/network-scripts/rule-$IFACE
fwmark 0x50 table vpn
EOF

# Mark all packets originating from processes owned by this user
firewall-cmd --permanent --direct --add-rule ipv4 mangle OUTPUT 0 -m owner --uid-owner $USERNAME -j MARK --set-mark 0x50
# Enable masquerade on the VPN zone (NATs forwarded traffic and enables IP forwarding)
firewall-cmd --permanent --add-masquerade --zone=VPN

firewall-cmd --reload

Note that 0x50 is arbitrary; as long as the ip rule and the firewall mark match, you're fine.
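
To verify, compare the apparent public IP of the marked user's traffic against your own (ifconfig.me is just an example service):

curl ifconfig.me
sudo -u $USERNAME curl ifconfig.me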