• 4 min read
  • I recently offered to digitize all of the family & childhood 4x6 inch photo prints, which ended up being a harder task than I expected due to some newbie mistakes.

    Originally, I had thought it would be a piece of cake to simply scan multiple photos at a time with a flatbed scanner, which I could trigger from my computer and save images directly to it via the macOS Image Capture app. I’d then write a quick script to detect whitespace between photos and crop them out.

    To maximize the number of print photos per scan, I arranged my scans like this: Example of bad photo positioning on flatbed scanner

    This turned out to be a terrible idea.

    Scanners are not perfect, and both scanners I used in the process of digitization captured black bars near the edge of the scan, particularly on the side where the lid closes:

    Example 1 of edge artifacts in scan

    Example 2 of edge artifacts in scan

    Example 3 of edge artifacts in scan

    This is a classic example of something really easy for the human eye to detect, but difficult to get computers to detect. The auto-split feature in Photoshop didn’t work, nor did open-source tooling like ImageMagick + Fred’s multicrop.

    Doing it properly

    So, did you volunteer to digitize a bunch of photos as well? Don’t be like me, simply arrange your photos like this and you’ll have no problems at all:

    1. Use Image Capture (or equivalent for your OS) to begin scanning photos

    2. Set format to PNG and DPI to 300 (you can do 600 if you’d like but it will be considerably slower and isn’t useful unless you intend to make larger prints than the originals)

    3. Position photos non-overlapping in the center of the flatbed so that whitespace exists on all sides, like this: Example of good photo positioning on flatbed scanner

    4. After you’ve completed all your scans, install ImageMagick. It’s typically available via Homebrew or your OS’ package manager.

    5. Download Fred’s multicrop and run it:

      cd /path/to/scans
      mkdir split
      for photo in *.png; do
        /path/to/multicrop "$photo" "split/${photo// /_}"
      done

      I noticed that the multicrop script has issues if you specify spaces in the output filename, so this invocation automatically replaces them with an underscore.
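      As a quick illustration of that substitution, bash’s `${var//pattern/replacement}` expansion rewrites every space (the filename below is hypothetical):

      ```shell
      # Hypothetical scan filename containing spaces
      photo="family reunion 1998.png"
      # Replace every space with an underscore before passing to multicrop
      echo "split/${photo// /_}"   # prints: split/family_reunion_1998.png
      ```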

    I doubt this will be relevant for much longer since I’m likely the last generation that will need to do this, but hopefully this helps!

    But wait, how did I fix it?

    After learning the above, surely I didn’t have to re-scan all the photos, you might ask?

    I was not thrilled about the prospect of cropping manually, but I was also not about to rescan some 2000 photos. With a bit of help from ImageMagick, I was able to get most of the pictures auto-cropped and rotated thanks to Fred’s great script.

    The photos that were joined by a black bar still needed to be split manually, but most of the scanned photos could still benefit from being auto-cropped and rotated.

    I wrote a quick script to address the issue:

    1. Chop 6px off the left of the combined scans, which was roughly the width of the black artifacting
    2. Take each combined scan and add a 50px margin to the left and top to ensure each individual photo would have whitespace on all sides
    3. Run Fred’s multicrop script as usual

    Here’s the script:

    getFilename() {
    	filename=$(basename "$1")
    	filename="${filename%.*}"
    	echo "$filename"
    }
    
    getExtension() {
    	filename=$(basename "$1")
    	extension=$([[ "$filename" = *.* ]] && echo ".${filename##*.}" || echo '')
    	echo "$extension"
    }
    
    pad() {
    	in="$(getFilename "$1")"
    	ext="$(getExtension "$1")"
    
      # crop 6px from left
    	convert "${in}${ext}" -gravity West -chop 6x0 "tmp/${in}-cropped${ext}"
    
      # add 50px whitespace top and left
    	convert "tmp/${in}-cropped${ext}" -gravity SouthEast -background white -extent $(identify -format '%[fx:W+50]x%[fx:H+50]' "tmp/${in}-cropped${ext}") "tmp/${in}-extended${ext}"
    }
    
    split() {
    	in="$(getFilename "$1")"
    	ext="$(getExtension "$1")"
    	~/bin/multicrop "tmp/${in}-extended${ext}" "output/${in// /_}-split${ext}"
    }
    
    mkdir -p tmp output done
    for combined in *.png; do
    	pad "$combined"
    	split "$combined"
    	mv "$combined" done
    done
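    For clarity, here’s what the two filename helpers produce on a hypothetical input path:

    ```shell
    getFilename() {
    	filename=$(basename "$1")
    	echo "${filename%.*}"
    }

    getExtension() {
    	filename=$(basename "$1")
    	[[ "$filename" = *.* ]] && echo ".${filename##*.}" || echo ''
    }

    getFilename "/path/to/scan 01.png"    # prints: scan 01
    getExtension "/path/to/scan 01.png"   # prints: .png
    ```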
  • 1 min read
  • After a few years of meticulously maintaining a large shell script that set up my Fedora home server, I finally got around to containerizing a good portion of it thanks to the fine team at linuxserver.io.

    As the software set I tried to maintain grew, there were a few challenges with dependencies and I ended up having to install/compile a few software titles myself, which I generally try to avoid at all costs (since that means I’m on the hook for regularly checking for security updates, addressing compatibility issues with OS upgrades, etc).

    After getting my docker-compose file right, it’s been wonderful - a simple docker-compose pull updates everything and a small systemd service keeps the docker images running at boot. Mapped volumes mean none of the data is outside my host, and I can also use the host networking mode for images that I want auto-discovery for (e.g. Plex or SMB).

    Plus, seeing as I’ve implemented docker-compose as a systemd service, I am able to depend on zfs-keyvault to ensure that any dependent filesystems are mounted and available. Hurray!
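    As an illustration, a unit along these lines wires docker-compose into systemd (the unit name, paths, and dependency list here are assumptions sketched from my description, not an exact copy of my setup):

    ```ini
    # /etc/systemd/system/compose-stack.service (illustrative)
    [Unit]
    Description=Home server docker-compose stack
    Requires=docker.service
    After=docker.service network-online.target zfs-keyvault.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    WorkingDirectory=/srv/compose
    ExecStart=/usr/bin/docker-compose up -d
    ExecStop=/usr/bin/docker-compose down

    [Install]
    WantedBy=multi-user.target
    ```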

    You can check out a sample config for my setup in this GitHub gist.

  • 2 min read
  • The need for automation

    As noted in my prior blogs, I use ZFS on Linux for my home fileserver and have been very impressed - it’s been extremely stable and versatile, and the command line utilities have a simple syntax that works exactly as you’d expect.

    A few months back, native encryption was introduced into the master branch for testing (you can read more here), and I have been using it to encrypt all my data. I chose not to encrypt my root drive since it doesn’t host any user data, and I do not want my boot to be blocked on password input - for example, what if there’s a power failure while I’m travelling for work?

    However that still leaves two nagging problems:

    1. It became tedious to manually SSH into my machine every time it restarts to type in numerous encrypted filesystem passphrases
    2. A bunch of my systemd services depend on user data; an issue in systemd (#8587) prevents using auto-generated mount dependencies to wait for the filesystems to be mounted, so I have to start them manually.

    Introducing zfs-keyvault

    I decided to kill two birds with one stone and am happy to introduce zfs-keyvault, available on GitHub. It provides both a systemd service that can be depended upon by other services, as well as automation for securely mounting encrypted ZFS filesystems.

    On the client (with ZFS filesystems), a zkv utility is installed that can be used to manage an encrypted repository containing the encryption keys for one or more ZFS filesystems. This repository is stored locally, and its encryption key is placed in an Azure Key Vault.

    On your preferred webhost or public cloud, a small Flask webserver called zkvgateway gates access to this repository key in Key Vault and can release it under certain conditions.

    On boot, the systemd service runs zkv, which reaches out to the gateway, which in turn SMSes you a PIN for approval. Requiring a PIN stops people from blindly hitting your endpoint to approve requests, and also prevents replay attacks. The gateway is also rate-limited to 1 request/s to stop brute-force attacks.

    Once the PIN is confirmed over SMS, the repository key is released from Azure Key Vault, and the zkv utility can then decrypt the locally stored ZFS filesystem encryption keys and begin mounting the filesystems. The filesystem encryption keys never leave your machine!

    I’ve uploaded the server-side components as a Docker image named stewartadam/zkvgateway so it can be pulled and run easily. Enjoy!

  • 5 min read
  • In my last post, I covered how to route packets from a specific VLAN through a VPN on the USG. Here, I will show how to use policy-based routing on Linux to route packets from specific processes or subnets through a VPN connection on a Linux host in your LAN instead. You could then point to this host as the next-hop for a VLAN on your USG to achieve the same effect as in my last post.

    Note that this post assumes modern tooling, including firewalld and NetworkManager, and that subnet 192.168.10.0/24 is your LAN. This post will send packets coming from 192.168.20.0/24 to the VPN, but you can customize that as you see fit (e.g. send only specific hosts from your normal LAN subnet instead).

    VPN network interface setup

    First, let’s create a VPN firewalld zone so we can easily apply firewall rules just to the VPN connection:

    firewall-cmd --permanent --new-zone=VPN
    firewall-cmd --reload

    Next, create the VPN interface with NetworkManager:

    VPN_USER=openvpn_username
    VPN_PASSWORD=openvpn_password
    
    # Setup VPN connection with NetworkManager
    dnf install -y NetworkManager-openvpn
    nmcli c add type vpn ifname vpn con-name vpn vpn-type openvpn
    nmcli c mod vpn connection.zone "VPN"
    nmcli c mod vpn connection.autoconnect "yes"
    nmcli c mod vpn ipv4.method "auto"
    nmcli c mod vpn ipv6.method "auto"
    
    # Ensure it is never set as default route, nor listen to its DNS settings
    # (doing so would push the VPN DNS for all lookups)
    nmcli c mod vpn ipv4.never-default "yes"
    nmcli c mod vpn ipv4.ignore-auto-dns on
    nmcli c mod vpn ipv6.never-default "yes"
    nmcli c mod vpn ipv6.ignore-auto-dns on
    
    # Set configuration options
    nmcli c mod vpn vpn.data "comp-lzo = adaptive, ca = /etc/openvpn/keys/vpn-ca.crt, password-flags = 0, connection-type = password, remote = remote.vpnhost.tld, username = $VPN_USER, reneg-seconds = 0"
    
    # Configure VPN secrets for passwordless start
    cat >> /etc/NetworkManager/system-connections/vpn <<EOF
    
    [vpn-secrets]
    password=$VPN_PASSWORD
    EOF
    systemctl restart NetworkManager

    Configure routing table and policy-based routing

    Normally, a host has a single routing table and therefore only one default gateway. Static routes can be configured for specific next-hops, but that still routes based on a packet’s destination address; here, we want to route based on the source address of a packet. For this, we need multiple routing tables (one for normal traffic, another for VPN traffic) and Policy Based Routing (PBR) rules that define how to select the right one.
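    Conceptually, the runtime equivalent is just two iproute2 commands (shown for illustration only; they require root, and we’ll persist the equivalent via network-scripts files below):

    ```
    # Rule: packets sourced from the VPN subnet consult the "vpn" table
    ip rule add from 192.168.20.0/24 table vpn

    # Route: the "vpn" table sends everything out the VPN interface
    ip route add default dev vpn table vpn
    ```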

    First, let’s create a second routing table for VPN connections:

    cat >> /etc/iproute2/rt_tables <<EOF
    100 vpn
    EOF

    Next, set up an IP rule to select between routing tables for incoming packets based on their source address:

    # Replace this with your LAN interface
    IFACE=eno1
    
    # Route incoming packets on VPN subnet towards VPN interface
    cat > /etc/sysconfig/network-scripts/rule-$IFACE <<EOF
    from 192.168.20.0/24 table vpn
    EOF

    Now that we can properly select which routing table to use, we need to configure routes on the vpn routing table:

    cat > /etc/sysconfig/network-scripts/route-$IFACE <<EOF
    # Always allow LAN connectivity
    192.168.10.0/24 dev $IFACE scope link metric 98 table vpn
    192.168.20.0/24 dev $IFACE scope link metric 99 table vpn
    
    # Blackhole by default to avoid privacy leaks if VPN disconnects
    blackhole 0.0.0.0/0 metric 100 table vpn
    EOF

    You’ll note that nowhere do we actually define the default gateway - because we can’t yet. VPN connections often dynamically allocate IPs, so we’ll need to configure the default route for the vpn table to match that particular IP each time the VPN connection starts (we’ll do so with a metric smaller than the blackhole’s 100, so the real route wins whenever the VPN is up).
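    Once that’s done and the VPN is up, `ip route show table vpn` would look something like this (the 10.8.0.1 gateway address is made up for illustration; yours is assigned by the VPN server):

    ```
    default via 10.8.0.1 dev vpn
    192.168.10.0/24 dev eno1 scope link metric 98
    192.168.20.0/24 dev eno1 scope link metric 99
    blackhole default metric 100
    ```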

    So, we will configure NetworkManager to trigger a script upon bringing up the VPN interface:

    cat > /etc/NetworkManager/dispatcher.d/90-vpn <<EOF
    #!/bin/bash
    VPN_UUID="\$(nmcli con show vpn | grep uuid | tr -s ' ' | cut -d' ' -f2)"
    INTERFACE="\$1"
    ACTION="\$2"
    
    if [ "\$CONNECTION_UUID" == "\$VPN_UUID" ];then
      /usr/local/bin/configure_vpn_routes "\$INTERFACE" "\$ACTION"
    fi
    EOF
    chmod 755 /etc/NetworkManager/dispatcher.d/90-vpn

    In that script, we will read the IP address of the VPN interface and install it as the default route. When the VPN is deactivated, we’ll do the opposite and cleanup the route we added:

    cat > /usr/local/bin/configure_vpn_routes <<EOF
    #!/bin/bash
    # Configures a secondary routing table for use with VPN interface
    
    interface=\$1
    action=\$2
    
    tables=/etc/iproute2/rt_tables
    vpn_table=vpn
    zone="\$(nmcli -t --fields connection.zone c show vpn | cut -d':' -f2)"
    
    clear_vpn_routes() {
      table=\$1
      /sbin/ip route show via 192.168/16 table \$table | while read route;do
        /sbin/ip route delete \$route table \$table
      done
    }
    
    clear_vpn_rules() {
      keep=\$(ip rule show from 192.168/16)
      /sbin/ip rule show from 192.168/16 | while read line;do
        rule="\$(echo \$line | cut -d':' -f2-)"
        (echo "\$keep" | grep -q "\$rule") && continue
        /sbin/ip rule delete \$rule
      done
    }
    
    if [ "\$action" = "vpn-up" ];then
      ip="\$(/sbin/ip route get 8.8.8.8 oif \$interface | head -n 1 | cut -d' ' -f5)"
    
      # Modify default route
      clear_vpn_routes \$vpn_table
      /sbin/ip route add default via \$ip dev \$interface table \$vpn_table
    
    elif [ "\$action" = "vpn-down" ];then
      # Remove VPN routes
      clear_vpn_routes \$vpn_table
    fi
    EOF
    chmod 755 /usr/local/bin/configure_vpn_routes

    Bring up the VPN interface:

    nmcli c up vpn

    That’s all, enjoy!

    Sending all packets from a user through the VPN

    I find this technique particularly versatile as one can also easily force all traffic from a particular user through the VPN tunnel:

    # Replace this with your LAN interface
    IFACE=eno1
    
    # Username (or UID) of the user whose traffic to send over the VPN
    USERNAME=foo
    
    # Send any marked packets using VPN routing table
    cat >> /etc/sysconfig/network-scripts/rule-$IFACE <<EOF
    fwmark 0x50 table vpn
    EOF
    
    # Mark all packets originating from processes owned by this user
    firewall-cmd --permanent --direct --add-rule ipv4 mangle OUTPUT 0 -m owner --uid-owner $USERNAME -j MARK --set-mark 0x50
    # Enable masquerade on the VPN zone (enables IP forwarding between interfaces)
    firewall-cmd --permanent --add-masquerade --zone=VPN
    
    firewall-cmd --reload

    Note that 0x50 is arbitrary; as long as the ip rule and the firewall mark match, you’re fine.
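    The mark is ordinary hexadecimal, so 0x50 is just 80 in decimal - any otherwise unused value works:

    ```shell
    # Convert the fwmark value to decimal to confirm what you're matching on
    printf '%d\n' 0x50   # prints: 80
    ```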

  • 5 min read
  • A little while back, I posted this on Reddit about setting up a Ubiquiti Unifi Security Gateway (USG) or EdgeRouter Lite (ERL) to selectively route packets through a VPN interface; I wanted to elaborate a bit on the setup for this.

    The goal

    The goal was to have my Unifi device establish two networks: one that behaves normally, and another that routes all traffic through a VPN interface automatically. The value prop for a setup like this is that you avoid having to configure the VPN on each device separately; simply connect to the network and that’s it. It’s simple, uses a single VPN connection for multiple devices, and even lets friends & family use it easily with zero configuration.

    In my case, I set up a second SSID tied to a different subnet (tagged VLAN), so simply by switching networks you gain VPN protection. Normally, this is very difficult to do because the router has a single default route; all packets not destined for local networks will exit using said default route.

    This technique is made possible through the use of policy-based routing, which establishes multiple routing tables and rules on when to use a given table. This permits the router to determine the next-hop based on the source address, not the destination address.

    Configuration

    Here are the basic steps to getting your USG configured:

    # Setup route using table #1 with next-hop as VPN, blackhole if VPN is down
    set protocols static table 1 route 0.0.0.0/0 blackhole distance 100
    set protocols static table 1 interface-route 0.0.0.0/0 next-hop-interface vtun0 distance 2
    
    # Set rules for when to send packets using routes from table 1
    set firewall modify SOURCE_ROUTE rule 10 description "Traffic from VLAN 11 to VPN"
    set firewall modify SOURCE_ROUTE rule 10 source address 192.168.20.0/24
    set firewall modify SOURCE_ROUTE rule 10 modify table 1
    set firewall modify SOURCE_ROUTE rule 10 action modify
    
    # Apply the rule
    set interfaces ethernet eth1 vif 11 firewall in modify SOURCE_ROUTE
    commit
    save

    As long as the Ubiquiti router is the default gateway (it should be if it’s serving DHCP), machines on network 192.168.20.0/24 will now automatically route packets through the USG, and then the USG will pick the vtun0 interface as its next-hop for those packets. Packets from other sources (e.g. your regular LAN) are routed normally through the WAN link.

    Note that you can also set the next-hop to a host by using this static route instead of the one above (note the removal of the blackhole - you’ll need to configure that on the host specified as the next-hop to avoid privacy leaks):

    # Setup route in table 1 with next-hop as VPN via a local server
    set protocols static table 1 route 0.0.0.0/0 next-hop 192.168.20.100

    This is useful if you have a home server connected to VPN, and want to route packets through its VPN connection instead of the USG (some additional setup required; more on that in this post).

    Troubleshooting

    If this setup does not work as expected, the easiest way to troubleshoot is to verify connectivity. tcpdump will be your best friend here. Something like tcpdump -i interface -A port 80 will trace all HTTP traffic on the supplied interface and print the packet contents, so you can confirm what is actually flowing.

    You’ll want to verify connectivity in this order:

    1. Verify source packets leave a host

    Use any machine on the 192.168.20.0/24 network to test and start generating some traffic, ensuring the packets do hit the network. You can use something simple like curl google.com to trigger some traffic, and monitor the network interface with tcpdump per above to make sure the packets are sent out.

    2. Verify packets are reaching your next-hop

    Your next-hop is probably the USG/ERL router, but could also be an IP on your network as well. Now that we’ve confirmed packets are leaving, make sure they are arriving by inspecting the LAN-side interface on your configured next-hop.

    Connect over SSH to the next hop (if it’s a USG/ERL read this) and run sudo -i. Use ip a to list all interfaces & configured IPs; this should let you pick out the interface name associated to an IP on the 192.168.20.0/24 network.

    Now run tcpdump against that interface and then generate some traffic on the test host from step 1. Do you see them arriving?

    3. Verify packets are exiting your next-hop on the VPN interface

    If you’ve made it this far, packets arrive on your next-hop, so let’s make sure it’s forwarding out through the right interface. First, list all configured policy-based routing rules with ip rule - you should expect to see 0 (default table) and 1 (for certain marked packets). List out each routing table using ip route show table X to make sure things look as you’d expect. For example, ip route show table 1:

    default dev vtun0  scope link
    blackhole default  metric 100

    Or if you configured next-hop as a host instead:

    default via 192.168.20.100 dev eth1.11 

    Now run tcpdump on the shown interface and verify that packets are exiting as expected.

    Note for Unifi users

    Lastly, note that if you use a USG instead of an ERL, these settings will not be persisted. Your settings will be overwritten by Unifi Controller after any provision or reboot operation, so you will need to manually persist them by exporting to a config.gateway.json file.
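    As a sketch of what that file might contain, derived from the set commands above (treat this as an illustration rather than a drop-in config, and validate the JSON before provisioning):

    ```json
    {
      "protocols": {
        "static": {
          "table": {
            "1": {
              "route": {
                "0.0.0.0/0": { "blackhole": { "distance": "100" } }
              },
              "interface-route": {
                "0.0.0.0/0": {
                  "next-hop-interface": { "vtun0": { "distance": "2" } }
                }
              }
            }
          }
        }
      },
      "firewall": {
        "modify": {
          "SOURCE_ROUTE": {
            "rule": {
              "10": {
                "description": "Traffic from VLAN 11 to VPN",
                "source": { "address": "192.168.20.0/24" },
                "modify": { "table": "1" },
                "action": "modify"
              }
            }
          }
        }
      }
    }
    ```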