My home server is multi-homed (it has multiple outgoing network interfaces), which is often more trouble than it's worth... This time around I needed to route a specific Docker container's traffic through a non-default outgoing interface (i.e. an OpenVPN interface, to access secured resources). Below I'll show you how I made that happen.
Primer on multi-homed networking
Controlling incoming connections is generally easier than outgoing. Listening sockets can be set up on all interfaces or bound to a specific IP, which ties them to a specific incoming interface.
Outgoing connections, on the other hand, are a routing decision performed by the kernel. So regardless of the incoming connection, data flows out (generally) through the default route.
Policy-based routing customizes these routing decisions so that the route -- that is, the outgoing interface -- can be determined by a set of rules, such as the source IP or packets marked by iptables.
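As a quick illustration of the idea (the addresses, interface name, and table number here are made-up placeholders, not part of this post's setup), source-based policy routing looks something like this:

```shell
# hypothetical example: send traffic sourced from 192.168.1.50
# out a secondary interface instead of the main default route

# give the extra routing table a name (one-time setup)
echo "200 secondary" >> /etc/iproute2/rt_tables

# any packet whose source IP is 192.168.1.50 consults the "secondary" table
ip rule add from 192.168.1.50 lookup secondary

# and that table has its own default route, out eth1
ip route add default via 192.168.2.1 dev eth1 table secondary
```

The main routing table is untouched; only packets matching the rule ever consult the new table.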
A bit about Docker networking
Docker is a marvel of technology but at times feels very user-hostile due to its rigidity - it makes a lot of assumptions and doesn't often communicate them well in documentation.
So, to the point: Docker supports adding multiple network interfaces to containers, great! I can have my specific container continue to join the default Docker network and talk to my other containers, and create a new network specifically for this container that maps to my target outgoing interface on the host.
However, the user-hostility: Docker doesn't let you customize which network is the container's default route. Normally I wouldn't care and we'd just use policy-based routing to fix that, but remember how it's the kernel's routing decision? Well, containers don't have their own kernel. Docker is all NAT magic under the hood, and the actual routing decision is made on the host.
Turns out, you can influence the default route in a container... It's just that Docker uses the first network added to a container as the default, and from testing it appears to also add networks to containers alphabetically. OK then.
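You can see which network won the default-route slot from inside the container itself (this assumes the container's image ships the iproute2 tools; `mycontainer` is the container we'll be working with below):

```shell
# print the container's routing table; the "default via ..." line
# shows which Docker bridge is currently the default route
docker exec mycontainer ip route
```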
Putting it all together
Our recipe will leverage three key components:
1. A custom Docker network named such that Docker adds it to the container first, making it the default route
2. An IP tables rule to mark packets coming out of that Docker network
3. Policy-based routing on the host to route marked packets through the non-default interface
Here we go:
```shell
# create a new Docker-managed, bridged network
# 'avpn' because Docker chooses the default route alphabetically
DOCKER_SUBNET="184.108.40.206/16"
docker network create --subnet=$DOCKER_SUBNET -d bridge \
  -o com.docker.network.bridge.name=docker_vpn avpn

# mark packets from the docker_vpn interface during prerouting,
# to destine them for non-default routing decisions
# 0x25 is arbitrary; any hex (int mask) should work
firewall-cmd --permanent --direct --add-rule ipv4 mangle PREROUTING 0 \
  -i docker_vpn ! -d $DOCKER_SUBNET -j MARK --set-mark 0x25
# alternatively, for regular iptables:
#iptables -t mangle -I PREROUTING -i docker_vpn ! -d $DOCKER_SUBNET -j MARK --set-mark 0x25

# create a new routing table - 100 is arbitrary, any integer 1-252
echo "100 vpn" >> /etc/iproute2/rt_tables

# configure rules for when to route packets using the new table
ip rule add from all fwmark 0x25 lookup vpn

# set up a different default route on the new routing table
# this route can differ from the normal routing table's default route
ip route add default via 10.17.0.1 dev tun0 table vpn

# connect the container to the docker_vpn network
docker network connect docker_vpn mycontainer
```
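To sanity-check the result, you can watch the VPN interface while generating traffic from the container (the curl target is just an example destination, use whatever resource your VPN reaches):

```shell
# in one terminal: watch packets leaving through the VPN interface
tcpdump -n -i tun0

# in another: generate outbound traffic from the container
docker exec mycontainer curl -s https://example.com >/dev/null
```

If the marking and routing rules are working, the container's requests show up on tun0 rather than your default interface.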
That's it! You should now see outgoing traffic from mycontainer going through tun0 via gateway 10.17.0.1. To get this all set up automatically on boot, I recommend looking into docker-compose to attach the docker_vpn network, and creating files under /etc/sysconfig/network-scripts to configure the routing rules.
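A sketch of that network-scripts approach (this assumes the legacy initscripts convention, where each line of rule-&lt;iface&gt; is passed to `ip rule add` and each line of route-&lt;iface&gt; to `ip route add` when the interface comes up):

```shell
# rule-tun0: arguments for "ip rule add", applied when tun0 comes up
cat > /etc/sysconfig/network-scripts/rule-tun0 <<'EOF'
from all fwmark 0x25 lookup vpn
EOF

# route-tun0: arguments for "ip route add"
cat > /etc/sysconfig/network-scripts/route-tun0 <<'EOF'
default via 10.17.0.1 dev tun0 table vpn
EOF
```

The "100 vpn" entry in /etc/iproute2/rt_tables persists on its own, so with these two files the rule and route survive reboots.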
Note that you may need an add-on package for NetworkManager to trigger the network-scripts - on Fedora, it's called