Port Forwarding on LXC with Raspberry Pi, Adguard and Awall

Port forwarding in LXC containers is not as straightforward as it is for Docker containers. This tutorial describes how to forward ports on a Raspberry Pi 4 with Awall.

Pantavisor uses LXC to run containers on embedded computers. By default, all containers share the host’s network namespace and as a result, the LXC configuration for the network will be:

lxc.net.[0].type = none

See the LXC manpage: lxc.container.conf(5).

This means that if you have more than one container using the same network port, container initialization will fail. A way to solve that problem is to create the containers with a virtual ethernet (veth) network type and then forward ports from the host's network to the container IP.
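With a veth setup, the generated LXC network section looks roughly like this (a hedged sketch; the exact values here are illustrative, and the real configuration is generated by the Pantavisor tooling later in this guide):

```
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.3.20/24
lxc.net.0.ipv4.gateway = 10.0.3.1
```

The container gets its own network namespace attached to the host's lxcbr0 bridge, so its ports no longer collide with other containers on the host.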

In Docker, port forwarding is a straightforward configuration when you run the container, but for LXC containers it is not. In this post, we will describe how to forward ports inside Pantavisor. With the Pantavisor tools you will create the LXC configuration and then use Awall to build the iptables rules that manage the forwarding.

Configuring Raspberry Pi as a DNS server for Adguard

In this use case, you will use a Raspberry Pi as a DNS server running the Adguard app, which filters ads directly at the DNS level.

First, let's look at the ports Adguard maps when run on Docker:

docker run --name adguardhome \
    --restart unless-stopped \
    -v /my/own/workdir:/opt/adguardhome/work \
    -v /my/own/confdir:/opt/adguardhome/conf \
    -p 53:53/tcp -p 53:53/udp \
    -p 67:67/udp -p 68:68/udp \
    -p 3000:3000/tcp \
    -p 853:853/tcp \
    -p 784:784/udp -p 853:853/udp -p 8853:8853/udp \
    -p 5443:5443/tcp -p 5443:5443/udp \
    -d adguard/adguardhome

The ports to map are:

  • 53/tcp, 53/udp: Plain DNS.
  • 67/udp, 68/udp: Add if you intend to use AdGuard Home as a DHCP server.
  • 3000/tcp: Add if you want to use AdGuard Home's admin panel and also run AdGuard Home as an HTTPS/DNS-over-HTTPS server.
  • 853/tcp: Add if you plan to run AdGuard Home as a DNS-over-TLS server.
  • 784/udp, 853/udp, 8853/udp: Add if you are going to run AdGuard Home as a DNS-over-QUIC server. You can keep just one or two of these.
  • 5443/tcp, 5443/udp: Add if you are going to run AdGuard Home as a DNSCrypt server.

In the Pantavisor schema, the awconnect container does all of the network management for the default Pantavisor image on a Raspberry Pi. This is what allows you to connect the Raspberry Pi to the internet via wifi, or to share your wired internet through a wifi hotspot.

It is this container that provides DNS and DHCP on the device. In addition, you may have another container already using port 3000.

Before You Begin 

To work through this guide, you will need a Pantavisor-enabled Raspberry Pi 3+ or Raspberry Pi 4. You will also need to install the following on your development computer:

  1. fakeroot
  2. squashfs-tools
  3. Docker
  4. pvr CLI
  5. An account on hub.pantacor.com

You can install fakeroot and squashfs-tools with your OS package manager: apt, yum, apk, Homebrew, etc.

For example, on Ubuntu/Debian-based distros, install these utilities with:

sudo apt install fakeroot squashfs-tools

To install Docker for your particular OS, refer to the Docker installation documentation.

Last but not least, you will also need to install the PVR CLI to manage Pantavisor devices inside hub.pantacor.com. You can find more about how to install PVR in the Pantacor documentation.

Because we will send new revisions and configurations to the device remotely from your development computer, the device needs to connect to the hub.pantacor.com cloud service. You will need to create an account there.

Now that you have everything installed, we can move forward to solve our network isolation and port forwarding problem.

#1. Installing Pantavisor and claiming the device

Claiming a device proves your ownership of it and associates it with your hub.pantacor.com account. After you've installed the Pantavisor image (or if you installed Pantavisor already), you need to claim your device. There are several ways to do this:

1. Using pre-claimed images

The easiest way to claim the device is by downloading an image that automatically claims it on the first boot. Download this image from: https://hub.pantacor.com/download-image

2. Manual claiming

Another way to claim is by following the getting started guide on pantavisor.io and then claiming the device manually with the PVR CLI from your development computer:

pvr device scan


The result of that command should look something like this:

sergiomarin@penguin:~$ pvr device scan
Scanning ...
        ID: 61147e8af0e0f5000a50d11c (unclaimed)
        Host: localhost.local.
        IPv4: [192.168.68.115]
        IPv6: [fe80::d4ac:9722:ab19:7b81]
        Port: 22
        Claim Cmd: pvr claim -c marginally-optimum-midge https://api.pantahub.com:443/devices/61147e8af0e0f5000a50d11c
Pantavisor devices detected in network: 1 (see above for details)

Then take the Claim Cmd value and run it. For example:

pvr claim -c marginally-optimum-midge https://api.pantahub.com:443/devices/61147e8af0e0f5000a50d11c

After this step, you will be able to see your claimed device at hub.pantacor.com on the devices page.

You will see a new device with the status SYNCING. This means your device has been claimed and its initial state is being uploaded to the cloud. When that process reaches DONE, you will be able to clone and modify the device.

#2. Cloning the device URL to your computer

In the device details panel, you will see the Clone URL. Copy the URL and run the following on your development laptop:

pvr clone CLONE_URL

Example: 

pvr clone https://pvr.pantahub.com/highercomve/specially_brief_monkey

That creates a new folder called `specially_brief_monkey` much like the way `git clone` works. Inside the folder, you should see a structure similar to this:

specially_brief_monkey
├── awconnect
│   ├── lxc.container.conf
│   ├── root.squashfs
│   ├── root.squashfs.docker-digest
│   ├── run.json
│   └── src.json
├── bsp
│   ├── addon-plymouth.cpio.xz4
│   ├── build.json
│   ├── firmware.squashfs
│   ├── kernel.img
│   ├── modules.squashfs
│   ├── pantavisor
│   ├── run.json
│   └── src.json
├── _hostconfig
│   └── pvr
│       └── docker.json
├── network-mapping.json
├── pv-avahi
│   ├── lxc.container.conf
│   ├── root.squashfs
│   ├── root.squashfs.docker-digest
│   ├── run.json
│   └── src.json
├── pvr-sdk
│   ├── lxc.container.conf
│   ├── root.squashfs
│   ├── root.squashfs.docker-digest
│   ├── run.json
│   └── src.json
└── storage-mapping.json

This is how Pantavisor defines a device with its running containers:

  • BSP: In embedded systems, a board support package (BSP) is the layer of software containing hardware-specific drivers and other routines that allow a particular operating system to function in a particular hardware environment.
  • awconnect: the Pantavisor base platform container for the device; it holds and manages the device's network configuration.
  • pv-avahi: a container that advertises the device on the network over the Avahi (mDNS) protocol, so that pvr device scan can discover it. You can see the source container here: https://gitlab.com/pantacor/pv-platforms/pv-avahi
  • pvr-sdk: a container with the Pantavisor SDK that lets you maintain the device directly; it is in charge of managing SSH connections to the device and to the containers inside it.

Each of those containers contains a src.json file that describes the source the container was created from, as well as some Pantavisor-specific configuration for running the container.

#3. Adding Adguard to our device

We are going to use the Adguard Docker image as the source for our Pantavisor container (essentially an LXC container). To add a new container from Docker inside the folder created by the pvr clone process, run the following:

pvr app add --from=adguard/adguardhome:latest adguard

That creates a new folder called adguard inside the device definition with the following structure inside:

adguard/
├── lxc.container.conf
├── root.squashfs
├── root.squashfs.docker-digest
├── run.json
└── src.json

The src.json should look something like this:

{
  "#spec": "service-manifest-src@1",
  "template": "builtin-lxc-docker",
  "args": {
   "PV_RUNLEVEL": "app"
  },
  "config": {},
  "docker_name": "adguard/adguardhome",
  "docker_tag": "latest",
  "docker_digest": "adguard/adguardhome@sha256:cd5e6641e969ec8a1df1ed02dc969db49d6cf540055f14346d0d5d42951f75d6",
  "docker_source": "remote,local",
  "docker_platform": "linux/arm",
  "persistence": {}
}

#4. Configuring parameters for LXC features and Adguard persistence

For now, we will discuss only two parts of this file: args and persistence.

  • args: Configuration for the pvr CLI tooling that sets up features for the LXC container.
  • persistence: Configuration for the volumes and persistence of the running container. By default, pvr automatically adds all the volumes defined in the Dockerfile. You can add more volumes here, even if they aren't defined in the Dockerfile.

As discussed, pvr adds all containers to the host network namespace by default. As a result, the first configuration change we need to make is to isolate this container inside the LXC bridge network.

To do that, we need to add a couple of new arguments inside the args configuration:

  • PV_LXC_NETWORK_TYPE: This parameter configures lxc.net.[0].type and other parameters inside the LXC configuration, depending on the value assigned to it.
  • PV_LXC_NETWORK_IPV4_ADDRESS: This is going to be the assigned IP address for the virtual network interface.

The resulting args will be as follows:

"args": {
    "PV_LXC_NETWORK_IPV4_ADDRESS": "10.0.3.20/24",
    "PV_LXC_NETWORK_TYPE": "veth",
    "PV_RUNLEVEL": "app"
}

If you want to know more about which arguments can be used, you can read the source code of our builtin-lxc-docker template, which builds the LXC configuration.

And for persistence, we need to add the /opt/adguardhome/work and /opt/adguardhome/conf folders. Both are used by Adguard to save the service's configuration and working data.

The resulting persistence configuration will be:

"persistence": {
    "/opt/adguardhome/conf/": "permanent",
    "/opt/adguardhome/work/": "permanent"
}

#5. Installing the Adguard App

With the persistence set up and the LXC features enabled, you are ready to install the Adguard application using pvr.

First, check that the finalized src.json looks similar to this:

{
  "#spec": "service-manifest-src@1",
  "args": {
    "PV_LXC_NETWORK_IPV4_ADDRESS": "10.0.3.20/24",
    "PV_LXC_NETWORK_TYPE": "veth",
    "PV_RUNLEVEL": "app"
  },
  "config": {},
  "docker_digest": "adguard/adguardhome@sha256:cd5e6641e969ec8a1df1ed02dc969db49d6cf540055f14346d0d5d42951f75d6",
  "docker_name": "adguard/adguardhome",
  "docker_platform": "linux/arm",
  "docker_source": "remote,local",
  "docker_tag": "latest",
  "persistence": {
    "/opt/adguardhome/conf/": "permanent",
    "/opt/adguardhome/work/": "permanent"
  },
  "template": "builtin-lxc-docker"
}

If your `src.json` looks fine, let’s proceed with the installation using this command:

pvr app install adguard

In order to see how Adguard is installed and isolated, push the changes to the device: 

pvr add && pvr commit && pvr post -m "Install adguard into the device"

After this, you will be able to see a new revision for the device at hub.pantacor.com.

When the device status is DONE or UPDATED, the process is finished. The device should be running Adguard now, but because it is isolated, you won’t be able to use the web UI or even use it as DNS.

#6. Mapping the Adguard ports with Awall

The awconnect platform uses iptables to manage the firewall and routing. Awall is a utility that generates iptables rules and routes from JSON configuration files.

The Awall documentation is a good guide to how Awall works. I will not enter the deepest lands of Awall here; instead, I will show how to use it for this use case.

Configure Awall for Pantavisor

First, let's add a small utility script that runs awall from a Docker container, so you don't have to install it on your computer. Download it with:

wget https://gist.githubusercontent.com/highercomve/295cf75fb660be4c6b054627c330cb4b/raw/d51989dd5c57c36136814033af6d460db3024bef/awall2pvmwall && chmod +x awall2pvmwall

That script runs awall for the `awall.json` file in the root of the device folder, as well as any `_awall` configuration folder inside any application or platform of the device.

We are going to create several JSON files to configure awall. Run the following from the root of the cloned device folder (port_forward_example, in my case); this is where we will add all the new configuration files.

mkdir -p adguard/_awall
touch awall.json
touch adguard/_awall/config.json
touch bsp/_awall.json

The configuration file bsp/_awall.json defines a couple of variables for our device:

{
   "variable": {
       "containernet_if": "lxcbr0",
       "wan_if": "eth0",
       "lan_if": "wlan0"
   }
}

These are the default values that Pantavisor sets for a Raspberry Pi 4. If your Raspberry Pi is connected to the internet via an ethernet cable, wan_if will be eth0 and your wifi interface will serve the local network. In other setups, you may be connected to your internet provider via wifi and use eth0 as the entry point to your local network.
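For example, if your Pi reaches the internet over wifi and shares it with the wired network, you would swap the interface variables (adjust these to your actual setup):

```
{
   "variable": {
       "containernet_if": "lxcbr0",
       "wan_if": "wlan0",
       "lan_if": "eth0"
   }
}
```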

The awall.json configuration file defines the general firewall rules for the device. We are going to add to that file everything that is directly related to the host network namespace.

{
   "description": "How to use awall",
   "filter": [
       {
           "action": "accept",
           "in": "internet",
           "service": [
               "ping",
               "dns",
               "ssh",
               "dhcp",
               "pvssh"
           ]
       },
       {
           "action": "accept",
           "in": "intranet",
           "service": [
               "dns",
               "ping",
               "ssh",
               "dhcp",
               "pvssh"
           ]
       },
       {
           "action": "accept",
           "in": "internet",
           "out": "_fw",
           "service": [
               "ssh",
               "dhcp",
               "pvssh"
           ]
       },
       {
           "action": "accept",
           "out": "internet",
           "service": [
               "http",
               "https"
           ]
       }
   ],
   "import": [
       "bsp"
   ],
   "policy": [
       {
           "action": "accept",
           "out": "internet"
       },
       {
           "action": "drop",
           "in": "internet"
       },
       {
           "action": "accept",
           "out": "intranet"
       },
       {
           "action": "drop",
           "in": "intranet"
       },
       {
           "action": "accept",
           "in": "containernet"
       },
       {
           "action": "accept",
           "out": "containernet"
       },
       {
           "action": "reject"
       }
   ],
   "service": {
       "pvssh": [
           {
               "port": 8222,
               "proto": "tcp"
           }
       ]
   },
   "snat": [
       {
           "out": "internet"
       }
   ],
   "zone": {
       "containernet": {
           "iface": "$containernet_if"
       },
       "internet": {
           "iface": "$wan_if"
       },
       "intranet": {
           "iface": "$lan_if"
       }
   }
}

With this configuration, we open access to some ports of the device:

  • ssh: port 22, to access the pvr-sdk container
  • pvssh: port 8222, to SSH directly into a container
  • dns: port 53
  • ping: ICMP
  • dhcp: ports 67 and 68

#7. Redirecting traffic to the Adguard IP with DNAT

Now let's redirect everything to the Adguard IP address by using a DNAT configuration. We need to populate the `adguard/_awall/config.json` file with this configuration.

{
   "description": "Forward All need it ports to this container",
   "filter": [
       {
           "action": "accept",
           "dnat": "10.0.3.20",
           "in": "internet",
           "service": "adguard-web"
       },
       {
           "action": "accept",
           "dnat": "10.0.3.20",
           "in": "intranet",
           "service": "adguard-web"
       },
       {
           "action": "accept",
           "dnat": "10.0.3.20",
           "in": "internet",
           "service": "adguard-dotls"
       },
       {
           "action": "accept",
           "dnat": "10.0.3.20",
           "in": "intranet",
           "service": "adguard-dotls"
       },
       {
           "action": "accept",
           "dnat": "10.0.3.20",
           "in": "internet",
           "service": "adguard-dnsencrypt"
       },
       {
           "action": "accept",
           "dnat": "10.0.3.20",
           "in": "intranet",
           "service": "adguard-dnsencrypt"
       }
   ],
   "service": {
       "adguard-web": [
           {
               "port": 3000,
               "proto": "tcp"
           },
           {
               "port": 3000,
               "proto": "udp"
           }
       ],
       "adguard-dotls": [
           {
               "port": 53,
               "proto": "tcp"
           },
           {
               "port": 53,
               "proto": "udp"
           },
           {
               "port": 853,
               "proto": "tcp"
           },
           {
               "port": 853,
               "proto": "udp"
           },
           {
               "port": 784,
               "proto": "udp"
           },
           {
               "port": 8853,
               "proto": "udp"
           },
           {
               "port": 5443,
               "proto": "tcp"
            }
       ],
       "adguard-dnsencrypt": [
           {
               "port": 5443,
               "proto": "tcp"
           },
           {
               "port": 8853,
               "proto": "udp"
           }
       ]
   }
}

There, you can see the ports grouped by service. All of those services will be translated from both the WAN and LAN interfaces to the container IP.
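Before generating the rules, it's worth sanity-checking that every JSON file parses, since a malformed file will make awall fail. A minimal sketch (the heredoc file below stands in for your real configs; point the loop at your actual awall.json, bsp/_awall.json, and adguard/_awall/config.json paths):

```shell
# Write a sample awall fragment, then validate it with Python's JSON parser.
# Replace /tmp/awall-check.json with your real configuration file paths.
cat > /tmp/awall-check.json <<'EOF'
{ "variable": { "wan_if": "eth0", "lan_if": "wlan0" } }
EOF

for f in /tmp/awall-check.json; do
  python3 -m json.tool "$f" > /dev/null && echo "$f: valid JSON"
done
```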

#8. Setting up the port forwarding and viewing the Adguard dashboard

Now that we have all the JSON configuration ready, we can run the awall2pvmwall script that will create a folder with this structure:

_config/
└── awconnect
    └── etc
        └── iptables
            ├── dump
            ├── rules.v4
            └── rules.v6
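The rules.v4 file contains the generated iptables rules in iptables-restore format. The DNAT entries are conceptually equivalent to lines like these (an abridged, illustrative sketch; your generated file will differ):

```
*nat
# translate DNS and the admin panel arriving on the WAN interface
# to the adguard container's IP on the LXC bridge
-A PREROUTING -i eth0 -p udp --dport 53 -j DNAT --to-destination 10.0.3.20
-A PREROUTING -i eth0 -p tcp --dport 53 -j DNAT --to-destination 10.0.3.20
-A PREROUTING -i eth0 -p tcp --dport 3000 -j DNAT --to-destination 10.0.3.20
COMMIT
```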

With this we can now post this new revision to our device:

pvr add .
pvr commit
pvr post -m "Update awconnect with port forwarding to adguard"

NOTE: During the AdGuard setup wizard, set the admin web interface to port 3000 instead of the default port 80, since 3000 is the port we forwarded.

After clicking through the rest of the setup wizard, you will be able to enter the Adguard dashboard and use your Raspberry Pi as the DNS server for your entire network: blocking ads, using an upstream DNS via DoT, and a lot more.

Here is a list of known DNS providers to choose from.

You can see the device I built for this guide on hub.pantacor.com, or clone it with:

pvr clone https://pvr.pantahub.com/highercomve/port_fordward_example

Now go and play with Pantavisor and containers and may the force be with you.
