Introduction

Earlier this year I looked at switching from Bitwarden (the hosted service) to self hosting Vaultwarden. The post I wrote was fairly high level and focused on different container options, describing a generic deployment without going into much detail.

Configuration and security considerations were woefully neglected in that post. It also didn’t go into detail about how I’d integrate it into the VPS I use as a VPN, DNS, and now password vault server.

Also, since then I’ve learned a bit more about Docker, and I’ve started using Caddy instead of NGINX.

Having learned more, I’ve completely changed my Vaultwarden deployment. It is now more secure and I think easier to manage. I’d like to go into a lot more detail with this setup because I think existing deployment posts, my previous one included, don’t go far enough when deploying something as critical as a password vault.

High Level Design

Vaultwarden and Caddy are both running in Docker as containers, with Vaultwarden on an isolated network. Both are controlled as one unit via a single docker compose file.

Vaultwarden’s network has no access to the internet, nor can it expose any ports to the local system. Caddy is used as the reverse proxy to allow access to Vaultwarden without needing to have clients connect directly.

Caddy is able to bridge the local system and Vaultwarden because it’s part of two Docker networks: the isolated network Vaultwarden is running on and an external network that can connect to the local system. Thus Caddy can expose itself to external connections and proxy requests to Vaultwarden.

I have a backup script for Vaultwarden utilizing restic. It does encrypted backups to a Backblaze B2 bucket and runs twice a day via a systemd timer.

Certbot is used to generate TLS certificates instead of letting Caddy handle them. This may seem odd, but it has to do with Caddy only being exposed to VPN clients; I’ll explain later.

A quick note: when you see <WG0_IPv4> and <WG0_IPv6>, these are the IP addresses you’ve configured as the host ‘Address’ for WireGuard. The same applies to <vw.DOMAIN>.<TLD>, where DOMAIN is your (sub)domain and TLD is its extension.

Docker

I decided to use Docker because the compose file makes it very easy to work with multiple related services. Also, Docker has a lot of information available and is much more extensively documented than Podman. It was just easier to get this working with Docker.

One of the things I didn’t like about my last guide was that Vaultwarden was not isolated. It could access the internet and was listening on the loopback interface, so anything on the server could interact with Vaultwarden. While this is unlikely to be a problem, I felt I could have better security.

There are a few reasons I’m using containers instead of installing Vaultwarden and Caddy directly on the server.

  1. There isn’t a Vaultwarden package for Ubuntu.
  2. There isn’t a Caddy package for Ubuntu 22.04 LTS (which I’m using).
  3. Having both Caddy and Vaultwarden as containers, within Docker, allows me to isolate Vaultwarden and only allow Caddy to connect to it.

Compose file

I’m using /etc/docker/compose/ as the root location for any compose-managed Docker services. This gives me easy organization and lets me reuse the same systemd service file (more on that later). Each compose project needs to be in its own sub directory with its docker-compose.yaml file.

I have a single compose file in a vaultwarden sub directory which lists both Vaultwarden and Caddy since they are related and Caddy is dependent on Vaultwarden.

/etc/docker/compose/vaultwarden/docker-compose.yaml

version: '3'

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    env_file:
      - vaultwarden-variables.env
    volumes:
      - /srv/vaultwarden:/data
    networks:
      - internal

  caddy:
    image: caddy:latest
    container_name: caddy
    restart: always
    ports:
      - <WG0_IPv4>:80:80
      - <WG0_IPv4>:443:443
      - <WG0_IPv4>:443:443/udp
      - <WG0_IPv6>::1:80:80
      - <WG0_IPv6>::1:443:443
      - <WG0_IPv6>::1:443:443/udp
    volumes:
      - /srv/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - /srv/caddy/tls:/tls:ro
      - /srv/caddy/data:/data
      - /srv/caddy/config:/config
      - /srv/caddy/logs:/logs
    env_file:
      - caddy-variables.env
    networks:
      - internal
      - external

networks:
  internal:
    driver: bridge
    internal: true
  external:
    driver: bridge

Environment Variables

Both Vaultwarden and Caddy have an env_file listed. Unlike environment, this lets me keep the environment variables out of the compose file, and I like the separation it gives between the Docker and application configuration. One caveat of using env_file is that variables within the file cannot be referenced from within the compose file.

I didn’t want to use the .env file because I want to keep each application’s environment variables separate. Caddy does not need access to Vaultwarden’s environment configuration and Vaultwarden does not need access to Caddy’s. The .env file loads all variables into both containers’ environments.

Networks

Vaultwarden is connected to the internal network, which is configured with internal: true. This isolates the container from the local machine: you cannot forward ports to the local machine, nor can applications within the container access the internet. That said, it’s the network that is isolated, not the container, so any containers attached to that Docker network can still connect to each other.

In this case I have a second external network which can forward ports to the local machine and connect to the internet.

Caddy is configured to connect to both the external and internal networks defined in the compose file. This allows Caddy to connect to Vaultwarden and act as a reverse proxy, and also to forward ports to the local machine so I can access Vaultwarden.

The network configuration in the compose file creates these two networks within Docker.

$ sudo docker network ls
NETWORK ID     NAME                   DRIVER    SCOPE
...
be03418f73f1   vaultwarden_external   bridge    local
b64a056a8ab5   vaultwarden_internal   bridge    local

These are ephemeral networks that are destroyed when the service stops.
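
If you want to confirm the isolation, docker network inspect shows the Internal flag on each network (the names below come from the docker network ls output above):

$ sudo docker network inspect vaultwarden_internal --format '{{.Internal}}'
true
$ sudo docker network inspect vaultwarden_external --format '{{.Internal}}'
false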

Port Forwarding

Caddy is set to listen on the <WG0_IPv4> and <WG0_IPv6> IP addresses. It does not listen on ‘0.0.0.0’. I specifically want to restrict access to Vaultwarden to clients connected to the VPN, hence having Caddy only listen on the VPN interface.
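
As a sanity check, you can list the listening sockets on the host; a quick look assuming the standard ss utility is available:

$ sudo ss -tlnp | grep -E ':(80|443)\b'

The only listeners on ports 80 and 443 should be on <WG0_IPv4> and <WG0_IPv6>::1, never 0.0.0.0 or [::].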

Volumes

For persistent storage I’m using /srv/vaultwarden and /srv/caddy for each service respectively. We need to ensure things like the Vaultwarden database, which stores all of my passwords, are not lost when the container is updated.

Env Files

Vaultwarden

/etc/docker/compose/vaultwarden/vaultwarden-variables.env

ROCKET_PORT=8001
ADMIN_TOKEN='$argon2id$...'

The ADMIN_TOKEN is used to secure the admin page. It’s recommended to generate and store a password hash instead of using a clear password for this setting. If you do not have the ADMIN_TOKEN variable (or it’s commented out) the admin interface is disabled. I remove it once I’ve finished configuring Vaultwarden with the admin interface.

The Vaultwarden container includes the utility needed to generate the password hash. Use either docker exec -it vaultwarden /vaultwarden hash on a running container or docker run --rm -it vaultwarden/server /vaultwarden hash if the container is not running. Then put the output in the env file.
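
For reference, generating and storing the token looks roughly like this (the value shown is a placeholder, not a real hash):

$ sudo docker run --rm -it vaultwarden/server /vaultwarden hash
# answer the password prompts, then copy the printed PHC string into
# /etc/docker/compose/vaultwarden/vaultwarden-variables.env:
ADMIN_TOKEN='$argon2id$...'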

Caddy

/etc/docker/compose/vaultwarden/caddy-variables.env

DOMAIN=<vw.DOMAIN>.<TLD>
LOG_FILE=/logs/caddy.log
LOG_FILE_DOMAIN=/logs/<vw.DOMAIN>.<TLD>.log

Any environment variable can be referenced from within the Caddyfile. However, none of these settings are required because you could put the values directly in the Caddyfile. I just find it convenient to define them here.

Service

To make all this work we need a systemd service that runs the compose file. We can make a generic service file that can run any docker compose project. The key is the %i in the file.

The %i will be replaced with whatever comes after the @ when referencing the service, e.g. sudo systemctl start docker-compose@vaultwarden or sudo systemctl enable docker-compose@vaultwarden.

/etc/systemd/system/docker-compose@.service

[Unit]
Description=%i service with docker compose
PartOf=docker.service
After=docker.service
After=wg-quick@wg0.service
Requires=wg-quick@wg0.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/etc/docker/compose/%i
ExecStart=docker-compose up -d --remove-orphans
ExecStop=docker-compose down

[Install]
WantedBy=multi-user.target

In the [Unit] section, Requires and After are specified to tell the service to start after WireGuard brings up the wg0 interface, which is the interface for the VPN connection. The Requires element tells systemd to start the WireGuard VPN connection if it isn’t already running.

This is very important because Caddy’s exposed ports bind to the wg0 interface. If that interface is not up, and thus the bind IP addresses are not available, Docker will be unable to start the vaultwarden service because it will be unable to bind the ports.
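
With the unit file in place, the stack is enabled the same way as any other compose directory:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now docker-compose@vaultwarden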

Future Considerations

Right now Vaultwarden is the only service being fronted by Caddy, and to keep things simple they’re tied together in one compose file. If, in the future, Caddy needs to front multiple services, a few changes need to take place.

First, Caddy would need to be pulled out of the Vaultwarden compose file and put into its own compose file in its own sub directory, e.g. /etc/docker/compose/caddy/. This should be very easy because we already have separation of environment variables by using different env_files.

Second, we’d need to stop using the combined systemd docker-compose@ based service model and give Caddy its own service file. Vaultwarden would only need to depend on Docker, allowing the generic docker-compose@ service to drop the wg0 dependency. Caddy would need to depend on Docker, Vaultwarden, wg0, and any other services it’s fronting. Other services would have their own dependencies, but if they don’t, they can use the generic docker-compose@ service.

Third, the networks directive in the compose file dynamically creates the networks and attaches the containers. This works well when everything is within the same compose file but won’t work if the services are separated. We’d need to create persistent Docker networks for the containers to attach to using docker network create, as sketched below.
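
A rough sketch of what that could look like, with illustrative network names: create the networks once, then mark them as external in each compose file so Docker attaches the containers to the existing networks instead of creating new ones.

$ sudo docker network create --internal vaultwarden_internal
$ sudo docker network create caddy_external

networks:
  internal:
    external: true
    name: vaultwarden_internal
  external:
    external: true
    name: caddy_external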

Vaultwarden

Before Vaultwarden can be used it needs to be configured. Set the ADMIN_TOKEN environment variable to enable the admin interface, then use the interface at https://<vw.DOMAIN>.<TLD>/admin to fully configure Vaultwarden.

For the initial set up I enable sign ups and disable sign up verification. Sign ups are enabled so I can create my user. Verification is turned off because Vaultwarden has no internet access, so it can’t send verification emails. Once I’ve created my user I disable sign ups.

When you save the configuration in the admin interface it will create a JSON file with the configuration. Here is my configuration file in full, but you really should configure it yourself via the admin interface instead of starting with the JSON file.

/srv/vaultwarden/config.json

{
  "domain": "https://<vw.DOMAIN>.<TLD>",
  "sends_allowed": false,
  "trash_auto_delete_days": 30,
  "incomplete_2fa_time_limit": 3,
  "disable_icon_download": true,
  "signups_allowed": false,
  "signups_verify": true,
  "signups_verify_resend_time": 3600,
  "signups_verify_resend_limit": 6,
  "invitations_allowed": false,
  "emergency_access_allowed": true,
  "password_iterations": 600000,
  "password_hints_allowed": false,
  "show_password_hint": false,
  "invitation_org_name": "Vaultwarden",
  "ip_header": "X-Forwarded-For",
  "icon_service": "duckduckgo",
  "icon_redirect_code": 302,
  "icon_cache_ttl": 0,
  "icon_cache_negttl": 259200,
  "icon_download_timeout": 1,
  "icon_blacklist_non_global_ips": true,
  "disable_2fa_remember": false,
  "authenticator_disable_time_drift": false,
  "require_device_email": false,
  "reload_templates": false,
  "log_level": "warn",
  "log_timestamp_format": "%Y-%m-%d %H:%M:%S.%3f",
  "log_file": "/data/log",
  "admin_session_lifetime": 20,
  "_enable_yubico": false,
  "_enable_duo": false,
  "_enable_smtp": false,
  "use_sendmail": false,
  "smtp_security": "starttls",
  "smtp_port": 587,
  "smtp_from_name": "Vaultwarden",
  "smtp_timeout": 15,
  "smtp_embed_images": true,
  "smtp_accept_invalid_certs": false,
  "smtp_accept_invalid_hostnames": false,
  "_enable_email_2fa": false,
  "email_token_size": 6,
  "email_expiration_time": 600,
  "email_attempts_limit": 3
}

The domain variable is very important because we need to tell Vaultwarden where it’s being served from. Without this some things won’t work. You should either set it here or as the DOMAIN environment variable. I recommend using the admin interface to save the domain to the JSON file instead of using the environment variable.

Pay attention to the ip_header option. The default HTTP header Vaultwarden looks for when behind a reverse proxy is X-Real-IP. However, Caddy does not set this; instead, change it to X-Forwarded-For, which Caddy does set automatically. This is important to ensure Vaultwarden knows who originated the request. Otherwise Vaultwarden will only see Caddy as the requester.

You’ll also notice I disable a few things, like email and Duo security. Vaultwarden has no access to the internet so it can’t use them.

Icon Cache

The icon cache needs special attention. When you look at the entries in your vault via an app or the web interface you’ll see website icons. The icons are downloaded from the server (Vaultwarden) via the app sending a request to https://<vw.DOMAIN>.<TLD>/icons with the domain being referenced as the path for the endpoint. The server then returns the website icon.

However, the server does this by downloading the favicon from the domain that’s being referenced. The icon is then cached in the data/icon_cache directory. If the icon couldn’t be downloaded it places an empty file with .miss as the extension. All of the files in the cache include the domain as part of the filename.

Problems

This poses two problems. First, Vaultwarden doesn’t have access to the internet so it can’t download the icons.

Second, and potentially more serious, it leaks every domain in the vault.

Inadvertent Domain Disclosure

If there are a large number of users this isn’t a problem. In this situation, you’d expect nearly every domain name possible to be in the cache and there is no way to tie a domain to specific users.

However, if you have a small number of users, or if you’re a single user like me, this is a major problem. It exposes all the domains within your vault. I highly recommend turning off the icon cache; you’ll see I have disable_icon_download set to true.

That said, you do not have to forego website icons for security. You can set icon_service to one of several external providers. The client will request the icon from the /icons endpoint but instead of Vaultwarden downloading the icon and returning it, Vaultwarden will instruct the client to download the icon from the external service.

In my case I have duckduckgo configured, which is big enough and privacy focused that there shouldn’t be an inadvertent disclosure of all the domains in my vault.

Logging

You might have noticed I have logging set to warn and not the default info. Info will log access requests which will cause domain disclosure. E.g.

[2023-09-28 11:21:58.540][request][INFO] GET /icons/google.com/icon.png
[2023-09-28 11:21:58.540][response][INFO] (icon_external) GET /icons/<domain>/icon.png => 302 Found

I included the response log line as well because it looks like the response is masking the domain while the request isn’t. That isn’t what’s happening: the request line logs the URI while the response line logs the route definition, where <domain> is a parameter in the route, defined in the Vaultwarden code as #[get("/<domain>/icon.png")].

Logging at warn level isn’t a loss because I’m using Caddy for access logging; having Vaultwarden log access as well would be redundant.

There is still the problem of Caddy logging the leaked vault domains in its access log. However, there is a partial work around. In the Caddyfile I use the skip_log directive to prevent logging of requests to the /icons endpoint. This prevents disclosure through the Caddy logs, but isn’t ideal because it eliminates the log entry entirely. Sadly, there is no way to have Caddy mask information in log entries.

If Vaultwarden were to start masking the vault domain in the /icons request log I’d probably switch to using Vaultwarden for access logs instead of Caddy.

Caddy

Configuring Caddy is pretty straightforward.

/srv/caddy/Caddyfile

{
	admin off
	log {
		output file {$LOG_FILE} {
			roll_size 10MB
			roll_keep 10
		}
	}
}

{$DOMAIN} {
	tls /tls/{$DOMAIN}/fullchain.pem /tls/{$DOMAIN}/privkey.pem
	encode gzip

	reverse_proxy vaultwarden:8001

	log {
		output file {$LOG_FILE_DOMAIN} {
			roll_size 10MB
			roll_keep 10
		}
	}
	skip_log /icons/*
}

In the first block I turn off the admin interface and set up logging.

The second block defines the reverse proxy and logging for Vaultwarden.

A key part of the second block is setting the TLS certificate that will be used. Since I’m restricting Caddy to only be available on the WireGuard interface, it is unable to obtain TLS certificates automatically. Instead I need to manage them externally. I keep the certificates in /srv/caddy/tls/<vw.DOMAIN>.<TLD>/ on the server and use a Docker volume to make them available to Caddy at /tls in the container.

Certbot

I’m using Certbot to generate real TLS certificates so I don’t have to deal with installing a self created CA on every one of my devices in order for the certificate to be trusted by browsers and apps.

First I allowed traffic in on port 80 on my internet connected interface (eth0). Then I created the Certbot entry using sudo certbot certonly --standalone -d <vw.DOMAIN>.<TLD> --http-01-address <PUBLIC_IPv6>. The --http-01-address option is very important because it tells Certbot to bind to the eth0 interface. Otherwise, Certbot will attempt to bind to 0.0.0.0 and fail because Caddy (via Docker) is already bound to port 80 on wg0. You can use your public IPv4 address but I chose to use my IPv6 one.
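
Putting that together, the initial issuance was roughly this sequence, closing the port again once the certificate was issued:

$ sudo ufw allow in on eth0 to any port 80
$ sudo certbot certonly --standalone -d <vw.DOMAIN>.<TLD> --http-01-address <PUBLIC_IPv6>
$ sudo ufw delete allow in on eth0 to any port 80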

I’m using the http-01 challenge with Certbot’s standalone mode. When Certbot attempts a renewal it starts a small web server to serve the challenge. Once complete it shuts down its web server and exits. This is very convenient and makes renewals much easier to automate.

Renew Hooks

Certbot looks at the pre and post directories within /etc/letsencrypt/renewal-hooks/. Anything in those sub directories is run before and after the renewal process. There is also a deploy directory which runs only after a successful renewal. The scripts in the renewal-hooks sub directories run for every renewal. The pre and post ones are useful here, but I’m not going to use deploy; I’ll do something slightly different instead.

Firewall

I don’t want to leave port 80 open to the world at all times, even if there isn’t anything listening. To facilitate this I’m using the pre and post renewal hooks. I have a pre script that opens the firewall on port 80 for eth0 and a post script that closes it again.

/etc/letsencrypt/renewal-hooks/pre/firewall_open

#!/bin/sh
ufw allow in on eth0 to any port 80

/etc/letsencrypt/renewal-hooks/post/firewall_close

#!/bin/sh
ufw delete allow in on eth0 to any port 80

Restarting Caddy

If the TLS certificate is updated, Caddy needs to be restarted to load the new certificate. I created a small script that first copies the certificates to Caddy’s TLS volume, then restarts Caddy. Both Vaultwarden and Caddy are restarted because they’re managed by the same compose file. I don’t see this as a problem because Vaultwarden can’t be used without Caddy, so it doesn’t matter if it restarts too.

Normally you’d use caddy reload, which you can have Docker run within the currently running container, but we can’t use it here. The admin interface provides the API for controlling Caddy, and the reload command uses that API to instruct Caddy to reload its configuration. Since the admin interface is turned off, that API isn’t available, so our only option is a full restart.

/usr/local/bin/vaultwarden-tls-restart

#!/bin/sh
# Copy the renewed certificates into Caddy's TLS volume, then restart the stack
cp /etc/letsencrypt/live/<vw.DOMAIN>.<TLD>/privkey.pem /etc/letsencrypt/live/<vw.DOMAIN>.<TLD>/fullchain.pem /srv/caddy/tls/<vw.DOMAIN>.<TLD>/
systemctl restart docker-compose@vaultwarden

Copying the certificate is important; I can’t just mount /etc/letsencrypt/live/ to Caddy’s /tls location. The *.pem files in /etc/letsencrypt/live/<vw.DOMAIN>.<TLD>/ are all soft links to the files in /etc/letsencrypt/archive. Mounting the live directory won’t work because the links won’t point to anything within the Caddy container. You could mount the entire /etc/letsencrypt directory and the links would work, but only because they’re relative.

I don’t want to mount the entire directory because if the links change to absolute paths in the future it would break. Also, I don’t want to expose a lot of additional information to Caddy that it doesn’t need. Instead I just copy the files.

Having Certbot Run the Deploy Script

At this point I have the script to copy the certificate and restart Caddy, but it’s not part of /etc/letsencrypt/renewal-hooks, and I said I’m not going to use the deploy sub directory because I don’t want this script run every time any certificate gets updated, just in case I have Certbot generate certificates for multiple domains in the future.

Instead I added the renew_hook parameter to Certbot’s configuration file for <vw.DOMAIN>.<TLD>. This is how Certbot knows it should run the vaultwarden-tls-restart script after a successful renewal.

/etc/letsencrypt/renewal/<vw.DOMAIN>.<TLD>.conf

# renew_before_expiry = 30 days
version = 1.21.0
archive_dir = /etc/letsencrypt/archive/<vw.DOMAIN>.<TLD>
cert = /etc/letsencrypt/live/<vw.DOMAIN>.<TLD>/cert.pem
privkey = /etc/letsencrypt/live/<vw.DOMAIN>.<TLD>/privkey.pem
chain = /etc/letsencrypt/live/<vw.DOMAIN>.<TLD>/chain.pem
fullchain = /etc/letsencrypt/live/<vw.DOMAIN>.<TLD>/fullchain.pem

# Options used in the renewal process
[renewalparams]
account = ...
authenticator = standalone
server = https://acme-v02.api.letsencrypt.org/directory
renew_hook = /usr/local/bin/vaultwarden-tls-restart
http01_address = <PUBLIC_IPv6>

Instead of editing the .conf file you could have specified the script as part of the certbot certonly ... command with --deploy-hook /usr/local/bin/vaultwarden-tls-restart.
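
In other words, something along these lines would have produced the same renew_hook entry:

$ sudo certbot certonly --standalone -d <vw.DOMAIN>.<TLD> \
    --http-01-address <PUBLIC_IPv6> \
    --deploy-hook /usr/local/bin/vaultwarden-tls-restart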

Backup

I went into detail about setting up an automated backup to a Backblaze B2 bucket previously, and most of this section is a repeat from that post with a few tweaks to the script. It still backs up to a Backblaze B2 bucket and still uses sqlite’s .backup command to ensure database consistency when backing up Vaultwarden’s SQLite database.

The major differences are some file locations. Also, restic on Ubuntu 22.04 is an older version that doesn’t support initialization with the --repository-version option.

Script

The backup is split into three parts: the env file that holds all of the sensitive configuration data, an exclude file, and the script itself.

Env

/usr/local/etc/vw-backup/env.conf

LOCALLOCATION="/srv/vaultwarden"
REMOTELOCATION="b2:<BUCKET_ID>"
B2KEYID="<KEYID>"
B2APPKEY="<KEY>"
PASS="<RESTIC_PASSWORD>"
KEEP_LAST=180
EXCLUDEFILE="/usr/local/etc/vw-backup/excludes"

This file needs to be readable only by root. It’s imperative the access keys are kept secret.

Exclude File

/usr/local/etc/vw-backup/excludes

db.sqlite3
db.sqlite3-shm
db.sqlite3-wal
tmp/

We’re excluding the running db. The backup db we create in the script will be backed up instead. It’s important we use sqlite to create a proper backup, otherwise we could end up with the backup database in an inconsistent state if writes were happening while it was being uploaded.

Script

/usr/local/bin/vaultwarden-backup

#!/bin/bash

# Backup location and access info
source /usr/local/etc/vw-backup/env.conf

# Export data restic will read from the env
export B2_ACCOUNT_ID=$B2KEYID
export B2_ACCOUNT_KEY=$B2APPKEY
export RESTIC_PASSWORD=$PASS

# Create a repo. Only needs to be done once.
#restic init -r "$REMOTELOCATION" -v

# Create SQLite DB backup to ensure we don't make a backup of the main db when it's in the middle of a write
echo "Backing up SQLite DB..."
sqlite3 /srv/vaultwarden/db.sqlite3 ".backup '/srv/vaultwarden/db-backup.sqlite3'"
echo "Finished backing up..."

# Backup all data
echo "Starting Backup to cloud..."
restic -r "$REMOTELOCATION" -v backup "$LOCALLOCATION" --exclude-file="$EXCLUDEFILE"
echo "Finished Backup to cloud..."

# Keep only the last X snapshots. Prune gets rid of any old files not referenced by any snapshot.
echo "Starting removal of old snapshots ..."
restic -r "$REMOTELOCATION" -v forget --keep-last "$KEEP_LAST" --prune
echo "Finished removal of old snapshots ..."

unset B2APPKEY
unset B2KEYID
unset RESTIC_PASSWORD

The first time we run this script we need it to create the repository. Uncomment the restic init... line and run the script. After that first run, comment that line out again. Initialization only needs to happen one time.
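
Restoring works the same way in reverse. Here’s a minimal sketch reusing the same env file; the restore target path is just an example:

#!/bin/bash

# Load the same B2 and restic credentials used by the backup script
source /usr/local/etc/vw-backup/env.conf
export B2_ACCOUNT_ID=$B2KEYID
export B2_ACCOUNT_KEY=$B2APPKEY
export RESTIC_PASSWORD=$PASS

# List the available snapshots, then restore the latest one to a scratch directory
restic -r "$REMOTELOCATION" snapshots
restic -r "$REMOTELOCATION" restore latest --target /srv/vaultwarden-restore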

Timer

Now that we have our backup script we need to actually have it run. We’ll use a systemd timer to accomplish this.

Start by creating the service file which defines what should be run.

/etc/systemd/system/vaultwarden-backup.service

[Unit]
Description=Vaultwarden backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/vaultwarden-backup

And the timer which will run twice a day.

/etc/systemd/system/vaultwarden-backup.timer

[Unit]
Description=Twice daily backup of Vaultwarden data

[Timer]
OnCalendar=04:00:00
OnCalendar=16:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
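
Enable the timer (not the service) so systemd schedules the runs:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now vaultwarden-backup.timer
$ systemctl list-timers vaultwarden-backup.timer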

WireGuard

I never really went into the WireGuard configuration before, so I’m going to include it in its entirety. Well, not quite entirely; it’s sanitized.

In the conf, PostUp and PreDown are used to set up firewall rules that open ports for the various services available on the interface. There are also post routing rules which masquerade traffic out eth0, allowing traffic from wg0 to traverse out to the internet. This is used by the VPN clients when they’re routing all of their traffic through the VPN connection.

/etc/wireguard/wg0.conf

[Interface]
Address = <WG0_IPv4>/24
Address = <WG0_IPv6>::1/64
SaveConfig = true
PostUp = ufw route allow in on wg0 out on eth0
PostUp = ufw allow in on wg0 to <WG0_IPv4> port 53
PostUp = ufw allow in on wg0 to <WG0_IPv4> port 80
PostUp = ufw allow in on wg0 to <WG0_IPv4> port 443
PostUp = ufw allow in on wg0 to <WG0_IPv6>::1 port 53
PostUp = ufw allow in on wg0 to <WG0_IPv6>::1 port 80
PostUp = ufw allow in on wg0 to <WG0_IPv6>::1 port 443
PostUp = iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PostUp = ip6tables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
PreDown = ufw route delete allow in on wg0 out on eth0
PreDown = ufw delete allow in on wg0 to <WG0_IPv4> port 53
PreDown = ufw delete allow in on wg0 to <WG0_IPv4> port 80
PreDown = ufw delete allow in on wg0 to <WG0_IPv4> port 443
PreDown = ufw delete allow in on wg0 to <WG0_IPv6>::1 port 53
PreDown = ufw delete allow in on wg0 to <WG0_IPv6>::1 port 80
PreDown = ufw delete allow in on wg0 to <WG0_IPv6>::1 port 443
PreDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
PreDown = ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = <PRIV_KEY>

[Peer]
...

DNS

Finally, with everything set up we need proper DNS entries. The public DNS for <vw.DOMAIN>.<TLD> needs to resolve to the server; otherwise, Certbot won’t be able to generate TLS certificates.

However, Caddy is serving on the wg0 interface using the private <WG0_IPv4> and <WG0_IPv6> IP addresses. Anything connected to the VPN needs <vw.DOMAIN>.<TLD> to resolve to these and not the public IP address.

Thankfully, I’m using Unbound for private ad blocking DNS, so a configuration file can be put in the conf.d directory and Unbound will resolve the domain to the internal IP addresses. Since only clients on the VPN can and will use this DNS, we can easily override IP resolution for <vw.DOMAIN>.<TLD>.

/etc/unbound/unbound.conf.d/<vw.DOMAIN>.conf

server:
	local-zone: "<vw.DOMAIN>.<TLD>." static
	local-data: "<vw.DOMAIN>.<TLD>. IN A <WG0_IPv4>"
	local-data: "<vw.DOMAIN>.<TLD>. IN AAAA <WG0_IPv6>::1"

Now Let’s Encrypt can resolve the public address for TLS generation and clients on the VPN will resolve to the internal addresses that Caddy is bound to. This is much easier than managing hosts file entries, especially on iOS where that’s impossible.
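
From a client connected to the VPN you can confirm the override is in effect; a quick check with dig, assuming the client is using the Unbound server for DNS:

$ dig +short <vw.DOMAIN>.<TLD> A @<WG0_IPv4>
<WG0_IPv4>
$ dig +short <vw.DOMAIN>.<TLD> AAAA @<WG0_IPv4>
<WG0_IPv6>::1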

Conclusion

I’m very happy with the detail of this post in contrast to my last Vaultwarden post. Last time I stayed high level and focused mainly on containerization options. This time I dove much deeper into a real self hosted setup and the configuration of each part. I’m also happy with how much cleaner the solution came out using Caddy.

I’m also extremely pleased with the configuration and setup from a security standpoint. There was no loss in functionality and I feel much better about self hosting my password vault. The container isolation, log handling, and HTTP server configuration are much better than before. I feel like I’ve properly deployed Vaultwarden this time.

This configuration might be overkill for a private server that only exposes services via a VPN. However, the point of moving from Bitwarden’s service to self hosting is to minimize the possibility of being part of a data breach, which makes it worth making my self hosted deployment as secure as possible.

Overall, I feel much better with my decision to self host and feel like I’ve finally gotten it right.