Introduction

When Lastpass first came on the scene I jumped on it because of how easy it made syncing passwords between devices. Previously, I was using a local password manager that only lived on my computer. Thankfully, mobile logins weren’t nearly as necessary for daily life back then. Still, I needed my computer to log into anything on my phone.

Over the years, Lastpass started having security incidents. This isn’t surprising with how big it became. However, at a certain point I switched to bitwarden, because after LogMeIn, Inc. (now GoTo) purchased Lastpass, the incidents started accelerating.

I’ve been happy with bitwarden for the past few years and haven’t run into any problems. Meanwhile, Lastpass had yet another security incident in 2022 and it was really bad: customer password vaults were exfiltrated from their systems. Vaults are encrypted with the user’s master password, but this is still scary. It caused me to seriously consider the vulnerability footprint of using a password manager service. Even though bitwarden wasn’t compromised, they’re just as big a target as Lastpass.

bitwarden is open source and allows you to run your own instance. I started thinking, maybe I should. While I won’t claim to have better security capabilities than a large company with a dedicated security team, my footprint is much smaller. Attackers know where bitwarden is and what they have; they’re a high value target due to the vast amount of critical data they house. It should be possible for me to run my own instance that is properly secured and has a smaller attack surface than a public service.

Choosing Vaultwarden

So far, I’ve been talking about Lastpass, bitwarden, and self hosting bitwarden, but this post’s title says Vaultwarden… I ultimately decided to use Vaultwarden as my self hosted password vault instead of bitwarden.

bitwarden’s Self Hosting Problems

I started looking at bitwarden’s self hosting and there are a few things I didn’t like.

bitwarden is heavy. Ridiculously heavy. The system resources it requires are far higher than I expected for what it does: a minimum of 2 GB of memory and 12 GB of disk space, with 4 GB of memory and 25 GB of disk space recommended. Even the minimum exceeds the memory I have available on each of my VPSes. For a web based password vault these requirements are a non-starter.

The system requirements are so high because it’s distributed as 11 Docker containers. Also, it only supports Microsoft SQL Server. Docker is fine, but 11 containers to run the system is not something designed for a single user situation.

Not to mention that using a proprietary SQL server isn’t very open source… While it should be possible to use a different SQL server, this is how the official distribution image is set up, and I don’t want to start managing custom builds and packages.

That said, there is now a new unified deployment method in beta. It uses 1 container and is a much simpler design. Plus, it’s not tied to Microsoft SQL Server; it connects to an external SQL server you set up, such as MariaDB. This is great, but it’s in beta, and the system resource requirements are still a problem for me due to bitwarden being a .NET application.

The final issue I have with bitwarden self hosting is that it requires you to get an installation id and key from bitwarden. I don’t mind that they lock some features behind a license. I completely understand having a license file to unlock those features. However, requiring an account with bitwarden and an installation id just to use the package without any paid features is unnecessary.

An open source application that is tied to a proprietary db (for now), and you have to get an installation key for it to run. No.

Vaultwarden

Vaultwarden is a Rust implementation of the bitwarden API. It does some, but not all, of what bitwarden does, and the missing features aren’t things that I need or use. Like bitwarden, it’s also officially distributed as a container. However, it’s only 1 container. Vaultwarden is just the server piece and web UI. Since it is an implementation of the bitwarden API, it works with the official bitwarden GUI tools.

The drawback of Vaultwarden is that it’s a small project and doesn’t have the backing of a big company like bitwarden. It’s also an implementation of the protocol bitwarden uses; if bitwarden makes a change to the API, it will take time for Vaultwarden to make the same change. This has caused issues where the GUI tools were updated and were not able to connect to Vaultwarden until the Vaultwarden developers supported the API changes. That said, this is a mild disruption because the online vault is only used for syncing passwords between devices. It would be very bad for a large company, but for only me it’s not a problem. I’m not creating passwords every day that I need synced.

The biggest draw is the reasonable system resource needs. It runs in minimal memory, 100-200 MB, and has a small disk footprint. Oh, and it defaults to the open source and fantastic SQLite database. It can work with others, but SQLite is plenty for my needs. Being only 1 container is helpful too. Best of all, there is no installation id or key needed to run it. The choice to go with Vaultwarden was easy.

VPS

After deciding I wanted to run my own instance of Vaultwarden, I needed to decide where to put it. Ideally it would be on its own VPS, separate from anything else I have. However, I really don’t want to run a third VPS. Really, I should be running 4: my blog, VPN, DNS, and Vaultwarden. Instead, I’m going to run Vaultwarden on the VPS that has the VPN and DNS server. The DNS server and Vaultwarden are only available when connected to the VPN. That VPS doesn’t expose anything publicly other than the VPN connection. This should be secure enough even though multiple discrete services are running on the same server. They are all internal private services that are shielded from the public.

Vaultwarden Container

Vaultwarden officially distributes itself as a container that can be used by Docker. I’d like to use their image to make updates easier. Ubuntu doesn’t have a Vaultwarden package in its repos, I don’t want to use a third party repo, and I don’t want to build packages myself.

It’s a single container which can be run in either Docker or podman. Most people use Docker and I’m going to cover setting it up in Docker. However, I’m going to run it in podman, which I will also cover.

Docker is much more powerful than podman, and a single Vaultwarden instance won’t use a fraction of what Docker offers. If I were going to run multiple containers, with multiple applications that all need to work together, or if I needed to do mass deployments across multiple servers and manage updates, then Docker would make sense. For example, if I were deploying bitwarden. But for one container running one application, podman uses fewer system resources.

Docker Setup

In order to make working with Docker easier, we’re going to use docker-compose to manage everything. This gives us a very easy way to define and run Vaultwarden with all of the environment variables we want to use. It’s much easier than using the command line and specifying everything. Using docker-compose also allows you to define multiple containers that should be started when you have interrelated images you need to run. For example, if you’re running Nginx, Caddy or similar as your reverse proxy and you want to use a container instead of installing those directly on the system.

Install

We need to install Docker and docker-compose.

$ sudo apt install docker docker-compose

Docker recommends using the compose plugin instead of the standalone docker-compose application. However, Ubuntu 22.04 still uses the separate docker-compose package. It doesn’t matter whether you’re installing it as a plugin or using the standalone application; the compose file format and commands are all the same.

In addition to installing Docker, we also need to use Docker to install the Vaultwarden image.

$ sudo docker pull vaultwarden/server:latest

You could also use docker-compose pull, referencing the compose file we’re going to create, but it doesn’t really save much time and you have to cd to the compose file directory, whereas you can run the above command anywhere.

Once we have Docker and the image, we need to create our persistent data directory. This can be anywhere you like as long as it can be written to by the container. By default the Docker daemon runs as root, so the directory needs to be readable and writable by root.

$ sudo mkdir /srv/vaultwarden

Compose File

The compose file needs to be called docker-compose.yaml. It will be put in a subdirectory under a top level directory for any containers we might want managed this way.

/etc/docker-compose/vaultwarden/docker-compose.yaml

If we want to add other containers, we’d create a directory under /etc/docker-compose/ with the service name and put its docker-compose.yaml file there.
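Assuming the layout just described, the directories can be created up front:

```shell
# Top level directory for compose managed containers,
# plus the subdirectory for Vaultwarden
sudo mkdir -p /etc/docker-compose/vaultwarden
```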

Now we need to create the compose file for Vaultwarden, which will define how the container should function.

version: '3'

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      - WEBSOCKET_ENABLED=true
      - ROCKET_PORT=8001
      - SIGNUPS_ALLOWED=true
      - SIGNUPS_VERIFY=false
      - INVITATIONS_ALLOWED=false
      - ADMIN_TOKEN=553e78f3faca4aefb078ec72063d5229
      - SHOW_PASSWORD_HINT=false
    volumes:
      - /srv/vaultwarden:/data
    ports:
      - 127.0.0.1:3012:3012
      - 127.0.0.1:8001:8001

A few things to note in here. The container_name allows us to reference this container from other containers, like if you’re using Caddy. This relates to the internal network that Docker creates to allow containers to easily interact with one another. That is one of the advantages of Docker over podman: it creates an isolated “local” network that containers can use to communicate with each other.

The container is configured to always restart. Since the container is running under the Docker daemon, Docker is responsible for monitoring if the container goes down.

Also, the environment variables are set here. Some of these we will want to tweak later. For example, we need to allow signups to create our user in Vaultwarden and we need the ADMIN_TOKEN in order to configure Vaultwarden once we get it installed. We will want to disallow signups and remove the ADMIN_TOKEN variable to disable the admin interface.

The ADMIN_TOKEN is a random set of characters and should be treated like a password. Don’t copy the example I have here. Generate your own.
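For example, a suitable token can be generated with openssl; any long random string works, this just produces 32 hex characters:

```shell
# Generate a random value suitable for ADMIN_TOKEN
openssl rand -hex 16
```

Paste the output into the ADMIN_TOKEN line of the compose file.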

You’ll see where we map Vaultwarden’s /data directory that it uses for all data to the persistent, local directory we created at /srv/vaultwarden.

Finally, the ports from Vaultwarden in the container are mapped to our local system. It’s listening on localhost and not the wg0 interface that we’ll make Vaultwarden accessible through because Vaultwarden is not using TLS encryption. This is where the reverse proxy comes in. The reverse proxy will be accessible on wg0 to accept and forward requests to Vaultwarden. While Vaultwarden does support TLS, the project creators recommend using a real reverse proxy and not the internal web server, Rocket. We have Vaultwarden listening on port 8001 to ensure it won’t conflict with anything else we might need in the future. We could have the container listen on port 80 and still map it to 8001 on the local system. It’s a mapping so the port in the container doesn’t have to be the same port being accessed outside of the container. I just made them the same so it’s easier to keep track of.

Systemd

While the container is managed by Docker, we still need to have it start at boot and be able to easily start and stop it as needed. Luckily, we can use systemd to manage this part.

The compose file was put in /etc/docker-compose/vaultwarden. This was on purpose so we can use the same service to work with any compose managed Docker container.

/etc/systemd/system/docker-compose@.service

[Unit]
Description=%i service with docker compose
PartOf=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/etc/docker-compose/%i
ExecStart=docker-compose up -d --remove-orphans
ExecStop=docker-compose down

[Install]
WantedBy=multi-user.target

The key part here is the WorkingDirectory entry using %i as part of the path. %i references the name of the service that will be specified after the ‘@’ when setting it up in systemd. The full path systemd will create for the working directory will end up being /etc/docker-compose/<NAME>, which should have the docker-compose.yaml file we want to run. This works because docker-compose always uses the file named docker-compose.yaml in the current working directory. Really, the service name specified after the ‘@’ is the directory that has the compose file we want to operate on.

In the case of Vaultwarden, we’ve set it up so all we need to do is reference the service in systemd as “docker-compose@vaultwarden” and everything just works. If we want to add any additional compose managed containers, we can create a sub directory in /etc/docker-compose and drop in the docker-compose.yaml file. Then, just like we are with Vaultwarden, we can use the new sub directory name when working with systemd.

Now that we have the service file ready and our compose file in the right place, we can set Vaultwarden to start at boot.

$ sudo systemctl enable docker-compose@vaultwarden
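Enabling only takes effect at the next boot; to bring the container up immediately and verify it:

```shell
# Start the compose managed service now
sudo systemctl start docker-compose@vaultwarden

# Confirm the vaultwarden container is running
sudo docker ps --filter name=vaultwarden
```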

Updating The Container

Vaultwarden will, of course, release updates, and when that happens we’ll need to update the image and have our container restart using the updated version. This is surprisingly simple: pull the new image and restart the container, which will use the newest image.

We will also need to do a little housekeeping to delete the old image.

$ sudo docker pull vaultwarden/server:latest
$ sudo systemctl restart docker-compose@vaultwarden
$ sudo docker images
$ sudo docker image rm <IMAGE ID>

Auto Updates

You could create a systemd service and timer that run periodically and pull the latest image (if you already have the latest image it won’t do anything), then restart Vaultwarden. This is fine and will always ensure you’re using the latest version. That said, Vaultwarden doesn’t have an aggressive release cycle, so it’s probably enough to sign up for release notifications on their GitHub page or check when installing regular OS updates.

You will need to periodically delete any images for old releases.

That said, you can also automate removing old images using sudo docker system prune -f. This can be dangerous because it will remove anything that’s not running. If you have multiple containers and any are down, their images will be deleted. I recommend against using this and instead manually deleting old images. They’re not going to pile up very quickly with Vaultwarden’s release cycle.
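If you do decide to automate updates anyway, a sketch of the service and timer pair might look like this. The unit names are my own placeholders, not anything standard:

```ini
# /etc/systemd/system/vaultwarden-update.service (placeholder name)
[Unit]
Description=Pull the latest Vaultwarden image and restart the container

[Service]
Type=oneshot
ExecStart=/usr/bin/docker pull vaultwarden/server:latest
ExecStart=/usr/bin/systemctl restart docker-compose@vaultwarden

# /etc/systemd/system/vaultwarden-update.timer (placeholder name)
[Unit]
Description=Weekly Vaultwarden image update

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with sudo systemctl enable --now vaultwarden-update.timer, and keep in mind old images still need pruning by hand.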

podman Setup

As stated previously, podman is much lighter weight than Docker, but not nearly as powerful compared to everything Docker offers. Since I’m only running one container, Vaultwarden, I don’t need the vast majority of the functionality Docker provides. The much lower system resource usage of podman is the only reason I’m using it instead of Docker. That said, most of Docker’s overhead comes from the Docker daemon and the networking layer it puts in place. I haven’t tested it, but I suspect the more containers you have, the smaller the resource usage gap between Docker and podman becomes.

One thing to note: all of the podman commands here are being run with sudo, even though podman is designed to work with regular users. Docker can be made to work with regular users too, but doesn’t by default. If you want to create a user for Vaultwarden and run the container as that user, you can. You will need to have Vaultwarden map to a high port a regular user can bind to, and not use port 80. Which we are already doing.

If you do go the non-root user route, using systemd is troublesome. The User= directive in the systemd service file doesn’t work properly with podman. There are workarounds, but they’re not ideal. So instead I’m running podman as root, similar to what’s expected by default with Docker.

Installing

First, we need to install podman.

$ sudo apt install podman

Then pull down the Vaultwarden image.

$ sudo podman pull docker.io/vaultwarden/server

You’ll notice we have to tell podman to get the image from docker.io. This is because podman doesn’t have docker.io in its list of registries to search, so we need to specify where to get the Vaultwarden image.
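Alternatively, you can add docker.io to podman’s unqualified search list so short image names resolve. This is a sketch of the relevant setting; check your system’s existing /etc/containers/registries.conf before changing it:

```toml
# /etc/containers/registries.conf
# Lets `podman pull vaultwarden/server` search docker.io
unqualified-search-registries = ["docker.io"]
```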

Just like we did with Docker, we need to create the persistent storage directory.

$ sudo mkdir /srv/vaultwarden

Using podman

podman’s API is nearly identical to Docker’s, which makes working with it very easy. Unfortunately, while podman does have something similar to docker-compose, it can’t directly use compose files, so we’ll need to manage it a bit differently.

First we’re going to create a file for the environment variables we passed into the container. We’ll make the following file.

/etc/Vaultwarden/vaultwarden.conf

And put our environment variables, one per line, into it.

WEBSOCKET_ENABLED=true
ROCKET_PORT=8001
SIGNUPS_ALLOWED=true
SIGNUPS_VERIFY=false
INVITATIONS_ALLOWED=false
ADMIN_TOKEN=553e78f3faca4aefb078ec72063d5229
SHOW_PASSWORD_HINT=false

Some of these we will want to tweak later. For example, we need to allow signups to create our user in Vaultwarden and we need the ADMIN_TOKEN in order to configure Vaultwarden once we get it installed. We will want to disallow signups and remove the ADMIN_TOKEN variable to disable the admin interface.

The ADMIN_TOKEN is a random set of characters and should be treated like a password. Don’t copy the example I have here. Generate your own.

We should have this file owned by the user podman is running as, so root, since I’m not getting into how to run podman containers as another user. It should be set read only. While not strictly necessary, it is important to protect the ADMIN_TOKEN if you’re going to leave the admin interface enabled.
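The ownership and permissions can be set like so:

```shell
# Root owns the file; no one else can read the ADMIN_TOKEN
sudo chown root:root /etc/Vaultwarden/vaultwarden.conf
sudo chmod 400 /etc/Vaultwarden/vaultwarden.conf
```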

Now we can tell podman to create a container from the vaultwarden/server image with our configuration.

$ sudo podman run -d --name vaultwarden -v /srv/vaultwarden/:/data/:Z --env-file=/etc/Vaultwarden/vaultwarden.conf -p 127.0.0.1:8001:8001 -p 127.0.0.1:3012:3012 docker.io/vaultwarden/server:latest

You’ll see where we map Vaultwarden’s /data directory that it uses for all data to the persistent, local directory we created at /srv/vaultwarden.

Finally, the ports from Vaultwarden in the container are mapped to our local system. It’s listening on localhost and not the wg0 interface that we’ll make Vaultwarden accessible through because Vaultwarden is not using TLS encryption. This is where the reverse proxy comes in. The reverse proxy will be accessible on wg0 to accept and forward requests to Vaultwarden. While Vaultwarden does support TLS, the project creators recommend using a real reverse proxy and not the internal web server, Rocket. We have Vaultwarden listening on port 8001 to ensure it won’t conflict with anything else we might need in the future. We could have the container listen on port 80 and still map it to 8001 on the local system. It’s a mapping so the port in the container doesn’t have to be the same port being accessed outside of the container. I just made them the same so it’s easier to keep track of.

Systemd

Next we need to turn that container into a systemd service. podman has a very handy command that will generate the systemd service file for us.

$ sudo podman generate systemd --new vaultwarden

You’ll end up with something that looks like this.

# container-71d681d65bd3960b0b09c6288eb258ababc7cbec3d818b6ff6410760afafdfb4.service
# autogenerated by Podman 3.4.4
# Sun Jan 22 16:01:55 UTC 2023

[Unit]
Description=Podman container-71d681d65bd3960b0b09c6288eb258ababc7cbec3d818b6ff6410760afafdfb4.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --cgroups=no-conmon --rm --sdnotify=conmon --replace -d --name vaultwarden -v /srv/vaultwarden/:/data/:Z --env-file=/etc/Vaultwarden/vaultwarden.conf -p 127.0.0.1:8001:8001 -p 127.0.0.1:3012:3012 vaultwarden/server:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target

We’ll put this into the following file so we can have systemd manage the service.

/etc/systemd/system/vaultwarden.service

Then we’ll enable it to start at boot.

$ sudo systemctl enable vaultwarden

Updating The Container

Vaultwarden will, of course, release updates, and when that happens we’ll need to update the image and have our container restart using the updated version. This is surprisingly simple: pull the new image and restart the container, which will use the newest image.

We will also need to do a little housekeeping to delete the old image.

$ sudo podman pull docker.io/vaultwarden/server
$ sudo systemctl restart vaultwarden
$ sudo podman images
$ sudo podman image rm <IMAGE ID>

Auto Updates

You could create a systemd service and timer that run periodically and pull the latest image (if you already have the latest image it won’t do anything), then restart Vaultwarden. This is fine and will always ensure you’re using the latest version. That said, Vaultwarden doesn’t have an aggressive release cycle, so it’s probably enough to sign up for release notifications on their GitHub page or check when installing regular OS updates.

You will need to periodically delete any images for old releases.

Nginx as a Reverse Proxy for TLS

At this point we have Vaultwarden running in a container with either Docker or podman, but we can’t access it because it’s only listening on localhost. Now we need to setup our reverse proxy. I decided to use Nginx as the reverse proxy. Mainly because I’m familiar with it, and it’s a system package.

You could use something else, such as Caddy, which seems to be the preferred choice for most people self hosting Vaultwarden. You could also run the Nginx or Caddy in a container instead of installing it as a system package. If you’re using Docker compose files, then you can have both Vaultwarden and Caddy listed in the file and have it all packaged together in one systemd service.

I don’t see any need to run Nginx in a container when it’s already a system package.

Install Nginx

$ sudo apt install nginx

Reverse Proxy Config

Create a file in the Nginx sites-available directory for our Vaultwarden proxy.

/etc/nginx/sites-available/vaultwarden

server {
    listen <WG0_IP>:443 ssl http2;
    listen [<WG0_IP6>]:443 ssl http2;
    server_name vw.DOMAIN.TLD;

    error_log  /var/log/nginx/vw.DOMAIN.TLD.error.log;
    access_log /var/log/nginx/vw.DOMAIN.TLD.access.log;

    client_max_body_size 500M;

    location /notifications/hub {
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://localhost:3012;
    }

    location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://localhost:8001;
    }
}

With the above config you’ll need to change <WG0_IP> and <WG0_IP6> to the IP addresses you want Nginx to bind to. In my case, they’re the addresses I have wg0 for WireGuard using. Also, DOMAIN.TLD needs to be changed to the domain that will be used with the TLS certificate.

Since this is only going to be served to us on wg0, there is no reason to have Nginx listen on port 80 and upgrade the connection to 443. The bitwarden GUIs won’t even try to connect on port 80 to begin with. You can add the redirect to make things more convenient if you’re using the web interface, but I don’t see a need for it.

In order for the config to be used, a soft link from sites-available/vaultwarden to the sites-enabled directory needs to be created.

$ sudo ln -s /etc/nginx/sites-available/vaultwarden /etc/nginx/sites-enabled/vaultwarden
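Once the TLS configuration and certificate from the sections below are in place, it’s worth validating the config before asking Nginx to use it:

```shell
# Check the full configuration for syntax and file errors
sudo nginx -t

# Reload to pick up the new site without dropping connections
sudo systemctl reload nginx
```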

TLS Config

We want to create a strong dhparams file that will be used by the server.

$ sudo openssl dhparam -out /etc/nginx/dhparams.pem 4096

Now we need to create our TLS configuration in the conf.d directory.

/etc/nginx/conf.d/vw_tls.conf

ssl_certificate /etc/letsencrypt/live/DOMAIN.TLD/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/DOMAIN.TLD/privkey.pem;

ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_session_tickets off;

ssl_dhparam /etc/nginx/dhparams.pem;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM';
ssl_ecdh_curve secp384r1;


resolver 127.0.0.1 [::1];
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/DOMAIN.TLD/fullchain.pem;

add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options DENY;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

We’re using localhost as the resolver here because I’m running a local Unbound resolver as my DNS server so I can do some other things with it. This will come into play later.

TLSv1.2 is enabled only because the bitwarden GUIs don’t yet support TLSv1.3. Even though this will only be available on a private network, I see no reason not to make it as secure as possible. Hence the aggressive cipher list.

On Ubuntu, the default configuration loads all files in the conf.d directory so we only need to drop this file in. If Nginx isn’t loading this file automatically, you can manually add the following line to the server section of the site configuration file.

include conf.d/vw_tls.conf;

Default Site

When Nginx is installed, it creates a default site configuration that listens on 0.0.0.0:80. Delete the symlink from sites-enabled so Nginx isn’t trying to serve anything publicly.
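On Ubuntu that symlink is at the usual location:

```shell
# Remove only the symlink; the file in sites-available remains
sudo rm /etc/nginx/sites-enabled/default
```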

Certbot

Before we can use Nginx to reverse proxy to Vaultwarden, we need to generate the TLS certificate. We can do this in one of two ways. Using HTTP-01 or DNS-01 challenges.

Install certbot.

$ sudo apt install certbot

HTTP-01 Challenge

There are a few ways we can handle this. Using the Nginx plugin or using the standalone server that’s internal to certbot.

Nginx

For an HTTP challenge with Nginx we’ll need to add another entry to our Nginx config. Create a new sites-available.

/etc/nginx/sites-available/certbot

server {
    listen <PUBLIC_IP>:80;
    listen [<PUBLIC_IP6>]:80;

    server_name vw.DOMAIN.TLD;

    location /.well-known/acme-challenge/ {
        root /var/www/acme;
    }

    location / {
        return 404;
    }
}

This is going to allow certbot to work but not allow anything else. Every path other than the location defined for certbot will return a 404.

Now that we have the web server setup, we can install the Nginx plugin for certbot.

$ sudo apt install python3-certbot-nginx

Once installed we can configure certbot to start generating certificates for us.

$ sudo certbot certonly --email me@DOMAIN.TLD --nginx -d vw.DOMAIN.TLD

certbot will ask us a few questions. The big one we need to worry about is telling certbot to make no changes to our Nginx config. We don’t want it to set up 80 to 443 redirects because we don’t want to redirect; ports 80 and 443 are being used for separate purposes and are listening on different interfaces.

Standalone

Standalone is pretty simple: certbot will start its own webserver for the duration of the challenge and then stop it.

$ sudo certbot certonly --email me@DOMAIN.TLD --standalone -d vw.DOMAIN.TLD

Subdomain DNS Entry with HTTP Challenge

With the HTTP challenge, you’ll need to add a subdomain entry to your DNS records if you’re using a subdomain, which I am with vw.DOMAIN.TLD. This is needed so Let’s Encrypt can validate you own the subdomain. It’s fine for the public DNS entry for the subdomain to point to the server. In my case I’ll do some magic with this later for internal name resolution.

DNS-01 Challenge

Alternatively we can use a DNS challenge. This has the advantage of not opening the server to the public internet for certificate generation. Before we can have certbot generate our TLS certificates we need to generate an API token that can modify our DNS settings with our DNS provider.

I’m going to use DigitalOcean as an example, but this is very similar for any DNS provider with a certbot plugin. First, generate an API token with the provider. Then create our credentials file.

/etc/letsencrypt/digitalocean.ini

dns_digitalocean_token = API_TOKEN

We want to make this file owned by root and only readable by root. We need to ensure the token is kept secure because anyone with the token can modify DNS records.

Setting up certbot is extremely easy.

$ sudo certbot certonly --email me@DOMAIN.TLD --dns-digitalocean --dns-digitalocean-credentials /etc/letsencrypt/digitalocean.ini -d DOMAIN.TLD -d "*.DOMAIN.TLD"

The DNS challenge can generate wildcard certificates, unlike the HTTP challenge. Instead of specifying the subdomain, the example tells certbot to generate a wildcard. We could use this certificate for any additional subdomains we might want to add later.

Renew Hooks

When we renew the certificate we need Nginx to reload in order for it to pull in the new certificate. Also, with the HTTP challenge, we want the firewall to open port 80 only while we’re actively renewing the certificate and keep it closed otherwise.

The renew process has a really nice feature where you can specify hooks that get run on renewal. In /etc/letsencrypt/renewal-hooks there are 3 directories: deploy, post, and pre.

Within each directory we can add scripts that will run at the various stages of the renewal process.

The pre and post scripts will be used to open and close the firewall on port 80. These are only needed if using the HTTP challenge.

/etc/letsencrypt/renewal-hooks/pre/firewall_open

ufw allow in on eth0 to any port 80

/etc/letsencrypt/renewal-hooks/post/firewall_close

ufw delete allow in on eth0 to any port 80

Sadly, Let’s Encrypt doesn’t have a list of IP addresses for their servers. So we have to open port 80 to the world and we can’t restrict to only connections from Let’s Encrypt.

The deploy script is the most important script. This will only be run if a new certificate is generated. We’ll use this hook to restart Nginx.

/etc/letsencrypt/renewal-hooks/deploy/nginx_restart

systemctl restart nginx
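One gotcha: certbot only runs hook scripts that are marked executable, so after creating them:

```shell
# Hook scripts must be executable or certbot will skip them
sudo chmod +x /etc/letsencrypt/renewal-hooks/*/*
```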

Enable Auto Renew

Finally, we need to have the system run certbot automatically to renew the certificate. Thankfully, certbot has a systemd timer which you can enable to have it auto run and keep the certificate up to date.

$ sudo systemctl enable certbot.timer
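You can also confirm the whole renewal path works with a dry run, which talks to Let’s Encrypt’s staging environment and doesn’t replace the real certificate:

```shell
# Simulate a renewal without saving a new certificate
sudo certbot renew --dry-run
```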

Local DNS Entry for vw.DOMAIN.TLD

Currently, Nginx is listening on the wg0 interface, but since that’s an internal interface, public DNS has no way to turn vw.DOMAIN.TLD into the internal IP address. If we’re using the HTTP challenge, we would have set the public DNS record to the public IP address of the server.

I’m using Unbound as my local DNS resolver and all of my WireGuard clients are using it as their DNS server. So, we can drop a new config file into Unbound that will resolve vw.DOMAIN.TLD to the internal IP address for anything querying it.

/etc/unbound/unbound.conf.d/vaultwarden.conf

server:
    local-zone: "vw.DOMAIN.TLD." static
    local-data: "vw.DOMAIN.TLD. A <WG0_IP>"
    local-data: "vw.DOMAIN.TLD. AAAA <WG0_IP6>"

This way, external clients like Let’s Encrypt resolve the name to the public IP, while anything on the internal VPN resolves it to the internal IP.
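Unbound won’t see the new file until it’s restarted. Assuming Unbound is managed by systemd, something like this picks it up, with a config check first since a typo here would break DNS for every VPN client:

```shell
# Validate the Unbound config, then restart to load the new zone data
sudo unbound-checkconf
sudo systemctl restart unbound

# From a WireGuard client, the name should now resolve to the wg0 address:
# dig +short vw.DOMAIN.TLD
```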

If you’re not using Unbound as your local DNS resolver like I am, you’ll most likely need to add a hosts file entry on each client mapping the domain to the internal IP address.

Vaultwarden Config

Go to https://vw.DOMAIN.TLD and sign up for an account. We have verification turned off, so it should just work. Now that our user account is created, we can move on to the admin config.

Go to https://vw.DOMAIN.TLD/admin to access the Vaultwarden admin page. The password is the ADMIN_TOKEN that was set in the Docker or podman configs. Configure it however you need.

Once configuration is done, go back to the Docker compose file or the podman ENV file, set SIGNUPS_ALLOWED=false, and delete the ADMIN_TOKEN line to disable access to the admin interface. Since this, at least for me, is a single user setup, there’s no reason I’d need the admin interface in the future, so it’s best to disable it. If I ever need to get back in, I can add the ENV variable back and restart the container.

Connecting bitwarden GUIs to Vaultwarden

Since Vaultwarden implements the bitwarden server API, it works with the official bitwarden desktop and mobile GUIs. Because bitwarden offers a self hosting option, all of its GUIs let you specify the server they should connect to.

In the desktop app, click the account at the top right and choose “+ Add account”. You’ll land on a login screen. Click the settings icon at the top left, and it will let you specify the server to connect to. If you don’t already have an account in the app, you’d follow a similar procedure.

Backup

Backups are critical for this project. If the server is lost, so is the password database. While there’s still a local copy in the apps, I want a real backup of the password database and any attachments that have been added.

I’m going to use restic and a Backblaze B2 bucket to store regular backups.

I haven’t talked about the database I’m using for Vaultwarden because I’m using the default SQLite database. With SQLite, it’s not safe to simply copy the open db file. There’s a small possibility of data being written while the file is being copied, which could leave the copy in an inconsistent state. To deal with this, we’ll use the SQLite command line utility to create a backup db in a way that properly accounts for this scenario.

We need to install restic and sqlite3.

$ sudo apt install restic sqlite3
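As a quick illustration of what sqlite3’s .backup command does, here’s a throwaway experiment on a temporary database (nothing Vaultwarden specific); the copy is a consistent snapshot of the database, which a plain cp of an actively written file can’t guarantee:

```shell
# Demo of SQLite's online backup command on a scratch database
tmp=$(mktemp -d)
sqlite3 "$tmp/live.sqlite3" \
  "CREATE TABLE items (name TEXT); INSERT INTO items VALUES ('example');"

# .backup produces a consistent copy, unlike a plain cp of a busy db file
sqlite3 "$tmp/live.sqlite3" ".backup '$tmp/copy.sqlite3'"

sqlite3 "$tmp/copy.sqlite3" "SELECT name FROM items;"  # prints: example
rm -r "$tmp"
```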

Script

The script is split into three parts: the ENV file that holds all of our sensitive configuration data, an exclude file, and the script itself.

Env

Create the file /etc/Vaultwarden/backup_env.conf

LOCALLOCATION="/srv/vaultwarden"
REMOTELOCATION="b2:<BUCKET_ID>:<DIR>"
B2KEYID="<B2_KEY_ID>"
B2APPKEY="<B2_KEY>"
PASS="<RESTIC_PASSWORD>"
KEEP_LAST=20
EXCLUDEFILE="/etc/Vaultwarden/backup_excludes"

Replace anything in ‘<…>’ with the appropriate information.

This file needs to be readable only by root.

$ sudo chown root:root /etc/Vaultwarden/backup_env.conf
$ sudo chmod 400 /etc/Vaultwarden/backup_env.conf

We’re making it read only because there’s no reason to edit it after this point. If you do need to edit it, you can manually add write permissions back.

Exclude File

Create the exclude file /etc/Vaultwarden/backup_excludes

db.sqlite3
db.sqlite3-shm
db.sqlite3-wal
tmp/

We’re excluding the live db files. The backup db we create in the script will still be backed up.

Script

We’ll put the script in /usr/local/bin/vaultwarden-backup

#!/bin/bash

# Backup location and access info
source /etc/Vaultwarden/backup_env.conf

# Export data restic will read from the env
export B2_ACCOUNT_ID=$B2KEYID
export B2_ACCOUNT_KEY=$B2APPKEY
export RESTIC_PASSWORD=$PASS

# Create a repo. Only needs to be done once.
#restic init -r "$REMOTELOCATION" -v --repository-version latest

# Create a SQLite DB backup to ensure we don't copy the main db while it's in the middle of a write
echo "Backing up SQLite DB..."
sqlite3 "$LOCALLOCATION/db.sqlite3" ".backup '$LOCALLOCATION/db-backup.sqlite3'"
echo "Finished backing up..."

# Backup all data
echo "Starting Backup to cloud..."
restic -r "$REMOTELOCATION" -v backup "$LOCALLOCATION" --exclude-file="$EXCLUDEFILE"
echo "Finished Backup to cloud..."

# Keep only the last X snapshots. Prune gets rid of any old files not referenced by any snapshot.
echo "Starting removal of old snapshots ..."
restic -r "$REMOTELOCATION" -v forget --keep-last "$KEEP_LAST" --prune
echo "Finished removal of old snapshots ..."

unset B2_ACCOUNT_ID
unset B2_ACCOUNT_KEY
unset RESTIC_PASSWORD

The first time we run this script we need it to create the repository. Uncomment the restic init... line and run the script. After that first run, comment that line out again. Init only needs to happen one time.
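One step that’s easy to miss: the script has to be executable before systemd (or you) can run it. I’d also do that first run by hand to watch the output:

```shell
sudo chmod +x /usr/local/bin/vaultwarden-backup

# First run, with the `restic init` line uncommented
sudo /usr/local/bin/vaultwarden-backup
```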

Timer

Now that we have our backup script, we need to actually have it run. We’ll use a systemd timer to accomplish this.

Start by creating the service file which defines what should be run.

/etc/systemd/system/vaultwarden-backup.service

[Unit]
Description=Vaultwarden backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/vaultwarden-backup

Now create the timer /etc/systemd/system/vaultwarden-backup.timer

[Unit]
Description=Twice daily backup of Vaultwarden data 

[Timer]
OnCalendar=04:00:00
OnCalendar=16:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

The final step is to enable the timer in systemd so it will run.

$ sudo systemctl enable vaultwarden-backup.timer
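To confirm everything is wired together, you can kick off the service once by hand and then check when the timer will fire next:

```shell
# Run a backup immediately through the systemd unit
sudo systemctl start vaultwarden-backup.service

# Show the last and next scheduled runs of the timer
systemctl list-timers vaultwarden-backup.timer
```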

Conclusion

What started as a simple task of setting up bitwarden to listen only on my VPN interface turned into a much larger project. After deciding bitwarden wouldn’t work for me, I started researching Vaultwarden. After figuring out Docker, I looked at Docker compose. While looking at container technologies, I saw podman and had to look into it too.

Needing a reverse proxy, I looked at Caddy and spent time playing with it, ultimately deciding I didn’t want to use it. So I started setting up Nginx. I was planning to use certbot with DigitalOcean’s DNS, but then I found out they don’t provide scoped API keys, making that option not viable. So I started looking at certbot with Nginx. While looking at that, I saw the standalone server option, which then led to figuring out how to get HTTP challenges working while still having the Nginx proxy only on the wg0 interface.

Of course, I also had to figure out how to handle vault backups, which led to me writing a better version of my laptop’s backup script. That meant not only reworking the script but also looking into systemd timers.

This was supposed to be a simple guide to how I set up an open source password manager server on my VPS, accessible only through a WireGuard VPN. Instead, it turned into a semi generic, semi purpose specific guide covering every option I researched.

The only part of this project that went as expected was setting the DNS entries in Unbound. Everything else was a series of rabbit holes: either something I saw and felt I should look into, or something that didn’t work out like I was hoping, leaving me to find an alternative solution.

While I’m happy I learned a lot from this project, and I have a working private password vault setup, it took far longer than I had planned. Hopefully others will find at least something here helpful.