Introduction
Recently I set up a VPN so I could get around geo-restrictions for a specific streaming service I’m using. So far it’s been working well. Now that I have the server, I started thinking about what else I could do with it. One thing that jumped out at me is DNS. I configured my WireGuard client connections to use CloudFlare and fall back to Google’s DNS servers. While this does provide privacy from my ISP, and queries originate from the same VPN server, I felt like I could do better.
Instead of using a public DNS server directly, I decided to set up a local DNS resolver that would not only provide DNS, but also connect to the public DNS server using a secure connection. This would provide true privacy for my DNS queries. Additionally, I could configure VPN endpoints on my laptop and phone to only forward DNS, thereby allowing me to use the VPN server as a private DNS to avoid my ISP seeing my queries, even when I’m not tunneling all traffic through the VPN server.
The general setup would entail having the standard, unencrypted, DNS queries that run on port 53 routed through the VPN tunnel. This was already happening but going directly to the upstream DNS provider. Instead, the client would connect to a DNS resolver running on the server, which would connect to the public DNS using an encrypted connection and relay the result back to my computer / phone.
I could take this one step further and have it act similar to a Pi-Hole by having it load a block list for ads and malware. That would increase the usefulness of this setup and provide another reason to go this route.
Why Not Use a Local Pi-Hole
The first question that comes to mind is why not set up a local Pi-Hole instead of routing the DNS to a remote server over a VPN. The first reason against this is that the two devices I want to use this with are mobile devices: a laptop and a phone. Both are used away from home and a Pi-Hole would only be accessible on my home network. Unless I went to the trouble of setting up a VPN into my home network, which is unnecessary when I already have a VPN.
I could set up a Pi-Hole to connect to the VPN and have everything on the local network use it. Then I could still have the DNS-only VPN profiles on my laptop and phone for when I’m away. That way everything on my home network would use the secure DNS I’ve set up.
That said, I really wouldn’t get any benefit from running a local Pi-Hole. I only have two other devices on my home network and they won’t get anything out of connecting to a Pi-Hole.
The first is my work laptop, which wouldn’t use this at all. I have to VPN into my work network using a work VPN. It uses the work DNS server available through the VPN because it provides DNS for all the internal servers I use on the work network.
The other device I have on my home network is my PlayStation, which is, more or less, a closed system. Ad blocking isn’t really necessary since the only ads are ads from the PlayStation store. The streaming services I use on the PlayStation are all paid and I have no ad-supported plans. Malware protection isn’t necessary because all of the apps on the PlayStation connect to a limited number of known servers related to them.
I’d end up with a Pi-Hole to provide DNS to two devices which can connect directly to the VPN server with less work. Maintaining a Pi-Hole isn’t a lot of work, but work is work. And without a tangible benefit there is no point in going down this path.
Upstream DNS Server
I don’t want to create an authoritative DNS server or a resolver that connects to the root DNS servers. That is way too much for my needs. Plus, it’s generally frowned upon for a small, one-off, single-use system to query the root DNS servers directly.
I decided to use CloudFlare’s 1.1.1.1 DNS. They claim anonymity, and it’s fast. I was originally planning to use 1.1.1.1 but looking at their setup page they have 1.1.1.1 for Families. I’m specifically interested in the “Block malware” 1.1.1.2 endpoint. I was already planning to have malware blocking as part of this project and the fact that CloudFlare offers it is fantastic. Sadly, this doesn’t do ad blocking so I’ll still need to load an ad blocking list into the DNS resolver I’m going to run.
One of the other key features of 1.1.1.1 is that it supports DNS over TLS. I consider this a requirement when choosing any upstream DNS server. Google’s 8.8.8.8 server also supports DNS over TLS but I’m fine with CloudFlare’s offering.
CloudFlare also offers DNS over HTTPS. While secure, I prefer DNS over TLS because it does not have the additional HTTP framing overhead. That said, I won’t have anywhere near the utilization where that would matter.
Setup
Currently, most Linux systems (I’m using Ubuntu for this) have the systemd-resolved service which acts as a DNS resolver. This can read the /etc/hosts file, allowing for ad blocking. It also supports DNS over TLS to the remote DNS server it queries. This is all great except we can’t use it. systemd-resolved will only listen on the lo interface. Even though it binds to 0.0.0.0 it can only be accessed internally. This won’t work because we need to make the DNS resolver available to VPN clients on the wg0 interface.
Unbound
For the resolver I decided to use Unbound. It is small, secure, lightweight, and supports all the features I want, such as DNS over TLS.
Configuration
A very helpful tool Unbound installs with is unbound-checkconf, which will validate the configuration file. It will allow you to verify there are no syntax errors and that Unbound will be able to start.
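Running it with no arguments checks the default configuration. On the Ubuntu package, a clean run prints something like the comment below:

unbound-checkconf
# unbound-checkconf: no errors in /etc/unbound/unbound.conf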
Interaction with systemd-resolved
I mentioned earlier that systemd-resolved listens on the lo interface. This is only partially true. It will bind to all interfaces with 0.0.0.0. However, it will not accept requests from anywhere other than the local host.
Since we’re going to use Unbound as our DNS resolver we need to remove systemd-resolved from the equation. Otherwise, we’ll have two resolvers chained together or, depending on how we configure Unbound, they’ll conflict with each other.
Option 1
Have Unbound listen on the wg0 interface bound to 172.16.0.1, which is the IP I have assigned to the WireGuard server. Unbound would also listen on the lo interface for local use.
The disadvantage of this is that Unbound has to depend on the WireGuard interface being up. WireGuard must start before Unbound. If WireGuard has not started, the wg0 interface and its 172.16.0.1 address will not be available, and Unbound will fail to start because it will not be able to bind the port.
The advantage is Unbound is only listening on lo and wg0, completely locking it to internal use.
Option 2
Have Unbound listen globally on 0.0.0.0. We’d need to secure Unbound via firewall rules and its internal access control system in order to prevent outside access.
We absolutely do not want to expose Unbound publicly. This could lead to abuse of our server from a resource standpoint. It could also be used by attackers to carry out DNS amplification attacks. It also increases the threat exposure of the server.
The disadvantage is, we have a public service we need to prevent from being accessible publicly. That said, I already locked down input and output ports both on the server and network firewalls. If you’ve properly secured the server, this shouldn’t be a problem.
The advantage is Unbound is not dependent on WireGuard. If WireGuard is not running, Unbound can still start. This is important since Unbound is the system DNS resolver and without it running local lookups will not work. This means things like apt wouldn’t work.
This is the option I chose because there are multiple ways to secure Unbound and prevent outside use. I really don’t like having Unbound, which the system itself uses for DNS, in a position where it might be prevented from starting due to an ephemeral interface not being created.
Option not
Something I was wondering when I started looking at this was: could Unbound run on lo with a routing rule that redirects traffic from wg0 to lo? Similar to how traffic is routed from wg0 through eth0 to traverse the VPN to the wider internet.
By default this is not possible. It is considered a security measure to prevent external clients from interacting with local services. That said, it is possible to disable this using net.ipv4.conf.all.route_localnet and allow routing from external interfaces to lo.
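For reference, enabling it would look roughly like the following. The DNAT rule is a hypothetical illustration of how wg0 traffic could be redirected to a loopback-bound resolver, not something I’m using:

# Allow packets from other interfaces to be routed to 127.0.0.0/8
sysctl -w net.ipv4.conf.all.route_localnet=1
# Hypothetical: redirect DNS queries arriving on wg0 to the resolver on lo
iptables -t nat -A PREROUTING -i wg0 -p udp --dport 53 -j DNAT --to-destination 127.0.0.1:53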
I decided against doing this and I don’t consider it a viable option. This doesn’t provide any more security than using the firewall to prevent incoming connections to Unbound on the external interfaces.
Configuration File
The unbound.conf file points to the unbound.conf.d directory and will load any .conf files from that directory. We’re going to leave the main one alone and create /etc/unbound/unbound.conf.d/server.conf as the server configuration.
server:
    interface: 0.0.0.0
    interface: ::0
    port: 53
    access-control: 0.0.0.0/0 deny
    access-control: 127.0.0.0/8 allow
    access-control: 172.16.0.0/24 allow
    access-control: ::0/0 deny
    access-control: ::1 allow
    access-control: fd5d:878b:91b5::0/64 allow
    num-threads: 2
    hide-identity: yes
    hide-version: yes
    hide-trustanchor: yes
    harden-glue: yes
    harden-dnssec-stripped: yes
    harden-referral-path: yes
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

remote-control:
    control-enable: no
Some of the options in the configuration file should default to the specified value if not set, but they are being set explicitly in case the default ever changes due to an update. The big one is control-enable: no, which disables the remote configuration management interface.
The access-control option is being used to allow only specific subnets, namely local and WireGuard clients, to make DNS requests. This is not a substitute for proper firewall configuration! It is an additional security layer we should utilize.
This configuration also sets tls-cert-bundle, which is the root certificate chain used for TLS certificate validation. It is necessary for the DNS over TLS configuration in the next file. The cert file path specified is for Ubuntu systems; it could be different in other distributions.
Now we have the /etc/unbound/unbound.conf.d/upstream-dns.conf file which defines the upstream DNS servers Unbound should use for resolution.
forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 1.1.1.2@853#cloudflare-dns.com
    forward-addr: 1.0.0.2@853#cloudflare-dns.com
    forward-addr: 2606:4700:4700::1112@853#cloudflare-dns.com
    forward-addr: 2606:4700:4700::1002@853#cloudflare-dns.com
This is the file that connects the Unbound resolver to CloudFlare’s DNS. Here we set a forward zone of “.”, which forwards everything not explicitly defined within Unbound (such as the block list) to the upstream servers.
The forward-tls-upstream: yes is what enables the use of DNS over TLS connections. You’ll see that the addresses reference port 853, which is the DNS over TLS port. DNS over TLS uses TCP connections instead of UDP, by the way.
Every address entry has cloudflare-dns.com after a #. This isn’t a comment; it is part of the DNS over TLS configuration. The DNS resolver is responsible for converting hostnames to IP addresses, but it can’t look up the upstream server’s hostname before it has an upstream server to ask. So we have to configure the IP address our resolver should connect to and provide the expected hostname that should be used for TLS certificate validation with that address.
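If you want to test an upstream DNS over TLS endpoint directly, kdig from the knot-dnsutils package can do that. A quick check against the malware-blocking endpoint might look like this:

kdig @1.1.1.2 +tls-ca +tls-host=cloudflare-dns.com example.com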
Block Lists
For the block list I’m using the list maintained by Steven Black. It’s in hosts file format, which isn’t accepted by Unbound. I wrote a small script which will download and convert the data into a file loadable by Unbound. It creates IPv4 and IPv6 block entries.
#!/usr/bin/env python3
import sys
import urllib.request

URL = 'https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts'

def main():
    domains = set()
    data = ''
    with urllib.request.urlopen(URL) as f:
        data = f.read().decode('utf-8')
    for line in data.splitlines():
        # Block entries in the hosts file all point at 0.0.0.0.
        if not line.startswith('0.0.0.0'):
            continue
        _, _, domain = line.partition(' ')
        # Strip trailing comments and whitespace.
        domain = domain.partition('#')[0].strip()
        # Skip the "0.0.0.0 0.0.0.0" and localhost entries.
        if not domain or domain == '0.0.0.0' or domain == 'localhost':
            continue
        domains.add(domain)
    # Emit an Unbound server: clause with a redirect zone per domain.
    buf = ['server:']
    for domain in sorted(domains):
        buf.append('\tlocal-zone: "{domain}" redirect'.format(domain=domain))
        buf.append('\tlocal-data: "{domain} A 0.0.0.0"'.format(domain=domain))
        buf.append('\tlocal-data: "{domain} AAAA ::"'.format(domain=domain))
    print('\n'.join(buf))
    return 0

if __name__ == '__main__':
    sys.exit(main())
The file output looks like this:
server:
    local-zone: "domain.tld" redirect
    local-data: "domain.tld A 0.0.0.0"
    local-data: "domain.tld AAAA ::"
    ...
For each domain a redirect local-zone directive is given, which causes the local-data we’ve set to be returned for the domain and its subdomains. The answer is the “NULL IP” address, which cannot be routed by clients. In this context 0.0.0.0 is not the “catch-all” address that is used when binding to interfaces.
I’m using the NULL IP in local-data entries instead of having the local-zone directive return always_nxdomain. NXDOMAIN (no domain) will cause a client to fall back to the next resolver if there are multiple configured. Returning an address instead prevents the blocking from being skipped if there is an unexpected configuration error on the client, such as a browser with some kind of internal DNS fallback.
Put the output of this script in /etc/unbound/unbound.conf.d/. It can be called anything but I’m using the file name adblock.conf.
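Assuming the script is saved as adblock.py (the name is arbitrary), generating the file and picking up the changes looks something like this:

python3 adblock.py > /etc/unbound/unbound.conf.d/adblock.conf
unbound-checkconf && systemctl restart unbound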
The script could be run automatically and the file updated using a systemd timer or similar. The hosts file is updated about once a week (from a quick look at the commit history). However, I prefer to update it manually each month when I run server updates. This way I can verify the updated file doesn’t have some kind of mistake that will prevent Unbound from reloading with the changes. The worst thing is having DNS stop working when I’m out and can’t immediately correct the issue. Even with the unbound-checkconf tool, I’d still rather spend a minute updating the file manually.
Verify Unbound is Listening
Once configured and started we can use ss -ltpn to verify it’s listening on 0.0.0.0 and ::0.
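Since DNS queries are primarily UDP, it’s worth checking those listeners too. ss can filter to the DNS port across both protocols:

ss -lptun 'sport = :53'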
Note: Unbound will fail to start until the following network changes are made. This is because systemd-resolved is currently listening on 0.0.0.0. It may be advisable to have Unbound listen on localhost at first to verify it’s configured and working, then make the network changes and update the Unbound configuration to listen globally.
Network Changes
There are a few things we need to either edit or create to facilitate using Unbound as the system DNS resolver.
Systemd
Create the file /etc/systemd/resolved.conf.d/local.conf. This file is the systemd-resolved configuration that will allow the system to use our Unbound server.
[Resolve]
DNS=127.0.0.1 ::1
DNSStubListener=no
This does two things. First, it sets the DNS servers the system will use for lookups, which is the Unbound instance we have running on the system. It is necessary to specify the DNS servers, otherwise /etc/resolv.conf won’t be updated properly and DNS resolution won’t work. Which relates to the next line.
Setting DNSStubListener=no stops systemd-resolved from acting as an intermediary for DNS queries. If that is on (the default is yes, to act as a stub listener), /etc/resolv.conf will be set to 127.0.0.53 and systemd-resolved will forward queries to any servers set in DNS=. We don’t want it to act as an intermediary since we’re using Unbound for the system resolver.
DigitalOcean
If using Ubuntu with DigitalOcean, a few more files need to be changed. Some of this might apply to other vendors, and some is specific to Ubuntu, but for my server this is necessary.
netplan
Since I’m using Ubuntu the netplan configuration needs to be updated.
The netplan cloud initialization file /etc/netplan/50-cloud-init.yaml needs to have the nameservers sections removed from all of the interfaces. By default DigitalOcean’s name servers are going to be preconfigured for each interface. The default for a new VPS is two interfaces total: one public and the other private, accessible only within the DigitalOcean datacenter.
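The part to remove is the nameservers block under each interface, which looks something like this (the addresses shown are illustrative; the real file will contain DigitalOcean’s servers):

network:
    version: 2
    ethernets:
        eth0:
            # Remove this nameservers block
            nameservers:
                addresses: [67.207.67.2, 67.207.67.3]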
/etc/resolv.conf is updated with the servers configured here. By removing them, only the global servers configured in the resolved configuration directory will be used by all interfaces.
Systemd
While netplan sets name servers per interface, DigitalOcean also creates /etc/systemd/resolved.conf.d/DigitalOcean.conf. This sets the DigitalOcean DNS servers as global DNS servers the system should use, in addition to the per-interface DNS set in netplan. The global servers are used by any interface that doesn’t have an explicit DNS server set, such as lo or wg0.
This file can’t simply be edited or deleted because DigitalOcean recreates the file on boot and sets it to their DNS servers. Normally this is fine and what you’d want. However, we’re trying to have all DNS go through Unbound and be securely routed to CloudFlare.
We could modify the cloud config and disable using netplan. However, I don’t see a need to go to that extent. The better solution is to follow the advice given in man resolved.conf. It says:
To disable a configuration file supplied by the vendor, the recommended way
is to place a symlink to /dev/null in the configuration directory in /etc/,
with the same filename as the vendor configuration file.
When DigitalOcean writes the DNS settings to the file on boot, the changes will be thrown away and the system DNS settings won’t change.
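Creating that symlink is a one-liner, using the file name from above:

ln -sf /dev/null /etc/systemd/resolved.conf.d/DigitalOcean.conf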
Verifying DNS Configuration Changes
At this point DNS should be working. We can also use resolvectl status to check what DNS servers are being used for each interface. If everything is correct there should only be servers set in the “Global” scope and they should be 127.0.0.1 ::1, which were set in the local.conf file that was created.
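For a quick functional test, query Unbound directly for a normal domain and for one that should be blocked (doubleclick.net appears in most versions of the Steven Black list, but any domain from adblock.conf works):

dig +short @127.0.0.1 example.com      # should return a real address
dig +short @127.0.0.1 doubleclick.net  # should return 0.0.0.0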
Firewall
Firewall changes are not necessary in order to have Unbound work with the lo interface. Everything should work if you’re using a sane firewall, like ufw.
I’m using ufw, which blocks incoming connections on public interfaces by default. Additionally, I have a default incoming block policy set on DigitalOcean’s network firewall configuration for the VPS. Just make sure port 53 is not open publicly. We want to have this blocked, as it is by default. Even with Unbound’s access control restrictions, we want to be as safe as possible.
DNS over TLS uses port 853 with TCP connections. If you have outbound port restrictions in place, be sure to open this.
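With ufw, allowing the outbound DNS over TLS traffic would look something like:

ufw allow out 853/tcp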
WireGuard and wg0
WireGuard’s wg0 interface is considered a public interface. Due to the default incoming policy being to block all but allowed ports, WireGuard clients will not be able to use Unbound for DNS. We need to add a rule to make this work.
ufw allow in on wg0 to 172.16.0.1 port 53
This will only allow anything connected to wg0 to connect to port 53 on the server, which was configured to have the local address 172.16.0.1. This IP is part of the WireGuard configuration for the VPN.
It is possible to add this as a PostUp rule in WireGuard’s configuration and have a corresponding PreDown rule which will delete the firewall entry, if you want to ensure the rule is only active when WireGuard is running.
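A sketch of how that could look in the server’s WireGuard configuration, using the same ufw rule as above:

[Interface]
# ... existing Address, ListenPort, PrivateKey settings ...
PostUp = ufw allow in on wg0 to 172.16.0.1 port 53
PreDown = ufw delete allow in on wg0 to 172.16.0.1 port 53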
WireGuard Client Profiles
In the client’s configuration file, add DNS = <WireGuard ipv4 address>, <WireGuard ipv6 address> to the [Interface] section.
If the profile is intended only for routing DNS and not a full tunnel, then AllowedIPs = <WireGuard ipv4 address>, <WireGuard ipv6 address> needs to be set in the [Peer] section.
This tells WireGuard to route connections to the DNS server through the VPN.
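A DNS-only client profile would look roughly like this. The keys and endpoint are placeholders, and the addresses assume the WireGuard subnets used in the access-control configuration earlier:

[Interface]
PrivateKey = <client private key>
Address = 172.16.0.2/32, fd5d:878b:91b5::2/128
DNS = 172.16.0.1, fd5d:878b:91b5::1

[Peer]
PublicKey = <server public key>
Endpoint = <server public IP>:51820
# Route only the DNS server's addresses through the tunnel
AllowedIPs = 172.16.0.1/32, fd5d:878b:91b5::1/128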
Performance
I haven’t done extensive performance testing but I have a small sample of lookup times with various configurations to gauge performance. This is all highly unscientific and has a lot of factors that cannot be isolated or accounted for. Using DNS lookups and ping times over the internet is only going to give very rough estimates.
Factors impacting performance
There are a handful of factors that could impact query performance.
DNS over TLS to the upstream DNS provider
I did not check if it made any difference using DNS over TLS or not on the server. That said, my home DNS doesn’t use TLS and, without routing through the VPN, DNS queries are in line with query time on the server when going through a TLS connection.
Upstream DNS provider using malware domain blocking
I tested Unbound connected to 1.1.1.1, 8.8.8.8 and 1.1.1.2. I did not see a difference in performance between the three.
The block list loaded into Unbound
I disabled the block list in Unbound and while Unbound did start much faster the DNS performance wasn’t any different than with it loaded.
The VPN overhead
I’m not willing to run Unbound publicly in order to determine WireGuard’s true impact. That said, based on the other information I’ve collected, I don’t think it’s necessary. I’m confident WireGuard has very little performance impact with DNS queries.
The VPS is in a data center located on the other side of the world from me.
This can be estimated using ping to determine latency to each server returning DNS queries.
Ping times:
From | To | Avg (ms) |
---|---|---|
Home | 8.8.8.8 | 18 |
Server | 1.1.1.1 | 2 |
Server | 8.8.8.8 | 2 |
Home | Server | 123 |
The VPN being located on the other side of the world made a sizeable difference. It’s a huge bottleneck and adds a lot of latency.
I was very surprised about the 2 ms ping time to 1.1.1.1 and 8.8.8.8. There must be a node in the same data center the VPS is in.
Query times
Results of dig across 6 domain names.
Location | DNS Server | WG | Avg (ms) | Max (ms) |
---|---|---|---|---|
Home | 8.8.8.8 | N | 53 | 187 |
Home | 1.1.1.1 | Y | 158 | 180 |
Home | 8.8.8.8 | Y | 160 | 200 |
Server | 1.1.1.1 | N | 40 | 158 |
Server | 1.1.1.2 | N | 41 | 154 |
Server | 8.8.8.8 | N | 40 | 178 |
The server going to 1.1.1.1, 1.1.1.2 and 8.8.8.8 shows a negligible difference between the query times. They performed the same as far as I’m concerned and even have the same ping times. The 2 ms ping times to the DNS servers from the VPS definitely help reduce the query time.
Home to 8.8.8.8 is consistently about 53 ms. Accounting for the 18 ms ping time, that puts the query time in line with the server.
That said, home through WireGuard to the VPS has a sizable impact. This is very much due to the 123 ms latency to get to the server. 160 - 123 = 37 ms, which is very close to what I was seeing for query time on the server. It’s also close to what I’m seeing from home to 8.8.8.8 after accounting for that ping time.
Broken down another way that’s a bit easier to understand: Latency is the round-trip time to send and receive a message from the server being queried. Query is the amount of time it takes for the DNS server being used to process and return the query. Total Time is the time from the client sending the request to receiving the response.
Location | DNS Server | WG | Latency (ms) | Query (ms) | Total Time (ms) |
---|---|---|---|---|---|
Home | 8.8.8.8 | N | 18 | 35 | 53 |
Server | 1.1.1.1 | N | 2 | 38 | 40 |
Home | Server | Y | 123 | 37 | 160 |
Conclusion
Setting up Unbound as the system DNS resolver was more involved than I had expected, mainly due to the changes required to make systemd-resolved play nicely with Unbound being the local system resolver.
It also took me a while to figure out the firewall rule needed to allow WireGuard clients to use Unbound. That said, it turned out to be a single, simple rule.
I knew that routing DNS to a server on the other side of the world would increase DNS query time but I’m surprised by the extent of the impact. I might look at setting up a DNS-specific server geographically closer to me, which should bring the total query time in line with connecting to 8.8.8.8 directly.
That said, I actually forgot to log off the VPN and didn’t notice for a few days that I was still connected and routing DNS through the VPN. While the impact is large, it wasn’t immediately apparent to me.